jerdog · Apr 6, 2012 at 05:00 pm

Google Defines the UI of the Future–But Are We Ready?

Augmented reality – a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data.

We have been teased in the movies over the years with augmented reality via heads-up displays (HUDs), from Terminator to Minority Report, and yet it hasn't really made the leap from the Silver Screen to real life. Apps like Layar have attempted to bring it to your fingertips, but the underlying idea remains the same: we live in a world where information is always around us, just waiting to be visualized.

Google X Labs has now stepped into the fray with a project it calls "Project Glass," with the stated purpose of being something that "helps you explore and share your world, putting you back in the moment." The concept video shows a guy walking around doing normal tasks, calling up different features and commands (apparently by voice and head gestures) and interacting with his environment. Project Glass is a set of Android-powered glasses, and the concept traces its roots to MIT's MIThril project. Initial drawings and actual pictures of people wearing the prototype are available as well, giving us a view into what drives Project Glass, albeit in a much smaller footprint than those early research rigs.
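The gesture half of that interaction is less exotic than it sounds, since the sensors are already in every Android phone. As a purely illustrative sketch (my own toy example, not anything Google has shown, and the threshold value is invented), a crude head-nod detector on stock Android could look like this:

// Illustrative sketch only: treat a sharp spike on the accelerometer's
// y-axis as a "head nod." The threshold is made up; a real detector would
// look for a dip-and-return pattern and filter out walking bounce.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class HeadNodActivity extends Activity implements SensorEventListener {
    private static final float NOD_THRESHOLD = 4.0f; // m/s^2 beyond gravity (arbitrary)
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // values[1] is acceleration along the device's y-axis, gravity included.
        float y = event.values[1] - SensorManager.GRAVITY_EARTH;
        if (Math.abs(y) > NOD_THRESHOLD) {
            onHeadNod(); // hypothetical callback: accept a call, dismiss a card, etc.
        }
    }

    private void onHeadNod() { /* dispatch whatever command currently has focus */ }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // stop sampling when not in use
    }
}

The interesting engineering question is not reading the sensor but deciding which spikes are deliberate gestures, which is presumably where Google's effort is going.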

This new UI gives us a glimpse of what the future could hold, and Google already seems well down that road. All of the features shown in the video, such as Google Voice Search, Google Maps and Navigation, and Google Talk, are already in place outside of this UI. All signs point to this being an interface to your smartphone, akin to something the Borg's Seven of Nine or Locutus would wear to connect to the Collective.
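That "interface to your smartphone" theory is plausible precisely because the plumbing already exists on the phone side. As a minimal sketch (again my own illustration, assuming the glasses simply relay a trigger to a companion app), any Android app today can hand spoken input to the platform's recognizer via RecognizerIntent:

// Sketch of the existing voice plumbing: launch the platform speech
// recognizer and read back the best transcription. Request code is arbitrary.
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

import java.util.List;

public class VoiceCommandActivity extends Activity {
    private static final int REQUEST_SPEECH = 1;

    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak a command");
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            List<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                handleCommand(matches.get(0)); // recognizer's best guess comes first
            }
        }
    }

    private void handleCommand(String spokenText) {
        // Hypothetical dispatch point: "navigate to...", "send a message to...", etc.
    }
}

In other words, the software hooks are stock Android today; the hard parts Google has to solve are the display, the battery, and the social acceptability.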

Already a few videos are popping up mocking Project Glass, and it really was only a matter of time. I love the idea of integrating your normal day-to-day tasks into something you can easily interact with, and I am looking forward to seeing how Google gets past the various hurdles and logistics of that integration. Google co-founder Sergey Brin was recently seen wearing a prototype, and he told The Verge that he hopes the final product will be able to connect with all sorts of different devices and would need to pass RF radiation testing, as well as verify there is no Skynet integration. (OK, that last bit isn't what he said, but it could be a concern for some.)

Will this truly be "putting you back in the moment," as Google desires, or will it take over the moment? Having to wear something extra in order to use it (not to mention the complication if you already wear corrective eyewear) takes away from the convenience, in my view. It seems a little intrusive, and I am not sure how well the general public will take to it. I can also only imagine the headaches it could bring on, with your eyes constantly adjusting to things happening at different depths. Let the old SNL skit "Mr. No Depth Perception" sink into your consciousness to get a feel for how bad that could be. That said, I think this holds tremendous promise.

I love Google. I really do. I believe it to be one of the few companies that still truly innovates. Google wouldn't go through all this trouble to start the conversation if it weren't already in the testing and usability stage, so I think we'll see these pop up within the next year or so. There will be naysayers, but I wouldn't bet against Google on anything. Perhaps the technology is ready to make the leap into prime time, but are we ready to be assimilated?

 

