Google Defines the UI of the Future, but Are We Ready?
Augmented reality – a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.
For years, movies have tempted us with augmented reality via heads-up displays (HUDs), from Terminator to Minority Report, and yet the technology has never really made the leap from the silver screen to real life. Apps like Layar have attempted to bring it to your fingertips. The idea is that we live in a world where information is always around us, just waiting to be visualized.
Google X Labs has now stepped into the fray with a project they are calling “Project Glass,” described as something that “helps you explore and share your world, putting you back in the moment.” The concept video shows a man going about his day and calling up, apparently by voice and head gestures, different features and commands while interacting with his environment. Project Glass is a set of Android-powered glasses that traces its roots to MIT’s MIThril project. Initial drawings and photos of people wearing the prototype are available as well, giving us a view into what drives Project Glass today, albeit in a much smaller footprint than its predecessors.
This new UI gives us a glimpse of what the future could hold, and Google seems to be well down that road already. All of the features shown in the video already exist outside this UI: Google Voice Search, Google Maps and Navigation, Google Talk, and so on. All signs point to this being an interface to your smartphone, akin to something Seven of Nine or Locutus of Borg would wear to connect to the Collective.
Already a few videos are popping up mocking Project Glass, and it really was only a matter of time. I love the idea of integrating your normal day-to-day tasks into something you can easily interact with, and I am looking forward to seeing how Google gets past the various hurdles and logistics of that integration. Google co-founder Sergey Brin was recently seen wearing a prototype, and he told The Verge that he hopes the final product will be able to connect with all sorts of different devices, and that it would need to pass RF radiation testing and verify there is no SkyNet integration. (OK, that last bit isn’t what he said, but it could be a concern for some.)
Will this truly be “putting you back in the moment,” as Google desires, or will it take over the moment? Having to wear something else to do this, especially if you already need corrective eyewear, takes away from the convenience for me. It seems a little intrusive, and I am not sure how well the general public will take to it. I can also only imagine the headaches it could bring on, since your eyes will constantly have to adjust to things happening at different depths. Let the old SNL skit “Mr. No Depth Perception” sink into your consciousness to get a feel for how bad that could be. With that said, I think this holds tremendous promise.
I love Google. I really do. I believe them to be one of the few companies that still truly innovates. Google wouldn’t go through all this trouble to start the conversation if they weren’t already in the testing and usability stage, so I think we’ll see these pop up within the next year or so. There will be naysayers, but I wouldn’t bet against Google on anything. Perhaps the technology is ready to make the leap to prime time, but are we ready to be assimilated?