Two days ago, we wrote about what we felt to be some of the most exciting news from day 1 of Google I/O 2012. Now, we're back to take a quick look at day 2's keynote and some of the more interesting sessions on days 2 and 3. While the keynote on day 1 focused predominantly on Android 4.1 Jelly Bean and the various digital media offerings available through the Play store, the second keynote was all about Chrome and how cloud-based technologies shape our lives through computation and immersive experiences.
We also had the chance to attend sessions covering the next version of the ADK, Project Butter and how it smooths the Android experience, and new NFC features available in Jelly Bean. So what are you waiting for? Read on for more!
Day 2's keynote began with Vic Gundotra introducing Sundar Pichai, SVP of Chrome, who opened with web usage stats. Today, there are 2.3 billion web users, compared to 1 billion in 2005 and 2 billion in 2009. As for Chrome itself, there are 310 million active users today, compared with 160 million in 2011 and 70 million in 2010. Thanks to this high adoption rate, every day 60 billion words are typed, 1 TB of data is downloaded, and 13 years of typing time are saved through reduced keystrokes!
Brian Rakowski, VP of Chrome, comes on stage to discuss how Chrome enables painless transitions between computers thanks to cross-device sync. As any Android user running Ice Cream Sandwich can tell you, the ability to access open tabs, history, preferences, and saved credentials across devices is amazing.
Surprisingly, Chrome is now available on iOS as well. The iOS version's interface looks almost identical to the Android version, featuring an Omnibox up top and all the same features we've come to expect, including incognito mode. And according to Brian, “using incognito mode on a touch device is a great experience.”
In 2004, Google launched Gmail, which is widely regarded as pioneering the use of Ajax. Eight years later, 425 million monthly active users rely on Gmail as their primary means of communication. Google's cloud prowess has grown as well: it now offers Calendar, Docs, spreadsheets, presentations, Drive, and more. In fact, 45 states, 66 of the top 100 universities, and 5 million businesses have “gone Google.”
Clay Bavor, Director of Product Management for Google Apps, comes on stage to discuss Google’s new cloud offerings. Using image recognition technology, Google’s cloud services are now able to automatically recognize the contents of images. This enables users to search through images without ever having to enter metadata manually.
Also mentioned is Google Drive's deep integration with Chrome OS, and how collaborative editing is now possible across devices running every major operating system. Adding to its utility, Google Docs editing now works offline as well. You are free to edit a document as needed, and changes are synced to the cloud when you regain Internet connectivity and open the document again.
Google Compute Engine
Urs Hölzle, SVP of Technical Infrastructure, then comes on stage to discuss computing without limits. Today, Google App Engine supports over 1 million active apps, receives 7.5 billion hits per day, and sees 2 trillion datastore operations per month. However, many would still prefer to have their own virtual machines. Google Compute Engine provides this by delivering Linux VMs at Google scale, with multiple storage options, high-performance networking between instances, and support for clustering.
A live demonstration using data from the Institute for Systems Biology shows the difference between an ISB-built 1,000-node cluster, a Google Compute Engine cluster with 10,000 nodes, and what happens when 600,000 cores are summoned. Needless to say, the performance differences are dramatic.
Browser as a Platform
Sundar comes back on stage to discuss the evolution of Chrome apps and the web. He mentions how Chrome apps are evolving to be always available, offer authentic app experiences, and deliver enhanced device access.
Bulletstorm is then fired up for an impressive demonstration of browser capabilities, with sound delivered through new web audio APIs.
Also demonstrated is Cirque du Soleil's Movi.Kanti.Revo. In the demo, the webcam on a Chromebook is used to control the in-scene camera. This even works on mobile devices, using the accelerometer rather than the camera.
Last year, we saw the introduction of the Android Accessory Development Kit. As described by Google, the ADK is “designed to help Android hardware accessory builders and software developers create accessories for Android. The ADK 2012 is based on the Arduino open source electronics prototyping platform, with some hardware and software extensions that allow it to communicate with Android devices.” This year’s ADK has seen significant revision. It is now based on the upcoming Arduino Due platform.
In addition to the independent main processing board, ADK 2012 also includes an alarm clock shield containing 64 RGB LEDs, a Type 2 read/write NFC tag that launches the ADK app, and a smattering of sensors (colorimeter, thermometer, barometer, hygrometer, accelerometer, magnetometer, twelve capacitive buttons, and a capacitive slider).
Connectivity is accomplished through Bluetooth and two USB ports, one of which is a USB host. The Bluetooth support is of note because it is handled natively, rather than being serially connected via UART. The device can accept USB audio on Jelly Bean and above without any configuration, which is then output through a transducer that uses an entire panel as a speaker. While there are no retail plans going forward, the PC board files are available for Eagle, so manufacturers can make their own versions.
Romain Guy and Chet Haase, Android UI Toolkit Engineers, discuss the seemingly never-ending quest for improved UI performance on Android. They begin by describing jank.
Jank comes from choppy performance (low FPS) and discontinuous, surprising experiences. To remedy this, they tasked themselves with lowering latency and increasing framerate.
Jelly Bean lowers input latency by removing the separate dispatcher operation used to parse user input. In practice, this leads to an average of five frames of latency on the Nexus 7.
Increasing framerate is accomplished by reducing drawing time. Before Jelly Bean, VSync was available, but it was not used in drawing operations, so jank occurred when frames were repeated. Now, drawing with VSync helps to prevent this. However, to achieve a framerate of 60 fps matching the VSync-locked 60 Hz refresh rate of the Nexus 7, only about 16 ms is available to process each frame.
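As a quick back-of-the-envelope check (our arithmetic, not a figure from the talk), the per-frame budget at a 60 Hz refresh rate works out as follows:

```python
# Per-frame time budget at a VSync-locked refresh rate.
REFRESH_RATE_HZ = 60

frame_budget_ms = 1000 / REFRESH_RATE_HZ  # ~16.67 ms per frame
print(f"Budget per frame: {frame_budget_ms:.2f} ms")

# A frame whose CPU + GPU work exceeds this budget misses its VSync
# deadline, so the previous frame is shown again -- perceived as jank.
```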
Triple buffering is now used, as has been employed in various 3D rendering applications for quite some time. This allows for parallel processing, since the device's CPU can perform UI operations while the GPU is busy. For example, triple buffering allows the CPU to fill buffer C while the GPU is busy rendering buffer B.
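A toy sketch (illustrative only, not Android's actual rendering pipeline) of how three buffers rotate roles each frame, so that the CPU, GPU, and display each always have a buffer to work with:

```python
from collections import deque

# Toy model of triple buffering: three buffers rotate between the roles
# "on display", "being rendered by the GPU", and "being filled by the CPU".
buffers = deque(["A", "B", "C"])

def next_frame(buffers):
    """Return this frame's role assignment, then rotate roles for the next frame."""
    display, gpu, cpu = buffers
    buffers.rotate(-1)
    return display, gpu, cpu

# Frame 1: display A, GPU renders B, CPU fills C -- all three work in parallel.
print(next_frame(buffers))  # ('A', 'B', 'C')
print(next_frame(buffers))  # ('B', 'C', 'A')
```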
Views can now also be invalidated individually to avoid unnecessary redraws. Furthermore, invisible views are no longer drawn, since developers can tell Android to skip anything that won't appear on a given surface while it is out of view.
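As a toy model (hypothetical names, not the real Android `View` API), per-view invalidation means a draw pass touches only views that are both marked dirty and actually visible:

```python
class ToyView:
    """Minimal stand-in for a UI view with a dirty flag and visibility."""
    def __init__(self, name, visible=True):
        self.name = name
        self.visible = visible
        self.dirty = False

    def invalidate(self):
        # Mark only this view as needing a redraw.
        self.dirty = True

def draw_pass(views):
    """Redraw only dirty, visible views; clear dirty flags afterwards."""
    drawn = [v.name for v in views if v.dirty and v.visible]
    for v in views:
        v.dirty = False
    return drawn

header = ToyView("header")
body = ToyView("body")
hidden = ToyView("hidden", visible=False)

body.invalidate()
hidden.invalidate()  # dirty but invisible: skipped entirely
print(draw_pass([header, body, hidden]))  # ['body']
```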
Near Field Communication
There are 1 million new NFC-enabled Android devices sold per week. Android currently supports all common passive tag types, and through Beam can deliver context-rich sharing between users. For example, using Beam while viewing a YouTube video sends the video to the destination device at the same point as on the first device.
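Conceptually (a simplified sketch of our own, not the actual NDEF message format Beam uses), the context carried in the YouTube example boils down to the content identifier plus the current playback position:

```python
import json

def beam_payload(video_id, position_s):
    """Build a minimal context-sharing payload: what to open, and where."""
    return json.dumps({"app": "youtube", "video_id": video_id, "position_s": position_s})

def resume_from_payload(payload):
    """The receiving device opens the same video at the same point."""
    data = json.loads(payload)
    return data["video_id"], data["position_s"]

payload = beam_payload("abc123", 42)
print(resume_from_payload(payload))  # ('abc123', 42)
```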
New in Jelly Bean, however, is NFC-initiated Bluetooth pairing. Rather than futzing with Bluetooth PINs such as '0000' or '1234', users can simply tap compatible devices together to connect and pair. This could be used for Bluetooth audio devices, Bluetooth file transfers, and much more.
UI Design and Challenges Designing the Nexus 7
Finally, Android SDK team engineer Tor Norbye, Visual Designer Richard Ngo, and Tech Lead Daniel Lehmann discussed common issues facing UI design. Citing the People app on the Nexus 7 as an example, they covered how, on this intermediate form factor, a multi-pane view was suitable for landscape but not portrait mode. They also cautioned developers to be mindful of the effects of screen rotation, so that multiple rotations do not throw users out of context and force them to search for their previous place.
We hope you have enjoyed our coverage of Google I/O 2012. If you haven’t already done so, take a few minutes to watch XDA TV Producer Jordan’s recap, and be sure to stay tuned for our upcoming video where XDA Elite Recognized Developer AdamOutler talks about I/O 2012 from a developer’s perspective.