Google I/O is underway, and the company spent a lot of its main keynote talking about artificial intelligence, the strides it is making in the field, and what it means to be responsible with AI. That last part is especially interesting: while other companies like OpenAI and Microsoft are grappling with finding a balance between letting AI loose and reining it in a bit, Google is focusing on the "reining in" aspect from the get-go.

James Manyika, who leads Google's Technology and Society department, spoke during Google's keynote about how the company aims to integrate AI responsibly even as it introduced a huge number of new services.

"It is also important to acknowledge that AI has the potential to worsen existing societal challenges — such as unfair bias — and pose new challenges as it becomes more advanced and as new uses emerge," he wrote in a Google blog post. "That's why we believe it's imperative to take a responsible approach to AI."

Of course, it's easy to be drawn in by the marketing of "responsibility" that Google says is so integral to its work. However, there's another, more cynical interpretation: Google panicked, and it's using that framing to position itself as the superior AI platform.

Google clearly panicked thanks to ChatGPT and Bing


Watching the Google I/O keynote, it's pretty clear that the company needed to come up with something to compete with ChatGPT and Bing, and Bard clearly wasn't cutting it. And it didn't just announce one or two features and some improvements to Bard; it released new feature after new feature after new feature. The keynote was a constant inundation of generative AI features coming to Search, Google Workspace, and practically every other application in Google's suite. It even launched generative AI features for Google Cloud, positioning itself as the Software-as-a-Service solution for AI thanks to Vertex AI.

Reading between the lines, it seems like Google didn't just feel threatened; it actually panicked. In response, it shipped every single product it could think of, announcing so many new features that it's hard to keep track of them all.

That's not all, though, because Google wants you to believe that a key reason for being behind the times was that it was being "responsible" and taking its time.

Google claims that responsibility is important


Google spent a lot of time talking about "responsibility" in artificial intelligence, something that companies like Microsoft and OpenAI haven't emphasized nearly as much. The company is actively building tools to combat misinformation that its own technology may generate, including metadata that can watermark images as AI-generated, for example.

"Responsibility" is also a great defense to hide behind: because these features are rolled out in a Labs-style format behind a waitlist, Google can say that they're available, just not to everyone, and with the disclaimer that they're not in their final states. It's a great way to bide time until they're ready to roll out to the public.

Manyika's entire speech wasn't just to preach the importance of being careful (which isn't a controversial stance to take). It was a way to signal that Google wants to be seen as the adult in the room. While Microsoft and OpenAI forge ahead and try to outdo each other with AI language models whose limitations they don't yet understand, Google is trying to convey a sense of calm: it wants to take things slowly and introduce features only when they're ready before deploying them to the general public.

Google's generative AI will still have inaccuracies, too

[Image: examples of generative AI in Google Search, from the Google I/O keynote]

If you look in the bottom right of the above image, which shows off examples of generative AI, you'll see that Google still includes a warning that "generative AI may be inaccurate or offensive." While few people would have assumed it was completely accurate, the warning shows that Google isn't confident it has solved the problem of inaccuracy and misinformation that both Microsoft and OpenAI are struggling with. It just wants you to think that it has.

Of course, if Google is now on par with OpenAI and Microsoft (which remains to be seen), that's already a large leap compared to how Bard looked just a short time ago. However, being responsible doesn't mean a whole lot if its products are still in the same position as its competitors'. Are the disclosures relating to images good to have? Absolutely. But the ability to generate or receive misinformation in the form of a search result hasn't been solved, and that fact is being deprioritized in these announcements.

Nevertheless, we'll be excited to try out what Google has cooking in the realm of generative AI. These are great improvements that may put Google in the lead, with features aimed at enterprise in the form of Vertex AI on top of genuinely useful features integrated into Search and Google Workspace... but it may also just be marketing spin to make up for the shortcomings Google has faced so far.