Generative AI is all the rage right now, kickstarted by ChatGPT and followed by Microsoft Bing. Since then, Google has seemingly sped up development of its own AI chat model and released Bard in a limited fashion, and so far, it seems to be an inferior product compared to the competition. It struggles with basic questions, uses questionable sources, and in general, feels like a bit of a half-baked product.

To be completely fair to Google, these are growing pains that Bing also faced, and Bard has only just been released. On top of that, Google hasn't positioned Bard as a search engine the way Microsoft has talked Bing up to be. Nevertheless, if Bard were good enough to be a search engine, I think Google would probably be advertising it as one. Google doesn't want to be left behind, and if it had a search engine competitor to put up against Bing, it almost certainly would.

With all of that said, Bard is a glimpse into a terrifyingly misinformed future. It has already screwed up on several occasions, and as Google wrangles its AI to be less erratic, it becomes harder to tune it into something genuinely useful. For AI to be useful, it needs to be dynamic, but that dynamism carries the perils of the wider internet it relies on.

Google Bard tries and fails to be a search engine

Google consistently says that Bard isn't a search engine and is instead "a complement to Search." It seems that Google wants you to use Bard as a way to help generate emails, write notes, and do other creative things... and not get information.

If it's the case that Google doesn't want Bard to be seen as a search engine, then why is it connected to the internet?

As many users have (humorously) pointed out, you can ask Bard questions that it definitely needs the internet for, and it will search and try to give you an answer. If it were designed simply to help with creativity, Google could easily make it a lot more like ChatGPT and restrict it to answering questions about the data it was trained on. That isn't the case, though, and if you ask it questions it doesn't quite know the answer to, it will make some ambitious attempts at sourcing.

For example, a user asked Bard how long it would take for the service to shut down. Bard's answer cited a joke Hacker News comment to claim that the service had already been shut down.

Bard has also taken liberties with the months of the year, something it seems to struggle with when you give it just the first two.

My best guess at what happened above is that Bard latched onto the "uary" ending of the first two months (along with the misspelling of February) and followed its own pattern rather than the actual months of the year.

No matter what, though, it's clear that Bard isn't ready to be a search engine, and I assume that's exactly why Google hasn't advertised it as one. You're not supposed to use it as a search engine for the simple reason that, right now, it doesn't seem capable of being one. It can find things, but it has no way of verifying their authenticity, and if it can cite a random Hacker News comment as proof that the entire service has been shut down, it's clear that quite a bit of work needs to be done in this area.

Bard, ChatGPT, and Bing versus the future of misinformation

Misinformation and disinformation are already incredibly pervasive problems in modern society, and artificial intelligences that have no frame of reference for the things they regurgitate to an end user may contribute more than ever to their spread. The general populace views an AI as an all-knowing being, capable of comprehending the entire internet and distilling it for mere users like ourselves. That couldn't be further from the truth, and it's only going to get worse.

Bard has already been accused of plagiarizing Tom's Hardware and, when confronted, accused the user of lying. If you think some of this sounds similar to the "I have been a good Bing 😊" debacle, you'd be right. These are the same pitfalls Microsoft has already been dealing with, and they're hard to circumvent without limiting your AI too much. When an AI calls something a lie that isn't one, it's directly contributing to misinformation, even if the topic is mundane.

Bard isn't the only generative AI accused of spreading misinformation, but it's certainly part of the ongoing problem. It's nearly impossible to ensure that an AI gives an end user only facts, especially when it comes to looking up recent information. What do you trust an AI to answer effectively? For me, it's basically anything the Google Assistant is capable of... primarily retrieving already-collated data. That information isn't being put together or synthesized by an AI for end-user consumption; it's just being read from a database.

We're at a crossroads. On the one hand, AI is getting more and more powerful by the day. Language models like these are growing in popularity, and most people will only see their effectiveness and begin using them right away; hell, ChatGPT is the fastest-growing consumer application ever in terms of the time it took to reach one hundred million users. On the other hand, programs like these wield an incredible amount of influence while their output goes largely unpoliced and unverified.

That's not to say these AI tools can never be good, but it's very likely they will contribute to misinformation and disinformation in the future, and that's going to be hard to combat. Facebook already manages to convince people of all kinds of conspiracy theories, and an AI with Google's (or Microsoft's, or OpenAI's) name on it will almost certainly feed those conspiracy theories at some point. For now, the best advice we can give with any of these tools is to re-verify everything they tell you, and that will likely remain the prevailing advice for a long time to come.