Google finds itself in a precarious position. As the tech giant rushes to catch up with the likes of OpenAI, it must also grapple with the thorny issue of how to handle election-related conversations in its chatbot.

The decision to ban election talk altogether may seem like a cop-out, but as Bart Simpson once wisely observed, they're "damned if they do, damned if they don't." Google's recent missteps, such as Gemini's image generator producing historically inaccurate depictions of Nazi soldiers, have only underscored the challenges of developing a chatbot that can navigate sensitive topics without causing offence or spreading misinformation.

As someone who is admittedly not classically trained in machine learning or artificial intelligence, I can still appreciate the gravity of the situation. With deepfakes and misinformation set to proliferate during the upcoming elections in India, the United States, and likely the United Kingdom, the stakes could not be higher.

By taking a conservative approach and banning election talk altogether, Google is essentially trying to stay out of the fray and avoid becoming a target for both the left and the right. The last thing the company needs is for its own chatbot to become a source of inflammatory information, or, to put it another way, to have its own "nuclear weapon" go off in its hands.

If a company like Google has the power to influence elections through its chatbot, does it also have an obligation to ensure that its technology is not misused? And if so, how can it balance the need for free speech and open debate with the imperative to prevent the spread of false or harmful information?

These are not easy questions to answer, and they will likely become even more pressing as generative AI continues to advance. In the meantime, Google's ban on election talk in its chatbot may serve as a stopgap, but it is not a long-term solution.

As for the issue of liability, it's a question that gives me pause, but I'm still no closer to an answer. In cases like the Air Canada Moffatt chatbot ruling, the lines of liability are clear to me. In calls like the open letter regarding deepfakes that I wrote about a few blogs back, where the question was tool providers' liability for what their customers choose to do with those tools, I fell the other way. On this case, though, I'm in two minds. Thankfully, I don't make these decisions.

