Some Google employees have been sounding the alarm about the Mountain View, California, company's artificial intelligence development, The New York Times reported. In a plot twist that rivals a sci-fi thriller, two Google employees tasked with reviewing AI products attempted to halt the release of Google's AI chatbot, Bard.
Their concerns? The chatbot generated dangerous or false statements, per the report.
In a tense March showdown, the two reviewers, working under Jen Gennai, the director of Google's Responsible Innovation group, recommended blocking Bard's release in a risk evaluation. But in a scene straight out of an AI-themed soap opera, sources told The New York Times that Gennai altered the document to remove the recommendation and downplay the chatbot's risks.
Gennai told Insider that she had "added to the list of potential risks from the reviewers and escalated the resulting analysis" to a committee of senior product, research, and business leads, which then determined it was appropriate to move forward for a limited experimental launch. She told The Times that reviewers were not supposed to weigh in on whether to proceed.
A representative for Google told Insider: "We're pleased with the early reception of our experiment with Bard, even as we keep improving it, with commentators and users widely recognizing that it's been released conservatively, with significant caution and limits."
In recent months, the tech world has been racing to deploy generative AI products faster than a robot dog on roller skates. The release and viral popularity of OpenAI's ChatGPT seem to have lit the proverbial AI fuse, but the speed of development is raising eyebrows and alarms elsewhere.
In an AI intervention, several heavyweights signed an open letter in March calling for a six-month pause on advanced AI development. The letter expressed concerns that AI companies were locked in an "out-of-control race" and cited profound risks to society from the advanced technology.
John Burden, one of the letter's signatories and a research associate at the Centre for the Study of Existential Risk, previously told Insider that the rate of AI development had picked up at an unprecedented speed. "Things that five years ago would have seemed unrealistic to expect in the next decade have come and gone," he said. "On a bigger scale, we just aren't ready for the impact that this technology might have — considering we don't really know how these models are doing what they are doing."
So, as the AI saga unfolds, we're left wondering: Are we witnessing the dawn of a new era or the beginning of a dystopian AI uprising? Only time will tell.