Google reportedly rushed to launch its AI chatbot Bard last month even though employees had called the tool a “pathological liar” and warned that its advice could “lead to serious injury or death.” Despite warnings that the bot was prone to spewing misinformed responses, the company reportedly moved forward anyway.
Current and former employees allege that Google sidelined its own AI ethics commitments in a desperate effort to keep up with rivals such as Microsoft-backed OpenAI’s popular ChatGPT, Bloomberg reported on Wednesday.
Google’s push to develop Bard reportedly intensified late last year, after the success of ChatGPT prompted top brass to declare a competitive “code red.”
Microsoft’s planned integration of ChatGPT into the Bing search engine is widely seen as a threat to Google’s dominant online search business.
Google rolled out Bard as an “experiment” to US users last month.
However, many Google employees expressed concern when ordered to test Bard to identify potential bugs and issues before rollout. This process is known in the tech world as “dogfooding”.
Bard’s testers flagged concerns that the chatbot spewed information ranging from inaccurate to potentially dangerous.
One worker described Bard as a “pathological liar” after seeing its erratic responses, according to a screenshot of an internal discussion obtained by Bloomberg; another called the chatbot’s performance “cringe-worthy.”
In one instance, when a Google employee asked Bard how to land a plane, the chatbot gave advice that would have led to a crash, Bloomberg said.
In another case, Bard reportedly responded to a prompt about scuba diving with answers that would “likely lead to serious injury or death.”
Google CEO Sundar Pichai raised eyebrows when he admitted the company does not “fully understand” its own AI technology.
“Look, you don’t fully understand. And you can’t quite tell why it said that, why it got [it] wrong,” Pichai said in an interview with “60 Minutes” last Sunday.
In February, an unnamed Google employee wrote on an internal forum that Bard was “worse than useless” and asked executives not to launch the chatbot in its current state.
“AI ethics has taken a back seat,” Meredith Whittaker, a former Google employee and current president of the privacy-focused Signal Foundation, told Bloomberg. “If ethics aren’t positioned to take precedence over profit and growth, they ultimately won’t work.”
An employee who spoke to the outlet said Google executives chose to label Bard and other new AI products “experiments” so the public would be more forgiving of their early struggles.
As Bard approached launch, Google allegedly relaxed internal requirements aimed at determining whether a particular AI product is safe for general use.
In March, Jen Gennai, Google’s AI Principles Operations & Governance lead, reportedly overruled an assessment by members of her own team that Bard was not ready for release because it could cause harm, a source told Bloomberg.
Gennai disputed the report in a statement, saying her internal reviewers had suggested “risk mitigation and technology adjustments rather than providing recommendations for the final product launch.”
A council of senior leaders from Google’s product, research and business teams then decides whether an AI project should move forward and what adjustments are needed, Gennai added.
“For this particular review, I added to the list of potential risks from the reviewers and escalated the resulting analysis to that multidisciplinary council, which determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails and appropriate disclaimers,” Gennai said in a statement to The Post.
“Responsible AI remains a top priority for the company,” said Google spokesman Brian Gabriel.
“We continue to invest in teams working to apply our AI principles to our technology,” Gabriel told The Post.
Google’s website for Bard now calls the tool an “experiment.”
An “FAQ” section on the site openly acknowledges that Bard “may display inaccurate information or offensive remarks.”
“Accelerating people’s ideas with generative AI is really exciting, but it’s just the beginning and Bard is an experiment,” the site says.
Bard’s launch has already caused some embarrassment for the tech giant.
Last month, app researcher Jane Manchun Wong posted an exchange in which Bard sided with DOJ antitrust officials in a pending lawsuit against Google and declared that its own creator dominates the digital advertising market.
In February, social media users noted that Bard gave an incorrect answer about the James Webb Space Telescope in response to a prompt included in the company’s own advertisement.
Google’s Bard chatbot has come under increased scrutiny amid widespread debate about the potential risks associated with the unrestrained development of AI technology.
Billionaire Elon Musk and more than 1,000 experts in the field have signed an open letter calling for a six-month moratorium on the development of advanced AI until proper guardrails are put in place.
Despite his safety concerns, Musk is moving quickly to launch his own AI startup as competition within the industry intensifies; Google and Microsoft are just two rivals in an increasingly crowded field.
In an interview with 60 Minutes, Pichai declared that AI will eventually affect “every product of every company.”
He also expressed support for government regulation to address potential risks.
“I think we have to be very thoughtful,” said Pichai. “And I think these are all things that society needs to understand as we move forward. It’s not for the company to decide.”
Source: “Despite Ethical Concerns, Google Launches Bard Chatbot: Report” (https://nypost.com/2023/04/19/google-launched-bard-chatbot-despite-ethics-concerns-report/)