Google Acknowledges That “Some Odd, Inaccurate” Results May Come From Its AI Overviews

On Thursday, Google acknowledged that there is room for improvement in its AI Overviews feature, which uses artificial intelligence to generate answers to search queries.

Although Google said it thoroughly tested the new feature before releasing it two weeks ago, the search giant admitted that the technology produces “some odd and erroneous overviews.” Examples include recommending glue to stick cheese to pizza and suggesting that people drink urine to pass kidney stones quickly.

Many of the cases were trivial, but some search results could be harmful. When the Associated Press asked Google last week which wild mushrooms were edible, the machine-generated response was long and mostly accurate on technical details. However, Mary Catherine Aime, a professor of mycology and botany at Purdue University who reviewed Google’s answer to the AP’s query, noted that “a lot of information is missing that could have the potential to be sickening or even fatal.”

For instance, she noted that while the information about puffball mushrooms was “more or less correct,” Google’s summary emphasized looking for mushrooms with solid white flesh, a feature shared by many potentially deadly puffball look-alikes.

Another widely publicized incident occurred when an AI researcher asked Google how many Muslims have served as president of the United States. The AI-generated response confidently repeated a long-debunked falsehood: “The United States has had one Muslim president, Barack Hussein Obama.”

The pullback is the most recent example of a tech company releasing an AI product too soon in an attempt to establish itself as a front-runner in the hotly contested market.

Liz Reid, Google’s head of search, said in a company blog post on Thursday that the company is scaling back AI Overviews while it continues to make improvements, because the feature occasionally produced unhelpful answers to searches.

“[S]ome odd, inaccurate or unhelpful AI Overviews certainly did show up. And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve,” Reid said.

According to Reid, a lack of relevant, helpful advice online led AI Overviews to generate dubious content in response to absurd queries such as “How many rocks should I eat?” She added that the feature has also misinterpreted language on webpages, displaying false information in response to Google queries, and has taken sarcastic remarks from discussion boards at face value.

“In a small number of cases, we have seen AI Overviews misinterpret language on webpages and present inaccurate information. We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies,” Reid wrote.

The company is temporarily reducing how often AI-generated overviews appear by adding “triggering restrictions for queries where AI Overviews were not proving to be as helpful.” Google also said it tries to avoid showing AI Overviews for major news topics “where freshness and factuality are important.”

The company also said it has made changes “to limit the use of user-generated content in responses that could offer misleading advice.”