
Loss of user trust is the final point of no return.


Google (GOOG, GOOGL) on Wednesday introduced a slew of new generative AI features across its vast ecosystem of products at its I/O conference. The event was equal parts a chance for the company to show off what it has been working on over the past year and to prove to users and investors that it, not Microsoft (MSFT), is the leading company in artificial intelligence.

One of the major announcements during the show was an AI composition feature called Help Me Write, which will help users draft emails. But because the feature relies on generative AI, Google Workspace VP Aparna Pappu says the company needs to be careful not to push out inappropriate responses, or it risks losing users' trust.

“The loss of user trust is the ultimate point of no return,” she told Yahoo Finance. “This is our North Star: user trust cannot be lost.”

Generative AI has raised a slew of thorny questions, from whether it will make jobs like yours obsolete to whether it should be allowed in schools. How much confidence users can have in the technology to produce accurate responses to their prompts has also become a point of contention.

Google CEO Sundar Pichai speaks on stage during the Google I/O keynote session at Shoreline Amphitheater in Mountain View, Calif., on May 10, 2023. (Photo by Josh Edelson/AFP via Getty Images)


Pappu says this is also part of the reason why Google releases its generative AI features to trusted testers before making them public.


“Before we go into [general availability], we have to test it and put it through the wringer to even get it to Labs [Google’s early user test program],” Pappu explained, adding that a feature goes through rigorous usability, liability, and safety testing before a single third-party user is allowed to try it.

Google's and Microsoft's generative AI offerings specifically indicate that they are in early testing stages or that some answers may not be accurate. Furthermore, generative AI in general is prone to “hallucinations,” an elaborate way of saying it can produce responses that seem accurate but are false.

During a “60 Minutes” segment, Google's chatbot Bard hallucinated a non-existent book. CEO Sundar Pichai explained in the clip that the issue is one many chatbots have at this point and one that engineers are still trying to better understand.

To that end, Pappu says Google continues to test its systems as a way to try to prevent wrong answers or inappropriate responses.

Sign up for the Yahoo Finance technical newsletter.


“Responsible and safe AI doesn't happen overnight. It's years of working on AI and figuring out how to do adversarial testing,” Pappu explained. “So there's basically responsible and safe AI built into how we make these products.”

One issue that Pappu says is unique to Workspace is that it is used by billions of people, each with their own understanding of the technology. Introducing a feature like generative AI without confusing those users is a puzzle in itself.


“We have 3 billion users,” she said. “We have a responsibility to make these things really simple and easy to use.”

By Daniel Howley, tech editor at Yahoo Finance. Follow him on Twitter @DanielHowley.

Click here for the latest stock market news and in-depth analysis, including events that move stocks.

Read the latest financial and business news from Yahoo Finance.