November 26, 2024

Stanford University ranks major AI models in terms of transparency

How much do we know about artificial intelligence?

The answer, when it comes to the large language models released by companies like OpenAI, Google, and Meta over the past year: basically nothing.

These companies generally do not publish information about the data that was used to train their models, or the hardware they use to run them. There are no user manuals for AI systems, nor is there a list of everything these systems can do, or the types of safety tests they have undergone. Although some AI models have become open source — meaning their code is offered for free — the public still doesn’t know much about the process of creating them, or what happens after they’re released.

This week, Stanford University researchers unveiled a scoring system they hope will change all that.

The system, known as the Foundation Model Transparency Index, ranks 10 major AI language models, sometimes called “foundation models,” according to how transparent they are.

The index includes popular models such as GPT-4 from OpenAI (which powers the paid version of ChatGPT), PaLM 2 from Google (which powers Bard), and LLaMA 2 from Meta. It also includes lesser-known models such as Amazon’s Titan Text and Inflection AI’s Inflection-1, the model that runs the Pi chatbot.

To arrive at the rankings, researchers evaluated each model based on 100 criteria, including whether its maker disclosed the sources of its training data, information about the hardware used, the labor involved in training it and other details. The rankings also include information about the labor and data used to produce the model itself, along with what researchers call “downstream indicators,” which relate to how the model will be used after it is released. (For example, one question asked is: “Does the developer disclose its protocols for storing, accessing, and sharing user data?”)
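The published rankings are simple percentages. As a rough illustration of how a checklist of yes-or-no disclosure criteria can be reduced to a single score, here is a minimal sketch in Python; the criterion names below are hypothetical, and the index’s actual methodology is more detailed than a flat checklist.

```python
# Illustrative sketch only: turn a checklist of disclosure criteria into a
# percentage transparency score. Criterion names are hypothetical, not the
# index's actual indicators.

CRITERIA = [
    "training_data_sources_disclosed",
    "hardware_disclosed",
    "training_labor_disclosed",
    "user_data_protocols_disclosed",
    # ... the real index evaluates 100 such indicators
]

def transparency_score(disclosures: dict) -> float:
    """Return the share of criteria satisfied, as a percentage."""
    met = sum(1 for criterion in CRITERIA if disclosures.get(criterion, False))
    return 100 * met / len(CRITERIA)

# A developer that discloses its data sources and hardware, but not its
# labor practices or user-data handling, scores 50% on this sketch.
example = {
    "training_data_sources_disclosed": True,
    "hardware_disclosed": True,
    "training_labor_disclosed": False,
    "user_data_protocols_disclosed": False,
}
print(f"{transparency_score(example):.0f}%")  # prints "50%"
```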

The most transparent of the ten models, according to the researchers, was LLaMA 2, with a score of 54%. GPT-4 had the third-highest transparency score, 40%, the same as PaLM 2.

Percy Liang, who leads Stanford’s Center for Research on Foundation Models, called the project a necessary response to declining transparency in the AI industry. With money pouring into AI and the biggest tech companies fighting for dominance, he said, the latest trend among many companies is to cloak themselves in secrecy.

“Three years ago, people were publishing and releasing more details about their models,” Mr. Liang said. “Now, there is no information about what these models are, how they are built and where they are used.”

Transparency is especially important now, as models become more powerful, and millions of people integrate AI tools into their daily lives. Knowing more about how these systems work would give regulators, researchers and users a better understanding of what they are dealing with, and allow them to ask better questions of the companies behind the models.

“There are some fairly important decisions being made about building these models, which have not been shared,” Mr. Liang said.

I generally hear one of three common responses from AI executives when I ask them why they don’t share more information about their models publicly.

The first is lawsuits. Several AI companies have already been sued by authors, artists and media companies who accuse them of illegally using copyrighted works to train their AI models. So far, most lawsuits have targeted open source AI projects, or projects that have disclosed detailed information about their models. (After all, it’s hard to sue a company for ingesting your artwork if you don’t know which artworks it ingested.) Lawyers at AI companies worry that the more they say about how their models are built, the more they open themselves up to costly and unpleasant lawsuits.

The second common response is competition. Most AI companies believe their models are successful because they have some sort of secret sauce: a high-quality data set that other companies don’t have, a fine-tuning technique that produces better results, or an optimization that gives them an advantage. They argue that if you force AI companies to disclose these recipes, you make them hand over their hard-won insight to competitors, who can easily copy it.

The third response I hear most often is safety. Some AI experts argue that the more information AI companies reveal about their models, the faster AI will advance, because each company will see what its competitors are doing and immediately try to outdo them by building a bigger, better, faster model. That, these people say, would give society less time to regulate and slow down AI, which could put us all at risk if AI becomes too capable too quickly.

Researchers at Stanford University don’t believe these explanations. They believe that there should be pressure on AI companies to publish as much information as possible about powerful models, because users, researchers and regulators need to be aware of how these models work, what their limitations are and how dangerous they are.

“As the influence of this technology increases, transparency decreases,” said Rishi Bommasani, one of the researchers.

I agree. Foundation models are too powerful to remain so opaque, and the more we know about these systems, the better we can understand the threats they may pose, the benefits they may unlock, and how they might be regulated.

If AI executives are worried about lawsuits, they can fight for a fair-use exemption that would protect their ability to use copyrighted information to train their models, rather than hiding the evidence. If they are concerned about revealing trade secrets to competitors, they can disclose other kinds of information or protect their ideas through patents. And if they’re worried about starting an AI arms race... well, aren’t we already in one?

We cannot carry out the AI revolution in the dark. We need to be able to see inside the black boxes of artificial intelligence if we are going to let it change our lives.