November 18, 2024

Brighton Journal

Complete News World

Big Tech’s AI arms race heats up with a new large language model


Feb. 24 (Reuters) – Meta Platforms Inc (META.O) said on Friday it will release to researchers a new large language model, the core software of a new artificial intelligence system, intensifying the AI arms race as big tech companies rush to integrate the technology into their products and dazzle investors.

The public battle for control of the AI technology space kicked off late last year with the launch of Microsoft-backed OpenAI’s ChatGPT software, prompting tech behemoths from Alphabet Inc (GOOGL.O) to China’s Baidu (9888.HK) to tout their own offerings.

Meta’s LLaMA, short for Large Language Model Meta AI, will be available under a noncommercial license to researchers, government entities, civil society groups, and academia, the company said in a blog post.

Large language models mine vast amounts of text in order to summarize information and create content. They can answer questions, for example, with sentences that read as if they were written by humans.


The model, which Meta said required “significantly less” computing power than previously demonstrated models, was trained in 20 languages with an emphasis on those with Latin and Cyrillic alphabets.

“Today’s Meta announcement appears to be a step in testing their generative AI capabilities so they can implement them into their products in the future,” said Gil Luria, senior software analyst at DA Davidson.

“Generative AI is a new application of AI that Meta has less experience with, but is clearly important to the future of their business.”

Artificial intelligence has emerged as a bright spot for investment in the tech industry, where slowing growth has led to widespread layoffs and reduced speculative bets.


Meta said LLaMA can outperform competitors that examine more parameters, the variables the algorithm takes into account.

Specifically, it said that the version of LLaMA with 13 billion parameters could outperform GPT-3, a recent predecessor of the model on which ChatGPT is built.

It described its 65-billion-parameter LLaMA model as “competitive” with Google’s Chinchilla-70B and PaLM-540B, which are larger than the model Google used to demonstrate its Bard chat-powered search.

A spokeswoman for Meta attributed the performance to a larger quantity of “cleaner” data and “architectural improvements” in the model that enhanced training stability.

In May last year, Meta released a large language model, OPT-175B, also aimed at researchers, which formed the basis of a new iteration of its BlenderBot chatbot.

It later introduced a model called Galactica, which could write scientific articles and solve math problems, but quickly pulled the demo after it generated authoritative-sounding but false responses.

Additional reporting by Yuvraj Malik and Eva Matthews in Bengaluru and Katie Paul in New York; Editing by Shailesh Cooper and Grant McCall

Our standards: Thomson Reuters Trust Principles.