May 4, 2024

Brighton Journal

Microsoft launches Phi-3, its smallest AI model to date

Microsoft has launched the next version of its lightweight AI model, Phi-3 Mini, the first of three small models the company plans to release.

Phi-3 Mini measures 3.8 billion parameters and is trained on a smaller dataset than large language models such as GPT-4. It is now available on Azure, Hugging Face, and Ollama. Microsoft also plans to release Phi-3 Small (7 billion parameters) and Phi-3 Medium (14 billion parameters). Parameters refer to how many complex instructions a model can understand.
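For readers who want to try the model where it is already published, a minimal sketch using the Hugging Face transformers library might look like the following. The checkpoint name microsoft/Phi-3-mini-4k-instruct and the loading options are assumptions about how the release is packaged, not details from this article.

```python
# Hedged sketch: load Phi-3 Mini from Hugging Face and generate a short completion.
# Assumes the "microsoft/Phi-3-mini-4k-instruct" checkpoint and enough local
# memory/GPU to run a 3.8B-parameter model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,  # the release may ship custom model code
    device_map="auto",       # let accelerate place the weights
)

prompt = "Explain in one sentence what a model parameter is."
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```

On Ollama the equivalent one-liner would be along the lines of running the published phi3 model from the command line, which downloads and serves the same weights locally.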

The company launched Phi-2 in December, which performed on par with larger models such as Llama 2. Microsoft says Phi-3 performs better than the previous version and can deliver responses close to those of a model 10 times its size.

Eric Boyd, corporate vice president of Microsoft Azure AI Platform, tells The Verge that Phi-3 Mini is as capable as LLMs like GPT-3.5, "just in a smaller form factor."

Compared to their larger counterparts, small AI models are often cheaper to run and perform better on personal devices such as phones and laptops. The Information reported earlier this year that Microsoft was building a team focused specifically on lightweight AI models. Alongside Phi, the company has also built Orca-Math, a model focused on solving math problems.

Developers trained Phi-3 with a "curriculum," Boyd says. They were inspired by how children learn from bedtime stories: books with simpler words and sentence structures that nonetheless talk about bigger topics.

"There aren't enough children's books out there, so we took a list of more than 3,000 words and asked an LLM to create 'children's books' to teach Phi," says Boyd.
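Boyd's description amounts to sampling a few words from a fixed vocabulary, prompting a larger LLM to write a simple "children's book" passage around them, and collecting the results as training text. A rough, hypothetical sketch of that loop, assuming an OpenAI-compatible client and a tiny stand-in for the 3,000-word list, could look like this; none of the names below come from Microsoft's actual pipeline.

```python
# Hypothetical synthetic-data loop in the spirit of Boyd's description.
# The vocabulary, prompt wording, client, and model name are all assumptions.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
vocabulary = ["river", "gravity", "promise", "lantern", "orbit"]  # stand-in for the 3,000+ word list

def generate_story() -> str:
    words = random.sample(vocabulary, 3)
    prompt = (
        "Write a short children's-book style story in simple sentences "
        f"that naturally uses the words: {', '.join(words)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable teacher LLM; the name is illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_story())
```

Repeating a loop like this over the full word list would yield the kind of simple, broad-topic corpus the quote describes.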

He added that Phi-3 simply builds on what previous iterations learned. While Phi-1 focused on coding and Phi-2 began to learn reasoning, Phi-3 is better at both coding and reasoning. While the Phi-3 family has some general knowledge, it can't beat GPT-4 or other LLMs in breadth; there's a big difference between the kinds of answers you can get from an LLM trained on the entirety of the internet and those from a smaller model like Phi-3.

Boyd says companies often find that smaller models like Phi-3 work better for their custom applications, since many companies' internal data sets are on the smaller side anyway. And because these models use less computing power, they are often far more affordable.