Microsoft launches cost-effective and lightweight AI model Phi-3-mini

Phi-3-mini marks the initial release among a trio of small language models (SLMs) by Microsoft.

Sébastien Bubeck, Microsoft vice president of Generative AI research, who has led the company's efforts to develop more capable small language models. (Photo by Dan DeLong for Microsoft)

On Tuesday, tech giant Microsoft unveiled its latest innovation: a lightweight artificial intelligence model dubbed Phi-3-mini. This move signals the company's strategic efforts to cater to a broader clientele by offering more affordable options in the rapidly evolving landscape of AI technology.

Phi-3-mini marks the initial release among a trio of small language models (SLMs) by Microsoft. The company is betting big on these models, recognising their potential to revolutionise various industries and redefine how people engage with technology in their professional lives.


Speaking on the cost-effectiveness of Phi-3-mini, Sébastien Bubeck, Microsoft's vice president of GenAI research, highlighted its significant price advantage over comparable models in the market. "Phi-3 is not slightly cheaper, it's dramatically cheaper," Bubeck stated, noting a cost reduction of up to tenfold compared to its competitors with similar capabilities.

Designed to handle simpler tasks, SLMs like Phi-3-mini offer practical solutions tailored for companies operating with limited resources. This strategic focus aligns with Microsoft's commitment to democratise AI and make it more accessible to businesses of all sizes.

Graphic illustrating how the quality of the new Phi-3 models, as measured by performance on the Massive Multitask Language Understanding (MMLU) benchmark, compares to other models of similar size. (Image courtesy of Microsoft)

Phi-3-mini is now available in the AI model catalogue of Microsoft's cloud platform Azure, as well as on the machine learning model hub Hugging Face and on Ollama, a framework for running models locally. The SLM is also optimised for Nvidia's graphics processing units (GPUs) and integrated with Nvidia Inference Microservices (NIM), Nvidia's software tool for deploying models, further enhancing its accessibility and performance.
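For readers who want to try the model locally, a minimal sketch of the Ollama route mentioned above (the `phi3` model tag and the Hugging Face model ID `microsoft/Phi-3-mini-4k-instruct` are assumptions and may differ by release):

```shell
# Pull and run Phi-3-mini through Ollama (model tag "phi3" is an assumption)
ollama pull phi3
ollama run phi3 "Summarise the benefits of small language models in two sentences."

# Alternatively, download the weights from Hugging Face
# (model ID below is an assumption, not confirmed by the article)
git lfs install
git clone https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
```

Because the model has only 3.8 billion parameters, it can run on consumer hardware without a dedicated data-centre GPU, which is the accessibility argument the article makes.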

Jaspreet Bindra, founder of TechWhisperer UK Limited, commented: "The newest such 'Small Language Model' comes from Microsoft with Phi-3 Mini. It is a lightweight artificial intelligence model developed as a part of its small language model (SLM) family. It is the first of three SLMs that Microsoft plans to launch in the near future, with the other two being Phi-3 Small and Phi-3 Medium. Phi-3 Mini has a capacity of 3.8 billion parameters and is designed to perform simpler tasks, making it more accessible and affordable for businesses with limited resources. Phi-3 Mini has been trained on a dataset that is smaller than that of large language models such as GPT-4. It is a part of Microsoft's broader initiative to introduce a series of SLMs that are tailored for simpler tasks, making them ideal for businesses with fewer resources. This approach promises to reduce costs, make the models faster as they run on the edge, and enable more enterprise and consumer use cases of Generative AI."

Paramdeep Singh, co-founder of Shorthills AI, said: "All good things come in small packages. Most of the improvements in AI and Large Language Models (LLM) have been linked to larger models. GPU and compute are some of the biggest bottlenecks for Generative AI progress. Microsoft has reversed the trend and released a Small Language Model (SLM) called Phi-3-mini. This model is performing as well as some of the models that are 100 times its size. This model does not require large compute hardware like GPU and can literally run on your cell phone. The cherry on the cake is that it is free and open-source for academic and commercial use! This would be a real game changer for Generative AI."

Just last week, Microsoft injected $1.5 billion into G42, an AI firm based in the UAE. Additionally, Microsoft has forged strategic partnerships with innovative startups like Mistral AI, facilitating the integration of cutting-edge AI models into its Azure cloud computing platform.


Published on: Apr 24, 2024, 9:00 AM IST