Intel AI Platforms Speed Up Microsoft Phi-3 Generative AI Models

Intel has collaborated with Microsoft to enable Phi-3 models across Intel CPUs, GPUs, and Gaudi AI accelerators. The two companies also worked together on the accelerator abstraction in DeepSpeed and on extending automatic tensor parallelism support to Phi-3 and other models in Hugging Face. Because of their small size, Phi-3 models are well suited to on-device inference, making lightweight AI workloads practical on AI PCs and edge devices. On client hardware, Intel supports the models through software frameworks such as PyTorch and Intel Extension for PyTorch for research and development, and through the OpenVINO Toolkit for model inference and deployment.
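To make the on-device workflow above concrete, here is a minimal Python sketch of loading a Phi-3 model for local inference with Hugging Face transformers. This is not Intel's or Microsoft's reference code: the model id is the publicly listed one on the Hugging Face Hub, while the helper function and generation settings are illustrative assumptions for this example.

```python
# Minimal sketch, assuming the Hugging Face transformers library and the
# public "microsoft/Phi-3-mini-4k-instruct" checkpoint. The generate()
# helper and its settings are illustrative, not a reference implementation.

MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for `prompt` with a locally loaded Phi-3 model."""
    # Imports are deferred so the sketch can be imported and inspected
    # without the heavy transformers/torch dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Tokenize, run greedy generation, and decode back to text.
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Usage (downloads several GB of model weights on first run):
# print(generate("Why are small language models a good fit for edge devices?"))
```

For deployment on Intel client hardware, the same checkpoint can alternatively be converted and served through the OpenVINO Toolkit mentioned above.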

Pallavi Mahajan, general manager of Data Center and AI Software at Intel, highlighted the importance of collaborating with Microsoft and other partners in the AI software ecosystem to broaden access to AI. She emphasized the significance of ensuring that Intel hardware supports the new family of Phi-3 models across the data center, the edge, and client devices.

Article Source
https://www.ndtvprofit.com/technology/intel-ai-platforms-accelerate-microsoft-phi-3-generative-ai-models