The PyTorch Framework Advances with New Release and Mobile/Edge Projects

The open source machine learning (ML) framework PyTorch is making significant strides with its latest release, alongside a new project that enables AI inference on mobile devices and at the edge. The developments were announced at the PyTorch Conference, which coincided with the one-year anniversary of the formation of the PyTorch Foundation at the Linux Foundation.

PyTorch 2.1 Update

Technical details of the PyTorch 2.1 release, which shipped on October 4, were discussed during the event. Taking center stage, however, was the announcement of new mobile and edge efforts under PyTorch Edge, including the open sourcing of ExecuTorch by Meta Platforms (formerly Facebook). ExecuTorch is a technology that facilitates the deployment of AI models to mobile and edge devices for on-device inference.

Meta Platforms has already demonstrated the effectiveness of ExecuTorch, utilizing it in the latest generation of Ray-Ban smart glasses and the Quest 3 VR headset. By integrating this technology with the open source PyTorch project, the aim is to push the boundaries and enable a new era of on-device AI inference capabilities.

“At the Linux Foundation we host over 900 technical projects; PyTorch is one of them,” said Ibrahim Haddad, executive director of the PyTorch Foundation, during his opening keynote at the PyTorch Conference. “There are over 900 examples of how a neutral open home for projects helps projects grow, and PyTorch is a great example of that.”

Expanding Application of PyTorch

While PyTorch has long been a widely used tool for AI training, including the development of popular large language models such as GPT models from OpenAI and Meta’s Llama, its usage for inference has historically been limited. However, this is changing rapidly.

IBM has contributed to PyTorch 2.1, enhancing inference for server deployments. Performance improvements include support for automatic dynamic shapes, which minimizes recompilations triggered by changing tensor sizes. Additionally, torch.compile can now translate NumPy operations into PyTorch, accelerating the kinds of numerical calculations commonly used in data science.
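The NumPy translation works by passing an ordinary NumPy function through torch.compile; the compiled version accepts and returns NumPy arrays while the computation runs through PyTorch's compiler stack. A minimal sketch (the `normalize` function is an illustrative example, not from the release notes):

```python
import numpy as np
import torch

# A plain NumPy function: in PyTorch 2.1+, torch.compile can trace
# NumPy operations and lower them to the PyTorch compiler stack.
def normalize(x):
    return (x - x.mean()) / x.std()

compiled_normalize = torch.compile(normalize)

data = np.arange(1024, dtype=np.float32)
out = compiled_normalize(data)  # accepts and returns a NumPy array
print(type(out).__name__, out.shape)
```

No code changes are needed beyond the torch.compile wrapper, which is what makes this attractive for existing data science workloads.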

“ExecuTorch is a new end-to-end solution for deploying AI on mobile and edge devices,” said Mergen Nachin, Software Engineer at Meta during a keynote session at the PyTorch Conference. “Today’s AI models extend beyond servers, reaching edge devices such as mobile, AR/VR headsets, wearables, embedded systems, and microcontrollers. ExecuTorch tackles the challenges posed by these restricted edge devices by providing a comprehensive workflow from PyTorch models to optimized native programs.”

ExecuTorch starts with a standard PyTorch module and captures it as an exported graph, which then undergoes further optimization and compilation to target specific devices. ExecuTorch's portability allows it to run on both mobile and embedded devices, offering consistency and improved developer productivity through the use of consistent APIs and software development kits.

“Today we are open sourcing ExecuTorch. It’s still very early, but we’re open sourcing it because we want to get feedback from the community and embrace the community,” Nachin stated. The goal is to collaboratively address fragmentation in deploying AI models across a wide array of edge devices.
