Meta Announces Trial of Multimodal AI for Ray-Ban Meta Smart Glasses

Meta Platforms, the parent company of Facebook, Instagram, and WhatsApp, has made another significant announcement. Following the recent release of Audiobox, its voice cloning and audio generation AI, the company is now introducing a new multimodal AI designed specifically for its Ray-Ban Meta smart glasses.

The trial of the new Meta multimodal AI will be conducted in the United States and is set to begin this week. According to Andrew Bosworth, Chief Technology Officer of Meta (the company formerly known as Facebook), a public launch of the AI is scheduled for 2024. In an Instagram video post, he stated, “Next year, we’re going to launch a multimodal version of the AI assistant that takes advantage of the camera on the glasses in order to give you information not just about a question you’ve asked it, but also about the world around you.”

The multimodal AI will let users ask about their surroundings rather than only answer spoken questions. Bosworth said he was excited about the testing phase, which will run through an early access program in the US, though he did not explain how to join the program.

The Ray-Ban Meta smart glasses, introduced at Meta’s annual Connect conference in September, start at $299. They already ship with a built-in AI assistant, but its capabilities are limited: despite the built-in cameras, it cannot respond intelligently to photos or video, nor work from a live view of the wearer’s perspective. For now, the assistant is controlled entirely through voice commands, much like Amazon’s Alexa or Apple’s Siri.

Bosworth showcased one capability of the multimodal version in the Instagram clip. In it, he wears the glasses while looking at a piece of wall art depicting the state of California. He also appears to be holding a smartphone, suggesting the AI may need to be paired with a phone to function. On screen, the multimodal AI’s interface correctly identified the art as a wooden sculpture and described it as beautiful.

The introduction of a multimodal AI aligns with Meta’s broader push to embed AI across its products and platforms. Other companies have also moved into multimodal AI, such as Humane with its dedicated “Ai Pin” device and OpenAI with its GPT-4V vision model, but Meta’s approach stands out for integrating the AI directly into smart glasses.

There are obvious parallels to Google Glass, Google’s earlier attempt at camera-equipped eyewear, which was criticized for its design and limited practicality. Meta’s multimodal AI, however, may benefit from shifting consumer attitudes toward cameras in wearable devices.

In conclusion, Meta’s trial of the multimodal AI for the Ray-Ban Meta smart glasses marks a notable milestone in the evolution of AI-assisted hardware. By pairing a conversational assistant with the glasses’ cameras, Meta has the potential to change how users get information about the world around them.
