
Meta has just dropped its new standalone Meta AI app, giving users a dedicated space to interact with its AI assistant. Built on the latest Llama 4 model, the app aims to deliver a more personal and conversational AI experience accessible directly from your phone.
While many already use Meta AI across WhatsApp, Instagram, Facebook, and Messenger, this new app provides a focused interface. It’s designed primarily around voice interactions, aiming for a more seamless and natural feel compared to just typing prompts.
Voice control is clearly a major focus here, with Meta claiming Llama 4 delivers more relevant and conversational responses. They’re even letting Aussies test an experimental full-duplex voice demo for a more natural back-and-forth, though it’s early days for this specific tech.
Here’s a look at some key features:
- Web Search Integration: The AI can search the web to answer questions, provide recommendations, or help you research topics directly within the chat.
- Discover Feed: Explore how others are using the AI, see popular prompts shared by the community, and even remix them for your own use.
- Image Generation & Editing: Create and modify images directly through voice or text commands within your conversation with the AI assistant.
- Experimental Full-Duplex Voice (Australia First): Test a more natural voice interaction where the AI generates speech directly, though it lacks real-time web access for now and may have inconsistencies.
- Document Handling (Testing, Australia First): Meta is testing features to generate text and image documents (exportable as PDF) and import documents for analysis, directly within the AI.
Big news for Ray-Ban Meta smart glasses owners: this new Meta AI app officially replaces the existing Meta View companion app. Your glasses settings, paired devices, and saved media should automatically transfer over to a new ‘Devices’ tab in the updated app.
This integration provides continuity: you can start a chat using Meta AI on your glasses and then view the history later in the app or on the web. You can also move a conversation between the app and the web interface, picking up where you left off.
The Meta AI web experience (meta.ai) is also getting an upgrade to align with the new app features. It now includes the same voice interaction capabilities and the Discover feed for prompt inspiration.
The web interface also benefits from an improved image generation experience with more presets and options, making it more useful for desktop workflows. The document generation and analysis features being tested in Australia are also accessible via the web.
You remain in control of how you interact, with a setting to toggle default voice activation (‘Ready to talk’) if you prefer hands-free use. The app itself is free to download and use.
While the app is rolling out globally, Australia is notably one of the first regions to get access to the core voice conversation features, the experimental full-duplex demo, and the document handling tests.
This move clearly signals Meta’s ambition to make its AI a central part of its ecosystem, bridging mobile, web, and wearable experiences. It will be interesting to see how users adopt this dedicated app interface.
For more information, head to meta.ai