
OpenAI’s Advanced Voice Mode can now see your screen and analyze videos

Advanced Voice Mode's Santa voice. OpenAI

OpenAI’s “12 Days of OpenAI” continued apace on Thursday with the development team announcing a new seasonal voice for ChatGPT’s Advanced Voice Mode (AVM), as well as new video and screen-sharing capabilities for the conversational AI feature.

Santa Mode, as OpenAI is calling it, is a seasonal feature for AVM that offers St. Nick’s dulcet tones as a preset voice option. It is rolling out to Plus and Pro subscribers through the website and the mobile and desktop apps starting today, and will remain available until early January. To access the limited-time feature, sign in to your Plus or Pro account, then click the snowflake icon next to the text prompt window.


Select Santa’s voice from the popup menu, confirm your choice, and start chatting. I, for one, am not entirely clear on why you’d want to talk to a large language model masquerading as a fictional religious figure, much less shell out $20 for the privilege, but OpenAI seems to believe it holds value. Note that the system will not log your chats with Santa: they won’t be saved to your chat history, nor will they affect ChatGPT’s memory.

Just in time for the holidays, video and screensharing are now starting to roll out in Advanced Voice in the ChatGPT mobile app. pic.twitter.com/HFHX2E33S8

— OpenAI (@OpenAI) December 12, 2024

The company is also rolling out a long-awaited feature for Advanced Voice Mode: the ability to analyze video and screen shares through the mobile AVM interface. With it, you’ll be able to share your screen or video feed with ChatGPT, hold real-time conversations, and have it answer questions about what it sees, without needing to describe your surroundings or upload photos.

The new feature is rolling out to Plus and Pro subscribers “in most countries,” according to the company, as well as to all Teams users. Stringent privacy laws are delaying the feature’s release in the EU, Switzerland, Iceland, Norway, and Liechtenstein, though the company hopes to bring it to Plus and Pro subscribers in those regions “soon.” Enterprise and Edu users will have to wait until January to try it for themselves. If you have access, you can launch the new feature by opening voice mode, then tapping the video camera icon in the lower left. To share your screen, tap the three-dot menu and select “Share Screen.”

Thursday’s announcement marks the sixth day of OpenAI’s livestream event. The company has already unveiled its fully functional o1 reasoning model, its Sora video generation model, a $200-per-month Pro subscription tier, and updates to ChatGPT’s Canvas.

Andrew Tarantola
Former Computing Writer