
Google is giving free access to two of Gemini’s best AI features

Gemini Advanced on the Google Pixel 9 Pro Fold.
Andy Boxall / Digital Trends

Google’s Gemini AI has steadily made its way across the company’s software suite, from native Android integrations to Workspace apps such as Gmail and Docs. However, some of the most advanced Gemini features have remained locked behind a subscription paywall.

That changes today. Google has announced that Gemini Deep Research will now be available for all users to try, alongside the ability to create custom Gem bots. You no longer need a Gemini Advanced (or Google One AI Premium) subscription to use either tool.


The best of Gemini as an AI agent

Deep Research is an agentic tool that takes over the task of web research, saving users the hassle of hopping from one web page to another in search of relevant information. With Deep Research, you simply enter a natural-language query and, if needed, specify the sources it should draw from.

Using Gemini Deep Research on a smartphone.
Nadeem Sarwar / Digital Trends

Deep Research breaks the query down into multiple stages and asks you to approve the final plan before it jumps into action. After completing the research, which usually takes a few minutes, it presents a neatly formatted document, organized with headings, tables, bullet points, and other relevant stylistic elements.

It’s a fantastic tool for conducting research as a student, journalist, financial planner, academic, and more. I have used this feature extensively for digging into scientific papers, and it has been so helpful that I pay for a Gemini Advanced subscription solely to access Deep Research.

“Now anyone will be able to try it across the globe and in 45+ languages,” writes Dave Citron, Senior Director of Product Management for the Gemini app. Aside from giving free access to all users, Google is upgrading the underlying infrastructure to the more advanced Gemini 2.0 Flash Thinking Experimental model.

Response provided by Gemini Deep Research.
Gemini serves your answers in a report that looks like this. Nadeem Sarwar / Digital Trends

Do keep in mind that you won’t get unlimited access, since it’s a very compute-intensive process. Google says free users can try Deep Research a “few times per month.”

The strategy is not too different from what Perplexity offers with its own Deep Research tool. OpenAI chief Sam Altman has also confirmed that free ChatGPT users will be able to launch Deep Research queries twice a month.

Creating custom versions of Gemini

Another freebie announced by Google today is Gems. These are essentially custom chatbots that can be tailored to perform a specific task. From drafting detailed email responses with a simple “yes” or “no” as input to acting as a coding assistant, users can create a Gem that best suits their workflow.

Interacting with a custom Gem created with Gemini.
Screenshot: Google

The best part is that you don’t need any coding knowledge to create a personalized Gem for daily use, as all the operational instructions can be given in natural-language sentences. Until now, the ability to create Gems has been limited to paying users.

Now, Gems are rolling out widely to all Gemini users, without any subscription requirement. Gems are available for free in the Gemini mobile app, but to create them, you need to visit the Gemini desktop client. The behavior of Gems can also be customized later on.

Just like the regular Gemini assistant, Gems can also process files uploaded by users. I have created a handful of Gems that take the drudgery out of repetitive tasks and save me a lot of time.
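
Gems themselves are built without writing any code, but for developers curious about the underlying idea, Google’s public google-generativeai Python SDK exposes a similar concept through system instructions. The snippet below is a minimal, illustrative sketch of that pattern, assuming the google-generativeai package and a placeholder API key; it is not how Gems are implemented in the Gemini app.

```python
# Illustrative sketch: a "Gem-like" reusable assistant built with system
# instructions via Google's public google-generativeai SDK. Gems in the
# Gemini app require no code; this only mirrors the concept.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key, assumed for this sketch

# The system instruction plays roughly the role of a Gem's natural-language
# setup: it fixes the assistant's task and tone for every conversation.
email_gem = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "You draft polite, detailed email replies. The user will send only "
        "'yes' or 'no'; expand that answer into a complete response."
    ),
)

chat = email_gem.start_chat()
reply = chat.send_message("yes")
print(reply.text)  # prints a full email reply generated from a one-word input
```

The system instruction persists across the chat session, which is broadly the role a Gem’s natural-language setup plays in the app.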
