Google AI Mode Now Understands Images in Search
Hello AI Lovers!
Today’s Topics Are:
- Google AI Mode Now Understands Images in Search
- Meta Upgrades Llama AI Amid Market Jitters Over Tariffs
Google AI Mode Now Understands Images in Search

Quick Summary:
Google's AI-powered search just got a major upgrade: AI Mode can now interpret images as part of user queries. Leveraging the Gemini model and Google Lens, this multimodal search feature allows users to upload or snap photos for more intelligent, context-rich results.
Key Points:
- Google’s AI Mode now accepts images alongside text for search queries.
- Powered by the Gemini LLM and supported by Google Lens for object recognition.
- Uses a “fan-out technique” to generate multiple sub-queries for deeper understanding.
- Currently available to select Google Labs users in the U.S.
- Aims to replace traditional search with smarter, more conversational responses.
Story:
Google’s search experience is continuing its evolution with a big leap into multimodal capabilities. After rolling out AI Mode earlier this year, Google has now added the ability to process images in searches. Users can snap a photo or upload an image directly in the AI Mode search bar, enabling the system to understand visual content alongside text.
This new functionality is powered by Gemini, Google’s latest large language model, which works in tandem with Google Lens. Lens identifies objects within an image and sends that context to Gemini, which then fans out the query into multiple related questions. For example, if you upload a photo of a few books and ask for similar titles, Lens identifies the books, and AI Mode responds with personalized recommendations.
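To make the “fan-out” idea more concrete, here is a minimal, purely illustrative Python sketch of such a pattern. This is not Google's implementation; recognize_objects and ask_llm below are hypothetical stand-ins for a Lens-style recognition step and a Gemini-style model call.

```python
# Illustrative sketch only: a toy "fan-out" query pattern.
# recognize_objects() and ask_llm() are hypothetical placeholders.

def recognize_objects(image_path: str) -> list[str]:
    # Placeholder: a real system would run image recognition here.
    return ["The Pragmatic Programmer", "Clean Code"]

def ask_llm(prompt: str) -> str:
    # Placeholder: a real system would call a large language model here.
    return f"(answer to: {prompt})"

def fan_out_search(image_path: str, user_question: str) -> str:
    objects = recognize_objects(image_path)

    # Fan out: turn one visual query into several related sub-queries,
    # one per recognized object.
    sub_queries = [f"{user_question} similar to '{name}'" for name in objects]
    sub_answers = [ask_llm(q) for q in sub_queries]

    # Synthesize the sub-answers into a single conversational response.
    synthesis_prompt = (
        f"User asked: {user_question}\n"
        f"Objects in the photo: {', '.join(objects)}\n"
        f"Findings: {' | '.join(sub_answers)}\n"
        "Combine these into one recommendation."
    )
    return ask_llm(synthesis_prompt)

if __name__ == "__main__":
    print(fan_out_search("bookshelf.jpg", "Recommend books"))
```

In the real product, the sub-queries would run against Google's search and knowledge systems and the synthesis would be handled by Gemini; the sketch only shows the general shape of the pattern.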
Google is positioning AI Mode as a smarter, more intuitive way to search the web, moving beyond lists of links to direct, helpful answers. The company reports that users are entering about twice as much text as in traditional searches, suggesting deeper engagement.
Conclusion:
By merging text and image inputs, Google is redefining the way we search. While currently limited to Google Labs users, this feature signals a shift toward more natural, visual-first queries—and a potential future where AI Mode becomes the standard search experience.
Meta Upgrades Llama AI Amid Market Jitters Over Tariffs

Quick Summary:
Meta has launched new Llama 4 AI models, enhancing its Meta AI platform, even as tech stocks face pressure from newly announced U.S. tariffs. CEO Mark Zuckerberg says open-source AI is taking the lead—with Meta aiming to be at the forefront.
Key Points:
- Meta released Llama 4 Scout and Llama 4 Maverick, claiming superior performance to rivals.
- Models are accessible via WhatsApp, Messenger, Instagram, and the Meta AI site.
- Two more models, Llama 4 Reasoning and Llama 4 Behemoth, are in development.
- Meta stock fell 11% after Trump’s tariff announcement, reflecting wider market concerns.
- Meta plans to spend $60B–$65B on AI and infrastructure this year.
Story:
Despite volatility in tech markets following President Donald Trump’s April 2 tariff announcement, Meta Platforms is charging ahead with its artificial intelligence strategy. Over the weekend, CEO Mark Zuckerberg unveiled a major update to Meta AI: the release of the Llama 4 model family.
Llama 4 Scout, built to run on a single Nvidia H100 GPU, is reportedly Meta's most efficient model yet. Llama 4 Maverick boasts better performance than OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across various benchmarks, especially in reasoning and coding.
These models are available now in Meta’s AI tools across popular platforms like WhatsApp and Instagram. Zuckerberg also teased two upcoming releases: Llama 4 Reasoning and Llama 4 Behemoth—the latter of which could be the largest model in the world.
Still, investor anxiety remains high. Meta’s stock has dropped 11% since the tariff announcement, matching the Nasdaq Composite’s slide. Analysts worry that escalating trade tensions could spark a recession, causing investors to back away from high-growth tech stocks.
Conclusion:
Meta is doubling down on open-source generative AI as global economic uncertainty rattles the market. With the Llama 4 rollout, Meta not only positions itself as a leader in AI innovation but also signals its long-term confidence in the sector—despite short-term market turbulence.
That was it for this week's news. We hope it was informative and insightful, as always!
We will start something special within a few months.
We will tell you more soon!
But for now, please refer us to other people who would like our content!
This will help us out big time!
Did you like the news?