Google’s vision for search in the era of artificial intelligence is starting to come together.
We finally got a good glimpse of what Google looks like fully transformed by artificial intelligence.
Last year's I/O brought a barrage of AI announcements and a sense that the search giant was trying to prove it still had the juice to lead this race, spraying out lots of different products and experimental offerings. This year, a clearer picture emerged of how Google sees the future of its core products, including what CEO Sundar Pichai, at a press roundtable before the event, described as a reimagining of search. That includes a more conversational search product called AI Mode and, ultimately, an AI assistant that understands the world around you.
Google faces a big dilemma: Search advertising generates the lion's share of the company's billions in revenue, yet Google knows it can't stand still and let competitors eat its lunch. It's trying to build AI into its core search product before someone else does it better. Move too fast, though, and it risks damaging the company's profit engine.
Google has already pushed ahead with AI Overviews, and this week it's also launching AI Mode in Search for everyone. While AI Overviews summarize a response at the top of the regular search page, AI Mode lets users click into a new tab that opens a conversational experience, surfacing a variety of sources that all draw on Google's search index. Users can also ask follow-up questions. Google says it's rolling out AI Mode widely to users in the US starting this week.
"AI Mode is not just an AI-powered experience, but also a glimpse of what's coming to search more broadly," said Liz Reid, Google's head of Search.
AI Mode uses what Google calls a "query fan-out" technique, meaning it fires off multiple queries in parallel and then brings the results back together. Google says this will make search better and let users ask more sophisticated questions.
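Google hasn't published how query fan-out works internally, but the general pattern it describes (splitting a question into sub-queries, running them concurrently, and merging the answers) can be sketched in a few lines. This is an illustrative example only; the function names and the stub search backend are hypothetical, not Google's implementation.

```python
# A minimal sketch of a "query fan-out" pattern: run several sub-queries
# concurrently and merge the results into one list. Illustrative only --
# not Google's actual system; search_one is a hypothetical stand-in.
from concurrent.futures import ThreadPoolExecutor

def search_one(query: str) -> list[str]:
    # Stand-in for a call to a real search backend (hypothetical).
    return [f"result for: {query}"]

def fan_out(sub_queries: list[str]) -> list[str]:
    # Fire off all sub-queries in parallel; map() preserves input order.
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search_one, sub_queries)
    # Flatten the per-query result lists into one merged list.
    return [r for results in result_lists for r in results]

results = fan_out([
    "budget mirrorless cameras",
    "mirrorless camera travel weight",
    "camera reviews 2025",
])
print(len(results))  # one merged list covering every sub-query
```

In a real system, the merge step would also rank and deduplicate results before an AI model synthesizes a single answer; the sketch only shows the parallel fan-out itself.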
Today's AI Mode is just the beginning of how Google's search will change. Google announced a bag of new tricks it's keeping in Labs for now, meaning they'll be available only to early testers at first. Still, they show what Google sees as the future of search.
One example is Deep Search, which lets users punch in a very long, complex question and get back a fully cited report, similar to the Deep Research feature in Google's Gemini. There's also a version that will return real-time data and visualizations (think charts of sports team statistics, for example).
Google will also let users give AI Mode access to their other Google apps and their search history, so it can return more tailored answers and recommendations.
Reid said Google will feed some AI Mode features into the standard search engine and AI Overviews; the idea is that Google's standard search experience benefits from the leaps it makes in its underlying AI models.
"Put all of this together, and this is really building the future of search," Reid said. "Search starts to feel effortless."
Will Google make AI Mode the default one day? That's the implication here, though the company will be watching closely over the next few months to see how many people click on the AI Mode tab.
An assistant for everything
Google also has a vision for an AI assistant that's with you all the time.
If you've seen Google's Project Astra, an AI agent that uses vision to see the world around it, you already have a good idea of what Google is thinking here. Google wants to create an assistant that's with you at all times, whether on your phone or in augmented reality glasses, that can see the world, answer questions, and surface information for you within seconds. Or maybe it just helps you code.
At I/O, Google announced it's expanding its Gemini 2.5 Pro model into a "world model," which essentially means it will be able to understand the world around you and make plans, Google says. In AI speak, that makes it more agentic.
Google DeepMind CEO Demis Hassabis said these updates are "critical steps" toward creating a "universal AI assistant" that can better understand the user and take action on their behalf.
"This is our ultimate goal for the Gemini app: an AI that's personal, proactive, and powerful," Hassabis added.
Google is making Gemini Live's camera and screen-sharing features available to everyone with the Gemini app, and it announced Veo 3, a new video generation model that adds support for generating sound effects.
Google needs to build fast here. While AI chatbots aren't yet a significant business the way search is, the company said the Gemini app now has more than 400 million monthly active users. Google's internal analysis from earlier this year found that Gemini still lagged behind OpenAI's and Meta's apps, according to documents shown in court.
Have something to share? Contact this reporter via email at hlangley@businessinsider.com or on Signal at 628-228-1836. Use a personal email address and a nonwork device; here's our guide to sharing information securely.