Google I/O 2024: everything announced

Google I/O just ended — and it was packed with AI announcements. As expected, the event focused heavily on Google’s Gemini AI models, along with the ways they’re being integrated into apps like Workspace and Chrome.

If you didn’t get to tune in to the event live, you can catch up with all the latest from Google in the roundup below.

Google Lens now lets you search by recording a video

Google Lens already lets you search for something based on images, but now Google’s taking things a step further with the ability to search with a video. That means you can take a video of something you want to search for, ask a question during the video, and Google’s AI will attempt to pull up relevant answers from the web.

Gemini will answer questions about your photo library

Google is rolling out a new feature this summer that could be a boon for just about anyone with years — or even more than a decade — of photos to sift through. “Ask Photos” lets Gemini pore over your Google Photos library in response to your questions, and the feature goes beyond just pulling up pictures of dogs and cats. CEO Sundar Pichai demonstrated by asking Gemini what his license plate number is. The response was the number itself, followed by a picture of it so he could make sure that was right.

Google’s flagship Gemini model gets faster and more capable

Google has introduced a new AI model to its lineup: Gemini 1.5 Flash. The multimodal model is a lighter-weight counterpart to Gemini 1.5 Pro, optimized for “narrow, high-frequency, low-latency tasks,” which makes it better at generating fast responses. Google also made changes to Gemini 1.5 Pro that it says improve its ability to translate, reason, and code, and it has doubled the model’s context window (how much information it can take in at once) from 1 million to 2 million tokens.
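For developers, the practical difference mostly shows up in which model name you request from the Gemini API. Here’s a minimal sketch, assuming the google-generativeai Python package and a GOOGLE_API_KEY environment variable; the model names match what Google announced, but the prompt is a placeholder and quotas and context limits vary by account.

# Sketch: calling Gemini 1.5 Flash vs. Pro through the Gemini API.
# Assumes the google-generativeai package and a GOOGLE_API_KEY env var.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Flash targets fast, high-frequency tasks; Pro handles heavier reasoning.
flash = genai.GenerativeModel("gemini-1.5-flash")
pro = genai.GenerativeModel("gemini-1.5-pro")

prompt = "Summarize this paragraph in one sentence: ..."

# count_tokens shows how much of the (1M or 2M token) context window a prompt uses.
print(flash.count_tokens(prompt))

print(flash.generate_content(prompt).text)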

Gemini joins users in Workspace

Screenshot: YouTube

Google is rolling its latest mainstream language model, Gemini 1.5 Pro, into the sidebar for Docs, Sheets, Slides, Drive, and Gmail. There, it acts as a general-purpose assistant within Workspace that can fetch information from any of the content in your Drive, no matter which app you’re in. It can also do things for you, like write emails that incorporate info from a document you’re currently looking at or remind you later to respond to an email you’re perusing. Some early testers already have access to these features, and Google says it’s rolling them out to all paid Gemini subscribers next month.

Project Astra is Google’s Star Trek AI future

Google’s Project Astra is a multimodal AI assistant that the company hopes will become a do-everything virtual assistant, one that can watch and understand what it sees through your device’s camera, remember where your things are, and do things for you. It’s powering many of the most impressive demos from I/O this year, and Google’s aim is for it to be an honest-to-goodness AI agent that doesn’t just talk to you but actually does things on your behalf.

Veo brings Sora-style video generation to creators

Image: Google

Google’s answer to OpenAI’s Sora is a new generative AI model that can output 1080p video based on text, image, and video-based prompts. Videos can be produced in a variety of styles, like aerial shots or timelapses, and can be tweaked with more prompts. The company is already offering Veo to some creators for use in YouTube videos but is also pitching it to Hollywood for use in films.

Gems bring custom chatbot creation to Gemini

Google is rolling out a custom chatbot creator called Gems. Just like OpenAI’s GPTs, Gems lets users give instructions to Gemini to customize how it will respond and what it specializes in. If you want it to be a positive and insistent running coach with daily motivations and running plans — aka my worst nightmare — you’ll be able to do that soon (if you’re a Gemini Advanced subscriber).
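Gems are set up inside the Gemini app rather than through code, but the underlying idea, a persistent instruction that shapes every response, looks a lot like system instructions in the Gemini API. Here’s a rough, hypothetical sketch of that running-coach example using the google-generativeai Python package; the persona text is invented for illustration.

# Illustrative only: Gems are a consumer feature, not an API, but a
# system instruction approximates the same "custom chatbot" behavior.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

running_coach = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "You are a relentlessly positive running coach. Every reply "
        "includes a daily motivation and a short training plan."
    ),
)

chat = running_coach.start_chat()
print(chat.send_message("I only have 20 minutes today.").text)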

Gemini becomes a better conversation partner

The new Gemini Live feature aims to make voice chats with Gemini feel more natural. The chatbot’s voice will be updated with some extra personality, and users will be able to interrupt it mid-sentence or ask it to watch through their smartphone camera and give information about what it sees in real time. Gemini is also getting new integrations that let it update or draw info from Google Calendar, Tasks, and Keep, using multimodal features to do so (like adding details from a flyer to your personal calendar).

Circle to Search can help solve math problems now

If you’re on an Android phone or tablet, you can now circle a math problem on your screen and get help solving it. Google’s AI won’t solve the problem for you — so it won’t help students cheat on their homework — but it will break it down into steps that should make it easier to complete.

Google Search gets an AI overhaul

Google will roll out “AI Overviews” — formerly known as “Search Generative Experience,” a mouthful — to everyone in the US this week. Now, a “specialized” Gemini model will design and populate results pages with summarized answers from the web (similar to what you see in AI search tools like Perplexity or Arc Search).

Android gets AI-powered scam detection

Using on-device Gemini Nano AI smarts, Google says Android phones will be able to help you avoid scam calls by looking out for red flags, like common scammer conversation patterns, and then popping up real-time warnings mid-call. The company promises to offer more details on the feature later in the year.

Android devices are about to get smarter AI

Google says that Gemini will soon be able to let users ask questions about videos on-screen, and it will answer based on automatic captions. For paid Gemini Advanced users, it can also ingest PDFs and surface information from them. Those and other multimodal updates for Gemini on Android are coming over the next few months.

Google Chrome is getting an AI assistant

Google announced that it’s adding Gemini Nano, the lightweight version of its Gemini model, to Chrome on desktop. The built-in assistant will use on-device AI to help you generate text for social media posts, product reviews, and more from directly within Google Chrome.

Google upgrades its SynthID AI watermarking

Google says it’s expanding what SynthID can do: the company will embed watermarks in content created with its new Veo video generator, and SynthID can now also detect AI-generated videos.

Update, May 14th: Added “Ask Photos.”
