Many searches are already well served by these results. If you’re going to Google and searching “Facebook” to land on facebook.com or you’re looking for the height of the Empire State Building, you’re already good to go.
But there’s a set of queries for which Google has never quite worked, which is where the company is hoping AI can come in. Queries like “Where should I go in Paris next week?” or “What’s the best restaurant in Tokyo?” These are hard questions to answer because none of them is really just one question. What’s your budget? What days are the museums in Paris open? How long are you willing to wait? Do you have kids with you? On and on and on.
“The bottleneck turns out to be what I call ‘the orchestration of structure,’” says Prabhakar Raghavan, the SVP at Google who oversees Search. Much of that data exists somewhere on the internet or even within Google — museums post hours on Google Maps, people leave reviews about wait times at restaurants — but putting it all together into something like a coherent answer is really hard. “People want to say, ‘plan me a seven-day vacation,’” Raghavan says, “and they believe if the language model outputs it, it should be right.”
One way to think about these queries is simply as questions with no right answer. A huge percentage of people who come to Google aren’t looking for a piece of information that exists somewhere. They’re looking for ideas, looking to explore. And since there’s also likely no page on the internet titled “Best vacation in Paris for a family with two kids, one of whom has peanut allergies, the other of whom loves soccer, and all of whom want to hit the Louvre on the quietest possible day of the week,” the links and podcasts and TikToks won’t be much help.
Because they’re trained on a huge corpus of data from all over the internet, large language models can help answer those questions by essentially running lots of disparate searches at once and then combining that information into a few sentences and a few links. “Lots of times you have to take a single question and break it into 15 questions” to get useful information from search, Reid says. “Can you just ask one? How do we change how the information is organized?”
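For the mechanically curious, that fan-out pattern can be sketched in a few lines of Python. This is a toy illustration, not Google’s implementation: decompose(), run_search(), and synthesize() are hypothetical stand-ins for the LLM prompting, retrieval, and summarization steps Reid describes.

```python
# A toy sketch of the "one question becomes 15" pattern. All names here
# (decompose, run_search, synthesize) are hypothetical stand-ins, not
# anything from Google's actual system.

def decompose(question: str) -> list[str]:
    """Stand-in for an LLM turning one question into many sub-queries."""
    return [
        f"{question} on a budget",
        f"{question} museum opening days",
        f"{question} with kids",
    ]

def run_search(query: str) -> str:
    """Stand-in for a call to a search index; returns a fake snippet."""
    return f"top snippet for {query!r}"

def synthesize(question: str, snippets: list[str]) -> str:
    """Stand-in for the LLM step that fuses snippets into one answer."""
    return f"Summary for {question!r}: " + "; ".join(snippets)

question = "Where should I go in Paris next week?"
sub_queries = decompose(question)                # 1 question -> N questions
snippets = [run_search(q) for q in sub_queries]  # run the disparate searches
print(synthesize(question, snippets))            # combine into one response
```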
That’s the idea, but Raghavan and Reid are both quick to point out that SGE still can’t handle these open-ended creative tasks very well. Right now, it’s going to be much handier for synthesizing all the search data behind questions like “what speaker should I buy to take into the pool.” It’ll do well with “what were the best movies of 2022,” too, because it has some objective Rotten Tomatoes-style data to pull from along with the internet’s many rankings and blog posts on the subject. AI appears to make Google a better information-retrieval machine, even if it’s not quite ready to be your travel agent.
One thing that didn’t show up in most SGE demos? Ads. Google is still experimenting with how to put ads into the AI snapshots, though rest assured, they’re coming. Google’s going to need to monetize the heck out of AI for any of this to stick.
At one point in our demo, I asked Reid to search only the word “Adele.” The AI snapshot contained more or less what you’d expect — some information about her past, her accolades as a singer, a note about her recent weight loss — and then threw in that “her live performances are even better than her recorded albums.” Google’s AI has opinions! Reid quickly clicked the bear claw and sourced that sentence to a music blog but also acknowledged that this was something of a system failure.