Google works by crawling billions of web pages, indexing that content, and then ranking it in order of the most relevant responses. Then it spits out a list of links to click on. ChatGPT offers something more enticing for harried Internet users: a single answer based on its own research and synthesis of that information. ChatGPT has been trained on millions of websites to pick up not only the skill of holding a humanlike conversation, but also the information itself, so long as it was posted on the internet before the end of 2021.(1)
I went through my Google search history for the past month and entered 18 of my Google queries into ChatGPT, cataloging the responses. Then I went back and ran the queries through Google once more, to jog my memory. The bottom line: in my judgment, ChatGPT's answer was more helpful than Google's in 13 of the 18 examples.
"Helpful" is obviously subjective. What do I mean by the term? In this case, answers that were clear and comprehensive. A question about whether condensed milk or evaporated milk was better for pumpkin pie at Thanksgiving elicited a detailed (if slightly verbose) response from ChatGPT explaining how condensed milk would lead to a sweeter pie. (The condensed-milk pie was, of course, superior.) Google mostly provided a list of recipe links to click through, with no clear answer.
This underscores ChatGPT's main threat to Google: it provides a single, instant response that requires no further clicking through to other websites. In Silicon Valley, there is talk of the "frictionless" experience, something of a holy grail given that online consumers overwhelmingly prefer services that are quick and easy to use.
Google has its own version of summary answers to some questions, but they are snippets drawn from the highest-ranked web pages and are usually short. It also has its own proprietary large language model, called LaMDA, which is so good that one of the company's engineers came to believe the system was sentient.
So why doesn't Google generate its own singular answers to questions, like ChatGPT does? Because anything that stops people from scrolling through search results will hurt Google's transactional business model of getting people to click on ads. About 81 percent of Alphabet Inc.'s $257.6 billion in 2021 revenue came from advertising, much of it from Google's pay-per-click ads, according to data compiled by Bloomberg.
"It's all designed with the purpose of 'Let's get you to click a link,'" says Sridhar Ramaswamy, who oversaw Google's advertising business between 2013 and 2018 and who says generative search from systems like ChatGPT will "massively" disrupt the traditional search business.
“It’s just a better experience,” he added. “The goal of Google search is to get you to click on links, ideally ads, and all other text on the page is just filler.” Ramaswamy co-founded a subscription-based search engine called Neeva in 2019, which is planning to roll out its own generative search feature that can summarize web pages, with footnotes, in the coming months.
ChatGPT does not disclose the sources of its information. In fact, there's a good chance its creators themselves can't fully explain how it generates the responses it provides. That points to one of its biggest weaknesses: at times, its answers are flat-out wrong.
Stack Overflow, a Q&A site for programmers, temporarily banned its users from sharing advice from ChatGPT on Monday, saying the thousands of answers programmers posted from the system were often incorrect.
My experience confirms this. When I fed my 12-year-old daughter’s English essay question into the system, it offered a long, eloquent analysis that seemed authoritative. But the answer was also full of errors, such as stating that a literary character’s parents were dead when they weren’t.
What's disturbing about this flaw is that the inaccuracies are hard to spot, especially when ChatGPT sounds so confident. Its responses "typically look good," according to Stack Overflow. And by OpenAI's own admission, they often merely sound plausible. OpenAI had originally trained its system to be more cautious, but the result was that it declined questions it actually knew the answer to. Pushed in the other direction, the system now resembles a college fraternity student bluffing his way through an essay after failing to study: fluent nonsense.
It's unclear how common ChatGPT's errors are. One estimate circulating on Twitter puts the rate at 2% to 5%; it could be higher. That will make Internet users wary of relying on ChatGPT for important information.

Google has another selling point: it primarily makes money on transactional search queries for products and on navigational searches to other sites, like people typing in "Facebook" or "YouTube." Those types of queries made up many of Google's top 100 searches of 2022. As long as ChatGPT doesn't offer links to other sites, it isn't encroaching too deeply on Google's territory.

But both of these issues could evolve over time. ChatGPT may become more accurate as OpenAI expands its training to more current parts of the web. To that end, OpenAI is working on a system called WebGPT, which it hopes will produce more accurate answers to search queries, complete with source citations. A combination of ChatGPT and WebGPT could be a powerful alternative to Google. And ChatGPT already provides more accurate answers than OpenAI's previous systems.
ChatGPT amassed 1 million users in about five days. That is an extraordinary milestone: it took Instagram two and a half months to reach that number, and Facebook 10 months. OpenAI isn't speculating publicly about future applications, but if its new chatbot starts sharing links to other websites, especially ones that sell things, that could spell real danger for Google.
(1) ChatGPT was powered by a model from OpenAI’s GPT-3.5 series of large language models, which was trained on a combination of text and code prior to Q4 2021.
This column does not necessarily reflect the opinion of the editorial board or of Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of "We Are Anonymous."