This Is What It Looks Like When AI Eats the World (2024)


The web itself is being shoved into a great unknown.

By Charlie Warzel

Tech evangelists like to say that AI will eat the world—a reference to a famous line about software from the venture capitalist Marc Andreessen. In the past few weeks, we’ve finally gotten a sense of what they mean.

This spring, tech companies have made clear that AI will be a defining feature of online life, whether people want it to be or not. First, Meta surprised users with an AI chatbot that lives in the search bar on Instagram and Facebook. It has since informed European users that their data are being used to train its AI, a notice presumably sent only to comply with the continent’s privacy laws. OpenAI released GPT-4o, billed as a new, more powerful and conversational version of its large language model. (Its announcement event featured an AI voice named Sky that Scarlett Johansson alleged was based on her own voice without her permission, an allegation OpenAI’s CEO Sam Altman has denied. You can listen for yourself here.) Around the same time, Google launched—and then somewhat scaled back—“AI Overviews” in its search engine. OpenAI also entered into new content partnerships with numerous media organizations (including The Atlantic) and platforms such as Reddit, which seem to be operating on the assumption that AI products will soon be a primary means for receiving information on the internet. (The Atlantic’s deal with OpenAI is a corporate partnership. The editorial division of The Atlantic operates with complete independence from the business division.) Nvidia, a company that makes microchips used to power AI applications, reported record earnings at the end of May and subsequently saw its market capitalization increase to more than $3 trillion. Summing up the moment, Jensen Huang, Nvidia’s centibillionaire CEO, got the rock-star treatment at an AI conference in Taipei this week and, uh, signed a woman’s chest like a member of Mötley Crüe.

The pace of implementation is dizzying, even alarming—including to some of those who understand the technology best. Earlier this week, employees and former employees of OpenAI and Google published a letter declaring that “strong financial incentives” have led the industry to dodge meaningful oversight. Those same incentives have seemingly led companies to produce a lot of trash as well. Chatbot hardware products from companies such as Humane and Rabbit were touted as attempts to unseat the smartphone, but were shipped in a barely functional state. Google’s rush to launch AI Overviews—an attempt to compete with Microsoft, Perplexity, and OpenAI—resulted in comically flawed and potentially dangerous search results.

Read: A devil’s bargain with OpenAI

Technology companies, in other words, are racing to capture money and market share before their competitors do and making unforced errors as a result. But though tech corporations may have built the hype train, others are happy to ride it. Leaders in all industries, terrified of missing out on the next big thing, are signing checks and inking deals, perhaps not knowing precisely what it is they’re getting into or whether they are unwittingly helping the companies that will ultimately destroy them. The Washington Post’s chief technology officer, Vineet Khosla, has reportedly told staff that the company intends to “have A.I. everywhere” inside the newsroom, even if its value to journalism remains, in my eyes, unproven and ornamental. We are watching as the plane is haphazardly assembled in midair.

As an employee at one of the publications that has recently signed a deal with OpenAI, I have some minor insight into what it’s like when generative AI turns its hungry eyes to your small corner of an industry. What does it feel like when AI eats the world? It feels like being trapped.

There’s an element of these media partnerships that feels like a shakedown. Tech companies have trained their large language models with impunity, claiming that harvesting the internet’s content to develop their programs is fair use. This is the logical end point of Silicon Valley’s classic “Ask for forgiveness, not for permission” growth strategy. The cynical way to read these partnerships is that media companies have two choices: Take the money offered, or accept OpenAI scraping their data anyway. These conditions resemble a hostage negotiation more than they do a mutually agreeable business partnership—an observation that media executives are making in private to one another, and occasionally in public, too.

Publications can obviously turn down these deals. They have other options, but these options are, to use a technical term, not great. You can sue OpenAI and Microsoft for copyright infringement, which is what The New York Times has done, and hope to set a legal precedent where extractive generative-AI companies pay fairly for any work they use to train their models. This process is prohibitively costly for many organizations, and if they lose, they get nothing but legal bills. Which leaves a third option: Abstain on principle from the generative-AI revolution altogether, block the web-crawling bots from companies such as OpenAI, and take a justified moral stand while your competitors capitulate and take the money. This third path requires a bet on the hope that the generative-AI era is overhyped, that the Times wins its lawsuit, or that the government steps in to regulate this extractive business model—which is to say, it’s uncertain.

The situation that publishers face seems to perfectly illustrate a broader dynamic: Nobody knows exactly what to do. That’s hardly surprising, given that generative AI is a technology that has so far been defined by ambiguity and inconsistency. Google users encountering AI Overviews for the first time may not understand what they’re there for, or whether they’re more useful than the usual search results. There is a gap, too, between the tools that exist and the future we’re being sold. The innovation curve, we’re told, will be exponential. The paradigm, we’re cautioned, is about to shift. Regular people, we’re to believe, have little choice in the matter, especially as the computers scale up and become more powerful: We can only experience a low-grade disorientation as we shadowbox with the notion of this promised future. Meanwhile, the ChatGPTs of the world are here, foisted upon us by tech companies who insist that these tools should be useful in some way.

But there is an alternative framing for these media partnerships that suggests a moment of cautious opportunity for beleaguered media organizations. Publishers are already suppliers for algorithms, and media companies have been getting a raw deal for decades, allowing platforms such as Google to index their sites and receiving only traffic referrals in exchange. Signing a deal with OpenAI, under this logic, isn’t capitulation; it’s good business: a way to fight back against platforms and set ground rules. You have to pay us for our content, and if you don’t, we’re going to sue you.

Read: Generative AI is challenging a 234-year-old law

Over the past week, after conversations with several executives at different companies who have negotiated with OpenAI, I was left with the sense that the tech company is less interested in publisher data to train its models and far more interested in real-time access to news sites for OpenAI’s forthcoming search tools. (I agreed to keep these executives anonymous to allow them to speak freely about their companies’ deals.) Having access to publisher-partner data is helpful for the tech company in two ways: First, it allows OpenAI to cite third-party organizations when a user asks a question on a sensitive issue, which means OpenAI can claim that it is not making editorial decisions in its product. Second, if the company has ambitions to unseat Google as the dominant search engine, it needs up-to-date information.

Here, I’m told, is where media organizations may have leverage for ongoing negotiations: OpenAI will, theoretically, continue to want updated news information. Other search engines and AI companies, wanting to compete, would also need that information, only now there’s a precedent that they should pay for it. This would potentially create a consistent revenue stream for publishers through licensing. This isn’t unprecedented: Record companies fought platforms such as YouTube on copyright issues and have found ways to be compensated for their content; that said, news organizations aren’t selling Taylor Swift songs. (Spokespeople for both OpenAI and The Atlantic did clarify to me that The Atlantic’s contract, which is for two years, allows the tech company to train its products on Atlantic content. But when the deal ends, unless it is renewed, OpenAI would not be permitted to use Atlantic data to train new foundation models.)

Zoom out and even this optimistic line of thinking becomes fraught, however. Do we actually want to live in a world where generative-AI companies have greater control over the flow of information online? A transition from search engines to chatbots would be immensely disruptive. Google is imperfect, its product arguably degrading, but it has provided a foundational business model for creative work online by allowing optimized content to reach audiences. Perhaps the search paradigm needs to change and it’s only natural that the webpage becomes a relic. Still, the magnitude of the disruption and the blithe nature with which tech companies suggest everyone gets on board give the impression that none of the AI developers is concerned about finding a sustainable model for creative work to flourish. As Judith Donath and Bruce Schneier wrote recently in this publication, AI “threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.” Follow this logic and things get existential quickly: What incentive do people have to create work, if they can’t make a living doing it?

If you feel your brain start to pretzel up inside your skull, then you are getting the full experience of the generative-AI revolution barging into your industry. This is what disruption actually feels like. It’s chaotic. It’s rushed. You’re told it’s an exhilarating moment, full of opportunity, even if what that means in practice is not quite clear.

Read: It’s the end of the web as we know it

Nobody knows what’s coming next. Generative-AI companies have built tools that, although popular and nominally useful in boosting productivity, are but a dim shadow of the ultimate goal of constructing a human-level intelligence. And yet they are exceedingly well funded, aggressive, and capable of leveraging a breathless hype cycle to amass power and charge head-on into any industry they please with the express purpose of making themselves central players. Will the technological gains of this moment be worth the disruption, or will the hype slowly peter out, leaving the internet even more broken than it is now? After roughly two years of the most recent wave of AI hype, all that is clear is that these companies do not need to build Skynet to be destructive.

“AI is eating the world” is meant, by the technology’s champions, as a triumphant, exciting phrase. But that is not the only way to interpret it. One can read it menacingly, as a battle cry of rapid, forceful colonization. Lately, I’ve been hearing it with a tone of resignation, the kind that accompanies shrugged shoulders and forced hands. Left unsaid is what happens to the raw material—the food—after it’s consumed and digested, its nutrients extracted. We don’t say it aloud, but we know what it becomes.

Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.


