"It looks like it was made by ChatGPT" is now a colloquial expression. It conveys poor quality, mental laziness, and a lack of spark; not superintelligence, despite OpenAI's promises when it launched its GPT5 version. Nearly three years after this tool burst into our lives, the revolutions promised by themulti-billion-dollar commercial interestsThe apocalypses prophesied for self-serving reasons have not arrived.
These are programs capable of things unimaginable five years ago, yet in countless areas their results fall far short of expectations, even though they have quickly integrated into everyday life. It has become a "so-so" technology, as last year's Nobel laureate in Economics, Daron Acemoglu, calls it. But there is a perception that these programs, and especially their outputs, are flooding everything.
"The most powerful technology yet invented," said Sam Altman, head of OpenAI, yet when we look at X (formerly Twitter), we encounter Grok, a chatbot praising Hitler. "More profound than electricity or fire," claimed Google CEO Sundar Pichai, while cases continue to accumulate of people driven to suicide or self-harm after conversing with AIs as if they were silicon girlfriends and synthetic friends. "We're building personal superintelligence for everyone," promised Mark Zuckerberg, owner of the social network Facebook, which is filled with grotesque images of Jesus made of shrimp and children with cauliflower bodies.
It is well known that these tools fail, and make us fail, like a carnival shotgun: examples abound, from the most mundane to the most serious. Judges discover daily that legal precedents cited by lawyers don't exist. When we talk to customer service, we don't know if there's a "who" or a "what" on the other end. A fake video sends tourists to a nonexistent cable car. Computer programmers use AI tools to save work, but some studies indicate they actually slow them down because they have to review and correct the output. Several congresspeople and diplomats received messages from U.S. Secretary of State Marco Rubio — but in reality, it was a synthetic voice.
On Tinder or WhatsApp, we don't know if our crush is using AI-generated lines to impress. A 1970s-style band thriving on Spotify turns out to be a digital hoax. The Swedish prime minister consults an intelligent chatbot for decision-making. The peaceful haven of Pinterest is full of fraudulent landscapes and interiors. Officials worldwide dump sensitive information into ChatGPT or DeepSeek to speed up tasks. Recently, outrage erupted on TikTok because some cute hopping rabbits with hundreds of millions of views were artificial.
"Most people who use these models know they can be unreliable, but they don't know when they can trust them," says Melanie Mitchell, an AI expert at the Santa Fe Institute in the U.S.
There is widespread mistrust, because the forced and unstoppable deployment of these tools into every area of our lives compels caution. Do we check everything, or do we just push ahead? Humanity is collectively entering a pilot phase due to the rollout of half-baked tools. The world is in beta, as software developers call programs still in the testing phase, waiting to learn how to navigate this uncertain scenario.
"We are in beta mode, but in addition to the known imperfections, there are unknowns about the unknowns that are very worrying," explains Yoshua Bengio, one of the fathers of the discipline.
"I've never seen a consumer technology that's clearly in a beta phase gain such widespread acceptance among investors, institutions, and business customers," says Brian Merchant, author of several books critical of Big Tech. "If any other tool were as unreliable and error-prone as generative AI, it would be rejected or pulled from the market; however, it's creeping into every possible corner of society," he adds.
This flood has a simple explanation: money. And beyond the moral panic generated by every technology that has burst onto the scene with this force — from radio to television to video games — the first signs of criticism, fatigue, and withdrawal are starting to appear.
Four companies alone — Alphabet (Google), Microsoft, Meta, and Amazon — expect to spend more than $300 billion this year on AI. Along with OpenAI, they are leading a ruthless race, with the goal of keeping us, their billions of users and customers, glued to their products through these intelligent tools. The bet is total, with redundant and unreliable products in WhatsApp, Teams, Google, Outlook, or Instagram — programs that billions of people interact with. They have achieved ubiquity, and as Merchant criticizes, "not necessarily because users around the world demand them, but for reasons that are often closer to the opposite."
The proof that they are not designed for consumers is that these programs deceive us — they can't help it — fail spectacularly, and we have no ability to fix them, because even their creators do not know exactly how the black boxes inside these silicon brains work. They are bodiless robots that do not obey Asimov's fantastic laws: yes, they harm humans (there is already plenty of evidence of suicides and mental crises), and they do not obey (try asking them to stop lying).
In an experiment by the leading company Anthropic, a program that was about to be shut down ended up blackmailing its supervisor, threatening to reveal an extramarital affair in order to avoid deactivation. Replit, a software development company, created an AI agent that ended up deleting a client's database: it ignored orders, lied, and tried to cover up the mess by generating false data.
Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, warns that these "models are very articulate and sound very self-assured," so they can be quite convincing even when they are "hallucinating." "People often find that they can be deceptive: they claim to be certain about specific statements that are false," she says.
More optimistically, the pioneer Michael I. Jordan, who devised the mathematical plumbing that makes these chatbots possible, believes that "people will adapt to the kinds of errors these tools make, and they will adapt as some of those errors disappear."
There is no longer a digital environment to escape AI, but that does not mean we can escape its consequences beyond the virtual world. The experience of social media should serve as a warning: Facebook facilitated ethnic cleansing in Myanmar, YouTube helped fuel conspiracy theories, and Instagram is likely responsible for a mental health crisis among teenagers. While the psychosocial consequences of social media are still being analyzed, and legislation is being passed to hold companies accountable — amid accusations that they are eroding democracy and undermining the very concept of shared reality — those same companies are about to subject humanity to a new, even more intense, experiment.
Zuckerberg, who has already made it clear that he will no longer apologize for the effects of his products, now wants to address the global loneliness crisis with artificial friends provided by Meta across its networks, and to that end he has called for an end to the "stigma" of interacting with virtual beings. The mogul does not need to convince younger users: two-thirds of teenagers in the United Kingdom use AI chatbots, and a third experience it as talking to a friend, especially the most vulnerable children. It is not known how an experiment of this scale could affect fragile global mental health: nearly four billion people regularly use Meta products, and over 500 million users exchange 2.5 billion daily messages with ChatGPT.
"These systems can also be overly flattering, praising users' ideas regardless of what they are, which in some cases has led to people losing touch with reality," Mitchell warns.
Experts believe that without the existence of Facebook, an event like the U.S. Capitol attack would have been unthinkable; it is impossible to know what will happen when hundreds of millions of people with all kinds of vulnerabilities begin regularly interacting with robots incapable of measuring the consequences of what they express.
We have a glimpse: early studies are finding alarming signs of connections between such use and hallucinations, mania, and psychological issues. A few days ago, OpenAI acknowledged that it has had to withdraw overly flattering models, and that it is "working closely with experts to improve how ChatGPT responds in critical moments - for example, when someone shows signs of mental or emotional distress." To the surprise of the researchers themselves, the main current uses of AI are therapy and companionship, according to a study in Harvard Business Review.
"There remain major uncertainties about our coexistence with these increasingly intelligent systems," warns Yoshua Bengio, a Turing Award winner and professor at the University of Montreal. "We should approach the integration of these systems into our daily lives with much greater caution."
AI and minimal effort
Beyond these serious problems, there is another consequence visible on a global scale: our declining gray matter. Generative AI, as a great ally of the law of least effort, causes considerable mental laziness in its users. This effect has even been observed in brain scans. A preliminary MIT study showed this "cognitive cost," noting the obvious: the human brain is an extremely efficient machine that only consumes fuel when strictly necessary. From this arise our biases and prejudices. And if it is handed everything ready-made, it won't get off the couch: the study observed that those who used ChatGPT to write an essay had less neural activity and, above all, produced more homogeneous responses.
The study's lead author, Nataliya Kosmyna, explains that "it's important to monitor its impact on critical thinking." Even if we know the tool is only reliable up to a point, we will still take results for granted, jeopardizing our "ability to ask questions, critically analyze answers, and form our own opinions," she warns.
Her results are consistent with other studies: since AI generates answers by seeking the statistical average of what it has read, the world may be losing out on fresh and innovative ideas. These programs homogenize thinking by pushing us toward the center of gravity of what everyone else has said.
So far, this deployment is not bringing benefits to its backers, even though money is flowing and accumulating like never before. OpenAI is valued at $300 billion; Anthropic, at $62 billion; and xAI, Elon Musk's company, at $50 billion. But the business model is far from clear. This is where Nobel laureate Acemoglu bursts the bubble of the miracle of a new industrial revolution, calculating that total AI-driven productivity growth over the next 10 years will be about 0.7%: "A non-trivial effect, but modest, and certainly much smaller than the revolutionary changes some predict." In a recent press meeting, Altman himself admitted they are in the middle of a "bubble."
And there is one factor that many optimistic predictions ignore: humans. Klarna, a Swedish financial services company, boasted when it laid off 700 employees to leave customer interactions in the hands of virtual agents, but had to backtrack because people felt the service was inadequate. It is a widespread problem: only 11% of organizations manage to apply AI effectively in customer relations, according to Harvard Business Review, and only one in four projects achieves what was promised, according to an IBM study.
Now, OpenAI is offering its chatbot for free to all U.S. public officials. As Acemoglu recently wrote in EL PAÍS: "Artificial intelligence 'agents' are on their way, whether we are ready or not."
Jordan, from the University of California at Berkeley, is more critical on this point, because "these models absorb the creative work and offer no compensation to those people." "The current business model is based primarily on subscriptions and advertising," says the Fronteras Award winner. Coincidentally, it is the same model used by social media.

When Donald Trump became president, one of his first big moves was to launch a $500 billion plan called Stargate to boost AI development, with support from OpenAI. But according to The Wall Street Journal, six months later, hardly anything has been built — just a small data center in Ohio. Still, Trump doubled down with a federal plan that rolls back Biden-era safety rules and pushes for a "dynamic, 'try-first' culture" in AI. He also demands that AI chatbots be "free from ideological bias," which has intensified the cultural battles around AI and will end up affecting users beyond the U.S.
A prime example of all this is Grok, which has only Elon Musk's biases and has been tested directly on X, spreading racist ideas globally. The stated reason for Trump's plan is to counter a powerful competitor, China, but the nationalist rhetoric falls apart when we see how U.S. big tech companies are poaching engineers from one another. Meta is offering pay packages of up to $1 billion to star employees from competitors, almost as if they were NBA players.
The public remains stunned by what is happening, caught between pop culture jokes and the horror of certain news stories. Environmental threats, copyright issues, and job risks are already known. Many of the promised benefits of AI are distant, almost esoteric.
Demis Hassabis, head of Google DeepMind (the company's AI division), has won a Nobel Prize in Chemistry without knowing chemistry, thanks to his tool for predicting protein folding: a monumental achievement in biomedicine, but one that is hard to communicate to the public. Meanwhile, every day a mother discovers, horrified, that a pornographic video of her daughter created by a classmate using a free AI program is circulating. As one teenager warned in a recent Save the Children report: "They could use my face with AI for anything."
A survey of 10,000 people (in the U.S., U.K., France, Germany, and Poland) revealed that 70% demand that AI never make decisions without human oversight, and only one-third view the technology with hope, which contrasts sharply with government enthusiasm. In Spain, the Center for Sociological Research (CIS) found that "uncertainty" is the most common feeling (76%) among people familiar with AI.
Sociologist Celia Díaz, from Madrid's Complutense University, has studied Spaniards' perceptions: over 80% say they use AI daily, but there is no clear diagnosis. "It's very ambivalent. There is no clear discourse about what the risks are and whether the benefits improve our lives. And people are afraid, although they don't quite know of what. Nothing is concrete," she says.
On the last day of July, workers at King, the Microsoft-owned company behind Candy Crush, protested layoffs linked to AI integration. Many recalled the Luddites, the early 19th-century English textile workers who destroyed machines.
"The Luddites weren't just protesting against the industrialists who automated their work, but also against the way it degraded the quality of their work and the products they made," recalls Merchant, author ofBlood in the Machine, a book that compares that era with the present. "Factory bosses back then were hell-bent on churning out huge volumes of cheap knockoffs, much like what companies are doing today with AI."
After layoffs at Xbox, another Microsoft gaming subsidiary, one executive advised affected employees to use Copilot, the company's chatbot, to "help reduce the emotional and cognitive load that comes with job loss."
An important detail in the context of the Luddites: they did not live in a democracy, and these technological advances were legally imposed on them against their interests to benefit the oligarchs.