The story develops so quickly that, at first glance, it can seem predetermined. Chungin “Roy” Lee says that after enrolling at Columbia last autumn, he used artificial intelligence (AI) to cheat his way through school, to land internship offers from Amazon and Meta, and to promote his tool on social media throughout the winter. After being suspended and put on probation for caring more about AI than his education, he dropped out last spring to launch a start-up. That start-up, Cluely, advertises that it can “cheat on everything” via an AI assistant that runs in the background during sales calls and meetings. The renowned venture-capital firm Andreessen Horowitz backed a $15 million funding round that closed last month. (Columbia, Meta, and Amazon declined to comment on the record about Lee’s case.)
Lee unreservedly believes that bots will soon automate all jobs and that omniscient AI is imminent. In the meantime, Lee told me, the word “cheating” is merely a provocative tactic to get everyone on board with the notion. “There is nothing we can do but keep saying: Don’t think of it as cheating,” he added. “Every time technology advances, the world becomes alarmed. Then it adjusts. Then it forgets. Then all of a sudden, it’s normal,” Cluely’s website reads. Some may find it unfair that others can use AI to “be 1,000 times better or more efficient,” Lee said, but that will eventually just be how things are done. Even if ChatGPT never becomes any more capable than it is now, Lee said, “every single white-collar job in America should essentially be gone already” (or, more conservatively, 20 to 30 percent of them). “I would wager my life that artificial intelligence will improve at an exponential rate.”
Over Zoom, as Lee occasionally munched on a corn chip and expounded on superintelligence, his voice started to sound familiar. He sounded strikingly like Sam Altman, the CEO of OpenAI. Both founders treat selling a product as evangelizing a faith. In a recent essay, Altman declared that the singularity, the point at which technology surpasses human knowledge and control, has already begun. “The pace of technological advancement will continue to accelerate, and people will continue to be able to adapt to almost anything,” Altman wrote. “There will be very difficult aspects, such as entire classes of jobs disappearing, but the world will be so much richer so quickly that we will be able to seriously entertain new policy ideas we never could before.”
AI fanatics are all over the Bay Area: people who believe that AI’s rapid rise is inevitable and by far the most significant event on the planet. (Some say it’s the only thing worth worrying about at all.) Their outlook is broadly optimistic; the notion, however foolish, is that superintelligence will eventually make life better for everyone, which lets them overlook the technology’s current drawbacks (job loss, resource guzzling). AI start-ups promise “full automation of the economy,” “unbounded connection” with millions of AI personalities, “limitless” memory, and a cure for “all disease.” In recent weeks, numerous AI researchers and entrepreneurs have told me that they’re reconsidering the value of education: One entrepreneur said that today’s bots may already be more academically capable than his teenage son will ever be, which makes him question the worth of a traditional education.
The radicalizing effects of AI, however, extend beyond the technology’s supporters. Like atheists jeering from the aisles at Mass, AI critics have raised their voices to match Silicon Valley’s growing intensity. They declare the technology’s inevitable demise, dismissing it as overhyped and essentially worthless. The computational linguist Emily Bender, one of the industry’s chief critics, suggests in her new book, The AI Con, calling chatbots “stochastic parrots” or “a racist pile of linear algebra” (a reference to well-documented algorithmic biases against people of color). Gary Marcus, a cognitive scientist at NYU and another prominent opponent of the AI industry, recently summarized one of his main arguments. Are chatbots intelligent? “You could say your calculator thinks, depending on how you define the word thinking,” he remarked.
The two factions confront each other directly with increasing frequency. A few days before our conversation, Marcus had started his latest online spat with the AI business by posting a doctored image of Altman’s face superimposed on a photo of the disgraced Elizabeth Holmes. Altman responded with a quip: “true performance art.” The prominent AI critic Ed Zitron recently penned a roughly 7,000-word article declaring himself “sick and tired of everybody pretending that generative AI is the next big thing.” The political analyst Nate Silver characterized Zitron’s writing as having “old man yells at cloud vibes” and being “detached from reality.”
This fight has moved beyond evidence, and perhaps beyond proof, to become a contest of cosmologies. There are now two parallel AI realms, and most of us are left to fill the void between them.
Boosters and skeptics have disagreed for as long as AI has existed, but in recent months the dispute has intensified as the industry pushes aggressively into every corner of the digital world. Billions of people now come into contact with generative AI daily via Google, Facebook, Instagram, X, their iPhones, Amazon review summaries, voice assistants, and other platforms, not necessarily because they want to but because they can’t avoid it. Many people are also actively seeking out the tools. According to reports, more than 130 million people used OpenAI’s new image generator in its first week, straining the company’s infrastructure. ChatGPT is currently the fifth-most-popular website in the world. (One of those people was whoever runs the White House X account, which shared an AI-generated meme of an immigrant crying while being detained by ICE.)
As the technology and its outputs become more commonplace, AI leaders have grown more assertive, even brash, about the stakes. Two weeks ago, the Anthropic co-founder Jack Clark warned Congress that “truly transformative technology,” AI systems considerably more advanced than any chatbot (or brain) now in existence, may arrive within 18 months. In a letter to the president the day after his second inauguration, Alexandr Wang, the newly appointed top AI officer of Meta, wrote that China and the United States were engaged in an “AI war.”
Extreme expenditure accompanies the extreme rhetoric. Since the introduction of ChatGPT, the tech sector has spent hundreds of billions of dollars building the necessary physical infrastructure and training ever more powerful AI systems, and there are no signs that this trend will abate. In an apparent attempt to catch up in the AI race, Meta’s CEO, Mark Zuckerberg, has been on a hiring spree in recent weeks, allegedly offering top researchers nine-figure compensation packages. (Meta claims the figures have been inflated or distorted.) Exactly how generative AI will generate revenue remains unclear, but tech corporations appear to believe that once the technology has transformed the world, the money will follow. “I don’t think the tech industry is ready for how many people are going to take genuine pleasure in it when the AI bubble bursts,” Zitron wrote last week.
Perhaps no clearer example of the rift exists than the reaction to a recent paper, “The Illusion of Thinking,” written by a group at Apple. The researchers gave advanced AI models from OpenAI, Anthropic, and DeepSeek, known as “large reasoning models,” a variety of puzzles: restacking blocks in the fewest possible moves, for example, or rearranging checkers into a pattern. Every puzzle could be solved by applying the same basic reasoning no matter its size; moving a large number of blocks doesn’t change the underlying procedure. But when the puzzles grew big enough, these “reasoning” models fell apart. Subbarao Kambhampati, a computer scientist at Arizona State University who was not involved in the study, told me that it’s like a child saying, “I’m a great mathematician, but I can’t add these numbers that you’re asking me to add because I don’t have enough toes and fingers.”
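The uniformity the researchers exploited is easy to see in the Tower of Hanoi, a classic block-restacking puzzle of the kind such studies use. A minimal Python sketch (an illustration of the puzzle’s structure, not code from the Apple paper) shows that the optimal procedure is the same short recursion whether there are 3 disks or 30; only the depth of the recursion grows:

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks to `moves`.

    The recursion is identical for every n; larger puzzles are not
    conceptually harder, just longer.
    """
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park n-1 disks on the spare peg
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # optimal solution always has 2**n - 1 moves, so 7 here
```

The move sequence grows as 2ⁿ − 1 even though the procedure never changes, which is the sense in which the puzzles’ difficulty scales without requiring any new reasoning.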
For Kambhampati and other like-minded academics, the Apple study confirmed long-held skepticism. Kambhampati has been at the forefront of probing the capabilities and limits of “reasoning” models. “It’s what I’ve been warning about for the past 30 years as the field’s Achilles’ heel,” Marcus told me. “I will admit that there is some validation in that.” On this view, generative-AI models are statistical approximators rather than “thinking” entities: They excel at reapplying patterns seen in their training data but are limited beyond that. The original ChatGPT had trouble counting, and today’s models still struggle with certain simple challenges.
Nonetheless, many AI supporters gleefully dismissed the Apple study. In one meme posted to a major AI discussion group on Facebook, enormous robots burn down a city while a cluster of humans gathered nearby remarks, “But they’re not actually ‘reasoning.’” Who cares whether AI “thinks” like a human if it outperforms you at your job? If anything, several skeptics of the paper contended, the data only proved how humanlike AI models are, limitations and all. (Who among us hasn’t struggled to tackle a long, hard problem on occasion?)
Marcus’s crowing over the study on X made him a target for those who regard AI’s talents as undeniable, including Altman, who commented, “We deliver, he keeps ordering us off his lawn.” Kevin Roose, a tech journalist at The New York Times, piled on in reply to Altman’s post: “A man predicts 85 of the last 0 AI crashes, and this is how you treat him?”
Roose’s reply was instructive: Although he doesn’t share Altman’s reverence for the technology, he regards it as real and powerful. His recent work for the Times has centered on questions like whether AI will endanger human existence within a few years and what to do if AI systems develop consciousness. He is now writing a book about the “race to build artificial general intelligence,” technology that matches or exceeds human intellect. He recently compared some AI doubters to “an antinuclear movement that didn’t admit fission was real.” When I contacted Roose to ask about this hard-line position, he said, “I feel like more and more people who are denying the capabilities of these models are just telling feel-good bedtime stories to people who don’t want to believe that change is coming.”
The fight between AI believers and atheists may continue for some time. Generative AI is complex, and the labels used to describe it are ambiguous: Is it “intelligent” or “conscious,” both, or neither, and does it matter? The companies behind the technology are likewise reluctant to establish clear definitions or benchmarks for “generally” or “super” intelligent capabilities. “We don’t know how to even ask the questions about the best way to understand these things,” Kambhampati says. Without questions or answers, faith fills the gap, and anything can be twisted to benefit either side.
Research by Kambhampati, Bender, the Apple team, and many others, both independent and commercial, has consistently demonstrated chatbots’ failures at a wide range of tasks: basic math, logic, conceptual reasoning, and more. Yet tech companies also regularly produce chatbots that perform better at those same tasks, sometimes significantly so. Is generative AI racing toward unbounded progress, or does it harbor a serious, systemic flaw? You can build an argument either way from the exact same data, and people do, all the time.
The radicalization of AI discourse is a problem because it encourages people to look past the material state of the world as it is. In reality, AI models are accelerating software engineering and scientific research while simultaneously spreading false information and, in some cases, feeding users’ delusions. Ignoring the chatbot era, or dismissing the technology as pointless, forecloses harder conversations about how it affects relationships, education, the environment, jobs, and more. Worse, insisting that superintelligence is imminent trivializes almost every concern about the technology as it exists today.
Beneath the layers of digital hostility, there may be room for agreement between the two camps. For all his bombast online, Marcus has acknowledged that today’s chatbots are a genuine advance, albeit far from a breakthrough; for all of Altman’s petulance, OpenAI’s massive new reasoning models rely on techniques not dissimilar to Marcus’s own decades-old ideas. Roose told me that AI may be both incredibly powerful and very harmful. “What I am not saying is: We should take the industry at its word,” he said. If OpenAI is indeed “confident we know how to build AGI,” as Altman stated earlier this year, the company should prove it.
The current state of generative AI was not inevitable, after all. When the field of “artificial intelligence” emerged in the 1950s, there were two main schools of thought. The “Connectionists” believed that intelligence could arise from digital “neural networks” that gradually learn from data. The “Symbolists” held that intelligence must come from hard-coded knowledge, reasoning, and rules. Neural networks prevailed, and they now form the basis of today’s chatbots and many major digital businesses.
In the 2010s, companies like Google and Meta built ever-larger data centers and neural networks to power search engines, social media, digital ads, shopping algorithms, and more. As customers flocked to these products, the tech companies amassed vast amounts of data, which they used to generate enormous profits. Those datasets are now a gold mine for chatbot training.
According to a 2023 analysis by MIT researchers, nearly all of the biggest and most powerful AI models are corporate, and 70 percent of AI Ph.D.s end up in industry. With hundreds of billions of dollars invested in generative-AI technologies and profitability still apparently years away, these companies cannot afford to show any sign of weakness. They have become radicalized, at least in part, because they need their vision to be realized. Near the end of our conversation about Cluely, Lee admitted to some cynicism: “Sure, it is a ploy to gain the attention of venture capitalists, but that’s only downstream of getting the attention of hundreds of millions of regular people.” He reminded me of Altman, whose ability to tell and capitalize on stories has turned OpenAI from a research lab into a factory for new AI products.
While discussing radicalization, Lee raised another point: “What if half of America had openly embraced technology and the internet, and the other half had moralized against it?” One half of the country would be flooded with affluence, he said, while the other half would “be living as if electricity was never invented.” According to Lee, “there would be such a massive gap in outcomes.” That, to him, would be a dystopia; that kind of inequity would be crazy.
Of course, half the country did not reject the internet, much less electricity. And “crazy” inequity arrived long before the supposed emergence of superintelligence, with technology playing a significant role. According to one economist, automation accounts for at least half of the nation’s growing wage disparity over the past 40 years. Tens of millions of Americans, and billions of people worldwide, lack access to broadband internet. Entire classes of businesses have been decimated by platforms like Amazon, Uber, and Airbnb without obvious, well-compensated alternatives emerging. Together, the wealth of the top 10 tech billionaires approaches $2 trillion, greater than the GDP of all but 11 nations. Singularity or not, Silicon Valley has created a parallel world.