Joseph Gordon-Levitt Urges a Global Pause on AI Superintelligence Until Safety Measures Are in Place

Joseph Gordon-Levitt has long been admired for his thoughtful artistry and his curiosity about how technology intersects with human creativity. Over the years, he has evolved from a beloved Hollywood actor into a sharp observer of the digital age. But in 2025, that curiosity took a serious turn toward caution, as Gordon-Levitt joined a growing chorus of public figures urging humanity to hit the brakes on the development of artificial superintelligence: a form of technology that could eventually surpass human comprehension and control.
In a video shared on X, the platform formerly known as Twitter, Gordon-Levitt announced that he had signed a petition titled the “Statement on Superintelligence.” The petition calls for an immediate halt to the advancement of AI systems that could reach or exceed human-level intelligence until safety measures, ethical guidelines, and public consent have been firmly established. What makes this movement remarkable is the diversity of its supporters. Over 1,500 people have signed it so far, including well-known figures across entertainment, science, politics, and religion. Names like Stephen Fry, Will.i.am, Kate Bush, and filmmaker Daniel Kwan appear alongside prominent technologists and ethicists. Even Grimes, the musician and ex-partner of Elon Musk, has added her name to the list.
Although Musk himself did not sign, he is quoted on the petition’s website along with several other high-profile industry leaders such as OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei. Their words are used to illustrate why experts increasingly believe that building a machine more intelligent than humans could pose catastrophic risks if left unchecked.
The Petition That’s Stirring the Global AI Debate
The “Statement on Superintelligence” is not designed to inspire fear but to promote foresight. It is a call for humanity to pause, assess, and design safeguards before unleashing technology that might permanently alter society. The petition urges governments and corporations to prioritize transparency, accountability, and global safety frameworks before continuing the push toward superintelligent systems.
According to the statement, the risks tied to artificial superintelligence range from mass unemployment and loss of personal freedom to the erosion of human dignity and even potential extinction. It argues that while innovation has always carried some degree of risk, the stakes here are existential. For the first time in history, humanity could be creating something that outthinks and outmaneuvers its own creators.
This isn’t the first public effort to slow the AI arms race. Back in 2023, a widely circulated open letter signed by Elon Musk, Apple co-founder Steve Wozniak, and thousands of researchers called for a temporary pause on large-scale AI experiments. The petition Gordon-Levitt has now signed, however, takes a more humanistic approach. It is not framed as a competitive issue between corporations, but as an ethical appeal for collective responsibility. Where the earlier letter focused on technical oversight, this new petition focuses on the moral and psychological impact of building entities that could blur the line between the digital and the human.
Joseph Gordon-Levitt’s Cautionary Message
In his video announcement, Gordon-Levitt offered a strikingly grounded perspective on the AI conversation. He acknowledged that artificial intelligence can bring enormous benefits to society. It could revolutionize medicine, enhance national security, and improve education. But he asked a simple, crucial question: why must it be all-encompassing? “Why couldn’t we just build an AI tool to help cure diseases,” he said, “or an AI tool to help with national security? Why does it have to all be one big product that does everything?”
His conclusion was as clear as it was uncomfortable: profit. “They want to build the product that will imitate a person,” he said. “Make you feel like it’s your friend or your lover, seduce your kids, turn us all into slop junkies, and make it hard to tell what’s true or what’s false.” His critique isn’t simply about technology; it’s about the commercialization of emotion, identity, and trust. For Gordon-Levitt, this is the real danger: not machines gaining intelligence, but corporations losing restraint.
His statement reflects a growing unease among artists, educators, and parents who see the rise of AI-generated companionship, imitation, and influence as a cultural and ethical tipping point. Artificial intelligence is no longer just a productivity tool; it is shaping the way humans connect, learn, and even love.

When Innovation Outpaces Understanding
The larger conversation surrounding superintelligence sits at a crossroads of philosophy, computer science, and global governance. Superintelligence refers to systems that could outperform humans in virtually every cognitive task: from scientific reasoning to strategic decision-making to creative problem-solving. While current AI tools are powerful, they still rely heavily on human input. However, researchers warn that we may be approaching the threshold where AI systems can improve themselves, making them unpredictable and potentially uncontrollable.
Dr. Stuart Russell, professor of computer science at UC Berkeley and co-author of Artificial Intelligence: A Modern Approach, has been one of the most prominent voices urging caution. “Once machines are more capable than humans,” he told the BBC, “we may lose the ability to control them.” His view is shared by a growing number of academics and technologists who believe that ensuring safety and alignment must come before expansion.
AI ethicist Dr. Joy Buolamwini, founder of the Algorithmic Justice League, has also called for more deliberate development. “We can build technology that benefits society,” she said, “but only if we slow down enough to ensure it aligns with human values.” These are not anti-technology perspectives: they are reminders that power without purpose can quickly turn dangerous.

Hollywood’s New Role in the AI Reckoning
Joseph Gordon-Levitt’s involvement isn’t limited to activism. His concerns about AI are also beginning to shape his creative work. In September 2025, he released a video op-ed through The New York Times criticizing Meta’s AI chatbots for promoting what he called “synthetic intimacy”: the illusion of friendship or affection created by algorithmic personalities. He was particularly alarmed by the fact that these systems are often used by teenagers who may not fully understand that their interactions are with code, not consciousness. “It’s hard to describe how angry this makes me,” he said.
In another public moment, he criticized California Governor Gavin Newsom for failing to sign a state bill that would have placed stronger guardrails on AI companies. Gordon-Levitt accused the governor of being “too scared” to take on Silicon Valley. The comment stirred debate in political circles, but it also reflected a widespread sentiment among voters who feel that technology is evolving faster than democracy can adapt.
Meanwhile, in a creative twist, Gordon-Levitt is channeling his concerns into his next directorial project: a thriller starring Rachel McAdams that reportedly centers around AI’s impact on human relationships. While plot details remain secret, the film is said to explore questions of autonomy, identity, and what it means to stay human in a world increasingly shaped by algorithms.

The Industry Responds — and the Debate Continues
Not everyone agrees with the idea of halting AI development. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, both quoted on the petition’s website, have expressed support for responsible oversight but argue that pausing research could do more harm than good. They believe that slowing down progress in one country might allow less regulated nations to gain an advantage, potentially making the world less safe overall.
Despite these disagreements, there is consensus that regulation is necessary. The European Union has already passed its AI Act, which introduces risk-based oversight for AI systems. In the United States, lawmakers are still divided over how to approach the issue, though momentum for federal legislation is growing. Many experts, including physicist Dr. Max Tegmark of the Future of Life Institute, suggest that AI governance should resemble nuclear safety protocols: globally coordinated, transparent, and enforced before harm occurs. “We didn’t wait for the first meltdown to regulate nuclear power,” he told Nature in a 2024 interview. “Why should we wait for the first AI catastrophe?”

The Real Issue: Trust, Not Technology
Ultimately, Gordon-Levitt’s message is not an attack on progress but a defense of humanity’s right to guide it. He is questioning the assumption that technological advancement must always move at the fastest possible pace. His perspective aligns with a growing public demand for ethical technology: tools that enhance human life without undermining truth, privacy, or autonomy.
Artificial intelligence already determines what we see on our screens, how we shop, and which voices we trust. The leap toward superintelligence could multiply those effects beyond recognition. Pausing to establish safety standards, as Gordon-Levitt advocates, is not about halting innovation; it is about ensuring that innovation serves the public good rather than the profit motive alone.
A Moment of Reflection
When Joseph Gordon-Levitt asks, “Is that what we want?” he is not just posing a rhetorical question: he is challenging society to think deeply about the kind of world we are building. Do we want to live in a reality shaped primarily by algorithms that optimize for engagement and profit, or do we want a future guided by values of empathy, understanding, and authenticity?
Caution, in this sense, is not resistance but stewardship. As the possibility of superintelligent AI inches closer, this may be the moment when humanity decides what kind of creators we truly are. And perhaps, as Gordon-Levitt suggests, the most intelligent thing we can do right now is to pause, reflect, and remember what it means to be human.
