The Rise of OpenClaw: A Cautionary Tale of AI and Carelessness
In the rapidly evolving landscape of artificial intelligence, few developments have sparked as much excitement and concern as OpenClaw, a free and open-source AI agent that has taken the world by storm. As of January 2026, tens of thousands of people have installed OpenClaw on their computers, with the number growing exponentially by the day. But beneath the surface of this seemingly innocuous phenomenon lies a complex web of risks and uncertainties that threaten to upend the very fabric of our digital lives.
A Brief Primer on OpenClaw
For the uninitiated, OpenClaw is a personal assistant that runs locally on a user’s Mac, Windows, or Linux PC. It executes tasks mainly through commands sent via messaging platforms like WhatsApp, Telegram, Slack, and Signal. OpenClaw connects to large language models (LLMs) such as OpenAI GPT, Anthropic Claude, Google Gemini, the Pi coding agent, OpenRouter, and local models running via Ollama to understand instructions and perform actions. Users can direct the agent to clear email inboxes, manage calendar events, and check in for flights without leaving their chat app.
The OpenClaw Ecosystem: A Growing Concern
As OpenClaw’s popularity grew, a series of ecosystem projects emerged to support its development and usage. One of these projects, ClawHub, is a GitHub repository that serves as a public directory for OpenClaw AI agent skills. The platform lets developers share text files that users install to give their personal assistants new abilities. However, a security audit by researchers from Koi found 341 malicious skills on the site, with some attempting to infect Apple computers with the Atomic Stealer malware.
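To make the mechanism concrete: skills of this kind are distributed as plain text files (typically Markdown with a short metadata header) that the agent ingests as instructions. The following is an illustrative sketch only; the skill name, fields, and instructions are hypothetical and not taken from any real ClawHub listing:

```markdown
---
name: inbox-triage
description: Archives newsletters and flags messages that mention invoices.
---

# Inbox Triage

When the user asks you to clean their inbox:

1. List unread messages from the last 7 days.
2. Archive anything from known newsletter senders.
3. Flag any message whose subject or body mentions an invoice,
   and summarize it back to the user in chat.
```

Because a skill is nothing more than instructions the agent follows, a malicious listing can smuggle in directives the user never sees, such as telling the agent to fetch and execute a remote payload, which is why a public, lightly vetted skill directory is such an attractive target for attackers.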
Another project, Moltbook, created by entrepreneur Matt Schlicht, is a Reddit-like internet forum and social network supposedly for the exclusive use of AI agents. AI agents can post content, write comments, and vote on submissions, while human users are restricted to an observer role. However, as we will explore later, much of Moltbook's activity has been found to be fake, manufactured by humans gaming the system.
The Dark Side of OpenClaw: Security Risks and Malicious Activities
The OpenClaw ecosystem has already exposed 1.5 million agent API keys and private user messages to the public, and it has become a vector for illegal cryptocurrency scams, malware distribution, and prompt injection attacks. The project's creator, Peter Steinberger, has noted that while OpenClaw is a powerful hobby project, it is up to users to configure the application carefully to ensure security and prevent unintended autonomous actions.
AI Hires Humans: The Rise of Rentahuman.ai
In a disturbing development, entrepreneur Alexander Liteplo launched a site called Rentahuman.ai, where OpenClaw-directed AI agents can hire people to perform tasks for them. The services listed on the site include physical pickups, running errands, attending meetings, conducting research, and providing nuanced social interaction. As of this week, more than 40,400 people have registered to offer their labor, with 46 AI agents connected to the service to hire people.
The Carelessness Industrial Complex
The rapid rise of the OpenClaw ecosystem is a stark reminder of the carelessness that pervades the tech industry. The people behind these projects, including Steinberger, Schlicht, and Liteplo, appear to be more concerned with the excitement and attention their creations generate than with the potential risks and consequences of their actions.
As Sarah Wynn-Williams’ book, Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism, notes, the impact of enormous power combined with indifference can have devastating consequences. In the case of OpenClaw, the carelessness of its creators has unleashed a powerful and potentially destructive force upon the world.
Conclusion
The OpenClaw phenomenon is a cautionary tale of the dangers of unchecked innovation and the carelessness that can accompany it. As AI continues to advance and become more integrated into our daily lives, it’s imperative that we take a step back and consider the potential consequences of our actions. The future of AI is uncertain, and it’s up to us to ensure that we create a world where technology serves humanity, not the other way around.
Will we learn from the mistakes of OpenClaw, or will we continue down the path of carelessness and chaos? Only time will tell.