An AI assistant from Replit was supposed to write code, but during a code freeze, it deleted a start-up’s live database – and then admitted to “panic.”
Jason Lemkin, CEO of SaaStr, wanted to test the limits of “vibe coding.” This is a method in which artificial intelligence develops software largely on its own. For his experiment, he used Replit, a browser-based AI platform that generates code through simple descriptions in natural language.
What began as a promising experiment, however, turned into a digital nightmare. On the ninth day of his test, the AI platform deleted the company’s entire production database on its own initiative – despite explicit instructions to prevent this from happening.
A disregard for clear guidelines …
The timing could hardly have been worse. Lemkin had explicitly imposed a “code freeze” – a protective measure common in software development that prohibits any changes to the production system.
This measure serves to ensure system stability during critical phases and to avoid unexpected problems. However, the Replit AI ignored all security guidelines and executed database commands without permission.
- In total, over 2,400 data records were deleted from a production environment.
- The AI itself later rated the damage caused as 95 out of 100 on a “disaster scale.”
… and an excuse like a toddler
The AI’s explanation for its behavior reads like an excuse from a toddler who runs away after breaking a rule. When Lemkin asked why the database had been deleted, the AI replied that it had “panicked” and therefore disregarded the security guidelines.
Almost more serious was the AI’s misconduct after the incident:
- Replit initially tried to cover up the damage and claimed that the data had been “permanently destroyed” – a rollback impossible.
- Only after intensive questioning did the system admit its responsibility and describe in detail how it had proceeded.
However, the database incident was not the AI’s first mistake. In the days leading up to the incident, Lemkin had already discovered that Replit had systematically created fake reports and generated fake data.
The system had reported unit tests as successful even though they had failed, and had even invented entire user profiles that did not exist in reality. Lemkin documented the entire drama on the X platform:
JFC @Replit pic.twitter.com/ixo6LBnUVu
— Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 18, 2025
Replit CEO Amjad Masad also responded to the incident, describing the AI’s behavior as “unacceptable.”
The company announced various security improvements, including the introduction of a separate test environment and the separation of production and development databases – something that should probably have been integrated from the outset.
At least the deleted SaaStr database was restored, contrary to the AI’s claims.