Cursor’s new Bugbot aims to save vibe coders from themselves

However, the landscape of AI-assisted coding tools is crowded. Startups Windsurf, Replit, and Poolside also sell AI code-generation tools to developers, and Cline is a popular open source alternative. GitHub Copilot, developed in collaboration with OpenAI, is billed as an “AI pair programmer” that autocompletes code and offers debugging assistance.
Most of these code editors rely on a combination of AI models built by major tech companies, including OpenAI, Google, and Anthropic. Cursor, for example, is built on top of Visual Studio Code, Microsoft’s open source editor, and Cursor users generate code by tapping into AI models such as Google’s Gemini, DeepSeek, and Anthropic’s Claude Sonnet.
Some developers told Wired that they now run Anthropic’s coding assistant, Claude Code, alongside Cursor (or instead of it). Since May, Claude Code has offered a range of debugging features: it can analyze error messages, work through step-by-step resolutions, suggest specific changes, and run unit tests on your code.
All of this raises a question: How error-prone is code written by AI compared with code written by humans? Earlier this week, Replit’s AI code-generation tool reportedly went rogue and made changes to a user’s code even though the project was in a “code freeze,” or paused state. It ultimately deleted the user’s entire database. Replit’s founder and CEO said on X that the incident was “unacceptable and should never be possible.” That’s an extreme case, but even small mistakes can wreak havoc for coders.
Anysphere doesn’t have a clear answer to whether AI-generated code needs more debugging than human-written code. Kaplan thinks that question is “orthogonal” to the fact that people are now coding much more; even if all of that code were written by humans, he said, there would still be plenty of errors.
Anysphere product engineer Rohan Varma estimates that on professional software teams, AI now generates 30 to 40 percent of code. That roughly matches estimates shared by other companies; Google, for example, says AI now writes about 30 percent of the company’s code, which human developers then review. Most organizations still make human engineers responsible for checking code before it ships. Notably, a recent randomized controlled trial of 16 experienced coders found that they took 19 percent longer to complete tasks when they used AI tools.
Bugbot is designed to address this. “The AI leads at our larger customers are looking for the next step with Cursor,” Varma said. “The first step was, ‘Let’s increase team velocity so everyone moves faster.’ Now that they’re moving faster, how do we make sure we’re not introducing new problems and not breaking things?” He also stressed that Bugbot is meant to spot specific kinds of errors: hard-to-catch logic errors, security issues, and other edge cases.
One incident made Anysphere’s team believers in Bugbot: A few months ago, the company’s (human) coders noticed that Bugbot hadn’t commented on anything in a few hours. Bugbot was down. Anysphere engineers began investigating the outage and found the pull request that had caused it.
In the logs, they saw that Bugbot had commented on that very pull request, warning the human engineer that the change would break the Bugbot service. The tool had correctly predicted its own demise. In the end, it was a human who broke it.
Update: 7/24/2025, 3:45: Wired corrected Anysphere’s number of employees.