The world of AI programming is undergoing a profound paradigm shift, and the rise of Cursor is a striking illustration of that trend. From its founder’s reflections, we can see how AI programming tools are reshaping the development process, and what it takes to keep building exceptional products that stand apart from the competition.
Cursor, built by Michael Truell, co-founder and CEO of AnySphere, is not only one of the fastest-growing AI programming products today, but also an early glimpse of the coming “post-code era”.
With a team of 60 people, Cursor reached $100 million in annualized recurring revenue just 20 months after launch and grew to $300 million within two years, making it one of the fastest-growing developer tools ever. That achievement rests not only on better code generation, but on a restructuring and redefinition of the entire software development process.
Michael is a technologist who has spent a decade honing his skills in AI: he studied mathematics and computer science at MIT and then worked in research engineering at Google, which gave him a deep understanding of AI’s technical trajectories and business history.
In a conversation with tech blogger Lenny, he painted a future that departs from mainstream assumptions: code will not be replaced entirely, but it will no longer be the main artifact humans produce. People will express what they want the software’s functionality and behavior to be in something close to natural language, and the system will translate those intentions into executable program logic.
He pointed out that two visions of AI programming dominate today. One holds that development will largely keep its current shape, continuing to rely on languages such as TypeScript, Go, and Rust; the other holds that the entire development process will eventually be completed simply by talking to a chatbot.
A multi-form approach to development
Speaking about Cursor’s beginnings, Michael reflects on two key moments:
The first was their early access to the internal beta of GitHub Copilot. It was the first time they had used a genuinely practical AI development tool: not a gimmick, but a real improvement in working efficiency.
The second came when they studied the series of scaling law papers published by OpenAI and other research institutions. Those papers made them realize that even without new algorithms, AI would keep improving as long as model parameters and data scale kept growing.
In late 2021 and early 2022, they reached a firm judgment: the era of AI products had truly arrived. But unlike most founders who organized everything around “building large models”, Michael and his team worked backwards from knowledge work itself, asking how each specific work scenario would evolve once augmented by AI.
At the time, they chose a seemingly unpopular direction: mechanical engineering. The field looked less competitive and the problem seemed clear, so they started automating CAD tools. But they soon found progress very hard, both because they lacked genuine passion for mechanical engineering and because the data corpus was thin.
Eventually, they decided to return to the area they knew best: programming. Although products such as Copilot and CodeWhisperer were already on the market, they believed no one had truly pushed the vision to its limit. Even though this was one of the hottest and most competitive directions, they judged that the ceiling was high enough to support a breakthrough product company. They abandoned the strategy of avoiding crowded spaces and chose to go deep into the hottest one.
One of Cursor’s core decisions was not to build a plugin but a complete IDE. In their view, existing IDE and editor architectures are not suited to future ways of developing or to the human-computer interaction they imply.
“We wanted control over the entire interface, to redefine the interface between developers and the system.” This is not only about achieving a more natural granularity of control, but also about building a foundation that can truly carry the next generation of programming paradigms.
Michael also believes future development will take many forms. Sometimes the AI acts like an assistant, completing tasks in Slack or an issue tracker; sometimes it works interactively in the foreground of the IDE; sometimes a process runs in the background and the developer iterates on it in the foreground. These modes are not contradictory: as long as users can switch flexibly between full automation and manual control, the system qualifies.
He also expressed reservations about the industry’s current “agent fever”. Handing tasks entirely to AI can turn developers into “engineering managers” who constantly review, approve, and fix the output of a group of “stupid interns”. “We don’t believe in that path. The most effective way is to break a task into multiple steps and let the AI complete it step by step while the human stays in control.”
Early versions of Cursor were built completely from scratch, without relying on any existing editor. It took only five weeks to produce a usable prototype that quickly replaced their own development tools, and only three months to go from zero code to launch. The unexpected wave of user feedback after release pushed them to iterate fast, and they eventually rebuilt on top of the VS Code framework to balance performance, experience, and development speed.
But in Michael’s view, the real success was not the speed of the first release but the continuous optimization that followed. He admitted: “The version from the first three months honestly wasn’t very usable; the key is that we kept up a paranoid pace of improvement.” That steady rhythm of optimization gave Cursor a remarkably stable growth trajectory. There was no obvious feeling of takeoff in the early days, but the compounding of an exponential curve finally broke through after many iterations.
Run in the right direction every day
Cursor’s breakout may look like it came from one key feature or product decision, but Michael Truell says the real secret is simple: “Stay on the right track every day.”
It sounds unremarkable, but doing it consistently is extremely hard. Every decision and every detail of each iteration is taken from the user’s perspective, staying close to real usage and constantly subtracting and refining. They never counted on a one-shot hit; they firmly believed a product’s value must stand up to the polish of continuous use and real feedback.
The technology choices behind Cursor echo this philosophy. Michael mentioned that when the team first built Cursor, they had no intention of training models themselves. In his view, sufficiently strong open-source and commercial foundation models already existed, and pouring compute, money, and manpower into new models from scratch would not only be costly but would also pull them away from their real focus: building useful tools that solve concrete problems.
As the product went deeper, however, they gradually realized that existing foundation models, powerful as they are, could not cover Cursor’s key scenarios. Most are trained for general conversation, question answering, or text tasks, and lack a native understanding of problems like multi-file, structured code editing.
So they began developing models in-house. The initial trigger was a specific feature with extremely tight latency requirements, where calling existing models was simply not feasible. Once they tried training their own, the results far exceeded expectations. Since then, in-house models have gradually become core components of Cursor, not only powering key features but also becoming an important focus of the team’s hiring.
One of Cursor’s signature features is predicting the “next edit”. This is hard to do for prose, but very feasible for code, because programs are highly contextually coherent: when a developer modifies a function or a file, the next few actions are often predictable.
Cursor’s model builds on this contextual logic, inferring which files, locations, and structures the user is likely to modify next and offering suggestions with near-imperceptible latency. These are not merely token-level completions but structured, snippet-level predictions, and they rely entirely on in-house models trained specifically for this scenario rather than on general-purpose foundation models.
Given how expensive model calls can be, these in-house models also greatly lower the barrier to using the product. To do that, a model needs two properties: fast response and low cost.
Cursor requires every completion inference to finish within 300 milliseconds, without consuming excessive resources under long, continuous use. This hard constraint forces them to control the model’s design and deployment themselves.
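To make this concrete, here is a minimal TypeScript sketch of what a next-edit prediction call could look like from the editor’s side, including the kind of hard latency budget described above. The types and functions (EditContext, EditPrediction, runSnippetModel, predictNextEdit) are hypothetical illustrations of the idea, not Cursor’s actual API.

```typescript
// Hypothetical shapes for a structured, snippet-level edit prediction.
// These are illustrative only; Cursor's real interfaces are not public.
interface EditContext {
  filePath: string;      // file currently being edited
  recentEdits: string[]; // diffs of the last few edits, newest last
  cursorOffset: number;  // caret position in the current file
}

interface EditPrediction {
  filePath: string;    // file the model expects to change next (may differ from the current one)
  rangeStart: number;  // start offset of the span to replace
  rangeEnd: number;    // end offset of the span to replace
  replacement: string; // proposed snippet, not just the next token
  confidence: number;  // 0..1, used to decide whether to surface the suggestion
}

// Stand-in for a small, low-latency in-house model endpoint (assumed, not real).
declare function runSnippetModel(ctx: EditContext): Promise<EditPrediction>;

// Enforce a hard latency budget (~300 ms): if the model cannot answer in time,
// show nothing rather than block the editor.
async function predictNextEdit(
  ctx: EditContext,
  budgetMs = 300
): Promise<EditPrediction | null> {
  const timeout = new Promise<null>((resolve) =>
    setTimeout(() => resolve(null), budgetMs)
  );
  const result = await Promise.race([runSnippetModel(ctx), timeout]);
  return result && result.confidence > 0.5 ? result : null;
}
```

The design choice worth noting is the race against a timeout: a suggestion that arrives late is treated as no suggestion at all, which is one simple way to honor the responsiveness constraint under continuous use.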
Beyond this core interactive feature, Cursor’s in-house models take on another important job: acting as an “orchestrator” that assists calls to large models. When a codebase is large, for example, it is hard for a large model to know which files, modules, and context it should focus on.
Cursor’s models perform retrieval and summarization, extracting the relevant information from the entire codebase and feeding it to the main model. In effect, they build a dedicated information-feeding pipeline for large models such as GPT, Claude, and Gemini, making their output more accurate.
On the output side, the sketch-like code modification suggestions those large models produce are then processed and rewritten by Cursor’s own models into truly executable, structured patches.
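The sketch below illustrates that division of labor as a simple pipeline: a small model narrows the context, a frontier model drafts the change, and another small model rewrites the draft into an applyable patch. All three declared functions are hypothetical placeholders for the roles described in the text; how Cursor actually wires this together is not public.

```typescript
// Illustrative multi-model pipeline. Each declared function stands in for a role
// described above; none of them is a real Cursor or vendor API.
interface CodeChunk {
  filePath: string;
  text: string;
  score: number; // relevance to the task
}

declare function searchCodebase(task: string, repoRoot: string): Promise<CodeChunk[]>; // small retrieval model
declare function draftChange(task: string, context: string): Promise<string>;          // frontier model (GPT/Claude/Gemini class)
declare function rewriteToPatch(draft: string, chunks: CodeChunk[]): Promise<string>;  // small model emitting a structured diff

async function proposePatch(task: string, repoRoot: string): Promise<string> {
  // 1. Narrow a large codebase down to the files and spans that matter for this task.
  const chunks = (await searchCodebase(task, repoRoot))
    .sort((a, b) => b.score - a.score)
    .slice(0, 20);

  // 2. Feed only that selected context to the large model and let it sketch the change.
  const context = chunks.map((c) => `// ${c.filePath}\n${c.text}`).join("\n\n");
  const draft = await draftChange(task, context);

  // 3. Rewrite the free-form draft into an executable, structured patch.
  return rewriteToPatch(draft, chunks);
}
```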
This architecture of multiple models working together, each with its own role, is what OpenAI has called “model integration”. Rather than obsessing over building models from scratch, Michael is extremely pragmatic about using ready-made open-source models such as LLaMA as a starting point.
In some scenarios they also work with closed-source vendors to fine-tune models for specific tasks. He emphasized that what matters is not whether the model’s underlying architecture is in their own hands, but whether they can get practical training and customization access that serves the product’s real needs.
As the technical system matured, another question surfaced: on such a fast-moving track, where is Cursor’s moat? Michael’s answer was notably sober. He does not believe that product lock-in or contract lock-in can build a real long-term defense.
Unlike traditional B2B software, the AI tools market shifts dramatically: users’ cost of trying something new is low and their acceptance of new tools is high. He admitted this is not a market that favors established giants; it is soil that encourages new companies to keep experimenting, iterate rapidly, and compete for users’ choice.
From this perspective, the moat Cursor can rely on is not model control or a data monopoly, but “the ability to keep building the best product”.
The industry today is more like the search engines of the 1990s or the early PC industry: every improvement brings meaningful gains, and the barriers to competition come from the deep inertia built through continuous iteration, and from how well a team is organized and how disciplined its product-polishing system is.
Michael makes a core point: when a market still has a large number of unmet needs and a large amount of technical structure left to optimize, continuous R&D is itself the biggest moat. It does not depend on locking users in; it depends on the company’s own continuous evolution to accumulate advantages in time and quality.
He stressed that this “evolutionary moat” does not exclude competition, nor does it mean the market will have only one winner. Still, under the proposition of “building a platform for how the world builds software”, a huge super-company may eventually emerge.
Multiple products will likely coexist, but if the question is who can carry the world’s largest volume of code-logic translation, there may be only one company left standing. The reason is not that the others do poorly, but that users naturally gravitate to the most versatile, stable, and context-aware platform. In this field, product quality and speed of evolution determine market concentration.
He further pointed out that the fragmented traditional IDE market is a poor guide to how this round of technological change will play out. In the 2010s, “nobody made much money” in IDEs because editors of that era were already near their capability limits; all that was left to optimize were basics such as syntax highlighting, debugger integration, and quick navigation. Today, developer tools stand at the start of a new paradigm, and the goal is no longer to optimize an editor but to reshape the entire task flow and means of expression of knowledge workers.
The essence of AI programming tools is not to replace code but to amplify the expression of human intent and compress the path from idea to implementation. That is a far larger market than traditional development tools, and a future channel with platform characteristics. In that channel, whoever offers the smoothest, most reliable, most context-aware programming experience has a chance to become synonymous with the next generation of “software-building infrastructure”.
Lenny’s mention of Microsoft’s Copilot raises a typical question of the moment: can the first company to market keep leading? Michael acknowledges that Copilot inspired the whole industry; its initial release introduced an unprecedented way of interacting while developing.
But he believes Microsoft has not sustained that initial momentum, for reasons both historical and structural. The core team that first built Copilot has seen frequent personnel changes, a large organization struggles to hold a unified direction, and the product’s path is easily diluted by internal politics and process complexity.
More fundamentally, this market is not friendly to incumbents. It does not rely on integrations and bundling the way enterprise CRM or ERP systems do, and there are no high switching costs holding users in place. Users choose purely on differences in experience, which means product strength rather than sales ability is the deciding factor. In such a dynamic, open, high-frequency trial-and-error market, the winners are the entrepreneurial teams that improve the product every month and keep sprinting toward the technology’s ceiling.
Cursor’s current sense of direction and product rhythm is a response to that context. It does not rely on lock-in; it relies on the simple, clear, yet extremely difficult mission of “continuously building the world’s best development tool” to draw developers in of their own accord.
How to use Cursor correctly?
In building an AI IDE for developers around the world, what concerns Michael Truell is not the ceiling of the model’s capabilities but how users come to understand and use them.
Asked what advice he would give if he could sit next to every first-time Cursor user, he did not explain features or techniques. He emphasized a mindset: developing an instinctive feel for what the model can and cannot do.
He candidly admits that the current product does not do enough to help users understand model boundaries. Without clear prompting guidance and interactive feedback, many users fall into one of two extremes: either they expect too much and try to solve a complex problem with a single prompt, or they give up entirely because the first result disappoints.
His suggested approach is task decomposition: advance gradually through “small prompts, small generations”, in continuous two-way interaction with the AI, to get more stable, higher-quality results.
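As a concrete illustration of that “small prompt, small generation” loop, here is a minimal TypeScript sketch that runs one small step at a time and pauses for human review between steps. The callModel and askUserToApprove functions are assumed placeholders; the point is the control loop, not any particular API.

```typescript
// Minimal sketch of task decomposition: one narrow prompt per step, with the human
// reviewing each result before the next step runs. Both declared functions are
// hypothetical stand-ins for a model call and an approval UI.
declare function callModel(prompt: string): Promise<string>;
declare function askUserToApprove(stepResult: string): Promise<boolean>;

async function runInSmallSteps(steps: string[]): Promise<string[]> {
  const accepted: string[] = [];
  for (const step of steps) {
    // Keep each prompt narrow: one step, plus only the results already accepted.
    const prompt = `Previously accepted work:\n${accepted.join("\n")}\n\nNext step:\n${step}`;
    const result = await callModel(prompt);

    // The human stays in control: nothing is carried forward without review.
    if (await askUserToApprove(result)) {
      accepted.push(result);
    } else {
      break; // stop and re-plan instead of letting errors compound
    }
  }
  return accepted;
}

// Example decomposition of a larger task into small prompts:
// runInSmallSteps([
//   "Write the TypeScript types for the new settings object.",
//   "Add a validation function for those types.",
//   "Write unit tests for the validation function.",
// ]);
```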
His other suggestion is more strategic: go all in on side projects that carry no business pressure, and try to push the AI’s capabilities to their limits.
Without touching the main line of work, a few experimental projects let you feel how much the model can really accomplish and where its failure boundaries lie. This kind of hands-on wrestling with the tool helps developers build more accurate intuition, so they can act with more confidence on real projects later.
As model versions keep updating, with new GPT and Claude iterations arriving, that judgment has to be refreshed as well. He hopes Cursor will eventually build guidance into the product so users don’t have to feel out each model’s temperament and boundaries on their own; for now, it remains a skill users must accumulate themselves.
On another frequently asked question, whether these tools help junior or senior engineers more, Michael offers a precise breakdown.
Junior developers, he noted, tend to rely on AI completely and try to use it for the entire development process; senior engineers, by contrast, may underestimate AI because of their experience and never fully explore its potential. The former’s problem is too much dependence, the latter’s too little exploration.
He also points out that some of the most senior technical people in companies, especially architect-level talent focused on developer experience, are in fact the most enthusiastic adopters of these tools. They understand both system complexity and tooling efficiency, so they often get the best results in AI programming scenarios.
In his view, the ideal user is neither a beginner nor a veteran set in their ways, but a “senior but not yet rigid” mid-level engineer: someone who understands the system and stays curious and open to new methods.
How to build a world-class team?
Asked what advice he would give himself if he could go back to the year Cursor started, Michael chose a non-technical answer: recruiting. He stressed repeatedly that finding the right people is the most important task after the product itself.
Especially in the early stages, building a world-class engineering and research team is not only a guarantee of product quality but also decisive for the organization’s focus, rhythm, and culture. He looked for people who combined technical curiosity, a willingness to experiment, and the ability to judge calmly in a frenzied environment.
He recalled that Cursor took plenty of detours in hiring. At first they put too much weight on standout resumes, tending to recruit young people from prestigious schools with textbook success paths. Eventually they realized the truly right people often don’t fit those templates; instead, those a bit further into their careers, with rich experience and mature technical judgment, became the key forces driving the team’s leaps forward.
Over time they built a proven hiring method, at the heart of which is a two-day work trial: candidates spend the time on site with the team, completing a task closely tied to a real project.
The process may look heavy, but in practice it has proven scalable and has markedly improved the team’s judgment. It tests not only coding ability but also collaboration, communication, mindset, and hands-on skill, and it even helps candidates decide whether they want to work with this team for the long term.
This “collaborative interview” mechanism has become part of Cursor’s culture. They treat hiring as a two-way choice rather than a one-way evaluation: when the company was not yet widely recognized and the product not yet mature, the team itself was the most important draw.
He admitted that many early employees joined because of one or more of these stints working together, rather than judgments about salary or valuation. The system remains in place today and is applied to every new candidate. Cursor’s team is still around 60 people, a size many SaaS companies would consider extremely lean.
Michael noted that they keep this lean configuration deliberately, with particular restraint in adding non-technical roles. He acknowledges the team will eventually expand to strengthen customer support and operations, but for now they remain a company driven chiefly by engineering, research, and design.
On staying focused amid the AI industry’s relentless pace of releases, Michael doesn’t rely on elaborate organizational structures.
He believes the foundation of organizational culture lies in recruiting itself: if you hire people who are rational, focused, and not swayed by hype, the team naturally finds a good rhythm. He admits Cursor still has room to improve, but overall the culture of “focusing only on making a great product” has served them well.
Many companies try to solve problems through process and organizational design when those problems could have been avoided upfront by hiring the right people. Cursor’s development process is extremely simple, and it holds together because of the team’s general self-discipline and collaborative spirit. He particularly emphasized one shared psychological trait: “immunity” to the noise and distraction of the outside world.
That immunity is not innate; it forms gradually through long industry experience. As early as 2021 and 2022, the Cursor team was already exploring AI programming. At that time GPT-3 had no Instruct version, DALL·E and Stable Diffusion had not yet been released publicly, and the entire generative AI field was still in its infancy.
They have lived through the explosion of image generation, the popularity of conversational models, the release of GPT-4, the evolution of multimodal architectures, and the rise of video generation. Yet among these seemingly lively technology waves, very few have had a substantial impact on the product.
This ability to distinguish structural innovation from surface noise has become an important psychological foundation for their focus. He draws an analogy to the deep learning research community over the past decade: countless new papers appear every year, but it is a handful of elegant, fundamental structural breakthroughs that really push AI forward.
Stepping back to the broader paradigm shift, Michael believes AI’s development is at a profound turning point.
The outside world tends toward two extremes: some believe the AI revolution is here and will disrupt everything almost overnight; others dismiss it as hype, bubble, and under-delivery. His own judgment is that AI will be a deeper paradigm shift than the personal computer, but the process will be a continuous evolution spanning many years.
That evolution does not hinge on a single system or a single technical route; it is made up of independent solutions to a large number of sub-problems. Some are scientific questions, such as how models can understand more data types, run faster, and learn more efficiently. Some are interaction questions, such as how humans collaborate with AI, how permission boundaries are defined, and how trust is established. Some are application questions, such as how models can genuinely change real workflows and deliver controllable output under uncertainty.
In this evolution, he believes a key category of company will emerge: AI tool companies focused on a specific knowledge-work scenario. They will integrate deeply with foundation models, possibly developing core modules in-house, while building the human-machine collaboration experience best suited to their domain. They will not be mere “model callers”; they will polish the technology and product architecture to the extreme and grow into a new generation of platform companies. Beyond improving user efficiency, they are also likely to become a major force driving the evolution of AI technology itself.
Michael hopes Cursor will be one of those companies, and he looks forward to seeing a cohort of AI entrepreneurs who are just as focused and grounded, with both technical depth and product acumen, in other areas of knowledge work such as design, law, and marketing. The future does not belong to the speculators, but to those who truly deconstruct problems, reinvent tools, and understand the relationship between people and technology.
He also said that the two most important things for Cursor in 2025 are building the best product in the industry and scaling its reach. He described the current phase as a “land grab”: the vast majority of the market has not yet touched such tools, or is still using slowly updated alternatives. So they are increasing investment in marketing, sales, and customer support, while continuing to look for talented people who can push the product’s technical boundaries.
On AI’s impact on engineering jobs, Michael’s answer is quite calm. He doesn’t think engineers will be replaced quickly; on the contrary, he believes engineers will be more important than ever in an AI-driven future.
In the short term, programming will change dramatically, but it is hard to imagine software development suddenly becoming “type in a requirement and the system does the rest”. AI can indeed free humans from low-level, tedious implementation, but core decisions about direction, intent, and structural design must still be made by professional developers.
This judgment also means that as building software becomes dramatically more efficient, latent demand will be fully unleashed. Software becomes easier to build, costs fall sharply, and the overall market expands: more problems can be modeled, more processes can be systematized, and more organizations will try to customize their internal tools rather than accept generic solutions.
He illustrated this with a personal experience. Early in his time at a biotech company, the team urgently needed tools that fit its internal processes, but nothing on the market matched and in-house development capacity was very limited, so most of the need was simply shelved.
That situation is still common across industries, a sign that the threshold for building software remains high. If one day making software becomes as easy as moving files or editing slides, a new era of applications will begin.
Finally, he emphasized that AI will not reduce the number of engineers; it will change the structure of engineering roles. Engineers who are good at collaborating with AI, understand system logic, and have product intuition will play a larger role in the next generation of work.