a16z’s latest: 9 emerging development patterns in the AI era

With the rapid development of artificial intelligence technology, the field of software development is undergoing a profound transformation. Traditional development models and tools are being redefined to accommodate the new needs brought about by AI agents and intelligent systems.

AI is profoundly changing how products are built, and the shift is happening faster than most people expect.

Previously, YC managing partner Jared Friedman revealed that a quarter of the startups in the W25 batch use AI to generate their codebases.

With the rise of AI programming tools, AI is no longer just something developers use to write code; it is becoming infrastructure for building software itself.

Not long ago, a16z published an article about how AI is changing the way software is developed. It explores nine emerging development patterns that address real developer pain points; while still in their infancy, they have a lot to offer.

These patterns range from rethinking version control for AI-generated code to LLM-powered user interfaces and documentation.

Next, let’s take a look.

01 AI-Native Git: Reimagining version control for AI agents

As AI agents write or modify ever larger amounts of application code, developers’ focus is starting to shift. We are no longer obsessed with how each line of code is written, but with whether the output behaves as expected: did the change pass the tests? Does the app still work as intended?

This upends a long-standing mental model: Git was designed to track the precise history of hand-written code, but that fine-grained tracking starts to make less sense once coding agents enter the picture.

Developers usually don’t review every diff line by line – especially when changes are large or AI-generated – they only care whether the new behavior is as expected. As a result, the Git SHA, once the authoritative identifier of “codebase state”, begins to lose its semantic significance.

A SHA can tell you that something changed, but not why, or whether it is correct. In an AI-centric development process, a more useful unit of truth might be the combination of the prompt that generated the code and the tests that verify its behavior.

In this model, “code state” may be better represented by the inputs that generated the code (e.g., prompts, specifications, constraints) plus a set of passing assertions, rather than by a fixed commit hash. In fact, we may end up packaging prompt + tests into version-controlled, standalone units and letting Git track those units, not just the source code.
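
To make this concrete, here is a minimal sketch (in TypeScript) of what such a version-controlled prompt + test unit might look like. The shape and field names are purely illustrative assumptions, not an existing standard.

```typescript
// A hypothetical "unit of change" that Git could track alongside (or instead of)
// raw diffs: the prompt/spec that produced the code plus the assertions that
// verify its behavior. Field names are illustrative only.
interface PromptTestUnit {
  id: string;                 // stable identifier for this unit
  prompt: string;             // the instruction or spec given to the coding agent
  constraints: string[];      // architectural or API constraints the agent must respect
  generatedFiles: string[];   // paths of files produced or modified
  tests: {
    name: string;             // human-readable description of the expected behavior
    command: string;          // how the check is run
    passed: boolean;          // result of the last verification run
  }[];
  model: string;              // which model or agent produced the change
}

// Example: the "state" of a change is captured by its intent and its passing
// tests, not by the exact lines of generated code.
const unit: PromptTestUnit = {
  id: "checkout-retry-logic-001",
  prompt: "Add exponential backoff retries to the checkout API client",
  constraints: ["do not change the public client interface"],
  generatedFiles: ["src/checkout/client.ts"],
  tests: [
    { name: "retries up to 3 times on 503", command: "pnpm vitest run checkout.test.ts", passed: true },
  ],
  model: "example-coding-agent",
};

console.log(JSON.stringify(unit, null, 2));
```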

Looking further ahead: in AI-agent-driven workflows, the source of truth may shift upstream to prompts, data structures, API contracts, and architectural intent.

The code becomes a byproduct of these inputs, more like a compiled artifact than hand-written source. In this world, Git’s role gradually shifts from collaborative workspace to artifact log — a place that records not only what changed, but also why and who (or what) did it.

We may start to include richer metadata, such as which agent or model made the change, which parts of the code are protected, where human review is needed – or when an AI reviewer like Diamond should step into the process.
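
As a rough sketch of that metadata (again, the field names are hypothetical, not a real Git extension), each change might carry a provenance record like this:

```typescript
// Hypothetical provenance metadata that could accompany each change in an
// AI-native history: who (or what) made it, what is off-limits, and what kind
// of review it needs. Names are illustrative only.
interface ChangeProvenance {
  author: { kind: "human" | "agent"; name: string; model?: string };
  protectedPaths: string[];        // regions a coding agent must not modify
  reviewPolicy: "none" | "ai-review" | "human-required";
  reviewer?: string;               // a human login, or an AI reviewer such as Diamond
}

const provenance: ChangeProvenance = {
  author: { kind: "agent", name: "refactor-bot", model: "example-model" },
  protectedPaths: ["src/billing/**"],
  reviewPolicy: "human-required",
  reviewer: "alice",
};

console.log(provenance);
```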

▲ A mock-up of what an AI-native Git workflow might look like in practice

02 Dashboards → synthesis: dynamic AI-driven interfaces

For years, dashboards have been the primary interface for interacting with complex systems such as observability stacks, analytics tools, and cloud consoles (think AWS).

But their design often suffers from user experience overload: too many knobs, charts, and tabs, forcing users to both “find information” and “figure out how to use it.”

Especially for non-professional users or cross-team collaboration, these dashboards can seem daunting or inefficient. Users know what they want to achieve, but they don’t know where to start or what filters to apply to find the answer.

The latest generation of AI models opens up a possible transformation: instead of treating dashboards as fixed canvases, we can layer search and interaction capabilities on top of them.

Today’s large language models (LLMs) can help users find the right controls (e.g., “Where do I adjust the throttling settings for this API?”); synthesize data from across the screen into easy-to-understand insights (e.g., “Summarize error trends across all pre-release services over the last 24 hours”); and even surface issues users aren’t yet aware of (e.g., “Based on what you know about my business, generate a list of key metrics I should focus on this quarter”).

We’ve already seen approaches like Assistant UI that let agents use React components as tools. Just as content becomes dynamic and personalized, the UI itself can become responsive and conversational.

Next to a natural-language-driven interface that reconfigures itself around user intent, a purely static dashboard quickly looks obsolete. For example, instead of clicking through five layers of filters to isolate a metric, a user can simply say, “Show anomalous data for Europe over the last weekend,” and the dashboard reorganizes itself to present that view, along with aggregated trends and related logs.
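
One way to picture the mechanics (a minimal sketch, not Assistant UI’s actual API): the model is handed a “render view” tool with a typed schema, and it fills in the arguments from the user’s sentence. The tool name, fields, and handler below are assumptions for illustration.

```typescript
// A hypothetical tool an LLM could call to reconfigure a dashboard from a
// natural-language request such as "Show anomalous data for Europe over the
// last weekend". Schema and handler are illustrative only.
import { z } from "zod";

const renderViewArgs = z.object({
  metric: z.string(),                                          // e.g. "error_rate"
  region: z.string().optional(),                               // e.g. "eu-west"
  timeRange: z.object({ from: z.string(), to: z.string() }),   // ISO timestamps
  anomaliesOnly: z.boolean().default(false),
  include: z.array(z.enum(["trend", "logs", "related_services"])).default(["trend"]),
});

type RenderViewArgs = z.infer<typeof renderViewArgs>;

// The dashboard backend interprets the structured intent and assembles the view.
async function renderView(args: RenderViewArgs) {
  // ...query the metrics store, filter anomalies, attach related logs...
  return { title: `${args.metric} (${args.region ?? "all regions"})`, panels: args.include };
}

// Arguments the model might produce from the user's sentence:
renderView(renderViewArgs.parse({
  metric: "error_rate",
  region: "eu-west",
  timeRange: { from: "2025-04-05T00:00:00Z", to: "2025-04-07T00:00:00Z" },
  anomaliesOnly: true,
  include: ["trend", "logs"],
})).then((view) => console.log(view));
```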

Even more powerfully, users can ask, “Why did our NPS score drop last week?” The AI might pull up survey sentiment data, correlate it with a recent product launch, and generate a short diagnostic analysis.

In the bigger picture, if agents are now also consumers of software, we need to rethink what a “dashboard” is and who it is for. For example, a dashboard could be rendered into a view better suited to the agent experience – a structured, programmable, accessible interface that helps agents sense system state, make decisions, and take action.

This could lead to a dual-mode interface: one for human users and one for agents, both sharing the same state but optimized for different modes of use.
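
Purely as an illustration of that dual-mode idea: the same underlying state could be projected into a human-friendly view and a machine-readable agent view. Everything below (names, thresholds, actions) is an assumption, not a real product’s schema.

```typescript
// Illustrative sketch: one source of state, two projections.
interface ServiceStatus {
  service: string;
  errorRate: number;        // errors per minute
  latencyP95Ms: number;
  deployedVersion: string;
}

// Human-facing projection: friendly labels, rounded numbers, a status summary.
function toHumanView(s: ServiceStatus) {
  return {
    title: `${s.service} (${s.deployedVersion})`,
    status: s.errorRate > 5 ? "degraded" : "healthy",
    summary: `p95 latency ${Math.round(s.latencyP95Ms)} ms, ${s.errorRate.toFixed(1)} errors/min`,
  };
}

// Agent-facing projection: raw structured fields plus the thresholds and
// actions an agent can reason over and invoke programmatically.
function toAgentView(s: ServiceStatus) {
  return { ...s, thresholds: { errorRate: 5 }, actions: ["rollback", "scale_out", "open_incident"] };
}

const status: ServiceStatus = { service: "checkout-api", errorRate: 7.2, latencyP95Ms: 412, deployedVersion: "v142" };
console.log(toHumanView(status), toAgentView(status));
```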

In a way, agents are taking on roles that used to be played by alerts, scheduled jobs, or condition-based automation, but with greater contextual understanding and flexibility. Unlike traditional preset logic (for example, “send an alert if the error rate exceeds a threshold”), an agent can say, “The error rate is rising; here are the likely causes, the affected services, and the recommended remediation.” In this new world, dashboards are no longer just places to observe systems, but shared spaces where humans and agents work together, synthesize information, and take action.

▲ The evolution of dashboards: how to better serve the dual needs of human users and AI agents

03 Documentation as a combination of tools, indexes, and interactive knowledge bases

Developer behavior when working with documentation is changing.

Instead of starting from a table of contents or reading top to bottom, users now start with a question: “How do I complete this task?” This shift in mental model changes the role of documentation from “let me learn this spec” to “help me reorganize this information around what I’m trying to do.”

This subtle shift—from passive reading to active query—is reshaping what documents should be.

Documents are no longer just static HTML or Markdown pages, but evolve into interactive knowledge systems supported by indexing, embeddings, and tool-aware AI agents.

As a result, we’re seeing the rise of products like Mintlify: they not only structure documents into semantically searchable databases, but also become important sources of context for cross-platform coding agents.

Today, Mintlify’s content is often referenced by AI coding agents in AI integrated development environments (AI IDEs), VS Code plugins, or terminal agents, as these agents use the latest documentation as the basis for generating code.

This fundamentally changes the purpose of documentation: it no longer serves only human readers but is also consumed by AI agents. In this new relationship, the documentation interface acts more like an instruction set for the agent, not just displaying raw content but explaining how to use a system correctly.
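
A hedged sketch of what that looks like from the agent’s side (this is not Mintlify’s actual API; the endpoint and response shape are assumptions): docs become a semantic search tool the agent queries before generating code.

```typescript
// Hypothetical sketch: a docs site exposed as a semantic search tool that a
// coding agent can query for grounded, up-to-date context.
interface DocChunk {
  url: string;        // canonical page the chunk came from
  heading: string;    // section heading, useful for citation
  content: string;    // the text the agent grounds its answer or code on
  score: number;      // similarity score from the vector index
}

async function searchDocs(query: string, topK = 5): Promise<DocChunk[]> {
  // In practice: embed the query, run a nearest-neighbor search over the
  // pre-embedded documentation chunks, and return the best matches.
  const res = await fetch("https://docs.example.com/api/search", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query, topK }),
  });
  return res.json();
}

// An agent generating code against an SDK would call this first, then follow
// the returned snippets instead of relying on stale training data.
searchDocs("how do I configure webhooks?").then((chunks) =>
  chunks.forEach((c) => console.log(c.heading, c.url)),
);
```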

▲ Screenshot of Mintlify: users can press cmd+k to open an AI chat window that answers questions about the Mintlify docs

04 From static templates to intelligent generation: Vibe coding is replacing the Create React App era

In the past, starting a new project meant choosing a static template, such as a boilerplate GitHub repository, or using a command-line tool like create-react-app, next init, rails new.

These templates provide a basic framework for new applications, bringing uniformity but little room for customization. Developers can only accept the default configuration provided by the framework, or risk a lot of manual refactoring.

But now, with the rise of “text-to-app” platforms like Replit, Same.dev, Lovable, Convex’s Chef, and Bolt, as well as AI IDEs like Cursor, this paradigm is shifting.

Developers simply describe what they want (e.g., “a TypeScript API service using Supabase, Clerk, and Stripe”), and the AI can generate a tailored project structure in seconds. The resulting starter is no longer generic but personalized and purposeful, reflecting the developer’s intent and their chosen tech stack.

This opens up a new distribution model for the ecosystem. In the future, rather than a few mainstream frameworks dominating a long tail of use cases, we may see more composable, stack-specific ways of building that dynamically mix and match tools and architectures.

The focus is no longer on “choosing a framework” but on “describing a goal,” with the AI building an appropriate technology stack around that goal. One engineer might scaffold an app with Next.js and tRPC, another might start with Vite and React, and both get an initial structure that is ready to use.
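
To make the idea tangible, here is a sketch of the kind of structured spec such a tool might derive from the description above before generating any files. This is not any specific platform’s format; every field is an illustrative assumption.

```typescript
// Illustrative only: a structured project spec a text-to-app tool might derive
// from "a TypeScript API service using Supabase, Clerk, and Stripe".
interface ProjectSpec {
  name: string;
  language: "typescript";
  framework: string;                               // e.g. "next" or "hono"
  services: { auth?: string; db?: string; payments?: string };
  features: string[];                              // high-level intents, not file names
}

const spec: ProjectSpec = {
  name: "acme-api",
  language: "typescript",
  framework: "next",
  services: { auth: "clerk", db: "supabase", payments: "stripe" },
  features: ["user signup", "subscription billing", "usage metering"],
};

// A generator would expand this into a tailored file tree, env template, and
// wiring code, rather than copying a one-size-fits-all boilerplate.
console.log(JSON.stringify(spec, null, 2));
```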

Of course, this change comes with trade-offs. The standard technology stack does have many benefits, such as improving team efficiency, simplifying the onboarding process, and making it easier to troubleshoot issues within the organization.

Cross-framework refactoring is not just a technical challenge; it usually involves product decisions, infrastructure constraints, and team skills. What has changed is the cost of switching frameworks or starting without one. Experimentation becomes far more feasible when AI agents can understand project intent and perform large-scale refactors semi-autonomously.

This means framework choices are becoming increasingly reversible. A developer might start with Next.js, later decide to migrate to Remix and Vite, and let an AI agent do most of the refactoring work. This greatly reduces the lock-in frameworks once imposed and encourages more experimentation, especially early in a project. It also lowers the barrier to adopting a more specialized, opinionated stack, since switching later is no longer a huge investment.

05 Farewell, .env: A new approach to secrets management in the era of AI agents

For decades, .env files have been the default way for developers to manage sensitive information such as API keys, database URLs, and service tokens locally.

They are simple, portable, and developer-friendly. But in a world powered by AI agents, this paradigm is starting to break down. When an AI IDE or agent writes code, deploys services, and orchestrates environments on our behalf, it is no longer clear who actually owns the .env file.

We are already starting to see signs of a possible future pattern. For example, the latest MCP (Model Context Protocol) specification includes an OAuth 2.1-based authorization framework, which signals a potential shift toward giving AI agents scoped, revocable capability tokens rather than exposing raw secrets directly.

We can imagine a scenario in which an AI agent does not obtain your AWS key directly, but instead receives a short-lived, permission-scoped credential or capability token that only allows it to perform specific actions.

Another possible development path is the rise of “local secret brokers” – services that run on your local machine or in parallel with your application, acting as intermediaries between AI agents and sensitive credentials.

Instead of injecting keys through .env files or hardcoding them into the project, agents could request a capability (such as “deploy to the pre-release environment” or “send logs to Sentry”), and the broker would decide on the spot whether to grant it, keeping an audit log throughout.

This decouples key access from the static file system, making key management more like an API authorization behavior than a traditional environment configuration process.
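
As a minimal sketch of such a flow (no real broker product is implied; the local endpoint, scope names, and response shape are assumptions): the agent asks for a narrow capability and receives a short-lived token instead of the raw secret.

```typescript
// Hypothetical local secret broker flow: the agent never sees the raw AWS key;
// it asks for a narrow capability and receives a short-lived, scoped token,
// while the broker records an audit entry.
interface CapabilityGrant {
  token: string;            // short-lived credential, not the underlying secret
  scope: string;            // e.g. "deploy:staging"
  expiresAt: string;        // ISO timestamp; forces re-authorization later
}

async function requestCapability(scope: string, reason: string): Promise<CapabilityGrant> {
  const res = await fetch("http://localhost:7077/grants", {   // broker running locally
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ scope, reason, requester: "coding-agent" }),
  });
  if (!res.ok) throw new Error(`grant denied: ${res.status}`); // the broker can refuse or ask a human
  return res.json();
}

// The agent asks for exactly what it needs for this task, nothing more.
requestCapability("deploy:staging", "ship PR preview").then((g) =>
  console.log(`granted ${g.scope} until ${g.expiresAt}`),
);
```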

▲ A CLI mock-up of an agent-centric secret broker flow

06 Accessibility as a universal interface: application interfaces that LLMs can “see”

We are starting to see a new class of apps (such as Granola and Highlight) that request access to macOS’s accessibility settings – not for traditional accessibility scenarios, but to let AI agents observe and interact with interfaces. This isn’t a clever hack; it’s a precursor to a deeper shift.

Accessibility APIs were originally designed to help users with visual or motor impairments operate digital systems. But if thoughtfully extended, these APIs could become a universal interface layer for AI agents. Instead of clicking on pixel coordinates or parsing DOM structures, AI agents could understand and operate applications semantically, just as assistive technologies do.

The current accessibility tree already exposes structured elements such as buttons, headers, and input boxes. With some added metadata (such as intent, role, and affordances), it could become a first-class interface for AI agents, letting them perceive and operate applications with more purpose and precision.

Here are a few potential development directions:

1. Context Extraction

Provide a standardized way for LLM agents to query, through accessibility or semantic APIs:

What is shown on the screen?

What elements can it interact with?

What is the user currently doing?

This capability allows agents to truly understand the current application state and react based on context.
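
A hedged sketch of what such a semantic snapshot could look like to an agent (this is not the actual macOS accessibility API; the types below are assumptions for illustration):

```typescript
// Illustrative sketch: the kind of semantic snapshot an agent might request
// instead of reading raw pixels or screenshots.
interface UiElement {
  role: "button" | "textfield" | "heading" | "list" | "link";
  label: string;            // accessible name, e.g. "Add to cart"
  value?: string;           // current content for inputs
  actions: string[];        // e.g. ["press"] or ["focus", "type"]
  intent?: string;          // optional extra metadata: what acting on this does
}

interface ScreenSnapshot {
  app: string;
  focusedElement?: UiElement;
  elements: UiElement[];
}

// An agent could answer "what can I interact with right now?" from this
// structure, rather than guessing from pixel coordinates.
function interactable(snapshot: ScreenSnapshot): UiElement[] {
  return snapshot.elements.filter((e) => e.actions.length > 0);
}

const snapshot: ScreenSnapshot = {
  app: "Orders",
  elements: [
    { role: "heading", label: "Your cart", actions: [] },
    { role: "button", label: "Add to cart", actions: ["press"], intent: "adds the focused item to the cart" },
  ],
};
console.log(interactable(snapshot).map((e) => e.label)); // ["Add to cart"]
```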

2. Intentful Execution

Instead of having an agent manually chain together multiple API calls, provide a high-level interface that lets it declare an intent, such as: “Add an item to the cart and choose the fastest delivery method.”

The back-end system then parses these goals and automatically plans the implementation steps. This reduces the agent’s dependence on the details of the underlying UI implementation and raises the level of abstraction.
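
A minimal sketch of such an intent-level interface, assuming a hypothetical `executeIntent` entry point (nothing here is a real API):

```typescript
// Hypothetical high-level intent API: the agent states the goal; the host
// application (or an execution layer on top of it) plans the concrete steps.
interface Intent {
  goal: string;                               // natural-language objective
  parameters?: Record<string, unknown>;       // structured hints, if known
  constraints?: string[];                     // e.g. "do not change payment method"
}

interface IntentResult {
  status: "completed" | "needs_confirmation" | "failed";
  steps: string[];                            // what the planner actually did
}

async function executeIntent(intent: Intent): Promise<IntentResult> {
  // A real implementation would decompose the goal into UI or API actions;
  // here we just return a placeholder plan.
  return { status: "needs_confirmation", steps: [`plan steps for: ${intent.goal}`] };
}

// Instead of scripting clicks, the agent declares what it wants done:
executeIntent({
  goal: "Add an item to the cart and choose the fastest delivery method",
  parameters: { sku: "ABC-123" },
  constraints: ["do not change payment method"],
}).then((r) => console.log(r.status, r.steps));
```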

3. Fallback UI for LLMs

Accessibility provides a fallback user interface for LLMs: any app that renders an interface can be used by an AI agent, even without a public API. For developers, this points to a new notion of a “rendering layer” – not just visuals or a DOM structure, but a context accessible to agents, perhaps defined through structured annotations or accessibility-first components.

07 The rise of asynchronous agent work

As collaboration between developers and coding agents becomes more fluid, we are seeing a natural shift: workflows are becoming asynchronous. Agents can run in the background, work on multiple tasks in parallel, and proactively report results as they make progress.

This interaction is gradually moving away from the “pair programming” model and more like “task orchestration”: you just set a goal, give it to the agent to execute, and come back later to check the progress.

The point is that this change is not just about handing work off to AI; it also dramatically reduces coordination costs. In the past, you had to remind teammates to update a configuration file, track down a bug, or refactor a component; now, developers can assign those tasks directly to agents that understand intent and execute them in the background.

Work that would otherwise require synchronous meetings, cross-functional handoffs, or lengthy review cycles can now evolve into an ongoing “request-build-validate” cycle.

At the same time, the ways of interacting with agents are constantly expanding. In addition to entering prompts in the IDE or command line, developers can interact with agents in the following ways:

  • Send a message in Slack
  • Add comments to your Figma draft
  • Insert inline comments in code diffs or pull requests (e.g., Graphite’s review assistant)
  • Provide feedback based on the app preview after deployment
  • Use voice or speech interfaces to describe changes verbally

This points to a new development model: AI agents embedded throughout the development lifecycle. They don’t just write code; they can interpret designs, respond to feedback, and troubleshoot bugs across platforms. Developers become “conductors” who decide which task threads to advance, abandon, or merge.

Perhaps in the future, this “branch and delegate” agent model will become the new git fork – no longer a static fork of the code, but an asynchronous stream of tasks that runs dynamically around intent and is merged only when ready.
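
As a rough illustration of that model (all names and states below are hypothetical), a delegated task might be tracked as a small record that carries its intent, its branch, and the checks gating its merge:

```typescript
// Illustrative only: a delegated task in a "branch and delegate" workflow. Each
// task carries its intent, runs asynchronously on its own branch, and merges
// only once its checks pass and a reviewer (human or AI) signs off.
interface DelegatedTask {
  id: string;
  intent: string;                            // what the agent was asked to do
  branch: string;                            // where the agent's work lives
  status: "queued" | "running" | "awaiting_review" | "merged" | "abandoned";
  checks: { name: string; passed: boolean }[];
  requestedVia: "slack" | "figma_comment" | "pr_comment" | "ide" | "voice";
}

const task: DelegatedTask = {
  id: "task-142",
  intent: "Refactor the settings page to use the new design tokens",
  branch: "agent/settings-design-tokens",
  status: "awaiting_review",
  checks: [{ name: "unit tests", passed: true }, { name: "visual diff", passed: false }],
  requestedVia: "figma_comment",
};

console.log(task);
```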

08 MCP is one step closer to becoming a universal standard

We recently published an in-depth analysis of MCP (Model Context Protocol). Since then, momentum in this area has accelerated significantly:

OpenAI has publicly adopted MCP, several new features have made their way into the specification, and more and more tool developers are starting to treat it as the default interface for AI agents to interact with the outside world.

Essentially, MCP addresses two key issues:

First, it provides the LLM with the context it needs to complete tasks, even if they are tasks it has never encountered before;

Second, it replaces N×M custom integrations with a clean, modular model: a tool exposes a standard interface (a “server”) that any agent (a “client”) can use.

We expect adoption to expand further as remote MCP support and de facto registries go live. In the long run, applications may integrate MCP interfaces by default, just as many services today offer APIs by default.

Think about how APIs connect SaaS products to each other and build composable workflows across different tools. MCP can enable similar interconnectivity for AI agents – it transforms siloed tools into interoperable building blocks. A platform with a built-in MCP client is not just “AI-capable” but part of a larger ecosystem with immediate access to a growing network of agent-accessible capabilities.

In addition, MCP clients and servers are logical boundaries, not physically separate systems: any client can also act as a server, and vice versa. In theory, this unlocks powerful composability: an agent that consumes context through an MCP client can also expose its own functionality through a server interface.

For example, a coding agent can act as a client to fetch issues on GitHub, and can also register itself as a server and expose test coverage or code analysis results to other agents.
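
A minimal sketch of the server half of that example, using the TypeScript MCP SDK (exact imports and method signatures may differ between SDK versions, and the coverage numbers here are stand-ins):

```typescript
// An agent process exposing its own capability (test-coverage results) as an
// MCP server so other agents can consume it. The same process could also act
// as an MCP *client* of other servers (e.g. fetching GitHub issues).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "coverage-agent", version: "0.1.0" });

// Expose one tool: report test coverage for a given package.
server.tool(
  "get_test_coverage",
  { packageName: z.string() },
  async ({ packageName }) => {
    // In a real agent this would come from the last test run; hard-coded here.
    const coverage = { lines: 87.4, branches: 72.1 };
    return {
      content: [{ type: "text", text: `${packageName}: ${JSON.stringify(coverage)}` }],
    };
  },
);

server.connect(new StdioServerTransport()).catch(console.error);
```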

09 Modular foundational capabilities

As vibe-coding agents become more powerful, one thing becomes clear: agents can generate a lot of code, but they still need reliable interfaces to build on.

Just as human developers rely on Stripe for payments, Clerk for authentication, or Supabase for database capabilities, AI agents need clear, composable service foundations like this to build stable and reliable applications.

In many ways, these services – those with clear boundaries, friendly SDKs, and reasonable defaults to reduce the likelihood of errors – are increasingly becoming runtime interfaces for AI agents.

If you’re building a tool that builds SaaS applications, you definitely don’t want your agent writing its own authentication system or implementing billing logic from scratch; you want it to use established services like Clerk and Stripe directly.

As this model matures, we may see more and more services begin to optimize their experience for AI agents: in addition to open APIs, they also provide data structure definitions (schemas), capability metadata, and example flows to help agents integrate and use these services more stably.

Some services may even have a built-in MCP server by default, turning every core functional module into a component that AI agents can directly understand and call securely. Imagine Clerk providing an MCP server that allows agents to query available products, create new billing plans, or update a customer’s subscription information—all with permission scopes and constraints predefined.

This way, instead of hand-writing API calls or digging through docs for the right method, an agent can simply say, “Create a ‘Pro’ plan for $49 per month with support for usage-based overage charges.”

Clerk’s MCP server would expose this capability, validate the parameters, and handle the orchestration securely.
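
To sketch what an agent-facing billing capability could look like (purely hypothetical: this is not Clerk’s or Stripe’s actual API, and the field names, guardrails, and scope string are invented for illustration):

```typescript
// Hypothetical agent-facing capability for creating a billing plan, with the
// guardrails encoded in the schema rather than left to the agent's judgment.
import { z } from "zod";

const createPlanArgs = z.object({
  name: z.string(),                                    // e.g. "Pro"
  monthlyPriceUsd: z.number().positive().max(10_000),  // guardrail baked into the schema
  usageBasedOverage: z.boolean().default(false),
  scope: z.literal("billing:plans:create"),            // the capability the agent was granted
});

type CreatePlanArgs = z.infer<typeof createPlanArgs>;

async function createPlan(args: CreatePlanArgs) {
  // A real server would check the scope against the agent's grant, create the
  // plan via the provider's API, and write an audit record.
  return { planId: "plan_example_123", ...args };
}

// "Create a 'Pro' plan for $49 per month with usage-based overage charges."
createPlan(createPlanArgs.parse({
  name: "Pro",
  monthlyPriceUsd: 49,
  usageBasedOverage: true,
  scope: "billing:plans:create",
})).then((plan) => console.log(plan.planId));
```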

Just as the early web era needed Rails generators and rails new to accelerate development, the AI agent era needs trusted foundational modules: plug-and-play identity, usage tracking, billing logic, and access control – modules abstract enough for code generation yet flexible enough to grow with the application.

10 Summary

These trends point to a broader shift: as foundation model capabilities keep improving, developer behavior changes with them, and in response we are seeing new toolchains and protocols such as MCP take shape.

It’s not just about layering AI on top of old workflows; it’s a redefinition of how software is built – with agents, context, and intent at its core. Many developer-facing tool layers are undergoing fundamental change, and we look forward to participating in and investing in the next generation of development tools.
