a16z predicts: AI is rewriting the logic of development, and these 9 new paradigms will shape the next decade of technology

AI is reshaping software development from tools to foundations and from code to processes. Like the shift from the carriage to the car, this transformation is upending core concepts such as version control and documentation and opening a new era of collaboration between developers and AI agents.

Have you noticed how completely programming is changing? Developers are moving from simply using AI tools to treating AI as a new foundation for building software. This is not a small adjustment but a full paradigm shift. Think about it: the core concepts we are used to—version control, templates, documentation, even the notion of a “user”—are being redefined by agent-driven workflows.

It reminds me of the transition from carriages to cars. At first, people thought of cars as “horseless carriages,” but soon realized that the entire transportation system needed to be rethought: roads, regulations, urban layout all changed. We are now living through a similar transformation. AI agents are both collaborators and consumers, which means we need to redesign everything.

You’ll see major shifts in the underlying development tools: prompts can now be treated like source code, dashboards can be queried conversationally, and documentation is written not only for humans but also for machines. The Model Context Protocol (MCP) and AI-native IDEs point to a deep reinvention of the development loop itself – we are not just programming differently, we are designing tools for a world where AI agents participate fully in the software loop.

It’s like when the personal computer appeared in the 1970s and we moved from mainframe terminals to personal workstations. At the time, no one could have imagined that everyone would own a computer. We are now facing a similar turning point: every developer will have their own team of AI agents.

Today we’ll look at nine forward-looking developer trends. They are still early, but each is grounded in a real pain point and shows what the future might look like, ranging from rethinking version control for AI-generated code to LLM-driven user interfaces and documentation.

AI-native Git: Reinventing version control for AI agents

The idea may sound crazy at first, but hear me out. Now that AI agents are increasingly writing or modifying most of the application code, what developers care about is starting to change. Instead of agonizing over how each line was written, we care about whether the output behaves correctly. Has the change been tested? Does the app still work as expected?

There is an interesting phenomenon here, which I call the “upward shift of truth”. Previously, the source code was the truth. Now, the combination of prompts and tests is the truth. Think about it, if I tell you to “write a to-do list app in React” and the AI agent generates 1000 lines of code, do you really care how each line of code is written? Or do you care more about whether it really works?

This upends a long-standing mindset. Git was designed to track the exact history of handwritten code, but with coding agents, that granularity matters less. Developers usually don’t review every diff – especially when changes are large or automatically generated – they just want to know whether the new behavior matches the expected result.

This reminds me of a fundamental principle of software engineering: abstraction. We are always looking for the right abstraction layer. Machine code was too hard to read, so we got assembly; assembly was too low-level, so we got high-level languages and compilers. Now line-by-line code may itself be too low-level, and we need a new abstraction.

As a result, the Git SHA – once the standard reference for “the state of the codebase” – is starting to lose semantic value. A SHA tells you that something changed, but not why, or whether it works. In AI-first workflows, a more useful unit of truth may be the combination of the prompt that generated the code and the tests that verify its behavior.

In this world, your app’s “state” may be better represented by its generation inputs (prompts, specifications, constraints) and a set of passing assertions than by a frozen commit hash. Imagine a future developer saying, “Show me the test coverage of prompt v3.1” instead of “Show me the diff of commit SHA abc123”.

In fact, we may end up versioning prompt + test bundles as first-class units, with Git relegated to tracking those bundles rather than just the raw source code. This is what I call “intent-driven version control”: we’re not versioning code, we’re versioning intent.
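
To make this concrete, here is a minimal sketch of what such a versioned bundle might look like, assuming a hypothetical IntentBundle format (the field names are illustrative, not an existing standard):

```typescript
// Hypothetical shape of an "intent bundle": the prompt, constraints,
// and tests become the versioned unit; generated code is an artifact.
interface IntentBundle {
  id: string;                  // e.g. "todo-app/prompt-v3.1"
  prompt: string;              // the natural-language spec given to the agent
  constraints: string[];       // stack choices, API contracts, style rules
  tests: string[];             // paths to behavioral tests that must pass
  generatedArtifact: {
    commitSha: string;         // Git still logs the output...
    model: string;             // ...plus which model or agent produced it
    reviewedBy?: string;       // optional human sign-off
  };
}

const bundle: IntentBundle = {
  id: "todo-app/prompt-v3.1",
  prompt: "Write a to-do list app in React with local persistence.",
  constraints: ["TypeScript", "React 18", "no external state library"],
  tests: ["tests/todo.behavior.test.ts"],
  generatedArtifact: {
    commitSha: "abc123",
    model: "some-coding-agent",
    reviewedBy: "alice",
  },
};
```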

Further, in agent-driven workflows, the source of truth may shift upstream to prompts, data schemas, API contracts, and architectural intent. Code becomes a byproduct of these inputs, more like a compiled artifact than manually written source. In this world, Git starts to act less as a workspace and more as an artifact log – a place that tracks not only what changed, but why and by whom.

Picture this scenario: you’re debugging an issue, and instead of asking “who changed this line and when,” you’re asking “which agent made this decision, based on what prompt, and where did a human reviewer sign off?” That is the future of code archaeology.

Dashboard Evolution: Synthetic AI-powered dynamic interface

This is a trend that I feel is seriously underestimated. For years, dashboards have been the primary interface for interacting with complex systems such as the observability stack, analytics platform, cloud console (think AWS), and more. But their designs often suffer from UX overload: too many knobs, charts, and tabs, forcing users to both look for information and figure out what to do with it.

A friend of mine who is an operations engineer told me that he spent half his time switching between dashboards, trying to piece together what was wrong with the system. This is a pure information-overload problem: not a lack of data, but too much data and no good way to organize it.

Especially for non-power users or cross-team use, these dashboards can become intimidating or inefficient. Users know what they want to achieve, but they don’t know where to look or what filters to apply to get there. It’s like hunting for one specific screwdriver in a huge toolbox where all the tools look roughly the same.

The latest generation of AI models offers a potential shift. Instead of treating dashboards as rigid canvases, we can layer search and interaction on top of them. LLMs can now help users find the right controls (“Where can I adjust the throttling settings for this API?”); synthesize screenfuls of data into easy-to-understand insights (“Summarize error trends across all services in the staging environment over the last 24 hours”); and surface unknowns (“Based on what you know about my business, generate a list of metrics I should focus on this quarter”).

Here’s a cool concept that I call “contextual data presentation.” Traditional dashboards are static: they display fixed metrics in a fixed way. But AI-powered dashboards can reconfigure themselves based on your current tasks, your role, and even your past behavior patterns.

We have seen technical solutions like Assistant UI that enable AI agents to use React components as tools. Just as content becomes dynamic and personalized, the UI itself can become adaptive and conversational. A purely static dashboard can quickly become obsolete in the face of a natural language-based interface that is reconfigured based on user intent.

For example, a user can say “Show anomalies in Europe over the last weekend,” and the dashboard will be reshaped to show that view, including summarized trends and related logs. Or, more powerfully, “Why did our NPS score drop last week?” AI may extract survey sentiment, correlate it to product deployment, and generate a brief diagnostic narrative.

But there’s a deeper shift here: dashboards aren’t just designed for humans anymore. AI agents also need to “see” and “understand” system state. This means we may need a two-mode interface: one human-friendly and one agent-friendly. Consider this scenario: an AI agent is monitoring your system, it doesn’t need beautiful charts, it needs structured data and actionable context.

It’s like designing for different senses: humans see with their eyes, agents “sense” with APIs. Future dashboards may need to serve both “species”, which is a completely new design challenge.
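
As a thought experiment, a dual-audience endpoint might return one block meant for rendering and one meant for reasoning. The shape below is purely illustrative, not any existing dashboard API:

```typescript
// Hypothetical payload served by a dashboard backend to two "species"
// of consumer: a human-facing chart spec and an agent-facing summary.
interface DualModeView {
  human: {
    title: string;              // what the person sees
    chart: "line" | "bar";      // rendering hint for the UI
    series: { t: string; value: number }[];
  };
  agent: {
    metric: string;             // machine-readable identifier
    window: string;             // e.g. "24h"
    anomalies: { t: string; value: number; severity: "low" | "high" }[];
    suggestedActions: string[]; // actionable context, not pixels
  };
}

const errorTrend: DualModeView = {
  human: {
    title: "Errors across staging services (last 24h)",
    chart: "line",
    series: [{ t: "2025-01-01T00:00Z", value: 12 }],
  },
  agent: {
    metric: "staging.errors.count",
    window: "24h",
    anomalies: [{ t: "2025-01-01T03:00Z", value: 480, severity: "high" }],
    suggestedActions: ["inspect the most recent deploy", "check payment-service logs"],
  },
};
```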

Documentation is becoming a hybrid of tools, indexes, and interactive knowledge bases

This shift excites me. Developer behavior around documentation is changing. Instead of reading a table of contents or scanning from top to bottom, users now start with a question. The mental model is no longer “let me study this spec” but “reorganize this information around what I need”.

I remember when I first started programming, I would spend hours reading API documentation from start to finish. And now? I open the docs, search directly for what I want, or ask an AI “how do I do X with this library”. That’s not laziness; it’s efficiency evolving.

This subtle shift – from passive reading to active querying – is changing what a document needs to be. They are no longer just static HTML or markdown pages, but are becoming interactive knowledge systems, underpinned by indexed, embedded, and tool-aware AI agents.

As a result, we are seeing the rise of products like Mintlify, which not only structure documents into semantically searchable databases but also act as a contextual source for cross-platform programming AI agents. Mintlify pages are now often referenced by AI programming agents – whether in AI IDEs, VS Code extensions, or terminal agents – because programming agents use the latest documentation as the underlying context for generation.

This changes the purpose of documentation: they are no longer just for human readers, but also for AI agent consumers. In this new dynamic, the document interface becomes a kind of AI agent instruction. It not only exposes the original content but also explains how to use a system correctly.

There is an interesting trend here, which I call the “dual nature of documentation”. Human readers need context, examples, and explanations. AI agents require structured data, clear rules, and executable instructions. Good documentation needs to meet both needs.

Future documentation may have three layers: human reading (with storytelling and explanation), AI consumption (structured and precise), and interaction (allowing questioning and exploration). It’s like designing a textbook for different learning styles, except this time for different “ways of thinking.”
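
One way to picture this dual nature is a documentation entry that carries both human-facing prose and a machine-facing usage contract. A minimal sketch with entirely hypothetical field names:

```typescript
// Hypothetical structure for a documentation entry that serves
// both human readers and agent consumers.
interface DocEntry {
  slug: string;
  humanLayer: {
    title: string;
    explanation: string;   // narrative context and examples
  };
  agentLayer: {
    signature: string;     // precise, structured usage
    constraints: string[]; // rules the agent must respect
    embeddingId?: string;  // link into a semantic search index
  };
}

const rateLimitDoc: DocEntry = {
  slug: "api/rate-limits",
  humanLayer: {
    title: "Rate limits",
    explanation:
      "Requests are limited per API key. Back off exponentially on 429 responses.",
  },
  agentLayer: {
    signature: "GET /v1/* -> 429 { retryAfterMs: number }",
    constraints: ["max 100 requests/min per key", "respect retryAfterMs"],
  },
};
```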

From template to build: vibe coding replaces create-react-app

This trend reminds me of the transition from the industrial revolution to the digital revolution. In the past, starting a project meant choosing a static template, such as a boilerplate GitHub repository or a CLI like create-react-app, create-next-app, or rails new. These templates serve as scaffolding for new applications, providing consistency but little customization.

Developers either conform to any defaults provided by the framework or risk a lot of manual refactoring. It’s like standardized production in the industrial age: you can have a car of any color, as long as it’s black.

Now, this dynamic is changing with the emergence of text-to-app platforms like Replit, Same.dev, Lovable, Convex’s Chef, and Bolt, and AI IDEs like Cursor. Developers can describe what they want (e.g., “a TypeScript API server with Supabase, Clerk, and Stripe”) and get a customized project scaffold in seconds.

The result is a launcher that is not generic but personalized and purposeful, reflecting the developer’s intention and the technology stack they choose. It’s like moving from industrial production to mass customization. Each project can have a unique starting point instead of starting with the same template.

This unlocks a new distribution model for the ecosystem. Rather than a few frameworks capturing most new projects, we may see a broader, longer-tail distribution of composable, stack-specific setups, with tools and architectures mixed and matched dynamically. It’s less about choosing a framework and more about describing an outcome around which the AI assembles a stack.
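
Purely as an illustration (this is not the interface of any of the tools above), the personalized starting point might reduce to a small intent spec that a generator expands into a full project:

```typescript
// Hypothetical "project intent" that a text-to-app tool could expand
// into a personalized scaffold instead of a one-size-fits-all template.
interface ProjectIntent {
  description: string;                   // the natural-language ask
  stack: string[];                       // explicit technology choices
  integrations: string[];                // third-party services to wire up
  conventions?: Record<string, string>;  // optional team preferences
}

const intent: ProjectIntent = {
  description: "A TypeScript API server with auth, payments, and a database",
  stack: ["TypeScript", "Node.js"],
  integrations: ["Supabase", "Clerk", "Stripe"],
  conventions: { lint: "eslint", tests: "vitest" },
};
```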

But here’s an interesting side effect, which I call “framework democratization.” Previously, choosing a framework was a big decision because switching was expensive. Now framework choice has become more like choosing what to wear today: it can change at any time.

Of course, this also brings new challenges. Standardization has its advantages – easier teamwork, simpler troubleshooting, and faster knowledge dissemination. But as AI agents can understand project intent and perform large-scale refactoring, the cost of experimentation is significantly reduced.

This means that we may see a more fluid tech stack ecosystem where choice is no longer a permanent decision, but a starting point for evolution.

Beyond .env: Manage secrets in an AI agent-powered world

This is an issue many people overlook, but it is extremely important. For decades, .env files have been the default way for developers to manage secrets (such as API keys, database URLs, and service tokens) locally. They are simple, portable, and developer-friendly. But in an AI agent-driven world, this paradigm is starting to crumble.

Consider this scenario: you have an AI agent writing code for you, and it needs to connect to your database. Do you really want to hand it the database password directly? And if something leaks, who is responsible for the breach? The agent? You? Or the agent’s provider?

When an AI IDE or AI agent writes code, deploys services, and orchestrates environments on our behalf, it is no longer clear who owns the .env. More to the point, the traditional notion of “environment variables” may itself be outdated. What we need is secret management that can grant precise permissions, be audited, and be revoked.

We see early signs of what this might look like. For example, the latest MCP specification includes an OAuth 2.1-based authorization framework, hinting at a shift toward giving AI agents scoped, revocable tokens instead of raw secrets. We can imagine a setup where, instead of your actual AWS key, the agent gets a short-lived credential or capability token that permits one narrowly defined operation.

Another way this could evolve is through the rise of local secret proxies – services that run on your machine or alongside your application and act as intermediaries between AI agents and sensitive credentials. An agent can request a capability (“deploy to staging” or “send logs to Sentry”), and the proxy decides whether to grant it – in real time and fully audited.
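
Here is a minimal sketch of what such a proxy could look like, assuming a hypothetical in-process broker; a real system would integrate with OS keychains, OAuth flows, and durable audit storage:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical capability grant: short-lived, narrowly scoped, auditable.
interface CapabilityGrant {
  token: string;
  capability: string;      // e.g. "deploy:staging"
  expiresAt: Date;
}

class SecretProxy {
  private auditLog: { agent: string; capability: string; at: Date; granted: boolean }[] = [];
  // policy maps an agent to the capabilities it is allowed to request
  constructor(private policy: Record<string, string[]>) {}

  request(agent: string, capability: string): CapabilityGrant | null {
    const granted = this.policy[agent]?.includes(capability) ?? false;
    this.auditLog.push({ agent, capability, at: new Date(), granted });
    if (!granted) return null;
    return {
      token: randomUUID(),                          // never the raw secret
      capability,
      expiresAt: new Date(Date.now() + 5 * 60_000), // five-minute lifetime
    };
  }

  audit() { return [...this.auditLog]; }
}

// Usage: the coding agent asks for a scoped capability, not the AWS key.
const proxy = new SecretProxy({ "coding-agent": ["deploy:staging", "logs:send"] });
const grant = proxy.request("coding-agent", "deploy:staging");
console.log(grant?.capability, proxy.audit().length);
```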

I call this trend “capability-driven security.” Instead of giving AI agents keys (secrets), we give them permissions (capabilities). It’s like moving from “trust but verify” to “zero trust, explicitly enabled.”

The future of secret management may look more like a permission system, with clear scopes for every operation, clear roles for each AI agent, and all access logged and audited. This is not only safer, but also more in line with how AI agents work: they don’t need to know everything, just what they need to do their job.

Accessibility as a universal interface: Seeing applications through the eyes of LLMs

This trend reminds me of the idea of “accidental innovation”. We’re starting to see a new class of apps (like Granola and Highlight) that request accessibility permissions on macOS, not for traditional accessibility use cases, but so that AI agents can observe and interact with interfaces. This isn’t a hack; it’s a harbinger of a deeper shift.

Accessibility APIs were originally built to help users with vision or motor impairments navigate digital systems. Now, these same APIs are becoming the universal language for AI agents to understand and control the digital environment. It’s like Braille accidentally becoming the way robots read the world.

Think about this: Accessibility APIs have solved the problem of “how to make machines understand human interfaces”. They provide a semantic description of the element: this is a button, this is the input box, this is the link. For AI agents, this is the perfect data structure.

Here’s a profound insight: we’re always looking for ways to make AI agents interact with the human world, but the answer may be right in front of us. Accessibility technology has been standardized, implemented in all mainstream operating systems, and has been tested in practice for more than ten years.

If thoughtfully extended, this could become a universal interface layer for AI agents. Agents could observe applications semantically, the way assistive technologies do, rather than clicking on pixel coordinates or scraping DOMs. Accessibility trees already expose structured elements such as buttons, headers, and input boxes. Extended with metadata such as intents, roles, and functions, they could become a first-class interface through which AI agents sense and operate applications with purpose and precision.

In fact, there are several possible paths for this direction:

The first is context extraction: a standard way for LLM agents to query, via accessibility or semantic APIs, what is on screen, what can be interacted with, and what the user is doing. Imagine an AI agent that can say “tell me all the clickable elements on this screen” or “where is the user right now” and get a structured answer instantly.

The second is intent execution, where instead of expecting the AI agent to manually chain multiple API calls, it is better to expose a high-level endpoint, let it declare the goal (“add items to the cart, choose the fastest delivery”), and then let the backend calculate the specific steps. It’s like telling the driver to “take me to the airport” instead of giving every turn instruction.

The third is a fallback UI for LLMs: accessibility becomes an alternate interface, so any app that renders to the screen is usable by an agent even if it never exposes an API. For developers, this hints at a new “rendering layer” – not just a visual or DOM layer, but an agent-accessible context, possibly defined by structured annotations or accessibility-first components.

Together, these three directions point to a future where applications are no longer designed only for the human eye, but also for the AI “eye”. Each interface element carries rich semantic information that describes not only what it looks like, but also what it can do and how to use it.
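
To sketch the idea (this is not any platform’s actual accessibility API; the node shape and helper below are hypothetical), an agent-readable view of a screen might look like this:

```typescript
// Hypothetical semantic node, loosely modeled on accessibility trees:
// role + label + actions, enriched with intent metadata for agents.
interface SemanticNode {
  role: "button" | "textbox" | "link" | "heading";
  label: string;
  intent?: string;               // extra metadata: what this element is for
  actions: ("click" | "type" | "focus")[];
  children?: SemanticNode[];
}

// "Tell me all the clickable elements on this screen."
function clickable(node: SemanticNode): SemanticNode[] {
  const self = node.actions.includes("click") ? [node] : [];
  return self.concat((node.children ?? []).flatMap(clickable));
}

const screen: SemanticNode = {
  role: "heading",
  label: "Checkout",
  actions: ["focus"],
  children: [
    { role: "textbox", label: "Promo code", actions: ["type", "focus"] },
    { role: "button", label: "Add to cart", intent: "commerce.add-item", actions: ["click"] },
    { role: "button", label: "Choose fastest delivery", intent: "commerce.shipping", actions: ["click"] },
  ],
};

console.log(clickable(screen).map((n) => n.label)); // ["Add to cart", "Choose fastest delivery"]
```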

This leads to an interesting idea: what if we made accessibility design the standard for “machine readability”? Every new UI element, every new interaction mode, is designed with machine understanding in mind from the start. This not only benefits people with disabilities but also AI agents.

In the future, we may see a trend of “dual design”: designed not only for humans, but also for AI agents. The principle of accessibility may be a bridge between the two. It’s like designing a language for a multicultural world: not only for native speakers, but also for learners and translators.

The rise of asynchronous AI agent work

This trend reflects a fundamental shift in how work gets done. As developers become more fluent working with coding agents, we are seeing a natural move toward asynchronous workflows: agents run in the background, pursue parallel threads of work, and report back as they make progress.

This reminds me of the transition from synchronous to asynchronous programming. In the beginning, programs were synchronous: do one thing, then the next. Then we realized that waiting wastes time, and concurrency became king. Now we’re experiencing the same shift at the level of human-machine collaboration.

This mode of interaction starts to look less like pair programming and more like task orchestration: you delegate a goal, let the agent run, and check in later. It’s like having a very capable intern: you hand him a project, let him work on it, and focus on other things.

The point is that this doesn’t just offload effort; it also compresses coordination. Rather than pinging another team to update a configuration file, triage an error, or refactor a component, developers can increasingly assign the task directly to an agent that acts on their intent and executes in the background.

This change has a deep meaning: we are moving from synchronous collaboration to asynchronous symphony. Traditional software development is like a face-to-face meeting: everyone is present at the same time and discusses in real time. The new model is more like a distributed orchestra performance: each player (AI agent) plays independently according to the general score (specification), and the conductor (developer) coordinates the whole.

The interfaces for working with agents are also expanding. Beyond the IDE or CLI prompt, developers are starting to interact with agents by:

  • Sending a message in Slack
  • Commenting on a Figma mockup
  • Leaving inline comments on code diffs or PRs
  • Giving feedback on a deployed app preview
  • Using a voice or call-based interface to describe the change out loud

This creates a model where AI agents are used throughout the development lifecycle. They not only write code but also interpret designs, respond to feedback, and triage errors across platforms. Developers become coordinators, deciding which threads to pursue, discard, or merge.
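
A minimal sketch of what this delegate-and-check-later pattern could look like in code – the AgentPool class and its methods are hypothetical stand-ins for whatever agent platform is in use:

```typescript
// Hypothetical async delegation: hand a goal to a background agent,
// keep working, and check in later. No real agent API is implied.
type TaskStatus = "running" | "needs-review" | "done";

interface AgentTask {
  id: string;
  goal: string;
  status: TaskStatus;
  report?: string;
}

class AgentPool {
  private tasks = new Map<string, AgentTask>();

  delegate(goal: string): string {
    const id = `task-${this.tasks.size + 1}`;
    this.tasks.set(id, { id, goal, status: "running" });
    // In a real system, the agent would now work in the background.
    return id;
  }

  // Simulate the agent reporting back later.
  complete(id: string, report: string) {
    const t = this.tasks.get(id);
    if (t) Object.assign(t, { status: "needs-review" as TaskStatus, report });
  }

  check(id: string): AgentTask | undefined {
    return this.tasks.get(id);
  }
}

// Usage: delegate, move on, review asynchronously.
const pool = new AgentPool();
const id = pool.delegate("Refactor the billing component to use the new API contract");
// ...developer focuses on something else...
pool.complete(id, "Refactor done; 14 tests updated, 2 need human review.");
console.log(pool.check(id)?.status); // "needs-review"
```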

Perhaps most interestingly, this asynchronous pattern may change our understanding of “branching”. A traditional Git branch is a fork of code. Future “branches” could be forks of intent, each explored in different ways by different AI agents. Instead of merging code, developers evaluate and choose different solution paths.

MCP is one step closer to becoming a universal standard

MCP (Model Context Protocol) is one of the most exciting protocol innovations in recent times. We recently published an in-depth analysis of MCP. Since then, the momentum has accelerated: OpenAI publicly adopted MCP, several new features were merged into the specification, and tool makers have begun converging on it as the default interface between AI agents and the real world.

It reminds me of the role of HTTP in the 90s. HTTP wasn’t the first network protocol, nor the most sophisticated, but it was simple, good enough, and widely supported. MCP may now be on a similar path.

At its core, MCP solves two big problems. It gives LLMs the right context to accomplish tasks they may never have seen before, and it replaces N×M custom integrations with a clean, modular model in which a tool exposes a standard interface (a server) that any AI agent (a client) can use.
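
To make “exposes a standard interface (a server)” concrete, here is a small sketch following the official TypeScript SDK’s documented pattern; the tool itself is invented for illustration, and the SDK’s exact API surface may evolve alongside the spec:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Any MCP-capable agent (client) can discover and call this tool
// without a bespoke integration.
const server = new McpServer({ name: "example-tools", version: "0.1.0" });

// A made-up capability: look up the status of a deployment environment.
server.tool(
  "get_environment_status",
  { environment: z.enum(["staging", "production"]) },
  async ({ environment }) => ({
    content: [
      { type: "text", text: `${environment}: healthy, last deploy 12 minutes ago` },
    ],
  })
);

// Serve over stdio so a local agent or IDE can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```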

Here’s a profound insight: we’re witnessing the birth of “capability standardization”. Just as USB standardizes device connectivity, MCP is standardizing AI agent capabilities. Any tool can expose its capabilities, and any AI agent can use these features without the need for custom integrations.

We expect to see wider adoption as remote MCPs and de facto registries go live. Over time, apps may start to include MCP interfaces by default. Think about how APIs allow SaaS products to plug into each other and combine workflows across tools. MCPs can do the same for AI agents by turning standalone tools into interoperable building blocks.

This leads to an interesting idea: the emergence of a “capability marketplace”. Imagine a future with a vast capability registry where AI agents can discover and use new capabilities, just as developers now use npm or PyPI. Need to send an email? There is an MCP server. Need image processing? There is an MCP server. Need custom business logic? There is an MCP server.

This is not just a technical standard, it is a new business model: Capabilities as a Service. Anyone can create an MCP server, expose a useful ability, and then have all AI agents use it. This is like the next stage of cloud computing: not only computing resources are commoditized, but also capabilities themselves.

Abstract primitives: Every AI agent requires authentication, billing, and persistent storage

This trend reflects a fundamental evolutionary law: the level of abstraction keeps rising. As vibe-coding agents get more powerful, one thing becomes clear: AI agents can generate a lot of code, but they still need something solid to plug into.

Just as human developers rely on Stripe for payments, Clerk for authentication, or Supabase for database capabilities, AI agents need the same clean, composable service primitives to build reliable applications. Here’s an interesting observation: AI agents aren’t replacing infrastructure; they’re making better use of it.

In many ways, these services – APIs with clear boundaries, ergonomic SDKs, and reasonable defaults that reduce the chance of failure – are increasingly acting as runtime interfaces for AI agents.

Consider this scenario: you tell an agent to “create a SaaS app with user authentication and subscription management.” What does the agent need? An authentication system (Clerk), a payment system (Stripe), a database (Supabase), and probably email, file storage, and more.

This leads to a profound insight: AI agents reshape the concept of a “framework.” A traditional framework gives you a structure that you fill with logic. An agent-era framework gives you a set of primitives that the agent can combine into any structure.

As this pattern matures, we may begin to see services optimized for AI agent consumption, exposing not only APIs but also schemas, capability metadata, and example workflows that help agents integrate them more reliably.

It’s like going from “bottom up” to “top down.” Previously, you started with infrastructure and built layer by layer. Now, you start with intent, and the AI agent helps you find the right building blocks. This is not reverse engineering, but forward design.

Some services may even start shipping with MCP servers by default, translating each core primitive into something AI agents can reason about and use safely out of the box. Imagine Clerk exposing an MCP server that lets agents query available products, create new billing plans, or update customer subscriptions—all within predefined permission scopes and constraints.

Instead of handwriting API calls or searching in documents, the AI agent can say “Create a $49 per month Pro plan for Product X that supports usage-based overages,” and Clerk’s MCP server exposes this capability, validates parameters, and securely handles orchestration.

This brings an interesting phenomenon that I call “declarative infrastructure”. The AI agent does not need to know how to implement user authentication, it only needs to declare “this application requires user authentication” and let the appropriate primitive service handle the specific implementation.
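
Here is a minimal sketch of that declarative style, using an entirely hypothetical AppRequirements shape and resolver rather than any real provider’s API:

```typescript
// Hypothetical declaration: the agent states what the app needs,
// and a resolver maps each requirement to a primitive service.
interface AppRequirements {
  auth: boolean;
  payments?: { model: "subscription" | "one-time" };
  database?: { kind: "postgres" | "kv" };
}

// Illustrative mapping from requirement to primitive provider.
function resolvePrimitives(req: AppRequirements): string[] {
  const plan: string[] = [];
  if (req.auth) plan.push("auth -> Clerk (hosted sign-in, sessions)");
  if (req.payments) plan.push(`payments -> Stripe (${req.payments.model} billing)`);
  if (req.database) plan.push(`database -> Supabase (${req.database.kind})`);
  return plan;
}

const saasApp: AppRequirements = {
  auth: true,
  payments: { model: "subscription" },
  database: { kind: "postgres" },
};

console.log(resolvePrimitives(saasApp));
// [ "auth -> Clerk ...", "payments -> Stripe ...", "database -> Supabase ..." ]
```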

The deeper implication is that this could give rise to “instant best practices.” These services don’t just provide functionality; they encode best practices. When an agent integrates Stripe, it automatically inherits good patterns for handling subscriptions, managing failed payments, processing refunds, and more.

It’s like standardized components in the construction industry: you don’t design an electrical system from scratch every time, you use standard switches, sockets, and wiring practices. Likewise, AI agents don’t need to build authentication from scratch every time; they use proven, standardized services.

The most interesting thing is the “capability ecosystem” that this could create. As more services become AI agent-friendly, we may see a new market emerge: primitive services designed specifically for AI agents. The SDKs of these services are no longer aimed at human developers, but at AI agents, exposing powerful capabilities with simple interfaces and clear constraints.

Conclusion: The next chapter in software development

These nine trends point to a broader shift, where new developer behaviors are emerging alongside more capable foundation models. In response, we are seeing new toolchains and protocols like MCP take shape. This is not about layering AI onto old workflows, but about redefining how software is built at its core, with AI agents, context, and intent.

I would like to emphasize that these trends are not isolated. They reinforce each other and together form a new developer ecosystem. AI-native version control relies on standardized capability interfaces; synthetic interfaces benefit from semantic accessibility APIs; asynchronous collaboration requires robust secret management.

This reminds me of the pattern of every technological revolution: at first, the new technology mimics old patterns; then we explore its unique possibilities; finally, we reinvent the whole system to take full advantage of the new capabilities. We are in the transition from the second stage to the third.

Many layers of developer tooling are undergoing a fundamental shift – not just a technical upgrade but a change in mindset. We are moving from “programming” to “expressing intent,” from “version control” to “intent tracking,” and from “documentation” to “knowledge networks.”

Perhaps most importantly, these trends herald a fundamental shift in the role of software development. Future developers may be more like symphony conductors, coordinating the work of many AI agents, rather than the solo instrumentalists they are today.

This shift is both exciting and somewhat unsettling. We are entering uncharted territory, and new rules are still forming. But history tells us that every technological revolution creates more opportunities than it destroys. The key is to keep an open mind and adapt to change while sticking to the core values that make us great developers: solving problems, creating value, and serving users.

We’re excited to build and invest in next-generation tools not just to program more efficiently, but to solve previously unsolvable problems and create previously unimaginable possibilities. That is what technological progress is really about: not just doing things faster, but doing more and doing them better.
