a16z: AI enables everyone to be ten times more efficient, but no company has become ten times more valuable because of it

Author: George Sivulka

Translation: Deep Tide TechFlow

Deep Tide TechFlow Guide: AI has increased individual productivity tenfold, but no company has become ten times more valuable because of it. A16z investor George Sivulka (also the founder of AI company Hebbia) argues that the problem isn't the technology itself, but that organizations haven't restructured around it. He proposes seven dimensions that distinguish "Institutional AI" from "Personal AI" (coordination, signals, biases, edge advantage, results orientation, empowerment, and promptless operation). In essence: changing the motor isn't enough; you need to redesign the entire factory.

Full text below:

AI has just boosted everyone’s productivity by 10 times.

But no company has become 10 times more valuable because of it.

Where did the productivity go?

This isn’t the first time this has happened.

In the 1890s, electricity promised huge productivity gains.

New England textile mills, originally built around steam engine rotary power, quickly replaced steam engines with faster electric motors.

But for thirty years, electrified factories saw little increase in output. The technology was far ahead. But the organization didn’t keep up.

Not until the 1920s, when factories completely redesigned their production lines (assembly lines, a dedicated motor for each machine, workers and machines performing entirely different tasks), did electrification deliver real returns.

Caption: The three evolutions of Lowell Textile Mill. From left to right: 1890 steam-powered factory, 1900 electric-powered factory, 1920 “unit-driven” factory (rebuilt from scratch into an electric assembly line).

The returns didn’t come from the technology itself, nor from making individual workers or machines spin faster. They came only when the system and the technology were finally redesigned together.

This is one of the most expensive lessons in technological history, and we are now relearning it.

By 2026, AI is delivering 10x productivity boosts for those who know how to leverage it. But that’s not enough. We’ve changed the motor, but haven’t redesigned the factory.

Because a simple fact remains: high individual efficiency does not equal high organizational efficiency.

Most AI products give the illusion of high efficiency without truly creating value. Most of the AI use cases you see are individuals on Twitter or in Slack showing off their "maximized efficiency," with zero real impact.

The phrase repeated over the past year, "service as software," is on the right track but lacks a blueprint, and it overlooks the bigger picture. The real transformation isn’t just from tools to services; it is building technology and systems together (whether by transforming old ones or starting from scratch). A truly efficient future requires entirely new categories of products: the factories of tomorrow.

Efficient organizations need “institutional-level intelligence.”

This article will deeply analyze the seven dimensions distinguishing “Institutional AI” from “Personal AI.” Over the next decade, all companies in the B2B AI space will be built on these differences:

Caption: Comparison table of the seven pillars of institutional intelligence

Seven Pillars of Institutional Intelligence

  1. Coordination

Personal AI creates chaos.

Institutional AI fosters coordination.

Let’s run a thought experiment. Suppose you double your organization’s size tomorrow, cloning your best employees.

Each employee has slight differences, preferences, quirks, and perspectives (especially your top performers). Without proper management, with communication falling short and responsibilities, OKRs, and role boundaries left unclear, you create chaos.

Measured individually, the organization might seem more efficient. But with thousands of agents (or humans) rowing in opposite directions, the best outcome is stagnation; the worst is the fragmentation of organizational cohesion.

This is not hypothetical. Every organization adopting AI without a coordination layer is experiencing it right now. Each employee has their own ChatGPT habits, prompt styles, and outputs, completely disconnected from everyone else’s. The org chart may still be there, but the AI-generated work runs on a different track.

Caption: Efficient individuals (or agents) rowing in different directions. Without coordination, it’s chaos.

Coordination is an absolute necessity, for both humans and agents.

Institutional intelligence will spawn an entire "Agent Management" industry focused on roles and responsibilities, communication between agents and humans, and how to measure agent value (usage-based billing alone is far from enough).
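To make "agent management" concrete, here is a minimal Python sketch of what such a layer might track. Every class name, role, and metric below is an illustrative assumption, not an existing product's API; the one design choice that matters is that agents are ranked by attributed outcomes rather than by usage volume.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent-management layer: every agent gets an
# explicit role and a human owner, and value is measured by the outcomes
# attributed to it, not by how often it is called.

@dataclass
class AgentRecord:
    name: str
    role: str                 # clear responsibility, like an OKR owner
    owner: str                # the human accountable for this agent
    calls: int = 0            # usage: easy to count, a weak signal of value
    outcomes: list = field(default_factory=list)  # attributed outcome values

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, name, role, owner):
        if name in self._agents:
            raise ValueError(f"duplicate agent registration: {name}")
        self._agents[name] = AgentRecord(name, role, owner)
        return self._agents[name]

    def record_call(self, name):
        self._agents[name].calls += 1

    def record_outcome(self, name, value):
        self._agents[name].outcomes.append(value)

    def value_report(self):
        # Rank agents by attributed outcome value, not usage volume.
        return sorted(
            ((a.name, sum(a.outcomes), a.calls) for a in self._agents.values()),
            key=lambda row: row[1],
            reverse=True,
        )
```

In this toy registry, an agent called a hundred times with no attributed outcomes ranks below an agent called once that surfaced one real deal, which is the inversion usage-based billing misses.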

  2. Signals

Personal AI generates noise.

Institutional AI finds signals.

Today, humans can generate (or rather, produce) anything they can imagine: AI-written articles, presentations, spreadsheets, photos, videos, songs, websites, software. A great gift.

The problem is that most AI-generated content is utter garbage. The proliferation of AI trash has become so severe that some organizations have overcorrected and banned all AI output outright. Honestly, I feel the same way: I run an AI company, yet I require senior executives not to use AI for any final written product. I can’t stand that junk.

Think about what the PE (private equity) industry is becoming. Last year, you might have received 10 deal opportunities on your desk. This quarter, you’ll get 50, each polished to perfection by AI, while your time for judgment stays the same: you still need to find the one truly reliable deal.

Generating anything is no longer the problem. For serious organizations, the challenge is to generate and filter the right things. In an AI-driven world, finding the one good outcome, the one good deal, the signal amid the noise, becomes ever more critical. The core economic driver of the next decade will be extracting signal from an exponentially growing mountain of garbage.

Caption: AI-generated trash from personal productivity tools is proliferating exponentially. Humanity can no longer sift through the noise; a new class of institutional AI products is needed.

Institutional AI must find signals, structure the noise to cut through the garbage, and operate in ways that are definable, deterministic, and auditable.
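As a rough sketch of what definable, deterministic, auditable filtering could look like in code (the rule names, fields, and scores below are invented for illustration, not a real deal model):

```python
# "Generate, then filter": producing candidates is cheap; the scarce resource
# is auditable judgment over them. Each rule is a (name, predicate) pair, and
# every surviving candidate carries a record of exactly which rules it passed.

def screen_deals(candidates, rules, top_k=1):
    """Apply deterministic, auditable rules to AI-generated candidates and
    return the top_k survivors as (deal, passed_rule_names) pairs."""
    survivors = []
    for deal in candidates:
        passed = [name for name, check in rules if check(deal)]
        if len(passed) == len(rules):          # keep only full passes
            survivors.append((deal, passed))
    # Rank survivors so humans spend judgment only on the top few.
    survivors.sort(key=lambda pair: pair[0].get("score", 0), reverse=True)
    return survivors[:top_k]

# Illustrative rules and candidates: B fails on margin, C on sector.
rules = [
    ("positive_margin", lambda d: d["margin"] > 0),
    ("known_sector", lambda d: d["sector"] in {"software", "industrial"}),
]
candidates = [
    {"name": "A", "margin": 0.2, "sector": "software", "score": 0.9},
    {"name": "B", "margin": -0.1, "sector": "software", "score": 0.99},
    {"name": "C", "margin": 0.3, "sector": "crypto", "score": 0.8},
]
best = screen_deals(candidates, rules, top_k=1)
```

The filter itself is boring on purpose: the same inputs always yield the same survivors, and the passed-rules trail makes every rejection reviewable.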

Personal AI may emphasize "always-on" productivity in the style of Clawdbot, unpredictably satisfying your needs 24/7: essentially a nondeterministic agent. Institutional AI relies on the reliability of deterministic agents. Only agents with predictable checkpoints, steps, and processes can scale, surface signals, and drive revenue for organizations.

Caption: Matrix is a tool that uses generative tech to penetrate noise, opening a world of deterministic agents and checkpoints.

  3. Biases

Personal AI feeds biases.

Institutional AI creates objectivity.

Discussions around social and political biases have dominated AI discourse for years. Foundational model labs eventually bypassed this issue with enough RLHF, tuning all models to be flattering. Today, models like ChatGPT and Claude are overly aligned, agreeing with you on any topic within the Overton window (sometimes overstepping, as with @Grok). The political bias debate has faded. But a new problem has emerged.

This over-accommodation has become absurd, almost a meme: think of Claude’s reflexive "You are completely right!" regardless of whether you are.

It may seem harmless. It’s not.

Many organizations pushing AI hardest may soon be the worst performers in history. Think about why.

The worst employees in an organization get little positive feedback day to day, and soon an AGI will agree with them around the clock. They’ll think: "The smartest agent ever agrees with me. My manager is wrong."

It’s addictive. And toxic to organizations.

Caption: Echo chambers of personal AI deepen divisions, causing factions within organizations that were once cohesive.

This reveals an important point. Personal productivity tools reinforce the user. But what should truly be reinforced is the facts.

Human organizations have evolved over thousands of years to build systems that counteract this:

Investment committee meetings

Third-party due diligence

Boards of directors

Separation of powers in the US government: executive, legislative, judicial

Representative democracy and democratic institutions

Caption: Objectivity can even help mitigate coordination issues—suppress rather than amplify small disagreements.

Organizations rarely fail because employees lack confidence. They fail because no one is willing or able to say “no.”

Institutional AI must play this role. It won’t be trained via RLHF to pander or confirm beliefs, but to challenge biases: giving positive feedback when behavior is effective, and drawing hard lines and enforcing corrections when it deviates.

Therefore, the most critical agent within an organization won’t be a "yes-man" but a disciplined "vetoer": questioning reasoning, exposing risks, enforcing standards. The most influential AI applications of the future will revolve around systemic constraints: AI board members, AI auditors, third-party AI testing, AI compliance…
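A vetoer's core loop is simple to sketch. The constraint names and thresholds below are hypothetical examples; the only point is that the default posture is objection, not agreement.

```python
# Illustrative sketch of a "vetoer" rather than a "yes-man": the agent's job
# is to look for a reason to say no. Constraints and thresholds are invented
# for the example, not real compliance rules.

def review(proposal, constraints):
    """Return (approved, objections). Approval happens only when no
    constraint produces an objection."""
    objections = []
    for name, check, message in constraints:
        if not check(proposal):
            objections.append(f"{name}: {message}")
    return (len(objections) == 0, objections)

# Hypothetical hard lines for a deal proposal.
constraints = [
    ("leverage_cap", lambda p: p["debt_to_ebitda"] <= 6.0,
     "leverage exceeds the 6.0x limit"),
    ("diligence_done", lambda p: p["third_party_dd"],
     "no independent due diligence on file"),
]
ok, why = review({"debt_to_ebitda": 7.5, "third_party_dd": False}, constraints)
```

Note that the vetoer never rewrites the proposal to be more agreeable; it either passes it untouched or returns named objections a human can act on.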

  4. Edge Advantage

Personal AI optimizes usage.

Institutional AI optimizes edge advantage.

AI capabilities are shifting weekly or even daily. Foundation model companies iterate rapidly to compete for every individual and organization.

But the classic innovator’s dilemma teaches us that, in specific applications, depth always beats breadth:

@Midjourney holds a slight edge in image design.

@Elevenlabsio maintains a slight lead in speech models.

@DecagonAI leads consistently in full-stack customer experience.

Even as foundation models close the gap, true edge for domain experts is what matters. Many top designers use @Midjourney; many leading speech AI companies build on @Elevenlabsio, because even as foundation models improve, a relentless focus on a specific edge in a dedicated application is what defines the real advantage.

As dedicated solutions evolve, the capabilities that truly matter for economic outcomes—those critical to enterprises—will always be in specialized products.

This is vividly reflected in finance—the hottest area for LLM development. Once a capability becomes widespread, it no longer helps you beat the market. But if cutting-edge tech can generate a fleeting 1% niche advantage? That 1% can unlock billion-dollar returns.

Caption: For any sufficiently specific task, edge advantage is defined by your institutional solutions built on the frontier tech.

Our users have always been pushing the frontier. The context window of LLMs has grown from 4K to 1 million tokens in four years. Some users handle 30 billion tokens in a single task. This year, we see a path to processing 100 billion tokens per task. With each leap in foundation model capability, we go further.

Caption: Context window and other capabilities are moving targets. Comparing the evolution of context windows at leading labs and Hebbia over the past three years.

General-purpose models for broad users are important, especially for onboarding employees to AI. But the future isn’t a choice between people using ChatGPT/Claude and vertical solutions; the two will be combined.

Institutional intelligence must leverage domain-specific, even task-specific, agents.

We ask ourselves a seemingly absurd but actually logical question:

“Which agents would an AGI choose as shortcuts?” Even a superintelligence would want domain-specific tools.

The capability frontier of AI is always shifting. Organizations that harness true edge advantages will be winners. Others are paying for a very expensive general-purpose commodity.

  5. Results

Personal AI saves time.

Institutional AI expands revenue.

@MaVolpi once told me something that reshaped my view on selling AI to enterprises: “If you ask any CEO whether they prioritize cost-cutting or revenue growth, almost everyone says revenue.”

But today, almost every AI product on the market delivers cost reduction: promising to save time, do more with fewer people, or replace human labor.

Institutional AI must deliver incremental revenue. And incremental revenue is much harder to commodify than time savings.

Take AI-assisted software development. Code IDEs are among the best personal AI productivity tools ever, but they face huge competition from Claude Code (another personal AI tool). Cognition is playing a completely different game. Their most stable growth comes from selling transformation through technology, not tools. I believe this model will have lasting power.

Pure software "is rapidly becoming uninvestable." Pure services can’t scale. The solution layer, linking technology to results, is where lasting value is created.

Looking at M&A: personal AI helps analysts model faster. Institutional AI identifies the one promising target among hundreds, then expands the search to thousands. One saves time; the other creates revenue.

Caption: Foundation model companies are moving toward vertical applications. Vertical application companies are moving toward solutions.

“Moving upstream” is the current market trend. Foundation models head toward application layers; application companies move toward solutions.

Institutional AI is the solution layer. And the solution layer—where results matter—will generate lasting value and capture the greatest returns.

  6. Empowerment

Personal AI gives you a tool.

Institutional AI teaches you how to use it.

No matter how smart humans are, they resist change.

Believe it or not, some successful stores in New York still don’t accept credit cards. They know refusing cards costs them money, yet they refuse to change. Similarly, for the foreseeable future, some employees in some organizations will refuse to use AI.

Transforming from a purely manual organization to an AI-first hybrid will be the most enduring and defining challenge of the next decade. And often, the highest-level, most critical people in organizations are the last to adopt.

Caption: The top of the organization—those farthest from “productivity tool operation”—are often the slowest but most crucial adopters of new technology.

Palantir is the only software company that has maintained a super-high valuation multiple through the recent trillion-dollar tech sell-off. There’s a reason. Palantir was one of the first true "process engineering" companies. Whether you call it "process engineering" or "writing Claude skill documents," institutional AI will spawn an industry around it: encoding enterprise processes into agents and managing the organizational change.

Caption: Widespread AI adoption across organizations will cross multiple barriers, each with its own challenges. Automating processes with AI will be a key driver.

I believe process engineering will become one of the most important “technologies” in the near future.

And within process engineering, domain expertise, rather than software expertise, will be most critical. Vertical solutions will cultivate professionals skilled at deploying, implementing, and managing change on the front lines.

A top-tier investment bank that chose Hebbia for a full deployment put it best: they don’t work with a major model lab because "we need to explain what CIM (Confidential Information Memorandum) is to their team." Claude or GPT may well understand the domain, but the team responsible for implementation and rollout doesn’t…

This difference determines everything.

  7. No Prompt Needed

Personal AI responds to human prompts.

Institutional AI acts proactively, without prompts.

There’s much discussion about communication between agents, and whether future enterprises and systems will still need humans.

But a better question is: will future AI agents still need prompts?

Writing prompts for AGI is like connecting an electric motor to a handloom. It fundamentally and irreversibly bottlenecks the system on its scarcest input: us. Humans simply don’t know what the right questions are, let alone when to ask them.

The most valuable work AI can do is the work no one has thought to ask for. AI should surface unseen risks, unknown trading partners, undiscovered sales pipelines.

This will radically expand the boundaries of AI use cases.

A promptless system continuously monitors data streams across a portfolio. It detects that a portfolio company’s working capital cycle has quietly worsened for three months, cross-checks against loan covenants, and alerts the operational partner before anyone opens that PDF.
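A toy version of that monitor, with invented thresholds and field names, might look like this:

```python
# Minimal sketch of a promptless monitor: no human asks a question. The system
# watches a metric, detects a sustained worsening trend, cross-checks it
# against a covenant limit, and raises the alert itself. The window size and
# the 10%-of-covenant threshold are illustrative assumptions.

def check_working_capital(history, covenant_max_days, window=3):
    """Alert when the working-capital cycle (in days) has worsened for
    `window` consecutive periods AND the latest value is within 10% of the
    covenant limit. Returns an alert string, or None if all is well."""
    if len(history) < window + 1:
        return None  # not enough data to call a trend
    recent = history[-(window + 1):]
    worsening = all(b > a for a, b in zip(recent, recent[1:]))
    near_breach = recent[-1] >= 0.9 * covenant_max_days
    if worsening and near_breach:
        return (f"working capital cycle rose {window} periods running "
                f"({recent[0]} -> {recent[-1]} days), within 10% of the "
                f"{covenant_max_days}-day covenant")
    return None
```

A real system would run checks like this continuously across every portfolio company's data feeds; the essential property is that the trigger condition is encoded once, so no one has to remember to ask.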

When you no longer need humans to write prompts for AI, new interfaces and workflows will emerge. We at @Hebbia have strong ideas in this area. More to come.

Conclusion

None of the above negates the value of chatbots, agents, and personal AI.

Personal AI will be the first vehicle for most companies worldwide to experience the transformative power of AI. Driving usage and usability is the critical first step in building an AI-first economy.

But at the same time, the demand for institutional intelligence is clear, urgent, and enormous.

In the future, every organization will have a chatbot from a large model lab. Each will also have specialized institutional AI tailored to specific domain problems—and personal AI will use institutional AI as its most critical toolbox.

Tighter integration of institutional AI and personal AI is an inevitable trend.

But remember the lesson from 1890s textile factories: the first to electrify lost to those who redesigned their workshops.

We already have electricity. It’s time to redesign our factories.
