Visa Crypto Executive: Eight Major Trends in Cryptocurrency and AI by 2026
Visa’s Head of Crypto, Cuy Sheffield, points out that cryptocurrencies and AI are moving from “theoretically feasible” to “practically implementable,” with 2026 marked by the steady accumulation of infrastructure that reshapes how value is transferred and how work is done. This article is based on a piece by Cuy Sheffield, organized, translated, and written by Foresight News.
(Previous summary: Forbes analyzes key trends for 2026 in cryptocurrencies: five major trends reveal the industry’s path toward maturity)
(Additional background: Bloomberg summarizes 50 Wall Street institutions’ expectations for 2026: AI driving 3% average global growth, though high valuation risks still warrant caution)
Table of Contents
Topic 1: Cryptocurrencies are transforming from speculative assets to high-quality technology
Topic 2: Stablecoins are a clear achievement of cryptocurrencies’ “pure practicality”
Topic 3: When cryptocurrencies become infrastructure, “distribution capability” is more important than “technological novelty”
Topic 4: AI agents demonstrate practical value, with influence surpassing coding domains
Topic 5: The bottleneck of AI has shifted from “intelligence level” to “trustworthiness”
Topic 6: Systems engineering determines whether AI can be deployed in production scenarios
Topic 7: The contradiction between open models and centralized control raises unresolved governance issues
Topic 8: Programmable currencies give rise to new intelligent payment flows
Conclusion
As cryptocurrencies and AI mature, the most important question in both fields is no longer whether something is “theoretically feasible,” but whether it is “reliably implementable in practice.” Both technologies have crossed critical thresholds and improved significantly in performance, yet their practical adoption remains uneven. The core dynamic of 2026 stems from this gap between performance and adoption.
Below are some key themes I have long been following, along with initial thoughts on where these technologies are headed, where value will accumulate, and why the ultimate winners may look very different from the industry’s pioneers.
Topic 1: Cryptocurrencies are transforming from speculative assets to high-quality technology
The first decade of cryptocurrency development was defined by a “speculative advantage”: its markets are global, always on, and highly open, and volatility made crypto trading more vibrant and attractive than traditional financial markets.
At the same time, however, the underlying technology was not ready for mainstream applications: early blockchains were slow, costly, and unstable. Outside of speculation, cryptocurrencies rarely beat existing traditional systems on cost, speed, or convenience.
Now, this imbalance is beginning to reverse. Blockchain technology has become faster, more economical, and more reliable. The most attractive use cases for cryptocurrencies are no longer speculation but infrastructure—especially settlement and payments. As cryptocurrencies become more mature, the core role of speculation will gradually weaken: it will not disappear entirely but will no longer be the main source of value.
Topic 2: Stablecoins are a clear achievement of cryptocurrencies’ “pure practicality”
Unlike previous narratives around cryptocurrencies, stablecoins’ success is based on concrete, objective standards: in specific scenarios, stablecoins are faster, cheaper, and more widely accessible than traditional payment channels, seamlessly integrating into modern software systems.
Stablecoins do not require users to embrace cryptocurrency as an “ideology” in order to trust them; they are often adopted “implicitly” within existing products and workflows, which allows institutions and companies previously skeptical of crypto’s “volatility and opacity” to see its value clearly.
It can be said that stablecoins re-anchor cryptocurrencies to “practicality” rather than “speculation,” establishing a clear benchmark for how cryptocurrencies can succeed in practice.
Topic 3: When cryptocurrencies become infrastructure, “distribution capability” is more important than “technological novelty”
In the past, when cryptocurrencies mainly served as “speculative tools,” their “distribution” was endogenous—new tokens could naturally accumulate liquidity and attention simply by “existing.”
But once cryptocurrencies become infrastructure, their application scenarios shift from “market level” to “product level”: embedded in payment processes, platforms, and enterprise systems, end-users often remain unaware of their presence.
This shift benefits two types of entities: one, companies with existing distribution channels and reliable customer relationships; two, institutions with regulatory licenses, compliance systems, and risk management infrastructure. Merely having “protocol novelty” is no longer enough to drive large-scale adoption of cryptocurrencies.
Topic 4: AI agents demonstrate practical value, with influence surpassing coding domains
The practicality of AI agents is increasingly evident, but their role is often misunderstood: the most successful agents are not “autonomous decision-makers” but “tools that reduce coordination costs in workflows.”
Historically, this is most apparent in software development—agent tools accelerate coding, debugging, refactoring, and environment setup. Recently, this “tool value” has expanded significantly into more fields.
Take tools like Claude Code, for example. Although positioned as “developer tools,” their rapid adoption reflects a deeper trend: agent systems are becoming “interfaces for knowledge work,” not limited to programming. Users are applying “agent-driven workflows” to research, analysis, writing, planning, data processing, and operations—tasks more aligned with “general professional work” rather than traditional coding.
The real key is not “vibe coding” itself, but the underlying core pattern:
· Users delegate “goal intent,” not “specific steps”;
· Agents cross “files, tools, and task management” contexts;
· Work modes shift from “linear progression” to “iterative, dialog-based.”
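The three bullets above can be sketched as a minimal agent loop. This is an illustrative sketch, not any product’s actual API; the `model` callable and the tool names are hypothetical stand-ins:

```python
# Minimal sketch of the agent pattern described above: the user delegates a
# goal (not steps), the agent iterates across tools and accumulated context,
# and progress is dialog-based rather than a fixed linear script.

def run_agent(goal, tools, model, max_steps=8):
    """Iterate until the model declares the goal done or steps run out."""
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model inspects all context so far and picks the next action.
        action = model(context, list(tools))
        if action["type"] == "done":
            return action["result"]
        # Cross-context work: call a tool and feed its output back in.
        output = tools[action["tool"]](action["input"])
        context.append(f"{action['tool']} -> {output}")
    return None  # out of steps; a real system would escalate to a human
```

The loop stays deliberately dumb: all judgment lives in `model`, all side effects in `tools`, which is what makes the scope easy to limit and supervise.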
In various knowledge work, agents excel at gathering context, executing limited tasks, reducing handoffs, and accelerating iteration, but still have shortcomings in “open-ended judgment,” “responsibility attribution,” and “error correction.”
Therefore, most agents used in production today still need “scope limitations, supervision, and system embedding,” rather than operating completely independently. The true value of agents lies in “restructuring knowledge workflows,” not in “replacing labor” or “achieving full autonomy.”
Topic 5: The bottleneck of AI has shifted from “intelligence level” to “trustworthiness”
AI models have rapidly improved in intelligence level, but current limitations are no longer about “language fluency or reasoning ability,” but about “reliability in real systems.”
Production environments have zero tolerance for three issues: first, AI “hallucinations” (generating false information); second, inconsistent outputs; third, opaque failure modes. When AI involves customer service, fund transfers, or compliance, “roughly correct” results are no longer acceptable.
Building “trust” requires four foundations: first, traceability of results; second, memory capabilities; third, verifiability; and fourth, the ability to proactively expose “uncertainty.” Until these capabilities are mature enough, AI autonomy must be limited.
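Three of these foundations (traceability, verifiability, exposed uncertainty) can be sketched as a response envelope plus a gate that limits autonomy; memory is omitted for brevity. The field names and threshold below are illustrative assumptions, not a standard:

```python
# Sketch of trust foundations as plain data: every answer carries its
# sources, a verification flag, and an explicit confidence score, and
# anything that falls short is routed to a human instead of acting.

from dataclasses import dataclass, field

@dataclass
class TrustedAnswer:
    text: str
    sources: list = field(default_factory=list)  # traceability of results
    verified: bool = False                       # checked against a source of truth
    confidence: float = 0.0                      # proactively exposed uncertainty, 0..1

def gate(answer: TrustedAnswer, min_confidence: float = 0.9) -> str:
    """Limit autonomy: only sourced, verified, confident answers act
    automatically; everything else goes to human review."""
    if answer.verified and answer.sources and answer.confidence >= min_confidence:
        return "auto"
    return "human_review"
```

The point of the envelope is that “roughly correct” output simply cannot pass the gate, no matter how fluent the text is.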
Topic 6: Systems engineering determines whether AI can be deployed in production scenarios
Successful AI products treat “models” as “components,” not “finished products”—their reliability depends on “architectural design,” not “prompt optimization.”
This “architectural design” includes state management, control flow, evaluation and monitoring systems, as well as fault handling and recovery mechanisms. As a result, AI development is increasingly approaching “traditional software engineering,” rather than “cutting-edge theoretical research.”
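As a sketch of this engineering framing, the control flow, monitoring, and fault handling below wrap a model call like any other unreliable component. `call_model` is a hypothetical stand-in for a provider client, not a real library function:

```python
# Treating the model as a component: retries, logging, and graceful
# degradation live in ordinary software around the call, not in the prompt.

import logging
import time

log = logging.getLogger("ai_pipeline")

def reliable_call(call_model, prompt, retries=3, fallback="<unavailable>"):
    """Retry transient failures with backoff, log every attempt for
    monitoring, and degrade to a safe fallback instead of crashing."""
    for attempt in range(1, retries + 1):
        try:
            out = call_model(prompt)
            log.info("attempt %d ok", attempt)            # monitoring hook
            return out
        except Exception as exc:                          # fault handling
            log.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(0.01 * 2 ** attempt)               # exponential backoff
    return fallback                                       # recovery path
```

None of this is AI-specific, which is exactly the point: it is the same evaluation-and-recovery scaffolding traditional software engineering has always required.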
Long-term value will favor two groups: one, system builders; two, platform owners controlling workflows and distribution channels.
As agent tools expand from coding to research, writing, analysis, and operations, the importance of “systems engineering” will further increase: knowledge work is often complex, state-dependent, and context-rich, making “reliable management of memory, tools, and iteration processes” more valuable than just generating outputs.
Topic 7: The contradiction between open models and centralized control raises unresolved governance issues
As AI system capabilities grow and integrate deeper into the economy, the question of “who owns and controls the most powerful AI models” is causing core conflicts.
On one hand, cutting-edge AI R&D remains “capital-intensive,” increasingly concentrated due to “computing power access, regulatory policies, and geopolitical factors”; on the other hand, open-source models and tools continue to iterate and optimize under “broad experimentation and easy deployment.”
This coexistence of “centralization and openness” raises unresolved issues: dependency risks, auditability, transparency, long-term bargaining power, and control over critical infrastructure. The most likely outcome is a “hybrid model”—advanced models drive technological breakthroughs, while open or semi-open systems embed these capabilities into “widely distributed software.”
Topic 8: Programmable currencies give rise to new intelligent payment flows
As AI systems play roles in workflows, their demand for “economic interactions” increases—such as paying for services, calling APIs, rewarding other agents, or settling “usage-based interaction fees.”
This demand has renewed interest in stablecoins, which are increasingly seen as “machine-native currencies”: programmable, auditable, and able to move value without manual intervention.
Take developer-facing protocols like x402. Although still in early experimentation, their direction is clear: payment flows will operate via “APIs” rather than traditional “checkout pages,” enabling software agents to conduct continuous, fine-grained transactions.
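In that spirit, here is a simplified sketch of an API-native payment round trip: the server quotes a price via HTTP status 402, and the client (an agent) pays in a stablecoin and retries with proof of payment. The header name, body fields, and helpers are illustrative assumptions for this sketch, not the actual x402 specification:

```python
# Sketch of a pay-per-request flow: no checkout page, just a 402 response
# carrying payment requirements, an automated stablecoin transfer, and a
# retry with the resulting receipt. All names here are illustrative.

def request_with_payment(http_get, pay, url):
    """First attempt may return 402 plus payment terms; pay, then retry."""
    status, body, headers = http_get(url, headers={})
    if status == 402:
        # body is assumed to carry the payment terms quoted by the server.
        receipt = pay(body["amount"], body["asset"], body["pay_to"])
        status, body, headers = http_get(url, headers={"X-PAYMENT": receipt})
    return status, body
```

Because the whole exchange is machine-readable, an agent can repeat it thousands of times at sub-cent amounts, which is exactly the “continuous, fine-grained” behavior a human-oriented checkout page cannot support.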
The field remains nascent: small transaction sizes, rough user experience, ongoing security and permission system improvements. But infrastructure innovation often begins with such “early explorations.”
It’s worth noting that the significance is not “autonomy for its own sake,” but “when software can execute transactions through programming, new economic behaviors become possible.”
Conclusion
Whether in cryptocurrencies or artificial intelligence, early development stages favor “eye-catching concepts” and “technological novelty”; in the next phase, “reliability,” “governance,” and “distribution capability” will become the more critical dimensions of competition.
Today, the technology itself is no longer the main limiting factor—“embedding technology into real systems” is the key.
In my view, the hallmark of 2026 will not be “a breakthrough technology,” but “steady infrastructure accumulation”—these systems quietly operate while subtly reshaping “value transfer methods” and “work modes.”