The Age of AI Disruption
Dip learning: chips down, convictions up
Despite the recent market correction, we continue to view compute, networking, and hyperscaler platforms as the most direct and scalable exposure to the AI theme today. While the market remains focused on the magnitude of capital expenditure, the accelerating utility of these systems supports the case that we are at the early stages of the “Age of Inference,” which should drive a significant increase in tokens generated and consumed.
Recent developments across the AI stack reinforce our constructive view on the space, particularly at the intersection of compute infrastructure and emerging agentic applications.
Post-training techniques—most notably reinforcement learning with verifiable rewards—have materially improved model capabilities. Transformer models are no longer limited to pattern recognition; they increasingly demonstrate reasoning, iteration, and self-correction.
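To make the idea concrete: a “verifiable reward” scores a model’s output with a programmatic check rather than a learned preference model. The sketch below is a generic illustration under assumed conventions (a generated function named `solve`, graded against unit tests), not any lab’s actual training pipeline:

```python
# Illustrative sketch of a verifiable reward: the reward is computed by
# checking the model's output programmatically. Here the "output" is
# generated source code that must define a function `solve` (an assumed
# convention for this example).

def verifiable_reward(candidate_src: str, tests) -> float:
    """Return the fraction of unit tests the generated code passes."""
    namespace = {}
    try:
        exec(candidate_src, namespace)   # run the generated code
        fn = namespace["solve"]
    except Exception:
        return 0.0                       # code that fails to run earns nothing
    passed = 0
    for args, expected in tests:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass                         # a crashing test case earns nothing
    return passed / len(tests)

# A hypothetical model-generated candidate and the tests that verify it:
candidate = "def solve(a, b):\n    return a + b\n"
tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
print(verifiable_reward(candidate, tests))  # → 1.0
```

Because the reward is computed, not judged, it can be applied at scale to domains with checkable answers (code, math), which is what makes this post-training recipe effective for reasoning tasks.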
Star history
GitHub stars represent the number of users who have starred a repository on GitHub – OpenClaw (red line) surpassed the Linux and Python repositories within three months.
Source: https://medium.com/data-science-collective/what-are-clawdbot-moltbot-and-openclaw-7cc9faaae6c3
This step-change is now translating into tangible utility.
Around the turn of the year, coding tools reached a level where developers can, in some cases, ship fully AI-generated code. More importantly, the February release of tools such as OpenClaw marked a shift toward agents that interact directly with local systems and applications.
A key enabler is the growing ecosystem of Model Context Protocol (MCP) servers and APIs, which allow agents to interact with third-party applications and data sources. Agents can now access proprietary databases, browse the web, write and execute code, and operate within enterprise tools such as Excel or Slack. At the same time, user interfaces are evolving to include structured guardrails, enabling users to define permissions and approval checkpoints within agentic workflows.
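The guardrail pattern described above can be sketched in a few lines. Everything here is hypothetical (the names `Guardrails`, `run_tool`, and the tool names are illustrative, not the actual MCP API): tools are either auto-approved, gated behind a human checkpoint, or denied by default.

```python
# Hypothetical sketch of agentic guardrails: every tool call is checked
# against user-defined permissions before execution. Illustrative only;
# not a real MCP client implementation.
from dataclasses import dataclass, field

@dataclass
class Guardrails:
    auto_approved: set = field(default_factory=set)   # runs without asking
    needs_approval: set = field(default_factory=set)  # human checkpoint

    def check(self, tool_name: str, ask_user) -> bool:
        if tool_name in self.auto_approved:
            return True
        if tool_name in self.needs_approval:
            return ask_user(tool_name)  # approval checkpoint in the UI
        return False                    # deny-by-default for unknown tools

def run_tool(tool_name, args, guardrails, ask_user):
    if not guardrails.check(tool_name, ask_user):
        return {"status": "blocked", "tool": tool_name}
    # A real agent would dispatch here to an MCP server or third-party API.
    return {"status": "ok", "tool": tool_name, "args": args}

rails = Guardrails(auto_approved={"read_database"},
                   needs_approval={"send_slack_message"})

run_tool("read_database", {"query": "SELECT 1"}, rails, lambda t: False)
run_tool("send_slack_message", {"text": "hi"}, rails, lambda t: True)
run_tool("delete_files", {}, rails, lambda t: True)  # unknown → blocked
```

The deny-by-default choice is what keeps an agent from silently acquiring new capabilities as its tool ecosystem grows.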
In parallel, the supply-demand dynamics of AI infrastructure remain tight. Hyperscaler capacity continues to be effectively sold out, and early evidence of enterprise adoption—through efficiency initiatives at companies such as Amazon, Meta, and Block—suggests that demand is likely to persist. At the same time, the AI stack continues to evolve at a pace that materially exceeds Moore’s Law, driving stepwise improvements in both training and inference efficiency.
Citrini Research argues the winners of the agentic AI era will be “Agentic Utilities”: companies building the infrastructure to support agent traffic; the ecosystem of services purpose-built for agent interactions (e.g. agentic payment rails); and new governance solutions (e.g. observability and counter-AI tools) to keep rogue agents in check.
Source: Citrini Research.
Conference Takeaways (MS TMT, San Francisco)
The discussions at the MS TMT Conference provided incremental validation of these trends, particularly around the emergence of agentic workflows and their implications for compute demand.
Nvidia CEO Jensen Huang highlighted OpenClaw as a critical development, underscoring its rapid adoption and its role in enabling agents to interact directly with local systems and applications.
This represents a meaningful step toward fully agentic environments, where software systems can act, communicate, and coordinate with minimal human intervention.
Current usage remains skewed toward individual experimentation, partly due to limited guardrails and associated risks. Anecdotally, this has even led to increased purchases of secondary devices to sandbox agent activity. However, this phase is important: it demonstrates both the feasibility of agent-to-agent interaction and the significantly higher token intensity associated with autonomous execution relative to human-driven workflows.
Agent-driven workflows should therefore expand the volume of tokens consumed by orders of magnitude relative to current enterprise usage patterns (e.g., Copilot, ChatGPT, Claude, Gemini).
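A rough back-of-the-envelope calculation shows why token intensity scales this way. Every figure below is an assumption chosen for illustration, not observed usage data: the point is that an agent re-processes its context at every planning step, so total tokens grow with steps × context, not with a single prompt and reply.

```python
# Purely illustrative arithmetic (all figures are assumptions, not data):
# one human chat exchange vs. an autonomous agent loop that re-reads its
# context and calls tools at every iteration.

chat_tokens = 500 + 500        # assumed: one prompt + one reply
steps = 30                     # assumed: agent planning/tool-call iterations
context_per_step = 8_000       # assumed: context re-processed each step
output_per_step = 400          # assumed: tokens generated each step

agent_tokens = steps * (context_per_step + output_per_step)
print(agent_tokens // chat_tokens)  # → 252x more tokens per task
```

Under these assumptions a single autonomous task consumes roughly 250 times the tokens of a chat exchange; the exact multiple is sensitive to the inputs, but the steps × context structure is what drives the step-change in inference demand.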
This vision aligns with comments from OpenAI CEO Sam Altman, who highlighted a shift from reactive systems toward highly proactive models. As models gain persistent context and broader visibility into user environments, their ability to anticipate and act on next steps begins to resemble human-like reasoning.