For the past year, we’ve been told that artificial intelligence is revolutionising productivity—helping us write emails, generate code, and summarise documents. But what if the reality of how people actually use AI is completely different from what we’ve been led to believe?

A data-driven study by OpenRouter has just pulled back the curtain on real-world AI usage by analysing over 100 trillion tokens—essentially billions upon billions of conversations and interactions with large language models like ChatGPT, Claude, and dozens of others. The findings challenge many assumptions about the AI revolution.

OpenRouter is a multi-model AI inference platform that routes requests across more than 300 models from over 60 providers, ranging from OpenAI and Anthropic to open-source alternatives like DeepSeek and Meta's LLaMA.

With over 50% of its usage originating outside the United States and serving millions of developers globally, the platform offers a unique cross-section of how AI is actually deployed across different geographies, use cases, and user types. 

Importantly, the study analysed metadata from billions of interactions without accessing the actual text of conversations, preserving user privacy while revealing behavioural patterns.

Open-source AI models have grown to capture approximately one-third of total usage by late 2025, with notable spikes following major releases.

The roleplay revolution nobody saw coming

Perhaps the most surprising discovery: more than half of all open-source AI model usage isn’t for productivity at all. It’s for roleplay and creative storytelling.

Yes, you read that right. While tech executives tout AI’s potential to transform business, users are spending the majority of their time engaging in character-driven conversations, interactive fiction, and gaming scenarios. 

Over 50% of open-source model interactions fall into this category, dwarfing even programming assistance.

“This counters an assumption that LLMs are mostly used for writing code, emails, or summaries,” the report states. “In reality, many users engage with these models for companionship or exploration.”

This isn’t just casual chatting. The data shows users treat AI models as structured roleplaying engines, with 60% of roleplay tokens falling under specific gaming scenarios and creative writing contexts. It’s a massive, largely invisible use case that’s reshaping how AI companies think about their products.

Programming’s meteoric rise

While roleplay dominates open-source usage, programming has become the fastest-growing category across all AI models. At the start of 2025, coding-related queries accounted for just 11% of total AI usage. By the end of the year, that figure had exploded to over 50%.

This growth reflects AI’s deepening integration into software development. Average prompt lengths for programming tasks have grown fourfold, from around 1,500 tokens to over 6,000, with some code-related requests exceeding 20,000 tokens—roughly equivalent to feeding an entire codebase into an AI model for analysis.

For context, programming queries now generate some of the longest and most complex interactions in the entire AI ecosystem. Developers aren’t just asking for simple code snippets anymore; they’re conducting sophisticated debugging sessions, architectural reviews, and multi-step problem solving.

Anthropic’s Claude models dominate this space, capturing over 60% of programming-related usage for most of 2025, though competition is intensifying as Google, OpenAI, and open-source alternatives gain ground.

Programming-related queries exploded from 11% of total AI usage in early 2025 to over 50% by year’s end.

The Chinese AI surge

Another major revelation: Chinese AI models now account for approximately 30% of global usage—nearly triple their 13% share at the start of 2025.

Models from DeepSeek, Qwen (Alibaba), and Moonshot AI have rapidly gained traction, with DeepSeek alone processing 14.37 trillion tokens during the study period. This represents a fundamental shift in the global AI landscape, where Western companies no longer hold unchallenged dominance.

Simplified Chinese is now the second-most common language for AI interactions globally at 5% of total usage, behind only English at 83%. Asia’s overall share of AI spending more than doubled from 13% to 31%, with Singapore emerging as the second-largest country by usage after the United States.

The rise of “Agentic” AI

The study introduces a concept that will define AI’s next phase: agentic inference. This means AI models are no longer just answering single questions—they’re executing multi-step tasks, calling external tools, and reasoning across extended conversations.

The share of AI interactions classified as “reasoning-optimised” jumped from nearly zero in early 2025 to over 50% by year’s end. This reflects a fundamental shift from AI as a text generator to AI as an autonomous agent capable of planning and execution.

“The median LLM request is no longer a simple question or isolated instruction,” the researchers explain. “Instead, it is part of a structured, agent-like loop, invoking external tools, reasoning over state, and persisting across longer contexts.”

Think of it this way: instead of asking AI to “write a function,” you’re now asking it to “debug this codebase, identify the performance bottleneck, and implement a solution”—and it can actually do it.
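The agent-like loop the researchers describe can be sketched in a few lines. The snippet below is purely illustrative: `fake_model` and the two stub tools stand in for a real LLM call and real tooling, which the report does not specify. What matters is the shape of the loop: the model chooses an action, a tool runs, and the result is folded back into persistent context until the task is done.

```python
# Illustrative sketch of an "agentic" loop: the model plans, invokes tools,
# and reasons over accumulated state across multiple steps.
# fake_model and both tools are stand-ins, not a real API.

def search_codebase(query):
    # Stub tool: pretend to locate the performance bottleneck.
    return "hot loop found in parse_records()"

def apply_patch(description):
    # Stub tool: pretend to implement a fix.
    return "patch applied: memoised parse_records()"

TOOLS = {"search_codebase": search_codebase, "apply_patch": apply_patch}

def fake_model(context):
    # Stand-in for an LLM call: picks the next action from current state.
    if "hot loop found" not in context:
        return ("search_codebase", "find performance bottleneck")
    if "patch applied" not in context:
        return ("apply_patch", "memoise the hot loop")
    return ("done", None)

def agent_loop(task, max_steps=5):
    context = task
    for _ in range(max_steps):
        action, arg = fake_model(context)
        if action == "done":
            return context
        result = TOOLS[action](arg)  # call the chosen external tool
        context += " | " + result    # persist state across the loop
    return context

print(agent_loop("debug this codebase and fix the bottleneck"))
```

The contrast with a single-shot prompt is the loop itself: rather than one question and one answer, each tool result changes what the model decides to do next.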

The “Glass Slipper Effect”

One of the study’s most fascinating insights relates to user retention. Researchers discovered what they call the Cinderella “Glass Slipper” effect—a phenomenon where AI models that are “first to solve” a critical problem create lasting user loyalty.

When a newly released model perfectly matches a previously unmet need—the metaphorical “glass slipper”—those early users stick around far longer than later adopters. For example, the June 2025 cohort of Google’s Gemini 2.5 Pro retained approximately 40% of users at month five, substantially higher than later cohorts.

This challenges conventional wisdom about AI competition. Being first matters, but specifically being first to solve a high-value problem creates a durable competitive advantage. Users embed these models into their workflows, making switching costly both technically and behaviourally.

Cost doesn’t matter (as much as you’d think)

Perhaps counterintuitively, the study reveals that AI usage is relatively price-inelastic. A 10% decrease in price corresponds to only about a 0.5-0.7% increase in usage.
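Those figures imply a price elasticity of demand of roughly -0.05 to -0.07, far below the -1.0 threshold economists use to call demand "elastic". The small helper below is illustrative arithmetic, not code from the report:

```python
# Back-of-envelope price elasticity of demand implied by the study's figures:
# elasticity = (% change in usage) / (% change in price).
def elasticity(pct_usage_change, pct_price_change):
    return pct_usage_change / pct_price_change

# A 10% price cut yielding only a 0.5-0.7% usage gain:
low = elasticity(0.5, -10.0)
high = elasticity(0.7, -10.0)
print(round(low, 3), round(high, 3))  # prints -0.05 -0.07
```

Values this close to zero mean price cuts barely move demand, which is why premium and budget models can coexist, as the usage data below shows.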

Premium models from Anthropic and OpenAI command $2-35 per million tokens while maintaining high usage, while budget options like DeepSeek and Google’s Gemini Flash achieve similar scale at under $0.40 per million tokens. Both coexist successfully.

“The LLM market does not seem to behave like a commodity just yet,” the report concludes. “Users balance cost with reasoning quality, reliability, and breadth of capability.”

This means AI hasn’t become a race to the bottom on pricing. Quality, reliability, and capability still command premiums—at least for now.

What this means going forward

The OpenRouter study paints a picture of real-world AI usage that’s far more nuanced than industry narratives suggest. Yes, AI is transforming programming and professional work. But it’s also creating entirely new categories of human-computer interaction through roleplay and creative applications.

The market is diversifying geographically, with China emerging as a major force. The technology is evolving from simple text generation to complex, multi-step reasoning. And user loyalty depends less on being first to market than on being first to truly solve a problem.

As the report notes, “ways in which people use LLMs do not always align with expectations and vary significantly country by country, state by state, use case by use case.”

Understanding these real-world patterns—not just benchmark scores or marketing claims—will be crucial as AI becomes further embedded in daily life. The gap between how we think AI is used and how it’s actually used is wider than most realise. This study helps close that gap.



The post How people really use AI: The surprising truth from analysing billions of interactions appeared first on AI News.


