Insights | BetaNXT

What 81,000 People Told the World About AI—And What It Means for Financial Services

Written by Laura Barger | May 4, 2026

Last month, Anthropic published a landmark study on what people want from (and fear about) AI. At a time when we are drowning in thought leadership around AI innovation, this study stood out to me because it centers the opinions of real people: it draws on 80,508 interviews conducted across 159 countries, examining the rapid increase in AI adoption across a wide range of industries and practices.

Trust is the real battleground for AI adoption

And through all of the opinions and perspectives, I found that the study surfaces something that we in the financial services industry can no longer treat as background noise: trust is the real battleground for AI adoption, and the stakes are uniquely high in wealth management.

The #1 Fear Isn't Job Displacement. It’s Reliability.

Public discourse often fixates on automation's threat to human workers, but the Anthropic study tells a different story. The top fear was actually AI unreliability (manifesting as hallucinations, false citations, and confident errors): 27% of respondents worried that AI won't do what it's supposed to do. To be fair, job displacement was a close second, at 22%.

In financial services, unreliability isn't a hypothetical. When a platform misclassifies a corporate action or generates a compliance document with a subtle factual error, client trust erodes and regulatory exposure follows. This is why our approach to AI starts not with the model but with data and governance. Both our DataXChange data management platform and our InsightX enterprise AI platform embed our institutional knowledge and strict governance directly into their infrastructure, so that our clients can trust the data underneath as well as the resulting insights and outputs.

People Don't Want More Output. They Want Time Back.

Another interesting finding came from what people want from AI. "Professional excellence" was the top desire among respondents, with 19% citing it as their primary AI aspiration: they wanted AI to "clear the queue" so they could focus on higher-value work. But when interviewers pressed further, a different priority emerged: time freedom. Not more throughput, but simply hours back.

This resonates with what we consistently hear from firms and advisors. AI tools built purely around volume metrics miss the deeper need. What actually earns adoption is cognitive relief—reducing administrative burden so that advisor expertise and effort can flow toward client relationships and judgment calls, not data wrangling.

Balancing the Light and Shade of AI

Anthropic found that what people want from AI and what they fear from it turn out to be tightly bound—a tension they call the “light and shade” of AI. The same capabilities that lead to benefits can also produce harm. For example, the emotional support AI can provide can also lead to emotional dependency. The promise of economic mobility is entangled with the risk of economic displacement.

One tension relates to AI's unreliability, mentioned above: 22% of people expressed excitement about AI as an aid in decision-making, but 37% lamented that AI actually impedes good decisions because of its unreliability (e.g., hallucinations). In other words, the same tool that can sharpen decisions can also degrade them. This was the only tension in which the negative outweighed the positive.

Many respondents cited firsthand experience both with leaning on AI for decision-making and with getting burned by it. People in high-stakes professions (finance, law, government, and healthcare) reported this at nearly twice the average rate. When the stakes rise, reliability becomes an even more pressing issue.

Cognitive Augmentation vs. Atrophy

Another key tension exists between people using AI to learn and becoming so reliant on it that they stop thinking for themselves. Seventeen percent of respondents feared cognitive atrophy from AI dependency, and nearly half of those had already experienced signs of it firsthand. In financial services, if AI handles the analytical reasoning that builds expertise in junior professionals, the industry faces a pipeline question: where does the next generation of deep financial judgment come from?

AI as augmentation, not replacement

The answer lies in a clear design philosophy: AI as augmentation, not replacement. Systems should surface information and pattern recognition while leaving interpretation and judgment with the human professional. The goal should be enabling operators and advisors who are better informed and more efficient, not users who simply approve automated outputs.

What It Means for How We Build

The Anthropic findings reveal valuable truths that technology leaders must keep in mind. Users want the benefits AI offers, they are already experiencing many of them, and their primary barrier to deeper adoption is not skepticism about AI's potential; it is concern about whether they can rely on its outputs. That concern can be addressed through careful construction, but only with an honest assessment of what reliability demands: data quality, strong governance, and domain-aware architecture, the infrastructure that makes AI outputs auditable, traceable, and teachable.

The other key truth is the importance of the humans using AI. "Human-in-the-loop" must translate to a real role for human judgment and interpretation, not just a nominal fact check. Particularly in an industry as complex and consequential as ours, AI platforms and solutions should be purpose-built to equip subject matter experts with the intelligence and insights they need to do their jobs even better.

At BetaNXT, our clients’ needs shape our AI strategy, and the Anthropic study only validates our focus on both reliability and human enablement. Learn more about our AI approach and innovation, or get in touch if you’d like to discuss how our AI can help you achieve your goals.

The Anthropic study referenced in this post, What 81,000 people want from AI, was published on March 19, 2026, and is available at anthropic.com/81k-interviews. All statistics cited are drawn from that report.