Weather, music, timers: Americans still use assistants like it’s 2018
Tech giants spent billions upgrading Siri, Alexa, and Google Assistant with AI. Americans still use them for weather checks and timers—exactly like 2018. Fresh YouGov data reveals why the utility gap persists.
Generative AI didn’t fix assistants’ biggest problem: trust
Generative AI promised conversational copilots; consumers still ask for the weather and a song. Fresh data from the YouGov U.S. assistant survey shows usage patterns are virtually unchanged despite headline upgrades to Siri, Alexa, and Google Assistant.
What the numbers show
Most interactions remain quick and low-stakes. Americans use assistants mainly for weather checks (59%), music playback (51%), quick answers (47%), and timers/alarms (40%). That's the core bundle. It's familiar. It works.
More “platform” behaviors lag. Smart-home control reaches 19% of users, shopping commands 14%, and third-party action launching just 9%. Those figures describe a ceiling, not a frontier.
Generational gaps exist but don’t change the picture. Boomers over-index on information lookup (55% vs. 38% for Gen Z) and news (22% vs. 12%). Millennials lead on timers (43% vs. 30% for Gen Z). Routine, single-step tasks dominate across the board. That’s the pattern.
Capability vs. perception
Non-users cite a simple reason first: “I don’t need it” (42%). Privacy concerns follow at 19%. Smaller groups say they don’t know enough (9%) or find assistants “creepy” (9%). If the perceived job-to-be-done is “glance at phone,” voice never gets a chance.
For current users, the blockers are trust and comprehension. The top complaint is “doesn’t understand my request” (27%), followed by accuracy issues (12%) and “not as smart as expected” (10%). One failed parse teaches people to stay conservative. They retreat to weather, music, and timers. Reliability sets the boundary.
The stalled upgrade
Tech giants have spent the last two years refitting their assistants with generative models and longer context windows. The pitch: deeper reasoning, memory, and multi-step execution. The outcome so far: minimal change in day-to-day behavior. Ambition outran adoption.
Why? Assistants that promise more must first do the basics perfectly, in every accent, room, and microphone. They rarely do. Latency blips, mishears, and brittle follow-ups make complex tasks feel risky. Users optimize for certainty.
This creates a feedback loop. Complex requests are attempted less, so models get less real-world practice in those flows. Meanwhile, the simple tasks keep reinforcing themselves because they’re dependable. Product roadmaps hit sociology.
What people actually want next
Consumers are not asking for sci-fi. The top asks are pragmatic: better accent and speech understanding (30%) and the ability to answer more complex or conceptual questions (30%). Environmental alerts (27%), faster results (26%), and better personalization (22%) follow.
Read that list closely. It’s a quality bar, not a novelty wishlist. People want the current surface to stop failing, then to stretch a bit. Make what exists dependable, then extend it.
Strategy implications for Big Tech
If assistants remain glorified voice remotes, the long-promised “ambient computing” platform doesn’t materialize. That matters for everything from smart-home lock-in to app discovery to commerce. An assistant used mainly for timers won’t be the gateway to services.
The playbook, then, looks less like moonshots and more like ruthless basics: error-rate reduction across accents and environments; faster, interruption-tolerant responses; graceful recovery when a request is partial or ambiguous; visible memory that feels helpful, not invasive. Nail those and behavior might move.
Method notes and caveats
These findings come from YouGov's Profiles panel in August 2025. It's self-reported behavior, which can undercount edge-case power users and overstate routine habits. Even so, the consistency of the pattern across age groups and its resemblance to past snapshots suggest a durable norm. The plateau is real.
Why this matters
Adoption lesson, not a model race: Capability announcements don't change behavior unless reliability, speed, and comprehension are solved first. That's useful guidance for every AI product aimed at consumers.
Platform risk for Siri/Alexa/Assistant: If usage won’t move beyond basic commands, the long-term “assistant as interface” thesis needs a reset in both product design and business model.
❓ Frequently Asked Questions
Q: Are there big differences between how people use Siri, Alexa, and Google Assistant?
A: The YouGov data doesn't break down usage by specific platform. All three received major AI upgrades, but Americans use digital assistants the same way regardless of brand. The consistency suggests this is about user behavior patterns, not one company's technology.
Q: Why do 19% of non-users worry about privacy with digital assistants?
A: Assistants need always-on microphones to hear wake words, creating constant listening devices in homes and phones. High-profile incidents like Amazon employees reviewing Alexa recordings and accidental activations capturing private conversations have made users wary of ambient voice monitoring.
Q: What makes 9% of people find digital assistants "creepy"?
A: The creep factor comes from assistants responding to unintended conversations, devices lighting up unexpectedly, and the uncanny valley of talking to machines. Voice interaction feels more human than typing, making technical failures seem like privacy violations rather than simple bugs.
Q: What specific problems cause 27% to say assistants don't understand their requests?
A: Accent recognition fails for non-mainstream dialects. Background noise interferes with parsing. Context breaks down—asking "What about Friday?" after checking weather often fails. Multi-part requests frequently collapse. These everyday scenarios teach users to stick with simple, safe commands.
Q: How long have these usage patterns been stuck at the same levels?
A: Adobe published nearly identical data in 2018. The YouGov survey from August 2025 shows seven years of static behavior despite continuous technological investment. The plateau began when assistants reached mass adoption around 2017-2018 and hasn't moved since.
Q: Is this stalled usage pattern unique to America?
A: This YouGov data covers only U.S. users. The core technical problems—accent recognition, noise handling, context understanding—exist worldwide. Different privacy laws and cultural attitudes toward voice interaction might create regional variations, but reliability issues likely persist globally.
Tech journalist. Lives in Marin County, north of San Francisco. Got his start writing for his high school newspaper. When not covering tech trends, he's swimming laps, gaming on PS4, or vibe coding through the night.