Stanford report highlights growing disconnect between AI insiders and everyone else

ZeidJ 233 points 323 comments April 13, 2026
techcrunch.com · View on Hacker News

Discussion Highlights (20 comments)

gcheong

"Make something people want" seems so quaint now.

jjulius

>... with Gen Z reportedly leading the way... The kids are alright.

simonw

I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.

therobots927

What the tech elite fail to understand is that we are at historic levels of wealth and income inequality. Access to healthcare is determined by one's employment, which makes what I'm about to explain a matter of life and death. It doesn't matter if you think it's all going to work out and AI will bring an unprecedented era of abundance. That is not the current state. The current state is: nearly all productivity growth since 1980 has gone to shareholders, not workers: https://www.epi.org/productivity-pay-gap/

Now what do you think happens when we dramatically expand productivity with AI? Well, we're already seeing unprecedented layoffs in tech, and it's easy to draw the conclusion that unless something structural changes, all of the productivity gains from AI will go to investors, not workers, leaving said workers without access to healthcare or housing. And of course, let's not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low-income / unemployed population. This isn't fucking rocket science, guys.

ike2792

This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.

hcmgr

The tone deafness of the tech community is so unbearable. Either too on the spectrum, too ambitious (the world is fine cause I’m getting mine), or too isolated from non-tech people, to realise most people despise what they’re creating. There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.

nacozarina

a person can have full faith in the potential value of ai science and simultaneously have zero faith in the current crop of business stewards of that science. no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.

belval

This is poor reporting; it almost needs a checklist:

[X] Tweets and Instagram comments presented as "what society is thinking."
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics used to support the title with little to no regard for continuity: "those respondents who said that AI makes them 'nervous' grew from 50% to 52% during the same period" => the percentage was 52% in 2023, 50% in 2024, and 52% in 2025, which seems mostly flat to me, with the real jump being 2022-2023, up from 39%.

ChrisArchitect

Source: https://hai.stanford.edu/ai-index/2026-ai-index-report ( https://news.ycombinator.com/item?id=47758120 )

SunshineTheCat

Giant leaps in innovation almost always provoke a reaction like this. It's new, so people fear it. Sometimes that fear is justified, usually not. People greatly feared the car because of the number of horse-related jobs it would displace. President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much that they refused to operate light switches to avoid being shocked; they had staff turn the lights on and off for them. Looking back at these, we might laugh. We're largely in the same boat now. It's possible AI will destroy us all, but judging from history, irrational reactions to something new aren't exactly unprecedented.

cynicalsecurity

Paraphrasing the classic, it's not AI that people are unhappy with, it's their life around AI. The world generally appears to have become a harsher and more dangerous place - even though it hasn't. But people and especially tabloid press like finding scapegoats and participating in mass hysteria. The anti-AI hysteria is going to go away soon while AI isn't. It's just another tool, like cars or factories. Granted, it brings some danger, but at the same time it brings overwhelmingly more good.

CobrastanJorji

I think people are really underestimating how poorly today's tweens think of AI. "That looks like chatgpt" is an insult. Kids avoid things because they heard somewhere that AI might have been involved and have a sense that means it is bad or immoral or illegal or cheating in some nebulous way, and it's reinforced by their teachers telling them that using AI for homework is cheating. I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.

MrScruff

I think it's not that difficult to see why a technology that will likely trigger widespread unemployment during a cost of living crisis, an arms race with China, along with all the alignment concerns, might not be hugely popular with the public. Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.

JumpCrisscross

The lack of federal permitting standards for AI data centers is really going to bite the industry in the ass. We also probably need something akin to the WARN Act for AI-related layoffs. (Possibly with multi-year benefits for large companies.)

Yokohiii

My only surprise is that the AI "elite" is surprised.

slopinthebag

If "AI" were just free local and open models running on consumer hardware, fewer people would have an issue with it. Which highlights that the issue is with the hyperscalers, the rhetoric, the corporations, the marketing, etc. We are ever so close to the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.

advael

AI continues to be a stupidly vague term, and the example I keep going back to is present in this article: meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything. That said, I continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world, and what they do with that power - which, as many in the comments already point out, is what the vast majority of people are actually mad about, and are right to be.

goekjclo

Makes sense.

markus_zhang

Regardless, I think we are going to see an acceleration of AI research. I just wish my wife were more serious about camping and learning survival skills. I think shit is going to hit the fan in the next 5-10 years, but she thinks that's crazy. Oh well, maybe I am crazy.

thepasch

This AI rollout has been fundamentally rushed and fucked from the very beginning, and I think the people responsible for doing it this way have done more irreparable damage to society than any single group of humans in the entire history of the species, and I mean it.

It's always only ever about how the new model is faster, better, smarter. Or how the tech will bring ruin to the job market and someone should probably do something about that some time soon. Zero effort to create any sort of educational content: how it even works, how to vet its output, how to have an eye for confabulation, how to use it as a thinking enhancement rather than a replacement, how to keep in mind that it's trained to please and will literally generate anything to get users to click the thumbs-up button. Nope, it's just "ModelGPClaude can make mistakes! Better be careful!"

And then everyone's surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that's trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it's more than it is! Boy, who'd've thunk; isn't the world complex?

This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
