Labor market impacts of AI: A new measure and early evidence

jjwiseman 122 points 160 comments March 05, 2026
www.anthropic.com · View on Hacker News

Discussion Highlights (20 comments)

rishabhaiover

> There's suggestive evidence that hiring of young workers (ages 22–25) into exposed occupations has slowed — roughly a 14% drop in the job-finding rate

There goes my excuse of not finding a job in this market.

behnamoh

I don't think there's been much of an impact, really. Those who know how to use AI just got tangentially more productive (because why would you reveal your fake 10x productivity boost so your boss hands you 10x more tasks to finish?), and those w/o AI knowledge stayed the way they were. The real impact is for indie-devs or freelancers but that usually doesn't account for much of the GDP.

nickphx

You know you're having a real impact when you have to self-report on the impact you're having.

sp4cec0wb0y

My speed shipping software increased, but so did the feature demands from my company.

g947o

I am not going to trust a single word from a company whose business is selling you AI products.

tl2do

From my experience as a software engineer, doubling my productivity hasn’t reduced my workload. My output per hour has gone up, but expectations and requirements have gone up just as fast. Software development is effectively endless work, and AI has mostly compressed timelines rather than reduced total demand.

zthrowaway

My day-to-day is even busier now with agents all over the place making code changes. The security landscape got even more complex, practically overnight. The only negative impact I see is that there's not much need for junior devs right now. The agent fills that role, in a way. But we'll have to backfill some way or another.

thatmf

cigarettes don't cause cancer! -cigarette companies

bandrami

I don't write code for a living but I administer and maintain it. Every time I say this people get really angry, but: so far AI has had almost no impact on my job. Neither my dev team nor my vendors are getting me software faster than they were two years ago. Docker had a bigger impact on the pipeline to me than AI has. Maybe this will change, but until it does I'm mostly watching bemusedly.

nitwit005

The problem with using unemployment as a metric is that hiring is driven by perception. You're making an educated guess as to how many people you'll need in the future. Anthropic can cause layoffs through pure marketing. People were crediting an Anthropic statement with causing a drop in IBM's stock value, which may genuinely lead to layoffs: https://finance.yahoo.com/news/ibm-stock-plunges-ai-threat-1... We'll probably have to wait for the hype to wear off to get a better idea, but that might take a long while.

andai

How is Anthropic getting this data? Are they running science experiments on people's chat history? (In the app, API or both?)

nl

This is a pretty interesting report. The TL;DR is that there is little measurable impact (and I'd personally add "yet"). To quote:

"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"

My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle. I have doubts that this impact is measurable yet - there is a lag between hiring intention and impact on jobs, and outside Silicon Valley large-scale hiring decisions are rarely made in a 3-month timeframe.

The most interesting part is the radar plot showing the lack of usage of AI in many industries where the capability is there!

recursivedoubts

A possible outcome of AI: domestic technical employment goes up because the economics of outsourcing change. Domestic technical workers working with AI tools can replace outsourcing shops, eliminating time-shift issues, etc at similar or lower costs.

holografix

One of the more interesting takes I heard from a colleague, who's in the marketing department, is that he uses the corporate-approved LLM (Gemini) for "pretend work" or very basic tasks. At the same time, he uses Claude on his personal account to seriously augment his job. His rationale is that he won't let the company log his prompts and responses, so they can't build an agentic replacement for him. Corporate rules about shadow IT be damned. Only the paranoid survive, I guess.

keeda

This rhymes with another recent study from the Dallas Fed: https://www.dallasfed.org/research/economics/2026/0224 - it suggests AI is displacing younger workers but boosting experienced ones. This matches the report, as well as the couple of similar studies we've seen discussed here. Also, it seems to me the concept of "observed exposure" is analogous to OpenAI's concept of "capability overhang" - https://cdn.openai.com/pdf/openai-ending-the-capability-over...

I think the underlying reason is simply that companies are "shaped wrong" to absorb AI fully. I always harp on how there's a learning curve (and significant self-adaptation) to really use AI well. Companies face the same challenge.

Let's focus on software. By many estimates, code-related activities are only 20-60%, maybe even as low as 11%, of software engineers' time (e.g. https://medium.com/@vikpoca/developers-spend-only-11-of-thei... ). But consider where the rest of the time goes: largely coordination overhead. Meetings etc. drain a lot of time (more, the more senior you get), and those mostly consist of getting a bunch of people across the company's dependency web to align on technical directions and roadmaps. I call this "Conway Overhead." It's inevitable, because the only way to scale cognitive work was to distribute it across a lot of people with narrow, specialized knowledge and domain ownership. It's effectively the overhead of distributed systems applied to organizations. Hence each team owned a couple of products / services / platforms / projects, with each member working on an even smaller part of it at a time. Coordination happened along the hierarchy of the org chart because that is most efficient.

Now imagine a single AI-assisted person competently owns everything a team used to own. Suddenly the team at the leaf layer is reduced to 1 from about... 5? This instantly gets rid of a lot of overhead like daily standups, regular 1:1s, and intra-team blockers. And inter-team coordination is reduced to a couple of devs hashing it out over Slack instead of meetings and tickets and timelines and backlog grooming and blockers. So not only has the speed of coding increased, the amount of time spent coding has also gone up. The acceleration is super-linear.

But this headcount reduction ripples up the org tree. It means the middle management layers, and the total headcount, are thinned out by the same factor as the bottom-most layer! And this focuses only on the engineering aspect. Imagine the same dynamic playing out across departments when all kinds of adjacent roles are rolled up into the same person: product, design, reliability...

These are radical changes to workflows and organizations. However, at this stage we're simply shoe-horning AI into the old, now-obsolete ticket-driven way of doing things. So of course AI has a "capability overhang" and is going to take time to have broad impact... but when it does, it's not going to be pretty.
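The "thinned out by the same factor" claim above can be sanity-checked with a toy model: treat the org as a tree where each manager has a fixed span of control, and compare total headcount before and after shrinking the leaf layer 5x. This is a minimal sketch with made-up numbers (625 leaf workers, span of 5), not figures from the report or the comment.

```python
def org_headcount(leaf_workers: int, span: int) -> int:
    """Total headcount of a management tree: leaf workers plus one
    manager per `span` reports at each layer, up to a single top node."""
    total, layer = leaf_workers, leaf_workers
    while layer > 1:
        layer = -(-layer // span)  # ceil division: managers needed above this layer
        total += layer
    return total

# Hypothetical org: 625 leaf workers, span of control 5.
before = org_headcount(625, 5)  # 625 + 125 + 25 + 5 + 1 = 781
# Leaf teams of 5 collapse to 1 AI-assisted person each: 125 leaves.
after = org_headcount(125, 5)   # 125 + 25 + 5 + 1 = 156
print(before, after, round(before / after, 2))  # 781 156 5.01
```

Under these assumptions the total headcount shrinks by roughly the same 5x factor applied at the leaves, since each management layer scales with the layer below it — which is the commenter's point about the reduction rippling upward.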

ChrisMarshallNY

I'm working on a project right now that is heavily informed by AI. I wouldn't even try it if I didn't have the help. It's a big job.

However, I can't imagine vibe-coders actually shipping anything. I really have to ride herd on the output from the LLM. Sometimes the error is PEBCAK, because I erred when I prompted, and that can lead to very subtle issues. I no longer review every line, but I also haven't yet gotten to the point where I can just "trust" the LLM. I assume there are going to be problems, and I haven't been disappointed yet. The good news is the LLM is pretty good at figuring out where we messed up. I'm afraid to turn on SwiftLint. The LLM code is ... prolix ...

All that said, it has enormously accelerated the project. I've been working on a rewrite (server and native client) of something that took a couple of years to write the first time, and it's only been a month. I'm more than half done already. To be fair, the slow part is still ahead. I can work alone (at high speed) on the backend and communication stuff, but once the rest of the team (especially, shudder, the graphic designer) gets on board, things are going to slow to a crawl.

boxedemp

I think it really depends what you're working on. I do some consulting and found it's not helping the C++ devs as much as it's helping the HTML/JS devs.

thewhitetulip

Did you all read about the 13-hour AWS outage caused by their autonomous AI agent deciding to delete everything and rewrite it from scratch?

zmmmmm

The numbers they show are barely distinguishable from noise, as far as I can interpret them.

For me, the impact is absolutely in hiring juniors. We basically just stopped considering it. There's almost no work a junior can do that I wouldn't now look at and think it's easier to hand off in some form (possibly different from what the junior would do) to an AI.

It's a bit illusory, though. It was always the case that handing off work to a junior person was often more work than doing it yourself. Hiring someone and getting their productivity up to a point of net gain is an investment in the future. As much as anything, it's a pause while we reassess what the shape of expertise now looks like. I know what juniors did before is now less valuable than it used to be, but I don't know what the value proposition of the future looks like. So until we know, we pause and hold - and the efficiency gains from using AI are currently mostly being invested in that "hold" - they are keeping us viable from a workload perspective long enough to restructure work around AI. Once we do that, I think there will be a reset and hiring of juniors will kick back in.

sanex

I know multiple devs who would have a very large productivity increase but instead choose to slow down their output on purpose and play video games instead. I get it.
