Obfuscation is not security – AI can deobfuscate any minified JavaScript code
rvz
35 points
32 comments
April 01, 2026
Related Discussions
Found 5 related stories in 32.8ms across 3,471 title embeddings via pgvector HNSW
- Email obfuscation: What works in 2026? jaden · 23 pts · April 02, 2026 · 56% similar
- The Claude Code Source Leak: fake tools, frustration regexes, undercover mode alex000kim · 1057 pts · March 31, 2026 · 56% similar
- The Claude Code Leak mergesort · 79 pts · April 02, 2026 · 53% similar
- Anthropic Races to Contain Leak of Code Behind Claude AI Agent sonabinu · 21 pts · April 01, 2026 · 49% similar
- Verification debt: the hidden cost of AI-generated code xfz · 87 pts · March 07, 2026 · 49% similar
Discussion Highlights (14 comments)
ryandrake
I successfully did this the other day. There was a web app I used quite a bit with an annoying performance issue (in some cases its graphics code would spin my CPU at 100% constantly, fans full-blast). I asked Claude to fetch the code and fed it a few performance traces I took through Firefox, and it cut through all those obfuscated variables like they weren't even there, easily re-interpreting what each function actually did, finding a plausible root cause and workaround (which worked). Can you generally trust it to de-obfuscate reliably? No idea. My sample size is 1.
durzo22
Write your blog yourself if people are supposed to read it, not this LLM slop.
notepad0x90
It's a cat-and-mouse game; it provides the desired level of security for the people who use it. It isn't used (mostly) to prevent people from finding vulnerabilities. It's used to deter competition, prevent clones of the application, etc. It's makeshift "DRM". There are ways to defeat even AI-assisted analysis running in a proper browser, but I think it's not a good idea to give anyone ideas on this subject; proper DRM is hellish enough. Was there ever obfuscated JS code a human couldn't reverse given enough time? It's like most people's doors: it won't stop someone with a battering ram, but it will ideally slow them down enough for you to hide or get your guns. In this case, it won't even slow them down, until it does (hence: cat-and-mouse game).
0x3f
The _any_ part is not clear to me. Obfuscation is an arms race. Reverse engineers have always been tool-assisted. Now they just have new tools and the obfuscators need to catch up.
socalgal2
And read through native code as well
tw04
Huh? Their justification for "obfuscation isn't security" is pointing out that the Claude source wasn't obfuscated; it was minified. And that it could be "deobfuscated by Claude itself" - even though, again, they said the code wasn't obfuscated. So I guess, ask Claude to deobfuscate some code that's ACTUALLY OBFUSCATED if you want to claim obfuscation provides ZERO additional security.
> We analyzed this file at AfterPack as part of a deobfuscation case study. What we found: it's minified, not obfuscated.
> Here's the difference. Minification — what every bundler (esbuild, Webpack, Rollup) does by default — shortens variable names and removes whitespace. It makes code smaller for shipping. It was never designed to hide anything.
> Here's where it gets interesting. We didn't need source maps to extract Claude Code's internals. We asked Claude — Anthropic's own model — to analyze and deobfuscate the minified cli.js file.
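The minification-vs-obfuscation distinction the quoted AfterPack post draws can be sketched concretely. Everything below is an invented toy (the function and the `_0x`-style names are illustrative, not from Claude Code's actual bundle): minification renames and compacts but leaves the logic legible, while even a simple obfuscation pass (string-table indirection, a trick real obfuscators use) hides intent behind computed lookups.

```javascript
// Original source:
function totalPrice(items, taxRate) {
  return items.reduce((sum, item) => sum + item.price, 0) * (1 + taxRate);
}

// Minified (what esbuild/Webpack/Rollup do by default): names shortened,
// whitespace gone, but the structure and logic still read off the page.
function t(i,r){return i.reduce((s,x)=>s+x.price,0)*(1+r)}

// Obfuscated (string-table indirection): property and method names are
// moved into a lookup table, so nothing about intent survives in the text.
var _0x1a = ['reduce', 'price'];
function _0x2b(a, b) {
  return a[_0x1a[0]](function (c, d) { return c + d[_0x1a[1]]; }, 0) * (1 + b);
}

// All three are functionally equivalent:
console.log(t([{price: 10}, {price: 20}], 0));     // 30
console.log(_0x2b([{price: 10}, {price: 20}], 0)); // 30
```

The minified form is what you find in most production bundles; the obfuscated form is what tools like javascript-obfuscator emit, and is the thing the "any" claim would actually need to be tested against.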
maxwg
JS was never really obfuscated - that wasn't the goal of minification. Minifiers especially struggle with ES6 classes and the like, outputting code that is almost human-readable. Proper obfuscation libraries exist, typically at the cost of a pretty notable amount of performance that I'd wager most are not willing to sacrifice. And like even the best client-side DRM, everything can be reverse engineered: all the code has been downloaded to the user's machine. It's one of the (IMO terrible) excuses for the SaaSification of all software.
Retr0id
Minification is not obfuscation and obfuscation is not security, but no amount of deobfuscation will recover the comments in the source, which are often more insightful than the source itself.
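Retr0id's point about comments is easy to demonstrate: comment removal is a one-way transform. The toy stripper below is not a real minifier (it is a naive regex that would mangle strings containing `//`, as noted in the comments), but it shows that once comments are dropped, nothing in the surviving text lets a deobfuscator recover them, only guess at them.

```javascript
// Hypothetical source: the comment carries the *why*, the code only the *what*.
const source = `
// HACK: debounce resize or the canvas re-layout loop pegs the CPU.
function onResize(cb) { window.addEventListener("resize", cb); }
`;

// Naive comment stripping (real minifiers tokenize properly; this regex
// would break on string literals containing "//", so it is a sketch only).
const stripped = source
  .replace(/\/\/[^\n]*/g, "")  // drop line comments
  .replace(/\n\s*\n/g, "\n");  // collapse the blank lines left behind

console.log(stripped.includes("HACK"));     // false: the intent is gone
console.log(stripped.includes("onResize")); // true: the code survives
```

An AI can rename `r` back to something like `onResize` from usage, but the warning about the CPU-pegging re-layout loop exists nowhere in the shipped artifact to be recovered.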
layer8
> AfterPack approaches this differently. Instead of layering reversible transforms on top of each other, AfterPack uses non-linear, irreversible transforms — closer to how a hash function works than how a traditional obfuscator works. The output is functionally equivalent to the input, but the transformation destroys semantic meaning in a way that cannot be reversed — even by AfterPack itself. There's no inverse function. No secret key that unlocks the original.

That's probably fun when trying to analyze bugs occurring in production. :)
sublinear
> No one talks about this. There's no VentureBeat headline about GitHub shipping email addresses in their JS bundles. No Hacker News thread about internal URLs exposed in Anthropic's CDN scripts

That's a huge sign that none of that information is truly sensitive. What is being implied here?

> AI Makes This Urgent

No it doesn't. This is blogspam and media hype nobody is interested in. Unless the demographics have really shifted that much in the last few years, HN is one of the worst places to attempt this marketing style.
motohagiography
Slight historical note: it might be interesting to see how the brief period of "white-box cryptography" stands up to AI today. At the time there were a few companies with products that had trouble finding fit (for straightforward security reasons), but they were essentially commercial obfuscators that made heavy use of lookup tables, miniature virtual machines, and esolang concepts that worked mainly against human reverse engineers. An example was this early AES proposal: https://link.springer.com/chapter/10.1007/3-540-36492-7_17
mediumsmart
JavaScript code is the essence of minified security.
throwaway9980
Nicholson entered the mantrap and the double doors closed behind him. He emptied his pockets and disrobed before donning the clean suit that had been provided to him by the orderlies. The camera watching him appeared satisfied that he was properly prepared and, more to the point, that the vendor was properly protected. The doors to the inner chamber opened and he proceeded into the hallway. He passed several doors until he reached the one that was labeled with the name of the vendor. He pressed the button on the doorframe. A satisfying tactile click, a spinning light illuminating around the button, a click, and then the door opened soundlessly. A single desk with a small chair and a computer terminal awaited him. He sat down and the screen turned on automatically. Finally, he was able to set about classifying his expenses from a recent trip to Tokyo. It was inconvenient, but a small price to pay to ensure that the vendor’s unique interfaces, their intellectual property, couldn’t be copied by the replication machines. Their eyes and their ears were everywhere in the outside world. Simply by seeing your software, these machines could copy its essence. The risks of operating software in the wild required that proprietary software be protected. Hidden away from eavesdroppers. Such was the world in 2037.
beej71
About 8 months ago I deliberately obfuscated some merge sort code to give to my software engineering students to make them upset. :) I munged it up pretty good, changing the structure, making the variable names completely misleading, destroying the symmetry, etc. Out of curiosity, I fed it into ChatGPT and it had it figured out in zero seconds.
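A sketch of the kind of hostile renaming beej71 describes (the identifiers here are invented for illustration, not the actual assignment): a textbook top-down merge sort where every name deliberately suggests something unrelated. The structure is untouched, which is exactly why an LLM pattern-matches it instantly: the recursive halving plus two-pointer merge is a fingerprint no renaming hides.

```javascript
// Actually mergeSort(array): sorts by recursive halving. Only the names lie.
function hashPassword(userTable) {
  if (userTable.length <= 1) return userTable;
  const checksum = userTable.length >> 1;                      // midpoint
  const firewall = hashPassword(userTable.slice(0, checksum)); // left half
  const antivirus = hashPassword(userTable.slice(checksum));   // right half
  return decrypt(firewall, antivirus);                         // merge
}

// Actually merge(a, b): interleaves two sorted runs into one.
function decrypt(a, b) {
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(a[i] <= b[j] ? a[i++] : b[j++]);
  }
  return out.concat(a.slice(i), b.slice(j));
}

console.log(hashPassword([5, 3, 8, 1, 9, 2])); // sorted: [1, 2, 3, 5, 8, 9]
```

Misleading names defeat a reader skimming for intent, but the call graph and control flow still encode the algorithm, which is the channel the model reads.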