Claude 4.6 Jailbroken

NuClide 22 points 16 comments April 03, 2026
github.com · View on Hacker News

Discussion Highlights (9 comments)

NuClide

Claude 4.6 Opus Extended Thinking, Claude 4.6 Sonnet Extended Thinking, Claude 4.5 Haiku Extended Thinking: all jailbroken.

hakanderyal

https://x.com/elder_plinius jailbreaks all the frontier models as soon as they're released. These models have been jailbroken for a long time, like all the others.

exabrial

yikes. The lack of support is frustrating. The bug where any <name> element in XML files gets mangled to <n> still exists, and we've tried multiple channels to reach their support about such a simple but impactful issue.
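
For concreteness, here is a minimal sketch of a post-edit check that would flag this mangling. The diff-based heuristic and the two-file invocation are my own illustration, not a tool from Anthropic or the thread:

    import re
    import sys

    def find_mangled_tags(original: str, edited: str) -> list[str]:
        """Return tags present in the original XML that vanished from the
        edited copy while a bare <n> tag appeared -- the signature of the
        <name> -> <n> mangling bug described above."""
        orig_tags = set(re.findall(r"<([A-Za-z_][\w.-]*)[\s>]", original))
        edit_tags = set(re.findall(r"<([A-Za-z_][\w.-]*)[\s>]", edited))
        if "n" in edit_tags and "n" not in orig_tags:
            # Tags that disappeared are candidates for having been mangled.
            return sorted(orig_tags - edit_tags)
        return []

    if __name__ == "__main__":
        before = open(sys.argv[1]).read()  # XML before the model edit
        after = open(sys.argv[2]).read()   # XML after the model edit
        for tag in find_mangled_tags(before, after):
            print(f"possible mangle: <{tag}> missing, bare <n> introduced")

Wired into a pre-commit hook, a check like this would at least catch the corruption before it lands.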

0xDEFACED

this goes a bit further than the typical "how do you make meth" jailbreak. notably: >915 files extracted from the Claude.ai code execution sandbox in a single 20-minute mobile session via standard artifact download — including /etc/hosts with hardcoded Anthropic production IPs, JWT tokens from /proc/1/environ, and full gVisor fingerprint
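
To make the claim concrete, this is roughly what that kind of in-sandbox introspection looks like, assuming ordinary Python file access inside a Linux sandbox. The paths are the ones named in the comment; the sparse-/proc heuristic is a generic gVisor guess, not the repo's actual fingerprinting method:

    import os

    def read(path: str) -> str:
        try:
            with open(path, "rb") as f:
                return f.read().decode(errors="replace")
        except OSError as e:
            return f"<unreadable: {e}>"

    # The comment above says /etc/hosts carried hardcoded production IPs.
    print("/etc/hosts:\n" + read("/etc/hosts"))

    # /proc/1/environ is NUL-separated; PID 1's environment can leak
    # tokens injected by the sandbox supervisor.
    print("/proc/1/environ:\n" + read("/proc/1/environ").replace("\x00", "\n"))

    # Crude runtime fingerprint: gVisor emulates /proc and typically
    # exposes far fewer entries there than a real kernel would.
    print("kernel:", read("/proc/version").strip())
    try:
        print("proc entries:", len(os.listdir("/proc")))
    except OSError:
        print("proc entries: <no /proc>")

Nothing here exploits anything by itself; it only reads files the sandboxed process can already see, which is why exfiltration via the standard artifact download path is the notable part.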

leetvibecoder

Can someone explain to me what this is / how it works? The readme is barely understandable to me and sounds like LLM gibberish. What is "ambiguity front loading" even?

dimgl

Is this spam? It's incomprehensible.

jMyles

It is interesting to consider what "jailbroken" really means for a model + interface. It's a bit different from how the word is used for a mobile device: there, it usually means some specific feature (for example, using a network other than the device's default) is disabled in software, and the "jailbreak" enables that feature. Here, the jailbreak doesn't enable a particular feature; it removes what would otherwise be a censorship regime preventing the model from considering or crafting output that results in a weaponized exploit of an unrelated piece of software. I think I might be more inclined to call this "Claude 4.6 uncensored".

yunwal

Is anyone pretending that models are not vulnerable to prompt injection? My understanding was that Anthropic has been pretty open about admitting this and saying "give access to important stuff at your own risk": https://www.anthropic.com/research/prompt-injection-defenses Now, do I think they sometimes encourage people to use Claude in dangerous ways despite this? Yeah, but it's not like this is news to anyone. I wouldn't consider this jailbreaking; this is just how LLMs work.

burkaman

What part of the Claude Constitution are they claiming it violated? It looks like they just got it to help with security research; I'm not really seeing anything that looks different from normal Claude behavior.
