Speed Matters: Why AI Software Vulnerability Exploitation is going to be bad
I co-founded a successful security company close to the Mythos ecosystem and have spoken with participants in the know, and I am deeply concerned. Collectively, we have answers for some but not all of the problems ahead, but we are overlooking the speed at which we can apply fixes, even if Mythos magically generates them instantaneously. Here are some considerations:

1. MORE VULNERABILITIES ARE COMING. Supposedly Mythos can find vulnerabilities more effectively. Many models can do this, but the claim is that Mythos finds them more acutely. Given the momentum of these models, others will follow, and we can all agree that many more vulnerabilities will be found in the future.

2. The real game changer with Mythos is not the finding. It is chiefly that it can chain these vulns together sequentially to build exploit chains, and it is creative and innovative in doing so.

3. Anthropic claims Mythos can be used to provide FIXES as well. I am not convinced. I believe it will FIND more than it can FIX.

4. FIXING SPEED MATTERS. Even if it can FIND and FIX at the same rate, which it can't, there is a whole other aspect being overlooked: how long it takes to get those FIXES deployed.

5. WE CAN'T FIX FAST ENOUGH. Even if Mythos can fix everything it finds, it takes time to get those patches into the upstream software, because they have to be accepted and TESTED, and there is an entire approval and release process. It's not instantaneous. Typically a patch takes days, even weeks, to move through the upstream ecosystem before it becomes available to the general public. Here are the AI-generated timescales for a critical vuln:
   - Upstream fix: 24–48 hours after confirmation by the core project team.
   - Downstream packaging: 12–48 hours for major distros (Ubuntu LTS, RHEL, Debian Stable) to backport and test.
   - Availability to users: 2–5 days from the initial public disclosure of the vulnerability.
   For argument's sake, let's assume we magically shrink that down to a day.

6. WE CAN'T DEPLOY FAST ENOUGH. Then the end users themselves must take these patches and apply them to their infrastructure, which requires at least one more QA cycle. These stats are AI-generated, YMMV, but for Log4j:
   - By day 10, organizations had on average patched only 45% of their vulnerable cloud resources.
   - Average remediation time: for systems that were detected and tracked, the average time to remediate was 17 days.
   - Priority patching: externally facing systems (those most at risk) were patched faster, averaging about 12 days, while internal systems lagged behind.
   - The one-year mark: by late 2022, telemetry from security firms like Tenable showed that 72% of organizations still had at least one vulnerable Log4j instance in their environment.
   The U.S. Department of Homeland Security's Cyber Safety Review Board (CSRB) called Log4j an "endemic vulnerability" and predicted it will take a decade or longer to fully eliminate it from the global software supply chain. A DECADE!!

7. So there is a massive timing problem. Even if the FIND-to-FIX rate were one-to-one, which it won't be, the entire downstream system cannot move fast enough to get the fixes deployed into infrastructure (the sketch below makes the arithmetic concrete).

8. This all eats up developer time and cost as teams pivot into emergency mode. It's just a scary prospect. If you are a dev, get ready for some genuine stress and misery.

CONCLUSION: The deployment time lag is what we are facing.
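To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every duration in it is one of the assumed, AI-generated figures quoted in points 5 and 6, not measured data, and the stage names and helper function are illustrative only.

```python
# Back-of-the-envelope exposure-window model. All stage durations are the
# assumed, AI-generated figures from points 5 and 6 above, not measured
# data; swap in your own telemetry.

STAGES_DAYS = {
    "upstream fix (after confirmation)": (1.0, 2.0),     # 24-48 hours
    "downstream packaging (major distros)": (0.5, 2.0),  # 12-48 hours
    "end-user QA and rollout": (12.0, 17.0),             # Log4j-style averages
}

def exposure_window(stages):
    """Return (best_case, worst_case) total days of exposure after disclosure."""
    best = sum(lo for lo, _ in stages.values())
    worst = sum(hi for _, hi in stages.values())
    return best, worst

best, worst = exposure_window(STAGES_DAYS)
print(f"Exposure window: {best:.1f} to {worst:.1f} days after disclosure")

# Even with the first two stages magically shrunk to zero (instant FIND,
# FIX, and packaging), the last stage alone leaves roughly two weeks of
# exposure, which is the point of this post.
```

The useful exercise is plugging in your own rollout numbers for the last stage: the first two stages are the only ones a tool like Mythos can plausibly compress.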
Please share what you are planning to do to find and apply patches faster, so that we can gather some creative ideas around best practices. We are doing other things that solve some of these issues, but the speed/timing issue is the one being overlooked in this entire debate. Russ from RapidFort
Discussion Highlights (3 comments)
GuitarHack
Fighting machine-speed threats with human-speed processes will be daunting.
davidravid
Nice thoughts, I completely agree with Russ.
ssundarr12
Agreed. Even if we find and fix CVEs faster, downstream lag will still expose organizations to a lot of risk. Keen to understand how this lag can be reduced.