The Rise of 'Vibe Hacking' Is the Next AI Nightmare

  • Software security has been running on large-scale automation for over a decade. LLMs might or might not be a step change in that automation (I'm optimistic but uncertain), but, unlike in conventional software development, the standard arguments about craft and thoughtfulness aren't operative here. If there was an argument to be had, it would have been had around the time Google stood up the mega fuzzing farms (a minimal sketch of the kind of harness those farms run at scale follows at the end of this comment).

    A fun thing to keep in mind about software security is that it's premised on the existence of a determined and amoral adversary. In the long-long ago, Richard Stallman's host at MIT had no password; anybody could log into it. It was a statement about the ethics of locking down computing resources. That's approximately the position any security practitioner would be taking if they attempted to moralize against LLM-assisted offensive computing.
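
    For concreteness, here is a minimal sketch of the kind of fuzz harness those farms run at scale, assuming libFuzzer with clang; parse_header is a hypothetical stand-in, not code from any real project:

      // Minimal libFuzzer harness sketch; parse_header() is a hypothetical
      // stand-in for whatever library code you want to exercise.
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      static int parse_header(const uint8_t *buf, size_t len) {
          // Toy "parser": reject anything without a 4-byte magic prefix.
          if (len < 4 || memcmp(buf, "HDR1", 4) != 0)
              return -1;
          return (int)len;
      }

      // libFuzzer calls this entry point with millions of mutated inputs;
      // ASan/UBSan flag any memory error a generated input triggers.
      int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
          parse_header(data, size);
          return 0;
      }

      // Build: clang -g -O1 -fsanitize=fuzzer,address harness.c -o harness

    Fleets like OSS-Fuzz run thousands of harnesses along these lines continuously; the scale, not the cleverness, is the point.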

  • > victory will belong to the savvy blackhat hacker who uses AI to generate code at scale

    This is just it: AI, while providing some efficiency gains for the average user, will become simply too powerful. Imagine a superpower that allows you to move objects with your mind. That could be a very nice thing for many people to have, because you could probably help people with it. That's the attitude many hacker-types take. The problem is that it would also let people kill instantly, which means telekinesis would just be too powerful to sit alongside our animal instincts.

    AI is just too powerful – and if more people took a serious stand against it, it might actually be shut down.

  • I've written tailored offensive security tools and malware for Red Teams for around a decade and now work in the AI space.

    The argument that LLMs will enable "super powered" malware and that existing security solutions won't be able to keep up is completely overblown. I see zero evidence of this being possible with the current incarnation of "AI" or LLMs.

    "Vibe coded" malware will be easier to detect if the people creating it don't understand what the code is actually doing, and it will result in an incredible number of OpSec fails when the malware actually hits the target systems.

    I do agree that "vibe coding" will accelerate malware development and generally increase the number of attacks on orgs. However, if you're already applying bog-standard security practices like defense in depth, you shouldn't be concerned about this. If anything, you might want to start thinking about SOC automations in order to reduce alert fatigue.

    Stay far away from anyone trying to sell you products to defend against "AI enabled malware". As of right now it's 100% snake oil.

    Also, this is probably one of the cringiest articles on the subject I've ever read and is only meant to spread FUD.

    I do find the banner video extremely entertaining however.

  • The AI nightmare after that is "vibe capitalism". You put in a pitch deck and an operating business comes out.

    Somebody should pitch that to YC.

  • We are no closer to this than to "vibe-businessing", where you vibe your way into a profit-generating business powered by nothing but AI agents.

  • Alright folks, to qualify myself: I am a vulnerability researcher @ MIT. My day-to-day research concerns embedded hardware/software security. Some of my current/past endeavors involve AI/ML integration and understanding just how useful it actually is for finding/exploiting vulnerabilities. Just last week my lab hosted a conference that included MIT folks and the outsiders we invite; one talk was on the current state of AI/LLMs.

    To keep things short, this article is sensationalized and overstates the utility of AI/ML for finding actual novel vulnerabilities. As it currently stands, LLMs cannot even reliably find bugs that other, less sophisticated tools could have found in much less time. Binary exploitation is a great place for illustrating the wall you'll hit using LLMs hoping for a 0day. While LLMs can help with things like setting up fuzzers or maybe giving you a place to start manual analysis, their utility kind of stops there. They cannot reliably catch memory corruption bugs that a basic fuzzer or sanitizer could have found within seconds (a trivial example is sketched below). This makes sense for that class of bugs: LLMs are fuzzy logic, and these issues aren't reliably found with that paradigm. That's the whole reason we have fuzzers; they find subtle bugs worth triaging. You've seen how well LLMs count; it's no surprise they might miss many of the same things a human would but a fuzzer wouldn't (think UaF, OOB, etc). All the other tools you see written for script kiddies yield the same false positives they could have gotten with tools that already exist. I could go on and on, but I am on a shuttle, typing on a small phone.

    TLDR: The article is trying to say LLMs are super hackers already, and that's simply false. They definitely have allure for script kiddies. In the future this might change. LLMs' time-saving aspects are definitely worth checking out for static binary analysis; Binary Ninja with Sidekick saves a lot of time! But again, you still need to double-check important things!
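
    For readers who haven't touched this tooling, here's a contrived sketch of the kind of memory corruption bug I mean, assuming clang with AddressSanitizer; copy_name is made up for illustration, not from any real codebase:

      // Off-by-one heap overflow that ASan (or any fuzzer built with it)
      // reports on the very first run. copy_name() is a hypothetical example.
      #include <stdlib.h>
      #include <string.h>

      static char *copy_name(const char *src) {
          size_t len = strlen(src);
          char *dst = malloc(len);      // bug: no room for the trailing NUL
          memcpy(dst, src, len + 1);    // heap-buffer-overflow (OOB write)
          return dst;
      }

      int main(void) {
          char *name = copy_name("vibe hacking");
          free(name);
          return 0;
      }

      // Build & run: clang -g -fsanitize=address oob.c && ./a.out
      // ASan aborts immediately with a heap-buffer-overflow report; an LLM
      // reading the same code may or may not notice the missing +1.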
