astralcomputing, to random

STOP all AI development NOW!

The world is racing down the rabbit hole of unrecoverable damage to the human race

AI should be classified as a "munition" and banned, countries that refuse should be disconnected from the global Internet

We are only months away from AIs that are "too powerful" to control (even if they are not AGI yet)

Anyone can already use AI to write malware that cripples the world's Internet, and crashes all of Society

🤖
#LLM #GPT #GPT4 #ChatGPT4 #AI #AIChaos #AGI

astralcomputing OP,

"...there’s already a toolkit circulating called WormGPT, a genAI tool “designed specifically for malicious activities.”"

"...the results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks. In summary, it’s similar to ChatGPT, but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT..."

gorplop,

@astralcomputing Yes, this is all extremely laughable. If you need GPT to explain the output or manual of nmap to you, then you really have no chance of success at any "hacking", ethical or not. The nmap output is as concise as it can be and contains all the information the tool can extract; interpreting it in context is the hacker's/pentester's job, and that requires understanding the entire picture, down to how each of the services you attack works and what its quirks are. Putting a language model on top will only add noise and false information, whenever it decides to invent "context" it cannot actually have any information about.

The "AI-written malware" is also laughable: there are malware templates that the attacker fills in (for example, replacing the shellcode with their own, or adjusting the C2 server address), and that is something a text editor's search-and-replace, or GNU sed, can do. Once again, GPT can do this, but it does not understand anything; it merely operates on tokens (words) with no meaning attached. That is why its biggest feature is generating (very simple and poor) boilerplate code that has been pasted often enough on the web that the language model knows that if I write "int main" then "int argc, char* argv" will follow. Again, writing malware requires deep and wide knowledge of the entire system, something a language model will never have.
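The template fill-in described above is plain text substitution, not code synthesis. A minimal sketch with GNU sed, using a hypothetical placeholder name and a harmless documentation address in place of anything real:

```shell
# A "template" is just a file with a marker token; here we create one.
# __C2_ADDR__ is a made-up placeholder for illustration.
printf 'server = "__C2_ADDR__";\n' > template.c

# GNU sed's -i edits the file in place; s/old/new/ is blind
# search-and-replace with no understanding of the file's semantics.
sed -i 's/__C2_ADDR__/192.0.2.1/' template.c

cat template.c   # server = "192.0.2.1";
```

The point is that this whole "attack customization" step is a one-line text operation any editor could do.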

The nebula readme also shows what this is all about: a huge "BUY PRO" link at the beginning, followed by instructions on how to install wget on Debian. It even passes the -y flag to apt, so that people who don't know how to press "Y" when apt asks whether they want to install the packages don't have to think too much.

This is the next iteration of "script kiddies": "hackers" whose expertise ends at downloading a ready-made script and possibly replacing some constants inside it. They were dangerous in the 90s. The only danger this tool poses is to the wallets of rich morons who see "AI", have no understanding of how any of it works, and think that if they buy it they can hack the entire internet, whatever that means in their imagination.

Laughable, resource-hogging tools for people who don't know how computers work.
