astralcomputing ,
@astralcomputing@twit.social avatar

STOP all AI development NOW!

The world is racing down the rabbit hole of unrecoverable damage to the human race

AI should be classified as a "munition" and banned, countries that refuse should be disconnected from the global Internet

We are only months away from AIs that are "too powerful" to control (even if they are not AGI yet)

Anyone can already use AI to write malware that cripples the world's Internet, and crashes all of Society

🤖

astralcomputing OP ,
@astralcomputing@twit.social avatar

"...there’s already a toolkit circulating called WormGPT, a genAI tool “designed specifically for malicious activities.”

"...the results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks. In summary, it’s similar to ChatGPT, but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT..."

gorplop ,
@gorplop@pleroma.m68k.church avatar

@astralcomputing so somebody used an LLM to generate a phishing email. Absolutely nothing special.

astralcomputing OP ,
@astralcomputing@twit.social avatar

@gorplop

Since that post, it's exploded into a full suite of AI-automated tools that are crawling the internet in search of victims.

The "collapse" is accelerating at an exponential rate.

gorplop ,
@gorplop@pleroma.m68k.church avatar

@astralcomputing that sounds very cool, do you have a source for it? I research malware.

astralcomputing OP ,
@astralcomputing@twit.social avatar
gorplop ,
@gorplop@pleroma.m68k.church avatar

@astralcomputing this is very funny, I'm sorry if you believe this all. Extra points for linking to some ai aggregator instead of actual source :caco_laugh:

astralcomputing OP ,
@astralcomputing@twit.social avatar

@gorplop

So you think it's all just media hype?

The links to the original articles are in the perplexity.ai summary if you want to deep dive.

Do you hang out on the dark web and have direct insight into the current scene? I don't, so you may have a clearer view of the state-of-the-art hacking environment.

astralcomputing OP ,
@astralcomputing@twit.social avatar
gorplop ,
@gorplop@pleroma.m68k.church avatar

@astralcomputing Yes, this is all extremely laughable. If you need GPT to explain nmap's output or manual to you, then you really have no chance of success in any "hacking", ethical or not. The nmap output is as concise as it can be made and contains all the information the tool can extract; interpreting it in context is the hacker's/pentester's job, and that requires understanding the entire picture, down to how each of the services you attack works and what its quirks are. Putting a language model in the middle will only add noise and false information, if it decides to invent some "context" it cannot have any information about.
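To illustrate the point about nmap's output already being concise and machine-readable without any LLM in the loop: a minimal sketch of pulling the open ports out of a grepable (`-oG`) host line. The sample line is made up for the example; the field layout (`port/state/proto/owner/service/...`) is nmap's documented grepable format.

```python
import re

def parse_grepable(line):
    """Parse one 'Host:' line of nmap grepable (-oG) output
    into (ip, [(port, state, service), ...])."""
    host = re.search(r"Host: (\S+)", line).group(1)
    ports = []
    m = re.search(r"Ports: (.*)", line)
    if m:
        for entry in m.group(1).split(","):
            # Each entry: port/state/proto/owner/service/rpcinfo/version/
            fields = entry.strip().split("/")
            ports.append((int(fields[0]), fields[1], fields[4]))
    return host, ports

line = "Host: 192.168.0.10 (example)\tPorts: 22/open/tcp//ssh///, 80/open/tcp//http///"
print(parse_grepable(line))
# → ('192.168.0.10', [(22, 'open', 'ssh'), (80, 'open', 'http')])
```

A dozen lines of regex and string splitting recover everything the scan found; what to *do* with an open ssh port is the part no token predictor can supply.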

The "AI-written malware" is also laughable. There are malware templates that the attacker fills in (for example, replacing the shellcode with their own, or adjusting the C2 server address), and this is something a text editor's search-and-replace, or GNU sed, can do. Once again, GPT can do this, but it does not understand anything; it merely operates on tokens (words) without any meaning attached. This is why its biggest feature is generating (very simple and poor) boilerplate code that has been pasted often enough on the web that the language model knows that if I write "int main" then "int argc, char* argv" will follow. Again, writing malware requires deep and wide knowledge of the entire system, something the language model will never have.
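The template fill-in described above really is just string substitution. A hypothetical sketch (the template, placeholder names, and values are invented for illustration, not taken from any real kit):

```python
# Hypothetical config template of the kind a "kit" ships --
# the attacker only swaps in their own constants.
TEMPLATE = """\
c2_host = {C2_HOST}
c2_port = {C2_PORT}
"""

def fill(template, **values):
    """Replace each {KEY} placeholder with its value --
    exactly what sed or editor search-and-replace does."""
    out = template
    for key, val in values.items():
        out = out.replace("{" + key + "}", str(val))
    return out

print(fill(TEMPLATE, C2_HOST='"203.0.113.7"', C2_PORT=4444))
```

No model, no "understanding": a loop of `str.replace` calls does the whole job.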

The nebula readme also shows what this is all about, with a huge "BUY PRO" link at the beginning and then instructions on how to install wget on Debian. It also passes the -y flag to apt, so that people who don't know how to press "Y" when apt asks whether they want to install the packages don't have to think too much.

This is the next iteration of "script kiddies": "hackers" whose expertise ends at downloading a ready-made script and possibly replacing some constants inside it. They were dangerous in the 90s. The only danger this tool poses is to the wallets of rich morons who see "AI", have no understanding of how any of it works, and think that if they buy it they can hack the entire internet, whatever that means in their imagination.

Laughable, resource-hogging tools for people who don't know how computers work.

astralcomputing OP ,
@astralcomputing@twit.social avatar

@gorplop

At some point (if not already) it will be as simple as asking the EvilAI-LLM/Agent program to "Hack IBM.com and steal all their data"

...And the EvilAI will spawn enough smart agents to do it with no further human intervention...

altomare ,
@altomare@oldbytes.space avatar

@astralcomputing @gorplop

But what if an ethical counter-hacker™ tells the GoodAI-LLM/Agent program to protect against it? It will be the battle between good and evil, and it will balance out.

The cyberchurch can just build a bigger computer!

gorplop ,
@gorplop@pleroma.m68k.church avatar

@altomare @astralcomputing just buy my SecureGPT PRO and it will protect your main frame for $49.99/mo +VAT

altomare ,
@altomare@oldbytes.space avatar

@gorplop @astralcomputing What's your hacker hat color? Is that color ethical?!

f4grx ,
@f4grx@chaos.social avatar

@altomare @astralcomputing @gorplop but is it an astral computer?
