
gorplop

@gorplop@pleroma.m68k.church

This profile is from a federated server and may be incomplete. For a complete list of posts, browse on the original instance.

gorplop, to random

cool, I don't give a damn

astralcomputing, to random

STOP all AI development NOW!

The world is racing down the rabbit hole of unrecoverable damage to the human race

AI should be classified as a "munition" and banned, countries that refuse should be disconnected from the global Internet

We are only months away from AIs that are "too powerful" to control (even if they are not AGI yet)

Anyone can already use AI to write malware that cripples the world's Internet, and crashes all of Society

🤖
#LLM #GPT #GPT4 #ChatGPT4 #AI #AIChaos #AGI

gorplop,

@astralcomputing so somebody used an LLM to generate a phishing email. absolutely nothing special.

gorplop,

@astralcomputing that sounds very cool, do you have a source for it? I research malware

gorplop,

@astralcomputing this is very funny, I'm sorry if you believe all this. Extra points for linking to some AI aggregator instead of the actual source :caco_laugh:

gorplop,

@astralcomputing Yes, this is all extremely laughable. If you need GPT to explain the output or the manual of nmap to you, then you really have no chance of success in any "hacking", ethical or not. The nmap output is as concise as you can make it and contains all the information the tool can extract; understanding it in context is the hacker's/pentester's job, and that requires understanding the entire picture, down to how each of the services you attack works and what its quirks are. Putting a language model in between will only add noise and false information, if it decides to add some "context" it cannot actually have any information about.

The "AI written malware" is also laughable: there are malware templates that the attacker fills in (for example, replacing the shellcode with their own, or adjusting the C2 server address), and this is something a text editor's search-and-replace, or GNU sed, can do. Once again, GPT can do this, but it does not understand anything; it merely operates on tokens (words) without any meaning attached. This is why its biggest feature is generating (very simple and poor) boilerplate code which has been pasted enough times on the web that the language model knows that if I write "int main" then "int argc, char* argv" will follow. Again, writing malware requires deep and wide knowledge of the entire system, something the language model will never have.
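The "template filling" above really is a one-line text substitution. A minimal sketch with GNU sed, using a made-up placeholder name and a TEST-NET address (both hypothetical, purely illustrative):

```shell
# Hypothetical template line with a placeholder C2 address:
printf 'connect("C2_PLACEHOLDER", 4444);\n' > template.c

# "Filling in" the template is a single substitution -- no AI required:
sed -i 's/C2_PLACEHOLDER/203.0.113.7/' template.c

cat template.c
# connect("203.0.113.7", 4444);
```

Any editor's search-and-replace does the same job interactively.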

The Nebula readme also shows what this is all about, with a huge "BUY PRO" link at the beginning and then instructions on how to install wget on Debian. It also passes the -y flag to apt, so that people who don't know how to press "Y" when apt asks whether they want to install the packages don't have to think too much.

This is the next iteration of "script kiddies": "hackers" whose expertise ends at downloading a ready-made script and possibly replacing some constants inside it. They were dangerous in the 90s. The only danger this tool poses is to the wallets of rich morons who see "AI", have no understanding of how any of it works, and think that if they buy it they can hack the entire internet, whatever that means in their imagination.

Laughable, resource-hogging tools for people who don't know how computers work.

gorplop,

@altomare @astralcomputing just buy my SecureGPT PRO and it will protect your mainframe for $49.99/mo +VAT
