abucci , to random
@abucci@buc.ci avatar

A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.

If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence at all. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

If the only creature we know of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not do so while experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?

Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and shallowness of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody is doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and they were succeeding at all, we'd be hearing about it daily through the usual hype channels, because that would be a Turing-Award-caliber discovery, maybe even a Nobel-Prize-caliber one. It would be an extraordinarily profitable capability. Yet in reality nobody is really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad-faith demonstrations of LLMs solving human tests, then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence," or that they can do science experiments or write articles or who knows what else.

parismarx , to random
@parismarx@mastodon.online avatar

They missed the most important part: And neither of them should be believed because AGI is just another tech fantasy.

#ai #agi #tech #elonmusk #baidu

gorbay ,
@gorbay@mastodon.social avatar

@parismarx Musk is fabulously inaccurate with his predictions. After all, he's Mr Close-To-Zero-Cases-By-April.

ruby , to random
@ruby@toot.cat avatar

Great article about how an LLM is like a mechanical psychic, and exactly how that works for both human and software mentalists.

https://softwarecrisis.dev/letters/llmentalist/

AeonCypher , to random
@AeonCypher@lgbtqia.space avatar

#AI is not going to make #AGI, ever.

The performance of LLMs scales roughly as the log of their parameter count, while their resource use scales linearly with parameter count.

The human brain is roughly equivalent to a 60-quintillion-parameter model running at 80 Hz, and it consumes about as much power as a single light bulb.

GPT-5 is likely a 1-trillion-parameter model, and it requires massive amounts of energy.
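
To put rough numbers on that argument, here is a back-of-the-envelope sketch only: the brain and GPT figures are the post's own estimates, and the ~1 MW cluster power draw is an assumed order of magnitude, not a sourced number.

    # Back-of-the-envelope sketch of the scaling claim above.
    # Assumptions: performance ~ log(parameters), resource use ~ parameters.
    # The figures below are the post's estimates plus an assumed cluster wattage.
    import math

    brain_params = 6e19   # ~60 quintillion "parameters" (post's estimate)
    brain_watts = 20      # roughly a light bulb's worth of power
    llm_params = 1e12     # post's guess for a frontier LLM
    llm_watts = 1e6       # assumed order of magnitude for a training/serving cluster

    # Under log scaling, the "performance" gap looks small...
    perf_ratio = math.log(brain_params) / math.log(llm_params)   # ~1.6x

    # ...but closing it by adding parameters costs linearly more resources,
    # and the brain is vastly more efficient per parameter.
    param_ratio = brain_params / llm_params                      # ~6e7x
    efficiency_gap = (llm_watts / llm_params) / (brain_watts / brain_params)

    print(f"log-performance ratio (brain vs LLM): {perf_ratio:.1f}x")
    print(f"parameter ratio (brain vs LLM):       {param_ratio:.0e}x")
    print(f"watts per parameter (LLM vs brain):   {efficiency_gap:.0e}x")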

LLMs are amazing, but they're nowhere near approaching AGI.

https://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/

astralcomputing , to random
@astralcomputing@twit.social avatar

STOP all AI development NOW!

The world is racing down the rabbit hole of unrecoverable damage to the human race

AI should be classified as a "munition" and banned, countries that refuse should be disconnected from the global Internet

We are only months away from AIs that are "too powerful" to control (even if they are not AGI yet)

Anyone can already use AI to write malware that cripples the world's Internet, and crashes all of Society

🤖

astralcomputing OP ,
@astralcomputing@twit.social avatar

"...there’s already a toolkit circulating called WormGPT, a genAI tool “designed specifically for malicious activities.”

"...the results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks. In summary, it’s similar to ChatGPT, but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT..."

gorplop ,
@gorplop@pleroma.m68k.church avatar

@astralcomputing Yes, this is all extremely laughable. If you need GPT to explain the output or manual of nmap to you, then you really have no chance of success in any "hacking," ethical or not. The nmap output is as concise as it can be and contains all the information the tool can extract; interpreting it in context is the hacker's/pentester's job, and that requires understanding the entire picture, down to how each of the services you attack works and what its quirks are. Putting a language model in the middle will only add noise and false information, if it decides to add some "context" it cannot actually have any information about.

The "AI written malware" is also laughable, there is malware templates that the attacker fills in (for example replaces the shellcode with their own, or adjusts the C2 server address), and this is something a text editor search-and-replace, or GNU SED can do. Once again, GPT can do this but it does not understand anything, it merely operates on tokens (words) without any meaning attached. This is why it's biggest feature is generating (very simple and poor) boilerplate code which has been pasted enough times on the web that the language model knows that if i write "int main" then "int argc, char* argv" will follow. Again, writing malware requires deep and wide knowledge about the entire system, something which the language model will never have.

The nebula readme also shows what this is all about, with a huge "BUY PRO" link at the beginning and then instructions on how to install wget on Debian. It also passes the -y flag to apt, so that people who don't know how to press "Y" when apt asks whether they want to install the packages don't have to think too much.

This is the next iteration of "script kiddies": "hackers" whose expertise ends at downloading a ready-made script and possibly replacing some constants inside it. They were dangerous in the 90s. The only danger this tool poses is to the wallets of rich morons who see "AI," have no understanding of how any of it works, and think that if they buy it they can hack the entire internet, whatever that means in their imagination.

Laughable, resource-hogging tools for people who don't know how computers work.

parismarx , to random
@parismarx@mastodon.online avatar

Sam Altman’s vision for AI proliferation will require a lot more computation and the energy to power it.

He admitted it at Davos, but he said we shouldn’t worry: an energy breakthrough was coming, and in the meantime we could just use “geoengineering as a stopgap.” That should set off alarm bells.

https://disconnect.blog/sam-altmans-self-serving-vision-of-the-future/

#ai #openai #tech #climate #energy #agi #samaltman

JoBlakely ,
@JoBlakely@mastodon.social avatar

@parismarx They are investing in WASTING energy, derailing sustainable innovation & ensuring insufficient alternatives & a climate crisis. It should be a crime against the earth & humanity to exploit resources so wastefully. They shouldn't be allowed to do this until they can power it sustainably now, not at some point in the future that they are actually preventing.

It’s putting the cart before the horse. Building a house starting with the roof & no foundation or support in sight.

mls14 ,
@mls14@vivaldi.net avatar

@parismarx Do NOT use any “AI” products - period. Any use of it is destructive to the environment with no tangible benefit whatsoever.

parismarx , to random
@parismarx@mastodon.online avatar

I used to look at these kinds of statements as deceptive PR, but increasingly I see them more through the lens of faith.

The tech billionaires are true believers and don’t accept they’re misunderstanding things like intelligence because they believe themselves to be geniuses.

https://www.wired.co.uk/article/deepmind

vacuumbubbles ,
@vacuumbubbles@mathstodon.xyz avatar

@parismarx Oh, then we just meant different things by computer. If I understand it correctly, this article uses a definition along the lines of "deterministic finite-state automaton operating on binary representations", and what I meant by computer was a broader "information processing device running a program". There are even some people trying to build computers that work more like neural tissue, for instance: https://www.humanbrainproject.eu/en/follow-hbp/news/new-building-for-european-institute-for-neuromorphic-computing/ (in fact, I work right next to the EINC :D)

Selena ,
@Selena@ivoor.eu avatar

@parismarx
Reminds me of Musk insisting that a self-driving car only needs regular cameras and no radar because 'humans only need their eyes'.

These visionaries seem to really struggle with the idea that they cannot will machine intelligence into existence. And like every billionaire they'll blame the proles for not working hard enough: the problem is never their vision, always the labor they hire.

parismarx , to random
@parismarx@mastodon.online avatar
CustomGPT ,
@CustomGPT@mastodon.social avatar

@parismarx They'll do both, the two ideas align well. A lot of the tech they have been using for XR and the 'metaverse' would scale with machine learning systems/AGI building it. Meta has already been involved with building this tech for ages too, as much as the other big tech companies. See Llama 2 and more

Making a big public statement about AGI is likely a PR stunt in some ways too. Apple getting involved with both surely plays a role. It seems like a smart move on the chess board of big tech.

Jbgoldberg ,

@parismarx Cool. I don't mind if they keep doing HMDs and having their share of the money, but... Get out of , "metabook". It is not yours. It is owned by its community!

grumpybozo , to random
@grumpybozo@toad.social avatar

It never ceases to annoy me that the people who fear #xrisk from #AGI essentially fear that some very smart #AI will subliminally persuade its creators and controllers to do things that enable it to escape their control and/or gain control over 'real world' levers of power.

Meanwhile they dismiss the whole idea of current #LLMs having what mimics subtle agendas, grounded in how they have been trained, reinforcing established modes of thought TODAY in harmful ways.

This seems disconnected. 🧵
