abucci , to random
@abucci@buc.ci avatar

A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.

If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never develop full natural language competence at all. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

If the only creature we know of that is demonstrably capable of developing human-level intelligence, theory of mind, and language competence could not do so under the conditions an LLM experiences, why on Earth would anyone believe a computer program could?

Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and shallowness of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and were succeeding at all, we'd be hearing about it daily through the usual hype channels, because that would be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad-faith demonstrations of LLMs solving human tests, then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence," or that they can do science experiments or write articles or who knows what else.

parismarx , to random
@parismarx@mastodon.online avatar

They missed the most important part: neither of them should be believed, because AGI is just another tech fantasy.

#ai #agi #tech #elonmusk #baidu

ruby , to random
@ruby@toot.cat avatar

Great article about how an LLM is like a mechanical psychic, and exactly how that works for both human and software mentalists.

https://softwarecrisis.dev/letters/llmentalist/

AeonCypher , to random
@AeonCypher@lgbtqia.space avatar

LLMs are not going to make AGI, ever.

The performance of LLMs scales with the log of their parameter count; their resource use is proportional to the parameter count itself.

The human brain is approximately equal to a 60 quintillion (6×10¹⁹) parameter model running at 80 Hz. It consumes about as much energy as a single lightbulb.

GPT-5 is likely a 1-trillion-parameter model, and it requires massive amounts of energy.
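
To make that scaling claim concrete, here is a minimal back-of-the-envelope sketch, assuming the post's premises (performance ∝ log(parameters), resource use ∝ parameters) and the parameter estimates above; the numbers are illustrative, not measurements:

```python
import math

# Premises taken from the post (assumptions, not measurements):
# performance ~ log(parameter count); resource use ~ parameter count.
gpt_params = 1e12     # the post's estimate for GPT-5
brain_params = 6e19   # the post's "60 quintillion parameter" brain figure

# Scaling from GPT-size up to brain-size buys only a modest log-scaled
# performance multiplier...
performance_gain = math.log(brain_params) / math.log(gpt_params)

# ...while the linear resource cost grows by seven orders of magnitude.
resource_gain = brain_params / gpt_params

print(f"performance multiplier: ~{performance_gain:.2f}x")  # ~1.65x
print(f"resource multiplier:    ~{resource_gain:.0e}x")     # ~6e+07x
```

On these assumptions, reaching brain-scale parameter counts costs roughly sixty million times the resources for less than twice the performance.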

LLMs are amazing, but they're nowhere near approaching AGI.

https://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/

astralcomputing , to random
@astralcomputing@twit.social avatar

STOP all AI development NOW!

The world is racing down the rabbit hole of unrecoverable damage to the human race.

AI should be classified as a "munition" and banned; countries that refuse should be disconnected from the global Internet.

We are only months away from AIs that are "too powerful" to control (even if they are not AGI yet).

Anyone can already use AI to write malware that cripples the world's Internet and crashes all of society.

🤖

parismarx , to random
@parismarx@mastodon.online avatar

In the Disconnect Roundup, I explain that the threat isn’t superintelligent AI but the CEOs who believe in it and are feeding the world to computers. Plus, recommended reads, labor updates, and other tech news!

https://disconnect.blog/roundup-feeding-the-world-to-computers/

parismarx OP ,
@parismarx@mastodon.online avatar

Our tech overlords tell us the AIs could enslave us, but they’re the ones serving up the world to computers on a silver platter, dreaming of becoming computers themselves, and then building them throughout the galaxy. They’re the real risk to humanity.

https://disconnect.blog/roundup-feeding-the-world-to-computers/

parismarx , to random
@parismarx@mastodon.online avatar

Sam Altman’s vision for AI proliferation will require a lot more computation and the energy to power it.

He admitted it at Davos, but he said we shouldn’t worry: an energy breakthrough was coming, and in the meantime we could just use “geoengineering as a stopgap.” That should set off alarm bells.

https://disconnect.blog/sam-altmans-self-serving-vision-of-the-future/

#ai #openai #tech #climate #energy #agi #samaltman

parismarx , to random
@parismarx@mastodon.online avatar

I used to look at these kinds of statements as deceptive PR, but increasingly I see them through the lens of faith.

The tech billionaires are true believers and don’t accept that they’re misunderstanding things like intelligence, because they believe themselves to be geniuses.

https://www.wired.co.uk/article/deepmind

parismarx OP ,
@parismarx@mastodon.online avatar

To them, everything is reduced to computation: the brain is a computer; climate change is a technological problem. But none of that is true, and we’re setting ourselves up for chaos if we keep believing these men who assert tech will save us from the crises we face.

grumpybozo , to random
@grumpybozo@toad.social avatar

It never ceases to annoy me that the people who fear #xrisk from #AGI essentially fear that some very smart #AI will subliminally persuade its creators and controllers to do things that enable it to escape their control and/or gain control over ‘real world’ levers of power.

Meanwhile they dismiss the whole idea of current #LLMs having what mimics subtle agendas, grounded in how they have been trained, reinforcing established modes of thought TODAY in harmful ways.

This seems disconnected. 🧵
