A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.
If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never develop full natural language competence at all. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's uncontroversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.
The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.
If the only creature we know of that is demonstrably capable of developing human-level intelligence, theory of mind, and language competence could not do so while experiencing what an LLM experiences, why on Earth would anyone believe a computer program could?
Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and shallowness of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody is doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and were succeeding at all, we'd be hearing about it daily through the usual hype channels, because it would be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would be an extraordinarily profitable capability. Yet nobody is really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad-faith demonstrations of LLMs solving human tests, then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else.
In the Disconnect Roundup, I explain the threat isn’t superintelligent AI but the CEOs who believe in it and are feeding the world to computers. Plus, recommended reads, labor updates, and other tech news!
Our tech overlords tell us the AIs could enslave us, but they’re the ones serving up the world to computers on a silver platter and dreaming of becoming computers themselves, then building them throughout the galaxy. They’re the real risk to humanity.
Sam Altman’s vision for AI proliferation will require a lot more computation and the energy to power it.
He admitted it at Davos, but he said we shouldn’t worry: an energy breakthrough was coming, and in the meantime we could just use “geoengineering as a stopgap.” That should set off alarm bells.
I used to look at these kinds of statements as deceptive PR, but increasingly I see them more through the lens of faith.
The tech billionaires are true believers and don’t accept they’re misunderstanding things like intelligence because they believe themselves to be geniuses.
To them, everything is reduced to computation: the brain is a computer; climate change is a technological problem. But none of that is true, and we’re setting ourselves up for chaos if we keep believing these men who assert tech will save us from the crises we face.
It never ceases to annoy me that the people who fear #xrisk from #AGI essentially fear that some very smart #AI will subliminally persuade its creators and controllers to do things that enable it to escape their control and/or gain control over ‘real world’ levers of power.
Meanwhile, they dismiss the whole idea that current #LLMs exhibit something that mimics subtle agendas, grounded in how they have been trained, reinforcing established modes of thought TODAY in harmful ways.