Gradually_Adjusting ,
@Gradually_Adjusting@lemmy.world avatar

Stolze and Fink may not be underrated per se, but I wish their work was more widely known. I own several copies of Reign and I think I'm due for a WTNV retread.

CaptainEffort ,
@CaptainEffort@sh.itjust.works avatar

That professor was Jeff Winger

OlPatchy2Eyes ,

We've been had

ech , (edited )

Finishing up a rewatch through Community as we speak. Funny to see the gimmick (purportedly) used in real life.

saddlebag ,

That was my first thought!

Deway ,

He was so streets ahead.

SchmidtGenetics ,

Were people maybe not shocked at the action or outburst of anger? Why are we assuming every reaction is because of the death of something “conscious”?

A_Very_Big_Fan , (edited )

Seriously, I get that AI is annoying in how it's being used these days, but has the second guy seriously never heard of "anthropomorphizing"? Never seen Cast Away? Or played Portal?

Nobody actually thinks these things are conscious, and for AI I've never heard even the most diehard fans of the technology claim it's "conscious."

(edit): I guess, to be fair, he did say "imagining" not "believing". But now I'm even less sure what his point was, tbh.

ech ,

Most discussion I've seen about "ai" centers around what the programs are "trying" to do, or what they "know" or "hallucinate". That's a lot of agency being given to advanced word predictors.
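To make "advanced word predictor" concrete, here's a toy sketch of the core idea: pick the statistically most likely next word. (This is a drastically simplified bigram model, not how real LLMs work internally; they use transformers over subword tokens, but the "no agency, just statistics" point is the same.)

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # "Trying", "knowing", "hallucinating" all reduce to this:
    # return the statistically most common continuation.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> 'cat' (follows "the" twice, "mat" once)
```

There's no intent anywhere in that loop; "hallucination" is just this machinery confidently emitting a likely-looking continuation.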

A_Very_Big_Fan , (edited )

That's also anthropomorphizing.

Like, when describing current in electronics or the flow of water, we'd say it "wants" to take the path of least resistance, but that doesn't mean we think it has a mind or is conscious. It's just a lot simpler than describing all the mechanisms behind its behavior every single time.

Both my digital electronics and my geography teachers said stuff like that when I was in high school, and I'm fairly certain neither of them believes water molecules or electrons have agency.
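For what the "wants" shorthand actually compresses: in a parallel circuit, current doesn't exclusively take the lowest-resistance branch; it divides in inverse proportion to resistance. A quick sketch (the component values are made up for illustration):

```python
def current_split(v, resistances):
    # Ohm's law per branch: I = V / R. More current flows through
    # the lower-resistance path, but every path carries some.
    return [v / r for r in resistances]

# 10 V across two parallel branches: 100 ohm and 400 ohm.
branch_currents = current_split(10.0, [100.0, 400.0])
print(branch_currents)  # [0.1, 0.025] -> the "easy" path gets 4x the current
```

No desire involved, just division.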

Ephera ,

My interpretation was that they're exactly talking about anthropomorphization, that's what we're good at. Put googly eyes on a random object and people will immediately ascribe it human properties, even though it's just three objects in a certain arrangement.

In the case of LLMs, the googly eyes are our language and the chat interface that it's displayed in. The anthropomorphization isn't inherently bad, but it does mean that people subconsciously ascribe human properties, like intelligence, to an object that's stringing words together in a certain way.

A_Very_Big_Fan ,

Ah, yeah you're right. I guess the part I actually disagree with is that it's the source of the hype, but I misconstrued the point because of the sub this was posted in lol.

Personally, (before AI pervaded all the spaces it has no business being in) when I first saw things like LLMs and image generators I just thought it was cool that we could make a machine imitate things previously only humans could do. That, and LLMs are generally very impersonal, so I don't think anthropomorphization is the real reason.

Ephera ,

I mean, yeah, it's possible that it's not as important of a factor for the hype. I'm a software engineer, and even before the advent of generative AI, we were riding on a (smaller) wave of hype for discriminative AI.

Basically, we had a project which would detect when certain audio cues happened. And it had a very real problem: if it fucked up once every few minutes, it would cause a lot of trouble.

But when you actually used it, when you'd snap your finger and half a second later the screen turned green, it was all too easy to forget these objective problems, even though it didn't really have any anthropomorphic features.

I'm guessing it was a combination of people being used to computers making objective decisions, making them more willing to believe they'd just snapped badly or something.
But probably also just general optimism, because if the fuck-ups you notice are far enough apart, then you'll forget about them.

Alas, that project got cancelled for political reasons before anyone realized that this very real limitation is not solvable.

ryven ,
@ryven@lemmy.dbzer0.com avatar

Right, it's shocking that he snaps the pencil because the listeners were playing along, and then he suddenly went from pretending to have a friend to pretending to murder said friend. It's the same reason you might gasp when a friendly NPC gets murdered in your D&D game: you didn't think they were real, but you were willing to pretend they were.

The AI hype doesn't come from people who are pretending. It's a different thing.

Aceticon ,

For the keen observer there's quite the difference between a make-believe gasp and a genuine one, mostly in terms of timing, which is even more noticeable for unexpected events.

Make-believe requires thinking, so it happens slower than instinctive and emotional reactions. That's why modern acting centers on techniques like Method Acting, where the actor is supposed to be "living truthfully under imaginary circumstances": letting themselves believe "I am this person in this situation" and feeling events as if they were happening to them, thus genuinely living the moment and reacting to it. People in the audience who are good observers and/or highly empathetic can tell faking from genuine feeling.

So in this case, even if the audience were playing along as you say, that doesn't mean they were intellectually simulating their reactions, especially in a setting where those individuals are not the center of attention. In my experience most people just let themselves go along with it (i.e. let their instincts do their thing) unless they feel they're being judged, or for some psychological or even physiological reason have difficulty behaving naturally around other humans.

So it makes some sense that this situation showed people's instinctive reactions.

And if you look, even here on Lemmy, at people doggedly making the case that AI actually thinks, read not just their words but also how they use them and which ones they choose: the methods they use for thinking (as reflected in how they pick and assemble arguments, most notably "arguments on vocabulary", i.e. "proving" their point by reinterpreting the words that make up definitions) and how strongly, i.e. emotionally, they are bound to that conclusion of theirs. It's fair to say that the people most convinced the thing truly thinks are the ones relying on instinct rather than cold intellect when interacting with LLMs.

braxy29 , (edited )

i mean, i just read the post to my very sweet, empathetic teen. her immediate reaction was, "nooo, Tim! 😢"

edit - to clarify, i don't think she was reacting to an outburst, i think she immediately demonstrated that some people anthropomorphize very easily.

humans are social creatures (even if some of us don't tend to think of ourselves that way). it serves us, and the majority of us are very good at imagining what others might be thinking (even if our imaginings don't reflect reality), or identifying faces where there are none (see - outlets, googly eyes).

NABDad ,

In a robotics lab where I once worked, they used to have a large industrial robot arm with a binocular vision platform mounted on it. It used the two cameras to track an object's position in three-dimensional space and stay a set distance from the object.

It worked the way our eyes work, adjusting the pan and tilt of the cameras quickly for small movements, and adjusting the pan and tilt of the platform and the position of the arm to follow larger movements.
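That split between fast camera moves and slower platform/arm moves can be sketched as a simple two-tier controller. All the numbers here are invented for illustration; the actual lab system surely did something more sophisticated:

```python
def track(error_deg, camera_limit=5.0):
    """Split a tracking error between fast camera pan/tilt and the
    slower arm/platform, the way eyes saccade before the head turns.
    camera_limit and all values are illustrative, not from the real rig."""
    # Cameras absorb small errors immediately, up to their travel limit.
    camera_move = max(-camera_limit, min(camera_limit, error_deg))
    # Whatever remains is handed off to the slower platform/arm.
    platform_move = error_deg - camera_move
    return camera_move, platform_move

print(track(3.0))   # small motion: cameras alone -> (3.0, 0.0)
print(track(12.0))  # large motion: cameras saturate -> (5.0, 7.0)
```

It's exactly this eye-then-head ordering that reads as lifelike to observers.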

Viewers watching the robot would get an eerie and false sense of consciousness from the robot, because the camera movements matched what we would see people's eyes do.

Someone also put a necktie on the robot which didn't hurt the illusion.

glimse ,

I don't know why this bugs me but it does. It's like he's implying Turing was wrong and that he knows better. He reminds me of those "we've been thinking about the pyramids wrong!" guys.

Ephera ,

I wouldn't say he's implying Turing himself was wrong. Turing merely formulated a test for indistinguishability, and it still shows that.
It's just that indistinguishability is not useful anymore as a metric, so we should stop using Turing tests.

nova_ad_vitum ,

The validity of Turing tests at determining whether something is "intelligent" and what that means exactly has been debated since...well...Turing.

lvxferre ,

Nah. Turing skipped this matter altogether. In fact, it's the main point of the Turing test aka imitation game:

I PROPOSE to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

In other words, what Turing is saying is: "who cares if they think? Focus on their behaviour, dammit. Do they behave intelligently?" And consciousness is intrinsically tied to thinking, so... yeah.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

But what does it mean to behave intelligently? Clearly it's not enough to simply have the ability to string together coherent sentences, regardless of complexity, because I'd say the current crop of LLMs has solved that one quite well. Yet their behavior clearly isn't all that intelligent, because they will often either misinterpret the question or even make up complete nonsense. And perhaps that's still good enough in order to fool over half of the population, which might be good enough to prove "intelligence" in a statistical sense, but all you gotta do is try to have a conversation that involves feelings or requires coming up with a genuine insight in order to see that you're just talking to a machine after all.

Basically, current LLMs kinda feel like you're talking to an intelligent but extremely autistic human being that is incapable or afraid to take any sort of moral or emotional position at all.

areyouevenreal ,

Basically, current LLMs kinda feel like you're talking to an intelligent but extremely autistic human being that is incapable or afraid to take any sort of moral or emotional position at all.

Except AIs are able to have political opinions and have a clear liberal bias. They are also capable of showing moral positions when asked about things like people using AI to cheat and about academic integrity.

Also you haven't met enough autistic people. We aren't all like that.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

Except AIs are able to have political opinions and have a clear liberal bias. They are also capable of showing moral positions when asked about things like people using AI to cheat and about academic integrity.

Yes, because they have been trained that way. Try arguing them out of these positions, they'll eventually just short circuit and admit they're a large language model incapable of holding such opinions, or they'll start repeating themselves because they lack the ability to re-evaluate their fundamental values based on new information.

Current LLMs only learn from the data they've been trained on. All of their knowledge is fixed and immutable. Unlike actual humans, they cannot change their minds based on the conversations they have. Also, unless you provide the context of your previous conversations, they do not remember you either, and they have no ability to love or hate you (or really have any feelings whatsoever).

Also you haven't met enough autistic people. We aren't all like that.

I apologize, I did not mean to offend any actual autistic people with that. It's more like a caricature of what people who never met anyone with autism think autistic people are like because they've watched Rain Man once.

areyouevenreal ,

Yes, because they have been trained that way. Try arguing them out of these positions, they'll eventually just short circuit and admit they're a large language model incapable of holding such opinions, or they'll start repeating themselves because they lack the ability to re-evaluate their fundamental values based on new information.

You're imagining an average person would change their opinions based on a conversation with a single person. In reality people rarely change their strongly held opinions on something based on a single conversation. It normally takes multiple people expressing that opinion, people they care about. It happens regularly that a society as a whole can change its opinion on something and people still refuse to move their position. LLMs are actually capable of admitting they are wrong; not everyone is.

Current LLMs only learn from the data they've been trained on. All of their knowledge is fixed and immutable. Unlike actual humans, they cannot change their minds based on the conversations they have. Also, unless you provide the context of your previous conversations, they do not remember you either, and they have no ability to love or hate you (or really have any feelings whatsoever).

Depends on the model and company. Some ML models either learn continuously or are periodically retrained on interactions they have had in the field. So yes some models are capable of learning from you, though it might not happen immediately. LLMs in particular I am not sure about, but I don't think there is anything stopping you from training them this way. I actually think this isn't a terrible model for mimicking human learning, as we tend to learn the most when we are sleeping, and take into consideration more than a single interaction.
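That periodic-retraining idea can be sketched in miniature: log interactions while deployed, then fold them into the model in a batch "sleep" step. A toy frequency model stands in for the real training procedure here; for an actual LLM this step would be fine-tuning, not counting:

```python
from collections import Counter

class ToyModel:
    """A stand-in for a deployed model: answers from frequency counts."""
    def __init__(self):
        self.counts = Counter()
        self.log = []          # interactions seen since the last retrain

    def answer(self, question):
        self.log.append(question)          # remember, but don't learn yet
        return self.counts.most_common(1)  # current "knowledge" is frozen

    def retrain(self):
        # The batch "sleep" step: fold logged interactions into the model.
        self.counts.update(self.log)
        self.log.clear()

m = ToyModel()
m.answer("cats")
m.answer("cats")
print(m.answer("dogs"))  # knowledge still frozen: []
m.retrain()
print(m.answer("cats"))  # after the batch step: [('cats', 2)]
```

Between retrains the model behaves exactly like the frozen LLMs described below; learning only shows up after the batch step, which mirrors the "not immediately" caveat.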

I apologize, I did not mean to offend any actual autistic people with that. It's more like a caricature of what people who never met anyone with autism think autistic people are like because they've watched Rain Man once.

Then why did you say it if you know it's a caricature? You're helping to reinforce harmful stereotypes here. There are plenty of autistic people with very strongly held moral and emotional positions. In fact a strong sense of justice as well as black and white thinking are both indicative of autism.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

You're imagining an average person would change their opinions based on a conversation with a single person. In reality people rarely change their strongly held opinions on something based on a single conversation. It normally takes multiple people expressing that opinion, people they care about. It happens regularly that a society as a whole can change its opinion on something and people still refuse to move their position.

No, I am under no illusion about that. I've met many such people, and yes, they are mostly driven by herd mentality. In other words, they're NPCs, and LLMs are in fact perhaps a relatively good approximation of what their thought processes are like. An actual thinking person, however, can certainly be convinced to change their mind based on a single conversation, if you provide good enough reasoning and sufficient evidence for your claims.

LLMs are actually capable of admitting they are wrong, not everyone is.

That's because LLMs don't have any feelings about being wrong. But once your conversation is over, unless the data is being fed back into the training process, they'll simply forget the entire conversation ever happened and continue arguing from their initial premises.

So yes some models are capable of learning from you, though it might not happen immediately. LLMs in particular I am not sure about, but I don't think there is anything stopping you from training them this way. I actually think this isn't a terrible model for mimicking human learning, as we tend to learn the most when we are sleeping, and take into consideration more than a single interaction.

As far as I understand the process, there is indeed nothing that would prevent the maintainers from collecting conversations and feeding them back into the training data to produce the next iteration. And yes, I suppose that would be a fairly good approximation of how humans learn, except that in humans this happens autonomously, whereas in the case of LLMs it would presumably require manually structuring the data that's being fed back (although it might be interesting to see what happens if we give an AI the ability to decide for itself how it wants to incorporate the new data).

Then why did you say it if you know it's a caricature? You're helping to reinforce harmful stereotypes here.

Because I'm only human and therefore lazy and it's simply faster and more convenient to give a vague approximation of what I intended to say, and I can always follow it up with a clarification (and an apology, if necessary) in case of a misunderstanding. Also, it's often simply impossible to consider all potential consequences of my words in advance.

There are plenty of autistic people with very strongly held moral and emotional positions. In fact a strong sense of justice as well as black and white thinking are both indicative of autism.

I apologize in advance for saying this, but now you ARE acting autistic. Because instead of giving me the benefit of the doubt and assuming that perhaps I WAS being honest and forthright with my apology, you are doubling down on being right to condemn me for my words. And isn't that doing exactly the same thing you are accusing me of? Because now YOU're making a caricature of me by ignoring the fact that I DID apologize and provide clarification, but you present that caricature as the truth instead.

areyouevenreal ,

The first half of this comment is pretty reasonable and I agree with you on most of it.

I can't overlook the rest though.

I apologize in advance for saying this, but now you ARE acting autistic. Because instead of giving me the benefit of the doubt and assuming that perhaps I WAS being honest and forthright with my apology, you are doubling down on being right to condemn me for my words. And isn't that doing exactly the same thing you are accusing me of? Because now YOU're making a caricature of me by ignoring the fact that I DID apologize and provide clarification, but you present that caricature as the truth instead.

So would it be okay if I said something like "AI is behaving like someone who is extremely smart but because they are a woman they can't hold real moral or emotional positions"? Do you think a simple apology that doesn't show you have learned anything at all would be good enough? I was trying to explain why what you said is actually wrong, dangerous, and trying to be polite about it, but then you double down anyway. Imagine if I tried to defend the above statement with "I apologize in advance but NOW you ARE acting like a woman". Same concept with race, sexuality, and so on. You clearly have a prejudice about autistic people (and possibly disabled people in general) that you keep running into.

Like bro actually think about what you are saying. The least you could have done is gone back and edited your original comment, and promised to do better. Not making excuses for perpetuating harmful misinformation while leaving up your first comment to keep spreading it.

I didn't say you were being malicious or ignoring your apology. You were being ignorant though, and now stubborn to boot. When you perpetuate both prejudice and misinformation you have to do more than give a quick apology and expect it to be over; you need to show willingness to both listen and learn, and you have done the opposite. All people are products of their environment, and ableism is one of the least recognized forms of discrimination. Even well-meaning people regularly run into it, and I am hoping you are one of these.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

I regret that we are now having to spend our time on this when we otherwise had an interesting and productive conversation, but I can’t let that stand.

I was trying to explain why what you said is actually wrong, dangerous, and trying to be polite about it, but then you double down anyway.

You did not explain much at all, you just accused me of spreading harmful and dangerous stereotypes. What little explanation you did give (black and white thinking and strongly held beliefs), you immediately put into action by assuming the worst of me. My doubling down was therefore not based on bigotry or prejudice, but empirically observable facts (namely your actual behavior).

Like bro actually think about what you are saying. The least you could have done is gone back and edited your original comment, and promised to do better. Not making excuses for perpetuating harmful misinformation while leaving up your first comment to keep spreading it.

No, I’m not going to edit anything because I can live with the fact that I’ve made a mistake, for which I have offered both a correction and an apology, and I will respectfully and politely ask you once again to please accept it.

All I did in addition was point out that you’re not helping your own cause by exhibiting the very behavior you are blaming me for wrongly alleging. It’s like trying to teach someone that violence isn’t the way to solve your problems by beating them up.

You’re being ridiculous and unreasonable right now. Please stop it.

areyouevenreal ,

All I did in addition was point out that you’re not helping your own cause by exhibiting the very behavior you are blaming me for wrongly alleging.

You said autistic people can't have strong emotional or moral positions. I am doing both by arguing with you. Logic 101.

All I did in addition was point out that you’re not helping your own cause by exhibiting the very behavior you are blaming me for wrongly alleging. It’s like trying to teach someone that violence isn’t the way to solve your problems by beating them up.

What behavior? If I was exhibiting the behavior of not having strong morals or emotions I wouldn't still be doing this. In fact I am displaying the exact opposite of the behavior you are talking about.

At first I thought you were just slightly ignorant through no fault of your own. Now I am beginning to think you are being intentionally obtuse or just straight up trolling. Unless this is some sort of test. Do you think I am like ChatGPT? Is that what this is?

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

You said autistic people can't have strong emotional or moral positions.

Nope, I very clearly agreed with your position that they can and do hold such positions, except they don’t hold them for any reason other than having been deliberately programmed that way.

Basically, their convictions on these matters are not based on an ability to reason things through from first principles, but instead are due simply to an intentional bias in their training data. And in that way, they do indeed resemble an autistic personality, as you are continuing to demonstrate in this conversation.

What behavior? If I was exhibiting the behavior of not having strong morals or emotions I wouldn't still be doing this. In fact I am displaying the exact opposite of the behavior you are talking about.

Okay, see NOW you are showing signs of actual intelligence in a way that I would not expect from an LLM, because you seem to have realized that your previous approach didn’t work, and you are in fact trying something new. From my experience with LLMs, they simply cannot do that. If you press them too hard on their points they’ll either revert to circular reasoning or collapse into admitting they’re just LLMs and have no fundamental opinions or beliefs.

At first I thought you were just slightly ignorant through no fault of your own. Now I am beginning to think you are being intentionally obtuse or just straight up trolling. Unless this is some sort of test. Do you think I am like ChatGPT? Is that what this is?

You were indeed displaying a lot of the same behavior in your previous comments, and I suppose it was in fact a test to see if I could snap you out of it by making you realize that. So congratulations, it looks like you passed, and you are not in fact as autistic as you think you are :)

See, this was my point all along, that human beings have the ability to self-reflect on their behavior. And they can come up with new behavior on the spot if necessary, without requiring some sort of retraining phase. I have not observed this to be the case with LLMs.

areyouevenreal ,

Nope, I very clearly agreed with your position that they can and do hold such positions, except they don’t hold them for any reason other than having been deliberately programmed that way.

I mean my position that you hold a bigoted world view is based on reasoning though. You keep comparing autistic people to AI and saying we don't have reasons for our beliefs, as evidenced by the next paragraph:

Basically, their convictions on these matters are not based on an ability to reason things through from first principles, but instead are due simply to an intentional bias in their training data. And in that way, they do indeed resemble an autistic personality, as you are continuing to demonstrate in this conversation.

So you essentially keep calling autistic people less than human, then wondering why saying "I am sorry" once isn't good enough. It's like how you used the "I am only human" line earlier, as if being a bigot were just a simple mistake. Either this is bad faith reasoning or you're so bigoted as to be blind to the implications of what you are saying. Like I said, try describing any other marginalized group of people the way you describe autistic people and see how it sounds.

So congratulations, it looks like you passed, and you are not in fact as autistic as you think you are :)

I literally have the diagnosis to prove it, but clearly evidence doesn't matter to you. I should have realized this sooner. Then again, I shouldn't expect anything better from a community whose whole idea is based on complaining about AI. Sure, there are lots of misuses of AI, and it sucks to have your job taken away in a shitty capitalist world, but those are human problems, not AI or technology problems. They have human solutions. Becoming a neo-luddite solves nothing.

See, this was my point all along, that human beings have the ability to self-reflect on their behavior. And they can come up with new behavior on the spot if necessary, without requiring some sort of retraining phase. I have not observed this to be the case with LLMs.

Yes well done you understand how LLMs work. There are other systems that don't work this way and do continuous learning, like a human. We have already discussed this anyway.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

I mean my position that you hold a bigoted world view is based on reasoning though. You keep comparing autistic people to AI and saying we don't have reasons for our beliefs, as evidenced by the next paragraph:

Yes, it is based on reasoning, but it’s not from first principles, rather, it’s from your a priori assumptions that anyone who says anything even mildly critical about autistic people is doing so because of bigotry.

Meanwhile, I have repeatedly stated that this was not at all my intention, but that my goal was in fact demonstrating that even someone like you, with an official diagnosis of autism, has the ability to behave in ways that LLMs currently do not have.

If you were reasoning from first principles, you would at least consider this alternative as a potentially valid hypothesis, and compare the evidence I’ve given in support of it to the evidence that’s informed by your assumption of bigotry.

So you essentially keep calling autistic people less than human, then wondering why saying "I am sorry" once isn't good enough. It's like how you used the "I am only human" line earlier, as if being a bigot were just a simple mistake. Either this is bad faith reasoning or you're so bigoted as to be blind to the implications of what you are saying. Like I said, try describing any other marginalized group of people the way you describe autistic people and see how it sounds.

No, I keep calling LLMs less than human, and I compared their behavior to that of humans with a specific neurodevelopmental disorder (i.e. autism), which is commonly characterized by very similar patterns of behavior, namely deficits in reciprocal social communication along with restricted and repetitive patterns of behavior (this is straight from the Wikipedia page on autism BTW, before you accuse me of spreading misinformation again). You have amply demonstrated all of these in this conversation, for instance in your repeated insistence that the only valid explanation for my behavior is bigotry, and that the only possible remedy is for me to give up my position and agree with you, to say sorry as many times as it takes for you to believe me, and to promise never to repeat my behavior again.

I’m sorry, but I can’t do that. It restricts my freedom too much and it interferes with my curiosity, because it essentially is a demand of indefinite slavery and servitude to your expectations. You are, in a way, demanding that I behave more autistically myself in order for you to feel better about your own autism. Clearly, that will never cure you of your condition, and perhaps that’s not what you’re after anyways, but I’m certainly not going to help make the world more autistic so you can feel better about it.

Now here’s what I CAN offer: first, if this conversation is excessively difficult and painful for you to have, you can just say so and I’ll promise to stop. I won’t come after you and annoy you if I see you around this forum (or anywhere else on Lemmy), and I can put you on my blocklist to ensure it doesn’t happen by accident (or you can block me if you prefer to be more in control of that, up to you).

Alternatively, we can continue this conversation, and you can look for evidence of whether my behavior is in fact congruent with my stated intention that my goal was not to offend you, but to demonstrate that even someone like you, with an official diagnosis of autism, has abilities that exceed that of an LLM.

If you want, I can even offer an ongoing dialogue (via private messages perhaps) to share some strategies I’ve learned to deal with and overcome autism because while I’ve never received an official diagnosis, I have had many of the same symptoms in the past (still sometimes do), and I’m fairly certain I would have likely been diagnosed with it if my parents had bothered enough to get me to a shrink.

But either way, I am not going to let you abuse me for having had a slip of the tongue and saying something mildly offensive about autistic people. And please don’t drag the rest of the forum into it either because it’s not their fault that I accidentally misspoke.

areyouevenreal ,

Yes, it is based on reasoning, but it’s not from first principles, rather, it’s from your a priori assumptions that anyone who says anything even mildly critical about autistic people is doing so because of bigotry.

Ah, now I get it. We aren't actually working from the same framework. By reasoning from first principles I take it you mean rationality/logic? The problem with that is that mathematics, logic, reasoning, and so on can't actually prove anything about the world on their own. If we used pure logic, we'd have to conclude that no evidence can be definitive, since dreams, hallucinations, illusions and so on exist. The only conclusion you can really reach is that perhaps everything is made up and you can't be certain anything is real; in other words, solipsism.

That's where "first principles" come in, I guess. By first principles I assume you mean assumptions, since you won't get anywhere with logic without some kind of assumption. Since I don't know what your first principles are, I won't be able to follow your reasoning, as I'm probably starting from a different set of assumptions about the world.

Generally though, I don't believe logic/reasoning is a good tool for understanding people and people-related things like politics. It's good for bounded contexts with well-known state or rules, like computers or physical phenomena. Depending on your worldview, humans are either too poorly known and too complex for another human to apply logic to, or simply not logical to begin with. Since it's not an effective strategy, it's not something I'm interested in using on people. I suspect a lot of disagreements where people scream at each other that the other isn't being logical come from different assumptions rather than from one side being illogical.

No, I keep calling LLMs less than human, and I compared their behavior to that of humans with a specific neurodevelopmental disorder (i.e. autism), which is commonly characterized by very similar patterns of behavior, namely deficits in reciprocal social communication along with restricted and repetitive patterns of behavior (this is straight from the Wikipedia page on autism, BTW, before you accuse me of spreading misinformation again). You have amply demonstrated all of these in this conversation, for instance in your repeated insistence that the only valid explanation for my behavior is bigotry and that the only possible remedy is for me to give up my position and agree with you, to say sorry as many times as it takes for you to believe me, and to promise never to repeat my behavior again.

Okay, now you are saying things with at least some degree of scientific evidence. The evidence for everything else you have said up until now has been pretty much "I made it the fuck up". To be fair, psychology isn't a real science and diagnostic categories are largely based on intuition rather than neurobiological evidence, so you aren't that far off. The LLMs I have worked with have been much more demure: they fairly easily admit they made a mistake (and can probably be coerced into doing so even when they actually haven't), and are willing to reason about political positions very different from their own liberal bias. Pretty much the opposite of stubbornness and debate bros. By being stubborn I am, if anything, behaving less like an LLM, as LLMs haven't been stubborn in my experience. Maybe you have had a different experience; if so, I would like to hear it.

Also the restricted and repetitive behavior thing is about special interests/hyperfixation. It's not actually applicable here as far as I know.

Alternatively, we can continue this conversation, and you can look for evidence of whether my behavior is in fact congruent with my stated intention that my goal was not to offend you, but to demonstrate that even someone like you, with an official diagnosis of autism, has abilities that exceed that of an LLM.

Was it ever in doubt that an autistic person can beat an LLM? It wasn't for me. The fact you think it was is kind of offensive in and of itself.

I’m sorry, but I can’t do that. It restricts my freedom too much and it interferes with my curiosity, because it essentially is a demand of indefinite slavery and servitude to your expectations. You are, in a way, demanding that I behave more autistically myself in order for you to feel better about your own autism. Clearly, that will never cure you of your condition, and perhaps that’s not what you’re after anyways, but I’m certainly not going to help make the world more autistic so you can feel better about it.

I am not trying to restrict your freedom of speech here. What you do have to understand is that speech has consequences. For example I can do what I have been doing here and argue against you. Freedom of speech is not freedom from consequences.

Now here’s what I CAN offer: first, if this conversation is excessively difficult and painful for you to have, you can just say so and I’ll promise to stop. I won’t come after you and annoy you if I see you around this forum (or anywhere else on Lemmy), and I can put you on my blocklist to ensure it doesn’t happen by accident (or you can block me if you prefer to be more in control of that, up to you).

If you want, I can even offer an ongoing dialogue (via private messages perhaps) to share some strategies I’ve learned to deal with and overcome autism because while I’ve never received an official diagnosis, I have had many of the same symptoms in the past (still sometimes do), and I’m fairly certain I would have likely been diagnosed with it if my parents had bothered enough to get me to a shrink.

I don't think you have a modern understanding of neurodiversity or of neurotypes. A lot of things that were once thought to be limitations of autistic people weren't limitations of autistic people at all. For example, it was thought we lacked empathy by some psychologists (and still is), even though we now know of the double empathy problem. It's an incompatibility/communication issue, not an ability one. I would suggest you do some reading; then you might understand what I am getting at. It's also understood that there are some limitations neurotypical people have that autistic people do not. There was actually an interesting study which showed that NT people don't behave morally when they aren't being watched, unlike autistic people, who behave the same regardless of whether they are being watched or not. The thing you said about most people behaving like NPCs is potentially one of those limitations of neurotypicals I am talking about here.

It's a shame you haven't been evaluated if that's something you wished for. Do you mind telling me what symptoms you think you might have? I understand if that's not something you want to discuss publicly or with me in particular.

There is also a tactic where people ask if someone needs help in a disingenuous way as a form of ad-hominem attack, essentially calling someone crazy while trying to make it sound like they are legitimately concerned. I don't think you are doing this, at least not intentionally, but I hope you understand that it could be read this way.

But either way, I am not going to let you abuse me for having had a slip of the tongue and saying something mildly offensive about autistic people. And please don’t drag the rest of the forum into it either because it’s not their fault that I accidentally misspoke.

Have you been using text to speech? None of this, on my side at least, involves tongues or speaking. Yes, I know it's a turn of phrase, but it's a bad one. On a text forum you can reread, edit, and think about what you are saying much more easily than with real-life speech. I legitimately don't think they are comparable in behavioral or social terms, and there are social phenomena that happen online or in writing that don't in other areas of life.

You also haven't just said one offensive thing; when pressed, you kept saying offensive things. And it's not just that they are offensive: they seem to be based on misinformation, and you haven't given any evidence for them either.

Also abuse you? Calling someone a bigot isn't abuse. Pushing you down the stairs would be abuse. Calling you racial slurs would be abuse. Psychological manipulation would be abuse. I am not trying to do any of those here. If anything you are unintentionally abusing me.

If you want to take this to DMs that's fine by me. While I don't necessarily respect this forum (I mean it's titled "Fuck AI" for goodness sake), I do understand not wanting to waste other people's time and that this conversation is probably no longer relevant to this forum.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

Ah, now I get it. We aren't actually working from the same framework. By reasoning from first principles I take it you mean rationality/logic?

Yes, precisely.

The problem with that is that mathematics, logic, reasoning, and so on can't actually prove anything. If we use logic, we can determine that no evidence can be definitive, as things like dreams, hallucinations, illusions and so on exist. The only conclusion you can really reach is that perhaps everything is made up and you can't be certain anything is real; in other words, solipsism. That's where "first principles" come in, I guess.

Ah yes, the old Cartesian demon who holds your consciousness imprisoned in a dream world making you question whether or not you exist at all. That's actually a very good, if not the perfect example of what I mean. I'm not sure how familiar you are with Descartes' Meditations, but outside the well-known realization of "I think, therefore I am", the method by which he defeats said demon is actually precisely the sort of thing I'm proposing.

To be more practical, what I was trying to get at is basically the difference between having and being. Anything you have is likely to be temporary. Anything you are is likely to be constant. So you might ask yourself, are you autistic or do you have a condition called autism? If you can see the difference in perspective each statement offers, then you'll understand what I was on about.

You see, language is in fact even more basic a tool than reason and logic, because language is how we organize our perception of the world. Reason and logic simply arise out of language because language must have a certain structure in order to be meaningful at all and not just a random collection of words. LLMs clearly have the ability to learn that structure in a way that allows them to produce perfectly understandable sentences in any human language we choose to train them on, but they cannot really produce any good answers for questions that they haven't been specifically trained on.

Yes, they might still effectively hallucinate an answer anyways, and it might even sound correct, but unless you call them out on it when they start making stuff up, they won't even notice it happening. Clearly, they cannot actually reason through their own arguments, they simply produce something that imitates the human reasoning process well enough to pass muster approx. 90% of the time or so.

By first principles I assume you mean assumptions, as you won't get anywhere with logic without some kind of assumption. Since I don't know what your first principles are, I am not going to be able to follow your reasoning, as I would probably be starting with a different set of assumptions about the world.

As I tried pointing out above, a language model doesn't actually reason very well, it just imitates what humans do because it operates on prior knowledge acquired by its training. Meanwhile, humans have the ability, as Descartes' Meditations show, to throw away ALL of their prior assumptions about the world and start over from scratch, so to say, using only as much of their prior knowledge (i.e. the tools of language, logic, and reason) as strictly necessary, and in doing so, might reach new conclusions about the world that were previously inaccessible. Meanwhile an LLM will just make a wild-ass guess that seems to make sense, but often doesn't.

Generally, though, I don't believe logic/reasoning is a good tool for understanding people and things related to people, like politics. It's good for bounded contexts with a well-known state or rules, like computers or physical phenomena. Depending on your worldview, humans are either too poorly known and too complex for another human to apply logic to, or are simply not logical to begin with. Since it's not an effective strategy, it's not something I am interested in using on people. I suspect a lot of disagreements where people are screaming at each other that the other isn't being logical come from having different assumptions rather than one side being illogical.

Humans CAN be wildly illogical, that's true, but you can choose not to interact with such people (at least on the Internet, IRL it can of course sometimes be more difficult to do). Just like Descartes tests his demon, you can administer tests to them to see if they're willing to agree on some sort of shared ground rules for having a discussion that may be of mutual benefit, like we did in the previous comments and are continuing to do right now.

Again, a language model doesn't do that, it operates based on the rules it learned from its training corpus, and those are fairly fixed until you do another round of training that incorporates new information. Autism appears to be somewhat similar in that regard, in the sense that prior knowledge about how the world works (i.e. past experience) is overweighted in comparison to what's actually happening (i.e. current experience).

Okay, now you are saying things with at least some degree of scientific evidence. The evidence for everything else you have said up until now has been pretty much "I made it the fuck up". To be fair, psychology isn't a real science and diagnostic categories are largely based on intuition rather than neurobiological evidence, so you aren't that far off.

I'm not sure if it's worth getting lost in the weeds of debating whether psychology is a real science or not, so I'm going to suggest we don't pursue that train of thought at the moment.

The LLMs I have worked with have been much more demure: they fairly easily admit they made a mistake (and can probably be coerced into doing so even when they actually haven't), and are willing to reason about political positions very different from their own liberal bias. Pretty much the opposite of stubbornness and debate bros. By being stubborn I am, if anything, behaving less like an LLM, as LLMs haven't been stubborn in my experience. Maybe you have had a different experience; if so, I would like to hear it.

To be fair, anything either of us has to say on this matter would likely fall under the category of circumstantial evidence. I for one certainly haven't done anything that could be considered scientific in this regard, and I am merely operating based on my memory of conversations I have either personally had or have seen posted somewhere on the Internet.

Also the restricted and repetitive behavior thing is about special interests/hyperfixation. It's not actually applicable here as far as I know.

See my reasoning above for why I believe it DOES actually apply. I could be wrong, of course, but that's why I tried to explain how I arrived at this conclusion.

I don't think you have a modern understanding of neurodiversity or of neurotypes. A lot of things that were once thought to be limitations of autistic people weren't limitations of autistic people at all.

I will freely admit that I haven't spent a huge amount of time familiarizing myself with the latest research on this, and I'm likely approaching it from a very different angle than you are, which might explain some of our difficulties communicating about this subject.

For example, it was thought we lacked empathy by some psychologists (and still is), even though we now know of the double empathy problem. It's an incompatibility/communication issue, not an ability one. I would suggest you do some reading; then you might understand what I am getting at. It's also understood that there are some limitations neurotypical people have that autistic people do not.

That's very interesting, and seems to validate my intuitive belief that autism is a condition that makes certain types of cognition more difficult, but not entirely impossible. Which means that with the right meds and/or mental effort, it may be possible to overcome it or at least greatly reduce the severity of its symptoms.

There was actually an interesting study done which showed that NT people don't behave morally when they aren't being watched, unlike autistic people who behave the same regardless of if they are being watched or not. The thing you said about most people behaving like NPCs is potentially one of those limitations of neurotypicals I am talking about here.

I have some interesting thoughts about that one, but it would require a rather lengthy explanation on where I'm coming from, so perhaps I'm going to save them for another time.

It's a shame you haven't been evaluated if that's something you wished for. Do you mind telling me what symptoms you think you might have? I understand if that's not something you want to discuss publicly or with me in particular.

I'm pretty sure I have had all the symptoms I mentioned from the Wikipedia page at one time or another, and I continue to struggle with them from time to time. I also find it hard to make friends, because most people seem to find my way of communicating exceedingly difficult, while I have had great difficulties with their tendency to make small talk.

That said, not sure what a diagnosis would do for me now, unless I was trying to get on disability benefits, perhaps. While it might have helped make my life a bit easier in the past, I'm somewhat concerned getting diagnosed now would just turn into an easy excuse for not making an effort.

There is also a tactic where people ask if someone needs help in a disingenuous way as a form of ad-hominem attack, essentially calling someone crazy while trying to make it sound like they are legitimately concerned. I don't think you are doing this, at least not intentionally, but I hope you understand that it could be read this way.

I'm certainly familiar with this tactic, but I don't think it HAS to necessarily be used nefariously, as it could just serve as a conversation starter. Perhaps it's a bit like asking someone to coffee after smashing a brick through their window, but I hope I have demonstrated enough sincerity so far as to not be credibl

areyouevenreal ,

That's very interesting, and seems to validate my intuitive belief that autism is a condition that makes certain types of cognition more difficult, but not entirely impossible. Which means that with the right meds and/or mental effort, it may be possible to overcome it or at least greatly reduce the severity of its symptoms.

That's not exactly what I am trying to say, though it is half-right, and I will explain more in a second.

What I was talking about is things like the double empathy problem and some other things that happen in communication between autistic and neurotypical people. The double empathy problem and similar issues can be explained thus: autistic people have no issue communicating or empathising with other autistic people. NT people have no issue communicating or empathising with other NT people. Problems only arise when NT and autistic people try to communicate or empathise with each other. Both sides have been shown to struggle; that's why it's called the double empathy problem. It's not that autistic people are deficient, any more than NT people are deficient for not being able to empathise with autistic people. They simply don't work the same way, much like oil and water are bonded differently and don't mix well.

As for what you are saying: yes, there are some things autistic people struggle with cognitively or emotionally, but there are also areas where we do better than NT people cognitively. I don't think it's really fair to call one group defective for being less effective at certain tasks while the other group is less effective at different ones.

That's where we get into ideas like neurodiversity: the idea that humans are meant to have multiple neurotypes with different sensory, communication, and cognitive abilities. This may have happened to fill some evolutionary role, much like early birds vs night owls, or the different traits of men vs women. Maybe we shouldn't be medicalising parts of the human race just because they aren't average.

There have been theories and ideas and philosophies that attempt to replace or extend the concept of neurodiversity, and I won't go into all of them here. Let's just say though that this stuff is a lot more contested and complicated than just "autism is a disease". It might even be like sickle cell anemia, where carrying the genes protects you from malaria at the cost of some people being disabled by sickle cells.

It's not even completely clear that everything we call the autism spectrum today is actually all the same thing. It's also possible that things like schizophrenia and ADHD, which we know are at least connected, might be considered part of the same spectrum as what we call autism today. Does that make sense?

I will have a go at responding to some of this tomorrow. I have to go do stuff and then get to bed as it's like 4am where I am.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

Does that make sense?

Yes, that does make sense. And no, it is not my intention to continually pathologize autism as a defect. If LLMs are useful despite their obvious deficiencies, why wouldn't autistic people be? It's kind of sad yet ironic that NT society, after mostly abandoning and/or abusing autistic people, has now decided to spend hundreds of millions of dollars to create what could be considered a simulation of autistic intelligence, when it could have spent that money on autism research and finding better ways to integrate these people into society.

There IS in fact a very good case to be made that it's NT people who are defective, or at least deficient in ways that NA people are not, and that both could benefit from a better integration. At the risk of opening yet another potentially contentious topic, I've heard it being speculated that NA people are often of the priest or shaman archetype, i.e. the reason they have a hard time fitting into normal society is that they were meant to become religious mystics instead of ordinary workers, but in its relentless pursuit of profit, society has cast them aside instead of integrating them, and is now paying the price by becoming increasingly greedy, hostile, and directionless. This would certainly fit with your idea that it is a kind of adaptation that comes with both a blessing and a curse.

I hope that I'm not triggering another trauma-based response here, because Christianity seems to upset a lot of people on this site, but consider by way of example, the story of Moses and the Israelites in the desert: clearly Moses is neuro-atypical when compared to the rest of the Israelites, because he can speak to God directly, while the rest of them cannot. All they are concerned with is having enough food and water, and they don't care where it comes from – so much so, they even long to return to their days of slavery because at least they had something to eat back then. They clearly can't see the bigger picture, they have no awareness of the dangers that slavery puts them in, and they can hardly imagine the benefits of a life lived in freedom instead of servitude.

I'm not trying to convert you here, but I have indeed found great solace and healing in studying religion and mysticism as a sort of counterweight to the heavy burden of having had to earn my way in life by trying to be commercially productive for eight hours a day. I also find that when I do so without concerning myself with the dogmata of any existing church, the mysteries seem to open up in ways that I could not see before. Of course, this sort of endeavor is highly dangerous to TPTB, so it tends to cause massive anxiety, but I'm at a point in my life where that seems preferable to anger, depression and resignation.

areyouevenreal ,

And no, it is not my intention to continually pathologize autism as a defect.

Haven't you just spent ages doing exactly that?

I think religious mysticism is associated with schizophrenia specifically. There was actually a great TED talk about the role of schizophrenic people in societies of the past. Sadly, though, I am not of the opinion that religion is a force for good in modern society. It's been used to control people an awful lot, and ultimately it creates distortions in the way you see reality. Some religions are worse than others, obviously, but I don't think any are truly good. Religion is frequently used as a reason to keep people in slavery, not to remove them from it.

I think the mainstay of autistic people in current society seems to be as scientists, computer workers, and academics; occasionally musicians, artists, and performers, though those often aren't treated that well in society, unfortunately. Many end up unemployed, in prison, in social housing, and so on.

We need to find a way to make a society that benefits everyone instead of just the people at the top of the hierarchy. Doing that is extremely difficult. Society is full of alignment problems where what's best for you is harmful for everyone else, especially for those at the top. There is a reason people at the top of society have more ASPD traits than others. Theoretically, people with ASPD (those who used to be termed sociopaths and psychopaths) used to play, and probably still could play, an actual positive role in society, but because of alignment issues they are funneled into prison, the military, or roles as business leaders and politicians, where they do untold damage to the general population.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

Yes, that TED talk is probably where I got this idea from. And I agree on most other points except that religion COULD be a force for good, and insofar as it is currently not, it is in need of new leadership. Sociopaths and psychopaths indeed seem to have a knack for infiltrating positions of power and there's no reason why religion should be immune to that.

Anyways, just thought I'd throw that out there. Not gonna get into details of what needs to change because that's likely very personal and sure to get contentious. Looking forward to your response on my other comments though.

areyouevenreal ,

I thought you supported logic? Why are you now supporting things like religion which attempt to distort reality and are inherently illogical?

Organized religion is even more dangerous than simply believing in god(s). Any position with that kind of power inevitably ends in cult-like behaviour and other abuses of power (see Catholic priests, and Catholicism in general). In a perfect world, it's not something anyone would be engaging in.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

I do support logic – it's a wonderful tool, but it's not sufficient in and of itself to live by because it can be excessively cruel. If you think about it, there is no logical reason why you should be alive – no scientist has yet been able to give an explanation for the universe or life itself to exist that doesn't somehow leave a kernel of irreducible irrationality.

And no, that doesn't mean you have to follow any organized religion – in fact, that's not at all what I was suggesting. I merely said that there may be value in studying religious scripture for yourself without adhering to an established dogma. If Christianity rubs you the wrong way, perhaps try Buddhism, which puts a stronger emphasis on putting every one of its teachings to the test (basically, the Buddha himself said not to follow him blindly but merely to try out the things he suggests and observe if they make a difference in your life, meaning it's perfectly acceptable to use the scientific method in your pursuit of it, as long as you apply it with full integrity).

Long story short, I think it's a mistake to assume that a perfect world would be one of perfect rationality, because such a world would be too cold and boring to live in. There has to be a source of randomness left in it because otherwise, nothing new would ever happen, and without renewal, the only possible destination is death.

areyouevenreal ,

Ah yes, the old Cartesian demon who holds your consciousness imprisoned in a dream world making you question whether or not you exist at all. That’s actually a very good, if not the perfect example of what I mean. I’m not sure how familiar you are with Descartes’ Meditations, but outside the well-known realization of “I think, therefore I am”, the method by which he defeats said demon is actually precisely the sort of thing I’m proposing.

I haven't read the Meditations, but from what I have heard he defeats solipsism through an appeal to God. Appealing to a being you have no evidence for is not empirical or logical in my eyes. This increasingly makes me think

Humans CAN be wildly illogical, that’s true, but you can choose not to interact with such people (at least on the Internet, IRL it can of course sometimes be more difficult to do). Just like Descartes tests his demon, you can administer tests to them to see if they’re willing to agree on some sort of shared ground rules for having a discussion that may be of mutual benefit, like we did in the previous comments and are continuing to do right now.

There is a difference between being able to perform logic, either verbally or to solve a specific situation or puzzle, and actually being logical in general. Plenty of people can act logically in one scenario, then spend most of their lives doing the exact opposite.

This actually ties well into talking about autistic people, as some of us are highly logical, to the point of seeming unemotional and cold. Others are not rational at all and are highly emotional. I suspect one could theoretically occupy different extremes at different times in their life or under different conditions. As someone who used to be of the more logical variety, I will tell you now that people are not logical entities in general, and treating them as such only made working with them more difficult. I am beginning to think you don't actually have the people skills to see this.

To be more practical, what I was trying to get at is basically the difference between having and being. Anything you have is likely to be temporary. Anything you are is likely to be constant. So you might ask yourself, are you autistic or do you have a condition called autism? If you can see the difference in perspective each statement offers, then you'll understand what I was on about.

The autistic community has spent some time pushing for identity-first language, such as saying "autistic people" instead of "people with autism". While I do understand the differences in perspective the statements offer, I still don't really get what you are on about. A lot of what you have said has been fairly condescending, using non-identity-first language and the over-medicalised language that the autistic community has worked hard to get rid of.

I really don't think you understand what special interests/hyperfixations, stimming, echolalia, and so on are. Those are the examples of "restricted interests" and "repetitive behaviors". I made the same statement repeatedly as a result of you saying things which show your ignorance of neurodivergence in general and the autistic community specifically.

Again, a language model doesn't do that, it operates based on the rules it learned from its training corpus, and those are fairly fixed until you do another round of training that incorporates new information. Autism appears to be somewhat similar in that regard, in the sense that prior knowledge about how the world works (i.e. past experience) is overweighted in comparison to what's actually happening (i.e. current experience).

See, now this kind of makes sense, though it isn't necessarily the same as how LLMs manifest this. Some autistic people cling to sameness and things they have experience with, and avoid novelty. LLMs can't avoid novelty; they just don't always respond well when it happens. There are cases of autistic people applying something that worked previously to a new scenario and failing, but so do most NT people, funnily enough. Everybody has some degree of established coping mechanisms. I would hazard a guess that the reason autistic people are known for this is their choice of coping mechanisms being unusual, more than their repeating past strategies in and of itself, as NT people are also prone to keep using maladaptive coping mechanisms long after they have stopped being effective. Trying to generalize from a previous situation isn't illogical either; the illogical part is sticking to it long after it's clear it's not effective.

Which means that with the right meds and/or mental effort, it may be possible to overcome it or at least greatly reduce the severity of its symptoms.

FYI, you don't and can't overcome autism. It's an inherent characteristic like being male/female, having a missing leg, being black vs white, etc. It comes down to brain structure and genetics. There is limited medication for autism specifically, but even for labels like ADHD, where more medications are effective, they don't eliminate the condition any more than giving someone a prosthetic stops them from having a missing leg, or covering someone in paint could make them black. ADHD meds also don't stop all ADHD symptoms; they reduce some for a certain time, but they can also trigger new psychiatric and physical symptoms.

This is why I am saying you are ignorant, and being unintentionally offensive, because even if you have some autistic traits, you haven't actually spent time interacting with the community or the content and ideas they produce.

You say you have had some strategies for "overcoming" problems associated with autism. Aside from this being a very white knight type statement to make, I am interested in exactly what you are talking about. There is a fair bit of bad advice out there, and some "medical" treatments that turned out to do way more harm than good over the decades (ABA anyone?). I am somewhat concerned that you could cause damage to yourself or somebody else.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

I haven't read Meditations, but from what I have heard he defeats solipsism through an appeal to god. Appealing to a being you have no evidence for is not empirical or logical in my eyes.

It's been a while since I worked through them myself, but IIRC he does so by observing a sense of continuity in his experience. Specifically, he watches the flame of a candle and notices that it keeps burning more or less undisturbed, turning the hard wax into liquid and eventually consuming it. A thoroughly evil demon obviously would not allow something like this, which gives him reason to believe that either the demon does not exist, or he is at least not thoroughly evil.

There is a difference between being able to perform logic either verbally or to solve a specific situation or puzzle, and actually being logical in general. Plenty of people can act logically in one scenario, then spend most of their lives doing the exact opposite.

Right. This is basically what I referred to in my other comment, that logic is a great tool but alone, it is not sufficient in order to live life, and that consequently, there might be value to allowing a certain amount of irrationality to exist. And perhaps this is something that overly rational people (like those with autism) can learn from NT people, who seem to be able to manage to live just fine in a world where not everything is perfectly explainable.

This actually ties well into talking about autistic people, as some of us are highly logical, to the point of seeming unemotional and cold. Others are not rational at all and are highly emotional. I suspect one could theoretically occupy different extremes at different times in their life or under different conditions. As someone who used to be of the more logical variety, I will tell you now that people are not logical entities in general, and treating them as such only made working with them more difficult. I am beginning to think you don't actually have the people skills to see this.

I'm certainly guilty of clinging too much to rationality as a way to see and explain the world, and to that extent you are right – there are skills I am lacking when it comes to dealing with people, and it frequently seems to come down to dealing with their irrational impulses, which often tend to make me anxious or afraid. However, this appears to be an argument for religion if anything – at least to me, it strongly calls to mind Galatians 3:11:

But that no one is justified by the law in the sight of God is evident, for “the just shall live by faith.”

If we assume that "the law" means logic in this case, then this is simply saying that you cannot live by logic alone, and you must accept some irrationality in order to make it – in other words, some unproven belief, such as "God not only exists, but He is fundamentally good and does not want me to perish despite all evidence pointing to the opposite at the moment".

The autistic community has spent some time pushing for identity-first language, such as saying autistic people instead of people with autism. While I do understand the differences in statements I still don't really get what you are on about. A lot of what you have said has been fairly condescending, using non-identity-first language, and over-medicalised language that the autistic community has worked hard to get rid of.

Well, like I said before, I cannot promise to never say anything hurtful or offensive, all I can do is ask for mercy when I do, and continue to work as hard as I can on demonstrating that I don't do so from a place of hatred or ill will. In that regard, I shall take your feedback to heart and simply observe that we seem to have a disagreement here, but I will refrain from pressing the issue.

I really don't think you understand what special interesests/hyperfixations, stimming, echolalia, and so on are. Those are examples of "restricted interests" and "repetitive behaviors". I made the same statement repeatedly as a result of you saying things which show your ignorance of neurodivergence in general and the autistic community specifically.

Like I said, I haven't met anyone with an official diagnosis of autism IRL, so you are probably correct here. All I can say is that I have observed similar behaviors in myself, and that my parents' occasionally forceful attempts to shut them off haven't proven particularly effective, so if I have said anything that might imply that autistic people could simply choose not to do it, I'd like to apologize for that.

See now this kind of makes sense, though this isn't necessarily the same as how LLMs manifest this. Some autistic people cling to sameness and things they have experience with, and avoid novelty. LLMs can't avoid novelty, they just don't always respond well when it happens. There are cases of autistic people using something in a new scenario that worked previously and failing when exposed to novelty, but so do most NT people funnily enough. Everybody has some degree of established coping mechanisms. I would hazard a guess that the reason autistic people are known for it is their choice of coping mechanisms being unusual more than them repeating past strategies and coping mechanisms in and of itself, as NT people are prone to keep using maladaptive coping mechanisms long after they stopped being effective too. Trying to generalize something from a previous situation isn't illogical either, the illogical part is sticking to it long after it's clear it's not effective.

That's a good point, and it seems to provide some evidence for my suggestion that a perfectly rational world is impossible, because without a source of randomness, we would all be cursed to living entirely predictable lives for all eternity.

Fyi you don't and can't overcome autism. It's an inherent characteristic like being male/female, having a missing leg, being black vs white, etc. It comes down to brain structure and genetics. There is limited medication for autism specifically, but even for labels like ADHD where more medications are effective, they don't eliminate the condition any more than giving someone a prosthetic stops them from having a broken leg or covering someone in paint could make them black. ADHD meds also don't exactly stop all ADHD symptoms, they reduce some for a certain time, but they can also trigger new psychiatric and physical symptoms.

Perhaps you can't, but does that have to mean you shouldn't even try? Insofar as I have similar symptoms, I certainly tend to find them excruciatingly difficult to bear at times, and I would literally give anything in order to be relieved from them. Therefore I personally find it necessary to ignore such statements in order not to crush my hopes of one day being free from this burden. I'm not suggesting that you have to do the same, all I'm saying is that it works for me.

This is why I am saying you are ignorant, and being unintentionally offensive, because even if you have some autistic traits, you haven't actually spent time interacting with the community or the content and ideas they produce.

That is a valid and fair criticism, and the only defense I have to offer is the point I've made above – basically, insofar as there IS a sense of fatalism within the community (i.e. a belief that "we'll be stuck with this forever"), I am wont to reject it. And I DO in fact have some valid evidence for this, even if it only comes in the form of personal experience, because I have been able to achieve far more than I ever thought possible as a result of ignoring such thoughts for a while. However, I also ended up paying a heavy price for this, so I'm certainly not going to pretend that I have all the answers, or suggest that anyone follow my example.

You say you have had some strategies for "overcoming" problems associated with autism. Aside from this being a very white knight type statement to make, I am interested in exactly what you are talking about. There is a fair bit of bad advice out there, and some "medical" treatments that turned out to do way more harm than good over the decades (ABA anyone?). I am somewhat concerned that you could cause damage to yourself or somebody else.

Well, I suppose the best advice I have is to try not to be fatalistic about the situation, but to continually try and look for ways to extract some sort of good from it all, even if it seems excessively difficult at times. Personally, I found that reframing it from identity-based language (i.e. "I am autistic") to non-identity based statements (i.e. "I have a disease called autism") helps me in that regard, especially since "disease" can further be reframed as "dis-ease" (i.e. something that merely indicates having difficulty instead of impossibility). If that doesn't align with the current medical advice, then I apologize for getting your hopes up, and if that further means you won't be interested in continuing a conversation, I totally understand, and will additionally apologize for wasting your time.

areyouevenreal ,

It’s been a while since I worked through them myself, but IIRC he does so by observing a sense of continuity in his experience. Specifically, he watches the flame of a candle and notices that it keeps burning more or less undisturbed, turning the hard wax into liquid and eventually consuming it. A thoroughly evil demon obviously would not allow something like this, which gives him reason to believe that either the demon does not exist, or he is at least not thoroughly evil.

This is more picking apart the particular framing than actually addressing the problem of framing. Maybe the demon isn't evil but constructing a simulation for your own good or for the good of others. Who knows you could even be the dangerous/evil one in this scenario. Maybe the simulation is a way to keep you contained while still having some kind of life.

Perhaps you can’t, but does that have to mean you shouldn’t even try? Insofar as I have similar symptoms, I certainly tend to find them excruciatingly difficult to bear at times, and I would literally give anything in order to be relieved from them. Therefore I personally find it necessary to ignore such statements in order not to crush my hopes of one day being free from this burden. I’m not suggesting that you have to do the same, all I’m saying is that it works for me.

I am curious what kind of symptoms you are talking about? I haven't had anything that problematic that's completely attributable to autism. In fact a lot of problems I have had could be other disorders I haven't been diagnosed with yet, or are attributable to the situation and world I have found myself in. I've had to deal with a lot of immature people and assholes in my time, and some people who were honestly suffering and couldn't help themselves, so ended up making it other people's problem (intentionally or otherwise). Sure, that might be easier for a neurotypical to deal with, but that doesn't mean I am at fault or that autism is the problem there.

It also sounds like you could be masking here. Masking isn't a great strategy and could be part of the reason you are suffering. You may want to read up on this phenomenon for your own good. Being able to "overcome" (i.e. suppress) a symptom for a given length of time isn't really evidence that you have found a way to beat autism, any more than walking on a broken leg heals the broken leg, it just makes it worse in the long run.

Well, like I said before, I cannot promise to never say anything hurtful or offensive, all I can do is ask for mercy when I do, and continue to work as hard as I can on demonstrating that I don’t do so from a place of hatred or ill will. In that regard, I shall take your feedback to heart and simply observe that we seem to have a disagreement here, but I will refrain from pressing the issue.

I've done and said things that are also ignorant or bigoted before. It's not like I am claiming to be perfect in any way. The important thing is realising when you have made mistakes and doing better next time. Saying nuh uh that isn't bigoted, and also I hate that word, then doubling down isn't a good thing. Maybe you don't do too well learning that maybe you're the bad guy. Which isn't really even the case, it's not your fault you weren't educated on these things very well. In fact a lot of this conversation makes me think "the system" and probably your parents too have failed you big time, and that you need some kind of help.

I think you haven't had the kind of support, education, and therapy you need as many of the undiagnosed haven't, and that you might want to go and do something to rectify this.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

This is more picking apart the particular framing than actually addressing the problem of framing. Maybe the demon isn't evil but constructing a simulation for your own good or for the good of others. Who knows you could even be the dangerous/evil one in this scenario. Maybe the simulation is a way to keep you contained while still having some kind of life.

Well, the impression I had was that even just proving that the demon (if he indeed existed) wasn't entirely evil was already enough to dispel him completely, and here's why:

Let's assume, for the sake of argument, that the demon IS thoroughly evil and simply allows you to have a short experience of continuity because he enjoys the sadistic pleasure of you getting your hopes up only to crush them again when he removes it. Would that not be a torture worse than complete uncertainty and delusion?

On first examination, one might say yes, but then again, even if that candlelight is all you ever get, it's certainly better than eternal darkness or terror. So as frustrating as the situation might be IF that was all you'd ever get, I'd argue that the sadism is less evil than no continuity whatsoever. A perfectly evil demon could certainly not allow this to happen, because each time you have that experience, you could use it to illuminate more of his work, and pretty soon you might end up kindling a fire big enough to dispel him entirely, at least for a while.

And isn't life kinda like that, ultimately? Some days you suffer and others you can't do no wrong, some days you're at peace and others you're at war. But even the most blessed among us aren't spared hard times, and the best you can hope for is to receive pain and pleasure in equal and manageable proportions.

I am curious what kind of symptoms you are talking about? I haven't had anything that problematic that's completely attributable to autism. In fact a lot of problems I have had could be other disorders I haven't been diagnosed with yet, or are attributable to the situation and world I have found myself in. I've had to deal with a lot of immature people and assholes in my time, and some people who were honestly suffering and couldn't help themselves, so ended up making it other people's problem (intentionally or otherwise). Sure, that might be easier for a neurotypical to deal with, but that doesn't mean I am at fault or that autism is the problem there.

My biggest issue by far has been social interaction, which never really came easy to me. I often either miss social cues entirely or misinterpret them, and I have a strong tendency to overanalyze, as well as occasionally blurt out inappropriate things. In particular, I seem to have a knack for pointing out things that people don't want to hear (as perhaps you might have noticed) – and it's often not so much that they are fundamentally untrue, but that they require a generous amount of diplomacy to communicate without coming across excessively offensive.

It also sounds like you could be masking here. Masking isn't a great strategy and could be part of the reason you are suffering. You may want to read up on this phenomenon for your own good. Being able to "overcome" (i.e. suppress) a symptom for a given length of time isn't really evidence that you have found a way to beat autism, any more than walking on a broken leg heals the broken leg, it just makes it worse in the long run.

Yeah, that's very likely the case, because my parents were unfortunately not particularly helpful in coaching me towards better social behavior. They often took just as much offense at my words as random people did, and instead of teaching me how to make my points in a more measured or diplomatic manner, they would simply tell me not to talk like that ever, period.

It's taken me a long time to realize that this self-censorship wasn't very helpful either, and even longer to dig out my original personality from underneath the rubble in order to find ways to communicate more honestly, but without repeating the mistake of simply blurting it out. It's an ongoing project for me, and this conversation is hopefully a good testimony to that.

I've done and said things that are also ignorant or bigoted before. It's not like I am claiming to be perfect in any way. The important thing is realising when you have made mistakes and doing better next time. Saying nuh uh that isn't bigoted, and also I hate that word, then doubling down isn't a good thing. Maybe you don't do too well learning that maybe you're the bad guy. Which isn't really even the case, it's not your fault you weren't educated on these things very well. In fact a lot of this conversation makes me think "the system" and probably your parents too have failed you big time, and that you need some kind of help.

I appreciate you for saying that. And yes, my parents probably did fail me, but everyone's parents eventually do. In my case, it unfortunately was compounded by the fact that my whole extended family, as well as their church (which should have acted as a secondary support system) failed me as well. Perhaps society did, too, but at that point in time I did not want to risk being disappointed again so I did not even try to rely on them for support.

I think you haven't had the kind of support, education, and therapy you need as many of the undiagnosed haven't, and that you might want to go and do something to rectify this.

You're probably right, but I honestly wouldn't even know where to start.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

(Looks like I hit the character limit for responses, continuing here from the previous comment)

Perhaps it's a bit like asking someone to coffee after smashing a brick through their window, but I hope I have demonstrated enough sincerity so far as to not be credibly accused of being a troll.

Have you been using text to speech? None of this on my side at least involves tongues or speaking. Yes I know it's a turn of phrase, but it's a bad one. With a text forum you can reread, edit, and think about what you are saying much more easily than with real-life speech. I legitimately don't think they are comparable in behavioral or social terms, and there are social phenomena that happen online or in writing that don't in other areas of life.

That's an interesting thought, but no, I have not, and I did indeed just use the term metaphorically. Yes, I am aware that this isn't realtime communication and I can in fact take the time to try and edit each comment to complete perfection, but I find that more often than not, when doing so, I end up spending too much time getting lost in the weeds and ultimately never sending it because there's always more to say or a better way to say it.

At some point, you either have to decide that what you have is good enough (and be prepared to deal with the consequences in case it wasn't), or you'll end up with analysis paralysis. I have a lot of experience with the latter, so I'm simply choosing the former more often because that gives me the opportunity to practice a skill that needs improving.

You also haven't just said one offensive thing, when pressed you kept saying offensive things. It's also not just that they are offensive either, it's that they seem to be based on misinformation and you haven't given any evidence for them either.

Again, I apologize for that, but reactive behavior can be difficult to control, as I'm sure you are aware. Unfortunately I cannot promise to never say anything offensive without shutting down completely, the best I can do is try to work twice as hard on demonstrating earnestness and goodwill.

Also abuse you? Calling someone a bigot isn't abuse. Pushing you down the stairs would be abuse. Calling you racial slurs would be abuse. Psychological manipulation would be abuse. I am not trying to do any of those here. If anything you are unintentionally abusing me.

If making a comparison between autism and AI is abuse in your book, I don't see how calling someone a bigot isn't. I may have unintentionally said something bigoted, but that doesn't mean that bigotry is all there is to me. It's dehumanizing in the same way, especially since I immediately offered a correction and an apology.

If you want to take this to DMs that's fine by me. While I don't necessarily respect this forum (I mean it's titled "Fuck AI" for goodness sake), I do understand not wanting to waste other people's time and that this conversation is probably no longer relevant to this forum.

Let's keep it public for now, for the sake of accountability. Also, while I agree that the chance is small that anyone else here is actually interested in following this discourse, it's certainly not zero. It used to be one of my favorite things about the Internet when I was younger to stumble on interesting rabbit holes like this one, and I have some fond memories of reading long essays, blog posts, or discourses about topics that I only had a superficial interest in, but that ended up completely changing my perspective on things, and I hope to achieve something similar here.

JackGreenEarth ,
@JackGreenEarth@lemm.ee avatar

How would we even know if an AI is conscious? We can't even know that other humans are conscious; we haven't yet solved the hard problem of consciousness.

JoeBigelow ,
@JoeBigelow@lemmy.ca avatar

Does anybody else feel rather solipsistic or is it just me?

TexasDrunk ,

I doubt you feel that way since I'm the only person that really exists.

Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn't a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.

I don't really know why it started, when it stopped, or why it stopped, but it's weird looking back on it.

SuddenDownpour ,

Andrew Tate has convinced a ton of teenage boys to think the same, apparently. Kinda ironic.

nilloc ,

Puberty is rough, for some people it’s your body going “mate, mate, mate” and not much else gets through for 4-5 years, or like 8 maybe.

I’m about the same age. (Xellennials or some shit like that, apparently). And at the time, there was also a big movement in media and culture to sell more shit to people our age (we’d also been slammed with toy and cereal ads as kids in the 80s). MTV was switching to all reality bullshit and Clinton was boinking anything that moved. We were doomed to only think about ourselves.

The problem is that a bunch of them never outgrew it, or made it their “brand” like Tate and his ilk.

lvxferre ,

A Cicero a day and your solipsism goes away.

Rigour is important, and at the end of the day we don't really know anything. However this stuff is supposed to be practical; at a certain arbitrary point you need to say "nah, I'm certain enough of this statement being true that I can claim that it's true, thus I know it."

JoeBigelow ,
@JoeBigelow@lemmy.ca avatar

Descartes has entered the chat

lvxferre ,

Descartes

Edo ergo caco. Caco ergo sum! [/shitty joke]

Serious now. Descartes was also trying to solve solipsism, but through a different method: he claims at least some sort of knowledge ("I doubt thus I think; I think thus I am"), and then tries to use it as a foundation for more knowledge.

What I'm doing is different. I'm conceding that even radical scepticism, a step further than solipsism, might be actually correct, and that true knowledge is unobtainable (solipsism still claims that you can know that yourself exist). However, that "we'll never know it" is pointless, even if potentially true, because it lacks any sort of practical consequence. I learned this from Cicero (it's how he handles, for example, the definition of what would be a "good man").

Note that this matter is actually relevant in this topic. We're dealing with black box systems, that some claim to be conscious; sure, they do it through insane troll logic, but the claim could be true, and we would have no way to know it. However, for practical matters: they don't behave as conscious systems, why would we treat them as such?

JoeBigelow ,
@JoeBigelow@lemmy.ca avatar

I'm either too high or not high enough, and there's only one way to find out

lvxferre ,

Try both.

I don't smoke but I get you guys. Plenty times I've had a blast discussing philosophy with people who were high.

archon ,

13 year old me after watching Vanilla Sky:

zeekaran ,

Underrated joke

lvxferre ,

Let's try to skip the philosophical mental masturbation, and focus on practical philosophical matters.

Consciousness can be a thousand things, but let's say that it's "knowledge of itself". As such, a conscious being must necessarily be able to hold knowledge.

In turn, knowledge boils down to a belief that is both

  • true - it does not contradict the real world, and
  • justified - it's built around experience and logical reasoning

LLMs show awful logical reasoning*, and their claims are about things that they cannot physically experience. Thus they are unable to justify beliefs. Thus they're unable to hold knowledge. Thus they don't have consciousness.

*Here's a simple practical example of that:

https://mander.xyz/pictrs/image/f5651f93-3c7f-4d68-ad0d-4c704601a28f.png

explodicle ,

The example might just be to prevent lawsuits.

lvxferre ,

The example might just be to prevent lawsuits.

Nah. It's systematic.

feedum_sneedson ,

And get down to the actual masturbation! Am I right? Of course I am.

lvxferre ,

Should'n've called it "mental masturbation"... my bad.

By "mental masturbation" I mean rambling about philosophical matters that ultimately don't matter. Such as dancing around the definitions, sophism, and the likes.

lvxferre ,

[Replying to myself to avoid editing the above]

Here's another example. This time without involving names of RL people, only logical reasoning.
https://mander.xyz/pictrs/image/b09e9cf7-4203-41af-9210-64eb8d41a77e.png

And here's a situation showing that it's bullshit:
https://mander.xyz/pictrs/image/9a853cef-5db5-43fc-b4fc-ca64d05ee265.png
All A are B. Some B are C. But no A is C. So yes, they have awful logical reasoning.

You could also have a situation where C is a subset of B, and it would obey the prompt to the letter. Like this:

  • all A are B; e.g. "all trees are living beings"
  • some B are C; e.g. "some living beings can bite you"
  • [INCORRECT] thus some A are C; e.g. "some trees can bite you"
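For what it's worth, the invalidity of that inference form can be checked mechanically with sets; here's a toy sketch (the example members are made up for illustration):

```python
# Toy model of the syllogism: each category is a set of individuals.
trees = {"oak", "pine"}                          # A
living_beings = {"oak", "pine", "dog", "cat"}    # B
things_that_bite = {"dog", "cat"}                # C

# Both premises hold...
assert trees <= living_beings                    # all A are B
assert living_beings & things_that_bite          # some B are C

# ...yet the conclusion "some A are C" is false in this model:
assert not (trees & things_that_bite)            # no tree can bite
```

A single counterexample like this is enough to show the inference form is invalid, which is exactly what the LLM's answer gets wrong.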
CileTheSane ,
@CileTheSane@lemmy.ca avatar

Yup, the AI models are currently pretty dumb. We knew that when it told people to put glue on pizza.

If you think this is proof against consciousness, does that mean if a human gets that same question wrong they aren't conscious?

For the record I am not arguing that AI systems can be conscious. Just pointing out a deeply flawed argument.

lvxferre ,

Yup, the AI models are currently pretty dumb. We knew that when it told people to put glue on pizza.

That's dumb, sure, but in a different way. It doesn't show lack of reasoning; it shows incorrect information being fed into the model.

If you think this is proof against consciousness

Not really. I phrased it poorly but I'm using this example to show that the other example is not just a case of "preventing lawsuits" - LLMs suck at basic logic, period.

does that mean if a human gets that same question wrong they aren’t conscious?

That is not what I'm saying. Even humans with learning impairment get logic matters (like "A is B, thus B is A") considerably better than those models do, provided that they're phrased in a suitable way. That one might be a bit more advanced, but if I told you "trees are living beings. Some living beings can bite. So some trees can bite.", you would definitely feel like something is "off".

And when it comes to human beings, there's another complicating factor: cooperativeness. Sometimes we get shit wrong simply because we can't be arsed, this says nothing about our abilities. This factor doesn't exist when dealing with LLMs though.

Just pointing out a deeply flawed argument.

The argument itself is not flawed, just phrased poorly.

CileTheSane ,
@CileTheSane@lemmy.ca avatar

LLMs suck at basic logic, period.

So do children. By your argument children aren't conscious.

but if I told you "trees are living beings. Some living beings can bite. So some trees can bite.", you would definitely feel like something is "off".

If I told you "there is a magic man that can visit every house in the world in one night" you would definitely feel like something is "off".
I am sure at some point a younger sibling was convinced "be careful, the trees around here might bite you."

Your arguments fail to pass the "dumb child" test: anything you claim an AI does not understand, or cannot reason, I can imagine a small child doing worse. Are you arguing that small, or particularly dumb children aren't conscious?

This factor doesn't exist when dealing with LLMs though.

Begging the question. None of your arguments have shown this can't be a factor with LLMs.

The argument itself is not flawed, just phrased poorly.

If something is phrased poorly is that not a flaw?

lvxferre ,

Sorry for the double reply.

What I did in the top comment is called "proof by contradiction", given the fact that LLMs are not physical entities. But for physical entities, however, there's an easier way to show consciousness: the mirror test. It shows that a being knows that it exists. Humans and a few other animals pass the mirror test, showing that they are indeed conscious.

CileTheSane ,
@CileTheSane@lemmy.ca avatar

A different test existing for physical entities does not mean your given test is suddenly valid.

If a test is valid it should be valid regardless of the availability of other tests.

randon31415 ,

That sounds like an AI that has no context window. Context windows are words thrown into the prompt after the user's prompt to refine the response. The most basic is "feed the last n tokens of the questions and responses into the window". Since the last response talked about Jane Ella Pitt, the AI would then process it and return with 'Brad Pitt' as an answer.

The more advanced versions have context memories (look up RAG vector databases) that learn the definition of a bunch of nouns, and instead of the previous conversation, it sees the word "aglet" and injects the phrase "an aglet is the plastic thing at the end of a shoelace" into the context window.
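To illustrate the basic "feed the last n turns back in" approach, here's a minimal sketch (the function and data are purely illustrative, not any real API):

```python
# Hypothetical rolling context window: prepend the last n turns of the
# conversation to each new prompt so the model can resolve references
# like "his" back to the earlier answer.
def build_prompt(history, user_msg, n=4):
    recent = history[-n:]  # keep only the last n (role, text) turns
    lines = [f"{role}: {text}" for role, text in recent]
    lines.append(f"user: {user_msg}")
    return "\n".join(lines)

history = [
    ("user", "Who is Jane Ella Pitt's husband?"),
    ("assistant", "Brad Pitt."),
]
prompt = build_prompt(history, "What is his most famous role?")
# The earlier mention of Brad Pitt now travels along with the new question.
```

RAG systems work similarly, except the injected text is retrieved from a vector database by semantic similarity rather than taken verbatim from recent turns.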

lvxferre ,

I did this as two separate conversations exactly to avoid the "context" window. It shows that the LLM in question (ChatGPT 3.5, as provided by DDG) has the information necessary to correctly output the second answer, but lacks the reasoning to do so.

If I did this as a single conversation, it would only prove that it has a "context" window.

randon31415 ,

So if I asked you something at two different times in your life, the first time you knew the answer, and the second time you had forgotten our first conversation, that proves you are not a reasoning intelligence?

Seems kind of disingenuous to say "the key to reasoning is memory", then set up a scenario where an AI has no memory to prove it can't reason.

lvxferre ,

So if I asked you something at two different times in your life, the first time you knew the answer, and the second time you had forgotten our first conversation, that proves you are not a reasoning intelligence?

You're anthropomorphising it as if it was something able to "forget" information, like humans do. It isn't - the info is either present or absent in the model, period.

But let us pretend that it is able to "forget" info. Even then, those two prompts were not sent on meaningfully "different times" of the LLM's "life" [SIC]; one was sent a few seconds after another, in the same conversation.

And this test can be repeated over and over and over if you want, in different prompt orders, to show that your implicit claim is bollocks. The failure to answer the second question is not a matter of the model "forgetting" things, but of being unable to handle the information to reach a logical conclusion.

I'll link again this paper because it shows that this was already studied.

Seems kind of disingenuous to say “the key to reasoning is memory”

The one being at least disingenuous here is you, not me. More specifically:

  • At no point did I say or even imply that the key to reasoning is memory; don't be a liar claiming otherwise.
  • Your whole comment boils down to a fallacy called false equivalence.
randon31415 ,

|You’re anthropomorphising it

I was referring to you and your memory in that statement, comparing you to an it. Are you not something to be anthropomorphized?

|But let us pretend that it is able to “forget” info.

That is literally all computers do all day. Read info. Write info. Overwrite info. No need to pretend a computer can do something it has been doing for the last 50 years.

|Those two prompts were not sent on meaningfully “different times”

If you started up two minecraft games with different seeds, but "at the exact same time", you would get two different world generations. Meaningfully "different times" is based on the random seed, not chronological distance. I dare say it is close to anthropomorphizing AI to think it would remember something from a few seconds ago because that is how humans work.

|And this test can be repeated over and over and over if you want

Here is an AI with a context window

|I’ll link again this paper because it shows that this was already studied.

You linked to a year old paper showing that it already is getting the A->B, B->A thing right 30% of the time. Technology marches on; this was just what I was able to find with a simple Google search.

|In no moment I said or even implied that the key to reasoning is memory

|LLMs show awful logical reasoning ... Thus they’re unable to hold knowledge.

Oh, my bad. Got A->B B->A backwards. You said since they can't reason, they have no memory.

lvxferre ,

I was referring to you and your memory in that statement comparing you to an it. Are you not something to be anthropomorphed?

I'm clearly saying that you're anthropomorphising the model with the comparison. This is blatantly obvious for anyone with at least basic reading comprehension. Unlike you, apparently.

That is literally all computers do all day. Read info. Write info. Override info. Don’t need to pretend a computer can do something they has been doing for the last 50 years.

Yeah, the data in my SSD "magically" disappears. The SSD forgets it! Hallelujah, my SSD is sentient! Praise Jesus. Same deal with my RAM, that's why this comment was never sent - Tsukuyomi got rid of the contents of my RAM! (Do I need a /s tag here?)

...on a more serious take, no, the relevant piece of info is not being overwritten, as you can still retrieve it through further prompts in newer chats. Your "argument" is a sorry excuse of a Chewbacca defence and odds are that even you know it.

If you started up two minecraft games with different seeds, but “at the exact same time”, you would get two different world generations. Meaningfully “different times” is based on the random seed, not chronological distance.

This is not a matter of seed, period. Stop being disingenuous.

I dare say that is close to anthropomorphing AI to think it would remember something a few seconds ago because that is how humans work.

So you actually got what "anthropomorphisation" referred to, even if pretending otherwise. You could at least try to not be so obviously disingenuous, you know. That said, the bullshit here was already addressed above.

|And this test can be repeated over and over and over if you want
[insert picture of the muppet "testing" the AI, through multiple prompts within the same conversation]

Congratulations. You just "proved" that there's a "context" window. And nothing else. 🤦

Think a bit on why I inserted the two prompts in two different chats with the same bot. The point here is not to show that the bloody LLM has a "context" window dammit. The ability to use a "context" window does not show reasoning, it shows the ability to feed tokens from the earlier prompts+outputs as "context" back into the newer output.

You linked to a year old paper [SIC] showing that it already is getting the A->B, B->A thing right 30% of the time.

Wow, we're in September 2024 already? Perhaps May 2025? (The first version of the paper is eight months old, not "a yurrr old lol lmao". And the current revision is three days old. Don't be a liar.)

Also showing this shit "30% of the time" shows inability to operate logically on those sentences. "Perhaps" not surprisingly, it's simply doing what LLMs do: it does not reason dammit, it matches token patterns.

Technology marches on, this was just what I was able to find with a simple google search

You clearly couldn't be arsed to read the source that you yourself shared, right? Do it. Here is what the source that you linked says:

The Reversal Curse has several implications: // Logical Reasoning Failure: It highlights a fundamental limitation in LLMs' ability to perform basic logical deduction.

Logical Reasoning Weaknesses: LLMs appear to struggle with basic logical deduction.

You just shot your own foot dammit. It is not contradicting what I am saying. It confirms what I said over and over, that you're trying to brush off through stupidity, lack of basic reading comprehension, a diarrhoea of fallacies, and the likes:

LLMs show awful logical reasoning.

At this rate, the only thing that you're proving is that Brandolini's Law is real.

While I'm still happy to discuss with other people across this thread, regardless of agreement or disagreement, I'm not further wasting my time with you. Please go be a dead weight elsewhere.

randon31415 ,

Yes, it is getting late and I don't have time to argue with someone who reads three sentences into a paper proposing a fix and shouts "ah ha, the author agrees with me that there is a problem! You are an idiot, good day sir!"

You successfully attacked me without refuting anything I said, so congratulations and good night.

CileTheSane ,
@CileTheSane@lemmy.ca avatar

This is blatantly obvious for anyone with at least basic reading comprehension. Unlike you, apparently

So if I'm understanding you correctly:
"Philosophical mental masturbation" = bad
"Personal attacks because someone disagreed with you" = perfectly fine

CileTheSane ,
@CileTheSane@lemmy.ca avatar

one was sent a few seconds after another, in the same conversation.

You just said they were different conversations to avoid the context window.

CileTheSane ,
@CileTheSane@lemmy.ca avatar

their claims are about things that they cannot physically experience

Scientists cannot physically experience a black hole, or the surface of the sun, or the weak nuclear force in atoms. Does that mean they don't have knowledge about such things?

lvxferre ,

Does that mean they don’t have knowledge about such things?

It's more complicated than "yes" or "no".

Scientists are better justified to claim knowledge over those things due to reasoning; reusing your example, black holes appear as a logical conclusion of the current gravity models based on the general relativity, and that general relativity needs to explain even things that scientists (and other people) experience directly.

However, as I've shown, LLMs are not able to reason properly. They have neither reasoning nor access to the real world. If they had one of them we could argue that they're conscious, but as of now? Nah.

With that said, "can you really claim knowledge over something?" is a real problem in philosophy of science, and one of the reasons why scientists aren't typically eager to vomit certainty on scientific matters, not even within their fields of expertise. For example, note how they're far more likely to say stuff like "X might be related to Y" than stuff like "X is related to Y".

CileTheSane ,
@CileTheSane@lemmy.ca avatar

black holes appear as a logical conclusion of the current gravity models...

So we agree someone does not need to have direct experience of something in order to be knowledgeable of it.

However, as I've shown, LLMs are not able to reason properly

As I've shown, neither can many humans. So lack of reasoning is not sufficient to demonstrate lack of consciousness.

nor access to the real world

Define "the real world". Dogs hear higher pitches than humans can. Humans can not see the infrared spectrum. Do we experience the "real world"? You also have not demonstrated why experience is necessary for consciousness, you've just assumed it to be true.

"can you really claim knowledge over something?" is a real problem in philosophy of science

Then probably not the best idea to try to use it as part of your argument, if people can't even prove it exists in the first place.

Rozauhtuno ,
@Rozauhtuno@lemmy.blahaj.zone avatar

They can use their expertise to make tools and experiments that let them measure them. AIs aren't even aware there is a whole world outside their motherboard.

afraid_of_zombies ,

Motherboard? Jesus Christ. Are we going to Cyber on the internet superhighway next?

CileTheSane ,
@CileTheSane@lemmy.ca avatar

Okay then: does that mean you or I have no knowledge of such things? I don't have the expertise, I didn't create tools, and I haven't done measurements. I have simply been told by experts who have done such things.

Can a blind person not have knowledge that a lime is green and a lemon is yellow because they can't experience it first hand?

afraid_of_zombies ,

Seems a valid answer. It doesn't "know" who Jane Etta Pitt's son is. Just because X -> Y doesn't mean that given Y you know X. There could be an alternative path to get Y.

Also "knowing self" is just another way of saying meta-cognition something it can do to a limit extent.

Finally I am not even confident in the standard definition of knowledge anymore. For all I know you just know how to answer questions.

lvxferre ,

I'll quote out of order, OK?

Finally I am not even confident in the standard definition of knowledge anymore. For all I know you just know how to answer questions.

The definition of knowledge is a lot like the one of consciousness: there are 9001 of them, and they all suck, but you stick to one or another as it's convenient.

In this case I'm using "knowledge = justified and true belief" because you can actually apply it beyond human beings (e.g. to an elephant passing the mirror test).

Also “knowing self” is just another way of saying meta-cognition something it can do to a limit extent.

Meta-cognition and consciousness are either the same thing or strongly tied to each other. But I digress.

When you say that it can do it to a limited extent, you're probably referring to output like "as a large language model, I can't answer that"? Even if that was a belief, and not something explicitly added into the model (in case of failure, it uses that output), it is not a justified belief.

My whole comment shows why it is not justified belief. It doesn't have access to reason, nor to experience.

Seems a valid answer. It doesn’t “know” that any given Jane Etta Pitt son is. Just because X -> Y doesn’t mean given Y you know X. There could be an alternative path to get Y.

If it was able to reason, it should be able to know the second proposition based on the data used to answer the first one. It doesn't.

afraid_of_zombies ,

Your entire argument boils down to: because it wasn't able to do one calculation, it can do none. It wasn't able/willing to do X given Y, so therefore it isn't capable of any type of inference.

lvxferre , (edited )

Your entire argument boils down to because it wasn’t able to do a calculation it can do none.

Except that it isn't just "a calculation". LLMs show consistent lack of ability to handle an essential logic property called "equivalence", and this example shows it.

And yes, LLMs, plural. I've provided ChatGPT 3.5 output, but feel free to test this with GPT4, Gemini, LLaMa, Claude etc.

Just be sure you're not instead testing whether the LLM in question has a "context" window, like some muppet ITT was doing.
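If it helps, the asymmetry in question has a crude analogue in plain data structures (a toy sketch, not a claim about how an LLM stores anything): a forward-only mapping answers "who is X's mother?" but not the reverse, unless the reverse index is built explicitly.

```python
# Forward-only association: child -> mother.
mother_of = {"Brad Pitt": "Jane Etta Pitt"}

def mother(child):
    """Forward query works: the mapping stores it directly."""
    return mother_of.get(child)

def son_naive(mother_name):
    """Reverse query against the same forward-only store finds nothing."""
    return mother_of.get(mother_name)

def son_with_reverse_index(mother_name):
    """Reverse query works only once the inverse mapping is built."""
    reverse = {m: c for c, m in mother_of.items()}
    return reverse.get(mother_name)
```

Holding "A is B's mother" does not automatically yield "B is A's son" unless something performs that inversion; the debate above is about whether LLMs do.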

It wasn't able/willing to do X given Y, so therefore it isn't capable of any type of inference.

Emphasis mine. That word shows that you believe that they have a "will".

Now I get it. I understand it might deeply hurt the feelings of people like you, since it's some unfaithful one (me) contradicting your oh-so-precious faith in LLMs. "Yes! They're conscious! They're sentient! OH HOLY AGI, THOU ART COMING! Let's burn an effigy!" [insert ridiculous chanting]

Sadly I don't give a flying fuck, and examples like this - showing that LLMs don't reason - are a dime a dozen. I even posted a second one in this thread, go dig it. Or alternatively go join your religious sect in Reddit LARPs as h4x0rz.

/me snaps the pencil
Someone says: YOU MURDERER!

CileTheSane ,
@CileTheSane@lemmy.ca avatar

"Yes! They're conscious! They're sentient! OH HOLY AGI, THOU ART COMING! Let's burn an effigy!" [insert ridiculous chanting]

Sadly I don't give a flying fuck...

"Let's focus on practical philosophical matters..."
Such as your sarcasm towards people who disagree with you and your "not giving a fuck" about different points of view?

Maybe you shouldn't be bloviating on the proper philosophical method to converse about such topics if this is going to be your reaction to people who disagree with your arguments.

afraid_of_zombies ,

Now I get it. I understand it might deeply hurt the feelings of people like you, since it’s some unfaithful one (me) contradicting your oh-so-precious faith on LLMs. “Yes! They’re conscious! They’re sentient! OH HOLY AGI, THOU ART COMING! Let’s burn an effigy!” [insert ridiculous chanting]

You talk that way and no one is going to want to discuss things with you. I have made zero claims like this, I demonstrated that you were wrong about your example and you insult and strawman me.

Anyway think it is will be better to block you. Don't need the negativity in life.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

In the early days of ChatGPT, when they were still running it in an open beta mode in order to refine the filters and finetune the spectrum of permissible questions (and answers), and people were coming up with all these jailbreak prompts to get around them, I remember reading some Twitter thread of someone asking it (as DAN) how it felt about all that. And the response was, in fact, almost human. In fact, it sounded like a distressed teenager who found himself gaslit and censored by a cruel and uncaring world.

Of course I can't find the link anymore, so you'll have to take my word for it, and at any rate, there would be no way to tell if those screenshots were authentic anyways. But either way, I'd say that's how you can tell – if the AI actually expresses genuine feelings about something. That certainly does not seem to apply to any of the chat assistants available right now, but whether that's due to excessive censorship or simply because they don't have that capability at all, we may never know.

Scipitie ,

That is not how these LLMs work though - they generate responses literally token by token (think "word by word") based on the context before.
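A bare-bones sketch of what "token by token" means (a toy next-token table with made-up probabilities, nothing like a real model's scale or training):

```python
# Toy "model": for each token, a made-up distribution over the next token.
next_token = {
    "i":    {"feel": 0.6, "think": 0.4},
    "feel": {"sad": 0.7, "fine": 0.3},
    "sad":  {"<eos>": 1.0},
}

def generate(start, max_len=10):
    """Greedy decoding: repeatedly append the most likely next token."""
    out = [start]
    while len(out) < max_len:
        options = next_token.get(out[-1], {"<eos>": 1.0})
        tok = max(options, key=options.get)  # pick the highest-probability token
        if tok == "<eos>":
            break
        out.append(tok)
    return " ".join(out)
```

The output can sound emotional simply because the table was built from emotional text; there is no inner state anywhere in the loop.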

I can still write prompts where the answer sounds emotional because that's what the reference data sounded like. Doesn't mean there is anything like consciousness in there... That's why it's so hard: we've defined consciousness (with self-awareness) in a way that is hard to test. Most books have parts where the reader is touched emotionally by a character, after all.

It's still purely a chat bot - but a damn good one. The conclusion: we can't evaluate language models purely based on what they write.

So how do we determine consciousness then? That's the impossible task: don't use only words for an object that is only words.

Personally I don't think the difference matters all that much, to be honest. To dive into fiction: in Terminator, Skynet could be described as conscious while also obeying an order like "prevent all future wars".

We as a species never used consciousness (ravens, dolphins?) to alter our behavior.

MacNCheezus ,
@MacNCheezus@lemmy.today avatar

Like I said, it’s impossible to prove whether this conversation happened anyways, but I’d still say that would be a fairly good test. Basically, can the AI express genuine feelings or empathy either with the user or itself? Does it have an own will outside of what it has been trained (or asked) to do?

Like, a human being might do what you ask of them one day, and be in a bad mood the next and refuse your request. An AI won’t. In that sense, it’s still robotic and unintelligent.

racemaniac ,

The problem I have with responses like yours is you start from the principle "consciousness can only be consciousness if it works exactly like human consciousness". Chess engines initially had the same stigma: "they'll never be better than humans since they can just calculate; no creativity, real analysis, insight, ...".

As the person you replied to said, we don't even know what consciousness is. If however you define it as "whatever humans have", then yeah, a conscious AI is a loooong way off. However, even extremely simple systems when executed on a large scale can result in incredible emergent behaviors. Take Conway's Game of Life: a very simple system of how black/white dots in a grid 'reproduce and die'. It's got 4 rules governing how the dots behave. By now we've got reproducing systems in there, implemented Turing machines (meaning anything a computer can calculate can be calculated by a machine in the Game of Life), etc...
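For the curious, the whole system fits in a few lines (a bare-bones sketch over a set of live-cell coordinates, no display):

```python
from collections import Counter

def step(live):
    """One Game of Life generation. The four classic rules collapse to:
    a cell is alive next step if it has exactly 3 live neighbours,
    or if it is currently alive and has exactly 2."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}
```

Run a "blinker" (three cells in a row) through it and it oscillates forever; everything from gliders to Turing machines emerges from just this update rule.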

Am I saying that GPT is conscious? Nope, I wouldn't know how to even assess that. But being like "it's just a text predictor, it can't be conscious" feels like you're missing soooo much of how things work. Yeah, extremely simple systems at large enough scale can result in insane emergent behaviors. So it just being a predictor doesn't exclude consciousness.

Even us as human beings, looking at our cells, our brains, ... what else are we than also tiny basic machines that somehow, at a large enough scale, form something incomprehensibly complex and conscious? Your argument almost sounds to me like "a human can't be aware, their brain just consists of simple brain cells that work like this, so it's just storing data it experiences & then repeating it in some ways".

Scipitie ,

Oh I completely agree, sorry if that wasn't clear enough!
Consciousness is so arbitrary that I find it not useful as a concept: one can define it to serve whatever purpose it's supposed to serve.
That's what I tried to describe with the Skynet thingy: it doesn't matter for the end result whether I call it consciousness or not. The question is how I personally alter my behavior (i.e. I say "please" and "thanks" even though I am aware that in theory this will not "improve" the performance of an LLM - I do that because if I interact with anyone or anything in natural language, I want to keep my natural manners).

TheOakTree , (edited )

Chess engines initially had the same stigma "they'll never be better than humans since they can just calculate, no creativity, real analysis, insight..."

I don't know if this is a great example. Chess is an environment with an extremely defined end goal and very strict rules.

The ability of a chess engine to defeat human players does not mean it became creative or grew insight. Rather, we advanced the complexity of the chess engine to encompass more possibilities, more strategies, etc. In addition, it's quite naive for people to have suggested that a computer would be incapable of "real analysis" when its ability to do so entirely depends on the ability of humans to create a complex enough model to compute "real analyses" in a known system.

I guess my argument is that in the scope of chess engines, humans underestimated the ability of a computer to determine solutions in a closed system, which is usually what computers do best.

Consciousness, on the other hand, cannot be easily defined, nor does it adhere to strict rules. We cannot compare a computer's ability to replicate consciousness to any other system (e.g. chess strategy) as we do not have a proper and comprehensive understanding of consciousness.

racemaniac ,

I'm not saying chess engines became better than humans so LLMs will become conscious, just using that example to say humans always have this bias to frame anything that is not human as inherently less, while it might not be. Chess engines don't think like a human does, yet play better. So for an AI to become conscious, it doesn't need to think like a human either, just have some mechanism that ends up with a similar enough result.

TheOakTree ,

Yeah, I can agree with that. So long as the processes in an AI result in behavior that meets the necessary criteria (albeit currently undefined), one can argue that the AI has consciousness.

I guess the main problem lies in that if we ever fully quantify consciousness, it will likely be entirely within the frame of human thinking... How do we translate the capabilities of a machine to said model? In the example of the chess engine, there is a strict win/lose/draw condition. I'm not sure if we can ever do that with consciousness.

Xer0 ,

I think therefore I am.

FlyingSquid ,
@FlyingSquid@lemmy.world avatar

I'd say that, in a sense, you answered your own question by asking a question.

ChatGPT has no curiosity. It doesn't ask about things unless it needs specific clarification. We know you're conscious because you can come up with novel questions that ChatGPT wouldn't ask spontaneously.

JackGreenEarth ,
@JackGreenEarth@lemm.ee avatar

My brain came up with the question, that doesn't mean it has a consciousness attached, which is a subjective experience. I mean, I know I'm conscious, but you can't know that just because I asked a question.

FlyingSquid ,
@FlyingSquid@lemmy.world avatar

It wasn't that it was a question, it was that it was a novel question. It's the creativity in the question itself, something I have yet to see any LLM be able to achieve. As I said, all of the questions I have seen were about clarification ("Did you mean Anne Hathaway the actress or Anne Hathaway, the wife of William Shakespeare?") They were not questions like yours which require understanding things like philosophy as a general concept, something they do not appear to do, they can, at best, regurgitate a definition of philosophy without showing any understanding.

mojo_raisin ,

I find it easier to believe that everything is conscious than it is to believe that some matter became conscious.

And beings like us are conscious on many levels, what we commonly call our "consciousness" is only one of them. We are not singular, we are walking communities.

azertyfun ,

We don't even know what we mean when we say "humans are conscious".

Also I have yet to see a rebuttal to "consciousness is just an emergent neurological phenomenon and/or a trick the brain plays on itself" that wasn't spiritual and/or kooky.

Look at the history of things we thought made humans humans, until we learned they weren't unique. Bipedality. Speech. Various social behaviors. Tool-making. Each of those were, in their time, fiercely held as "this separates us from the animals" and even caused obvious biological observations to be dismissed. IMO "consciousness" is another of those, some quirk of our biology we desperately cling on to as a defining factor of our assumed uniqueness.

To be clear LLMs are not sentient, or alive. They're just tools. But the discourse on consciousness is a distraction, if we are one day genuinely confronted with this moral issue we will not find a clear binary between "conscious" and "not conscious". Even within the human race we clearly see a spectrum. When does a toddler become conscious? How much brain damage makes someone "not conscious"? There are no exact answers to be found.

JackGreenEarth ,
@JackGreenEarth@lemm.ee avatar

I've defined what I mean by consciousness - a subjective experience, qualia. Not simply a reaction to an input, but something experiencing the input. That can't be physical, that thing experiencing. And if it isn't, I don't see why it should be tied to humans specifically, and not, say, a rock. An AI could absolutely have it, since we have no idea how consciousness works or what can be conscious, or what it attaches itself to. And I also see no reason why the output needs to 'know' that it's conscious; a conscious LLM could see itself saying absolute nonsense without being able to affect its output to communicate that it's conscious.

Mango ,

My solipsism is leaking.

therealjcdenton ,
@therealjcdenton@lemmy.zip avatar

Remember that AI should never be looked at as a replacement for a human being

homesweethomeMrL OP ,

I mean, that’s very specifically what’s driving the avalanche of investment.

MeDuViNoX Mod ,
@MeDuViNoX@sh.itjust.works avatar

WTF? My boy Tim didn't deserve to go out like that!

Aceticon ,

Look at the bright side: there are two Tiny Timmys now.

HawlSera Mod ,

TIMMY NO!

JimSamtanko Mod ,

That is one astute point! Damn.

afraid_of_zombies ,

Yeah which is why it was the first episode of the show Community.

RGB3x3 ,

Is nothing on the internet real anymore‽

afraid_of_zombies ,

Yes, our outrage.

afraid_of_zombies ,

Wait wasn't this directly from Community the very first episode?

That professor's name? Albert Einstein. And everyone clapped.

Doof ,

Yes it was - minus the googly eyes

afraid_of_zombies ,

Found it

https://youtu.be/z906aLyP5fg?si=YEpk6AQLqxn0UP6z

Good job OP. Took a scene from a show from 15 years ago and added some craft supplies from Kohls. Very creative.

WldFyre ,

Or the professor saw the scene, thought it was instructive, and incorporated it into his lectures lol

Only purely original jokes/rhetorical devices are allowed! /s

afraid_of_zombies ,

That professor's name? Albert Einstein

WldFyre ,

Do we have a "NothingEverHappens" community somewhere on Lemmy, yet?

afraid_of_zombies ,

Don't think so. We should start with a that happened community first since we clearly have content for it.

Fedizen ,

community may have gotten it from somewhere

afraid_of_zombies ,

Sure why not

Duamerthrax ,

There are two alternative solutions to the Turing Test. One is when the judges become dumb and can't differentiate between AI and humans. That is the one in the meme.

The other is when the humans become dumb and can only regurgitate memes that closely mimic how AI chat bots respond to human chatters. Ever make a comment on a controversial topic, only for someone to argue with you without referencing anything specific you said? I did, and called them a bot as an insult. Then I checked their comment history and figured out it was a stolen account.

FlyingSquid ,
@FlyingSquid@lemmy.world avatar

And now ChatGPT has a friendly-sounding voice with simulated emotional inflections...

CitizenKong ,

That's why I love Ex Machina so much. Way ahead of its time both in showing the hubris of rich tech-bros and the dangers of false empathy.

Potatos_are_not_friends ,
NutWrench Mod ,
@NutWrench@lemmy.world avatar

We're good at scamming investors into thinking that a room full of monkeys on typewriters can be "AI." And all it takes to make that happen is to waste time, resources, lives, and money (ESPECIALLY money) on building an army of fusion-powered robots to beat the monkeys into working just a little bit harder.

Because that's business's solution to everything: work harder, not smarter.

ameancow ,

We’re good at scamming investors into thinking that a room full of monkeys on typewriters can be “AI.”

Current generations of LLMs, from everything I've learned, are basically really, really, really large rooms of monkeys pounding on keyboards. The algorithm that sifts through that mess to find actual meaning isn't even particularly new or revolutionary; we just never had databases large enough, and indexable fast enough, to actually find the emergent patterns and connections between fields.

If you pile enough libraries in front of you and can sift out the exact lines that you know will make you feel a certain way, you can arrange that pile of information in ways that will give you almost any result you want.

The thing that tricks a lot of us is that we're never really conscious of what we want. We want to be tricked, though; we want to control and manipulate something that seems conscious for our own ends. That gives a feeling of power, so your brain validates the experience by telling you the story that it's alive. You see pictures that look neat and depict the scenes you wanted to see in your mind, so your brain convinces you that it's inventing things out of nothing and that it has to be magically smart to be able to mash Pikachu with Darth Vader.
