YeetPics ,
@YeetPics@mander.xyz avatar

Chill, tech bros are spending billions to oust every unmarketable degree and skillset.

Also unmarketable ≠ "useless"

people_are_cute ,
@people_are_cute@lemmy.sdf.org avatar

AI art tools democratize art by empowering those who weren't born with the affinity, talent or privilege to become artists themselves. They allow regular people the freedom of expression in new dimensions. They are amazing.

They are not made to replace human art. They are made to supplement it. The "artists" who feel threatened and offended at its existence are probably not very good at their art.

Dasus ,

If you think arts and humanities are useless, you probably lack an imagination.

Like completely.

I won't say you're useless, because simple minded grunts are needed.

Humanity wouldn't exist without the arts.

intensely_human ,

Ah yes “the arts”. Definitely the point of humanities, and nothing to do with categorizing the world into “important people” and “simple minded grunts”.

Humanities students don’t read these days, and it shows.

Dasus , (edited )

"Art" as a term is so all-encompassing that it's hard to define what is and isn't art.

I'm sure you can rustle up some very reductive few-word definition, but the most popular ones go something like "the expression or application of human creative skill and imagination", and that's a very broad definition, wouldn't you agree?

I'm sure you'd also agree there just are some people who never seem to express or apply any of their creative skill or imagination (and some who genuinely seem to lack any altogether), despite still being productive members of society.

Not everyone needs to be an artist - a minority of the population will do - but without artists, we would all perish. Those people who don't necessarily express or apply creative skill or imagination still most certainly enjoy it, and probably couldn't get through their jobs without it. (Repetitive work is just so much easier while listening to music, and I'm sure that's not a controversial statement.)

So what do humanities students do these days then, according to you, since they "don't read"?

psud ,

The arts isn't about art. Graduates of an arts degree are not generally artists

Dasus ,

Yes, arts as a university subject is more about looking into artists and their work and what it meant/means for everyone/other people.

I was never suggesting "arts" in universities are hand-painting lessons, was I?

StephenTallentyre ,
@StephenTallentyre@lemmy.today avatar

Run on, sentence.

ProfessorOwl_PhD ,
@ProfessorOwl_PhD@hexbear.net avatar

No, it uses appropriate coordinating conjunctions. A run on sentence isn't just one that's long.

intensely_human ,

That’s also not a long sentence. It’s a normal sentence presented in a narrow column format.

M68040 ,
@M68040@hexbear.net avatar

The gutting of the humanities and other things generally written off as "frivolous" kind of terrifies me. There's something that feels distinctly wrong about these attempts at destroying anything and anyone that might even turn an introspective gaze on society itself. Like they don't want anything that might foster self-awareness accessible to the layman.

Tankiedesantski ,

An art major's half asleep doodles can receive copyright protection whereas an image created by a million dollar supercomputer running the most sophisticated AI model possible cannot.

Extremely rare artist x lawyer crossover to dunk on the AI bros.

Frogmanfromlake ,
@Frogmanfromlake@hexbear.net avatar

Tech bros are idiots who greatly overestimate their own intelligence.

intensely_human ,

Humanities students are well-rounded individuals with a healthy sense of self-worth.

sirico ,
@sirico@feddit.uk avatar

Some bad course cope right here. Don't let the philosophy grads see this.

Honytawk ,

There are plenty of things you can shit on AI art for

But it is neither a bad approximation, nor can a student produce such work in less than a minute.

This feels like the other end of the extreme of the tech bros

Jax ,

Is English your second language?

shift_four ,

Or was this comment by an AI?

Jax ,

Which, mine or theirs?

shift_four ,

Shampoo_bottle

Honytawk ,

Is it that obvious?

Jax ,

No, actually not at all.

I only ask because if English is your second language then your repetition with "other end of the extreme of the tech bros" makes sense. Your mistake is one that many English-as-first-language writers make.

That's all, I didn't mean to make you feel self-conscious.

intensely_human ,

That is perfectly valid English. You can use the word “the” twice in a sentence.

Jax ,

Of the of the

Shampoo_Bottle ,
@Shampoo_Bottle@lemmy.ca avatar

To me, this feels similar to when photography became a thing.

Realism paintings took a dive. Did photos capture realism? Yes. Did they take the same amount of time and training? Hell no.

I think it will come down to what the specific consumer wants. If you want fast, you use AI. If you want the human-made aspect, you go with a manual artist. Do you prefer fast turnover, or do you prefer sentiment and effort? Do you prefer pieces from people who master their craft, or from AI?

I'm not even sorry about this. They are not the exact same, and I'm sick of people saying that AI art and handcrafted art are the exact same. Even if you argue that it takes time to finesse prompts, I can practically promise you that the difference in time needed to create a piece by the two methods will be drastic. Both may have their place, but they will never be the exact same.

It's the difference between a hand-knitted sweater from someone who has done it their entire life and a sweater from Walmart. It's a hand-crafted table from an expert vs. something you get from IKEA.

Yes, both fill the boxes, but they are still not the exact same product. They each have their place.

On the other hand, I won't pretend the hours required to master each method are the same. AI also usually doesn't have to factor in materials, training, hourly rate, etc.

thedeadwalking4242 ,

Honestly people are desperately trying to automate physical labor too. The problem is the machines don't understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can. The arts and humanities are more of a side project

istanbullu ,

Nothing wrong with automating tasks that previously needed human labour. I would much rather sit back and chill and let automation do my bidding.

Siethron ,

If only the people in control of the wealth would let the rest of us chill while the machines do all the labor.

istanbullu ,

that's a social problem, not technology's fault.

intensely_human ,

It’s a psychological problem. I chill quite a bit more than most people in history, and in ways people from twenty years ago couldn’t imagine.

I say it’s a psychological problem because despite how overwhelmingly incredible our society is, people are totally committed to this notion that it sucks.

I love my life. I’d rather be low on the economic ladder in today’s world than anywhere in the hierarchy of any previous incarnation of our civilization. Our world is absolutely fucking amazing, and I thank god I have the presence of mind to see past the anti-everything propaganda and actually have a little gratitude for all I’ve inherited from my ancestors, who actually suffered miserable conditions to give me this world.

intensely_human ,

Yeah if only I didn’t have to farm food all day, and worry about the constant gnawing of my empty stomach, and the predators at my door, then I could maybe sit and watch some netflix or play video games, listen to concerts that took place fifty years ago, or just soak in a hot tub of water, our horrible society keeps all that leisure for the most wealthy.

TCB13 ,
@TCB13@lemmy.world avatar

The arts and humanities are more of a side project

I'll add:

A side project that isn't a life or death situation like most of those physical labor things you're talking about. Art also isn't bound or constrained by rules and regulations like those jobs, and if the AI fails at art then there's no problem. Nobody would care.

Harbinger01173430 ,

Besides, if it fails at art it might even create something we never thought

TCB13 ,
@TCB13@lemmy.world avatar

So... art is essentially failing ahaha.

Harbinger01173430 ,

Yeah pretty much. I think. I am no art connoisseur though. If I see pretty drawings or images or whatever, I like.

MonkeMischief ,

With style!

hypnicjerk ,

this is fundamentally the opposite of what generative AI does. its fail state is basically regurgitating its training data intact.

AVincentInSpace ,

you kidding? that's its success state

dariusj18 ,

if the AI fails at art then there's no problem. Nobody would care.

https://www.smbc-comics.com/comic/art-6

cosmicrookie ,
@cosmicrookie@lemmy.world avatar

I believe I read a headline in my local news about AI being implemented in this country's tax system and in the evaluation of cancer patients. I could try to find a link, although it would be in a different language.

AVincentInSpace ,

The problem is the machines don’t understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can.

I am deeply confused by this statement.

A robot that assembles cars does not need to "understand" anything about what it's doing. It just needs to make the same motions with its welding torch over and over again for eternity. And it does that job pretty well.

Further, neural networks as they stand cannot truly understand anything. All classification networks know how to do is point at stuff and say "That's a car/traffic light/cancer cell", and all generation networks know how to do is parrot. Any halfway decent teacher will tell you that memorizing and understanding are completely different things.

thedeadwalking4242 ,

No, but a robot that does the dishes needs to know what a dish is, how to clean all the different types, and what's not a dish. The complexity of behavior needed to automate human tasks that cannot be done by an assembly-line robot is immense. Most manual labor jobs are still manual labor because they are too full of unknowns and nuances for a simple logic diagram to be of any use. So yes, some robots need to understand what's going on.

And as for parroting vs. remembering, current LLMs are very limited in their capacity to create new things, but they can create novel things by smashing together their training data. Think about it: that's all humans are too. A result of our training data. If I took away every single one of your senses from the day you were born and removed your ability to remember anything, you wouldn't be very intelligent either. With no inputs you could produce no outputs other than gibberish, which an AI can do too. (And I mean ALL senses; you'd have no form of connection with the outside world.)

psud ,

My dishwashing robot doesn't need to know anything. It does depend on me loading it, and on putting the more heat-affected stuff on the top shelf.

thedeadwalking4242 ,

Yes, it depends on you loading it, doesn't always get all the dishes done, and will melt your dishes if they are heat sensitive. All this because it doesn't understand the task at hand. If it did, it could put them away for you, load them, ensure all dishes are spotless, and hand-wash heat-sensitive dishes.

KeenFlame ,

The problem is they didn't focus research on this tech, or try to make image generators specifically. It was a scientific discovery that came from emulating how brains work, and then it worked wonders in these fields.

intensely_human ,

Which is why STEM is so cool. Because one is dedicated to an interaction with physical reality, which exists outside the mind, novelty can arise unexpectedly from a simple and honest conversation with deep structures nobody knows about.

STEM is cool because it involves discovery. The fact that amazing things can exist without anyone being (yet) aware of them makes it an open and unpredictable undertaking.

intensely_human ,

Right. That’s why making cars is already automated. But a robot that digs ditches needs to understand context because no two ditches are the same.

bilb , (edited )
@bilb@lemmy.ml avatar

Matthew Dow Smith, whomever the fuck that is, has a sophisticated delusion about what's actually going on and he's incorporated it into his persecution complex. Not impressed.

Wanderer ,

Art itself isn't useless, it's just incredibly replicable. There is so much good art out there that people don't need to consume crap.

It's like saying there is no money in being a footballer. Of course there is loads of money in being a footballer. But most people that play football don't make any money.

grrgyle ,
@grrgyle@slrpnk.net avatar

This is a good analogy

livus ,

Pretty sure whoever wrote the meme is talking about essay writing in Arts/Humanities (not the disciplines where you draw and paint etc., which are Fine Arts and are not the Faculty of Arts in an academic context).

Evilsandwichman ,

I mean, they're kind of succeeding; with AI art, people no longer have to settle for Picasso-looking artwork.

zalgotext ,

Hooray, we've automated away one of the things that we do for fun and to bring people joy, now I can spend more time in the mines

juststoppingby ,

Just because something can be automated doesn't mean it can't be performed by humans still, especially if it's something you do for fun.

axont ,

yeah, instead I have to settle for the two genres of mangled 18-fingered Lovecraft monster or Dreamworks-style anime girl. cool

Immersive_Matthew ,

Tech bros are not really techie themselves; they are really just Wall Street bros with tech as their product. Most claim they can code, but if they were coders they would be coding. They are not coders, they are businessmen through and through who just happen to sell tech.

ProgrammingSocks ,

This is 100% correct. It can overlap but honestly as someone going into embedded systems I despise tech bros.

evranch ,

Most claim they can code, but if they were coders they would be coding

I dislike techbros as much as you, but this isn't really a valid statement.

I can code, but I can't sell a crypto scam to millions of rubes.

If I could, why would I waste my time writing code?

Many techbros are likely "good enough" coders who have better marketing skills and used their tech knowledge to leverage into business instead.

Immersive_Matthew ,

That is the thing though. The real talented tech people tend to be more in the weeds of the tech and get great enjoyment from that. The "tech bros" are more into groups, people, social structures, manipulation, controlling and such, and would go cross-eyed if they really had to code something complex, as they could never sit that long and concentrate. These are not the same people. Tech bros want you to think they are tech gurus as that is their brand, but it is a lie.

phoneymouse ,

99% of people in tech leadership are just regurgitating marketing jargon with minimal understanding of the underlying tech.

Gabu ,

That's a pretty shit take. Humankind spent nearly 12 thousand years figuring out the combustion engine. It took 1 million years to figure out farming. Compared to that, less than 500 years to create general intelligence will be a blip in time.

Valmond ,

Humanity didn't spend those times figuring out those things, though. Humanity grew over that time until it could make them happen (and AI is younger than 500y IMO).

Also, we are the same people today as people were then. We just have access to what our parents' generation made, and so on.

Gabu ,

AI is younger than 500y IMO

Hence "will be a blip in time"

we are the same people today as people were then. We just have access to what our parents' generation made, and so on.

Completely disconnected and irrelevant to anything I wrote.

kboy101222 ,

Really only around 80 years between the first machines we'd consider computers and today's LLMs, so I'd say that's pretty damn impressive

Harbinger01173430 ,

That's why the sophon was sent to disrupt our progress. Smh

braxy29 ,

I think you're missing the point, which I took as this: what arts and humanities folks do is valuable (as evidenced by efforts to recreate it), despite common narratives to the contrary.

Gabu ,

Of course it's valuable. So is, e.g., soldering components on a circuit board, but we have robots for doing that at scale now.

explodicle ,

Do you think robots will ever become better than humans at creating art, in the same way they've become better than us at soldering?

exocrinous ,

Not if climate change drives humans extinct before they can make those improvements

Zink ,

I guess any robots we leave behind will win by forfeit!

exocrinous ,

Nah, humans are hardier than robots and will live longer. The power grid will shut down long before the last human settlements near the poles die of crop failure.

Zink ,

Well that seems depressingly likely to be accurate.

exocrinous ,

I'm doing my part by not driving a car, but most people are willing to be part of the problem if it makes their lives easier.

xkforce ,

Yep.

Gabu ,

Quite easily, yes. Unlike humans, with their limited lifespans and slow minds, Artificial Intelligence could create hundreds of different paintings in the time it'd take me to finish one.

poplargrove ,

Being able to put out lots of works isn't the same as being able to come up with good, meaningful art?

Spzi ,

That depends on things we don't know yet. If it can be brute-forced (throw loads of computation power, gazillions of trial & error, petabytes of data including human opinions at it), then yes, "lots of work" can be an equivalent.

If it does not, we have a mystery to solve. Where does this magic come from? It cannot be broken down into data and algorithms, but still emerges in the material world? How? And what is it, if not dependent on knowledge stored in matter?

On the other hand, how do humans come up with good, meaningful art? Talent? Practice. Isn't that just another equivalent of "lots of work"? This magic depends on many learned data points and acquired algorithms, executed by human brains.

There also is survivor bias. Millions of people practice art, but only a tiny fraction are recognized as artists (if you ask the magazines and wallets). Would we apply the same measure to computer-generated art, or would we expect it to shine in every instance?

As "good, meaningful art" still lacks a good, meaningful definition, I can see humans moving the goalpost as technology progresses, so that it always remains a human domain. We just like to feel special and have a hard time accepting humiliations like being pushed out of the center of the solar system, or placed on one random planet among billion others, or being just one of many animal species.

Or maybe we are unique in this case. We'll probably be wiser in a few decades.

poplargrove ,

What does it even mean to bruteforce creating art? Trying all the possible prompts to some image model?

The approach people take to learning or applying a skill like painting is not bruteforcing, there is actual structure and method to it.

Spzi ,

What does it even mean to bruteforce creating art? Trying all the possible prompts to some image model?

Doesn't have to be that random, but can be. Here, I wrote: "throw loads of computation power, gazillions of trial & error, petabytes of data including human opinions".

The approach people take to learning or applying a skill like painting is not bruteforcing, there is actual structure and method to it.

Ok, but isn't that rather an argument that it can eventually be mastered by a machine? They excel at applying structure and method, with far more accuracy (or the precise amount of desired randomness) and speed than we can.

The idea of brute-forcing art comes down to philosophical questions. Do we have some immaterial genie in us which cannot be seen and described by science, which cannot be recreated by engineers? Engeniers, lol. Is art something which depends on who created it, or does it depend on who views it?

Either way, what I meant is that it is thinkable that more computation power and better algorithms will bring machines closer to being art creators, although some humans will surely reject that solely because they are machines. Time will tell.

naevaTheRat ,

Feel free to audit my comments to confirm my distinct lack of GPT enthusiasm, but that question is unanswerable.

What is "creating art"? A distinctly human thing? then trivially no. Idk how many people go with this interpretation though. Although I think many artists and art appreciators do at least some of the time.

Is it drawing pretty pictures? Probably too reductive for even the most hardline tech enthusiasts, but computers are already very good at this. If I want to, say, get my face into something that looks like an old-timey oil painting, computers are way faster than humans.

Is it making things that make us feel something? They can probably get pretty good at this. Although it's unclear how novel the results will be, most people aren't exposed to most art, so you could probably produce novel feelings on an individual level pretty well.

Art is so fuzzy and used with such a range of definitions it's not really clear what this is asking.

Even if they're better, the future might still suck. Machines are technically better at all the components of carpentry than humans, but I'd rather furniture wasn't soulless minimalist MDF landfill garbage and carpenters could still earn a living. Even if that means my chairs were a bit uneven.

twig ,

This is some pretty weird and lowkey racist exposition on humanity.

Humankind isn't a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.

It's not that it took 1 million years to "figure out" farming. It's that 1 specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1M years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don't practice agriculture, because it's not actually advantageous in many ways -- stored foods are less nutritious, agriculture requires a fairly sedentary existence, it takes a shit load of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.

Also where did you come up with the number 12,000 for "figuring out" the combustion engine? Genuinely curious. Like were we "working on it" for 12k years? I don't get it. But this isn't exactly a net positive and has come with some pretty disastrous consequences. I say this because you're proposing a linear path for "humanity" forward, when the reality is that humans are many things, and progress viewed in this way has a tendency toward racism or at least ethnocentrism.

But also yeah, the point of this meme is "artists are valuable."

GreyEyedGhost ,

The first heat engines were fire pistons, which go back to prehistory, so 12k to 25k years sounds about right. The next application of steam to make things move happened about 450 BC, about 2.5k years ago. Although not a direct predecessor to the ICE, they all are heat engines.

twig ,

Fire pistons are so damn cool. Yeah, that makes sense then.

nBodyProblem ,

This is some pretty weird and lowkey racist exposition on humanity.

Getting “racism” from that post is a REAL stretch. It’s not even weird, agriculture and mechanization are widely considered good things for humanity as a whole

Humankind isn't a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.

ANY group of humans beyond the individual is purely just a social construct and classing humans into a single group is no less sensible than grouping people by culture, family, tribe, country etc.

It's not that it took 1 million years to "figure out" farming. It's that 1 specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1M years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don't practice agriculture, because it's not actually advantageous in many ways -- stored foods are less nutritious, agriculture requires a fairly sedentary existence, it takes a shit load of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.

Agriculture is certainly more efficient in terms of nutrition production for a given calorie cost. It’s also much more reliable. Arguing against agriculture as a good thing for humanity as a whole is the thing that’s weird.

twig ,

I'm really not "arguing against agriculture," I'm pointing out that there are other modes of subsistence that humans still practice, and that that's perfectly valid. There are legitimate reasons why a culture would collectively reject agriculture.

But in point of fact, agriculture is not actually more efficient or reliable. Agriculture does allow for centralized city-states in a way that foraging/hunting/fishing usually doesn't, with a notable exception of many indigenous groups on the western coast of Turtle Island.

A study positing that agriculturalists are not in fact more productive, and are more prone to famine:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3917328/

But the main point I was trying to make is that different expressions of human culture still exist, and not all cultures have followed along the trajectory of the dominant culture. People tend to view colonialism, expansion and everything that means as inevitable, and I think that's a pretty big problem.

Harbinger01173430 ,

This kind of thinking is dangerous and will hinder planetary unification...

twig ,

All I'm trying to point out is that distinct cultures are worthy of respect and shouldn't be glossed over.

But be real with me: can you think of a single effort for "planetary unification" that wasn't a total nightmare? I sure can't.

Harbinger01173430 ,

This attitude is what prevents us from unifying...smh

melpomenesclevage ,

LLMs are not a step to AGI. Full stop. Lovelace called this like 200 years ago. Turing and Minsky called it in the 40s.

Gabu ,

Pray tell, when did we achieve AGI so that you can say this with such conviction? Oh, wait, we didn't - therefore the path there is still unknown.

melpomenesclevage ,

Okay, this is no more a step to AGI than the publication of 'blindsight' or me adding tamarind paste to sweeten my tea.

The project isn't finished, but we know basic stuff. And yeah, sometimes history is weird, sometimes the enlightenment happens because of oblivious assholes having bad opinions about butter and some dude named 'le rat' humiliating some assholes in debates.

But LLMs are not a step to AGI. They're just not. They do nothing intelligence does that we couldn't already do. You're doing pareidolia. Projecting shit.

Harbinger01173430 ,

When the Jews made their first mud golem ages ago?

evranch ,

We may not even "need" AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.

LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).

We already use them with other models, for example Whisper is a model that recognizes speech. You feed the output to an LLM to interpret it, use the LLM's JSON output with a traditional parser to feed a motion control system, then back to an LLM to output text to feed to one of the many TTS models so it can "tell you what it's going to do".

Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it's just 4 different machine learning algorithms in a trenchcoat.
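To make that "trenchcoat" concrete, here's a minimal Python sketch of the orchestration being described. Every function is a hypothetical stub standing in for the real component (a speech-recognition model like Whisper, an LLM prompted to emit JSON, a plain motion-control parser, a TTS model); only the hand-off formats between stages matter:

```python
import json

def speech_to_text(audio: bytes) -> str:
    """Stub for a speech-recognition model (e.g. Whisper)."""
    return "pick up the red cup"

def llm_to_command(utterance: str) -> str:
    """Stub for an LLM prompted to emit structured JSON commands."""
    return json.dumps({"action": "grasp", "target": "red cup"})

def drive_motion(command: dict) -> str:
    """Stub for the traditional, non-ML motion-control system."""
    return f"executing {command['action']} on {command['target']}"

def text_to_speech(text: str) -> bytes:
    """Stub for a TTS model that voices the robot's status."""
    return text.encode("utf-8")

def pipeline(audio: bytes) -> bytes:
    text = speech_to_text(audio)                # model 1: ASR
    command = json.loads(llm_to_command(text))  # model 2: LLM -> JSON
    status = drive_motion(command)              # plain parser, no ML
    return text_to_speech(status)               # model 3: TTS

print(pipeline(b"...").decode("utf-8"))
# -> executing grasp on red cup
```

The point of the sketch is that no single stage "understands" anything; the appearance of a coherent assistant comes purely from chaining narrow models through agreed-upon formats.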

melpomenesclevage ,

passable imitation of understanding

Okay so there are things they're useful for, but this one in particular is fucking... Not even nonsense.

Also, the ML algos exponentiate the necessary clock cycles with each one you add.

So it's less a trenchcoat and more an entire data center.

And it still can't understand; it's still just sleight of hand.

evranch ,

And it still can't understand; it's still just sleight of hand.

Yes, thus "passable imitation of understanding".

The average consumer doesn't understand tensors, weights and backprop. They haven't even heard of such things. They ask it a question, like it was a sentient AGI. It gives them an answer.

Passable imitation.

You don't need a data center except for training, either. There's no exponential term as the models are executed sequentially. You can even flush the huge LLM off your GPU when you don't actively need it.

I've already run basically this entire stack locally and integrated it with my home automation system, on a system with a 12GB Radeon and 32GB RAM. Just to see how well it would work and to impress my friends.

You yell out "$wakeword, it's cold in here. Turn up the furnace" and it can bicker with you in near-realtime about energy costs before turning it up the requested amount.
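The "no exponential term" claim is just that the stages run one after another, so peak memory is bounded by the largest single model rather than the sum of all of them. A toy illustration of that load-run-flush pattern (all names and VRAM figures are made up; real code would move actual model weights on and off the GPU):

```python
class FakeModel:
    """Stand-in for a large model; real code would load weights to the GPU."""
    def __init__(self, name: str, vram_gb: float):
        self.name, self.vram_gb = name, vram_gb

    def run(self, x: str) -> str:
        return f"{self.name}({x})"

def run_sequentially(stages, x):
    """Run each stage in turn, keeping only one model resident at a time."""
    peak = 0.0
    for name, vram in stages:
        model = FakeModel(name, vram)    # "load" this stage's model
        peak = max(peak, model.vram_gb)  # memory high-water mark
        x = model.run(x)
        del model                        # "flush" it before the next stage
    return x, peak

out, peak = run_sequentially(
    [("whisper", 2.0), ("llm", 8.0), ("tts", 1.0)], "audio"
)
print(out)   # tts(llm(whisper(audio)))
print(peak)  # 8.0 -- the largest model, not the 11.0 total
```

So the stack fits on a single consumer GPU as long as the biggest model does, at the cost of reload latency between stages.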

melpomenesclevage ,

One of the engineers who wrote 'ELIZA' had like a deep connection to and relationship with it. The guy who wrote it.

Painting a face on a spinny door will make people form a relationship with it. Not a measure of AGI.

gives them an answer

'An answer' isn't hard. A Magic 8-Ball does that. So does a piece of paper that says "drink water, you stupid cunt". This makes me think you're arguing from commitment or identity rather than knowledge or reason. Or you just don't care about truth.

Yeah, they talk to it like an AGI. Or a search engine (which are a step to AGI, largely crippled by LLMs).

Color me skeptical of your claims in light of this.

Aceticon ,

I think it's pretty natural for people to confuse the way mechanisms of communication are used with inherent characteristics of the entity you're communicating with: "If it talks like a medical doctor then surely it's a medical doctor".

Only that's not how it works, as countless politicians, salesmen and conmen have demonstrated - no matter how much we dig down into subtle details, comms isn't really guaranteed to tell us all that much about the characteristics of what's on the other side - they might be just lying or simulating, and there are even entire societies and social strata educated since childhood to "always present a certain kind of image" (just go read about old wealth in England), or in other words to project a fake impression of their character in the way they communicate.

All this to say that it doesn't require ill intent for somebody to go around insisting that LLMs are intelligent: many if not most people try to read the character of a subject from the language the subject uses (which they shouldn't, but that's how humans evolved to think in social settings), so they truly believe that what produces language like an intelligent creature must be an intelligent creature.

They're probably not the right people to be opining on cognition and intelligence, but let's not assign malice to it - at worst it's pigheaded ignorance.

melpomenesclevage ,

I think the person my previous comment was replying to wasn't malicious; I think they're really invested, financially or emotionally, in this bullshit, to the point their critical thinking is compromised. Different thing.

Odd loop backs there.

evranch ,

I think you're misreading the point I'm trying to make. I'm not arguing that LLM is AGI or that it can understand anything.

I'm just questioning what the true use case of AGI would be that can't be achieved by existing expert systems, real humans, or a combination of both.

Sure Deepseek or Copilot won't answer your legal questions. But neither will a real programmer. Nor will a lawyer be any good at writing code.

However when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn't that good enough? Do we really need to develop a true human level intelligence when we already have 8 billion of those looking for something to do?

AGI is a fun theoretical concept, but I really don't see the practical need for a "next step" past the point of expanding and refining our current deep learning models, or how it would improve our world.

melpomenesclevage ,

Those are not meaningful use cases for llm's.

And they're getting worse at even faking it now.

Honytawk ,

To create general AI, we first need a way for computers to communicate proficiently with humans.

LLMs are just that.

melpomenesclevage ,

It's not, though. It's autocorrect. It is not communication. It's literally autocorrect.

weker01 ,

That is not an argument. Let me demonstrate:

Humans can't communicate. They are meat. They are not communicating. It's literally meat.

melpomenesclevage , (edited )

Spanish is not English. It's Spanish.

A lot of people are really emotionally invested in this tool being a lot of things it's not. I think because it's kind of the last gasp of pretending capitalism can give us something that isn't shit, the last thing that came out before the enshittification spiral tightened, never mind the fact that it's largely a cause of that, and I don't think any of you can be critical or clear-headed here.

I'm afraid we're so obsessed with it being the bullshit sci-fi toy it isn't that we'll ignore its real use cases, or worse: apply it to its real use cases, completely misunderstand what it's doing, and Adeptus Mechanicus our way into getting so fucking many people killed/maimed - those uses are mostly medicine-adjacent.

weker01 ,

I was just pointing out that your emotional plea that this technology is "just autocorrect" is not an argument in any way.

For it to be one you need to explicitly state the implication of that fact. Yes, architecturally it is autocomplete, but that does not obviously imply anything. What is it about autocomplete that bars a system from the ability to understand?

Humans are made of meat but that does not imply they can't speak or think.

melpomenesclevage ,

If I said 'this is just a spoon' you'd know what I meant. This is not an emotional appeal.

I'm not saying computers can't ever think. I'm saying this is just autocorrect, a fancy version of the shit I'm using to type this.

Autocorrect is not understanding, and if you don't understand that, you have zero understanding of either tech or philosophy. This topic is about both, so you really shouldn't be making assertions. Stick to genuine questions.

eskimofry ,

less than 500 years to create general intelligence will be a blip in time.

You jinxed it. We aren't gonna be around for 500 years now, are we?
