@elilla
I think of this every time @kurtseifried brags on the OSSec podcast about his experiments, but I suspect he's too much of an old man to listen to reason and will probably justify himself by explaining how he's not big AI wasting natural resources @alice_watson
Water has some valid uses too, but much like AI, I'd rather not drown in it.
There's a marked difference between applying AI to hard problems that are suited to AI's unique strengths...and scaling up clusters that consume more resources than some small countries in order to generate tasty recipes involving bananas and gasoline.
Hey @kurtseifried, I kinda just got @'d into this conversation, and I don't know anything about you beyond the context in this thread, but I feel like this was getting dogpiley and the arguments were quickly devolving into wasted energy 😉
So let's start over. Hi, thanks for being willing to pop in and discuss a very heated topic with what seems like an already hostile crowd :blobcatsweats:
Quick background on me: my main is @alice, I've worked as the head of data at several companies, I'm currently working with a spatial AI company, and I'm the cofounder of another AI company. I certainly don't claim to be an expert in anything—though some people occasionally mistake me for one 🤷🏼♀️
If I'm going to be dragged into an argument, I'd like it to be one in which everyone comes with the intention to explore the topic in earnest. Otherwise, I'd rather go back to browsing cat pics.
It seems the original post's stance was that the arguments in favor of our current AI trajectory are ignoring or downplaying the harmful side-effects of the technology.
The OP's meme focused on increased energy and water consumption.
I feel like we can probably start off by conceding that AI has both some uses that are beneficial to humanity and some well-known issues, namely around training data and resource consumption.
@elilla The biggest gotcha I’ve encountered so far is for accessibility but it definitely doesn’t justify how widely this technology is currently marketed and deployed. It’s more like a thin silver lining.
@hypolite the problem with accessibility is that LLMs are inherently unreliable; by design, they inevitably produce errors that seem correct and are hard to spot.
suppose a company fires their accessibility engineers cos now they can autogenerate image descriptions with ChatGPT. as is by now beyond question, a large fraction of these will look good enough but in fact be wrong or lacking. do you honestly think they will invest the labour to proofread it all? it's not even clear that labour is significantly cheaper than the labour they used the LLM to eliminate in the first place.
LLM accessibility is like electric cars: a distraction that lets capitalists tighten their hold on capital and maintain business as usual, while sounding like they're doing something that will prove to be a solution, any day now... at the cost of worsening the actual existing problem, immediately.