I'd been writing a post for #weblogpomo2024 about some of the more comical fuck-ups all of these #ai and #llm systems have been spewing. And now I'm fucking furious.
Note: content warning for depression, self-harm, and suicide
Today is #NewstodonFriday, a day to feature work from newsrooms that have an active presence in the #Fediverse. If you like what you see in the thread below, follow the profiles and boost their stories. If you're a journo or newsroom that we don't know about (or there's someone that should be on our radar), please comment below.
It's fashionable to criticize #LLMs, but can you think of another human invention that allows us to spend the energy budget of Tanzania to lift shitposts out of context and present them as if they were authoritative knowledge?
Shoving #AI in users' faces is a symptom of a larger problem in tech right now. Until recently there was this idea that tech served its users. If you were writing #software, you tried to figure out what your user wanted the software to do, and you wrote software that tried hard to do what the user wanted.
Today huge swathes of software and online services have pivoted to doing what the owning companies/shareholders want it to do. In bygone days, software developers used to see how they could get the user's experience down to the least amount of friction. Hell, Amazon even patented "one-click" shopping.
Today #Google is literally trying to INCREASE the number of clicks before you get what you want because each time you click, they earn a bit more revenue.
Look at any #smartTV or video streaming app. If you're willing to watch whatever they put in front of you, you can do THAT with just a click or two. But if you're specifically seeking one particular thing because it is what YOU WANT to watch, the minimum number of clicks skyrockets, including ads, previews, and suggestions that you need to take action (like clicking "skip") to avoid.
Software and services are prioritizing what THEY want as the lowest-friction result and are making what YOU want the highest friction result. #LLM crap is just one of the many things they want you to interact with, and so they abandon any pretense of listening to you and just force it on you.
@Peter_Arbeitslos@LaFinlandia yes and no. Yes, if they could do it in a timely way. And #fuckrussia mil analysts need to be sharp. Not much proof that they are. This war also raises another dilemma: 'fog of war' through TOO MUCH data, or rather too much 'noise', incl. deepfakes. Unlikely but not impossible. Then you need proper ML/'#AI' to find the signal in that forest of noise. #ukraine #destroyrussiaonceandforall
A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.
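To make the "inscrutable symbols" point concrete, here is a deliberately toy sketch (my own illustration, not how any production LLM is actually implemented): a learner that only ever sees opaque integer IDs and is "rewarded" purely by how well it predicts which ID follows which. Nothing inside the loop gives the symbols any meaning.

```python
# Toy sketch of symbol-only learning: the learner sees nothing but opaque
# integer IDs and co-occurrence statistics -- no referents, no world.
from collections import defaultdict
import random

# An arbitrary symbol stream; the IDs mean nothing to the learner.
corpus = [3, 7, 3, 7, 3, 7, 9, 3, 7]

# "Training": count which symbol tends to follow which. This stands in for
# the reward signal -- the only feedback is prediction success on the stream.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    # Output the symbol most often observed after `prev`. The learner has
    # no notion of what 3 or 7 "mean", only statistics from inside its box.
    followers = counts[prev]
    return max(followers, key=followers.get) if followers else random.choice(corpus)

print(predict(3))  # the most frequent follower of symbol 3 in the stream
```

Real LLM training swaps the counting for gradient descent on a neural network, but the shape of the experience is the same: symbols in, symbol out, score, repeat.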
If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never develop full natural language competence at all. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.
The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.
If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not do that if it were experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could do that?
Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and they were succeeding at all, we'd be hearing about it daily through the usual hype channels, because that'd be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad-faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else.
So a German magazine decided to run an “interview” with Michael Schumacher, in which Schumacher’s “responses” were fabulations generated entirely by #AI.
The real Michael Schumacher sustained a brain injury in 2013 and has not recovered.
The Schumacher family sued the magazine and won a settlement. But the fact that this story ran at all is beyond dumbfounding.
The reports of Google's AI results spouting nonsense are no surprise at all. Garbage in, garbage out. If you want to create a good AI, you need to have experts actually vet the training data. Of course, at that point, why not just have experts posting the information rather than AI?
Google disabled their AI so it will no longer give you a stupid answer when searching for "cheese not sticking to pizza," but the 11-year-old Reddit shitpost advising Elmer's Glue as a remedy, which the AI regurgitated yesterday, is still the top-ranked search result, and that's probably even worse.
African AI workers, mostly from Kenya, released an open letter to Joe Biden this week asking him to stop US tech companies from "systemically abusing and exploiting African workers" and to end the "modern day slavery" they're subjected to.
🔥Hot take🔥: I'm fully embracing the generative AI revolution!
Controversial, I know. But ever since these tools hit the scene, I've been using them non-stop to fuel my work – from finding leads to crafting engaging social media posts.
Let's be real: How else am I supposed to come up with fresh content for clients who sell things like air conditioners or military-grade tablets? When there's zero news or info to work with, producing two posts a week can be a serious struggle.
Enter the AI cavalry: ChatGPT, Bing Copilot, Gemini – they've all become my trusty sidekicks.
Now I can actually keep up with the workload, even as my agency keeps piling on new clients (without a raise, naturally) and refuses to hire more staff.
Awaiting the day when one of my 2,000+ "Fuck Spez" comments gets used by an AI as someone's definitive answer to their question.
Will they literally seek out Spez and proceed to fuck him silly? Seems likely, given that people are SO mad about the suggestion of putting glue on pizza...🤣
Ukrainian soldiers near the front line were startled by sounds they initially mistook for a Russian tank, but it was a farmer spraying his fields. ( streamable.com )
https://t.me/combatfootageua/16191...