“When I say ‘I am hungry’, I am reporting on my sensed physiological states. When an LLM generates the sequence ‘I am hungry’, it is simply generating the most probable completion of the sequence of words in its current prompt.”
These AI SEO spam operations have used lists of common searches to ensure that their pages come up first in searches in the "long tail" — the kind of search where it used to be about 50/50 whether you'd find a page addressing your needs. But it used to be that if you found something like "The top 15 smallest ants in the world" it wouldn't be nonsense. It'd either exist and be the work of another person who cared, OR you found nothing. Not so now! I can't possibly over-stress how bad this is! 1/
@futurebird I don't think you're overstressing it. On social media (the amplifier of the whole thing), one can already recognise tendencies where knowledge loss as a cultural phenomenon is reminiscent of biodiversity loss. Experts still recognise it. But what happens when the shifting baseline is no longer noticeable?
Tried to create a class test (poem interpretation) with it. At first it looked good. I asked for several alternative suggestions: it offered, among others, Hilde Domin, Sarah Kirsch, Rilke. I then searched for these poems — partly online, partly in collected editions — and did not find them!
Does ChatGPT-4o perhaps invent poems that sound similar to those of the authors? Or has it tapped sources that have not been published?
@herrlarbig Language models contain no factual knowledge, even if it sometimes looks that way. The poems are almost certainly cobbled together at random.
The best thing about Linux is that it guarantees you will never have to wait on the phone for Windows customer support ever again 😂
Just enter your problem in your browser and it has probably already been solved and explained by a bunch of nerds on a forum somewhere who are getting impatient about solving such mundane problems for us less experienced folx.
But... back before smartphones were invented (unless you count Blackberry), when my problem was no internet connectivity, solving problems was much harder lol.
@futurebird @Lucia
🥥 Compare and contrast the incredible speed at which accurate information about Linux can be generated on the Fediverse versus large language models instructing us to eat rocks and to use glue with pizza toppings.
Never underestimate the power of tech bros to overlook real problems and obvious solutions while throwing billions of $ at "disruption." 🥥 #LLM, #AI, #Rocks, #Glue, #Linux, #Disruption, #Capitalism, #Greed, #Hallucinations, #Hubris, #TuckersBalls
I'd been writing a post for #weblogpomo2024 talking about some of the more comical fuck-ups all of these #ai and #llm have been spewing. And now I'm fucking furious.
Note: content warning for depression, self-harm, and suicide
Shoving #AI in users' faces is a symptom of a larger problem in tech right now. Until recently there was this idea that tech served its users. If you were writing #software, you tried to figure out what your user wanted the software to do, and you wrote software that tried hard to do what the user wanted.
Today huge swathes of software and online services have pivoted to doing what the owning companies/shareholders want them to do. In bygone days, software developers used to see how they could get the user's experience down to the least amount of friction. Hell, Amazon even patented "one-click" shopping.
Today #Google is literally trying to INCREASE the number of clicks before you get what you want because each time you click, they earn a bit more revenue.
Look at any #smartTV or video streaming app. If you're willing to watch whatever they put in front of you, you can do THAT with just a click or two. But if you're specifically seeking one particular thing because it is what YOU WANT to watch, the minimum number of clicks skyrockets, including ads, previews, and suggestions that you need to take action (like clicking "skip") to avoid.
Software and services are prioritizing what THEY want as the lowest-friction result and are making what YOU want the highest friction result. #LLM crap is just one of the many things they want you to interact with, and so they abandon any pretense of listening to you and just force it on you.
"[T]hink about training an #LLM as a student preparing for an exam. #RAG is like taking an open book exam. The LLM can access the relevant information using any retrieval mechanism, such as web browsing or database queries. Fine-tuning is like taking a closed-book exam. The LLM needs to memorize new knowledge during the fine-tuning process, & it answers questions based on its memory." https://thenewstack.io/improving-llm-output-by-combining-rag-and-fine-tuning/
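The "open book" half of that analogy can be sketched in a few lines: retrieve the most relevant document, then splice it into the prompt the model sees. Everything here is illustrative — the tiny document store, the keyword-overlap retriever, and the prompt template are stand-ins, not any real RAG library's API; a production system would use embeddings and an actual LLM call.

```python
# Minimal RAG sketch ("open book exam"): pick the document that best
# matches the query, then hand it to the model inside the prompt.
# The documents and the scoring are toy placeholders for illustration.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents,
               key=lambda doc: len(query_words & set(doc.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Open-book style: the retrieved context travels with the question."""
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

documents = [
    "Slack messages may be used for model training unless you opt out.",
    "Acetaminophen overdose can cause severe liver damage.",
]
query = "Can acetaminophen harm the liver?"
prompt = build_prompt(query, retrieve(query, documents))
```

Fine-tuning, by contrast, would bake those documents into the model's weights ahead of time, so nothing extra travels with the question — which is exactly the "closed book" half of the analogy.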
Absolutely unbelievable, but here we are. #Slack is by default using messages, files, etc. for building and training #LLM models — enabled by default, and opting out requires a manual email from the workspace owner.
The pre-eminent (and recently deceased) philosopher Daniel Dennett stated recently that "Large Language Models #LLM are the most dangerous technology ever developed, capable of leading to the collapse of not just #democracy but of #civilization ... This technology can flood the world with manipulative fake people ... Who controls your attention, controls you. We are in danger of losing our free will and being turned into puppets."
@ariadne The most dangerous technology is the steam engine, and it's not merely "capable" of causing the collapse of #civilization — that is the path we are currently on. People are manipulated about that fact, their attention diverted, and we actively try to avoid confronting it. All without #LLM|s or #AI. I condemn tech bros who are fascinated with future doom while ignoring what is happening right now.
Now, that being said, here's some GOOD uses of AI:
Correct grammar/spelling/word usage
Summarizing long form text
Suggestions for a wide variety of things.
Searching (Fuck you google)
DnD gamerunning, either assisted or as an actual DM (this one I am probably misrepresenting, unfortunately — I am not an actual DnD player, so I guess this is wishful thinking 😬)
I feel kinda sad for all the people still posting complaints about stuff they know won't go away or be changed. It's like if I went online and bitched EVERY DAY about not getting a million dollars, fully believing that by doing that I'll eventually get a million dollars....
Unfortunately that's not how the world works, and people ranting about stuff like AI, thinking it's doing anything besides making them look like a crazy person, are just so sad. 🤦♂️
It's not going anywhere. It really isn't. I'm sorry to break it to you like this. I hope you can move on and find something more important to spend your time complaining about.
I get a lot of pushback when I admonish people to accurately describe what an #LLM is doing - I'm told 'that ship has sailed' or 'just deal with the fact that people say they think'.
It matters. It fucking matters. It matters because using the wrong words for it indicates that people think those "answers" are something that they're not — that they can never, ever be.
Meta: "Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model. (...) We’re committed to the continued growth and development of an open AI ecosystem for releasing our models responsibly."
I would much rather get a "no results" when I'm looking for medical interactions than an #LLM helpfully telling me "Here's some bullshit you don't know enough to know is horribly wrong"
Even something as innocent as acetaminophen can destroy your liver if you overdose on it.
Chats that are publicly accessible were scraped and are now being made available for search — for a fee. There are apparently special offers for #LLM training.
Brave new world.
Btw., this can happen with all public (social) media. Including in the #Fediverse.