Absolutely unbelievable, but here we are. #Slack by default uses messages, files, etc. for building and training #LLM models. It's enabled by default, and opting out requires a manual email from the workspace owner.
The pre-eminent philosopher Daniel Dennett, who died recently, stated that "Large Language Models #LLM are the most dangerous technology ever developed, capable of leading to the collapse of not just #democracy but of #civilization ... This technology can flood the world with manipulative fake people ... Who controls your attention, controls you. We are in danger of losing our free will and being turned into puppets."
@ariadne The most dangerous technology is the steam engine, and it's not only "capable" of causing the collapse of #civilization but that collapse is the path we are currently on. People are manipulated about that fact, their attention is diverted, and we actively try to avoid confronting it. All without #LLMs or #AI. I condemn tech bros who are fascinated with future doom while ignoring what is happening right now.
Now, that being said, here's some GOOD uses of AI:
Correct grammar/spelling/word usage
Summarizing long form text
Suggestions for a wide variety of things.
Searching (Fuck you google)
DnD gamerunning, either assisted or as an actual DM (this one I am probably misrepresenting, unfortunately, since I am not an actual DnD player. So I guess I am wishing for this 😬)
I feel kinda sad for all the people still posting complaints about stuff they know won't go away or be changed. It's like if I went online and bitched EVERY DAY about not getting a million dollars, fully believing that by doing that I'll eventually get a million dollars....
Unfortunately that's not how the world works, and ranting about stuff like AI, thinking it's doing anything besides making you look like a crazy person, is just so sad. 🤦‍♂️
It's not going anywhere. It really isn't. I'm sorry to break it to you like this. I hope you can move on and find something more important to spend your time complaining about.
I get a lot of pushback when I admonish people to accurately describe what an #LLM is doing - I'm told 'that ship has sailed' or 'just deal with the fact that people say they think'.
It matters. It fucking matters. It matters because using the wrong words for it indicates that people think those "answers" are something that they're not - that they can never, ever be.
Meta: "Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model. (...) We’re committed to the continued growth and development of an open AI ecosystem for releasing our models responsibly."
I would much rather get a "no results" when I'm looking for medical interactions than an #LLM helpfully telling me "Here's some bullshit you don't know enough to know is horribly wrong"
Even something as innocent as acetaminophen can destroy your liver if you overdose on it.
Chats that are publicly accessible were scraped and are now being offered for search, for a fee. Apparently there are special deals for #LLM training.
Brave new world.
Btw, this can happen with all public (social) media. Including the #Fediverse.
ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds
"MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."
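The detail about "properly formatted DOIs" is worth dwelling on: a syntactically valid DOI proves nothing about whether the cited paper exists. A minimal sketch of that gap, using a format check only (the regex and the fabricated example DOI below are illustrative assumptions, not from the study):

```python
import re

# Crossref-style DOI shape: "10." + a 4-9 digit registrant code + "/" + suffix.
# Matching this pattern says nothing about whether the DOI actually resolves.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(s: str) -> bool:
    """Return True if s is *formatted* like a DOI, whether real or invented."""
    return bool(DOI_PATTERN.match(s))

# A real DOI and a plausible invented one both pass the format check,
# which is exactly why hallucinated citations look legitimate at a glance.
print(looks_like_doi("10.1038/nature14539"))    # real paper: True
print(looks_like_doi("10.9999/fake.2024.123"))  # fabricated but well-formed: True
print(looks_like_doi("not a doi"))              # False
```

Actually verifying a citation means resolving the DOI against a registry (e.g. doi.org or the Crossref API), not just eyeballing its shape.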
So, my #Copilot trial just expired, and while it did cut down on some typing, it also made me feel like the quality of my code was lower, and of course it felt dirty to use it considering that it's a license whitewashing machine.
I don't think I will be paying for it; the results aren't worth it.
Root problem: Demand in research funding is much bigger than supply. Introducing: complicated grant application system to distribute funding.
20xx: Grant proposals are getting very complicated; writing bureaus are increasingly used to support researchers in putting their ideas on paper in a way that increases their chances of winning the funding lottery. 1/x #Academia #Funding #LLM #ChatGPT
2025: Many, many more research grant applications are submitted, so many that reviewing them all is no longer feasible. Introducing: using AI to filter through all the submitted proposals.
This is inspired by a conversation I had today with someone working at a university grant office. This is not a joke. I learned that for one national grant it's estimated that 30% of applications are written using AI, and that funding bodies are currently discussing whether to use AI to deal with a possibly exponential increase in submissions?? Besides the "to disclose" or "not to disclose" debate… 3/x
SERIOUSLY: What are we doing? 😱
“Solving” a problem (too many submissions) with the very same technology that caused it? And probably making it even worse, because what type of proposals are likely to get through? The human-written or the AI-written ones?
We should look closely at the process, not throw AI at everything. The only silver lining: closer interaction between applicant & funding body is also being discussed. Not sure if that is AFTER the AI filter tho… 4/x #Academia #Funding #LLM #ChatGPT
The misleading readout, however, is not unusual and exposes weaknesses in the AI-generated software that many believe still needs fine-tuning.
👏 You 👏 can't 👏 fix 👏 accuracy 👏 problems 👏 with 👏 machine 👏 learning 👏 language 👏 models
They're a fundamental aspect of the technology. It's magnetic poetry with extra steps. Not an answer machine.
In fact, such errors have sparked a bigger backlash worldwide, with a rise in the number of lawsuits over poor accessibility to websites for disabled people.
This will not end. The answer is to hire actual people to provide actual accessibility. Sowwy CEOs :( :( :( :( :(
So annoying that they will not, under any circumstance, understand any of this. All that can happen is the financial loss and legal liability finally become too great.
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
Every time you think it couldn't get any worse, a new revelation tops it off. As an author, I wonder how long it will take for the book market to be completely enshittified.
Thank you for the #giftArticle! ⬆️ @writers@bookstodon
Someone got Gab's AI chatbot to show its instructions ( mbin.grits.dev )
Credit to @bontchev