rotnroll666 , to random
@rotnroll666@mastodon.social avatar

Absolutely unbelievable, but here we are: Slack uses messages, files, etc. for building and training models, enabled by default, and opting out requires a manual email from the workspace owner.

https://slack.com/intl/en-gb/trust/data-management/privacy-principles

What a time to be alive in IT. 🤦‍♂️

ariadne , (edited ) to random
@ariadne@climatejustice.social avatar

The pre-eminent philosopher Daniel Dennett (recently deceased) stated that "Large Language Models are the most dangerous technology ever developed, capable of leading to the collapse of not just […] but of […] ... This technology can flood the world with manipulative fake people ... Who controls your attention, controls you. We are in danger of losing our free will and being turned into puppets."

Do you agree with Dennett?

(interview here, beginning at 06:11; unfortunately only on YT, not available elsewhere - https://www.youtube.com/watch?v=gh2dgsaNY3A)

skaphle ,

@ariadne The most dangerous technology is the steam engine, and it's not only "capable" of causing collapse: that is the path we are currently on. People are manipulated about that fact, their attention is diverted, and we actively avoid confronting it. All without LLMs or AI. I condemn tech bros who are fascinated with future doom while ignoring what is happening right now.

BeAware , (edited ) to random
@BeAware@social.beaware.live avatar

Now, that being said, here's some GOOD uses of AI:

  1. Correct grammar/spelling/word usage

  2. Summarizing long form text

  3. Suggestions for a wide variety of things.

  4. Searching (Fuck you google)

  5. DnD game-running, either assisted or as an actual DM (I'm probably misrepresenting this one, unfortunately; I'm not an actual DnD player, so I guess I'm wishing for it😬)

veronica , to random
@veronica@mastodon.online avatar

A quick reminder that humans have not yet invented AI. It's an imitation puffed up by marketing. They've dressed up a parrot as Agent Smith.

BeAware , to random
@BeAware@social.beaware.live avatar

I feel kinda sad for all the people still posting complaints about stuff they know won't go away or be changed. It's like if I went online and bitched EVERY DAY about not getting a million dollars, fully believing that by doing that I'll eventually get a million dollars....

Unfortunately, that's not how the world works, and people ranting about stuff like AI, thinking it's doing anything besides making them look like a crazy person, is just so sad.🤦‍♂️

It's not going anywhere. It really isn't. I'm sorry to break it to you like this. I hope you can move on and find something more important to spend your time complaining about.

FeralRobots , to random
@FeralRobots@mastodon.social avatar

I get a lot of pushback when I admonish people to accurately describe what an LLM is doing - I'm told 'that ship has sailed' or 'just deal with the fact that people say they think'.

It matters. It fucking matters. It matters because using the wrong words for it indicates that people think those "answers" are something that they're not - something that they can never, ever be.

[srcs: https://bsky.app/profile/phyphor.one-dash.org/post/3knxrotc2k22x, https://bsky.app/profile/astrokatie.com/post/3k5kaswwgpv2u]

I don’t think it can be emphasized enough that large language models were never intended to do math or know facts; literally all they do is attempt to sound like the text they’re given, which may or may not include math or facts. They don’t do logic or fact checking — they’re just not built for that
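The point above can be made concrete with a toy sketch (my illustration, not from any of the quoted posts): a bigram model that only learns which word tends to follow which. It has no notion of facts or logic; it just echoes the statistics of its training text. Production LLMs are vastly more sophisticated, but the core objective is the same: predict the next token.

```python
# Toy "stochastic parrot": learn which word follows which, nothing more.
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, n=8, seed=0):
    """Emit a continuation, sampling each next word from observed counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # no observed continuation: the model has nothing to say
        words_, counts = zip(*options.items())
        out.append(rng.choices(words_, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # fluent-looking word salad, no facts involved
```

The output looks locally plausible because every transition was seen in training, yet nothing checks whether any statement is true, which is exactly the criticism in the quoted thread.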

    dingemansemark , to random
    @dingemansemark@scholar.social avatar

    Meta: "Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model. (...) We’re committed to the continued growth and development of an open AI ecosystem for releasing our models responsibly."

    Also Meta: releases straight into the bottom 5 of the GenAI openness leaderboard https://opening-up-chatgpt.github.io

    Newsflash: Llama3, like Llama2, is at best "open weights" and not in any sense open source.

    unixorn , to ai group
    @unixorn@hachyderm.io avatar

Great post about LLMs hallucinating safety information. Specifically MSDS (Material Safety Data Sheets), but it applies to any other safety information.

    https://www.funraniumlabs.com/2024/04/phil-vs-llms/

I would much rather get a "no results" when I'm looking for medical interactions than an LLM helpfully telling me "Here's some bullshit you don't know enough to know is horribly wrong"

    Even something as innocent as acetaminophen can destroy your liver if you overdose on it.

    @llm @ai

    beandev , to random German
    @beandev@social.tchncs.de avatar

Privacy, goodbye: access to chats of 630 million Discord users sold

    https://www.golem.de/news/privatsphaere-ade-zugriff-auf-chats-von-630-millionen-discord-nutzern-verkauft-2404-184267.html

Chats that are publicly accessible were scraped and are now being offered for paid search. There are apparently special deals for training purposes.

Brave new world.

Btw, this can happen with any publicly accessible (social) media. Including the Fediverse.

    mgorny , to random
    @mgorny@treehouse.systems avatar

The Register: "Gentoo Linux tells AI-generated code contributions to fork off"

    https://www.theregister.com/2024/04/16/gentoo_linux_ai_ban/

    bibliolater , to ai group
    @bibliolater@qoto.org avatar

    ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds

    "MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."
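A quick sanity check of the figures quoted above (my arithmetic, not from the study itself): 32.3% of 300 citations works out to roughly one fabricated reference in every three.

```python
# Check the study's headline numbers: 32.3% of 300 ChatGPT-generated
# citations were hallucinated.
total_citations = 300
hallucination_rate = 0.323
hallucinated = round(total_citations * hallucination_rate)
print(hallucinated)  # 97 fabricated citations out of 300
```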

    https://www.psypost.org/chatgpt-hallucinates-fake-but-plausible-scientific-citations-at-a-staggering-rate-study-finds/

    @science @ai

    attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

    ruby , to random
    @ruby@toot.cat avatar

Great article about how an LLM is like a mechanical psychic, and exactly how that works for both human and software mentalists.

    https://softwarecrisis.dev/letters/llmentalist/

    deirdresm , to random
    @deirdresm@hachyderm.io avatar

    This is the single best explanation (long!) I've read about why LLMs are a con. Great piece from @baldur.

    https://softwarecrisis.dev/letters/llmentalist/

ainmosni , to random
    @ainmosni@berlin.social avatar

So, my trial just expired, and while it did cut down on some typing, it also made me feel like the quality of my code was lower. And of course it felt dirty to use, considering it's a license-whitewashing machine.

I don't think I will be paying for it; the results aren't worth it.

    cgruenloh , to random
    @cgruenloh@hci.social avatar

Root problem: demand for research funding is much bigger than supply. Introducing: a complicated grant application system to distribute funding.

20xx: Grant proposals are getting very complicated; writing bureaus are increasingly used to support researchers in putting their ideas on paper in a way that increases their chances of winning the funding lottery. 1/x

    cgruenloh OP ,
    @cgruenloh@hci.social avatar

    2023: ChatGPT is getting used to write grants.

2025: Many, many more research grant applications are submitted; so many that reviewing them all is no longer feasible. Introducing: using AI to filter through all the submitted proposals.

    2/x

    cgruenloh OP ,
    @cgruenloh@hci.social avatar

This is inspired by a conversation I had today with someone working at a university grant office. This is not a joke. I learned that for one national grant it's estimated that 30% of applications are written using AI. And funding bodies are currently discussing whether to use AI to deal with a possible exponential increase in submissions?? Besides the "to disclose" or "not to disclose" debate… 3/x

    cgruenloh OP ,
    @cgruenloh@hci.social avatar

    SERIOUSLY: What are we doing? 😱
“Solving” a problem (too many submissions) with the very same technology that caused it? And probably making it even worse, because which proposals are more likely to get through: the human-written or the AI-written ones?

We should look closely at the process, not throw AI at everything. Only silver lining: closer interaction between applicant and funding body is also being discussed. Not sure if that comes AFTER the AI filter tho… 4/x

    cgruenloh OP ,
    @cgruenloh@hci.social avatar

In the context of writing and reviewing letters of reference, this cycle of “writing by AI” and “reviewing by AI” was exactly what @pluralistic predicted:
    https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/

    … and which is what I thought immediately about when I read this nature article:
    https://www.nature.com/articles/d41586-023-03238-5

    What a world we live in 😱

    5/5

    amydentata , to random
    @amydentata@tech.lgbt avatar

    Hmmmmm

    The misleading readout, however, is not unusual and exposes weaknesses in the AI-generated software that many believe still needs fine-tuning.

    👏 You 👏 can't 👏 fix 👏 accuracy 👏 problems 👏 with 👏 machine 👏 learning 👏 language 👏 models

    They're a fundamental aspect of the technology. It's magnetic poetry with extra steps. Not an answer machine.

    In fact, such errors have sparked a bigger backlash worldwide, with a rise in the number of lawsuits over poor accessibility to websites for disabled people.

    This will not end. The answer is to hire actual people to provide actual accessibility. Sowwy CEOs :( :( :( :( :(

So annoying that they will not, under any circumstance, understand any of this. All that can happen is that the financial loss and legal liability finally become too great.

    https://www.ft.com/content/3c877c55-b698-43da-a222-8ae183f53078

    cassidy , to random
    @cassidy@blaede.family avatar

    “AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that us lowly individuals are beholden to.

    https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

    NatureMC , to writers group
    @NatureMC@mastodon.online avatar

Every time you think it couldn't get any worse, a new revelation tops it off. As an author, I wonder how long it will take for the book market to be completely enshittified.
    Thank you for the ! ⬆️ @writers @bookstodon


    ami , to random
    @ami@mastodon.world avatar

I keep seeing posts about AI and LLMs... and the majority are vastly incorrect and full of fear.

There's a lack of understanding of what they are, how they work, and what they are capable of now and in the future.

    Governments are looking to legislate on AI and they lack understanding as well.

    AI is a tool that will improve other tools from traffic lights to your oven and climate control.

    Banning AI is like banning pens because people write nasty things that make you scared.
