matthewskelton , to random
@matthewskelton@mastodon.social avatar

"Remember that the outcomes of Large Language Models are not designed to be true — they are merely designed to be statistically likely. " - ERYK SALVAGGIO

This should basically exclude the use of LLMs for entire classes of user-facing services.

https://cyberneticforests.substack.com/p/a-hallucinogenic-compendium
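
As a rough illustration of what "statistically likely" means here (a minimal sketch, not taken from the linked piece; the toy scores and temperature are invented): an autoregressive model turns scores over candidate next tokens into a probability distribution and samples from it, and nothing in that loop checks whether a continuation is true.

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Sample the next token from a softmax over model scores.

    The only criterion is probability; truth never enters the loop.
    """
    keys = list(scores)
    scaled = [scores[k] / temperature for k in keys]
    top = max(scaled)                              # subtract max for numerical stability
    weights = [math.exp(s - top) for s in scaled]
    return random.choices(keys, weights=weights, k=1)[0]

# Invented scores for completing "The first person to walk on the moon was ..."
scores = {"Armstrong": 4.2, "Aldrin": 3.4, "Gagarin": 2.8}
print(sample_next_token(scores, temperature=0.8))
# The sampler happily returns a wrong name some of the time, and would return
# one most of the time if the training data had scored it higher.
```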

DigitalHistory , to histodons group German
@DigitalHistory@fedihum.org avatar

, but prompto! 🤖

In tomorrow's session, Torsten Hiltmann, Martin Dröge & Nicole Dresselhaus (HU Berlin) use the example of the 1921 Baedeker travel guide to demonstrate the potential of LLMs and prompt-based approaches for information extraction from historical text sources.

Open to all!

🔜 When? Wed., June 26, 4-6 pm, Zoom
ℹ️ Abstract: https://dhistory.hypotheses.org/7870


@nfdi4memory @histodons
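
The post doesn't spell out the method, but for readers wondering what "prompt-based" extraction from a source like the 1921 Baedeker can look like in practice, here is a minimal sketch. The prompt wording, the example passage, and the use of the OpenAI chat-completions client are assumptions for illustration, not the presenters' actual pipeline.

```python
# Minimal sketch of prompt-based entity extraction from a historical
# travel-guide passage. The model choice, prompt wording, and passage are
# illustrative assumptions; the presenters' actual setup may differ.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

passage = (
    "Vom Potsdamer Platz gelangt man über die Leipziger Strasse "
    "zum Kaufhaus Wertheim und weiter zur Universität."  # invented example text
)

prompt = (
    "Extract all place names and institutions from the following passage of "
    "a 1921 German travel guide. Return a JSON list of objects with the keys "
    "'text' and 'type' (PLACE or INSTITUTION).\n\nPassage:\n" + passage
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
    temperature=0,        # keep the extraction as deterministic as possible
)
print(response.choices[0].message.content)
```

One appeal of this style of approach for historians is that the extraction guideline lives in the prompt and can be revised in plain language rather than by retraining a model.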

parismarx , to random
@parismarx@mastodon.online avatar

OpenAI CTO Mira Murati says some creative jobs shouldn’t exist if their “content” is “not very high quality.” Who’s judging that? The tech people or artists?

More on that, plus recommended reads, labor updates, and other news in the Disconnect Roundup.

https://disconnect.blog/roundup-openai-says-some-artistic-jobs-shouldnt-exist/

nopatience , to random
@nopatience@swecyb.com avatar

Just read this article by Molly White @molly0xfff: a well-reasoned and well-articulated thought piece on LLMs, their potential use cases, and, perhaps most importantly, the arguments against their use (and alleged usefulness).

AI isn't useless. But is it worth it?

https://www.citationneeded.news/ai-isnt-useless/

> AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.

DrFerrous , to random
@DrFerrous@hachyderm.io avatar

As educators and scientists, we can and should communicate clearly that generative AI tools are not sentient, have no capacity for truth, and are merely complex statistical algorithms dressed up in a plain language outfit.

jon , to random
@jon@henshaw.social avatar

Joanna Maciejewska with the best take.

The quote was from Edge magazine and you can learn more about Joanna on her website: https://authorjm.com

btravern , to random
@btravern@dice.camp avatar

To no surprise, Wizards of the Coast/Hasbro has gone back on their word about not using generative AI in their products.

dingemansemark , to random
@dingemansemark@scholar.social avatar

In early 2023 we wrote a piece on human creativity in the age of text generators. TL;DR: With synthetic text generators on the rise, there was never a better time to cultivate the artisanal and interactive roots of human creativity.

It was desk-rejected so often that we stopped counting (journals seemed to prefer puff pieces on the amazing opportunities of generative AI). I am still proud of it. Perhaps you would like to read it? https://ideophone.org/human-creativity-in-the-age-of-text-generators/

    timbray , to random
    @timbray@cosocial.ca avatar

    It’s nauseating that the hyperscalers are crankin’ the carbon to inflate the AI bubble like there’s no tomorrow (which there won’t be, for my children, if we don’t cut back) but hey, don’t forget that Bitcoin is still in the running for the single most dangerous-to-the-planet use of computers.

    https://www.theverge.com/2024/5/15/24157496/microsoft-ai-carbon-footprint-greenhouse-gas-emissions-grow-climate-pledge

    j2bryson , to random
    @j2bryson@mastodon.social avatar

    Please don’t call #genAI “#ai”. Hopefully generative AI’s days are numbered, but AI and both the advantages and the problems of automating behaviour and discovery will be with us for at least the duration of the digital era. #aiethics #aiact #aia #digitalGovernance https://chaos.social/@HonkHase/112435548946270854

    j2bryson OP ,
    @j2bryson@mastodon.social avatar

    Also note that last I knew #genAI uses less energy than #cryptocurrency and a lot less water than #golf, not that any of those are okay. (Original Scottish golf without watering is of course fine :) #aiethics #sustainability

    dingemansemark , to random
    @dingemansemark@scholar.social avatar

    At the request of our faculty board I drafted some basic guidance on generative AI and research integrity (v1). With valuable input from @Iris @olivia @andreasliesenfeld among others. Primarily aimed at academics and written from a values-first rather than a tech-first perspective.

    Produced for @Radboud_uni Faculty of Arts but since some folks asked for a shareable version I've preprinted it at https://osf.io/2c48n/

    Em0nM4stodon , to random
    @Em0nM4stodon@infosec.exchange avatar

    Controversial opinion (apparently):

    With so-called AI, they started by trying to automate the things that should honestly never be automated:

    Art,
    Communication,
    Decision Making.

    Those are all very human things that humans are actually quite good at.

    What would actually be useful is to automate the things that humans can't do, or that are dangerous for humans to do.

    Then, everyone who gets their job automated should have their wage replaced with UBI (Universal Basic Income) as compensation.

    horovits , to random
    @horovits@fosstodon.org avatar

    Generative AI took out the fun part of coding, the creation, leaving us to debug and test auto-generated code. Not fun 😕

    And it seems our software has also become worse since the generative AI era.

    From @kevlin's keynote, sharing developer research and thoughts.

    FeralRobots , to random
    @FeralRobots@mastodon.social avatar

    Very few people seem to be dealing with the facts that:

    • Gen AI services are all being priced as loss-leaders at this point.

    • Gen AI is only going to get more expensive; there's literally no foreseeable prospect for cost reduction in how AI is produced.

    • Eventually these vendors are gonna be expected to count profits, not revenues, & that's going to mean either fewer services, less service, higher prices, or all of the above.

    futurebird , to random
    @futurebird@sauropods.win avatar

    Over and over AI is being deployed as a way to avoid the high cost of human mental labor.

    You'd rather have a bank of servers huffing clouds of carbon into the air than just paying some people to solve the design problem.

    I know hiring programmers to work on UI isn't glamorous, the work is slow, and the results aren't flashy, but we just can't keep on skipping this step or wishing that some cocktail-shaker full of matrices and stolen data will paper over the issue.

    FeralRobots ,
    @FeralRobots@mastodon.social avatar

    @futurebird
    The brutal irony is that "AI labor" is only cheap because it's a market-capture gambit. Nobody is paying the real cost of using the tools.

    Once genAI vendors think everyone's locked in on AI-driven business processes, they'll jack up prices to closer approximations of what it actually costs. Any apparent* cost savings will have been temporary.
    _
    *pretty sure imagined cost savings are a human hallucination, resulting from failure to examine the actual business processes being replaced.

    ceedee666 , to random German
    @ceedee666@mastodon.social avatar

    @noybeu sues OpenAI for spreading false information.

    https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

    This is going to be interesting as it’s about the very foundation of LLMs.

    PavelASamsonov , to random
    @PavelASamsonov@mastodon.social avatar

    The true power of #genAI is not technological, but rhetorical: almost all conversations about it are about what executives are saying it will do "one day" or "soon" rather than what we actually see (and of course no mention of a business model, which doesn't exist).

    We are told to simultaneously believe AI is so "early days" as to excuse any lack of real usefulness, and that it is so established - even "too big to fail" - that we are not permitted to imagine a future without it.

    leaton01 , to AcademicChatter group
    @leaton01@scholar.social avatar

    Students at College Unbound are AMAZING! Check out this article about how they led the way in developing our institutional AI policy! So cool to get to play a part in this!

    https://www.collegeunbound.edu/apps/news/article/1911138

    @academicchatter @edutooters

    cassidy , to random
    @cassidy@blaede.family avatar

    “AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

    It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

    https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

    abucci , to random
    @abucci@buc.ci avatar

    Regarding the last boost: if there were a functional left politics in the United States, it would have pushed back vigorously and relentlessly against the cloud and would still be doing so today. In Marx's terms, the cloud removes the means of production from the hands of the workers and places them under the control of corporations. In that way the movement of most digital work into the cloud is analogous to the trends the Luddites were fighting against, with the movement of skilled weaving work into factories performed by loom operators and subsequent deskilling of weavers.

    This should have been vigorously resisted as it was unfolding, but it was not as far as I can remember. It should be vigorously opposed now, but it is not. Data centers, our modern mills, are consuming vast quantities of critical resources like electric power and clean water, to the point that there are communities struggling to provide these resources to human beings who live there. Yet the pushback against this expansion is muted, and data centers are expanding rapidly. Where is the left's response to this corporate seizure of the means of production?

    People are worried about generative AI taking jobs, and rightly so, but I think these concerns point to an overarching trend towards a kind of digital feudalization. Generative AI is already created by taking people's hard work without any compensation. You're permitted to use the technology "free of charge", but you can't pay the rent or mortgage, or buy food, with ChatGPT output. This essentially renders all of us peasants.

    The threat from bosses that you could be fired and replaced with generative AI, even if false, presses down wage demands and encourages doing work for no compensation. In this climate, people feel compelled to learn how to use generative AI to do their work because they perceive (again, probably rightly) that if they don't, they will eventually find themselves without employment opportunities. Once again, if you're doing uncompensated work like this on behalf of a powerful entity, you are in a relationship distressingly similar to the one a peasant had with a lord in the feudal system.

    I'm not saying anything new here, just thinking out loud. But doesn't the left have anything to say, loudly, proudly, and often, about this? These are bread-and-butter issues for the left, aren't they?

    ajsadauskas , to random
    @ajsadauskas@aus.social avatar

    New York City's new LLM-powered chatbot (chat.nyc.gov) is happy to help you with all sorts of questions.

    For example, how to go about opening a butcher shop for cannibals on the Upper East Side.

    No, really:

    imagepunk , to random
    @imagepunk@mastodon.social avatar

    AI Art and Animation out now.

    In this issue: Sci-Fi world animation, OpenAI Sora, Stability AI's SV3D tool, Adobe Firefly and Substance 3D, YouTube AI disclosure rules, Apple + Gemini AI, two T2I prompts, GenAI news-to-know, and tools-to-know.

