"Remember that the outcomes of Large Language Models are not designed to be true — they are merely designed to be statistically likely. " - ERYK SALVAGGIO
This should basically exclude the use of LLMs for entire classes of user-facing services.
In tomorrow's #DigitalHistoryOFK, Torsten Hiltmann, Martin Dröge & Nicole Dresselhaus (HU Berlin, #4Memory) demonstrate the potential of #LargeLanguageModels & prompt-based approaches for #NamedEntityRecognition in historical text sources, using the 1921 Baedeker travel guide as an example.
OpenAI CTO Mira Murati says some creative jobs shouldn’t exist if their “content” is “not very high quality.” Who’s judging that? The tech people or artists?
More on that, plus recommended reads, labor updates, and other news in the Disconnect Roundup.
Just read this article by Molly White @molly0xfff — a well-reasoned and articulate thought piece on LLMs and their potential use cases, but perhaps most importantly the arguments against their use (and alleged usefulness).
As educators and scientists, we can and should communicate clearly that generative AI tools are not sentient, have no capacity for truth, and are merely complex statistical algorithms dressed up in a plain language outfit.
In early 2023 we wrote a piece on human creativity in the age of text generators. TL;DR: With synthetic text generators on the rise, there was never a better time to cultivate the artisanal and interactive roots of human creativity.
It’s nauseating that the hyperscalers are crankin’ the carbon to inflate the AI bubble like there’s no tomorrow (which there won’t be, for my children, if we don’t cut back) but hey, don’t forget that Bitcoin is still in the running for the single most dangerous-to-the-planet use of computers.
Also note that, last I knew, #genAI uses less energy than #cryptocurrency and a lot less water than #golf, not that any of those are okay. (Original Scottish golf without watering is of course fine :) #aiethics #sustainability
At the request of our faculty board I drafted some basic guidance on generative AI and research integrity (v1), with valuable input from @Iris, @olivia, @andreasliesenfeld, among others. Primarily aimed at academics and written from a values-first rather than a tech-first perspective.
Very few people seem to be dealing with the facts that:
#genAI services are all being priced as loss-leaders at this point.
gen AI is only going to get more expensive; there's literally no foreseeable prospect for cost reduction in how AI is produced.
Eventually these vendors are gonna be expected to count profits, not revenues, and that's going to mean fewer services, less service, higher prices, or all of the above.
Over and over AI is being deployed as a way to avoid the high cost of human mental labor.
You'd rather have a bank of servers huffing clouds of carbon into the air than just paying some people to solve the design problem.
I know hiring programmers to work on UI isn't glamorous, and the work is slow, the results aren't flashy, but we just can't keep on skipping this step or wishing that some cocktail-shaker full of matrices and stolen data will paper over the issue.
@futurebird
Brutal irony is that "AI labor" is only cheap because it's a market capture gambit. Nobody is paying real cost of using #genAI tools.
Once genAI vendors think everyone's locked in on AI-driven business processes they'll jack up prices to closer approximations of what it actually costs. Any apparent* cost savings will have been temporary.
_
*pretty sure imagined cost savings are a human hallucination, resulting from failure to examine the actual business processes being replaced.
The true power of #genAI is not technological but rhetorical: almost all conversations about it are about what executives say it will do "one day" or "soon" rather than what we actually see (and of course no mention of a business model, which doesn't exist).
We are told to simultaneously believe AI is so "early days" as to excuse any lack of real usefulness, and that it is so established - even "too big to fail" - that we are not permitted to imagine a future without it.
students at College Unbound are AMAZING! Check out this #PressRelease about how they led the way in developing our #GenAI institutional policy! So cool to get to play a part in this!
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
Regarding the last boost: if there were a functional left politics in the United States, it would have pushed back vigorously and relentlessly against the cloud and would still be doing so today. In Marx's terms, the cloud removes the means of production from the hands of the workers and places them under the control of corporations. In that way the movement of most digital work into the cloud is analogous to the trends the Luddites were fighting against, with the movement of skilled weaving work into factories performed by loom operators and subsequent deskilling of weavers.
This should have been vigorously resisted as it was unfolding, but it was not as far as I can remember. It should be vigorously opposed now, but it is not. Data centers, our modern mills, are consuming vast quantities of critical resources like electric power and clean water, to the point that there are communities struggling to provide these resources to human beings who live there. Yet the pushback against this expansion is muted, and data centers are expanding rapidly. Where is the left's response to this corporate seizure of the means of production?
People are worried about generative AI taking jobs, and rightly so, but I think these concerns point to an overarching trend toward a kind of digital feudalization. Generative AI is already created by taking people's hard work without any compensation. You're permitted to use the technology "free of charge," but you can't pay the rent or mortgage, or buy food, with ChatGPT output. This essentially renders all of us as peasants.
The threat from bosses that you could be fired and replaced with generative AI, even if false, presses down wage demands and encourages doing work for no compensation. In this climate, people feel compelled to learn how to use generative AI to do their work because they perceive (again, probably rightly) that if they don't do that they will eventually find themselves without employment opportunities. Once again, if you're in a position of doing uncompensated work like this on behalf of a powerful entity, you are in a relationship distressingly similar to the one a peasant was in to a lord in the feudal system.
I'm not saying anything new here, just thinking out loud. But doesn't the left have anything to say, loudly, proudly, and often, about this? These are bread-and-butter issues for the left, aren't they?
In this issue: Sci-Fi world animation, OpenAI Sora, Stability AI SV3D tool, Adobe Firefly and Substance 3D, YouTube AI disclosure rules, Apple + Gemini AI, two T2I prompts, GenAI news-to-know, and tools-to-know.