“We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, (ii) want the reader to believe what the application outputs.”
It's quite wild that most people seem to be vilifying AI these days, yet when I post something most people can enjoy, they don't care that it's AI-assisted...🤨
Just made this song about ADHD with AI. This is pretty amazing, not gonna lie.
Lyrics:
[Verse]
Brain feels like a maze
Can't focus just daze
Thoughts runnin' in loops
Lost in endless hoops
[Verse 2]
Clock ticks time is flyin'
Inside I'm just tryin'
World's loud can't keep up
Sippin' from an empty cup
[Chorus]
ADHD got me spinnin' round
Feet on the ground but my head's in the clouds
Life in chaos hear the silent shouts
Lost in the mind can't figure it out
[Verse 3]
Struggle every day
Mind drifts far away
Heart heavy with the weight
Of things left incomplete
[Bridge]
Isolation feels real
All wounds that won't heal
People don't understand
Livin' with an altered plan
[Chorus]
ADHD got me spinnin' round
Feet on the ground but my head's in the clouds
Life in chaos hear the silent shouts
Lost in the mind can't figure it out
for artists* who haven't transitioned from Adobe's Creative Cloud to FLOSS alternatives (whether because of work demands, time constraints, the learning curve, etc.), you can install Objective-See's LuLu to block Adobe's software from reaching out to their servers, preventing them from harvesting your work and exploiting you for their own gain.
*creators, photographers, illustrators, retouchers, designers, animators, or any other preferred term for yourself
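for those who want a quick, firewall-free stopgap, a similar effect can be sketched by mapping telemetry hostnames to a dead address in `/etc/hosts`. the hostname below is a placeholder, not a real Adobe endpoint; substitute whatever hosts you actually see LuLu flag. the sketch writes to a demo copy of the file so you can try it without `sudo`:

```shell
# Sketch: block a telemetry host by mapping it to 0.0.0.0 in the hosts file.
# "telemetry.placeholder-adobe.example" is a made-up hostname; replace it
# with the hosts your firewall reports. On a real system you would append
# to /etc/hosts itself (with sudo); here we work on a copy.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"
cp /etc/hosts "$HOSTS_FILE" 2>/dev/null || touch "$HOSTS_FILE"

# 0.0.0.0 makes lookups of the hostname resolve to a non-routable address,
# so the app's connection attempts fail immediately.
echo '0.0.0.0 telemetry.placeholder-adobe.example' >> "$HOSTS_FILE"

# Show the rule we just added.
grep 'placeholder-adobe' "$HOSTS_FILE"
```

note this only covers hostnames you know about in advance; an outbound firewall like LuLu still catches connections to hosts you haven't listed.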
In the latest episode of the FIRST Impressions Podcast, hosts Chris John Riley and Martin McKay sit down with Satoshi Okada and Takuho Mitsunaga, esteemed researchers from The University of Tokyo and upcoming speakers at #FIRSTCon2024 in Fukuoka, Japan.🇯🇵
Dive into the world of artificial intelligence and explore the implications of large language models (LLMs) like ChatGPT. Okada and Mitsunaga shed light on the pros and cons of LLMs and emphasize the crucial need for multi-stakeholder governance in ensuring safer AI development.
(Ir)rationality and cognitive biases in large language models
“First, the responses given by the LLMs often display incorrect reasoning that differs from cognitive biases observed in humans. This may mean errors in calculations, or violations to rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality—there is significant variation in the responses given by a single model for the same task.”
Macmillan-Scott, O. and Musolesi, M. 2024. (Ir)rationality and cognitive biases in large language models. R. Soc. Open Sci. 11: 240255. http://doi.org/10.1098/rsos.240255
The federal #government is facing a dwindling window to #regulate the use of #ArtificialIntelligence in campaigns before the #2024election. The #FCC chair announced a plan last month to require #politicians to disclose #AI use in TV & radio #ads. But the proposal is facing opposition from a top #FEC official, and the commission has been considering its own new rules on AI use in campaigns.
low-key wanna find a batch of people to fork pixelfed
at the moment, it simply doesn't have the allure of these new apps & platforms sprouting up. there appears to be little desire to collaborate with people who have deep industry insight, and aspects i would consider mission critical (e.g. anything related to a seamless transition from Meta products) need serious attention. marketability, and softening the landing after leaving an ecosystem many of us have spent 20, 50, sometimes even 70% of our lives on, just seems like an afterthought
don't get me wrong, i'm still hopeful about the project, and i'm pouring hours into it daily running my own instance. but i do think there needs to be a swift pivot before an entire population jumping ship from Meta picks a different platform altogether, and i worry the opportunity could soon be lost