ai group

bibliolater ,
@bibliolater@qoto.org avatar

(Ir)rationality and cognitive biases in large language models

First, the responses given by the LLMs often display incorrect reasoning that differs from cognitive biases observed in humans. This may mean errors in calculations, or violations to rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality—there is significant variation in the responses given by a single model for the same task.
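
As a rough illustration of the inconsistency the authors describe (my sketch, not the paper's code), one way to probe it is to send a model the same reasoning task several times and tally the distinct answers; query_model() below is a hypothetical placeholder for whatever chat API is under test.

# Hypothetical sketch: probe how consistently an LLM answers one fixed task.
# query_model() is a placeholder, not part of the paper or of any specific library.
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder: call the LLM under test and return its raw text answer."""
    raise NotImplementedError("wire this to the chat API you are evaluating")

def consistency_check(prompt: str, trials: int = 10) -> Counter:
    """Ask the same question `trials` times and count the distinct answers.
    A fully consistent reasoner would produce a single answer every time."""
    answers = Counter()
    for _ in range(trials):
        answers[query_model(prompt).strip()] += 1
    return answers

# Example task in the spirit of the paper's probability questions:
task = "A fair coin is tossed twice. What is the probability of two heads? Answer with a fraction."
# consistency_check(task) might return something like Counter({'1/4': 7, '1/2': 2, '1/8': 1})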

Macmillan-Scott, Olivia and Musolesi, Mirco. 2024. (Ir)rationality and cognitive biases in large language models. R. Soc. Open Sci. 11: 240255. http://doi.org/10.1098/rsos.240255

@ai

bibliolater OP ,
@bibliolater@qoto.org avatar

@UlrikeHahn @ai In agreement with you that assumptions need an empirical basis.

bibliolater OP ,
@bibliolater@qoto.org avatar

@UlrikeHahn @ai Once again in agreement: the sterile conditions of an academic setting do not always best represent the wide breadth of human responses.

bibliolater ,
@bibliolater@qoto.org avatar

Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society

The analysis focused on four research questions: 1) the distribution of misinformation, disinformation, and malinformation across different platforms; 2) recurring themes in fake news and their visibility; 3) the role of artificial intelligence as an authoritative and/or spreader agent; and 4) strategies for combating information disorder. The role of AI was highlighted, both as a tool for fact-checking and building truthiness identification bots, and as a potential amplifier of false narratives.

Tomassi A, Falegnami A, Romano E (2024) Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society. PLOS ONE 19(5): e0303183. https://doi.org/10.1371/journal.pone.0303183

#X @ai @socialmedia

UlrikeHahn ,
@UlrikeHahn@fediscience.org avatar

@bibliolater @ai @socialmedia VKontakte has a low disinformation score??

bibliolater ,
@bibliolater@qoto.org avatar

ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds

"MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."

https://www.psypost.org/chatgpt-hallucinates-fake-but-plausible-scientific-citations-at-a-staggering-rate-study-finds/

@science @ai

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater OP ,
@bibliolater@qoto.org avatar

@NearerAndFarther @science @ai

Am I right in thinking it is behind a paywall and not accessible to the general public?

craignicol ,
@craignicol@octodon.social avatar

@arniepix @bibliolater @science @ai the hallucination thing, especially coming from California, just makes it sound more like a cult "yeah man, copyright is an illusion, and reality is whatever you say it is, expand your mind"

bibliolater ,
@bibliolater@qoto.org avatar

Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics

In this work, we investigate the capabilities of multimodal large language models (LLMs) in DeepFake detection. We conducted qualitative and quantitative experiments to demonstrate that multimodal LLMs can expose AI-generated images through careful experimental design and prompt engineering.
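
The authors' own prompts and evaluation protocol are not reproduced here; the sketch below only shows the general shape of such a query using the OpenAI Python client, with the model name and prompt wording as my assumptions rather than the paper's.

# Illustrative sketch: ask a multimodal chat model whether a face image is AI-generated.
# Model name and prompt wording are assumptions; the paper's actual prompts differ.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_image(path: str) -> str:
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any multimodal chat model could be substituted
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this face photo real or AI-generated? Answer 'real' or 'synthetic', then explain briefly."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# classify_image("face.png") would return the model's verdict and its rationale.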

Jia, S. et al. (2024) ‘Can ChatGPT Detect DeepFakes? A study of using multimodal large language models for media forensics,’ arXiv (Cornell University) [Preprint]. https://doi.org/10.48550/arxiv.2403.14077.

@ai

bibliolater ,
@bibliolater@qoto.org avatar

The Invisible Workers Behind AI: Exposing Underpaid Tech Labor

Meet the invisible workforce behind tech giants like Google, Facebook, Amazon, and Uber. These underpaid and disposable workers label images, moderate content, and train AI systems, often earning less than minimum wage. Their work is essential yet remains in the shadows, unacknowledged by the companies that depend on them.

🎥 #Video length: 52 minutes, 41 seconds.

https://www.youtube.com/watch?v=VPSZFUiElls

#ArtificialIntelligence #AI #Technology #Tech #Documentary #Internet #Corporations #Labour #Labor @ai

bibliolater ,
@bibliolater@qoto.org avatar

How to opt out of Meta’s AI training

Your posts are a gold mine, especially as companies start to run out of AI training data.

https://www.technologyreview.com/2024/06/14/1093789/how-to-opt-out-of-meta-ai-training/

@ai

bibliolater ,
@bibliolater@qoto.org avatar

ChatGPT is bullshit

We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, (ii) want the reader to believe what the application outputs.

Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

@ai

bibliolater ,
@bibliolater@qoto.org avatar

AI can ‘fake’ empathy but also encourage Nazism, disturbing study suggests

Computer scientists have found that artificial intelligence (AI) chatbots and large language models (LLMs) can inadvertently allow Nazism, sexism and racism to fester in their conversation partners.

When prompted to show empathy, these conversational agents do so in spades, even when the humans using them are self-proclaimed Nazis. What’s more, the chatbots did nothing to denounce the toxic ideology.

https://www.livescience.com/technology/artificial-intelligence/ai-can-fake-empathy-but-also-encourage-nazism-disturbing-study-suggests

@ai

bibliolater ,
@bibliolater@qoto.org avatar

Managing extreme AI risks amid rapid progress

AI systems threaten to amplify social injustice, erode social stability, enable large-scale criminal activity, and facilitate automated warfare, customized mass manipulation, and pervasive surveillance.

https://doi.org/10.1126/science.adn0117

#ComputerScience #STEM #Technology #Tech #AI #ArtificialIntelligence #DOI @ai

#Image attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png
