(Ir)rationality and cognitive biases in large language models
“First, the responses given by the LLMs often display incorrect reasoning that differs from cognitive biases observed in humans. This may mean errors in calculations, or violations to rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality—there is significant variation in the responses given by a single model for the same task.”
Macmillan-Scott, O. and Musolesi, M. 2024. (Ir)rationality and cognitive biases in large language models. R. Soc. Open Sci. 11: 240255. http://doi.org/10.1098/rsos.240255
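A minimal sketch (not the authors' code) of one way to score the response inconsistency described above, assuming the repeated answers to a single task have already been collected; the `runs` values below are invented for illustration:

```python
# Illustrative sketch: quantify how consistently a model answers the
# same task across repeated runs, as a fraction agreeing with the
# modal (most common) answer.
from collections import Counter

def consistency(responses: list[str]) -> float:
    """Fraction of runs that agree with the most common answer."""
    if not responses:
        raise ValueError("no responses to score")
    counts = Counter(r.strip().lower() for r in responses)
    _, modal_count = counts.most_common(1)[0]
    return modal_count / len(responses)

# Ten hypothetical runs of one task: 1.0 means perfectly consistent;
# values near 1/len(responses) mean the model answers almost at random.
runs = ["120", "120", "96", "120", "150", "120", "96", "120", "120", "120"]
print(f"consistency: {consistency(runs):.2f}")  # consistency: 0.70
```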
Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society
“The analysis focused on four research questions: 1) the distribution of misinformation, disinformation, and malinformation across different platforms; 2) recurring themes in fake news and their visibility; 3) the role of artificial intelligence as an authoritative and/or spreader agent; and 4) strategies for combating information disorder. The role of AI was highlighted, both as a tool for fact-checking and building truthiness identification bots, and as a potential amplifier of false narratives.”
Tomassi A, Falegnami A, Romano E (2024) Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society. PLOS ONE 19(5): e0303183. https://doi.org/10.1371/journal.pone.0303183
ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds
"MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."
@arniepix @bibliolater @science @ai the hallucination thing, especially coming from California, just makes it sound more like a cult: "yeah man, copyright is an illusion, and reality is whatever you say it is, expand your mind"
Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics
“In this work, we investigate the capabilities of multimodal large language models (LLMs) in DeepFake detection. We conducted qualitative and quantitative experiments to demonstrate multimodal LLMs and show that they can expose AI-generated images through careful experimental design and prompt engineering.”
Jia, S. et al. (2024) ‘Can ChatGPT Detect DeepFakes? A study of using multimodal large language models for media forensics’, arXiv preprint. https://doi.org/10.48550/arxiv.2403.14077
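The general pattern the paper studies can be sketched as follows. This is illustrative only: it uses OpenAI's vision-capable chat API with an invented prompt and a hypothetical image path, not the authors' engineered prompts, model choices, or experimental design.

```python
# Illustrative sketch: ask a multimodal chat model whether an image
# shows signs of AI generation. "gpt-4o" is one vision-capable choice;
# "suspect_face.jpg" is a hypothetical input file.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_image(path: str) -> str:
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Examine this face photo for signs of AI "
                         "generation (asymmetric artifacts, warped "
                         "accessories, inconsistent lighting). Answer "
                         "yes or no, then justify briefly."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(screen_image("suspect_face.jpg"))
```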
The Invisible Workers Behind AI: Exposing Underpaid Tech Labor
“Meet the invisible workforce behind tech giants like Google, Facebook, Amazon, and Uber. These underpaid and disposable workers label images, moderate content, and train AI systems, often earning less than minimum wage. Their work is essential yet remains in the shadows, unacknowledged by the companies that depend on them.”
🎥 #Video length: fifty-two minutes and forty-one seconds.
ChatGPT is bullshit
“We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, (ii) want the reader to believe what the application outputs.”
Hicks, M. T., Humphries, J. and Slater, J. 2024. ChatGPT is bullshit. Ethics and Information Technology 26: 38. https://doi.org/10.1007/s10676-024-09775-5
AI can ‘fake’ empathy but also encourage Nazism, disturbing study suggests
“Computer scientists have found that artificial intelligence (AI) chatbots and large language models (LLMs) can inadvertently allow Nazism, sexism and racism to fester in their conversation partners.
When prompted to show empathy, these conversational agents do so in spades, even when the humans using them are self-proclaimed Nazis. What’s more, the chatbots did nothing to denounce the toxic ideology.”
“AI systems threaten to amplify social injustice, erode social stability, enable large-scale criminal activity, and facilitate automated warfare, customized mass manipulation, and pervasive surveillance.”