ai group

bibliolater ,
@bibliolater@qoto.org avatar

The Atomic Human: understanding ourselves in the age of AI

In this insightful talk, Neil Lawrence will reveal how AI serves as a powerful assistant to human intelligence, not a replacement. He will discuss the limits of AI in replicating human thought and its profound impact on society and information management.

🎥 length: forty-four minutes and fifty seconds.

https://www.youtube.com/watch?v=hcgqkbSknM8

@ai

unixorn ,
@unixorn@hachyderm.io avatar

OH: re

i want to see the bug report about the application plagiarizing proprietary source code

"expected behavior: we don't get sued"
"actual behavior: legal is at my desk"

@ai

unixorn ,
@unixorn@hachyderm.io avatar

OH on slack:
> I will focus on AI by building EMP devices

@ai

bibliolater ,
@bibliolater@qoto.org avatar

The ancient dream of AI?

When it comes to bringing forth artificial human-like life and understanding, mythos preceded logos. The earliest AI stories are Greek mythological narratives.

https://www.biblonia.com/p/the-ancient-dream-of-ai

@ai

bibliolater ,
@bibliolater@qoto.org avatar

Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics

In this work, we investigate the capabilities of multimodal large language models (LLMs) in DeepFake detection. We conducted qualitative and quantitative experiments on multimodal LLMs and show that they can expose AI-generated images through careful experimental design and prompt engineering.

Jia, S. et al. (2024) ‘Can ChatGPT Detect DeepFakes? A study of using multimodal large language models for media forensics,’ arXiv [Preprint]. https://doi.org/10.48550/arXiv.2403.14077

@ai

bibliolater ,
@bibliolater@qoto.org avatar

The Invisible Workers Behind AI: Exposing Underpaid Tech Labor

Meet the invisible workforce behind tech giants like Google, Facebook, Amazon, and Uber. These underpaid and disposable workers label images, moderate content, and train AI systems, often earning less than minimum wage. Their work is essential yet remains in the shadows, unacknowledged by the companies that depend on them.

🎥 length: fifty-two minutes and forty-one seconds.

https://www.youtube.com/watch?v=VPSZFUiElls

@ai

bibliolater ,
@bibliolater@qoto.org avatar

How to opt out of Meta’s AI training

Your posts are a gold mine, especially as companies start to run out of AI training data.

https://www.technologyreview.com/2024/06/14/1093789/how-to-opt-out-of-meta-ai-training/

@ai

bibliolater ,
@bibliolater@qoto.org avatar

ChatGPT is bullshit

We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, (ii) want the reader to believe what the application outputs.

Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

@ai

bibliolater ,
@bibliolater@qoto.org avatar

(Ir)rationality and cognitive biases in large language models

First, the responses given by the LLMs often display incorrect reasoning that differs from cognitive biases observed in humans. This may mean errors in calculations, violations of the rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality—there is significant variation in the responses given by a single model for the same task.

Macmillan-Scott Olivia and Musolesi Mirco. 2024 (Ir)rationality and cognitive biases in large language models R. Soc. Open Sci. 11: 240255. http://doi.org/10.1098/rsos.240255

@ai

bibliolater OP ,
@bibliolater@qoto.org avatar

@UlrikeHahn @ai I agree with you that such assumptions need an empirical basis.

bibliolater OP ,
@bibliolater@qoto.org avatar

@UlrikeHahn @ai Once again in agreement: the sterile conditions of an academic setting do not always best represent the wide breadth of human responses.

bibliolater ,
@bibliolater@qoto.org avatar

AI can ‘fake’ empathy but also encourage Nazism, disturbing study suggests

Computer scientists have found that artificial intelligence (AI) chatbots and large language models (LLMs) can inadvertently allow Nazism, sexism and racism to fester in their conversation partners.

When prompted to show empathy, these conversational agents do so in spades, even when the humans using them are self-proclaimed Nazis. What’s more, the chatbots did nothing to denounce the toxic ideology.

https://www.livescience.com/technology/artificial-intelligence/ai-can-fake-empathy-but-also-encourage-nazism-disturbing-study-suggests

@ai

bibliolater ,
@bibliolater@qoto.org avatar

Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society

The analysis focused on four research questions: 1) the distribution of misinformation, disinformation, and malinformation across different platforms; 2) recurring themes in fake news and their visibility; 3) the role of artificial intelligence as an authoritative and/or spreader agent; and 4) strategies for combating information disorder. The role of AI was highlighted, both as a tool for fact-checking and building truthiness identification bots, and as a potential amplifier of false narratives.

Tomassi A, Falegnami A, Romano E (2024) Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society. PLOS ONE 19(5): e0303183. https://doi.org/10.1371/journal.pone.0303183

#X @ai @socialmedia

UlrikeHahn ,
@UlrikeHahn@fediscience.org avatar

@bibliolater @ai @socialmedia VKontakte has a low disinformation score??
