The Atomic Human: understanding ourselves in the age of AI
“In this insightful talk, Neil Lawrence will reveal how AI serves as a powerful assistant to human intelligence, not a replacement. He will discuss the limits of AI in replicating human thought and its profound impact on society and information management.”
🎥 #Video length: forty-four minutes and fifty seconds.
I started reading this post, got as far as "AI influencers", and was sure I was about to be annoyed.
Then I realized the rest doesn't apply to me. I actually understand the technology and realize it's not the end of the world, nor is it the savior of all mankind.
It's a tool. Like all tools, it works for specific things if you use it correctly.
It can summarize a limited (but not small) amount of data accurately. It can correct grammar and spelling mistakes. It can describe an image fairly accurately, helping those with visual impairments, and it can help neurodivergent people write descriptions for folks with visual impairments or for other uses.
It has many uses. Just not "Actual Intelligence"; notice that's not actually what it's called. That's why I keep getting SO confused about why people keep trying to replace humans with AI for things that obviously need intelligence, like interpreting laws, giving medical advice, etc.
AI is a tool that can assist humans with jobs, provided they know its limitations and can reliably work around them. It's not a replacement for actual humans.
I use it to make alt text and to assist in realizing my creativity.
“When it comes to bringing forth artificial human-like life and understanding, mythos preceded logos. The earliest AI stories are Greek mythological narratives.”
Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics
“In this work, we investigate the capabilities of multimodal large language models (LLMs) in DeepFake detection. We conducted qualitative and quantitative experiments to demonstrate multimodal LLMs and show that they can expose AI-generated images through careful experimental design and prompt engineering.”
Jia, S. et al. (2024) ‘Can ChatGPT Detect DeepFakes? A study of using multimodal large language models for media forensics,’ arXiv (Cornell University) [Preprint]. https://doi.org/10.48550/arxiv.2403.14077.
The Invisible Workers Behind AI: Exposing Underpaid Tech Labor
“Meet the invisible workforce behind tech giants like Google, Facebook, Amazon, and Uber. These underpaid and disposable workers label images, moderate content, and train AI systems, often earning less than minimum wage. Their work is essential yet remains in the shadows, unacknowledged by the companies that depend on them.”
🎥 #Video length: fifty-two minutes and forty-one seconds.
One advantage of working on freely-licensed projects for over a decade is that I was forced to grapple with this decision long before mass scraping for AI training began.
In my personal view, option 1 is almost strictly better. Option 2 is never as simple as "only allow actual human beings access", because determining who is a human is hard. In practice, it means putting a barrier in front of the website that makes it harder for EVERYONE to access: gathering personal data, CAPTCHAs, paywalls, etc.
Are you 80% angry and 2% sad? Why ‘emotional AI’ is fraught with problems
“Emotional AI’s essential problem is that we can’t definitively say what emotions are. “Put a room of psychologists together and you will have fundamental disagreements,” says McStay. “There is no baseline, agreed definition of what emotion is.”
Nor is there agreement on how emotions are expressed. Lisa Feldman Barrett is a professor of psychology at Northeastern University in Boston, Massachusetts, and in 2019 she and four other scientists came together with a simple question: can we accurately infer emotions from facial movements alone? “We read and summarised more than 1,000 papers,” Barrett says. “And we did something that nobody else to date had done: we came to a consensus over what the data says.”
"Establishing that AI training requires a copyright license will not stop AI from being used to erode the wages and working conditions of creative workers. ... Our path to better working conditions lies through organizing and striking, not through helping our bosses sue other giant multinational corporations for the right to bleed us out." – @pluralistic