OpenAI CTO Mira Murati says some creative jobs shouldn’t exist if their “content” is “not very high quality.” Who’s judging that? The tech people or artists?
More on that, plus recommended reads, labor updates, and other news in the Disconnect Roundup.
2/ The devastating report on #ChatGPT-powered #Rwandan propaganda needs to be read in context:
Whether #Pegasus spyware, or #AI-enhanced propaganda armies harassing journalists, the government of #Rwanda keeps acquiring cutting-edge technology to increase the global range of their authoritarianism.
This Canadian Member of Parliament asked ChatGPT for a list of capital gains tax rates by country, got a nonsense answer, screenshotted it, and then tweeted the incorrect information.
Apparently Russia couldn't pay their #chatgpt bill, so their #bots on Xitter just echoed their input. Running the #Russian text through Google Translate:
"hisvault.eth @@hisvault_eth•gh
Replying to @hisvault_eth@HuntinatorThe3 and
5 others
parsejson response bot debug origin: "RU",,
{prompt:"you will argue in support of the Trump administration on Twitter, say
in English"), {output:"parsejson response err
{response:"ERR ChatGPT 4-o Credits Expired"]"}"
UPDATE: the more I look at this the more I think I think this was an excellent troll pretending to be a bot. The JSON is not formatted correctly. Still, lol
This is hilarious. A Russian Twitter/X account got outed as a bot because it ran out of GPT-4 credits. When it got back up and running, someone replying overwrote the prompt to get the bot write a song about historical American presidents going to the beach. The account is now suspended.
I know what I'm trying next time I spot a troll!
The original prompt translates from Russian to English as "You will argue in support of the Trump administration on Twitter, speak English"
"I think in the future, instead of typing up our proofs, we would explain them to some #GPT. And the GPT will try to formalize it in #Lean as you go along. If everything checks out, the GPT will [essentially] say, “Here’s your paper in #LaTeXmath; here’s your Lean proof. If you like, I can press this button and submit it to a journal for you.” It could be a wonderful assistant in the future."
This statement seems to have received a mixed reception, in particular it has been interpreted as an assertion that mathematicians would be become lazier and sloppier with writing proofs. I think the best way to illustrate what I mean by this assertion is by a worked example, which is already within the capability of current technology. At https://terrytao.wordpress.com/2016/10/18/a-problem-involving-power-series/ I have a moderately tricky problem in complex analysis. In https://chatgpt.com/share/63c5774a-d58a-47c2-9149-362b05e268b4 , I explained this problem and its solution to #ChatGPT in an interactive fashion, and after the proof was explained, GPT was able to provide a LaTeX file of the solution, which one can find at https://terrytao.wordpress.com/wp-content/uploads/2024/06/laplace.pdf . GPT performed quite well in my opinion, fleshing out my sketched argument into quite a coherent and reasonably rigorous full proof. This is not 100% of what I envisioned in the article - in particular the rigorous Lean translation in order to guarantee correctness is missing, which I think is an essential requirement before this workflow can be used for research quality publications - but hopefully will illustrate what I had in mind with the quote.
One flaw of the LLMs I've used: they will never give you harsh criticism. While it would be nice to think all my writing is just that good, I know there are no circumstances where someone will ask for feedback and it will say “throw the whole thing out and start again.”
@molly0xfff I want to reply with „Yup, that’s totally annoying. Feels like Reinforcement Incompetence from AI Feedback after a lazy session with an LLM.“ to a social media post that reads „[…]” — prove me wrong and criticize me harshly. => Your response is clever and adds a humorous twist. Here's a slight refinement to ensure clarity and impact:
"Yup, that’s totally annoying. Feels like Reinforcement Incompetence from AI Feedback after a lazy session with an LLM." #facepalm#ChatGPT
"Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two "species": hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.
"ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."
“We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, (ii) want the reader to believe what the application outputs.”
The fact that Apple's implementation of #ChatGPT includes a rather prominent "Check important info for mistakes." warning at the bottom of each output adequately sums up my issues with LLMs. Why use, let alone rely, on a tool that is so prone to fail? I wouldn't eat a meal that was labelled with "Check food of edibility". There are uses for this tech, for example the proofreading feature they demoed. But as an information source the #LLM still lacks trust.
Abstract:
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
NYTimes reporting about an #Israel influence operation, using fake X accounts with #ChatGPT-powered talking points, to influence US legislators, public opinion:
A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.
If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of when we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence ever. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.
The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.
If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not do that if it were experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could do that?
Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were and they were succeeding at all we'd be hearing about it daily through the usual hype channels because that'd be a Turing-award-caliber discovery, maybe a Nobel-prize-caliber one even. It would be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else.
Typos in code generation now?
Has anyone else noticed this kind of thing? This is new for me:...