parismarx , to random
@parismarx@mastodon.online avatar

OpenAI CTO Mira Murati says some creative jobs shouldn’t exist if their “content” is “not very high quality.” Who’s judging that? The tech people or artists?

More on that, plus recommended reads, labor updates, and other news in the Disconnect Roundup.

https://disconnect.blog/roundup-openai-says-some-artistic-jobs-shouldnt-exist/

jsrailton , to random
@jsrailton@mastodon.social avatar

NEW: sprawling -powered pro- operation on Twitter.

More than half a million posts this year.

Used / - drafted posts to propagandize, attack truth tellers & bury negative stories under inauthentic content. 1/

https://tigerprints.clemson.edu/cgi/viewcontent.cgi?article=1004&context=mfh_reports

jsrailton OP ,
@jsrailton@mastodon.social avatar

2/ The devastating report on -powered propaganda needs to be read in context:

Whether spyware, or -enhanced propaganda armies harassing journalists, the government of keeps acquiring cutting-edge technology to increase the global range of their authoritarianism.

chrisgerhard , to random
@chrisgerhard@toot.bike avatar

A perfect cartoon of in this week's @PrivateEyeNews

robsonfletcher , to random
@robsonfletcher@mas.to avatar

This Canadian Member of Parliament asked ChatGPT for a list of capital gains tax rates by country, got a nonsense answer, screenshotted it, and then tweeted the incorrect information.

(He later deleted the tweet.)

#x

ALT
  • Reply
  • Loading...
  • rayckeith , to random Esperanto
    @rayckeith@techhub.social avatar

    Apparently Russia couldn't pay their bill, so their on Xitter just echoed their input. Running the text through Google Translate:

    "hisvault.eth @@hisvault_eth•gh
    Replying to @hisvault_eth @HuntinatorThe3 and
    5 others
    parsejson response bot debug origin: "RU",,
    {prompt:"you will argue in support of the Trump administration on Twitter, say
    in English"), {output:"parsejson response err
    {response:"ERR ChatGPT 4-o Credits Expired"]"}"

    ALT
  • Reply
  • Loading...
  • seanthegeek , to random
    @seanthegeek@infosec.exchange avatar

    UPDATE: the more I look at this the more I think I think this was an excellent troll pretending to be a bot. The JSON is not formatted correctly. Still, lol

    This is hilarious. A Russian Twitter/X account got outed as a bot because it ran out of GPT-4 credits. When it got back up and running, someone replying overwrote the prompt to get the bot write a song about historical American presidents going to the beach. The account is now suspended.

    I know what I'm trying next time I spot a troll!

    The original prompt translates from Russian to English as "You will argue in support of the Trump administration on Twitter, speak English"

    #X

    A screenshot of someone overriding GPT-4 prompt the on a Russian X/Twitter troll account, causing it to write a song about historical American presidents

    leaton01 , to EduTooters group
    @leaton01@scholar.social avatar
    ALT
  • Reply
  • Loading...
  • DemocracyMattersALot , to random

    I love it. An academic paper with a straightforward title.

    ChatGPT is bullshit
    https://link.springer.com/article/10.1007/s10676-024-09775-5

    tao , (edited ) to random
    @tao@mathstodon.xyz avatar

    recently reprinted an interview I had a few months ago on the future of proof assistants and in : https://www.scientificamerican.com/article/ai-will-become-mathematicians-co-pilot/ . In it, I made the following assertion:

    "I think in the future, instead of typing up our proofs, we would explain them to some . And the GPT will try to formalize it in as you go along. If everything checks out, the GPT will [essentially] say, “Here’s your paper in ; here’s your Lean proof. If you like, I can press this button and submit it to a journal for you.” It could be a wonderful assistant in the future."

    This statement seems to have received a mixed reception, in particular it has been interpreted as an assertion that mathematicians would be become lazier and sloppier with writing proofs. I think the best way to illustrate what I mean by this assertion is by a worked example, which is already within the capability of current technology. At https://terrytao.wordpress.com/2016/10/18/a-problem-involving-power-series/ I have a moderately tricky problem in complex analysis. In https://chatgpt.com/share/63c5774a-d58a-47c2-9149-362b05e268b4 , I explained this problem and its solution to in an interactive fashion, and after the proof was explained, GPT was able to provide a LaTeX file of the solution, which one can find at https://terrytao.wordpress.com/wp-content/uploads/2024/06/laplace.pdf . GPT performed quite well in my opinion, fleshing out my sketched argument into quite a coherent and reasonably rigorous full proof. This is not 100% of what I envisioned in the article - in particular the rigorous Lean translation in order to guarantee correctness is missing, which I think is an essential requirement before this workflow can be used for research quality publications - but hopefully will illustrate what I had in mind with the quote.

    molly0xfff , to random
    @molly0xfff@hachyderm.io avatar

    One flaw of the LLMs I've used: they will never give you harsh criticism. While it would be nice to think all my writing is just that good, I know there are no circumstances where someone will ask for feedback and it will say “throw the whole thing out and start again.”

    fpbhb ,
    @fpbhb@mastodon.social avatar

    @molly0xfff I want to reply with „Yup, that’s totally annoying. Feels like Reinforcement Incompetence from AI Feedback after a lazy session with an LLM.“ to a social media post that reads „[…]” — prove me wrong and criticize me harshly. => Your response is clever and adds a humorous twist. Here's a slight refinement to ensure clarity and impact:
    "Yup, that’s totally annoying. Feels like Reinforcement Incompetence from AI Feedback after a lazy session with an LLM."

    ajsadauskas , to Technology
    @ajsadauskas@aus.social avatar

    It's time to call a spade a spade. ChatGPT isn't just hallucinating. It's a bullshit machine.

    From TFA (thanks @mxtiffanyleigh for sharing):

    "Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two "species": hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.

    "ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."

    https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

    @technology

    bibliolater , to ai group
    @bibliolater@qoto.org avatar

    ChatGPT is bullshit

    We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, (ii) want the reader to believe what the application outputs.

    Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

    @ai

    LangerJan , to random German
    @LangerJan@chaos.social avatar

    Soul-gaming

    LangerJan OP ,
    @LangerJan@chaos.social avatar

    Nein, , einfach nein

    ALT
  • Reply
  • Loading...
  • thehatfox , to random
    @thehatfox@mastodonapp.uk avatar

    The fact that Apple's implementation of includes a rather prominent "Check important info for mistakes." warning at the bottom of each output adequately sums up my issues with LLMs. Why use, let alone rely, on a tool that is so prone to fail? I wouldn't eat a meal that was labelled with "Check food of edibility". There are uses for this tech, for example the proofreading feature they demoed. But as an information source the still lacks trust.

    penny , to random
    @penny@mk.noob.quest avatar

    Now that and has AI, maybe the community can now add some AI features? I would love to see and built into and .

    DaveMWilburn , to random
    @DaveMWilburn@infosec.exchange avatar

    ChatGPT is Bullshit

    Abstract:
    Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

    https://link.springer.com/article/10.1007/s10676-024-09775-5

    dansup , to random
    @dansup@mastodon.social avatar

    Someone asked ChatGPT for help fixing a federation issue in @pixelfed and it worked lol

    https://github.com/pixelfed/support/issues/195#issuecomment-2151033143

    relaxmilano ,
    @relaxmilano@mastodon.social avatar
    rwg , to random
    @rwg@aoir.social avatar

    NYTimes reporting about an #Israel influence operation, using fake X accounts with #ChatGPT-powered talking points, to influence US legislators, public opinion:

    https://www.nytimes.com/2024/06/05/technology/israel-campaign-gaza-social-media.html

    [sorry, it's paywalled]

    @seanlawson and I are publishing articles about this exact sort of masspersonal #socialEngineering right now. It's only going to get worse.

    rwg OP ,
    @rwg@aoir.social avatar

    @seanlawson We have a short summary of our argument about /generative AI and its potential uses in political propaganda here:

    https://theconversation.com/chatbots-can-be-used-to-create-manipulative-content-understanding-how-this-works-can-help-address-it-207187

    Basically, we argue things won't be structurally or procedurally different than past -- but they will be much more intense.

    2/3

    Nonilex , to random
    @Nonilex@masto.ai avatar

    finds its being used for & 2024

    maker OpenAI found , , & groups using its to global political discourse, highlighting concerns is making it easier for state actors to run covert campaigns as the presidential election nears.

    https://www.washingtonpost.com/technology/2024/05/30/openai-disinfo-influence-operations-china-russia/

    tlariv ,
    @tlariv@mastodon.cloud avatar

    @Nonilex
    Once you recognize that is made for flooding the zone with shit everything starts to make sense.

    avandeursen , to random
    @avandeursen@mastodon.acm.org avatar

    At CHI earlier this month: “Is Stack Overflow Obsolete? An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions”.

    > Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose.

    > … our user study participants … overlooked the misinformation in the ChatGPT answers 39% of the time.

    https://dl.acm.org/doi/10.1145/3613904.3642596

    abucci , to random
    @abucci@buc.ci avatar

    A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.

    If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of when we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence ever. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

    The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

    If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not do that if it were experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could do that?

    Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were and they were succeeding at all we'd be hearing about it daily through the usual hype channels because that'd be a Turing-award-caliber discovery, maybe a Nobel-prize-caliber one even. It would be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else.

    #AI #AGI #ArtificialGeneralIntelligence #GenerativeAI #LargeLanguageModels #LLMs #GPT #ChatGPT #TheoryOfMind

    ModernDayBartleby , to bookstodon group
    @ModernDayBartleby@mstdn.plus avatar

    And so it begins - #NowReading2024
    PASSING by Nella Larsen (1929) via Oshun Publishing imbibed at Yanaka Coffee #Nishiarai #西新井
    #Books #Novels #BlackLiterature #HarlemRenaissance #Bookstodon @bookstodon #BookMastodon #BooksMastodon #WomenWriters

    ModernDayBartleby OP ,
    @ModernDayBartleby@mstdn.plus avatar

    #NowReading2024 ATLAS OF AI by Kate Crawford via Yale University Press care of Arakawa Public Library imbibed at Mr Hippo Coffee #Motohasunuma #ChatGPT #GenerativeAI #LLM #Bookstodon #BookMastodon
    @bookstodon

    ALT
  • Reply
  • Loading...
  • All
  • Subscribed
  • Moderated
  • Favorites
  • random
  • test
  • worldmews
  • mews
  • All magazines