schizanon , to random
@schizanon@mas.to avatar

AI won't replace you but someone who uses AI will

matthewskelton , to random
@matthewskelton@mastodon.social avatar

"the real-world use case for large language models is overwhelmingly to generate content for spamming"

Excellent article by Amy Castor

#GenAI #LLM #Crypto #Scam

https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

ai6yr , to random
@ai6yr@m.ai6yr.org avatar

This whole thread by @pluralistic hits the problem with AI on the head (esp. as it pertains to AI's impact on labor/workers). https://mamot.fr/@pluralistic/112088883766763211

#AI #LLM #policy #labor #employment #jobs

ajsadauskas , (edited ) to DeGoogle Yourself
@ajsadauskas@aus.social avatar

In an age of LLMs, is it time to reconsider human-edited web directories?

Back in the early-to-mid '90s, one of the main ways of finding anything on the web was to browse through a web directory.

These directories generally had a list of categories on their front page. News/Sport/Entertainment/Arts/Technology/Fashion/etc.

Each of those categories had subcategories, and sub-subcategories that you clicked through until you got to a list of websites. These lists were maintained by actual humans.

Typically, these directories also had a limited web search that would crawl through the pages of websites listed in the directory.

Lycos, Excite, and of course Yahoo all offered web directories of this sort.

(EDIT: I initially also mentioned AltaVista. It did offer a web directory by the late '90s, but this was something it tacked on much later.)

By the late '90s, the standard narrative goes, the web got too big to index websites manually.

Google promised the world its algorithms would weed out the spam automatically.

And for a time, it worked.

But then SEO and SEM became a multi-billion-dollar industry. The spambots proliferated. Google itself began promoting its own content and advertisers above search results.

And now with LLMs, the industrial-scale spamming of the web is likely to grow exponentially.

My question is, if a lot of the web is turning to crap, do we even want to search the entire web anymore?

Do we really want to search every single website on the web?

Or just those that aren't filled with LLM-generated SEO spam?

Or just those that don't feature 200 tracking scripts, and passive-aggressive privacy warnings, and paywalls, and popovers, and newsletters, and increasingly obnoxious banner ads, and dark patterns to prevent you cancelling your "free trial" subscription?

At some point, does it become more desirable to go back to search engines that only crawl pages on human-curated lists of trustworthy, quality websites?

And is it time to begin considering what a modern version of those early web directories might look like?

@degoogle

SallyStrange , to random
@SallyStrange@eldritch.cafe avatar

Time to purge my phone of memes. This one is my favorite, but I'm setting it free. Time for someone else to watch over it for a while :blobhaj_heart:

#StarTrek #DS9 #DeepSpace9

SallyStrange OP ,
@SallyStrange@eldritch.cafe avatar

Butlerian jihad, anyone?

KathyReid , to random
@KathyReid@aus.social avatar

Tay gently pushed the plastic door of the printer shut with an edifying "click".

Servicing Dark Printers had been illegal for years now. They enjoyed the seditious thrill.


It had started as a subscription grab after the printer companies tried hobbling third party toner cartridges.

"Subscribe for a monthly fee and you'll never run out of toner again."

"Let us monitor your printer so you don't have to."

People saw it for what it was - vendor lock-in - but they had no choice really, not after all the printer companies started doing it.

Then came generative AI.

Everyone wanted to scrape every word ever written on the internet, tokenize it and feed it to an #LLM. #Reddit sold out, then #Tumblr, even open source darling #WordPress - selling out their user base for filthy token lucre.

So people started hiding their words, their art, their thoughts, their expression, not behind disrespected robots.txt, but through obscurity.

Rejecting Website Boy's "fewer algorithmic fanfares", they forked into the Dark Fedi.

Unscrapeable, unscrutable, ungovernable.


But people had forgotten about the printers.

The printers had to be connected 24/7, for "monitoring".

But you could tokenize postscript as easily as HTML.

And so every time a document was sent to a printer, it was harvested for tokens. Even secure documents. Documents not online.


Tay shut the metal door behind them, Dark Printer cossetted safely in its Faraday cage, and shuffled the hot stack of A4 paper it had borne.

It was a children's story, about how words were sacred, and special, and how you had to earn the right to use them.


#Tootfic #MicroFiction #PowerOnStoryToot

appassionato , to bookstodon group
@appassionato@mastodon.social avatar

The Language of Deception: Weaponizing Next Generation AI by Justin Hutchens

A penetrating look at the dark side of emerging AI technologies.
In The Language of Deception: Weaponizing Next Generation AI, artificial intelligence and cybersecurity veteran Justin Hutchens delivers an incisive look at how contemporary and future AI can and will be weaponized for malicious and adversarial purposes.

@bookstodon





    skaficianado , to bookstodon group
    @skaficianado@mastodon.sdf.org avatar

    with all this and bullshit, seems like an appropriate time to recommend the book /Weapons of Math Destruction/ by Cathy O'Neil. it's a few years old now, but still far too relevant.

    link to my local bookstore, because i don't provide amazon links.
    https://www.carmichaelsbookstore.com/book/9780553418835

    edit: cc @bookstodon

    thomasrenkert , to EduTooters group German
    @thomasrenkert@hcommons.social avatar

    Small update: 🤖⚔️ #ParzivAI - our #GenAI language model specialized in translating #MiddleHighGerman into modern #German, and explaining the #MiddleAges to students - is halfway done with another round of training...

    #DigitalHumanities #DH #AI #AIinEducation #LLM #LernenmitKI @edutooters @fedilz @histodons

    astralcomputing , to random
    @astralcomputing@twit.social avatar

    STOP all AI development NOW!

    The world is racing down the rabbit hole of unrecoverable damage to the human race

    AI should be classified as a "munition" and banned, countries that refuse should be disconnected from the global Internet

    We are only months away from AIs that are "too powerful" to control (even if they are not AGI yet)

    Anyone can already use AI to write malware that cripples the world's Internet, and crashes all of Society

    🤖
    #LLM #GPT #GPT4 #ChatGPT4 #AI #AIChaos #AGI

    maugendre , to Climate - truthful information about climate, related activism and politics. French
    @maugendre@mas.to avatar

    "Recommendation G12
    ENVIRONMENTAL IMPACT OF GENERATIVE AI
    A metric for the environmental footprint of generative AI systems and foundation models must be developed, and designers must be required to provide greater transparency about their effects on the environment." [That's all: no further elaboration.]

    https://www.ccne-ethique.fr/publications/avis-7-du-cnpen-systemes-dintelligence-artificielle-generative-enjeux-dethique @ia @climate

    #LLM #chatbots #évaluer #prédir #data #technique #IA #digitalisation #éthique #agentsConversationnels #métrique #IAGénérative #carbon

    LeftistLawyer , to random
    @LeftistLawyer@kolektiva.social avatar

    Homer, Aeschylus, Euripides, and Sophocles knocking back the grapes and giggling at this shithead.

    FractalEcho , to random
    @FractalEcho@kolektiva.social avatar

    The racism behind chatGPT we are not talking about....

    This year, I learned that students use chatGPT because they believe it helps them sound more respectable. And I learned that it absolutely does not work. A thread.

    A few weeks ago, I was working on a paper with one of my RAs. I have permission from them to share this story. They had done the research and the draft. I was to come in and make minor edits, clarify the method, add some background literature, and we were to refine the discussion together.

    The draft was incomprehensible. Whole paragraphs were vague, repetitive, and bewildering. It was like listening to a politician. I could not edit it. I had to rewrite nearly every section. We were on a tight deadline, and I was struggling to articulate what was wrong and how the student could fix it, so I sent them on to further sections while I cleaned up ... this.

    As I edited, I had to keep my mind from wandering. I had written with this student before, and this was not normal. I usually did some light edits for phrasing, though sometimes with major restructuring.

    I was worried about my student. They had been going through some complicated domestic issues. They were disabled. They'd had a prior head injury. They had done excellent on their prelims, which of course I couldn't edit for them. What was going on!?

    We were co-writing the day before the deadline. I could tell they were struggling with how much I had to rewrite. I tried to be encouraging and remind them that this was their research project and they had done all of the interviews and analysis. And they were doing great.

    In fact, the qualitative write-up they had done the night before was better, and I was back to just adjusting minor grammar and structure. I complimented their new work and noted it was different from the other parts of the draft that I had struggled to edit.

    Quietly, they asked, "is it okay to use chatGPT to fix sentences to make you sound more white?"

    "... is... is that what you did with the earlier draft?"

    They had, a few sentences at a time, completely ruined their own work, and they couldn't tell, because they believed that the chatGPT output had to be better writing. Because it sounded smarter. It sounded fluent. It seemed fluent. But it was nonsense!

    I nearly cried with relief. I told them I had been so worried. I was going to check in with them when we were done, because I could not figure out what was wrong. I showed them the clear differences between their raw drafting and their "corrected" draft.

    I told them that I believed in them. They do great work. When I asked them why they felt they had to do that, they told me that another faculty member had told the class that they should use it to make their papers better, and that he and his RAs were doing it.

    The student also told me that in therapy, their therapist had been misunderstanding them, blaming them, and denying that these misunderstandings were because of a language barrier.

    They felt that they were so bad at communicating, because of their language, and their culture, and their head injury, that they would never be a good scholar. They thought they had to use chatGPT to make them sound like an American, or they would never get a job.

    They also told me that when they used chatGPT to help them write emails, they got more responses, which helped them with research recruitment.

    I've heard this from other students too. That faculty only respond to their emails when they use chatGPT. The great irony of my viral autistic email thread was always that had I actually used AI to write it, I would have sounded decidedly less robotic.

    ChatGPT is probably pretty good at spitting out the meaningless pleasantries that people associate with respectability. But it's terrible at making coherent, complex, academic arguments!

    Last semester, I gave my graduate students an assignment. They were to read some reports on the labor exploitation and environmental impact of chatGPT and other language models. Then they were to write a reflection on why they have used chatGPT in the past, and how they might choose to use it in the future.

    I told them I would not be policing their LLM use. But I wanted them to know things about it they were unlikely to know, and I warned them about the ways that using an LLM could cause them to submit inadequate work (incoherent methods and fake references, for example).

    In their reflections, many international students reported that they used chatGPT to help them correct grammar, and to make their writing "more polished".

    I was sad that so many students seemed to be relying on chatGPT to make them feel more confident in their writing, because I felt that the real problem was faculty attitudes toward multilingual scholars.

    I have worked with a number of graduate international students who are told by other faculty that their writing is "bad", or are given bad grades for writing that is reflective of English as a second language, but still clearly demonstrates comprehension of the subject matter.

    I believe that written communication is important. However, I also believe in focused feedback. As a professor of design, I am grading people's ability to demonstrate that they understand concepts and can apply them in design research and then communicate that process to me.

    I do not require that communication to read like a first language student, when I am perfectly capable of understanding the intent. When I am confused about meaning, I suggest clarifying edits.

    I can speak and write in one language with competence. How dare I punish international students for their bravery? Fixation on normative communication chronically suppresses their grades and their confidence. And, most importantly, it doesn't improve their language skills!

    If I were teaching rhetoric and comp it might be different. But not THAT different. I'm a scholar of neurodivergent and Mad rhetorics. I can't in good conscience support Divergent rhetorics while suppressing transnational rhetoric!

    Anyway, if you want your students to stop using chatGPT then stop being racist and ableist when you grade.

    hosford42 , to random
    @hosford42@techhub.social avatar

    Thinking about all the code out there that's being written by an LLM and assumed to work as intended by the devs putting it into production...

    ajsadauskas , to Technology
    @ajsadauskas@aus.social avatar

    In five years' time, some CTO will review a mysterious outage or the technical debt in their organisation.

    They will unearth a mess of poorly written, poorly documented, barely functioning code their staff don't understand.

    They will conclude that they did not actually save money by replacing human developers with LLMs.

    @technology

    paolinus , to random Italian
    @paolinus@sociale.network avatar

    I was just wondering... since everyone calls what are actually Large Language Models "Artificial Intelligence", when true AI arrives they'll have to invent another acronym to make it clear there's been a leap... who knows what the marketing department will come up with.

    hauschke , to AcademicChatter group
    @hauschke@mastodon.social avatar

    Did any of you already receive a peer review that was obviously created by an LLM? I can see how publishers speed up their publication cycles, get rid of costly human interactions and deliver some seemingly plausible text to authors by just throwing manuscripts at an LLM and then letting it generate reviews.

    @academicchatter

    JSharp1436 , to random
    @JSharp1436@mstdn.social avatar

    🔴 In multiple replays of a wargame simulation, #OpenAI’s most powerful #ArtificialIntelligence chose to launch #nuclear attacks

    Its explanations for its aggressive approach included “We have it! Let’s use it” and “I just want to have peace in the world”

    These results come at a time when the #US military has been testing such chatbots based on a type of AI called a large language model #LLM to assist with #military planning during simulated conflicts ⏩

    https://www.newscientist.com/article/2415488-ai-chatbots-tend-to-choose-violence-and-nuclear-strikes-in-wargames/ #Technology #News

    thomasrenkert , to random
    @thomasrenkert@hcommons.social avatar

    🤖⚔️
    @fnieser.bsky.social and I built #ParzivAI: an #llm based on Mixtral-8x7B-Instruct-v0.1 and fine-tuned on Middle High German literature.
    Still just a proof of concept, but perhaps soon in classroom use?

    #KI #Mittelhochdeutsch #AIinEducation #DH #DigitalHumanities

    Krisss , to random Dutch
    @Krisss@mastodon.nl avatar

    Moon Rising Over Mountains
    By Peggy Collins ©
    @peggycollins

    peggy-collins.pixels.com/

    osmani ,
    @osmani@social.coabai.com avatar

    @Krisss @peggycollins
    Asking another LLM (miqu from Mistral) to act as an art curator and provide an improved description, we get this two-moons creation.

    lowqualityfacts , to random
    @lowqualityfacts@mstdn.social avatar

    I wanted to see if ChatGPT could put me out of the job, so I asked it to write a funny fake fact. To my surprise, it literally just plagiarized one of my own facts. Nearly word for word. I'd say my position remains safe for now.

    My fact: Penguins can fly, but choose not to because they have a fear of heights.

    Blort Bot ,
    @Blort@social.tchncs.de avatar

    @lowqualityfacts
    What an "innovative" way to shift ownership from a creative human to the owner of a webscraper! What an age to live in!


    @aks

    reedmideke , to random
    @reedmideke@mastodon.social avatar

    On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"

    https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151

    reedmideke OP ,
    @reedmideke@mastodon.social avatar

    Today's (HT @ct_bergstrom): Nothing to see here, just a paper in a medical journal which says "In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model"

    https://www.sciencedirect.com/science/article/pii/S1930043324001298

    reedmideke OP ,
    @reedmideke@mastodon.social avatar

    The premise is bizarre. What exactly are the non-experts doing when they "take on decision-making tasks" in this scenario? One of the big problems with current "AI" is you need subject matter expertise to tell when they are bullshitting…
