emilymbender (@emilymbender@dair-community.social)

Big Tech likes to push the trope that things are moving and changing too quickly for regulators to ever keep up --- better (in their view) to just let the innovators innovate. This is false: many of the issues remain stable over quite some time. Case in point: here's me 5 years ago pointing out that large language models shouldn't be used as sources of information about the world, and that doing so poses risks to the information ecosystem:

https://x.com/emilymbender/status/1766634514946945414?s=20

marshray (@marshray@infosec.exchange)

@emilymbender
@futurebird
Other things in my lifetime I've been told "shouldn't be used as sources of information":

  • Social media
  • Wikipedia
  • Web search engines
  • YouTube
  • The Internet
  • Web pages
  • Anything you see on TV or film
  • Anything from a politically affiliated source
  • Anything from an astronaut
  • Anything from a Freemason
  • Anything from an interested party
  • Anything from a detached academic (particularly economists)
  • Anything from a corporation
  • Anything from any elected official
  • Anything from any government agency
  • Anything from any Western medicine doctor or Big Pharma
  • Anything from an advocate of [economic system]
  • Anything from a [gender]
  • Anything from a [race]
  • Anything from a [nationality]
  • Anything from a believer of [specific religion]
  • Anything not in [ancient text]
  • Anything from a believer of any religion
  • Anything from an atheist
  • Everything you read
  • Everything you hear

The point here is that such advice is generally non-actionable, and that people are almost always better served by practical risk- and harm-reduction strategies than by abstinence-only advocacy.

futurebird (@futurebird@sauropods.win)

@marshray @emilymbender

Actions:

- Do not display AI responses at the top of search results as if they were the definitive answer to the query.
- Demote pages that use LLM-generated content in search rankings and recommendation algorithms (see the sketch after this list).
- Refrain from integrating AI responses for content questions into company chatbots.

There are a lot of ways this is actionable. They're not often things individuals have control over, but this tech is being injected into all sorts of places where it doesn't belong.
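As a rough illustration of the second action, here is a minimal sketch of how a ranking pipeline could demote results flagged as LLM-generated. Everything in it is hypothetical: the SearchResult fields, the llm_generated flag (which assumes some upstream classifier exists), and the LLM_DEMOTION factor are illustrative names, not any real search engine's API, and reliably detecting LLM-generated content is its own open problem.

```python
from dataclasses import dataclass


@dataclass
class SearchResult:
    """Hypothetical search result record; field names are illustrative only."""
    url: str
    relevance: float     # base relevance score from the ranker
    llm_generated: bool  # flag assumed to come from an upstream classifier


# Made-up tuning knob: multiply the score of flagged pages by this factor.
LLM_DEMOTION = 0.3


def ranked(results: list[SearchResult]) -> list[SearchResult]:
    """Order results by relevance, demoting pages flagged as LLM-generated."""
    def score(r: SearchResult) -> float:
        return r.relevance * (LLM_DEMOTION if r.llm_generated else 1.0)
    return sorted(results, key=score, reverse=True)


if __name__ == "__main__":
    demo = [
        SearchResult("https://example.org/human-written", 0.80, False),
        SearchResult("https://example.org/llm-spam", 0.95, True),
    ]
    for r in ranked(demo):
        print(r.url)  # human-written page ranks first despite lower raw score
```

The point of the sketch is only that "demote" is a concrete, implementable ranking decision, not a vague aspiration; the hard part in practice is the upstream detection step, which is assumed away here.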
