FeralRobots, @FeralRobots@mastodon.social

I get a lot of pushback when I admonish people to accurately describe what an #LLM is doing - I'm told 'that ship has sailed' or 'just deal with the fact that people say they think'.

It matters. It fucking matters. It matters because using the wrong words for it indicates that people think those "answers" are something that they're not - that they can never, ever be.

[srcs: https://bsky.app/profile/phyphor.one-dash.org/post/3knxrotc2k22x, https://bsky.app/profile/astrokatie.com/post/3k5kaswwgpv2u]

I don’t think it can be emphasized enough that large language models were never intended to do math or know facts; literally all they do is attempt to sound like the text they’re given, which may or may not include math or facts. They don’t do logic or fact checking — they’re just not built for that
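A minimal sketch of that point (my own illustration, not from the posters above), assuming GPT-2 loaded via the Hugging Face `transformers` library: the model only produces a probability distribution over next tokens and samples from it. There is no arithmetic and no fact lookup anywhere in the loop.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is 17 * 23? A:"
inputs = tokenizer(prompt, return_tensors="pt")

# generate() repeatedly picks a plausible-sounding next token;
# nothing here checks whether the continuation is arithmetically correct.
output = model.generate(**inputs, max_new_tokens=10, do_sample=True)
print(tokenizer.decode(output[0]))
```

Run it a few times and you get differently worded, confidently phrased continuations, whether or not any of them happen to contain 391.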
