Lemvi,

Ok, maybe I didn't make my point clear:
Yes, they can produce text in which they reason. However, that reasoning mimics the reasoning found in their training data. The arguments an LLM makes and the stances it takes will always reflect its training data; it cannot reason counter to that.

Train an LLM on a bunch of English documents and it will suggest nuking Russia. Train it on a bunch of Russian documents and it will suggest nuking the West. In both cases it has learned to "reason", but it can only reason within the framework it has learned.

Now, if you want to find a solution for world peace, I'm not saying that AI can't do that. I am saying that LLMs can't. They don't solve problems; they model language.
