Lixen,

Ascribing reasoning and thinking to an LLM quickly becomes a semantic discussion. Hallucinations are a consequence of parametrizing a model in a way that allows more freedom and introduces more randomness, but deep down the results still come from a statistical derivation.
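To make the "freedom and randomness" point concrete, here is a minimal sketch of temperature-scaled sampling, the standard decoding knob I'm alluding to (the function name and numbers are illustrative, not from any particular model): raising the temperature flattens the output distribution, so lower-probability tokens get picked more often.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample an index from logits via temperature-scaled softmax.

    Higher temperature flattens the distribution, giving
    lower-probability tokens more of a chance -- the extra
    "freedom" that can surface fluent-but-wrong continuations.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# As temperature -> 0 the choice collapses to the argmax (greedy decoding);
# at high temperature the distribution approaches uniform.
logits = [2.0, 1.0, 0.5]
print(sample_token(logits, temperature=0.01, rng=random.Random(0)))
```

Nothing in that loop resembles reasoning; it's a weighted dice roll over a learned distribution.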

The vastness of the training data makes the whole system a big black box, impossible for anyone to really grasp, so of course it is nearly impossible for us to explain all its behaviors in detail and show data to back up our hypotheses. That still doesn't mean there's any real logic or thinking going on.

But again, it is difficult to really discuss the topic without clear semantics defining what we mean by "thinking". Your definition might differ from mine in a way that means we will never agree on the subject.
