jarfil , (edited )
@jarfil@beehaw.org

Can LLMs Really Reason and Plan?

do LLMs generate their output through a logical process?

Shifting goalposts. I claimed that an LLM performs a single reasoning iteration per prompt; both "planning" and a "logical process" require multiple iterations. Check Auto-GPT for an example of that.

PS: to be more precise, an LLM has a capacity for self-reflection bounded by its number of attention heads, which can easily surpass the single-iteration reasoning capacity of a human, but it still requires multiple iterations to form a plan or follow a reasoning path.
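A minimal sketch of what I mean by "multiple iterations": each prompt/response round is one reasoning pass, and a plan only emerges from looping those passes while feeding the output back in. `llm_step` here is a hypothetical stand-in; a real agent loop (Auto-GPT style) would call an actual model at that point.

```python
# Hypothetical sketch: planning as repeated single-pass "reasoning iterations".
def llm_step(context: list[str]) -> str:
    # Stand-in for one prompt/response round. A real system would send
    # `context` to a model and return its next proposed action.
    return f"step {len(context)}"

def plan(goal: str, max_iters: int = 3) -> list[str]:
    # The plan is built up across iterations; no single call produces it.
    context: list[str] = [goal]
    for _ in range(max_iters):
        context.append(llm_step(context))  # one iteration = one reasoning pass
    return context[1:]

print(plan("make tea"))  # three entries, one per iteration
```

The point of the loop: each iteration only has the accumulated context to work from, so the "planning" lives in the loop, not in any single forward pass.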
