AeonCypher, @AeonCypher@lgbtqia.space

LLMs are not going to make AGI, ever.

The performance of LLMs scales with the logarithm of their parameter count, while their resource use scales roughly linearly with parameter count.
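
A toy sketch of that claim, using assumed functional forms (log performance, linear cost) rather than empirical fits, just to show the shape of the tradeoff:

```python
import math

def performance(params: float) -> float:
    # Assumption from the post: benchmark performance grows with log(params).
    return math.log10(params)

def resource_cost(params: float) -> float:
    # Assumption from the post: compute/energy cost grows linearly with params.
    return params

for params in (1e9, 1e12, 1e15):
    print(f"{params:.0e} params -> performance {performance(params):.1f}, "
          f"relative cost {resource_cost(params):.0e}")
```

Each 1000x jump in cost buys the same fixed bump in the toy performance score, which is the core of the argument.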

The human brain is roughly equivalent to a 60-quintillion-parameter model running at 80 Hz, yet it consumes about as much energy as a single lightbulb.

GPT-5 is likely a roughly 1-trillion-parameter model, and it requires massive amounts of energy.
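
Putting the two numbers side by side, a quick back-of-the-envelope calculation (both figures are the post's estimates, not measured values):

```python
import math

brain_params = 6e19   # post's estimate: "60 quintillion" parameters
gpt5_params = 1e12    # post's estimate: ~1 trillion parameters

gap = brain_params / gpt5_params
print(f"Parameter gap: {gap:.0e}x")                       # 6e+07, ~60 million times
print(f"log10 performance gain from closing it: {math.log10(gap):.1f}")  # ~7.8
```

If resource use really is linear in parameter count, closing that gap by brute-force scaling would multiply energy use roughly 60-million-fold for only a modest log-scale performance gain.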

LLMs are amazing, but they're nowhere near approaching AGI.

https://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/
