AeonCypher (@AeonCypher@lgbtqia.space), to random

I got into an argument recently about whether or not #MooresLaw is dead.

It's not, but I'm going to explain exactly why it's not, and why there isn't even a foreseeable end to it on the horizon.

Now, Moore's Law, as originally stated, is "the number of transistors in an integrated circuit (IC) doubles about every two years".

Okay, so there's a problem here. By one definition, Moore's Law has been dead for a long time. See, when the Law was coined, an integrated circuit was always a single chip, transistors were always the same kind of silicon device, and they were always laid out in two dimensions.

This was largely true up until the Pentium (I'm sure some computer historian here can refer to earlier incarnations, but we'll go with what I know).

So, with the Pentium, the CPU was actually composed of multiple chips. Specifically, it had caching and control logic as separate chips that interacted with the main processor component (registers, instruction processing, etc.). This was unlike the 386, which had specialized parts of the chip, but it was all a single integrated circuit.

So, here we already run into a question: "Do the extra transistors on a Pentium 'count' when talking about Moore's Law?" This problem gets even worse when we start running into multi-core CPUs, and modern CPUs contain multiple independent processors with often radically different architectures. It's not clear what constitutes an "integrated circuit", and to some extent it doesn't matter.

This gets worse as we go along. We could define Moore's Law as transistor density. However, we long ago hit the theoretical limit of 2D silicon. Powering a modern computer on 2D silicon would require chips that violate the uncertainty principle. Likewise, we know that there's a limit to even 3D transistors, and we're approaching that limit on the x86 architecture.

From the perspective of computer science, none of this matters. To a hardware engineer it does, but computer scientists care about Ops. Classically we've measured FLOPS (Floating-point Operations Per Second); more recently we care about TOPS (Tera Operations Per Second), which counts literally any operation at all. We could also talk about "BOPS" (Binary Operations Per Second). What we actually care about is FOPS (Functional Operations Per Second).

A theoretical FOP would be an operation that meets the demand of a programmer. It's thus impossible to define precisely. We could, in theory, run a ton of random code and measure relative performance to create a FOPS metric, but we know for a fact that it would be very highly correlated with FLOPS, TOPS, and BOPS.
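Just to make that concrete, here's a minimal Python sketch of the idea (my own illustration, with made-up names like flops_sample and fops_sample, not a real benchmark suite). It times a purely floating-point loop next to a loop of everyday "functional" operations and reports each as operations per second; the point is only that the two kinds of throughput tend to track each other.

```python
# Minimal sketch of a crude "FOPS vs FLOPS" comparison (illustration only).
import time

def flops_sample(n=1_000_000):
    """Roughly n floating-point multiply-adds."""
    x = 1.0001
    acc = 0.0
    for _ in range(n):
        acc += x * x          # one multiply + one add per iteration
    return acc

def fops_sample(n=1_000_000):
    """Roughly n 'functional' operations: dict lookups, comparisons, appends."""
    d = {i: i * 2 for i in range(100)}
    out = []
    for i in range(n):
        v = d[i % 100]        # lookup
        if v > 50:            # comparison
            out.append(v)     # append
    return len(out)

def ops_per_second(fn, n=1_000_000):
    start = time.perf_counter()
    fn(n)
    return n / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"~FLOPS: {ops_per_second(flops_sample):.3e}")
    print(f"~FOPS:  {ops_per_second(fops_sample):.3e}")
```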

So, if you look at the history of FLOPS and TOPS, you find something interesting. Locally, you find variations, but we've had FOPS doubling in under two years since well before the integrated circuit was invented. Let's call this Moore's Law Prime.
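To put numbers on that doubling claim, here's the back-of-the-envelope model (just the exponential itself, not measured data):

```python
# If capability doubles every 2 years, growth over t years is 2 ** (t / 2).
def growth(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

print(growth(4))    # 4.0      -> two doublings in four years
print(growth(20))   # 1024     -> ~1000x over two decades
print(growth(60))   # ~1.07e9  -> roughly a billion-fold since the 1960s
```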

So, before I continue, I want you to understand that there is a very physical reason for Moore's Law, and it's not the cynical "business pressure to validate it". It is because the last generation's chips are used to create the next generation of chips. Meaning that every generation of chip is twice as hard to make as the last, but we make it with tools that are three times as powerful.

We have seen x86 performance increases slow down year over year. However, at the same time we've seen a huge explosion in the performance of ARM and NVIDIA chips.

Simultaneous to that, we've seen the cost of compute plummet, both in dollars and in watts per operation. So, with the exception of minor year-over-year variations, we have absolutely no reason to believe Moore's Law Prime is dead.

Furthermore, there is no established theoretical limit to Moore's Law Prime, except at the density of a black hole. We have theoretical models showing that light-based computers could easily reach zettascale computing, because every wavelength of light could be a separate processing stream within a single white beam. We don't know what we don't know in terms of possible models of computation.

There is the comment from Jensen Huang declaring Moore's Law dead. However, what you're actually getting is a CEO making a wild claim to justify the company's rent-seeking prices for its compute. NVIDIA is so far ahead of everyone else in terms of AI compute that it is intentionally engaging in monopolistic pricing until the rest of the industry catches up.

(Yes, they have a monopoly. Their chips are capable of 10x the TOPS of their competitors.)

I've been hearing for decades about how Moore's Law is almost dead, or even that it's already dead. It's all complete and utter nonsense. In the last four years (two doublings) we've gone from GPT-3 running on a supercomputer to AIs almost as capable running on a tablet. We've gone from graphics cards that were good enough to render a scene to ray tracing and DLSS super resolution. Phones can now run 4K games for hours. There's been plenty of change and growth in the last four years.

ChrisMayLA6 (@ChrisMayLA6@zirk.us), to random

Despite rumours of its demise, it would seem that #MooresLaw still holds.



