Like many people are saying, that's 2.6 W per core, which is actually very good.
My laptop runs an Intel Core i7-9750H with a TDP of 45 W, which doesn't seem as bad until you realize it only has 6 cores, meaning it uses 7.5 W per core. Multiply that by this CPU's core count and you get 1440 W if it were as inefficient as my Intel chip. And that's a conservative estimate, since it assumes my CPU is as efficient as Intel claims; it might actually be much worse, considering how hot this laptop gets, especially when gaming (though I don't game on this laptop anymore for that reason).

Bottom line: this is a very efficient CPU, but it's also an insanely overpowered CPU that most people will not use or need. Only datacenters and extremely dedicated power users need a CPU anywhere near this powerful.
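The arithmetic above, as a quick Python sketch (the 500 W TDP is the headline figure; the 45 W / 6-core numbers are from my laptop):

```python
# Back-of-envelope per-core efficiency comparison, using the figures
# from this thread (45 W / 6 cores for the i7-9750H, 192 cores / 500 W
# for the EPYC). TDPs are vendor ratings, not measured draw.

LAPTOP_TDP_W = 45   # Intel's rated TDP for the i7-9750H
LAPTOP_CORES = 6
EPYC_TDP_W = 500
EPYC_CORES = 192

laptop_w_per_core = LAPTOP_TDP_W / LAPTOP_CORES      # 7.5 W/core
epyc_w_per_core = EPYC_TDP_W / EPYC_CORES            # ~2.6 W/core
scaled_laptop_w = laptop_w_per_core * EPYC_CORES     # 1440 W

print(f"Laptop: {laptop_w_per_core:.1f} W/core")
print(f"EPYC:   {epyc_w_per_core:.2f} W/core")
print(f"192 cores at laptop efficiency: {scaled_laptop_w:.0f} W")
```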
2.6 Watt per core, that’s pretty efficient. My desktop PC’s CPU uses twice as much.
Does it also have a base clock of 2250 MHz?
Base clock means nothing these days; every CPU in existence (even server parts!) will boost to something higher and downclock to 800 MHz.
No, but considering how rarely I use the full power of my CPU, I doubt it would make a big difference. Which means that I could probably halve the TDP of my CPU, but “about the same efficiency as my throttled desktop CPU” is still pretty alright for a server.
384 threads is fucking bizarre!
I would like to see the fans needed to cool this in a 2U or even 1U case. They must be comparable to a leaf blower…
I mean, you can dissipate all the heat of a 2 kW electric space heater with a single fan. 500 W isn't that much compared to GPU farms with a bunch of GPUs in a single rack slot.
The difference is temperature; it's easier to cool things when there is a large temperature difference.
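A rough sketch of why the temperature difference matters, assuming standard air properties; the wattages match the comments above, but the temperature rises are illustrative guesses, not from any spec sheet:

```python
# Airflow needed to carry a given heat load at a given air temperature
# rise. Assumes sea-level air: density ~1.2 kg/m^3, cp ~1005 J/(kg*K).

AIR_DENSITY = 1.2      # kg/m^3
AIR_CP = 1005.0        # J/(kg*K)
M3S_TO_CFM = 2118.88   # 1 m^3/s expressed in cubic feet per minute

def airflow_cfm(watts: float, delta_t_k: float) -> float:
    """Volumetric airflow (CFM) to move `watts` of heat at `delta_t_k` rise."""
    mass_flow = watts / (AIR_CP * delta_t_k)       # kg/s of air
    return (mass_flow / AIR_DENSITY) * M3S_TO_CFM

# A space heater can run its element glowing hot (huge delta-T);
# a CPU heatsink only gets a modest rise over ambient.
print(airflow_cfm(2000, 100))  # ~35 CFM for a 2 kW heater at 100 K rise
print(airflow_cfm(500, 20))    # ~44 CFM for a 500 W CPU at 20 K rise
```

Despite having four times the power, the heater needs less airflow, because it can afford a much larger temperature difference.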
Considering that any server with one of these is likely to have two of them, that’s quite a lot of heat to dissipate.
A CPU also generally needs to be kept cooler than a space heater.
My home server has a row of surprisingly powerful and small fans, and that's just for a dual Xeon system that's a few years old. I have never personally been (knowingly) near a GPU farm, but I have been behind a crazy-ass router (a Cisco ASR 9000 something) that's 10+ U. The airflow behind that router is crazy.
x86 was a mistake
Divide the TDP by the number of cores. You're getting 2.6 watts per core, making it an extremely efficient CPU.
While that is true, it definitely can’t compete with ARM, even when that ARM CPU is on a 5nm node:
And if you think 192 cores is a lot, Ampere has also announced a 512 core CPU.
ah ur right
I got a 1000w PSU so I could run something like this!
…I will never have a processor like this.
When will we see water heaters with a computer rack mount?
That’s… not a bad idea.
That’s not a CPU, that’s a fucking space heater.
Only half of a toaster (in the US at least)
It’s a nice easy unit to compare against because all of the ones I’ve seen draw basically exactly 1000 watts
It's also less than double what my desktop draws (11700K, RTX 3060), and those aren't particularly demanding components, and I only get 16 threads. That's basically exactly 6× more power draw per core, although the cores themselves perform differently, of course.
It is slightly silly to have that many cores, though. I guess the main reason not to just use a GPU would be that PCIe doesn't have enough bandwidth, or that you need a ton of RAM? For a pure compute application, I don't think there are many cases where a GPU isn't the obvious choice when you're going to have almost 400 threads anyway. An A100 has half the TDP, and there's no way the EPYC chip can come close in performance (even if you assume the CPU can use AVX-512 while the GPU can't use its tensor cores, the EPYC having about a third of the memory bandwidth isn't exactly encouraging about the level of peak compute they're expecting).
Maybe it's meant for hyperscalers who will rent it out to customers in smaller units, say 16-, 32-, and 64-core instances.
I stopped reading your comment when you suggested using "a toaster" as a unit for 1000 W. Most 'murican thing ever. Why don't you also measure the size of the CPU in buses?
Intuition
Another application for these things is virtualization. Throw in 3 × 4 TB NVMe SSDs, 384 GB of memory, and a 25G NIC.
Off a single unit, you would be able to sell 12 VPS instances, each with 16 cores, 32 GB of memory, 1 TB of storage, and a guaranteed 1.5 Gbps link.
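Quick sanity check in Python that 12 instances of that size actually fit in one box (host and instance specs are the numbers from the comments above):

```python
# Verify that 12 VPS instances fit in a single host with
# 192 cores, 384 GB RAM, 3x 4 TB NVMe, and a 25G NIC.

HOST = {"cores": 192, "ram_gb": 384, "storage_tb": 3 * 4, "net_gbps": 25}
VPS = {"cores": 16, "ram_gb": 32, "storage_tb": 1, "net_gbps": 1.5}

instances = 12
for key in HOST:
    used = instances * VPS[key]
    assert used <= HOST[key], f"oversubscribed on {key}"
    print(f"{key}: {used}/{HOST[key]}")
```

Cores, RAM, and storage come out exactly full, and the 12 × 1.5 Gbps links total 18 Gbps, leaving headroom on the 25G NIC.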
Why do CPUs this power-hungry exist? I can barely stand the thought that my modern laptop sucks up to 40 W under heavy loads.
192 cores. That’s not for home use.
That’s an EPYC. It’s a datacenter CPU, and it’s priced accordingly. Nobody uses these at home outside of hardcore homelab enthusiasts with actual rack setups.
That’s 2.6 Watt per core, about half of what my desktop PC’s CPU uses. And yeah, that’s not for home users.
Like others have noted, it's 2-3 watts per core, which is pretty incredible given everything extra the CPU does/supports and the inherent overhead of it not being one big ol' monolithic chip.
Specifically, they support substantially more memory, with 12 channels compared to the typical 2, plus 128(+) lanes of PCIe 5 connectivity!
Because these systems are so dense, data centres can condense N servers into just a couple, and then you only need one set of ancillary components like network cards and fans.
So, they’re significantly more efficient from a few perspectives.
Okay, but I got a laptop and I miss my desktop because my room is perpetually cold now. My desktop used to keep my room warm.