• Draconic NEO@lemmy.dbzer0.com · 1 day ago

    Like many people are saying, that’s 2.6 W per core, which is actually very good.

    My laptop runs an Intel CPU with a 45 W TDP, which doesn’t sound as bad until you realize it only has 6 cores, meaning 7.5 W per core. Multiply that by this CPU’s core count and you get 1440 W if it were as inefficient as my Intel chip. And that’s a conservative estimate: it assumes my CPU is as efficient as Intel claims the Core i7-9750H is, and it might actually be much worse, considering how hot this laptop gets, especially when gaming (though I don’t game on this laptop anymore for that reason).
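    A quick sanity check of that math (a minimal Python sketch; the 192-core count and ~500 W TDP are the figures from this thread, and the laptop numbers are Intel’s published specs for the i7-9750H):

    ```python
    # Back-of-the-envelope per-core power comparison.
    epyc_tdp_watts = 500      # TDP figure discussed in the thread
    epyc_cores = 192          # 384 threads / 2

    laptop_tdp_watts = 45     # Intel's TDP for the i7-9750H
    laptop_cores = 6

    epyc_per_core = epyc_tdp_watts / epyc_cores        # ~2.6 W/core
    laptop_per_core = laptop_tdp_watts / laptop_cores  # 7.5 W/core

    # What the EPYC would draw at the laptop chip's efficiency:
    scaled_tdp = laptop_per_core * epyc_cores          # 1440 W

    print(f"EPYC:   {epyc_per_core:.1f} W/core")
    print(f"Laptop: {laptop_per_core:.1f} W/core")
    print(f"EPYC at laptop efficiency: {scaled_tdp:.0f} W")
    ```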

    Bottom line: this is a very efficient CPU, but it’s also an insanely overpowered CPU that most people will never use or need. Only datacenters and extremely dedicated power users need a CPU anywhere near this powerful.

      • ferret@sh.itjust.works · 1 day ago

        Base clock means nothing these days; every CPU in existence (even server parts!) will boost to something higher and downclock to 800 MHz.

      • rumschlumpel@feddit.org · 21 hours ago (edited)

        No, but considering how rarely I use the full power of my CPU, I doubt it would make a big difference. That means I could probably halve the TDP of my CPU, but “about the same efficiency as my throttled desktop CPU” is still pretty alright for a server.

  • lud@lemm.ee · 2 days ago

    384 threads is fucking bizarre!

    I would like to see the fans needed to cool this in a 2U or even 1U case. They must be comparable to a leaf blower…

    • unexposedhazard@discuss.tchncs.de · 2 days ago

      I mean, you can dissipate all the heat of a 2 kW electric space heater with a single fan. 500 W isn’t that much compared to GPU farms with a bunch of GPUs in a single rack slot.

      • Jumuta@sh.itjust.works · 1 day ago

        The difference is temperature: it’s easier to move heat when there’s a large temperature difference between the hot component and the air, and a space heater can run much hotter than a CPU is allowed to. Rough numbers below.
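        A quick airflow sketch of that point, using standard air properties; the ΔT values are illustrative assumptions, not spec numbers:

        ```python
        # Airflow needed to carry away a heat load, from Q = rho * Vdot * c_p * dT.
        RHO_AIR = 1.2     # kg/m^3, air density at ~25 C
        CP_AIR = 1005.0   # J/(kg*K), specific heat of air

        def airflow_cfm(watts, delta_t_kelvin):
            """Volumetric airflow needed to remove `watts` with an air temp rise of dT."""
            m3_per_s = watts / (RHO_AIR * CP_AIR * delta_t_kelvin)
            return m3_per_s * 2118.88  # m^3/s -> CFM

        # A space heater can exhaust very hot air; a CPU heatsink can't.
        print(f"500 W, 40 K air rise (heater-ish): {airflow_cfm(500, 40):.0f} CFM")
        print(f"500 W, 10 K air rise (CPU-ish):    {airflow_cfm(500, 10):.0f} CFM")
        ```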

      • lud@lemm.ee · 2 days ago

        Considering that any server with one of these is likely to have two of them, that’s quite a lot of heat to dissipate.

        A CPU also generally needs to be kept cooler than a space heater.

        My home server has a row of surprisingly powerful little fans, and that’s just for a dual-Xeon system that’s a few years old. I’ve never personally been (knowingly) near a GPU farm, but I have been behind a crazy-ass router (a Cisco ASR 9000-something that’s 10+ U), and the airflow behind it is crazy.

  • Rai@lemmy.dbzer0.com · 2 days ago

    I got a 1000 W PSU so I could run something like this!

    …I will never have a processor like this.

  • AdrianTheFrog@lemmy.world · 2 days ago

    Only half of a toaster (in the US at least)

    It’s a nice, easy unit to compare against, because all of the toasters I’ve seen draw basically exactly 1000 watts.

    It’s also less than double what my desktop draws (11700K, RTX 3060), and those aren’t particularly demanding components; I only get 16 threads. That’s basically exactly 6x more power draw per core, although the cores themselves perform differently, of course.

    It is slightly silly to have that many cores, though. I guess the main reasons not to just use a GPU would be that PCIe doesn’t have enough bandwidth, or that you need a ton of RAM? For a pure compute application, I don’t think there are many cases where a GPU isn’t the obvious choice when you’re going to have almost 400 threads anyway. An A100 has half the TDP, and there’s no way the EPYC chip can even come close in performance: even if you assume the CPU can use AVX-512 while the GPU can’t use its tensor cores, having about a third of the memory bandwidth isn’t exactly encouraging about the level of peak compute they’re expecting (rough numbers below).
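    A ballpark check on that bandwidth ratio, using spec-sheet values rather than measurements (assuming 12 channels of DDR5-4800 on the EPYC and the 40 GB A100’s HBM2e figure):

    ```python
    # Ballpark memory-bandwidth comparison from published specs.
    ddr5_4800_channel_gbs = 4800e6 * 8 / 1e9   # 38.4 GB/s per 64-bit channel
    epyc_bw_gbs = ddr5_4800_channel_gbs * 12   # 12 channels -> ~461 GB/s

    a100_bw_gbs = 1555                         # 40 GB A100, HBM2e

    print(f"EPYC, 12ch DDR5-4800: {epyc_bw_gbs:.0f} GB/s")
    print(f"A100 (40 GB):         {a100_bw_gbs} GB/s")
    print(f"ratio: {epyc_bw_gbs / a100_bw_gbs:.2f}")  # ~0.30, i.e. about a third
    ```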

    • cabb@lemmy.dbzer0.com · 1 day ago

      Maybe it’s meant for hyperscalers, who will rent it out to customers in smaller units of, say, 16, 32, and 64 core instances.

    • trolololol@lemmy.world · 24 hours ago

      I stopped reading your comment when you suggested measuring 1000 W in toasters. Most murican thing. Why don’t you also measure the size of the CPU in buses?

    • pivot_root@lemmy.world · 2 days ago (edited)

      Another application for these things is virtualization. Throw in 3 × 4 TB NVMe SSDs, 384 GB of memory, and a 25G NIC.

      Off a single unit, you could sell 12 VPS instances, each with 16 cores, 32 GB of memory, 1 TB of storage, and a guaranteed 1.5 Gbps link.
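      The carve-up works out exactly with the hardware listed above (a quick check in Python):

      ```python
      # Carving one 192-core box into equal VPS instances.
      host = {"cores": 192, "ram_gb": 384, "storage_tb": 3 * 4, "nic_gbps": 25}
      vps = {"cores": 16, "guaranteed_gbps": 1.5}

      n = host["cores"] // vps["cores"]                        # 12 instances
      print(f"instances:    {n}")
      print(f"RAM each:     {host['ram_gb'] // n} GB")         # 32 GB
      print(f"storage each: {host['storage_tb'] / n:.0f} TB")  # 1 TB
      print(f"guaranteed NIC load: {n * vps['guaranteed_gbps']:.0f} / {host['nic_gbps']} Gbps")
      ```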

  • Richard@lemmy.world · 2 days ago

    Why do CPUs this power-hungry exist? I can barely stand the thought that my MODERN laptop sucks up to 40 W under heavy loads.

    • pivot_root@lemmy.world · 2 days ago

      That’s an EPYC. It’s a datacenter CPU, and it’s priced accordingly. Nobody uses these at home outside of hardcore homelab enthusiasts with actual rack setups.

    • rumschlumpel@feddit.org · 2 days ago (edited)

      That’s 2.6 watts per core, about half of what my desktop PC’s CPU uses per core. And yeah, that’s not for home users.

    • disconsented@lemmy.nz · 2 days ago

      Like others have noted, it’s 2-3 watts per core. That’s pretty incredible given all the extra things the CPU does/supports, and the inherent cost of it not being one big ol’ chip.

      Specifically, they support substantially more memory (12 channels, compared to the typical 2) and 128(+) lanes of PCIe 5 connectivity!

      Because these systems are so dense, data centres can condense N servers into just a couple, and then you only need one set of ancillary components like network cards and fans.

      So, they’re significantly more efficient from a few perspectives.

  • HexadecimalSky@lemmy.world · 2 days ago

    Okay, but I got a laptop, and I miss my desktop because my room is perpetually cold now. My desktop used to keep my room warm.