Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc

Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.

  1. rlk

    rlk Notebook Evangelist

    Reputations:
    132
    Messages:
    561
    Likes Received:
    292
    Trophy Points:
    76
    I don't entirely agree with your conclusion, but I agree with the questions you're asking.

    So the question is, where does desktop end and workstation begin? It's hard for me to see how a true desktop system is going to use 72 PCIe gen4 lanes. That's just a stupendous amount of I/O capacity, potentially 144 GB/sec. That's more than just storage in most cases; it would be multiple high end graphics adapters, video acquisition/distribution, things that just sound a lot more like workstation than desktop.
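    The 144 GB/sec figure follows from PCIe 4.0's per-lane rate. A quick sketch of the arithmetic (the post's round number uses a nominal 2 GB/s per lane; the encoded rate is slightly lower):

    ```python
    # PCIe 4.0 signals at 16 GT/s per lane with 128b/130b line encoding,
    # which works out to roughly 2 GB/s of usable bandwidth per lane per direction.
    GT_PER_S = 16
    ENCODING = 128 / 130        # 128b/130b encoding overhead
    BITS_PER_BYTE = 8

    per_lane_gbs = GT_PER_S * ENCODING / BITS_PER_BYTE   # ~1.97 GB/s per lane
    lanes = 72
    total = per_lane_gbs * lanes
    print(f"{per_lane_gbs:.2f} GB/s per lane, ~{total:.0f} GB/s across {lanes} lanes")
    ```

    With the nominal 2 GB/s per lane, 72 lanes gives the 144 GB/sec quoted above; accounting for encoding overhead it's closer to 142 GB/sec. Either way, it's a stupendous amount of I/O.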

    Intel seems to be classifying parts as server (Xeon), workstation (Xeon-W), HEDT (X), high end mainstream desktop (K), and normal desktop (non-K). With that kind of classification, Threadrippers (particularly Zen 2) are just too powerful -- and more importantly, have vast I/O and memory capacity, and support functionality such as ECC RAM -- to really be desktop; they are full-fledged workstation parts. They have a lot more in common with Xeon-W than with, say, the 10980XE, which only has 48 gen3 PCIe lanes and no ECC support.

    I agree with you that the 3950X and 3900X are really high end desktop. They are more powerful than most people need for desktop purposes, have I/O capability very close to the Intel X series parts (in terms of MT/sec rather than lanes), and in a lot of cases outperform those parts in computational power. But they have no distinction besides core count from the entry-level and midrange parts. Looking at price, I think I'd have to call the 8-core parts high end mainstream, which may be a distinction without a difference.
     
    TheDantee and hmscott like this.
  2. rlk

    rlk Notebook Evangelist

    Reputations:
    132
    Messages:
    561
    Likes Received:
    292
    Trophy Points:
    76
    Other than the 3900X, my trip to Micro Center wound up being wasted and I have a bunch of things to return. Silly me for not doing more research:
    • Radeon RX5500XT isn't yet supported well enough under Linux to be of use. It resolves the problem I'm having with Radeon cards not link training successfully when connected through a DisplayPort KVM switch, but it's just not ready for prime time.
    • Asus RT-AC66U just doesn't have the capabilities I need (DNS server, custom routing rules, VLAN). I'm just going to have to get a DD-WRT router for that purpose.
    • CoolerMaster Hyper 212 cooler, in addition to being a pain to install (I'll have to lift the motherboard) isn't much quieter than the Wraith Prism.
    So foo on me.
     
    hmscott likes this.
  3. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,530
    Messages:
    9,535
    Likes Received:
    4,939
    Trophy Points:
    431
    I was tempted to go the 3960X route but have decided against it for now. We are talking over $2,000 now for plenty of performance gain, but not a required one. My 1950X is more than ample for anything I have to throw at it, and 99% of my time is just simple stuff handled fine by the 2500U. Maybe one day down the line; we shall see.
     
    ajc9988 likes this.
  4. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,072
    Messages:
    20,418
    Likes Received:
    25,218
    Trophy Points:
    931
    LTT builds a new Epyc storage server to replace their Intel server; as always, cringe-worthy entertainment. 28 GB/sec read and 22 GB/sec write with the drives loaded individually; they've got some debugging to get through before Linux / Windows will run them pooled or in a RAID of some sort. Cutting edge: it's sharp stuff.

    Where Intel is in REAL Trouble...
    Dec 25, 2019
    Our editing server is in dire need of an upgrade, and let's just say that this one is going to be EPYC...
     
    Last edited: Dec 27, 2019
  5. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    Although I liked this one, did you see the one where he used a 32-core Epyc chip to build his home systems? He used fiber optics to run the USB and HDMI lines while running VMs for each different system, similar to his 7-editors-one-system rig.

    Now, if AMD does an OCable 64 core with all the memory channels and lanes available (like the upcoming workstation boards, possibly TRX80, or a class of boards like the one for the W-3175X, so the VRMs stay cool with all the lanes available), that may be preferable, latency-wise, to client pass-through machines that screen mirror (like the developer boards being used).

    The fact that we are now talking about a centralized home server, a single system able to host all the other systems that were in the home, means we are looking at a change in what personal computing is. This is thanks to AMD chasing core counts for the commercial space while bringing them to the consumer. Considering it replaces so many other CPUs (you still have dedicated GPUs per VM, so you can't discount their electrical cost), you should be able to do a decent OC on a 64-core chip and still come in below the combined power consumption of all the systems being replaced.

    I don't know if you posted that one, but it was in the past week or so of videos from LTT. Worth a peek.

    Edit: here's the video-
     
    hmscott likes this.
  6. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,385
    Messages:
    5,798
    Likes Received:
    3,716
    Trophy Points:
    431


    400W. This would be nice with a 24-core Threadripper in the future, with Zen 4's 24 cores or Intel's 18 cores.
     
    Papusan and ajc9988 like this.
  7. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    Agreed, but I'll probably keep water for the main system for the foreseeable future. Already have 3x480mm rads. This cannot do that. But it does look like a great product, one I'd consider over AIOs depending on the finished product's performance.
     
  8. Papusan

    Papusan JOKEBOOKs Sucks! Dont waste your $$$ on FILTHY

    Reputations:
    26,417
    Messages:
    24,843
    Likes Received:
    43,966
    Trophy Points:
    931
    Put 3 similar Delta fans on the water cooler and you'll see the temps drop on the tested AIO cooler as well. :) Saved by Delta.
     
    hmscott and ajc9988 like this.
  9. rlk

    rlk Notebook Evangelist

    Reputations:
    132
    Messages:
    561
    Likes Received:
    292
    Trophy Points:
    76
    A personal cloud! Wow.

    The _last_ thing I would want to do with that is overclock the CPU, though. I'd be a lot more inclined to run it at a lower TDP for long-term reliability, in addition to putting it on a big UPS, using ECC RAM, a good RAID setup, and a commercial-grade hypervisor to manage all those VMs.
     
    ajc9988 likes this.
  10. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    I agree with what you said except for the overclock. You would be thermally limited and VRM limited, if not electrically limited (unless you run it on a proper circuit, possibly one with a dedicated 25-30A breaker), before you ever had a chance of blowing up that chip. Bearded Hardware showed that overclocking the 3970X on water could trip VRM shutdown on TRX40 boards before CPU heat or voltage ever reached dangerous territory.

    This is why I mentioned a 32-phase VRM, built so that a custom waterblock can cool it; that way you could approach a high-3GHz to 4GHz all-core OC with 64 cores on package. Granted, TDP is rated at base, not boost, which is why the 7H12 raised the base clock rather than the boost over the prior top-end 64-core.

    Basically, what I'm getting at is getting the boost speed high enough that it is close to a 4GHz machine; since some of these are meant for gaming, that would help with frame rates. As IPC continues to increase over the next couple of generations, and as new games and game engines scale better with cores and threads, the need for higher frequency may become less relevant.

    But this is why I brought that up.

    Edit: the reason I'm thinking of the dedicated line and breaker is that you will also have 4+ graphics cards in there, so you can wind up with a lot of pull if everything is under sustained load, however unlikely that is depending on your household. Overclocking a 64 core plus multiple graphics cards is a lot of power draw. Add anything else, like a large-scale NAS of 30-96 drives, and you are looking at needing to take a lot into consideration. Add on network switch(es), a router, and dedicated servers aside from the one for the home cloud, and you are now talking about a good power draw. One cabinet and you would be set, with all the noise far away from the regular rooms in the house.
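    A back-of-the-envelope sketch of why such a rig pushes toward a dedicated 25-30A circuit. All component wattages below are illustrative assumptions, not measurements from any of the systems discussed:

    ```python
    # Rough branch-circuit load estimate for a consolidated home server.
    # Every wattage here is an assumed figure for illustration only.
    loads_w = {
        "64-core CPU, overclocked": 700,    # assumed sustained package power
        "4x high-end GPUs": 4 * 300,
        "board / RAM / fans / pumps": 150,
        "30-drive NAS": 300,                # ~10 W per spinning drive
        "switches + router": 100,
    }
    total_w = sum(loads_w.values())
    psu_efficiency = 0.90                   # assume 90%-efficient supplies
    wall_w = total_w / psu_efficiency
    volts = 120                             # North American circuit
    amps = wall_w / volts
    print(f"~{wall_w:.0f} W at the wall, ~{amps:.1f} A on a {volts} V circuit")
    ```

    Under these assumptions the cabinet pulls roughly 2,700 W, around 23 A, which would trip a standard 15 A or 20 A breaker under sustained load but fits on a dedicated 25-30A line.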
     
    Last edited: Dec 27, 2019