Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc

Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.

  1. rlk

    rlk Notebook Evangelist

    Reputations:
    132
    Messages:
    561
    Likes Received:
    292
    Trophy Points:
    76
    And my point is that if IPC is high enough, it will compensate for clock rate. Different architectures get different IPC; if the rumors are correct that Zen 3 will get a big IPC boost over Zen 2 (which admittedly is a mighty big if), you might not need such a high clock rate. 4 GHz on one architecture is not the same as 4 GHz on another.

    And even if we assume that GHz = GHz, overclocking is not the issue. Does it matter if you get (say) 4.5 GHz from a chip rated for 4.5 GHz or from one rated at 4 GHz that you overclock to 4.5 GHz?
     
  2. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    Stop trying to obfuscate. The estimate for Zen 3 is a 17% IPC uplift. That does not equal the mainstream being 26% faster. Zen 2 is the first time the gaming performance gap between AMD and Intel became negligible, even though Intel still holds the crown. And no, I would not recommend an Intel platform for this because of inadequate PCIe lanes and the power draw of a dual-CPU board.

    Frequency is a measure of cycles per second. So, when you multiply the instructions per cycle by the cycles per second, you get the instructions per second, because the cycles cancel out.

    Because of this, any reduction in cycles per second (frequency) must be made up for in instructions per cycle to get the same performance. Add to this that, due to the APIs, a single thread carries the bulk of the instructions in gaming instead of the work being spread out more, along with the crap schedulers involved, and you are looking at not being there yet.

    Maybe in 2021. Or a Zen 3 chip able to reach 3.7-3.8 GHz would be close to equivalent for that purpose, since that effectively slaps on roughly 10% more frequency, but that is next year's line.
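    The IPC-times-frequency arithmetic above can be sketched as a quick back-of-the-envelope check. The 17% figure is the rumored Zen 3 IPC estimate discussed in this thread; the 4.3 GHz baseline is an assumed overclocked Zen 2 desktop target, not an official spec:

```python
# Back-of-the-envelope: performance ~ IPC x frequency, so a part with
# higher IPC needs proportionally less frequency for equal performance.

def equivalent_frequency(base_freq_ghz, ipc_uplift):
    """Frequency needed to match base_freq_ghz after a fractional IPC uplift.

    ipc_uplift is fractional, e.g. 0.17 for a rumored 17% gain.
    """
    return base_freq_ghz / (1.0 + ipc_uplift)

base = 4.3  # GHz, assumed overclocked Zen 2 desktop baseline
zen3 = equivalent_frequency(base, 0.17)
print(f"Zen 3 needs ~{zen3:.2f} GHz to match {base} GHz on Zen 2")
# ~3.68 GHz, consistent with the 3.7-3.8 GHz figure above
```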

    So I get what you are saying. I do. But math is a fickle mistress and happens to love me today. Math is on my side here.

    Further, you are confusing my arguments with those of overclockers who just want headroom to feel special. Why bring that up? We are talking about enterprise hardware and the ability to get desktop-comparable performance from a single home server. That means comparing it to desktop performance is apt. As explained, given the frequency and IPC calculations, the current Zen 2 Epycs run too slow without being overclocked, the server boards don't have sufficient VRMs to overclock, and to my knowledge, only an all-core overclock is currently possible with Zen 2 Epyc anyway.

    This is why I hope TRX80 is a robust overclocking monster with the same complement of features as Epyc but with overclocking fully supported. That lets servers remain tailored for their purpose, while also giving home enthusiasts a platform that can meet these requirements.
     
  3. rlk

    rlk Notebook Evangelist

    Reputations:
    132
    Messages:
    561
    Likes Received:
    292
    Trophy Points:
    76
    We're talking about two different things. My original response was to this:

    My point, which I stand by, is that it does not matter whether the performance -- single thread or otherwise -- comes from overclocking, stock frequency, or improved IPC. There's a lot of emphasis on overclocking here that I consider to be misplaced. AMD could easily make their chips more "overclockable" by dialing back the stock frequency and having less aggressive boost algorithms, but that would only reduce performance for most people.

    I agree that it's not likely that core for core a 3.7~3.8 GHz Zen3 would generally match even a 9900KS -- that would require something like a 30% IPC increase. But that's not the comment I was originally responding to that sparked this.

    But if you're really building a gaming cloud-in-a-box, you need more than single thread performance anyway. Sure, if each VM were isolated on its own chiplet or at least CCX there would be less thermal or power cross-interference, but there would likely be some, and if your gaming mix can take advantage of even 4 threads, you're going to find yourself looking at something rather close to all-core performance. Anyway, one of these days the game programmers are going to have to learn how to write proper MT code, because that's where the big improvements are going to come from.
     
  4. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    You are missing my point, and annoyingly so. Literally, I am saying that at 3.45 GHz, the top frequency a Zen 2 Epyc CPU can reach, you will bottleneck the GPU you are passing through to the VM. I'm not saying it has to be 9900K performance, but at 4.3 GHz, AMD Ryzen 3000 CPUs are now beating the 6700K and 7700K in gaming. That is quad-core performance.

    Now, to achieve similar PERFORMANCE, since IPC is FIXED, and because on Epyc AMD's boost algorithm does not allow two, four, or eight cores to boost into the 4 GHz range when thermals allow and the CPU as a whole is NOT under load, and noting that a household server will NOT be fully loaded all at once, increasing frequency is the ONLY way to get rid of the bottleneck.
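    At equal IPC (both chips being Zen 2), the single-thread gap between the 3.45 GHz Epyc boost and the 4.3 GHz desktop figure cited above is simple division, a sketch using only the numbers from this post:

```python
# At equal IPC, single-thread performance ratio is just the frequency ratio.
epyc_ghz = 3.45   # top boost cited above for Zen 2 Epyc
ryzen_ghz = 4.3   # overclocked Zen 2 desktop figure from the post
ratio = epyc_ghz / ryzen_ghz
print(f"Epyc delivers ~{ratio:.0%} of the desktop single-thread "
      f"performance (~{1 - ratio:.0%} deficit)")
# roughly a 20% single-thread deficit, hence the GPU bottleneck claim
```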

    This isn't about their boost algorithm, or whether frequency is left on the table, etc. Server chips are built to operate at a SPECIFIC TDP. They are not meant to overclock (even though you can), nor do they often need the highest single-threaded performance.

    BUT, we are not talking about using those chips in their ordinary use case for which they are designed. We are talking about a specific use case that WAS NOT ENVISIONED when the chips were designed. That means that we need to modify the behavior of the chip to fit OUR NEEDS. And that means increasing frequency in this instance. EVEN LINUS MENTIONED IN PASSING OVERCLOCKING THE EPYC CPU! He didn't cover it, but sure did mention it, and that means a frequency overclock.

    Moreover, people WILL use 8-cores per machine. Even Linus did that, except one was 6 cores to leave 2 cores for the host.
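    The cores-per-VM carve-up described above (one VM trimmed to six cores to leave two for the host) can be sketched as a simple pinning plan. The core count, reservation, and VM names here are hypothetical illustrations; on Linux such a plan could then be applied with `os.sched_setaffinity` or libvirt's vcpupin:

```python
# Sketch: partition a host CPU into per-VM core sets so each VM sits on
# its own cores (ideally aligned to CCX boundaries to limit cross-talk).
# The 32-core host and VM names below are assumptions for illustration.

def plan_pinning(total_cores, host_reserve, cores_per_vm):
    """Return {name: core_id_list}, reserving the first cores for the host."""
    plan = {"host": list(range(host_reserve))}
    next_core = host_reserve
    vm = 1
    while next_core + cores_per_vm <= total_cores:
        plan[f"vm{vm}"] = list(range(next_core, next_core + cores_per_vm))
        next_core += cores_per_vm
        vm += 1
    return plan

plan = plan_pinning(total_cores=32, host_reserve=2, cores_per_vm=8)
# host keeps cores 0-1; vm1 gets 2-9, vm2 gets 10-17, vm3 gets 18-25
```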

    And it is fine to say games need to become more MT. I've screamed in this forum about software optimizations FOR YEARS, and even made that point above. But until we have larger DX12 and Vulkan adoption, and for occasions when people STILL WANT TO PLAY THEIR OLD GAME LIBRARY, they need the ST performance, and to get that, you need frequency.

    Now, because Epyc doesn't boost the way Ryzen does, where boost is limited only by temperature, which would let a server chip of this type get by without overclocking, you have to include overclocking in the discussion. I even covered this previously.

    Because of that, a board designed with 32 power phases, similar to Intel's $1000-2000 boards, but made as TRX80 with all the PCIe lanes of Epyc and 8-channel memory, would allow the chip to boost quite high, so long as you had the cooling. I am assuming AMD's 64-core TR will boost similarly to Ryzen. So this is why I'm saying what I would like to see that platform be/become.

    Instead, all you see is complaining about overclockability, instead of going beyond that to see what features are being requested and WHY.

    I even pointed out that 3.7-3.8 GHz is acceptable for Zen 3 based CPUs, and 3.4 GHz will likely be acceptable for Zen 4 CPUs. Instead of acknowledging why I gave those per-generation figures, basing them on IPS (instructions per second), you immediately complain about people focusing on overclocking without examining WHY!

    That is the problem I have with you. I see your point, but you are ignoring mine COMPLETELY, even when I say how your points are already incorporated into my analysis.

    Instead, try asking questions, as it seems I've given this more thought than you have. After finding out the WHY and the FACTORS I weighed in coming to my conclusions on frequency and overclocking, then you can make better responses.

    You can even go back and re-read my posts. Also, I'm an enthusiast who does overclocking as a hobby. I was the first person telling people that boost, although killing my hobby, is the way things are going. I've given a fair amount of credit to AMD's boost algorithm. But that ignores the context I am discussing now, which is EPYC SERVER CPUs. They function differently.

    Hell, I'm one of the people able to run my 1950X at 4.2 GHz at 1.375 V. I have a cherry chip. It made the 2950X not worth it to me, and the higher-core-count chips, due to architecture and scheduler optimization issues, aren't worth it either. So when I am talking about using a chip for an unstated purpose outside the norms, there might be a reason you need to change parameters to make it useful for your needs.

    Is this getting through to you yet?
     
  5. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,072
    Messages:
    20,418
    Likes Received:
    25,218
    Trophy Points:
    931
    The Hardware Unboxed guys Q&A, some interesting subscriber questions and answers:

    December Q&A [Part 3] Would Cheap Intel 10th-Gen CPUs Be Worth Buying vs Ryzen?
    Jan 3, 2020
    Hardware Unboxed
    00:36 Front or top mounted radiator? Which has the lower temps?
    02:31 Do you think Intel has anything left to offer?
    04:39 Do you think you might switch Intel cpu with AMD for gpu testing?
    06:49 Do you feel Intel can price their cpu's to gain recommendations in the desktop user space equal to 3600?
    08:38 Why is it that Steve is already rocking 20+ different AIB models of 5800xt while Tim is using a humble reference model?
    09:51 Does every new gpu launch bring this much criticism from the respective brands fanboys?
    13:10 Will we ever see a game that use DLSS 2X, the one that is supposed to improve images at native res.?
    16:00 With the end of the Skywalker franchise would you guys be a Jedi or Sith?
    16:29 Do you think in the current time space that Intel having it's own fabs is a blessing or curse?
    16:29 Do you think Intel's dive into GPU and memory market is a sign of wanting to better diversify their portfolio?
    18:14 How much CPU threads and memory is enough in 2020 for playing back videos?
    19:42 Do you think we can expect AMD to finally update their encoder and get better with their drivers?
    21:25 Tim, are there any curved color accurate monitors? Or are they all gamer focused?
    23:42 Are there any free / affordable monitor display calibration?
    24:08 How much of an impact on SSD (M.2) thermals does a thick layer of insulating plastic have?
    26:19 Should I wait for new GPUs or upgrade now (from 1060 6gb to 5700X)?
    28:39 Your Christmas and NY plans?
     
  6. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,385
    Messages:
    5,798
    Likes Received:
    3,716
    Trophy Points:
    431
    Time to give up my crappy, years-old Adobe Premiere and start tapping into HandBrake; x265 is looking good. AMD CPUs are looking more and more promising. Workloads change with hardware and vice versa.
     
    hmscott likes this.
  7. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,072
    Messages:
    20,418
    Likes Received:
    25,218
    Trophy Points:
    931
    Here's a user review of their experience overclocking their 3950X vs. their old Intel system; the second video, covering the overclock, is first:

    Ryzen 9 3950X Max System Overclock (Memory, Fabric, CPU, GPU)
    Jan 2, 2020
    Nick Muir
    Continuation of my 3950X real world video


    It's here! RYZEN 3950X - How does it perform in the real world vs i7-4770k?
    You be the judge!
    Premiered Dec 21, 2019
    Nick Muir
    New RYZEN 3950X vs old i7-4770k Skip ahead links in the first comment. I sped up the computing footage for the i7 and the 3950X by the same amount in each group of comparisons. That way you can see the difference in real time - without watching hours of video. The 100 photos that were used were Nikon D750 .NEF (RAW) files (~25MB each). The video that was used was GH5 4k 24fps 10 bit ALL-I (~54 GB).
    Vaguely scientific real world tests: Photo editing in Adobe Lightroom 4k and 1080p video editing in Adobe Premiere Pro ASUS RealBench Red Dead Redemption 2 benchmark Borderlands 3 benchmark Link to build: https://kit.co/NickMuir/diy-pc-build-...
     
  8. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,072
    Messages:
    20,418
    Likes Received:
    25,218
    Trophy Points:
    931
    Hardware Unboxed covers AMD and Intel news + fills in AMD news not covered during the presentation - not enough time for AMD to share it all on stage:

    News Corner | AMD Talks Big Navi & Zen 3 (Briefly), Nvidia 360Hz G-Sync, Intel Comet Lake Teaser

    Jan 8, 2020
    Hardware Unboxed
    [spoiler = News Topics Index and Sources:]
    00:00 - Lisa Su on High-End Navi, Zen 3, Desktop APUs
    05:04 - Intel Teases Comet Lake H and Tiger Lake
    08:02 - Intel Ghost Canyon NUC
    10:06 - Nvidia and Asus Partner on 360Hz G-Sync
    11:19 - Nvidia Game Ready Driver for CES
    12:26 - What is Thunderbolt 4?
    Sources:
    https://www.anandtech.com/show/15344/...
    https://videocardz.com/newz/amds-lisa...
    https://www.tomshardware.com/news/int...
    https://koolshare.cn/thread-168913-1-...
    https://www.theverge.com/2020/1/7/210...
    https://www.gizmodo.com.au/2020/01/ra...
    https://www.nvidia.com/en-us/geforce/...
    https://www.nvidia.com/en-us/geforce/...
    https://www.tomshardware.com/news/wha...
    [/spoiler]


    AdoredTV weighs in on Intel and AMD at CES 2020


    Intel and AMD at CES 2020
    Jan 8, 2020
    AdoredTV
    My recap and analysis of the big two at CES.
    0:00 - More Dodgy Intel Marketing
    5:39 - Spence Leaks Xbox Series X APU
    6:17 - AMD at CES
    24:18 - Intel at CES
    30:06 - Nvidia at CES
    30:11 - Summary
     
    Last edited: Jan 8, 2020
  9. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,942
    Messages:
    12,320
    Likes Received:
    2,343
    Trophy Points:
    631
  10. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    Do you know the origin of the TRX80 and WRX80 rumor?

    It was from the official filing regarding a specific generation of USB support.

    Now not all filings, such as patents, result in products that are made and sold to consumers.

    Personally, I took the filing together with AMD's comments clearly stating that memory bandwidth is an issue for the highest-core-count chips. That is most likely part of the reason the 64-core chip shows only 50% more scaling than the 32-core chip shown at CES, the other factors being software optimization, boost frequency differences, etc.

    When you examine these together, you wind up with the logical conclusion that something would need to be done to resolve the memory bandwidth issue.

    In comes increasing channels. This is the logical extension, with the other logical recourse being to wait for DDR5 in 2021. Even with that increased bandwidth, there is a possibility of more cores with Zen 4 on 5nm.
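    The bandwidth motivation for more channels is straightforward arithmetic. Each DDR channel carries a 64-bit (8-byte) bus, so peak bandwidth scales linearly with channel count; DDR4-3200 and the 4-vs-8 channel comparison are used here for illustration:

```python
# Theoretical peak bandwidth: transfer rate (MT/s) x 8 bytes per channel.
def ddr_bandwidth_gbs(mt_per_s, channels):
    """Peak bandwidth in GB/s for a given transfer rate and channel count."""
    return mt_per_s * 8 * channels / 1000  # 64-bit (8-byte) bus per channel

quad = ddr_bandwidth_gbs(3200, 4)  # quad-channel Threadripper-style: 102.4 GB/s
octo = ddr_bandwidth_gbs(3200, 8)  # Epyc-style 8-channel: 204.8 GB/s
print(f"4-channel DDR4-3200: {quad:.1f} GB/s; 8-channel: {octo:.1f} GB/s")
```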

    But adding extra channels also comes with challenges, from signal integrity on motherboard traces, which will be even more demanding for DDR5, to the design of the I/O die, including size constraints (assuming they would also want to increase the channels on server boards).

    Another option marries waiting for DDR5 with placing HBM on package, which acts as a massive first-level buffer with much higher bandwidth than either DDR standard, so long as you can keep it fed from the DDR efficiently (which shouldn't be much of an issue).

    So, are you going to say the filing was false now? Or rather wait for confirmation that a product will see the light of day? The latter is sound; the former is not.
     