AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris GPUs

Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. Deks

    Deks Notebook Prophet

    Reputations:
    983
    Messages:
    4,038
    Likes Received:
    1,243
    Trophy Points:
    231
    Again, GloFo 7nm will be used for CPUs... not GPUs - so, yes, a 40% increase in performance at the same power draw (over Ryzen 1) is valid.
    GPUs will use TSMC 7nm, which according to TSMC's own technical information gives a 35% performance increase over its 16nm process at the same power draw.

    There is no magic involved here... we know the reasons for Vega's larger power draw:
    1. 14nmLPP, which resulted in high power draw at relatively low clocks because the process was designed for low clocks and mobile parts (hence why Vega is far more efficient at lower frequencies) - TSMC 7nm is designed for high performance and efficiency instead.
    - You couldn't push the clocks very high on 14nmLPP because the process can't handle it, and you end up pumping in higher voltages to sustain them compared to Intel's 14nm and TSMC 16nm.
    2. 14nmLPP's lower yields, resulting in high voltages.
    3. 40% more CUs.

    Can you demonstrate that the graph is displaying performance gains when applied to Zen 2 CPUs... or to the upcoming Vega and Navi GPUs?
    The article in question is comparing GloFo 14nmLPP and GloFo 7nm (the latter will be used for CPUs... not GPUs).
     
    Last edited: Jun 11, 2018
  2. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,150
    Messages:
    4,706
    Likes Received:
    6,576
    Trophy Points:
    581
    You are so wrong that it is starting to annoy me.

    1. The power consumption was cut roughly in half, to about 150W. Since it is a trade-off (you get one, the other, or a mix of both), TSMC's 35% at the same power draw IS IMPOSSIBLE HERE. That is what you seem to be missing; no matter how many times I repeat it, you ignore it.
    2. Then why are the performance gains not higher than 35%, considering HBM2 clocked at 1.2GHz is used? Just overclocking the HBM2 a little, by 50-150MHz, gave around 5% more performance. So you now have a problem explaining why that performance increase is missing from the product.
    3. Power consumption is measured in watts, and W = V x A. So, to cut the power consumption in half, you MUST reduce voltage and amps, or take some from each (see the sketch below this list).
    4. Low yields have NO RELATION to high voltages. The qualifying voltage for a specific operating point DOES affect what counts toward yields. Now, with the power consumption focus, and improving that so much, they could have tightened what qualifies a die as a pass. Also, because of costs, they may have had to raise the voltage tolerance to qualify a die on 14nm because of the company's financial situation at the time of release. But that is not what you are saying with your second statement.
    5. If the compute units are the same on Vega 7nm and Vega 14nm, how is this a point?
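
    A minimal Python sketch of the arithmetic in point 3; the voltage and current figures are made up for illustration, not Vega's actual numbers:

        # P = V * I: to halve power, the voltage/current product must halve.
        # Figures below are illustrative only, not Vega specifications.
        def power_w(volts, amps):
            return volts * amps

        before = power_w(1.20, 250.0)   # 300 W at a hypothetical 1.20 V / 250 A
        half_v = power_w(0.60, 250.0)   # halving voltage alone: 150 W
        mixed  = power_w(1.00, 150.0)   # or a smaller cut to each: 150 W
        print(before, half_v, mixed)    # 300.0 150.0 150.0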

    And yes, I can show that, either with hardware to test for empirical evidence or by using an estimate from the idealized curve, as the prior image showed. This is based on physics. It isn't magic.

    Also, AMD clearly said they will use both and differentiate on products. Never did they say only GPUs at TSMC and only CPUs at GF. If you can show me an article showing AMD said that, I'll eat my words, but I doubt you can. Why? Because this is what Su said:

    "So in 7nm, we will use both TSMC and GlobalFoundries. We are working closely with both foundry partners, and will have different product lines for each. I am very confident that the process technology will be stable and capable for what we’re trying to do."
    https://www.overclock3d.net/news/misc_hardware/amd_plans_to_tap_globalfoundries_and_tsmc_for_7nm/1

    Instead, there are rumors of differentiating the product stack between consumer and compute products, which would suggest more accelerators and product stacks. For graphics cards, they have low and mid tiers, then high, then commercial. They used GF for practically all of that stack and all of their CPUs at 14nm. You are assuming that means all graphics cards will be at TSMC, but that is an assumption. A better assumption is that Vega 7nm is being done at TSMC because TSMC is already in volume production, while AMD is a key partner for GF, helped create the tools for its process, used its sway to get GF to change their fin pitch to match TSMC, etc., and likely designed within that range. So, start at TSMC and move some production over to GF once it is up and running there - that makes sense. Then there is a rumor of a 7nm card being made at GF, due to comments made to a reporter visiting GF.

    So, you are building in assumptions that may or may not be true and using false information as the power draw was reduced. That is what I'm trying to get through to you.

    Edit: Now, if you are saying they stopped the practice of raising volts to qualify dies - which would drop the watts used from 280-300W+ down to what the average undervolt achieved, closer to 220W, making the energy reduction about 35% - then that would easily explain the portion of the performance increase I speculated about, over and above the ~10% from the HBM2 frequency, the increased bandwidth from the wider interface, and the 5-15% performance increase that still comes with a 50% power reduction relative to the maximum possible reduction on the curve. In that case, I'd say you may have a point. But that, also, is not what you said.
     
    Last edited: Jun 11, 2018
  3. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    4,569
    Messages:
    15,958
    Likes Received:
    19,648
    Trophy Points:
    931
    This is a very shallow, quick, on-the-fly conference demo booth review that didn't have time to test a range of games or benchmarks, nor does it tune any of the AMD CPU / GPU tunables - or even mention whether they can be tuned.

    It's early days, and as with the first Asus GL702ZC (Ryzen + RX 580), software updates should improve tuning and performance over time.

    It is nice to see the big-time reviewers have an interest, and hopefully Gordon and other reviewers will do more in-depth testing, including performance and thermal tuning using AMD / Acer software.

    I wonder how much the mobile Vega 56 it reportedly uses can be tuned?

    We just tested the all-AMD Acer Predator Helios 500 gaming laptop

    Here's how fast the new Predator Helios 500 is with Ryzen 7 2700 and Radeon Vega 56 graphics
    By Gordon Mah Ung Executive Editor, PCWorld | JUN 6, 2018 7:12 AM PT
    https://www.pcworld.com/article/327...d-acer-predator-helios-500-gaming-laptop.html

    "The new Acer Predator Helios 500 has enthusiasts of both Intel and AMD covered. That’s because you can now choose from the model shown here, which is built around an Intel CPU and Nvidia graphics, or go all AMD with the version you see at the top of this article.

    The AMD version of the Helios 500 features a 17-inch, 144Hz FreeSync Panel; 32GB of DDR4; a 256GB M.2 SSD; and 1TB HDD. And most importantly for AMD fans, it features an 8-core Ryzen 7 2700 and Radeon Vega 56 graphics.

    The CPU is the desktop Ryzen 7 2700. It’s an 8-core chip with SMT for 16-threads of computing power. It’s also likely the fastest CPU around for many multi-threaded loads. In Cinebench R15, for example, we saw the Helios 500 spit out a score of 1,512.

    (For reference, a Ryzen 7 2700X is in the 1,800 range. That X part does hit higher clock speeds, though.)

    As far as we’re concerned, the performance of the Radeon Vega 56 chip is even more interesting. We know from our review of the desktop part that it punches beyond its class, and likely caused Nvidia to release the GeForce GTX 1070 Ti in response.

    Although we thought the Radeon Vega 56 was a re-purposed desktop chip, we were told that, no, it's a part that has always been intended for mobile use.

    That tells us it may very well be the very first sighting of the Radeon RX Vega Mobile chip that AMD talked up at CES. Mind you, this is not the same graphics core used in Intel’s Kaby Lake G, that unprecedented Intel/AMD collaboration.

    As its name implies, the Radeon Vega 56 should be a full Vega 56 part. We only had one benchmark available to run, but it’s pretty modern—Ubisoft’s Far Cry 5. We set the laptop to 1920x1080, selected Ultra and also switched off FreeSync to prevent it from interfering with any results.

    We know public results of desktop GeForce GTX 1060 6GB cards are in the 70 fps range and GeForce GTX 1070 cards sit in the 90 fps range. The Vega 56 in the Helios 500? It hit a pretty respectable 80 fps, but it's still definitely short of a full desktop Vega 56, which actually pushes the 110 fps range in this game.

    Given the thermal limitations of laptops, we have to assume the chip in the Helios 500 is running the GPU at lower clock speeds.

    The last detail we’ll mention is the battery, a 74-watt hour cell. Like most desktop replacement gaming laptops with big screens and big GPUs and CPUs, we’d expect that you’d be lucky to get an hour under heavy loads. But that’s actually typical."

    Acer Predator Helios 500 with Ryzen and Vega plus benchmarks
    PCWorldVideos
    Published on Jun 6, 2018
    Gordon shows you the Predator Helios 500 with Ryzen 2700 and Vega 56. He even got his hands dirty and pulled benchmarks for Cinebench and Far Cry 5. Melissa already did a hands-on with the Intel and Nvidia option, and now Gordon has all the AMD fans covered.
     
  4. yrekabakery

    yrekabakery Notebook Deity

    Reputations:
    200
    Messages:
    821
    Likes Received:
    751
    Trophy Points:
    106
    For reference:
    [IMG]

    So the Vega 56 in that Acer must be clocked at ~1000MHz.
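
    A rough back-of-the-envelope Python sketch of one way to land on that figure, assuming Far Cry 5 FPS scales roughly linearly with core clock (it doesn't exactly) and taking ~1470MHz as the desktop Vega 56 boost clock:

        desktop_fps = 110.0          # desktop Vega 56 in Far Cry 5 (from the PCWorld piece)
        laptop_fps = 80.0            # Helios 500's Vega 56
        desktop_clock_mhz = 1470.0   # nominal desktop boost clock (assumption)

        print(round(desktop_clock_mhz * laptop_fps / desktop_fps))  # ~1069 MHz

    Linear scaling overstates the contribution of core clock alone (memory bandwidth is unchanged), so the sustained clock is probably a bit lower than that estimate.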
     
  5. Deks

    Deks Notebook Prophet

    Reputations:
    983
    Messages:
    4,038
    Likes Received:
    1,243
    Trophy Points:
    231
    You have a point for the most part... and I already conceded this before.
    However, a few things I'd like to address:

    1. I DO understand what you're saying, but you might want to have a look at this and tell me your thoughts about it:
    https://www.tomshardware.co.uk/amd-7nm-gpu-vega-gaming,news-58593.html

    "The new process also affords a 2x increase in power efficiency and AMD also claims it provides a 1.35x increase in performance. "

    Note that the article treats both the 2x increase in efficiency and 35% performance increase as separate... they don't appear to be treating them as if you can only have one of the two, and not both.

    I realize that it's usually a cut in power consumption OR an increase in performance (at the same power draw), but this is not the GloFo 7nm process... and TSMC's process was superior to 14nmLPP in several ways (especially where GPUs are concerned).
    GloFo's technical specs clearly stated that its 14nmLPP process was designed for low clocks and mobile parts... whereas the 16nm process details say it's designed for high performance and efficiency - you cannot simply wave this off as 'inconsequential', because it prevented Ryzen from overclocking reliably beyond 4GHz in the first place, kept it from sustaining more than a 200MHz boost across all cores, and is part of why Pascal was clocked much higher than Vega on the core (though, to be fair, the 40% higher number of CUs on Vega could have also eaten away at that while massively increasing power draw).

    2. Possibly due to the premise that HBM heats up quite a bit when you overclock it... but we won't know how far up AMD clocked the HBM until Navi is fully released. All we saw was a preliminary benchmark that could have been anything.
    Also, if AMD indeed clocks the HBM to 1200MHz on Navi, that would be 27% higher HBM speed vs Vega 64 (air)... couple that with a core clock of about 1200MHz for example (with a possibly undefined boost speed and, of course, hypothetical undisclosed IPC gains from optimizations) and you get a decent performance increase... how much exactly we won't know until the product launches.
    But I digress; if AMD doesn't change the number of CUs, this could be all we get.

    3. This is why I mentioned that AMD might be able to achieve the 35% increase in performance or more alongside the claimed power reductions, because the voltages would likely drop by default on these new GPUs - specifically because TSMC as a company has experience producing GPUs on its manufacturing process and had better yields. But this is open to change.

    4. Wait a second... I was under the impression that AMD had to raise operational voltages to 1.2V specifically because of process limitations, which increased the number of functional GPU dies as a result (and also radically increased power consumption).
    Nvidia didn't have this problem because it used a TSMC process that was already tailored towards GPU production and suited for high clocks (hence why Pascal could operate at lower voltages and higher clocks in comparison to Polaris and Vega).

    It was published that Zen 2 will be produced on GloFo 7nm and that GPUs will be made on the TSMC 7nm process... and in addition, it was also published that since GloFo won't be able to meet upcoming demand, some CPU production would be delegated to TSMC due to process similarities.

    5. Emphasis on 'if'. We don't know if AMD will retain the same number of CUs for Navi... they might. However, recall that Intel is able to achieve much higher boost frequencies across all cores vs Ryzen, and that in comparison to Ryzen, Intel doesn't suffer from a proverbial 'overclocking wall' (the more you bump up the clocks on 14nmLPP, the higher the voltages the hardware requires - Intel, for example, can overclock with a smaller bump in voltage because their 14nm process is more efficient and suited for high clocks - this is also evident with Pascal, which was clocked 32% higher on the core and ran at lower voltages out of the factory because it did NOT suffer from yield issues).
     
    Last edited: Jun 12, 2018
  6. Deks

    Deks Notebook Prophet

    Reputations:
    983
    Messages:
    4,038
    Likes Received:
    1,243
    Trophy Points:
    231
    Sources: AMD Created Navi For Sony's PlayStation 5, Vega Suffered

    https://www.forbes.com/sites/jasone...nys-playstation-5-vega-suffered/#361c8fb124fd

    Huh... this would tie in well with the most recent tidbits we're getting about how Vega is undergoing optimizations specifically for AI... but if Navi is targeted at consumers, then what would 'we' get performance-wise once Navi is launched?
    A replacement GPU for Polaris that would effectively give Vega 56/64 performance?
    If that's the case... then the design is probably somewhat different from Vega... possibly more optimized towards gaming (given it might have been designed for the PS5).
     
    Vasudev and hmscott like this.
  7. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,150
    Messages:
    4,706
    Likes Received:
    6,576
    Trophy Points:
    581


    Ok, I am breaking this out so that I can address it, as I think I know where the confusion has been on your part now. Whenever a product is made on a process, it has a voltage curve: for a given frequency there is a voltage that must be supplied, and the required voltage rises as the frequency rises. Think of it like when you undervolt a CPU, or overclock a CPU at specific multipliers.

    As an example, my 1950X has these voltage requirements at these multipliers:
    3950MHz = 1.175V
    4000MHz = 1.225V
    4050MHz = 1.2875V

    If you notice, the first step (3950 to 4000MHz) costs almost the same voltage as the second step (4000 to 4050MHz), but the second step takes slightly more voltage than the first. At some point, the voltage increase needed for the same frequency increase will jump up sharply. That is what the power efficiency/performance curve is.

    On a new process, you can either take the power reduction at the same frequency, or take the performance/frequency boost at the same energy consumption. You CANNOT take both in full. You can, however, take a little of each and wind up somewhere on that curve: say the process offers a 60% energy reduction or a 40% performance increase; you could instead take a 50% energy reduction and a 10-15% boost in performance, or a 30% boost in performance and a 20% reduction in energy, etc. It is wherever it falls on that line for the trade-offs on the energy/performance curve of that process. Does that make more sense?
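
    A minimal Python sketch of that trade-off using the 1950X steps above; the V^2 x f power model and the 0.7x voltage figure for a hypothetical new node are illustrative assumptions, not GloFo or TSMC data:

        # Dynamic power scales roughly with V^2 * f (capacitance folded into k).
        def dynamic_power(freq_ghz, volts, k=100.0):
            return k * volts**2 * freq_ghz

        # The 1950X steps quoted above: the voltage cost per +50MHz grows.
        for f, v in [(3.950, 1.175), (4.000, 1.225), (4.050, 1.2875)]:
            print(f"{f:.3f} GHz @ {v:.4f} V -> ~{dynamic_power(f, v):.0f} (relative power)")

        # Suppose a new node runs the old clock at ~0.7x the voltage (made-up figure).
        # You can spend that headroom on power at the same clock...
        old_f, old_v = 4.000, 1.225
        print(dynamic_power(old_f, old_v * 0.7) / dynamic_power(old_f, old_v))  # ~0.49
        # ...or raise the clock until power is back at the old level, but then the
        # power saving is gone. You slide along the curve; you don't get both ends.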

    Once again, you use a logical fallacy: you ignore the curve, which is a hard fact, and appeal to the TSMC process instead, which is HORSE HOCKEY! You still have not proven, with absolute certainty, which products are coming from TSMC, making it a bad statement AND one relying on magic. TSMC has its own curve for performance or energy efficiency, which you referenced a couple of posts back. What you are missing is that those numbers refer to two points on a curved line. That means that even TSMC is subject to the same physics that GF is.

    Finally, the idealized power curves are NOT comparable between companies. Why? Too many variables. The only way to remove the architecture, the implementation of different aspects of the chip, etc., would be to create the same design at both fabs, which is impossible because moving to a different fab requires a redesign due to process differences. At 7nm you will have the first chance to test the theory you put forth on process efficiencies, etc. Now, there could be some changes, yes, but you are speculating and need to say you believe it rather than state it as fact, because it IS NOT a fact proven empirically; it is a theory based on two different companies producing two different architectures, with many different features on those architectures, on two different processes and two different nodes at two different companies. Have you controlled for all of those variables? No (saving you the time of trying to say anything but the truth: those have not been controlled for).

    Now, in the event that both AMD and Nvidia wind up with roughly equal frequencies while being produced on the same process and node at one company, you would have partial control of those factors, but you still would not have controlled for the process at the other fab. If AMD produces their gaming cards at one company and their compute cards at another, then you have almost a control there, but because they are diverging the designs for those cards, the architecture can vary, which means frequency can vary, just as die size and core count can vary the end result due to heat production, etc. Are you starting to understand how difficult it is to say what you are parroting so cavalierly? You are acting like certain facts of physics don't apply to both companies.

    As to the clocks on the GF sheet, it said 3GHz for HPC and SERVER chips. Even Intel's chips didn't clock much above that on HPC and server, especially when they started their 14nm designs. You are taking what happens to be made on the process as if the sheet said something it didn't, and using generalized statements from TSMC that do not correspond to what you point out on the other data sheet to draw a conclusion not actually founded on either. Then you point to a company's decision on base and boost clocks, which is made independent of the process as a marketing decision, to try to bolster this. Then you try to recover a little by pointing to architectural differences, which is one of the factors I mentioned in the last paragraph. In other words, stop repeating what people have been saying for the past year or so and start thinking critically. That is what is missing here.

    2. Actually, we do know what the HBM2 clocks will be. We know that Vega 56 had 800MHz or so, Vega 64 had 950MHz, and that the second-gen HBM2 is clocked around 1250MHz, which is a significant increase in speed, meaning higher bandwidth. We also know they are using quad stacks instead of two, meaning they are likely using dual interposers instead of a single one (so two stacks to one instead of one), which would allow for double that throughput, further increasing bandwidth, if I am remembering correctly from the HBM timeframe (I'd like to double-check on the interposers used, so that needs to be tested, but the speed of the HBM2 is known - try looking it up).
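
    A minimal Python sketch of how those clock and stack figures turn into bandwidth, assuming the standard 1024-bit interface per HBM2 stack and double-data-rate signalling (the clocks are the ones quoted above, not confirmed final specs):

        def hbm2_bandwidth_gbs(clock_mhz, stacks):
            bus_bits = 1024 * stacks                 # 1024-bit interface per stack
            transfers_per_s = clock_mhz * 1e6 * 2    # DDR: two transfers per clock
            return bus_bits * transfers_per_s / 8 / 1e9   # bits/s -> GB/s

        print(hbm2_bandwidth_gbs(800, 2))    # Vega 56, 2 stacks:  ~410 GB/s
        print(hbm2_bandwidth_gbs(950, 2))    # Vega 64, 2 stacks:  ~486 GB/s
        print(hbm2_bandwidth_gbs(1250, 4))   # 7nm Vega, 4 stacks: ~1280 GB/s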

    Also, this isn't about Navi, this is about Vega. Stop deflecting from the conversation by injecting a variable like a new architecture.

    And yes, you can extrapolate estimates from what people have already gotten: plot the performance increase they saw from overclocking the HBM2 against the frequency increase on the HBM2. Many received around 5% from a 50-150MHz overclock, IIRC. So estimating around double that, even though scaling is not perfectly linear, is not unreasonable.
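
    The extrapolation itself is simple arithmetic; a quick Python sketch with the rough figures from this thread (the ~5% per ~100MHz observation is anecdotal, and scaling will not be perfectly linear):

        observed_gain = 0.05           # ~5% from a 50-150MHz HBM2 overclock (rough figure)
        observed_oc_mhz = 100.0        # midpoint of that range
        stock_jump_mhz = 1250 - 950    # Vega 64 -> second-gen HBM2 clocks quoted above

        print(observed_gain * stock_jump_mhz / observed_oc_mhz)   # 0.15, i.e. ~15% if linear

    Discounting for diminishing returns, "around double" the 5% sits at the conservative end of that range.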

    3. Palm to forehead again here. You are literally ignoring the curve and not understanding that taking the power efficiency LITERALLY MEANS reducing the voltage. It also cuts the frequency that can be reached which means less performance. Hence why I reference the curve over and over again. You get one or the other, not both. If it looks like you got both, then you need to examine the other contributing factors that led to getting both, as it is NOT JUST THE PROCESS the die was made on that is causing it.

    Then you bring up TSMC here, which is not relevant to the curve I just described. They are not connected quite in the way you are trying to say. In fact, this second sentence brings up other variables, like TSMC's manufacturing process and potential yields on that process, which relates more to my comment about AMD increasing voltage to qualify more dies, to a degree, although there are other factors than die defect density at play here, which I suppose is part of what you meant with your statement.

    4. Once again, do you have proof of this statement, such as an article, or is it speculation? The known facts are that most AMD Vegas can be undervolted, reducing wattage down to around 220W without harming performance, while there are a few in the line where this is not possible. That means that although the majority of cards would qualify with the lower voltages, the higher voltage was used to qualify more dies as passing. You then assume the process itself is to blame, then start throwing around the statements addressed above.

    Where was it published that CPUs are at GF and GPUs at TSMC? Don't just say it; I showed you an article that did not say that. Show me an article that proves your assertion.

    Also, you are an idiot about what the article on the shared fin pitch meant. Where did the article saying that also say CPU production would be at TSMC? SHOW ME THE DAMN ARTICLES, because I believe you are making **** up at this point, based on the articles I've read on the matter.

    5. You have NO ****ING POINT here bringing up Intel and dragging a third process into this. You want an explanation of process differences that can explain the extra boost? How about the cobalt layer being a major difference, which can lower voltage to a degree, thereby allowing a higher boost frequency at the same power draw. STOP SAYING STUPID ****!
     
  8. yrekabakery

    yrekabakery Notebook Deity

    Reputations:
    200
    Messages:
    821
    Likes Received:
    751
    Trophy Points:
    106
    Wccftech strikes again!
    https://wccftech.com/exclusive-amd-navi-gpu-roadmap-cost-zen/
     
    hmscott likes this.
  9. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,150
    Messages:
    4,706
    Likes Received:
    6,576
    Trophy Points:
    581
    This is BS. What none of these assholes realize is that the multi-die aspect of Navi is meant to repeat the success of Zen: a multi-die design to increase performance and yields per wafer (due to defect density) while beating Nvidia to a smaller node, which then allows a small embedded-type GPU to be scaled to all parts of the market, from commercial down to the PS5. Also, notice how older articles said Navi was Koduri's baby, yet now there are articles saying he was upset over 2/3rds of the engineering team being on Navi, and low design hours and pay, blah blah blah. This literally contradicts everything from the past year. But, yes, let's not look at the logical development of aiming to design embedded, low-power solutions and moving the Zen architect to work on Navi, which directly suggests multi-die is coming; instead, focus on bad journalism built on off-the-record rumors with no analysis. That makes more sense, right?
     
  10. Deks

    Deks Notebook Prophet

    Reputations:
    983
    Messages:
    4,038
    Likes Received:
    1,243
    Trophy Points:
    231

    Bad journalism aside, we did get prior indications of a multi-die design through the use of Infinity Fabric, which, as you say, can scale from low end to high end... however, they do seem to be ignoring the multi-die design aspect - the question is why?

    AMD did mention in its older slides (if I remember accurately) that Navi is supposed to be scalable... so, yes, in that regard this 'rumor' doesn't make too much sense.
     
    hmscott and ajc9988 like this.