AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris GPUs

Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. hmscott

    hmscott Notebook Nobel Laureate

    Last edited: Jan 19, 2019
    Vasudev likes this.
  2. Deks

    Deks Notebook Prophet

    MSI CEO Dishes on Intel Shortage, AMD Growth, Taking Share from Apple

    https://www.tomshardware.com/news/msi-ceo-interview-intel-shortage-amd,38473.html

    And this is one of the reasons why I don't want to get an Intel/NV laptop.
    The CEO was actually honest, and kudos to him for admitting it, but that kind of behavior (not using AMD hardware in laptops) is really off-putting... Intel bought off most other OEMs before, and some of them (MSI in particular) seem to remain 'brand loyal' out of fear of losing continued monetary support.

    AMD should really consider making their own laptops with all their own hardware... optimize the heck out of the base model and use it as a template for more.
    Seriously, the reference laptops they used to demo all kinds of things in the past were better than what most OEMs could pull off with the same hardware.
     
    Last edited: Jan 20, 2019
    Vasudev, hmscott and Arondel like this.
  3. ajc9988

    ajc9988 Death by a thousand paper cuts

    Except your guess didn't point out that a May launch would be a 13-month cadence, which I mentioned around the same time, and you keep pushing the lack of chips, lack of yield, etc. On the lack of chips/yield and fab capacity I keep proving you wrong TIME and TIME AGAIN. Now, if that is what you meant by not being far off your guess, or you were only referring to the timeline, please ignore this, as that part was accurate. But there are two issues related to this that explain having to wait for summer:

    1) As Gamers Nexus pointed out, there are arguments over implementation of the new chipset. Dropping ASMedia is a good thing overall, which we can all agree on (and Intel should follow suit across their entire lineup, as both companies have been caught exposed security-wise by ASMedia's boondoggles in the past). There are also concerns over backwards implementation of PCIe 4.0, something Intel isn't even doing and that AMD is beating them to the punch on, and over when new boards can be ready; it seems the chips could be ready before the MB vendors are ready for X570, looking out 4-6 months (can you guess who said MB partners were to blame for a **** X370 launch: this guy). But MB manufacturers already proved they changed to a degree with X470, and they also laughed at Intel over the boards for the OCable 28-core Xeon being ready and widely available in December. So both sides are being hit by that.

    2) We have to look at the origin of the I/O chip. AdoredTV, along with some of the better AMD leakers, said there was ONLY one I/O die being made at GF, and it was massive (meaning the EPYC I/O die). Ian Cutress, when talking to Mark Papermaster, mentioned the die looked symmetrical, while AMD has not discussed that chip much at all. AdoredTV suggested, and I agree, that those dies may be able to be cut into quadrants and then individually binned. This is a distinct possibility, which would mean having to wait on EPYC to create enough I/O dies binned and cut down to be ready for the mainstream chip release.
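    A rough back-of-the-envelope of that quadrant theory (all numbers below are hypothetical, not leaked figures; the point is only how binned quadrants would stack up):

    ```python
    # Hypothetical illustration of the cut-into-quadrants theory: one
    # large EPYC I/O die from GF yields four individually binned
    # quadrants, each usable as a mainstream (AM4) I/O die.
    epyc_io_dies_per_wafer = 140   # assumed gross dies per 14nm wafer
    quadrants_per_die = 4          # AdoredTV's quadrant theory
    quadrant_yield = 0.85          # assumed fraction that bins OK

    mainstream_per_wafer = (epyc_io_dies_per_wafer
                            * quadrants_per_die * quadrant_yield)
    print(f"Usable mainstream I/O dies per wafer: {mainstream_per_wafer:.0f}")

    # Wafers needed to bank 1 million mainstream I/O dies for a launch:
    target = 1_000_000
    print(f"Wafers needed: {target / mainstream_per_wafer:.0f}")
    ```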

    Now, add to that the news that AMD could use the extra 10-20% of 7nm capacity TSMC has this spring, as cell phone manufacturers cut their production on softening smartphone projections, partially due to competitive pricing by Huawei in China, and it seems like AMD is missing an opportunity by waiting for the new boards to be ready for the launch.

    So kudos on timeline, just not the other crap that came with it.

    Edit: and for those that don't know, around that same time frame I got the release WRONG, expecting a March or April launch. May was at the late end, with Computex expected for the higher core count chips and motherboards, once the rumors of X570 with PCIe 4.0 at Computex came out. So I was more correct on what the hardware was, wrong on the timeline.

    Sent from my SM-G900P using Tapatalk
     
    Vasudev and hmscott like this.
  4. hmscott

    hmscott Notebook Nobel Laureate

    Why AMD Ryzen 3rd Gen & Zen 2 Should Get You VERY Excited!
    HardwareCanucks
    Published on Jan 20, 2019
    AMD's 3rd generation Ryzen processors are almost here, and in this video we go over some of the Ryzen 3rd Gen and Zen 2 features and possible performance. There's A LOT to get excited about.
     
    Raiderman and Vasudev like this.
  5. hmscott

    hmscott Notebook Nobel Laureate

    Four Versions of AMD Navi appeared in MacOS update
    January 21, 2019, Florian Maislinger
    https://www.pcbuildersclub.com/en/2019/01/four-versions-of-amd-navi-appeared-in-macos-update/

    "The AMD Navi graphics cards are expected to launch in the second half of 2019. For the first time, MacOS source code now contains references to Navi.
    Navi, AMD’s large 7nm attack
    In 2019, AMD is the first processor and graphics card manufacturer to set its sights on 7nm on a large scale. While Intel and Nvidia still use 14nm and 12nm respectively, AMD gradually converts all products to 7nm. The graphics card sector makes the start with the Vega 20 chip. This chip is found in the server graphics cards Radeon Instinct MI60 and MI50 as well as in the new Radeon VII. The processor section also receives the first processor dies with a 7nm structure width through Zen 2. The server area also starts with this. In this case, Epyc 2 comes onto the market with up to 64 cores. In the middle of the year Ryzen 3000 will follow, possibly with up to 16 cores on AM4.

    However, customers are awaiting the next generation of graphics cards, codenamed Navi, almost even more eagerly. While Vega 20 is only a 7nm version of the 14nm Vega 10 GPU, Navi is the first graphics card generation based exclusively on the 7nm process. With the new generation, AMD wants to move its whole product portfolio to 7nm in 2019, replacing the Polaris and Vega graphics cards. Whether Navi will also replace the Radeon VII is still unknown. There will also be solutions based on the new generation for the mobile market, and this is now also indicated by the source code of a macOS update.

    Four Navi GPUs appeared in macOS Mojave
    In the TonyMacx86 forum (via Videocardz), a user discovered something in the macOS Mojave source code: in a file called "AMDRadeon6000HWServices.kext", the GPUs Navi 9, Navi 10, Navi 12 and Navi 16 can be found. The chip might always be the same, with the names only indicating variants or configurations, and it is still uncertain which configurations are involved. However, the number could be the number of compute units, which is why the Navi graphics cards in the source code are probably mobile GPUs. The largest of these GPUs would therefore have 1,024 shader units. Another theory is that these are already the final chip names, in which case Navi 10 could also be the top model with 64 CUs.

    The fact that Apple is already adapting the first lines of code for Navi could also mean that the new graphics cards will come to market much earlier than previously assumed; six weeks before the launch of Vega, the macOS source code showed the first signs of that card. Other sources mention a presentation of the Navi graphics cards at E3 2019 in June. AMD might launch the mobile versions earlier, with the desktop versions following at a later date."

    AMD Radeon Navi
    https://www.insanelymac.com/forum/topic/336366-amd-radeon-navi/

    Navi Patch in macOS Mojave update (includes device ID and mentions Navi 9, Navi 10, Navi 12, and Navi 16)
    https://www.reddit.com/r/Amd/comments/ai5j31/navi_patch_in_macos_mojave_update_includes_device/
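    If the numbers in that kext really are compute-unit counts, the article's 1,024-shader figure follows from GCN's 64 stream processors per CU. Whether Navi keeps that ratio is itself an assumption; a quick sketch:

    ```python
    # GCN uses 64 stream processors (shaders) per compute unit; we
    # assume Navi keeps that ratio, which the kext itself doesn't confirm.
    SHADERS_PER_CU = 64

    navi_variants = {"Navi 9": 9, "Navi 10": 10, "Navi 12": 12, "Navi 16": 16}
    for name, cus in navi_variants.items():
        print(f"{name}: {cus} CUs -> {cus * SHADERS_PER_CU} shaders")

    # Navi 16 -> 1,024 shaders, matching the article's "largest of these
    # GPUs could therefore have 1,024 shader units".
    ```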

    The AMD Radeon VII supports DLSS via DirectX 12
    January 21, 2019, Florian Maislinger
    https://www.pcbuildersclub.com/en/2019/01/the-amd-radeon-vii-supports-dlss-via-directx-12/

    "A feature of the Nvidia Turing graphics cards beside raytracing is DLSS. The AMD Radeon VII also supports this technique via the DirectML API.
    DLSS: 4K appearance, Full HD gaming load
    At the presentation of the new Turing graphics cards in August last year, there were several improvements and new techniques to see. The focus was of course on raytracing. The ray calculation results in much more realistic images, but the required hardware is expensive and has to be very strong. Gaming on UHD and high frame rates is practically no longer possible. However, the effects are very breathtaking, which is why it is a useful addition for some.

    The other technology in focus was Deep Learning Super Sampling (DLSS), a new kind of anti-aliasing. Super sampling has been around for years and is handled by every graphics card; with Turing, Nvidia takes the technology to a new level. In its own data centers, hundreds of Tesla V100 graphics cards compute games in full with the help of deep learning, training a neural network per game. The final calculation then takes place on the Turing graphics cards, and the resulting edge smoothing is far superior to conventional anti-aliasing techniques. With Turing GPUs and DLSS activated, the image is rendered at a lower resolution, then upscaled to 4K and significantly improved by the edge smoothing. The result is a 4K-looking image that does not require the computing power of native 4K rendering. Thus FPS increases, and in some cases DLSS even looks better than native 4K.

    The AMD Radeon VII can also handle DLSS
    DLSS is such a topic with the Turing generation because, for the first time, the cards include Tensor and RT cores alongside the normal shader cores. The Tensor cores perform the inferencing and thus do not impact the shaders. Such Tensor cores are currently still completely missing from the competition, represented by AMD. However, DLSS could become a topic again with the new Radeon VII: the card is significantly stronger than previous Radeon graphics cards, so the way could be clear for DLSS via the DirectML API. The API was developed by Microsoft and is integrated directly into DirectX 12. Through it, deep learning inference can also run on the shaders without the need for Tensor cores; the DirectML API works with Nvidia's Tensor cores as well. So while the DLSS alternative on the Radeon VII would run on the shaders, Turing graphics cards could continue to run it on the Tensor cores without putting load on the CUDA shaders.

    The question, as always, is support from game developers, who are currently reluctant to integrate Nvidia's DLSS into games; whether yet another alternative can prevail at all is rather uncertain. AMD also wants to research raytracing for its own future graphics cards, and whether DLSS via the DirectML API will be a topic there remains to be seen."

    The AMD Radeon VII supports DLSS via DirectX 12
    https://www.reddit.com/r/Amd/comments/ai9ryx/the_amd_radeon_vii_supports_dlss_via_directx_12/
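    The pitch behind DLSS-style reconstruction in the article above is mostly arithmetic: render fewer pixels internally, then upscale. A minimal sketch of the pixel-count saving, ignoring the cost of the upscale pass itself (which on a Radeon VII via DirectML would land on the shaders rather than on dedicated Tensor cores):

    ```python
    # Pixel-count saving from rendering at a lower internal resolution
    # and upscaling to 4K. The 1440p internal resolution is an assumed,
    # commonly cited figure, not something the article specifies.
    native_4k = 3840 * 2160
    internal = 2560 * 1440

    print(f"Pixels shaded vs native 4K: {internal / native_4k:.1%}")
    print(f"Shading work saved before upscale cost: {1 - internal / native_4k:.1%}")
    ```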
     
    Last edited: Jan 21, 2019
    Vasudev and Deks like this.
  6. Deks

    Deks Notebook Prophet

    Vasudev and hmscott like this.
  7. Talon

    Talon Notebook Virtuoso

  8. Deks

    Deks Notebook Prophet

    Vasudev and hmscott like this.
  9. Vasudev

    Vasudev Notebook Nobel Laureate

    I said this a few years ago. Most people prefer gaming on GPUs rather than amateur cryptocurrency mining or data mining on the GPU, which is where AMD excels.
    In my opinion, they should create profiles that optimise an AMD GPU for Gaming, Compute, Gaming+Compute, and Gaming+Content Creation modes on consumer cards, to get the best performance out of their GPUs.
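    A minimal sketch of what such per-workload profiles could look like. Everything here is hypothetical; no such profile API exists in AMD's driver (on Linux, the closest real knobs are the amdgpu sysfs files power_dpm_force_performance_level and pp_od_clk_voltage):

    ```python
    # Hypothetical per-workload GPU profiles, as suggested above. The
    # names, clocks, and apply_profile() are illustrative only and do
    # not correspond to any real AMD driver API.
    PROFILES = {
        "gaming":         {"core_mhz": 1700, "mem_mhz": 1100, "power_w": 220},
        "compute":        {"core_mhz": 1400, "mem_mhz": 1200, "power_w": 180},
        "gaming+compute": {"core_mhz": 1550, "mem_mhz": 1150, "power_w": 200},
        "content":        {"core_mhz": 1500, "mem_mhz": 1200, "power_w": 190},
    }

    def apply_profile(name: str) -> None:
        """Stand-in for pushing a profile to the driver; just prints it."""
        print(f"Applying '{name}': {PROFILES[name]}")

    apply_profile("gaming")
    ```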
     
  10. Deks

    Deks Notebook Prophet

    Hmm... I wouldn't call AMD's compute capability 'amateur'.
    Professional areas (AI learning, medical imaging, GPU-accelerated workloads) are where AMD's compute functions can really shine, provided the devs actually optimize the software for it... but NV with its deep pockets cornered the market and got a lot of devs to use closed-source features as opposed to the open ones from AMD, which do the same if not better with less stress on the hardware.

    However, I do agree that when AMD has a compute-heavy uArch like Vega (or even Polaris), they would do well to simply disable a portion of the compute capability, which would leave room for increasing core and VRAM clocks.
    Obviously, higher clocks on core and VRAM will only take you so far. Nvidia can achieve the same or better results in games because they pack their GPUs with more gaming-relevant hardware (more ROPs, texture units, etc.).

    What I find interesting, though, is that Vega reaches Nvidia's gaming performance (at least against their second-best top-end GPU) with lower core clocks than the comparable Nvidia parts need.

    To me, this suggests AMD might have an advantage in raw IPC with GCN... and if they simply disabled, say, 20-30% of the compute capability via BIOS and ramped the clocks up to Nvidia levels or beyond, it could probably match Nvidia's top-end gaming GPU in both gaming and efficiency (possibly even surpass it, provided AMD optimizes the voltages from the factory).
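    A rough sketch of that trade-off using the common dynamic-power approximation (power scales with active units × frequency × voltage squared). All inputs are hypothetical, not measured Vega figures:

    ```python
    # Back-of-envelope for "disable ~25% of CUs, raise clocks": dynamic
    # power scales roughly with active_units * frequency * voltage^2.
    def rel_power(units_frac, freq_frac, volt_frac):
        return units_frac * freq_frac * volt_frac ** 2

    baseline = rel_power(1.00, 1.00, 1.00)
    # Disable 25% of CUs, push clocks +20%, assume that needs ~+8% voltage:
    tuned = rel_power(0.75, 1.20, 1.08)

    print(f"Relative power vs stock: {tuned / baseline:.2f}x")
    # ~1.05x power for +20% clocks on the remaining units, i.e. roughly
    # power-neutral IF game performance tracks clocks rather than CU count.
    ```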

    But I don't see what the big deal would be in using, for example, the MI50/Radeon VII as a baseline and then disabling a certain amount of compute so they can increase the clocks.
    It would indeed be like using different profiles, only on a software level (plus, something like this was done in the past when the PRO cards were turned into gaming GPUs via drivers).
     