Nvidia RTX 20 Turing GPU expectations

Discussion in 'Sager and Clevo' started by Fastidious Reader, Aug 21, 2018.

  1. Fastidious Reader

    Fastidious Reader Notebook Consultant

    Reputations:
    1
    Messages:
    243
    Likes Received:
    24
    Trophy Points:
    31
    So when in 2019 might the RTX 20 Clevo laptop GPUs actually be coming out? Will the work done on the Pascal series expedite the process?

    What kind of increase will we be looking at price-wise if they are seen more as another tier above the Pascal 10 series?

    Will it actually be another leap in performance like last time, or will it be more in the capability department due to the ray tracing?

    Thoughts?

    Will laptops even benefit from such performance increases?
     
  2. bennyg

    bennyg Notebook Deity

    Reputations:
    1,089
    Messages:
    1,881
    Likes Received:
    1,591
    Trophy Points:
    181
    Looks to me like modest performance gains will be had across the board from memory bandwidth and core count. Nvidia will not be stupid enough to release parts that are not definitively faster than last gen and able to justify the across-the-board price increase (and at a time when used crypto-mining parts will only get cheaper).

    DX12 async compute apps will benefit heavily, as it seems Nvidia have made an effort this time.

    Ray tracing is an added tech, and how worthwhile it is in real implementations (as well as its performance impact) is yet to be seen, but it will not be universal, as it'll for sure be a GameWorks feature.

    But Nvidia's history of getting technological innovations deployed is patchy, so as always the risk is borne by the early adopters.
     
  3. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    7,561
    Messages:
    47,530
    Likes Received:
    12,832
    Trophy Points:
    931
    A fair chunk of die area has gone to the ray tracing and AI components.
     
    hmscott likes this.
  4. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    4,971
    Messages:
    17,555
    Likes Received:
    21,557
    Trophy Points:
    931
    From this photo of an Nvidia presentation slide, it appears as though 50% of the new die real estate has been split between AI (Tensor Cores) and ray tracing (RT Cores), leaving the traditional shader/compute functionality with about the same area as the previous generation.
    [Attached image: bigg.JPG, Nvidia presentation slide of the Turing die layout] (Source)

    It's hard to measure, but to me it looks like the RTX shader and compute section has less area in comparison to the previous Pascal die shown.

    Those Tensor Core and RT Core areas are wasted space for current games, and for me RTX is something I would disable in a game to get better performance, like I disable GameWorks hair effects etc. now.
     
    Last edited: Aug 21, 2018
    Mr. Fox likes this.
  5. Fastidious Reader

    Fastidious Reader Notebook Consultant

    Reputations:
    1
    Messages:
    243
    Likes Received:
    24
    Trophy Points:
    31
    So I wonder what the performance will be. Hearing those power needs, I'm wondering if they'll even be able to fit these in laptops. Clevo desktop replacements maybe, but possibly not the laptop-spec gamer models.

    Interesting. They've been saying that ray tracing is a lot more efficient than other methods, so maybe they'll have a good boost, once games have ray tracing implemented, that is. That in itself will take a while before benchmarks get added.
     
    hmscott likes this.
  6. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    4,971
    Messages:
    17,555
    Likes Received:
    21,557
    Trophy Points:
    931
    Ray tracing isn't replacing the current shader/compute model, otherwise there would be no shader/compute section in the new die...

    There seems to be additional performance in the traditional area too, but Nvidia failed to demonstrate the improvements compared to previous generations.

    Look at all that die real estate taken up that could have been dedicated to real overall gaming performance...

    The RT features are add-ons, eye candy, like the other GameWorks crap that slows down games. Except this time Nvidia added a hardware assist that is proprietary to their products.

    Nvidia is trying to lock in their lead by redefining the game, given AMD is always nipping at their heels and Intel is once again trying to get it together to put out their own discrete GPU.

    50%+ is a lot of die space to dedicate to eye candy that most of us will end up turning off to reduce the power / heat generated by those areas of the die and improve gaming performance. :)

    Edit: I hope there is a way to completely disable / power off the Tensor Cores and RT Cores when they aren't useful... that would be most of the time.
     
    Last edited: Aug 21, 2018
  7. Stooj

    Stooj Notebook Deity

    Reputations:
    158
    Messages:
    709
    Likes Received:
    544
    Trophy Points:
    106
    A few of my thoughts:
    1. I suspect we'll be seeing models/announcements late this year. Maybe December to get in with the Christmas timeline or January for "back-to-school/work" type stuff.
    2. The perf/watt change (most important to laptops) isn't terribly large. Maybe 15-20% if we're lucky. The fab change isn't as drastic as Maxwell -> Pascal was, so don't expect any miracles. The jump to GDDR6 will also account for a significant portion of that boost (see the bandwidth sketch after this list).
    3. The largest unknown is the RT cores. We don't have benchmarks yet so it's hard to know how impactful they'll be. RT really only makes sense at the high end anyway.
    4. The TDPs are increased on the desktop cards, but the safe assumption is that that's based on all cores being taxed (FP32 + Tensor + RT). So for most games only the FP32 portion should get hit hard, and I suspect mobile cards will be able to squeeze into their current TDP brackets.
    5. Nobody knows quite yet if there will even be RT/Tensor cores in the mid-to-low range models (X60 and X50/Ti), which make up the bulk of the market. Chances are they'll be standard FP32 setups, and as such a straight upgrade over their Pascal predecessors.
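
    To put the GDDR6 point in item 2 into numbers, here's a quick bandwidth sketch. The 10 Gbps GDDR5X and 14 Gbps GDDR6 rates on a 256-bit bus are the commonly quoted desktop 1080/2080 figures, so treat them as assumptions rather than anything confirmed for the mobile parts:

        #include <cstdio>

        // Peak memory bandwidth in GB/s = per-pin data rate (Gbps) * bus width (bits) / 8.
        // Assumed figures: GTX 1080 = 10 Gbps GDDR5X, RTX 2080 = 14 Gbps GDDR6, both on a 256-bit bus.
        double peak_bandwidth_gbs(double gbps_per_pin, int bus_width_bits) {
            return gbps_per_pin * bus_width_bits / 8.0;
        }

        int main() {
            const double pascal = peak_bandwidth_gbs(10.0, 256); // ~320 GB/s
            const double turing = peak_bandwidth_gbs(14.0, 256); // ~448 GB/s
            std::printf("GTX 1080: %.0f GB/s\n", pascal);
            std::printf("RTX 2080: %.0f GB/s (+%.0f%%)\n", turing, (turing / pascal - 1.0) * 100.0);
            return 0;
        }

    That works out to roughly +40% raw bandwidth before any core-count or clock changes, which is why the memory upgrade alone can cover a decent share of the gen-over-gen gain.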

    Most people forget that ray tracing is actually implemented at the API level.

    DirectX and Vulkan will both support native ray tracing calls. All Nvidia is doing is offloading those particular jobs to specialised cores to speed them up significantly. AMD will likely do the same thing. So it's not a lockout like previous GameWorks features, which are implemented at the engine level.
    To be honest, assuming AMD isn't too far down the road with their "next-gen" design, ray tracing is actually a good thing for them. AMD's architecture has always excelled at parallelisation, and they could do exceptionally well if they can integrate ray tracing operations into their existing compute units, which would allow much better allocation of resources and less wasted die space.
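
    As a concrete illustration of that API-level point, here's roughly what the vendor-neutral side looks like under DirectX 12: the engine asks the runtime whether any ray tracing tier is exposed and falls back to rasterised effects if not. A minimal sketch, assuming an ID3D12Device has already been created elsewhere; this is generic DXR boilerplate, not something specific to Turing:

        #include <windows.h>
        #include <d3d12.h>

        // Ask the D3D12 runtime whether DirectX Raytracing (DXR) is exposed at all.
        // 'device' is assumed to be a valid ID3D12Device* created elsewhere (e.g. via D3D12CreateDevice).
        bool SupportsRaytracing(ID3D12Device* device) {
            D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
            if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                                   &options5, sizeof(options5)))) {
                return false;  // older runtime/SDK: the OPTIONS5 query itself isn't available
            }
            // Any tier above NOT_SUPPORTED means the driver accepts the DXR calls,
            // regardless of whether they run on dedicated RT hardware or on the ordinary shaders.
            return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
        }

    The engine-facing calls (building acceleration structures, DispatchRays and so on) are the same whichever vendor's GPU is underneath; dedicated RT cores only change how fast they run, which is what makes this different from an engine-level GameWorks effect.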

    The trick is, Nvidia is also pushing very hard for this to be the future of rendering. This is both a smart business move (if they push it before AMD then they have the next "killer" feature ahead of time which buys mind-share) and a good technological move (ray-tracing is the future and you can now scale 2 processor types instead of 1). That being said, if Ray-Tracing takes off too well, it also cuts off all older GPUs.

    Personally I'll probably end up with a 2080 Ti in my desktop rig, steep as the price is. Currently on a 980 Ti, so RT or not, I'll probably be roughly doubling my GPU performance. Even so, it's primarily a VR rig, and ray tracing can be hugely beneficial to VR performance if used correctly. There's a reason why most VR games have piss-poor lighting: most of the tricky lighting we do now either does not translate to simultaneous projection setups or is straight up broken.
     
    hmscott likes this.
  8. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    3,119
    Messages:
    10,511
    Likes Received:
    7,492
    Trophy Points:
    931
    ok so we are talking about REGULAR games here, which is gonna be the absolute majority by FAR for the foreseeable future. thus, no AI, no Tensor cores, no raytracing gimmicks supported.

    based on that, the specs indicate a 25-30% performance increase for each of the three new cards. that's it. the regular, run-of-the-mill 25% gen-over-gen increase we've seen for like...forever? :D

    soooo GAIZ! NOW is the time to go and get yourselves 1080 and 1080 Ti cards for CHEAP! perfect example: the 1080 Ti Asus Strix went from 870€ to 670€ in ONE FRIGGIN DAY on August 21st. and it's still gonna be the second-fastest card on the market, directly beneath the 2080 Ti; the regular 2080 ain't gonna beat it until games support raytracing, tensor cores and AI on a broad basis. not gonna happen until the next or even the gen after next is out.

    mark my words ;)
     
    ajc9988, KY_BULLET and hmscott like this.
  9. yrekabakery

    yrekabakery Notebook Deity

    Reputations:
    466
    Messages:
    1,495
    Likes Received:
    1,431
    Trophy Points:
    181
    25-30% sounds overly optimistic. The 2080 only has 15% more CUDA cores than the 1080, and if anything it looks like it clocks lower on the core.

    The 2070 is even worse, only 12.5% more CUDA cores than the 1070 notebook, at lower core clocks.
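
    For anyone who wants to check those percentages, it's just the published shader counts divided out. A quick sketch, assuming the announced desktop counts of 2944 (2080) and 2304 (2070) against 2560 for the 1080 and 2048 for the 1070 notebook part:

        #include <cstdio>

        int main() {
            // Assumed published CUDA core counts: desktop Turing launch specs vs. the Pascal parts named above.
            const double gtx1080 = 2560.0, rtx2080 = 2944.0;
            const double gtx1070_notebook = 2048.0, rtx2070 = 2304.0;
            std::printf("2080 vs 1080:          +%.1f%% cores\n", (rtx2080 / gtx1080 - 1.0) * 100.0);          // +15.0%
            std::printf("2070 vs 1070 notebook: +%.1f%% cores\n", (rtx2070 / gtx1070_notebook - 1.0) * 100.0); // +12.5%
            return 0;
        }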
     
    KY_BULLET likes this.
  10. Fastidious Reader

    Fastidious Reader Notebook Consultant

    Reputations:
    1
    Messages:
    243
    Likes Received:
    24
    Trophy Points:
    31
    Gotta tuck in all that RT and AI stuff somewhere.

    Honestly, that should have stayed with the business graphic arts cards IMO, at least for the first generation.

    Putting all of that into these cards when it'll be another generation or two before game design engines are able to fully implement it is just gonna result in a bunch of half-baked products.
     
    hmscott and KY_BULLET like this.