RTX 3080 trumps 2080ti by a whopping margin

Discussion in 'Gaming (Software and Graphics Cards)' started by JRE84, Jun 24, 2020.

  1. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    More like monopoly. AMD hasn't made a mobile GPU worth caring about in almost a decade.
     
    raz8020, seanwee and Papusan like this.
  2. JRE84

    JRE84 Notebook Virtuoso

    Reputations:
    856
    Messages:
    2,505
    Likes Received:
    1,513
    Trophy Points:
    181
    I heard the 6900 XT matches and sometimes beats a 3090 at 4K... hmm, probably rigged results, but at least it's something. Also, I was thinking of just doing an eGPU setup, but these cards are next to impossible to buy.
     
  3. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,105
    Likes Received:
    3,935
    Trophy Points:
    331
    They had to either severely underclock the 3080 to hit a 150-200W power envelope or use the 3070, which requires less power out of the gate. The 3000 series is power hungry, especially the 3080 and 3090. You're talking about a reduction from ~350W+ down to 150-200W for a true mobile 3080. That's pretty substantial. In the end, it may have been a wash benchmark-wise: a heavily capped 3080 vs. a normalized 3070 (~10% less @ 150W) running in a mobile envelope.
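    Just as a rough back-of-envelope (Python, and the desktop 3070 wattage below is my own ballpark, not an official figure):

    ```python
    # Back-of-envelope look at the power cuts being discussed.
    # Wattages are rough assumptions, not measured figures.
    desktop_3080_w = 350                      # ~350W+ desktop 3080, as above
    desktop_3070_w = 220                      # assumed desktop 3070 board power
    mobile_low_w, mobile_high_w = 150, 200    # assumed mobile power envelope

    for label, watts in (("3080", desktop_3080_w), ("3070", desktop_3070_w)):
        min_cut = 1 - mobile_high_w / watts   # cut needed just to reach 200W
        max_cut = 1 - mobile_low_w / watts    # cut needed to reach 150W
        print(f"Desktop {label}: needs a {min_cut:.0%}-{max_cut:.0%} power cut "
              f"to fit {mobile_low_w}-{mobile_high_w}W")
    ```

    Roughly half the board power has to go to squeeze a desktop 3080 into that envelope, while a 3070-class part only gives up a fraction of its budget.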

    Performance-wise, it looks like another big ol' round of meh. The 3080M (a.k.a. the artist formerly known as 3070) will bring 2080 Super to 2080 Ti(-ish) levels of performance, which is a not-so-large step over mobile 2080 Super performance, but it is an upgrade... technically. Then again, 2080 mobile to 2080 Super mobile was severely disappointing too. I do think a 200W version of the 3080M would equal a full-fat stock 3070 or better. Something like that in an X170 (or even an Alienware 51m R2) would be decent.

    When I read about shoehorning 3080 hardware into laptops but grossly capping clocks and memory speed, all I see is a small subsection of overclockers bemoaning the fact that they won't be able to take a capped "real" 3080, go in there and tinker with it, and shunt it to push normal desktop speeds as much as possible. "Don't restrict me in any way, bro..." :mad::p

    As for Nvidia being a money-grubbing corporation... what else is new? Instead of innovating for mobile solutions, they opted to ramp up desktop power requirements for Ampere with no comparable mobile solution besides using a mid-tier card and relabeling it, old-school Nvidia style. With AMD hot on their heels now and a viable threat, they can't hold anything back on the desktop, where they're not constrained, but mobile solutions will suffer if the desire is desktop parity.
     
    raz8020 and Papusan like this.
  4. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,105
    Likes Received:
    3,935
    Trophy Points:
    331
    The 6900 XT is the real deal for pure rasterization performance, but it really is just a 6800 XT with 8 more CUs enabled. Subpar RT and no DLSS to boot. Overall, on average across a wide selection of games, I think the 3090 came out ahead by ~4%? The 6800 XT, 3080, 6900 XT, and 3090 are so close in so many games without RT or DLSS enabled that any of them will work. Of course, if RT or DLSS come into play, AMD falls behind quickly.

    If you go AMD, the 6800 XT is definitely the sweet spot, but I'd still take a 3080 over a 6800 XT...

    ....even with 10GB and modern games knocking on that VRAM limit (that one was for you @yrekabakery !! :p:D)
     
    raz8020, seanwee and yrekabakery like this.
  5. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    I wonder how the maxed-out GA104 for the mobile 3080 (200W Max-P) will perform vs. a power-gimped real desktop 3080 card running at 200W.

    It's all about the RT performance nowadays, as Nvidia said so nicely to Hardware Unboxed :D Not about max FPS or big numbers in 3DMark Fire Strike and Time Spy.
     
    Last edited: Jan 1, 2021
    raz8020, JRE84, etern4l and 1 other person like this.
  6. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    I've come around to the notion that the RX 6800/6900 are overpriced. They simply lack too many features compared to Ampere. The more RT effects a game employs, the further they fall behind. They don't even match Turing in fully path-traced games like Quake II and Minecraft. The 6800 XT runs half as fast as the lowest RTX card, the 2060, in Minecraft when DLSS is enabled, lol. And that's another thing: without DLSS, AMD simply falls even further behind in both RT and rasterization.

    This is probably why CDPR didn’t enable ray tracing support on AMD GPUs in Cyberpunk. Cyberpunk is the most comprehensive ray-traced AAA title, and requires DLSS for acceptable performance with RT enabled. The performance with RT enabled on AMD would just be too poor and hurt CDPR’s image. There’s no technical reason otherwise that Cyberpunk’s RT shouldn’t work on AMD, since it just uses the universally supported DXR feature of the DX12 API.

    Not to mention, AMD can't compete with the Turing/Ampere NVENC encoder when it comes to image quality at low, livestream-level bitrates. And performance at 4K+ favors Ampere's float-heavy SM design and much higher memory bandwidth, allowing it to brute-force its way past AMD's Infinity Cache and much lower raw bandwidth.

    16GB of VRAM is nice, but I’d rather have 10GB of much faster memory. I’m of the opinion that the 3080 will hit a bottleneck in raw performance before memory capacity, even in future games.
     
    raz8020, JRE84 and electrosoft like this.
  7. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    Well, Nvidia could have used the GA102 chip at lower clocks, but there wouldn't be any benefit, since they would have to clock the GA102 so low that it would match a GA104 anyway. And Nvidia will charge $1200 for a mobile GA104, which brings them more profit than delivering a mobile GA102 at lower clock speeds. Also, supply of the GA104 seems to be a bit better than supply of the GA102, so it was in fact an easy choice for Nvidia to just use the GA104 instead of the GA102.

    I'd highly recommend everybody skip this mobile generation and wait for the Super refresh, or whatever name Nvidia gives the 7nm Ampere cards expected to release in Q3-Q4 of 2021. At a 150W cooling limit, going from 8nm to 7nm should make an interesting difference in performance.

    Interestingly enough, the desktop RTX 3080 has a wonderful performance-per-watt ratio, which is by definition "efficiency". I blame laptop engineers for not being able to come up with better cooling solutions that allow laptops to use 300+W GPUs. They can barely handle 150W in the so-called "gaming" laptops, although large Clevo laptops can easily handle 250W GPUs.
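    Rough math, with made-up FPS and wattage numbers just to show what "performance per watt" means here (real figures vary by game and card):

    ```python
    # Performance per watt = average FPS / board power (W).
    # FPS and wattage values are placeholders for illustration only.
    cards = {
        "Desktop RTX 3080 @ 320W": (100.0, 320),
        "Same card power-capped @ 200W": (85.0, 200),
        "Hypothetical 150W mobile part": (65.0, 150),
    }

    for name, (fps, watts) in cards.items():
        print(f"{name}: {fps / watts:.2f} FPS per watt")
    ```

    Running the same silicon lower on its voltage/frequency curve generally improves FPS per watt, so a power-capped card can look surprisingly efficient even while losing absolute performance.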


    Well, to be fair, the RTX 3080M should have a good 40% performance improvement in RT over the mobile 2080 and 2080S, thanks to the new 2nd-gen RT cores, plus improved DLSS from the 3rd-gen Tensor cores. But in regular rasterization the difference should be more like 20-25%.
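    Quick illustration of what those estimates would look like applied to an arbitrary baseline (the 60 FPS starting point is made up, and the percentages are just the guesses above, not benchmarks):

    ```python
    # Apply the estimated generational uplifts to an arbitrary baseline.
    baseline_fps = 60.0                   # hypothetical mobile 2080 Super result
    rt_uplift = 0.40                      # ~40% estimated RT improvement
    raster_low, raster_high = 0.20, 0.25  # estimated rasterization improvement

    print(f"RT-heavy game:   ~{baseline_fps * (1 + rt_uplift):.0f} FPS")
    print(f"Rasterized game: ~{baseline_fps * (1 + raster_low):.0f}-"
          f"{baseline_fps * (1 + raster_high):.0f} FPS")
    ```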

    My friend, I wouldn't touch AMD unless you can't get your hands on an RTX graphics card. The RTX 3080 is the sweet spot of this generation of graphics cards. It has all the performance of the RTX 3090/6900 XT minus 5-10% depending on the resolution, at the good price of $700; it beats the hell out of AMD in RT, and you get Tensor cores for AI/DLSS and the broadcasting suite. A really nice product. And if you're willing to spend $1000, just wait for the RTX 3080 Ti, but the performance jump from an RTX 3080 will be something like 5-7%; you'd really be going after those 20 GB of GDDR6X more than anything with the RTX 3080 Ti.

    Nice video. I wonder what the numbers would have been with a 150W cap. Probably more like a 25% performance decrease at 1080p and a 50% performance decrease at 1440p. We need better cooling solutions on laptops. That's the key, my friend.
     
    JRE84 likes this.
  8. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    No, they wouldn't have used the GA102 chip anyway, since mobile GeForce has never had a greater than 256-bit bus.
     
    raz8020 likes this.
  9. JRE84

    JRE84 Notebook Virtuoso

    Reputations:
    856
    Messages:
    2,505
    Likes Received:
    1,513
    Trophy Points:
    181
    Does anyone have an ETA for the mobile 3080?
     
  10. aarpcard

    aarpcard Notebook Deity

    Reputations:
    606
    Messages:
    1,129
    Likes Received:
    284
    Trophy Points:
    101
    Really, there are two issues here preventing the full-fledged 3080 from making it into laptops.

    1) There has never been a mobile GPU with a memory bus wider than 256-bit. The challenge of cramming that onto an MXM board or laptop motherboard is immense. This is the same reason laptops never got NVLink support: simply not enough real estate on the PCBs.

    When I saw the 3080 was a GPU with a >256-bit bus, I knew it was never going to be put in a laptop (rough bandwidth math in the sketch after point 2).

    2) The average consumer for nearly a decade now has been buying thin-and-light "gaming" laptops with a fervor. They've shown that they don't care about maximum performance, are perfectly happy with the computer being disposable (BGA components), and will pay $2-2.5k+ every year for the newest model.

    If that is what the consumer wants, then why would NVIDIA or any other laptop manufacturer undertake the extremely expensive task of trying to cram a GPU with a >256-bit bus into a laptop and then cool it at 300 watts? Consumers have said they don't want that.
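    Just to put numbers on the bus-width point from 1) above: peak memory bandwidth is roughly bus width (in bits) divided by 8, times the effective data rate. A quick sketch with the commonly quoted desktop specs (treat the exact data rates as approximate):

    ```python
    # Peak memory bandwidth (GB/s) ~= (bus width in bits / 8) * data rate (Gbps).
    def peak_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
        return bus_bits / 8 * data_rate_gbps

    configs = {
        "RTX 3080 (320-bit GDDR6X @ ~19 Gbps)": (320, 19.0),
        "RTX 3070 (256-bit GDDR6 @ ~14 Gbps)": (256, 14.0),
    }

    for name, (bus, rate) in configs.items():
        print(f"{name}: ~{peak_bandwidth_gbs(bus, rate):.0f} GB/s")
    ```

    Dropping to a 256-bit board layout means giving up a big chunk of that bandwidth before clock or power limits even enter the picture.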

    I blame the consumer more than the companies. It's not like there weren't other options out there for the past several generations. If the average consumer actually wanted performance, then everyone would be rocking Clevo P870s (or similar) with their 500+ watts of GPU cooling capability, and thin-and-light "gaming" laptops would be a niche.

    Sent from my SM-G973U using Tapatalk
     