Volta: NVIDIA's Next Generation GPU Architecture (2017-2018)

Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Aug 14, 2016.

When do you believe Volta will be officially announced?

  1. 1H 2017 - Around GTC '17, at the end of May. (7.5%)
  2. 2H 2017 - Around the end of Summer or early Fall 2017. (22.4%)
  3. 1H 2018 - Early 2018, probably Q1 (Jan., Feb., or Mar.). (33.6%)
  4. I believe there will be another iteration of Pascal for 2017. (20.1%)
  5. I don't care to answer or speculate. (18.7%)
  6. I don't know. (6.7%)
  7. None of the above. I'll post below. (1.5%)
Multiple votes are allowed.
Thread Status:
Not open for further replies.
  1. Support.2@XOTIC PC

    Support.2@XOTIC PC Company Representative

    Reputations:
    486
    Messages:
    3,148
    Likes Received:
    3,490
    Trophy Points:
    331
    I'm expecting to see Volta around summer of 2018.
     
  2. TomJGX

    TomJGX I HATE BGA!

    Reputations:
    1,454
    Messages:
    8,703
    Likes Received:
    3,312
    Trophy Points:
    431
    Agreed, I expect Pascal to be milked to 2018 at least!!

    Sent from my LG-H850 using Tapatalk
     
    killkenny1 and i_pk_pjers_i like this.
  3. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Considering Pascal didn't exist on the roadmap until after Maxwell was announced (NVIDIA updated the roadmap because HBM wasn't ready), I believe there's not much reason to stay with Pascal on 16nm when you can launch Volta on 16nm and milk Volta for three generations. Out with the old, in with the new.

    AMD is about to release Zen and their HBM2 GPU lineup in Q1 2017: Vega. NVIDIA will likely follow and compete with Volta.
     
    Last edited: Aug 19, 2016
    i_pk_pjers_i likes this.
  4. GTO_PAO11

    GTO_PAO11 Notebook Deity

    Reputations:
    173
    Messages:
    1,302
    Likes Received:
    203
    Trophy Points:
    81
    Ah, and I'm expecting to buy a laptop from you at that time. Need that time to save haha
     
    i_pk_pjers_i likes this.
  5. Support.2@XOTIC PC

    Support.2@XOTIC PC Company Representative

    Reputations:
    486
    Messages:
    3,148
    Likes Received:
    3,490
    Trophy Points:
    331
    We'll be here when you're ready!
     
    Prema and i_pk_pjers_i like this.
  6. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    716
    Messages:
    3,046
    Likes Received:
    2,533
    Trophy Points:
    231
    Firstly...
    In essence, the graphics processor becomes an SoC instead of a traditional PCB layout, where everything is laid out wide, with the VRAM chips separate from the GPU and connections running between the two. This has several benefits:
    1. Smaller physical footprint. Video cards have rivalled their motherboards in size in recent years. I remember having an 8400 GS that was slightly smaller than two credit cards lengthwise. Now, GPUs are behemoths with massive coolers. With HBM/HBM2, something the size of the card above, or even smaller (possibly with the area of a UDIMM or even SO-DIMM) can house processing power well in excess of the current Titan XP.
    2. Waaaay faster memory bandwidth, on the order of a few TB/s.
    That would mean RAM upgrades would be tied to CPU upgrades, since the system is now an SoC; then we'd be locked down even more. As far as desktops are concerned, this won't happen for a long, long time, possibly a couple of decades at least.
     
    Prototime likes this.
  7. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,681
    Messages:
    6,028
    Likes Received:
    8,564
    Trophy Points:
    681
    Actually, current plans for stacked RAM/3D RAM/HBM2 on CPUs are to use it for the iGPU with HMA, so that when not used by the iGPU it becomes a fourth cache level, while maintaining memory on the board for when it is exhausted. So buying a processor with 16-32GB of HBM2 would still allow however much DDR4 to also be used...

    Sent from my SM-G900P using Tapatalk
     
    CaerCadarn and Prototime like this.
  8. tgipier

    tgipier Notebook Deity

    Reputations:
    197
    Messages:
    1,596
    Likes Received:
    1,572
    Trophy Points:
    181
    Second point is false. First point is kinda true...

    1. Reduced size is true, but thermals will still be an issue. Expect a similar size to the R9 Fury X for HBM2 cards. You still need space to lay out your VRMs/cooler/voltage controllers/etc.

    2. Hardly way faster... 3072-bit 12GB HBM2 on the PCIe P100 gets you around 540 GB/s, within reach of well-overclocked 10 Gbps GDDR5X on a 384-bit bus. 4096-bit 16GB HBM2 on the 16GB P100 would give you 720 GB/s, where 14 Gbps GDDR5X on a 384-bit bus would give you 672 GB/s. Of course, HBM2 could be clocked faster in the future, giving it more of an edge.
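    The arithmetic behind these figures is just bus width times per-pin data rate. A quick sketch (the ~1.4 Gbps HBM2 per-pin rate below is an assumption inferred from the post's numbers, not a confirmed spec):

```python
# Peak memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# HBM2 at an assumed ~1.4 Gbps per pin on the two P100 variants:
print(bandwidth_gbs(3072, 1.4))  # 537.6 -> roughly the ~540 GB/s quoted (3 stacks)
print(bandwidth_gbs(4096, 1.4))  # 716.8 -> roughly the ~720 GB/s quoted (4 stacks)

# GDDR5X on a 384-bit bus:
print(bandwidth_gbs(384, 10))    # 480.0 GB/s at 10 Gbps
print(bandwidth_gbs(384, 14))    # 672.0 GB/s at 14 Gbps
```

    Each extra HBM2 stack adds a 1024-bit channel, which is why disabling one of four stacks cuts the bus from 4096-bit to 3072-bit and the bandwidth by a quarter.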

    The power consumption reduction/efficiency and bandwidth are likely to be the key points, with bandwidth probably not being a key point for consumers right now.

    The con of HBM2 right now is likely to be horrendous yield. NVIDIA has two different PCIe P100 configurations, with one of them having only 3 of 4 HBM2 stacks enabled.

    Do you mean HSA?
     
    ajc9988 likes this.
  9. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,681
    Messages:
    6,028
    Likes Received:
    8,564
    Trophy Points:
    681
    Yep, sorry. Was typing fast and have been off my game lately. HSA. I got heterogeneous memory architecture in my mind somehow.

    Sent from my SM-G900P using Tapatalk
     
  10. tgipier

    tgipier Notebook Deity

    Reputations:
    197
    Messages:
    1,596
    Likes Received:
    1,572
    Trophy Points:
    181
    The thing is, I think the earliest we will see 3D-stacked memory on a CPU is Zen+, and that's even a very optimistic guess.

    One thing would be interesting, though: watching NVIDIA and Intel fight it out in the HPC market.
     
