*OFFICIAL* Alienware Area-51M R2 Owner's Lounge

Discussion in '2015+ Alienware 13 / 15 / 17' started by Spartan@HIDevolution, May 9, 2020.

  1. Rufaro

    Rufaro Notebook Enthusiast

    Reputations:
    0
    Messages:
    16
    Likes Received:
    9
    Trophy Points:
    6
    I can't wait to get mine. I ordered it on July 10th! Hopefully it ships soon.
     
  2. ratchetnclank

    ratchetnclank Notebook Deity

    Reputations:
    787
    Messages:
    1,275
    Likes Received:
    446
    Trophy Points:
    101
    That's insane. The R2 gets 23196.
     
    etern4l likes this.
  3. alaskajoel

    alaskajoel Notebook Deity

    Reputations:
    1,050
    Messages:
    974
    Likes Received:
    881
    Trophy Points:
    106
    Yes, the dGPU is limited to x8 lanes in the R2

    It is incorrect to generalize the x8/x16 issue in such a way. It is very situational: some workloads will be affected and others will not. My Titan RTX in the AGA (x4 PCIe 3.0) exhibits between <5% and 10% performance drop for most games played at 1080p through 4K. I can force the difference between the x4 and x16 link to be as much as 50%, but that is an extreme case of playing esports titles at 1024x768 with unrestricted framerates.
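
    To put rough numbers on why the narrower link is only situationally limiting, here is a quick back-of-the-envelope Python sketch of theoretical one-way PCIe bandwidth per generation and lane count. It only uses the published line rates and encoding overheads, so treat it as an illustration; real-world throughput lands a bit lower:

    Code:
    # Theoretical one-way PCIe bandwidth from published line rates and encodings.
    # Illustration only; real-world throughput is somewhat lower due to protocol overhead.

    GEN_PARAMS = {
        # generation: (line rate in GT/s per lane, encoding efficiency)
        2: (5.0, 8 / 10),      # PCIe 2.0: 8b/10b encoding
        3: (8.0, 128 / 130),   # PCIe 3.0: 128b/130b encoding
        4: (16.0, 128 / 130),  # PCIe 4.0: 128b/130b encoding
    }

    def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
        """Theoretical one-way bandwidth in GB/s for a given PCIe generation and lane count."""
        rate, eff = GEN_PARAMS[gen]
        return rate * eff * lanes / 8   # GT/s -> Gb/s after encoding, /8 -> GB/s

    for lanes in (4, 8, 16):
        print(f"PCIe 3.0 x{lanes:<2} ~ {pcie_bandwidth_gbps(3, lanes):5.2f} GB/s")
    # x4 ~ 3.94 GB/s, x8 ~ 7.88 GB/s, x16 ~ 15.75 GB/s; whether the narrower link hurts
    # depends on how much of that a given game actually needs to move per frame.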

    The AGA uses x4 lanes and Thunderbolt uses another x4, but otherwise your point stands. I knew what you meant, but other readers might not. :)

    x12 lane configurations are not possible and the caldera doesn't support x8, so Alienware hung the TB3 controller off the leftover x4.

    For most, but not all. The 13 R1 and 13 R2 were based on Broadwell-U and Skylake-U respectively and used a PCIe multiplexer to split a single x4 PCIe 2.0/3.0 link between the AGA and the dGPU.

    What you don't find appealing about the AGA in the 51m is specifically why others really like the machine.

    First, using the 51m R1 and an AGA with even a desktop 2080 is generally faster than the built-in 2080 dGPU, assuming you are outputting to an external display attached to the AGA. The x4 interface is not a significant bottleneck in most gaming scenarios for anything up to a 2080ti, and even the 2080ti is only about 10% slower at worst compared to x16. Meanwhile, the GPU in the AGA can easily exceed the 200W power limit imposed on the built-in 2080, and with an AGA the CPU can run at higher frequencies since it no longer has to compete with the dGPU for cooling on the unified heatsink. Faster GPUs and frame rates will exacerbate the AGA's x4 weakness for sure, but that doesn't mean a next-generation card is definitively pointless.

    Second, the x8 link for the built-in dGPU is absolutely not a bottleneck, even for the strongest desktop cards. A 2080ti on a PCIe 3.0 x8 interface generally performs within 2-3% of the same card on an x16 interface. Any discussion about future cards is conjecture, but even if Ampere brings 2080ti levels of performance to a 51m-compatible 200W power envelope, the x8 interface should not be a concern.

    This is a nice thought, but accommodating it is unfortunately impossible within the current motherboard design. While I would love an x8 AGA, the current caldera is also grossly inadequate for x8.

    I'm hopeful we might get a decent AGA redesign when PCIe Gen 4 hits Intel's mainstream platform. Since the x16 PCIe 3.0 link is only just now starting to show measurable benefits over its x8 counterpart with the 2080ti, I'm hopeful an x8 PCIe 4.0 bus with the same bandwidth will give more life to the AGA concept without costing any performance for the built-in dGPU.
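
    Plugging the same formula in shows that equivalence on paper (again just theoretical numbers, assuming the pcie_bandwidth_gbps() helper from the sketch above):

    Code:
    # Assuming the pcie_bandwidth_gbps() helper defined in the earlier sketch:
    print(pcie_bandwidth_gbps(3, 16))  # ~15.75 GB/s  PCIe 3.0 x16 (full-width gen 3 link)
    print(pcie_bandwidth_gbps(4, 8))   # ~15.75 GB/s  PCIe 4.0 x8 matches it in half the lanes
    print(pcie_bandwidth_gbps(4, 4))   # ~ 7.88 GB/s  a gen 4 x4 AGA link would double today's AGA bandwidth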

    Wow, I know this is wccftech, but still... there is a lot of silliness in that article.
     
    Last edited: Jul 23, 2020
    Rei Fukai, ssj92, MogRules and 6 others like this.
  4. G46VW

    G46VW Notebook Consultant

    Reputations:
    137
    Messages:
    278
    Likes Received:
    215
    Trophy Points:
    56
    Did you post some standard Fire Strike scores? 1920x1080...

    Here is how my unit is performing after two months with the non-K 9700 and the 2080 in the R1, after probably 300+ hours of BF5 multiplayer. It would be nice to put a 9900 in it, along with faster memory, but I really don't want to deal with the temps; looking back, I wish I had, but I spent as much as I wanted to spend with $700 off. I do love my unit though, best gaming laptop I have ever owned by miles. If the R2 used the same keyboard, which I love on this unit, it would be a no-brainer for me as my next in line, but there are not that many great games out at the moment unfortunately, and BF6 will probably be out after the 3080 GPUs hit.
    [IMG]
     
    Last edited: Jul 23, 2020
  5. ratchetnclank

    ratchetnclank Notebook Deity

    Reputations:
    787
    Messages:
    1,275
    Likes Received:
    446
    Trophy Points:
    101
    Here it is:

    [IMG]

    Interestingly, your graphics score is higher. Is yours overclocked?
     
    G46VW likes this.
  6. jc_denton

    jc_denton BGA? What a shame.

    Reputations:
    8,648
    Messages:
    2,651
    Likes Received:
    4,939
    Trophy Points:
    281
    Interesting how the combined score takes a hit with the CL21 sticks when compared against the R1.
     
    etern4l likes this.
  7. GTVEVO

    GTVEVO Notebook Deity

    Reputations:
    478
    Messages:
    1,656
    Likes Received:
    1,615
    Trophy Points:
    181
    How many watts are being pulled through the GFX card on the R2? It does look lower than others, but not too far off; the balance could have shifted toward the CPU due to the power-hungry 10-core.
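
    If anyone with an R2 wants to check, one low-effort way (assuming the standard NVIDIA driver install, which includes nvidia-smi) is to log the reported power draw against the enforced board limit while the benchmark runs. A rough sketch, nothing R2-specific about it:

    Code:
    import subprocess
    import time

    # Poll the GPU's reported power draw vs. its enforced power limit once per second.
    # All fields are standard nvidia-smi query properties; stop with Ctrl+C.
    QUERY = [
        "nvidia-smi",
        "--query-gpu=timestamp,power.draw,power.limit,clocks.gr,temperature.gpu",
        "--format=csv,noheader",
    ]

    while True:
        print(subprocess.run(QUERY, capture_output=True, text=True).stdout.strip())
        time.sleep(1)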
     
  8. Spartan@HIDevolution

    Spartan@HIDevolution Company Representative

    Reputations:
    29,188
    Messages:
    22,567
    Likes Received:
    34,618
    Trophy Points:
    931
    Just for comparison, here is my Fire Strike on my Area-51m R1 with a 9700K/RTX 2080

    Fire Strike (Driver 442.23-BIOS 1.3.2) OC 20 MHz Core/150 MHz Mem

    Fire Strike (Driver 442.23-BIOS 1.3.2) OC 20 Core-150 Mem.png
     
  9. normand668

    normand668 Notebook Guru

    Reputations:
    51
    Messages:
    69
    Likes Received:
    93
    Trophy Points:
    26
    Wow,

    If I'm reading that correctly, your Physics score is 2/3 of what @ratchetnclank has, and yet your combined score is roughly 10% higher.

    I am shocked that the CPUs apparently vary so much, but also that a Physics score so much higher contributed so little to the combined test.

    Is it, as suggested, that under combined load the CPU is drawing power away from the GPU?

    What do you think of the scores, and could there be any way to improve ratchetnclank's? Anyone else have any suggestions?
     
    Papusan and Spartan@HIDevolution like this.
  10. Spartan@HIDevolution

    Spartan@HIDevolution Company Representative

    Reputations:
    29,188
    Messages:
    22,567
    Likes Received:
    34,618
    Trophy Points:
    931
    Since I am running a 9700K only, I have high hopes that I will be able to get much higher scores than ratchetnclank has. He has a good base score to start with, but there is a lot of room for improvement, such as:

    1) Disabling telemetry
    2) Disabling most background apps (except the ones related to the system/drivers, e.g. Realtek Audio Console, nVIDIA Control Panel)
    3) Undervolting
    4) Not using Windows Defender (the 3rd heaviest antivirus, see: AV-Comparatives Latest Performance Results)
    5) Ensuring your nVIDIA Control Panel power management is NOT set to Optimal
    Right-click on your desktop, then choose nVIDIA Control Panel. Go to Manage 3D Settings in the left pane, then scroll down a bit until you see Power management mode. If it's set to Optimal Power, which is the default setting when you install a driver, that's your issue.

    Optimal Power means that when there is nothing to draw on the screen, the GPU clock speed is set to 0 MHz to save power and then ramps up once it needs to. Sounds great on paper, works like crap. This is the number one reason why anyone might experience crappy performance from their nVIDIA GPU. What's worse is that it's the default setting in the nVIDIA Control Panel after you install a new driver, often leading people to blame the driver for bad performance when it's just the fault of nVIDIA's clowns.

    Set the power management to Adaptive, which keeps the GPU at lower clock speeds when no GPU-intensive apps are in use and ramps the clocks up when needed. That actually works. It's the best balance between lower heat from the GPU and good performance in games.

    When benchmarking, for optimal results, it's best to set the power management to Prefer Maximum Performance.

    Mind you, after you change the power management setting, a reboot is mandatory for the new clock behavior to take effect (a quick way to verify is sketched below this list).

    Classical nVIDIA Swiss Cheese
    6) I'm not sure about the R2, but it's also worth checking whether the nVIDIA temp slider can be raised like on the R1:
    Guide: How to unlock the GPU Thermal Limit Slider in Alienware Command Center
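
    To confirm the power management change actually took after the reboot (the quick check mentioned in point 5), compare the GPU's performance state and clocks at idle versus under load; Adaptive or Prefer Maximum Performance should show clearly different idle behavior than Optimal Power. A rough sketch, assuming nvidia-smi from the standard driver install:

    Code:
    import subprocess

    # One-shot snapshot of the GPU's performance state, clocks and power draw.
    # Run it once at an idle desktop and once with a game or benchmark running.
    # pstate, clocks.gr, clocks.mem and power.draw are standard nvidia-smi query fields.
    def gpu_snapshot() -> str:
        result = subprocess.run(
            [
                "nvidia-smi",
                "--query-gpu=pstate,clocks.gr,clocks.mem,power.draw",
                "--format=csv,noheader",
            ],
            capture_output=True,
            text=True,
        )
        return result.stdout.strip()

    print("Current GPU state:", gpu_snapshot())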
     
    jc_denton, normand668 and etern4l like this.