Asus Zephyrus M - CPU undervolt and GPU undervolt + overclock

Discussion in 'ASUS Gaming Notebook Forum' started by lovemyg73, Dec 21, 2018.

  1. lovemyg73

    lovemyg73 Notebook Enthusiast

    Reputations:
    2
    Messages:
    10
    Likes Received:
    13
    Trophy Points:
    6
    My variant: Zephyrus M GM501GS EI002T, i7-8750H, GTX1070-8GB (Full), 16GB RAM single channel. Seems to be an Au spec, comes with 512GB SSD, 1TB SSHD. All else the same.

    Repasting: No, left as stock.

    Big intro here… skip to next posts for CPU / GPU undervolting and OC’ing.

    As my tag implies, I joined when I bought the Asus G73jh and have bought nothing else since then… and after all this time decided an upgrade might be in order. I just got tired of playing games at 1024x… at the lowest quality just to get by at 15 FPS. I’ve benefitted from these forums so much that I’ve decided to hopefully contribute something back. This thread will mainly cover a CPU undervolt and a GPU undervolt + OC, but I’ll also consolidate some of my other key observations, such as maximising battery life and lifespan, and the nuances of undervolting and OC’ing this laptop, as this hasn’t really been talked about online at all.

    I’ll start with the results..

    Temperature results here are based on playing games for hours, with ROG Gaming Center set to overboost, GPU in discrete mode (ie G-SYNC = ON), everything else default (including Windows 10 power profile set at High Performance, which seems to be the default this laptop boots into)

    Initial temps: CPU and GPU in the mid 70s to low 80s after temps have stabilised with fans going full blast. Ambient was 22 to 26 degC – surprisingly, this made little difference to the stabilised gaming temps; my guess is the ambient difference accounts for 1-2 degrees. I'm not sure how it behaves at higher ambients like 30+ degC – I mostly prefer to be in a pool at those temps.

    Final temps: After CPU undervolt and GPU undervolt + OC, temps are at least 10 degrees C less, at mid to high 60s. At 22 degrees C ambient, temps stabilise at 66-68 degrees, while at 26 degrees C ambient temps are 69-70 degrees. In gaming, surprisingly, CPU temps are about 1-2 degrees higher than GPU temps.

    Update: after long term observation gaming, game-streaming, etc, CPU and GPU temps don't go beyond 75 C, ambient 26-28 C, with the UVs and OCs applied.

    Observations on temps: if gaming long term, don’t bother looking at CPU and GPU temps separately – the shared heat pipe does a great job of almost equalising the temps between them. Running a GPU load (eg stress testing) always seems to use a little CPU, between 5-15% no matter what, and even at this low utilisation CPU temps will rise as well; since the auto fan responds mainly to temp thresholds, CPU and GPU temps will almost equalise after 5 minutes or so. However, running CPU-only loads will shoot up mostly the CPU temps, and the temp difference between CPU and GPU can be quite huge. GPU temps will also rise slightly due to the shared heat pipe, with the fans not ramping up because GPU temps haven't passed their threshold. My conclusion is that the shared heat pipe probably contributes a little to heat transfer from CPU to GPU, but not that much, since the fans at low levels seem good enough to dissipate the incoming heat. Also, during gaming CPU temps fluctuate drastically as the load on them is constantly changing, while CPU-intensive tasks (eg stress tests) tend to make temps rise steadily with fewer fluctuations.

    I think if your temp observations for your Zephyrus M are similar to the above, then your factory pasting job is probably fine. I took the route of not risking the warranty with a repaste, as I don't see better results online. Keep in mind that even with ROG Gaming Centre set to Overboost for fans, the ramp-up is still pretty slow, so you can get some high temps in the first 10-20 seconds before things settle down quite drastically once the fans are at full blast. For my CPU chip specifically, I think there’s probably an imbalance, either in the pasting job or in the chip itself: cores 0, 2, 4 have temps that are consistently 10 degrees higher than cores 1, 3, 5. The temps I'm getting from HWiNFO are from the CPU temp sensor, which happens to correlate with the hottest core rather than the average, so I’m not too worried: during stress testing, CPU temps stabilise at mid-70s, which means the hottest 3 cores are mid-70s while the coolest 3 cores are mid-60s. I’m not sure how the cores are laid out on the chip, but it looks like a pattern, eg lower temps on the left and higher temps on the right (my guess) side of the chip – consistently a 9-10 degree differential. One day (after the warranty expires) I’ll hopefully remember to recheck this after a repaste…

    On idle temps: mine are between 36-41 degC for both CPU and GPU depending on ambient. Key thing to note is that this is with stock fan settings. You WILL get better results if you play around with the fan settings and make them ramp up at lower thresholds, or just run full blast all the time. You are unlikely to change idle temps with undervolting or repasting alone, as there probably isn’t enough of a temperature delta vs ambient at these levels (unless you’re surfing the web while ice fishing in Norway). I have my Windows 10 power mode on “Balanced”. Note that “High Performance” keeps your CPU at 100% boost ALL THE TIME, even at idle, resulting in about a 5 degree increase in idle temps. You can change this by editing the power plan’s minimum processor state when plugged in (eg to 5%, like in Balanced mode). You can also change the default boot-up power mode through gpedit.msc if you want, and force it to go Balanced. Up to you… if you do so you won’t be able to select the High Performance power plan manually, but if you don’t configure this group policy, then you can always select Balanced or other plans after boot while plugged in. For me that’s a hassle, so I edited Balanced mode to my liking, enforced the group policy to select "Automatic" mode (aka Balanced), and left the High Performance settings as-is. On GPU idle temp: when GPU core speeds are locked high (eg while testing in MSI Afterburner and hitting "L"), the GPU idled at 51 deg with G-SYNC=ON. I haven’t tested whether Optimus vs G-Sync mode changes idle GPU temps with a locked core frequency. I'm guessing it wouldn't, but then again I'm not sure if the OS would override it by using the onboard GPU instead and give ZERO load to the GTX1070, resulting in even cooler temps in Optimus mode. Didn't bother testing this…

    A note on battery life: a lot of YouTube reviews say battery life sucks. Yes and no. They say 2.5 hrs is the max you’ll get browsing the internet and doing a few mundane tasks. Yes, my experience is exactly the same if I’m lazy and leave the settings at default. Most long-life laptops out there, while packing loads of optimised hardware, also use OS tricks to give the user a reasonably “good experience” while tuning everything down. Here’s how I maximise battery life: on battery, my power profile is set for maximum battery saving, with a few settings edited – min CPU at 0% and max CPU at 0-5%, screen brightness 1-2 pegs from lowest, Windows 10 Battery Saver mode on, ROG graphics mode Optimus, fan at Balanced or Quiet. With these settings I’m stretching past 5 hrs but not quite 5.5 hrs. Battery Saver mode reduces my discharge from about 12.5W to just under 10W as measured by BatteryBar. It seems to disable the Windows 10 theme’s GPU-intensive stuff (things like transparency), but also OneDrive syncing, Bluetooth, location, and probably a few other things. Going by what YouTuber Linus suggests, you could probably squeeze out quite a bit more battery life by not using peripherals (the ROG mouse with pulsing LEDs looks suspiciously power hungry, but I'm only guessing) and instead using only the laptop’s keyboard and trackpad. I suggest also turning off ROG Aura altogether, plus the screen logo LED. My guess is you get a good 30 mins more battery life depending on what peripherals you have. You can still game in this mode and performance is still acceptable, but don’t expect more than 60 mins of gaming. If running G-SYNC and full performance on battery, don’t expect more than 20-30 mins of gaming – not much different from most serious gaming laptops out there according to online results.

    A note on battery lifespan: Asus has an awesome tool called ASUS Battery Health Charging. It lets you limit the maximum capacity the battery will charge to when plugged in, with 3 settings: 60% for best lifespan, 80% for most people, and 100%. Is it worth it – hell yes. Coming from a hobby that forces me to baby my lithium cells, I can confirm that keeping your battery at 100% charge all the time will definitely shorten its lifespan drastically. As a side note, high-current discharge lithiums are affected the most. I’m talking about those that can discharge at 40C (real, not just advertised) or more, that is, 40x or more of their rated mAh capacity in current (eg if rated capacity is 5000mAh, then 40x is 200 amps!). Keeping one fully charged for a couple of weeks will drastically reduce its discharge capability. I consider laptop batteries low-tech, low-discharge. At 55Wh with 4 cells, and assuming 3.7V nominal per cell, that translates to about 3716mAh capacity. For it to be considered high discharge at 40C, it would need to be able to pump out 150 amps continuously… I don’t think so, judging by those puny connectors. True, the battery could still be well over-spec’ed, but I doubt Asus would do this: there's no need, high-discharge cells cost orders of magnitude more, and if kept charged at 100% they last orders of magnitude less. However, assuming you can drain it in 20 mins without issue (ie a 3C discharge), I would assume it's at least a 5C pack, high quality and properly rated. So keeping it 100% charged all the time isn’t as bad, but bad enough. I’ve heard claims that keeping the battery at 60% will double its lifespan; from my experience, that sounds reasonable.
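    The pack maths above can be sanity-checked in a few lines. A sketch in Python, assuming a 4-cell pack at 3.7V nominal per cell (my guess at the internal layout, not a confirmed spec):

```python
# Back-of-envelope lithium pack arithmetic (assumed 4S pack, 3.7 V nominal).
def pack_capacity_mah(energy_wh, cells, nominal_v=3.7):
    """Convert pack energy (Wh) to capacity (mAh) at nominal voltage."""
    return energy_wh / (cells * nominal_v) * 1000

def discharge_amps(capacity_mah, c_rate):
    """Continuous current implied by a given C rating."""
    return capacity_mah / 1000 * c_rate

cap = pack_capacity_mah(55, 4)         # ~3716 mAh for the 55 Wh pack
hobby = discharge_amps(5000, 40)       # a real 40C 5000 mAh cell: 200 A
laptop_3c = discharge_amps(cap, 3)     # draining the pack in 20 min: ~11 A
```

    So even a modest 3C drain is only around 11 amps, nowhere near hobby-cell territory, which fits the "low tech, low discharge" guess above.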

    For me I chose 60% – all fine and dandy, right? Yes and no. Yes, it will charge to 60% if the battery is below 60%, but it won’t discharge the battery down to 60% at any meaningful speed if it's above 60%. If you start at say 40% charge, it will stop charging at 59%; if the battery is higher than the limit when you plug in, it will stop charging and sustain whatever level it's at. But here’s the catch – the app must be running for this limiter to work! Eg if you’re rebooting, or if you’ve shut down while still plugged in, the limit you set will be ignored and it will happily start charging your battery all the way to 100%. In my testing, the various BSODs during CPU undervolt experiments required reboots, and those short moments during reboots left my battery charged to 68% after a couple of hours, even though my limit was 60%. So if you’re using this app and you love your battery, then after you’re done with the laptop, unplug the power cable when shutting down or putting it to sleep. A little inconvenient, and I wish Asus had a hardware-based solution, as it could easily be implemented… an average techie could hack together a digital switch that polls the battery level and runs the laptop entirely off the power brick’s DC. On the other hand, there's the usability side of things as well… if you do ANYTHING else consistently with the laptop other than gaming, I suggest choosing 80% or even 100%. My guess is that the battery would perhaps lose 50% capacity in 3-4 years – by then you'd just replace the internal battery rather than sacrifice mobility today. As a comparison, my Asus G73JH is already reporting battery issues… it will last 5-10 seconds before Windows tries to hibernate, and it won't even have enough juice to complete the write from RAM to disk. But then again, that's a 10-year-old laptop!
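    The limiter behaviour described above boils down to three cases. A sketch (hypothetical logic, not Asus's actual implementation; note especially the third case, where the app isn't running):

```python
# Sketch of the observed ASUS charge-limiter behaviour (hypothetical model).
def charge_target(current_pct, limit_pct, app_running):
    """Return the level the battery will end up charged to."""
    if not app_running:
        return 100             # reboot/shutdown: limit ignored, full charge
    if current_pct >= limit_pct:
        return current_pct     # above the cap: sustain, never discharge down
    return limit_pct           # below the cap: charge up to the limit

charge_target(40, 60, True)    # charges to the 60% cap
charge_target(75, 60, True)    # holds at 75%, won't drain to 60%
charge_target(40, 60, False)   # mid-reboot: happily heads for 100%
```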

    Final thoughts on the Zephyrus.
    Gaming horsepower of a Ferrari, but light enough – and can be made efficient through tweaks – to double as a decent carry-around. For comparison, my Surface Pro 3 i5 claims 9 hrs of battery life (!!) but online testing has shown a max of 7.5 hrs. My experience is closer to 5.5 hrs for web browsing, but barely 1.5 hrs of YouTube. The Zephyrus M will still give you 3.5-4 hrs of YouTube – not bad for a gaming laptop with only a 55Wh battery. The cooling design is a neat idea, but the downside is you lose a lot of cooling with the lid closed… not a common use case, but I do like RDP'ing into machines from a much lower-spec machine (like my work one) and running things remotely. Probably won't affect most people.

    For the price, this laptop is worth it to me today. I bought it locally at Scorptec for $2.8k AUD after a $200 Asus cashback. For comparison, pricing up a gaming desktop with the same specs would cost around $1.6k-$1.7k AUD for the box (CPU, GPU, motherboard, power supply, SSD + SSHD, RAM). This seems consistent with the gaming rigs you can build for $1.2k USD. Then add a decent 144Hz screen and a nice gaming keyboard + mouse and you're up to $2.6k AUD, although you are getting a bigger and better screen that way. And if you want to be cheeky, you could also add in the webcam and speakers (which are surprisingly good with decent bass), but even if you didn't, a $200 AUD premium for mobility and compactness is a no-brainer, as I intend to game on the road often, even while camping at powered sites :) However, if you do build a gaming rig to the same specs, you WILL get better performance, especially if overclocking. This is due to the Zephyrus M's power limitations, which throttle performance not only when temp limits are reached, but at power limits too. You'll see this in the OC posts below...

    Then there is the full Thunderbolt 3 with 4 lanes. Yes it will give you the ability to upgrade to external GPU in the future... but whether the bandwidth is even sufficient for future GPUs is questionable, considering there are losses even today when building eGPUs with current GPUs.

    Next up – CPU undervolting…
     
    Last edited: Dec 25, 2018
  2. lovemyg73
    Update: I've made some major changes - highlighted in red below. 2 reasons:
    1. I've found that TS Bench will pick up a bad cache UV when other tests (including Prime95) do not.
    2. Prime95's AVX tests are too brutal – they won't pass with a core UV of any more than -80mV. So I'm abandoning AVX and using SSE instead.

    CPU Undervolting to get better performance


    Yes, apparently that’s the rumour, and it is true. You will get better performance up to a point, and only for apps that draw a lot of POWER, where power limits are enforced rather than temp limits. Temp limits would also throttle in these cases, but the CPU reverts to full boost once temps drop. Power throttling seems to take some time to kick in, which is why I saw the recovery from temp throttling BEFORE the application of power throttling. The limit I see being enforced is PL1 in HWiNFO, which starts pretty high at 70+W and drops to about 45-50W when stress testing flat-out with Prime95.
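    A rough way to see why undervolting dodges the power limit: dynamic CPU power scales roughly with V² x f. This is a simplification that ignores leakage, using the stock and undervolted voltages from my test results, so treat it as a sanity check rather than a model:

```python
# Dynamic-power scaling sketch: P ~ V^2 * f (leakage ignored).
def scaled_power(p_stock_w, v_stock, v_new, f_ratio=1.0):
    """Estimate new package power after a voltage change."""
    return p_stock_w * (v_new / v_stock) ** 2 * f_ratio

# Stock IBT peaked at 62 W around 1.15 V; undervolted it runs ~1.03 V.
est = scaled_power(62, 1.15, 1.03)   # ~49.7 W, close to the measured 48 W
```

    A ~0.12V drop from 1.15V is a ~20% power cut at the same clock, which is roughly what shows up in the IBT numbers below.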

    Aside: for those with office jobs in front of a screen using a work laptop, I recommend setting up the ability to RDP / VNC from work into the Zephyrus you’ve left at home, and running stress tests that way. Stress testing takes a whole lot of time but little effort, and Windows 10 does a great job with BSODs by happily rebooting the machine for you. So unless you’ve set up a BitLocker PIN on reboot, it will happily rejoin the network and let you log back in after writing RAM to the dump file and rebooting. You can then spend a minute changing settings, kick off another stress test, and alt-tab back to whatever you’re working on.

    Tools used – all free
    • ThrottleStop v8.70
    • HWiNFO64 v6.0
    • OCCT v4.5.1
    • IntelBurnTest v2.54
    • Prime95 v29.4 - with AVX disabled
    • Cinebench R15.038 x64

    Ok I’ve deleted all the previous text below this... I’ve had to redo from scratch. I was wrong about my CPU. I’ll post my undervolt settings, test results compared to stock, and provide some info on some strange ‘interactions’ with nVidia’s Optimus causing total laptop freezes.

    1. Throttlestop A/C Profile: (auto-selected when on AC)
    Core: -119.1, Cache: -42.0
    Intel GPU: -51.8, iGPU Unslice: -51.8
    BD_PROCHOT, SpeedStep, C1E, SpeedShift=128
    2. Throttlestop Battery Profile: (auto-selected when on battery)
    Core: -119.1, Cache: -42.0
    Intel GPU: -51.8, iGPU Unslice: -51.8
    BD_PROCHOT, SpeedStep, C1E, SpeedShift=250
    3. Throttlestop Optimus Profile: Deleted - no longer required


    Testing Results

    Testing Conditions:
    • Ambient: 26°C, Idle CPU temp: 50°C
    • Running modified “High Performance” windows power mode, Fan Overboost, G-SYNC, Aura, Nvidia CP + XP, HWInfo open and monitoring, AV disabled
    IntelBurnTest-VeryHigh - Stock
    • Max Temp: 80-91°C
    • Stabilised Temp @ 5mins mid-run: 72-80°C
    • Max Power: 62W, power throttled down to 48W
    • Core V: 1.15V, power throttled to below 1.0V
    • Clock: 3.59 GHz (power throttled)
    • IBT Result: retested - 98.99s, 77.8 GFlops
    IntelBurnTest-VeryHigh - A/C Profile - updated with ambient 28C and new settings
    • Max Temp: 70-75°C = 10-15°C cooler
    • Stabilised Temp @ 5mins mid-run: 69-76°C = 3°C cooler
    • Max Power: 48W, no throttling
    • Core V: 1.030V no throttling
    • Clock: 3.89 no throttling
    • IBT Result: retested - 96.16s, 80.15 GFlops = 3% improvement - probably within error margin
    Conclusion: Runs cooler and faster, and did not trigger throttling (thermal and power)​
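    For what it's worth, the IBT deltas work out like this (simple percentage arithmetic on the numbers above):

```python
# Percentage deltas for the IntelBurnTest stock vs undervolt runs.
def pct_change(before, after):
    return (after - before) / before * 100

gflops_gain = pct_change(77.8, 80.15)   # ~+3.0% throughput
time_gain = pct_change(98.99, 96.16)    # ~-2.9% runtime
```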
    Updated: Prime95 without AVX

    Prime95 - Small FFT (non-AVX) - Stock
      • Max Temp: 81°C
      • Stabilised Temp @ 5mins mid-run: 70-77°C
      • Max Power: 72.5W, power throttled down to 60W
      • Core V: 1.019V, power throttled
      • Clock: 3.59 GHz (power throttled)
    Prime95 - Small FFT (non-AVX) - A/C Profile
      • Max Temp: 78°C
      • Stabilised Temp @ 5mins mid-run: 70-78°C - ran hotter due to holding higher clocks
      • Max Power: 62W, no throttling
      • Core V: 1.029V, no throttling
      • Clock: 3.9 GHz no throttling
    Cinebench CPU Test Score: Stock = 1215, A/C Profile = 1212 … no diff
    Battery Profile test results:
    BatteryBar used to measure consumption when running on battery.
    • Power consumption: 8.8-9.5W idle, 11.5-12.5W Chrome. 15-16W with OCCT’s GPU stress test (!!!)
    • Theoretical battery life: 5.75hrs idle, 4.5hrs Web-Browsing, 3hrs+ of OCCT iGPU stress testing :)
    Note: This battery profile is set up purely for maximising battery life, and best used in Optimus mode. It's tied to ROG Gaming Centre's Battery profile that turns off keyboard lights and fan profile "Silent". In this mode, no heat problems.​
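    The "theoretical battery life" figures above are just the 55Wh pack divided by the BatteryBar draw readings. A quick check, assuming the pack's nominal capacity (real runtime will drift lower as the pack ages):

```python
# Theoretical runtime = pack energy / average draw (assumed 55 Wh pack).
def runtime_hours(capacity_wh, draw_w):
    return capacity_wh / draw_w

idle = runtime_hours(55, 9.5)      # ~5.8 h at the top of the idle draw range
chrome = runtime_hours(55, 12.0)   # ~4.6 h mid web-browsing draw
occt = runtime_hours(55, 16.0)     # ~3.4 h under the iGPU stress test
```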


    Why the large undervolt for cache:
    I was wrong… TS Bench picked up the errors while the other tests didn't freeze or BSOD, so I had to back the cache UV off from -140 to -42.

    Temp differential between sets of cores:
    For some reason cores 0,2,4 run around 10°C hotter than cores 1,3,5. Almost like there’s some imbalance from one side of the chip to the other. Only seen at high utilisation, and not present during idle.

    Stress Testing – 3DMark – TimeSpy ???
    Yes. For some strange reason, where the most severe CPU stress tests from IBT, OCCT and Prime95 all fail to find errors, the humble 3DMark demo + TimeSpy benchmark finds them – and not in the CPU part of the test, but in the Graphics tests! There are certain points in both the Graphics 1 and Graphics 2 tests that donkey-punch the cache, and a total freeze occurs. Every once in a while, the freeze would also occur in the intro (called the demo). This is the main reason I had to redo all the testing, and it was a real PIA to re-test each undervolting step with 3DMark. Once all settings were stable over multiple bench runs, the other CPU stress tests passed with no issues.

    Optimus interaction – yes, and it’s a problem

    Quick summary:
    If you only plan to run the Zephyrus M in G-SYNC mode and never use Optimus, you'll save yourself a lot of hassle: you can undervolt your cache more in G-SYNC mode, and you won't need to chase another set of stable numbers for Optimus… feel free to skip my observations below.

    Ok. There’s no info on this I could find anywhere, so here’s my take on Optimus and CPU undervolting. My only conclusion is that Optimus is the disease that links CPU and GPU undervolting, which were both happily independent of each other before Optimus came along.

    Note on Throttlestop and Optimus: Throttlestop can't really be disabled through the normal methods such as "Turn Off" or exiting the app – the applied settings stick around, as confirmed by HWiNFO still reading the offsets (and the reason for many frustrating crashes that led me on wild goose chases for a while). Also, if one profile configures something that another doesn't, such as the Intel GPU voltages, then switching profiles will retain the previously configured setting. The workaround is to configure that parameter in all the profiles you use, leaving the value at default (eg 0.0mV offset) in the profiles that don't need it. Another weird thing in Optimus: clicking on a Throttlestop profile immediately applies it, even without hitting Save. If you have a 'naughty' profile which you accidentally click, be prepared for a crash. The workaround is to hit "FIVR" and edit the profile you want there instead; that way your 'stable' profile won't change.
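    The "sticky settings" trap is easy to model: only the parameters a profile actually sets get written, and everything else keeps its last value. A sketch (hypothetical model, not Throttlestop's real code):

```python
# Why every profile should set every parameter it cares about:
# offsets a profile omits stay at whatever the last profile wrote.
applied = {}                                   # offsets currently in effect

def switch_profile(profile):
    applied.update(profile)                    # only the listed keys change

switch_profile({"core": -119.1, "igpu": -51.8})
switch_profile({"core": -119.1})               # forgot the iGPU offset...
# applied["igpu"] is still -51.8 from the previous profile
```

    Hence the workaround of giving the parameter an explicit default (eg 0.0mV) in every profile that doesn't need it.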

    Note that my A/C Profile above works in G-SYNC, and that’s because the iGPU isn’t used there. The Battery profile used to have the same cache undervolt as the A/C profile (-160mV), which also worked in Optimus mode when booting into Windows on battery and not plugging in. Also, in Optimus all my MSI Afterburner profiles for the GTX1070 disappeared; I had to recreate and re-apply them there as well, and that works.

    However, there is something seriously weird about Optimus. The moment I plug into A/C (or boot into Optimus mode on A/C), the system will freeze after a while, usually at the point where it looks like it’s switching over from iGPU to dGPU or vice versa. Hard reset required. If I were to guess, the way Optimus is implemented (either by Nvidia or Asus) interacts with the CPU such that there’s a small momentary voltage dip just at the point of switching, right when the cache is hit hard. This means that whatever undervolt I applied to the CPU cache, while OK in G-SYNC, is too much for the moment the GPUs switch in Optimus mode. The reason it ‘seems’ stable on battery is that the power mode it goes into forces iGPU usage almost exclusively. However, the Cinebench OpenGL test, which tricks Optimus into using the GTX1070 (even though the Nvidia control panel is set to globally prefer the iGPU), immediately froze.

    To confirm my suspicions, I disabled Throttlestop and double-checked in HWiNFO that no offsets were applied, then applied MSI AB’s undervolt and OC (which succeeds but takes a bit longer than normal – it’s an Optimus thing). Then I re-ran the 3DMark benchmark over and over again, and all runs completed without issues, with the same GPU results as in G-SYNC, ie Graphics test results around the 5700+ mark vs around 5400+ stock. But for some reason the CPU score is worse in Optimus, at around 5500 compared to around 6100 in G-SYNC, leading to a lower overall score. I hesitate to blame the smaller undervolt – it doesn't explain a large 10% difference in performance. To me it's a mystery…

    Therefore in Optimus mode, the iGPU <==> dGPU switching is extremely sensitive to CPU cache voltage (my guess). With no offsets applied, everything works fine, even with MSI AB, which looks flaky under Optimus and whose author has outright stated it's unsupported there.

    Conclusion: my CPU cache undervolt values, while stable in G-SYNC, were completely unstable in Optimus, and I needed to back off quite a bit. Update: while this statement is still true, it's no longer an issue for me, since the cache UV has dropped significantly to -42 instead of -140.


    Method I used to ensure stability in Optimus
    The problem only arises when switching GPUs, so I needed a way to trigger the switch repeatedly and under stress. For me, the sensitive variable was the CPU cache undervolt.
    1. Set your test Throttlestop profile to apply on boot, and test booting while plugged into A/C. At some point within a few minutes there will be a brief toggle between GPUs. If the system survives without freezing, move on to the next test.
    2. Unplug from A/C. This should force iGPU usage. Now plug back to A/C. If nothing happens, you passed. Next.
    3. Fire up 3DMark, Unigine, the OCCT GPU test or any GPU stress test. Monitor HWiNFO and ensure the GTX1070 (which disappears when in iGPU mode) reappears with some decent wattage going into it. I recommend 3DMark – it does a good job of flipping in and out of its tests, so you get multiple free GPU switches while you go do something else. 3DMark was the most reliable error detector for me. For the other tests you don't need long runs – just the moment the GPUs switch, then monitor for a few minutes. Next.
    4. End all tests and unplug from A/C again. Ensure you have enough battery (60% is fine, should run down to about 45-50% once done). Fire up OCCT GPU test 1024x768 windowed mode, error detection ON. It should run purely in iGPU (surprisingly). Then, fire up Unigine 1024x768 windowed mode - it should also run on iGPU. Now, fire up Cinebench OpenGL run - this will run on the GTX1070, which you can verify from HwInfo. It's quite comical to see OCCT and Unigine stressing out the Intel UHD 630 AND the GTX1070 both struggling through 3 different screens to render. If everything checks out... don't do anything yet...
    5. With everything still running, select the OCCT testing window. What you want to do now is MAXIMISE it, then minimise it, then maximise it, then minimise it – to your heart's content. All the while, watch for OCCT errors.
    6. If all is well then plug back into A/C, and the final test is to boot into G-SYNC, then reboot into Optimus, and rerun test 3. If all passes, then you should be stable.
    Different types of errors from the tests, and what to do
    • Screen and system freeze, unresponsive, hard disk LED indicator shows zero activity: for me this was 100% of the time CPU cache voltage too low. Back off the undervolt by 10mV and re-test
    • BSOD: OK, you stuffed up – your core undervolt is too much! Write down what your cache undervolt is, back it off to around -30mV, then start working purely on finding the right core undervolt first. Always check that your core undervolt is actually applied by looking at the HWiNFO core voltage (and not the offset), because you need a minimum cache undervolt to be able to hit certain core undervolt numbers; adjust accordingly. Once you've determined a stable core undervolt, come back and hunt for the cache undervolt.
    • OCCT errors in integrated GPU: for me, this was Intel GPU and iGPU Unslice undervolt being too low. OCCT will happily throw 20,000 errors in a second without BSODs or freezes. It seems the iGPU is more tolerant. Reduce both values by the same amount, say 10mV, and retest
    • 3DMark TimeSpy did not freeze or BSOD, but failed gracefully, with the app telling you an error was encountered. Again, for me this was 100% the iGPU undervolting… OCCT may not have picked it up, but 3DMark is really the hero here. In fact you might actually see some 2D artifacts in the 3DMark app itself and other apps after this failure. Back both values off a bit and retest. You're nearly there for the iGPU…
    • Freeze on the login screen: you probably waited too long to log in after a reboot (happened to me when I went for a coffee break). Throttlestop must have applied a profile with a bad cache undervolt, and a GPU switch toggled, causing the freeze. Again, back off the cache and retry.
    • Basically any freeze = cache undervolt too much, for me anyway.
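    All the triage above reduces to the same loop: push an offset deeper, run the tests, and back off at the first failure. A sketch, with a hypothetical is_stable() standing in for the 3DMark/OCCT/login-screen gauntlet:

```python
# Step an undervolt offset deeper until tests fail, then keep the last pass.
def find_stable_offset(is_stable, start_mv=0, step_mv=10, floor_mv=-200):
    offset = start_mv
    while offset > floor_mv and is_stable(offset - step_mv):
        offset -= step_mv            # previous step passed, go deeper
    return offset                    # deepest offset that passed every test

# Example: pretend anything past -40 mV freezes (roughly my cache result).
find_stable_offset(lambda mv: mv >= -40)     # settles at -40
```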

    Final thoughts…

    Well, that’s my take on CPU undervolting the Zephyrus M. Cache undervolt hunting was the first thing I did since it was quick – a bad undervolt insta-freezes. The core undervolt took some time; Prime95 BSODs the quickest if I go too far. However, the biggest surprise was that the 3DMark demo + TimeSpy became the default winner in crashing a bad cache undervolt, and I used it exclusively until I found a stable cache undervolt, which then went on to pass all the other tests.

    With my undervolts the CPU on average definitely ran cooler, and hence faster, thanks to the drastic drop in power usage and the skipping of thermal and power throttling. However, it seems load dependent: ‘normal’ loads (eg IBT) are where the benefits are greatest, while AVX instructions do not give a sH_lt and will disrespect your undervolting efforts. Update: I don't test Prime95's AVX anymore, until I can apply an AVX offset.

    And finally – Optimus. CPUs and GPUs used to live on different planets; Optimus Prime decided to join them at the hip. If my guess is correct, I can understand why the switching smashes the CPU cache more than any other test in the world. 3 reasons:
    1. to minimise switching delays / latency
    2. preserve rendering info and state when switching, and finally…
    3. ABSOLUTELY POOR HARDWARE IMPLEMENTATION – which explains why there seems to be so much undervolting headroom these days.. and also probably explains why some vendors either purely go for G-SYNC, or Optimus, but not both.

    Oh well... that’s it for my Zephyrus M CPU undervolting adventure. I haven't had this much fun in a while. Apologies for the long post, just didn't want to miss anything that might be important. Hope this is useful to someone.

    PS. There really isn't a point to undervolting Intel GPU and iGPU Unslice. I did it mainly for fun. Even at -100mV for each, the most I was getting is a 0.5W savings stress testing with OCCT. This would net me no more than a couple of minutes extra battery run time.
     
    Last edited: Dec 27, 2018
  3. lovemyg73
    GTX1070 Undervolt + OverClock - Part 1 - squeezing out more performance for less...

    Again, long post here, so apologies for those looking for a quick read…

    This one is a bit confusing. In the past, I just OC'ed core and memory frequencies and was done with it, making sure I didn't hit thermal runaway during stress tests. Other activities that usually followed were repasting, drilling holes, and even, once, using folded aluminium foil to fill the gap (it really worked… that Dell Inspiron 9300 still runs today without me touching anything since then!).

    The 'new' method I’ve learnt through the YouTubes is to undervolt by modifying the GPU’s core frequency curve – editing points on the frequency/voltage graph, starting from the GTX1070's default curve shown in MSI Afterburner (you don't have to start from the default, but I learned the hard way that I always should). This undervolting achieves overclocking as a side effect: asking for less voltage at a given clock is exactly the same thing as asking for more clock at a given voltage. Most other tools just apply a fixed frequency offset, which is the old-school way I used to do over/underclocking.

    I like that Nvidia has opened this up to us, and that the author of the MSI AB app exposes this level of granularity. It's both a blessing and a curse. A curse because, as I found out, the manufacturer got it pretty close to right, and it just gave me more excuses to tinker with things I don't know. A blessing because you learn a lot about all the variables that go into "performance". There were so many times I thought to myself "Why won't this get me a better framerate?" and then, closely inspecting the stats, realised that the GTX1070 would love to, but the power limiters say otherwise.

    My whole purpose of doing this is the same as most people I read or watch online - to get more performance for around the same power consumption, and hopefully end up running the card cooler in the process. The aim is not to get the best possible performance while gaming in a freezer. And, as I found out the hard way, I won't get it on this machine no matter what cooling I do, because of other limiting factors...

    For me, the whole exercise can be broken down into 3 phases:

    Phase 1: Find the absolute minimum voltage at which the card will accept the absolute highest frequency it can handle. Beyond this frequency, the card will just crash no matter how much voltage you feed it. Call this Point A. Easy to do with AB's lock function. For me, this was around 0.95V and just over 2030MHz.

    Phase 2: Find the absolute highest frequency the card will stably run on, for the lowest voltage point I can edit on the graph. This will be Point B. Again, easy to do with AB’s lock function. For me, this was 1711MHz at the 0.8V point.

    Phase 3: Figure out the curve… did the manufacturers get it right, or is there another curve I can give it that will get me better results? This is the hardest part, because chances are, with your two points determined from Phases 1 and 2, your curve "shape" won't match the default, and you'll have to figure out how you're going to curve from A to B.
    This was certainly the case for me. If I were to follow a smooth curve from A to B, it wouldn't work for my card. It could only tolerate big overclocks from 0.85V upwards; any voltage below that had to take smaller overclock steps. My curve looks much like the default curve up until 0.9V, with the final points being 1999MHz at 0.95V, a gentle drop to 1987MHz at the step before, and then a smooth descent all the way down to 0.8V, closely tracking the default curve. That is, apart from the last 2 points up to and including 0.95V, every earlier point just got the standard increase (eg. +185MHz). All points after 0.95V are locked to 1999MHz.

    I'll try to put an annotated screenshot here, as it's easier to see than describe!

    After much trial and error, I came up with a good repeatable method to get consistent results close to the maximum you can hope to achieve, in the least time possible, hopefully... More on that later

    The KEY to it all: for my GTX1070 and my Zephyrus M, in order to unlock more card performance, I must UNDERVOLT Core AND OVERCLOCK Core AND UNDERCLOCK GPU MEM!!

    That last part was the secret sauce, discovered after a lot of head scratching. I don’t really know why this is the case, but my guess is, it’s something to do with the power limiter.

    Here's my reasoning, and it comes, again, in 3 parts. Again, this is probably related only to the setup in the Zephyrus M.

    Part 1: Because we can't escape some power cap that is enforced in a threshold manner, we need to balance the power going into the GPU MEM vs the GPU Core. Together, they must stay within some threshold, or the core WILL BE FORCED TO DOWNCLOCK. You can see this if you keep the MSI AB graph open while testing (Ctrl-F). You'll see a dotted line hunt up and down the graph depending on load, and it doesn't matter what clock you think it can achieve: if it's drawing too much power, it will drop to a lower voltage and apply whatever frequency that voltage corresponds to on the graph.

    Part 2: Less Voltage In DOES NOT EQUAL Less Power Out… well, not always anyway. At a given voltage, the card consumes an amount of power that INCREASES with the FREQUENCY it's pumping. So at certain points where I fed lower voltages at higher frequency, I actually got MORE POWER consumed, to the point that power throttling kicked in. More often than not, my stupidity made the card spend almost all its time BELOW the 0.8V point on the graph, pretty much ignoring my entire effort and giving me absolutely crap performance even compared to stock. That's because of the way power throttling behaves…
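    A rough way to see this: dynamic power scales approximately as P = k·V²·f, so a lower voltage at a much higher clock can still draw more power. A toy Python sketch, with k completely arbitrary and not calibrated to any real GTX1070:

```python
# Dynamic power scales roughly as P = k * V^2 * f (k lumps capacitance and
# switching activity). Toy numbers only.
def dyn_power(v, f_mhz, k=1.0e-4):
    return k * v**2 * f_mhz

stock = dyn_power(1.000, 1700)  # higher voltage, modest clock
tuned = dyn_power(0.950, 2000)  # "undervolted" but much higher clock

print(f"stock: {stock:.2f}  tuned: {tuned:.2f}")
# tuned > stock: the undervolt did not reduce power, because the extra
# frequency more than made up for the lower voltage -> power limit trips.
```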

    Part 3: GPU MEM performance (MHz) per WATT is non-linear, and not worth it. The additional synergistic performance it contributes to the WHOLE card is not worth the power sacrifice. On one hand, it doesn't seem to suffer any throttling… but perhaps it should. I say this because my experiments seem to indicate that, even with the stock core curve, performance starts to increase as I drop the GPU MEM frequency by about 20MHz at a time. This means the power it consumes at its 4GHz speed is too much, given the power limits in place, which I assume are shared between GPU Core, MEM and perhaps other components on the card. Basically, to my horror, I found that incrementally increasing GPU MEM speeds resulted in incrementally WORSE performance, while dropping the MHz resulted in higher frame rates (I'll describe a consistent and repeatable way to do this) and better 3DMark scores.

    Summary: to achieve the sweet spot, we need to balance the loss in performance from underclocking the GPU MEM against the boost in performance from the extra power available to the GPU CORE, which lets the core stay at higher voltage-frequency states for longer.

    I’ll start with my settings, and results.

    My profile: +185 shift (approx.), min. point at 0.8V @ 1683MHz, up to 1999MHz @ 0.95V, -325MHz for MEM

    Test 1: OCCT GPU DX11 1080p
    full screen, Complexity 7, FPS Limit 1000
    Stabilised FPS after 5 mins… STOCK = 199 FPS, My Profile = 221 FPS, ~11% Improvement
    Test 2: 3DMark Demo TimeSpy, multiple runs and averaged
    Overall Score: STOCK = 5459, My Profile = 5843, 7% Improvement
    Graphics Score: STOCK = 5358, My Profile = 5812, 8.5% Improvement
    CPU Score: STOCK = 6112, My Profile = 6028, 1% worse

    More to come later…
     
    Last edited: Dec 24, 2018
    mediadoctor likes this.
  4. lovemyg73

    lovemyg73 Notebook Enthusiast

    Reputations:
    2
    Messages:
    10
    Likes Received:
    13
    Trophy Points:
    6
    GTX1070 Undervolt + OverClock - Part 2 – Method I used

    Phase 1: Find the max core OC, and the min core voltage that would run it stably

    Ensure that Nvidia Control Panel / Manage 3D Settings / vSync = OFF

    This step is very important: G-SYNC works with vSync to cap FPS at 144 to match the screen's refresh rate (144Hz). If you don't turn it off, you might start wondering why your FPS is magically pinned at 144 continuously!

    Run OCCT GPU DX11 at 1600x900 windowed (makes it easier), Complexity 7, FPS Limit 1000 (you can set the FPS Limit as high as you want; just make sure you don't set it lower than what your card can do). Use 512MB for error reporting.

    I found the above to reliably load my card enough so that I can get immediate FPS changes, but more importantly, immediate feedback through errors.

    While that's running, park the OCCT window all the way on one side of the screen (either left or right). Next bring up AB and hit Ctrl-F to get the graph, and arrange these two app UIs so that you can view the graph while still being able to reach AB's core and mem sliders, as well as the apply and reset buttons.

    Important: once you've arranged this to your liking, don't move the windows for the entire run of tests. The reason: OCCT is now somewhat occluded, and you want this occlusion to remain consistent throughout so your results are comparable.

    Ok, NOW note the FPS you're getting in OCCT. You can use the OSD from AB, read it out of OCCT, or use some other method; what you want is a live FPS counter. At all times watch for errors, which OCCT will report. Also, the reason you've left this running for a while is that OCCT will start at some FPS and settle down to a consistent FPS after a while. For me, it was very consistent, mostly hanging within +/- 3 FPS of some average.

    Stock FPS: For me, I got 200 FPS.

    In AB, I started by selecting the point at 1.0V and moving it up manually by +40MHz, hitting "L" to lock, dropping -502MHz on Mem (slider), and applying. With the lock on, your card will not go beyond this point, so whatever extreme clocks sit to the right will never be applied. In fact, it will always "try" to lock exactly to whatever frequency applies at that point, although most likely your power limits will trip and it will start downvolting (and downclocking) automatically. The reason for 1.0V: it seems to be sufficient headroom for me to max out my core frequency, where any higher voltage offers no additional advantage other than more heat. Your card may be different, but with folks online reporting 200+ MHz on core at 0.9V for this card, I used this as a starting point.

    And as far as I know, downclocking MEM has no ill effects apart from supposedly performing worse as you go lower. The reason you want -502 for MEM is to give the Core as much power headroom as possible so you can reach those high voltage states.

    Check FPS and errors. If there are errors, back off the OC at that 1.0V point. If no errors, keep increasing the frequency at that 1.0V point, restarting the OCCT app each time to retest. The reason: each time you start, the card will hit the 1.0V clock speed, but over time power throttling kicks in and downvolts… so you want to ensure you keep hitting the 1.0V point.

    To see this, you'll notice during testing that in the chart area there's a horizontal line that keeps moving up and down. This is AB telling you which voltage point on the chart the card is currently operating at. You'll find after some time that it tends to hang around the 0.825 to 0.90V points, sometimes lower, but rarely going above 0.95V. This means that whatever you've locked in at 1.0V is effectively "untested", so we remedy this by restarting the test each time.

    You should start seeing a trend. For me, FPS increased slightly as the Core MHz increased (with Mem still at -502).

    For me, I got to +180, -502 with no errors, and FPS in OCCT increased to 215!! That's a 7.5% improvement just from this! I got errors in OCCT above +230 or so on core at 1.0V.

    Once you start seeing errors, back off a bit at a time until, at some point, there are no errors.

    You've now found the max Core OC the card will take! I'll call this "max_stable_core_OC". To double-check it, reset the chart and this time move up a step, eg. to 1.025V, and lock this point at max_stable_core_OC. You should see no errors. If you start increasing the core speed again at this point, however, the errors will return. If there are no errors, repeat the above, upping the voltage each time and testing higher and higher OCs in steps until errors return, then back off. Your card may be different, and may be able to accept higher OCs at higher voltages.

    However, if you did see errors at a speed slightly higher than max_stable_core_OC at 1.025V, then you know for certain that max_stable_core_OC is the maximum speed your card can take. What you want to do now is go the other way: apply max_stable_core_OC to the next voltage step down (eg. 0.975V), restarting the OCCT test each time for 1-2 min runs. At some point you'll start getting errors. For me, this was at 0.825V, so 0.85V (or was it 0.875?) was the lowest core voltage that would accept the highest core OC (aka max_stable_core_OC). We'll call this Point A.

    Either way, whether you hunt upwards or downwards, you achieve the goal: the lowest core voltage point with the highest stable core OC.

    Observations: with this method, hunting for the max Core OC is fast, reliable and repeatable. You can start at 1.0V like I did, or any other voltage you wish; just make sure that during testing you actually see the card reach the point you locked. In all, this should take you no more than 15-20 mins.
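    The Phase 1 hunt above boils down to a step-and-back-off loop. Here's a Python sketch of it; `is_stable` is a stand-in for an actual OCCT run, and its numbers are a made-up model of my card (errors above roughly +225 at 1.0V), not something to expect from yours:

```python
# Sketch of the Phase 1 hunt: at a locked voltage point, raise the core
# offset while the next step still passes the stress test; stop at the
# last offset that tested clean.
def is_stable(offset_mhz, volts=1.0):
    # Stand-in for a real OCCT run against a hypothetical card.
    return offset_mhz <= 225 if volts >= 1.0 else offset_mhz <= 180

def find_max_stable_oc(step=10, start=0, volts=1.0):
    offset = start
    while is_stable(offset + step, volts):  # keep going while the next step passes
        offset += step
    return offset                           # last offset that tested clean

max_stable_core_oc = find_max_stable_oc()
print(max_stable_core_oc)  # 220 with this toy model
```

    Phase 2 is the same loop run at the 0.8V point, with "downclocking observed" counting as a failure alongside OCCT errors.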

    Phase 2: Find the max core OC that the minimum adjustable point (0.8V) would run stably

    Easy. Similar to Phase 1 above, except you now select the lowest point, ie. 0.8V, and start moving that point upwards (keeping Mem at -502 throughout). You may need another app to monitor core voltages, as sometimes you won't get errors in OCCT because a different trigger has tripped: insufficient voltage for the clock speed. Your AB OSD may not be the most reliable, so use GPU-Z or HWiNFO for this. You want to make sure the card stays locked at the 0.8V (800mV) point when you start the OCCT test, and hangs there for a very long time without downclocking. If it downclocks, it means you've hit the limit.

    The hunt ends when at some clock speed, for the 0.8V point, either you get errors, or it starts to downclock (thereby ignoring your OC at this point).

    Done! You’ve found the maximum OC that the 0.8V will take. We’ll call this Point B.

    For me, this was 1730MHz at 0.8V.

    Observations: this part is pretty easy, because at 0.8V you're unlikely to run into power throttling, so your testing should not be interrupted. In fact, you won't need to restart OCCT each time, because your card will happily stay locked at 0.8V endlessly. The moment you observe downclocking, you know you've hit a snag… so either downclocking or OCCT errors will quickly give you an indication of trouble, waaay before anything crashes badly. The above should take you no more than 5 mins.
     
    Last edited: Dec 25, 2018
    mediadoctor likes this.
  5. lovemyg73

    lovemyg73 Notebook Enthusiast

    Reputations:
    2
    Messages:
    10
    Likes Received:
    13
    Trophy Points:
    6
    Phase 3: The curve…

    We now have Points A & B. For me, Point B's overclock relative to stock was actually a bit bigger than Point A's relative to stock. That made sense to me, as I always understood that the max theoretical clock speed is related to the voltage supplied. What I did is as follows:

    Since Point A’s stable max overclock speed was the lesser MHz gain of the two, I took this number (minus a bit for headroom) = 185MHz.

    So the steps are:

    Click reset on AB

    Click on the first point (0.8V) and use the slider in AB to raise the whole graph by 185MHz. Mine settled around 1683MHz to 1695MHz… it keeps changing its mind depending on how "warm" the card is when you apply.

    Next, I chose the 0.95V to be the point where my max upper OC would be, to give me some voltage headroom. From the previous step of raising by 185MHz, I only got to around 1911MHz or so at 0.95V.

    Selecting that 0.95V point, now at 1911MHz, I flatlined the curve after it, so that all points beyond 0.95V are also 1911MHz. I've tried hitting "L" on 0.95V and doing it that way, but sometimes it doesn't work. A workaround is to drag every point after 0.95V to below 1911MHz; then, when you "L" on 0.95V at 1911MHz, it'll flatline. Trial and error a couple of times.
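    The whole curve edit (raise everything by a flat offset, then flatline above the cap voltage) can be sketched in Python. The stock points below are invented for illustration, not dumped from AB:

```python
# Sketch of the final curve edit: shift the stock curve up by a flat offset,
# then flatline everything at and above the chosen cap voltage so the card
# never upvolts past it.
stock_points = [(0.800, 1500), (0.850, 1582), (0.900, 1671),
                (0.950, 1726), (1.000, 1784), (1.050, 1822)]

def build_curve(points, offset_mhz, cap_volts):
    cap_mhz = None
    curve = []
    for v, mhz in points:
        f = mhz + offset_mhz
        if v >= cap_volts:
            if cap_mhz is None:
                cap_mhz = f   # frequency at the cap point, e.g. 0.95 V
            f = cap_mhz       # flatline: every later point clamps here
        curve.append((v, f))
    return curve

curve = build_curve(stock_points, 185, 0.950)
print(curve[-1])  # (1.05, 1911) -- same clock as the 0.95 V point
```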

    You’ll notice I didn’t get the 0.95V to my theoretical max clock speed Point B. We’ll get to that after…

    Also, we’re not done yet, and while we may get a bit more performance by finetuning each point and testing, at the back of my mind I knew that my -502MHz underclock on the MEM was probably hurting me. Luckily for me, my card also agreed.

    So up next… hunting the right MEM speed.

    Phase 3.5: The best MEM underclock...

    Here’s where it gets a bit interesting. It depends on your application. OCCT’s test is more like a homogenous load on the card. For me, the best MEM for OCCT is -502MHz, since I get the best FPS after it has settled from 5 mins continuous running.

    However, the most realistic scenario is something like 3DMark, which taxes the card with dynamic loads. Using 3DMark, performance increased as I slowly reduced the underclock from -502, to -450, etc…

    Here are my test results

    MSI AB Settings     3DMark Overall   Graphics   CPU
    Stock               5459             5358       6112
    Mod Curve, -502     5607             5539       6027
    Mod Curve, -470     5625             5553       6073
    Mod Curve, -460     CPU fail         5557       CPU fail
    Mod Curve, -400     5646             5583       6036
    Mod Curve, -360     CPU fail         5557       CPU fail
    Mod Curve, -340     CPU fail         5557       CPU fail
    Mod Curve, -320     5680             5619       6054
    Mod Curve, -300     CPU fail         5557       CPU fail


    First, you'll notice some CPU tests failed. This is how I discovered 3DMark was a better indicator of CPU undervolt stability! I got around to backing off the CPU undervolt later…

    You'll also notice that graphics performance climbed as I raised the MEM clock speed back up. At some point past around -320, however, performance starts to drop again. I did not record the results beyond this, but I did confirm through testing -280, -260, -240 and a few more steps… and all of them showed performance decreasing.

    So, the sweet spot for my machine is a MEM downclock of between -320 and -330.
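    Finding that sweet spot is really just "pick the offset with the best benchmark score". A short Python sketch over the graphics scores from my table above makes the point:

```python
# The MEM hunt reduces to picking the offset with the best benchmark score.
# Scores below are the graphics numbers from my 3DMark table (the failed
# CPU runs still produced a graphics score).
graphics_scores = {-502: 5539, -470: 5553, -460: 5557, -400: 5583,
                   -360: 5557, -340: 5557, -320: 5619, -300: 5557}

best_offset = max(graphics_scores, key=graphics_scores.get)
print(best_offset, graphics_scores[best_offset])  # -320 5619
```

    In practice you'd refine around the winner (I landed on -325) rather than trust a single run per offset.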

    The graphics improvement is 6% at -320. Interestingly though, in OCCT, -320 did not actually show any real improvement in FPS over -502. From watching the graph, I could see the card spent most of its time below 0.9V anyway, so "static" homogenous loads are probably not the best tool for fine-tuning overall performance. OCCT was still a good tool for checking which direction FPS was going, better or worse, and its error reporting was invaluable for finding out whether my settings were actually stable.

    Yes, it is very strange that I have to downclock the MEM to get more graphics performance. Again, it must be the power limits being enforced, which downclock the Core but don't seem to touch the MEM. So on my machine at least, I needed to downclock the MEM to win back headroom.

    Note: in AB, the option to increase power limits is locked on my machine. I'm not sure how to go about unlocking it, or whether I actually want to. There must be a reason it's locked, so I didn't bother trying to raise the power limits to squeeze out a bit more performance. Unlocking the power limits has allowed other folks to actually overclock their MEM speeds too. So if you manage to do this on the Zephyrus M, I'm pretty sure you'll be able to get even better results by clocking a bit higher.

    For me, clocking a bit higher, even though completely stable, resulted in overall worse performance.


    Phase 3.6 – fine-tuning the curve

    Getting the curve performing well seems more art than science. For undervolting, the common theme among all the examples I've seen is a straight line after a certain voltage point, which ensures the card never upvolts beyond that point. As for the curve left of that point… I've seen examples online of smooth curves, sharp jumps, every point manually adjusted and fine-tuned, etc…

    I've chosen to stick to the stock curve shape for most of the curve. My reasoning was that the lower voltages felt less tolerant of big OCs, so I didn't alter anything beyond the 3 points up to and including 0.95V after raising the graph wholesale by 185MHz. Plus, I'm guessing the default curve has been "intelligently" set by Nvidia's GPU Boost 3.0: it probably knows what ratios of upshift the card is capable of at each voltage point, so I didn't mess with that.

    In the end, I settled on the config posted at the start, and I'm happy with the 8% improvement. If time allows, I might play around with the curve a bit more, especially around the 0.95V point, perhaps up to 1.0V, and if I get better results I'll post back. In the end, I chose 1999MHz at 0.95V. The reason is that, after many tests, sometimes AB would apply and the clock speed would actually settle a bit lower, sometimes higher, than 1999MHz. When I previously had it at 2025, sometimes it got pushed to 2037 on its own! I would only find out once 3DMark crashed. In any case, I noticed probably 0.3% improvement at best, and that's already risking pushing the limits.

    I've also tried playing around with the points between 0.825 and 0.925V, pushing them up a bit here and there and retesting. Minimal benefit, if any, and worse, the 3DMark window itself would display some artifacting (in the UI, rather than in the test itself).

    So for my card, it's a standard +185 (or +190, making sure the 0.8V point doesn't go into 1700MHz+ territory), then boosting the 0.95V and 0.943V points a little bit, and flatlining after 0.95V.

    Some more testing results, this time Unigine Superposition

    Unigine Superposition, 1080p
                     Extreme     Extreme    Medium     Medium
                     DirectX     OpenGL     DirectX    OpenGL
    Stock            3311        3165       12122      10556
    My Curve         3497        3165       12674      10819
    Improvement      5.6%        5.1%       4.6%       2.5%

    Final thoughts… I've seen lots of different methods for doing this, from basic slider-only global core and mem overclocking to the curve method, and I actually prefer the curve method. I believe it unlocks more of the card's potential, since you can prevent the card from going too far at the higher voltage points, beyond the core's max stable clocks. The method I used was to primarily stick to the shape of the stock curve but shift it upwards, then overclock a bit more at the 0.95V point before locking all further points to a flatline. I am absolutely certain there is more headroom to play with.

    However, the most surprising thing was that I needed to downclock the MEM to unlock more performance. At the back of my mind, I know I'm being forced to work around artificial limitations, which I suspect are the power limits and the throttling they trigger. There's probably much better performance to be had if I unleashed the power, but for now I'll just get back to playing games, since I've already spent 10x more time tuning this than actually doing what I bought the machine for!
     
    mediadoctor likes this.
  6. Arog

    Arog Notebook Consultant

    Reputations:
    10
    Messages:
    244
    Likes Received:
    36
    Trophy Points:
    41
    So the cooling is similar to my GL702VS. It's an interesting form factor for sure. Many had issues with the GL702VS, which is why I got mine for 750 USD refurb with a 1070 lol. It took repasting and serious modding of the bottom panel for me to get temps below 70C without having the fans at full 100%. Usually I can get away with 20% on most games. I will add a simple suggestion to your post as well: simply going to advanced power options and setting the maximum processor state to 99% will disable Turbo Boost, and from my understanding will undervolt the CPU, thus giving much better temps. It's also wise to use vsync, as that will keep GPU and CPU usage down in most games, thus keeping temps cool.
     
  7. Lunatik

    Lunatik Notebook Evangelist

    Reputations:
    83
    Messages:
    459
    Likes Received:
    266
    Trophy Points:
    76
    Gimping a laptop by disabling Turbo Boost and limiting FPS using vsync when you have a 144Hz panel is downright disturbing when people paid good money for a product that should function as described. Avoid Asus laptops until they fix their sh*t is my suggestion.
     
  8. Arog

    Arog Notebook Consultant

    Reputations:
    10
    Messages:
    244
    Likes Received:
    36
    Trophy Points:
    41
    Well, there really isn't a laptop with an Intel CPU and a 1070-or-up card that doesn't downclock. It's more a limitation of the laptop form factor than an Asus issue. I'd say get a bigger laptop if you need better cooling, rather than a slim gaming laptop, which will always be a compromise regardless of brand.
     
  9. Lunatik

    Lunatik Notebook Evangelist

    Reputations:
    83
    Messages:
    459
    Likes Received:
    266
    Trophy Points:
    76
    Exactly why I sold my Hero II and will be getting an MSI GE75 :)
     
    Last edited: Feb 6, 2019
  10. LPay

    LPay Newbie

    Reputations:
    0
    Messages:
    1
    Likes Received:
    0
    Trophy Points:
    5
    Very interesting post. I've recently bought a Zephyrus M and have plugged in the same values you got from your testing. I'm a little concerned that Speed Shift throttles performance along with temperature. Have you tried your results with lower Speed Shift values? I played an hour or so of Destiny 2 with 110 and then 100, and the temperatures weren't too bad. Clock speeds definitely rise, so I am assuming performance will too, eg. a few more FPS on the 144Hz screen. I had no power throttling. I haven't run any benchmarks, but there must be a sweet spot between power throttling, temps and performance. P.S. I tried 64 on Speed Shift and got a BSOD.
     