As mentioned in my review, the W7S can get quite hot, and good power management profiles are essential to keep it running cool and to avoid premature heat death of its components. One particularly annoying symptom of the high thermal output is that, even when the laptop is idle, the fan keeps switching between the lowest speed and the next higher one.
The post below documents my attempts and experience with managing the CPU and GPU clocks and voltages on the W7S.
Caution: In the tests below I only focus on temperature drops, without talking about stability. Nevertheless: to use ANY voltage lower than the default, the CPU has to be THOROUGHLY TESTED FOR STABILITY, on EVERY MULTIPLIER for which the voltage has been modified. Also, stable voltages vary between CPUs of the same type, so even though I was able to run my computer at 0.2 V below the default voltage on the max multiplier, that's no guarantee that your computer won't crash instantly at that voltage. Every multiplier has to be tested separately; the fact that voltage X works for the max multiplier doesn't mean that the voltages RMClock interpolates automatically for the lower multipliers will work as well.
Do the stability tests using Orthos Beta, which is based on StressPrime but modified to stress both cores at the same time. Run the CPU stress test for at least 3 hours for each multiplier that you use. For instance, even though default-0.2 volts worked for me in the Intel Thermal Analysis Tool, and for browsing the internet and light use, stressing the CPU with Orthos Beta caused a BSOD in less than two minutes. So I had to back off to default-0.15 volts to run the CPU stably (actually, I'm still doing stability tests).
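For what it's worth, here is a throwaway Python sketch of the kind of linear interpolation RMClock appears to do between the voltages set at the lowest and highest multipliers. The voltages in the example are hypothetical placeholders, not values for any particular CPU; the only point is to show why the intermediate multipliers still need their own stability testing, since their voltages are derived rather than chosen.

```python
# Rough sketch of linear voltage interpolation across multipliers, similar in
# spirit to what RMClock appears to do. All voltages are hypothetical examples.
def interpolated_voltages(multipliers, v_low, v_high):
    """Map each multiplier to a voltage, linear between the two endpoints."""
    lo, hi = min(multipliers), max(multipliers)
    return {
        m: round(v_low + (v_high - v_low) * (m - lo) / (hi - lo), 4)
        for m in sorted(multipliers)
    }

# Example: multipliers 6x..12x, 0.95 V at 6x and 1.10 V at 12x (placeholders).
print(interpolated_voltages(range(6, 13), 0.95, 1.10))
```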
On the W7S that I'm using, Windows Vista only uses two power profiles for the GPU, and it switches between them according to the Power4Gear Extreme profile: the GPU profile is Balanced on Quiet Office and Battery Saving, and High Performance on the High Performance power profile. Note that I'm using the XP names for the GPU power profiles. Vista calls them differently: ASPM (Active State Power Management) = Maximum Power Savings and ASPM = Moderate Power Savings both correspond to Balanced, while ASPM = OFF corresponds to High Performance. Riva Tuner uses Low-Power 3D for Balanced, Performance 3D for High Performance, and Standard 2D for the XP Power Savings profile, which as far as I can tell cannot be used by Vista.
Tests and Temps
First, let's get some baseline results. I checked the temperature at idle; while running the Intel Thermal Analysis Tool (TAT) stress test; and during 3DMark06. Every data point reports the stress condition and the power profiles of the CPU and GPU.
*** Idle, Quiet Office (CPU 8X, GPU Balanced), no undervolting:
CPU 59, GPU 71
*** Intel TAT, CPU+GPU HighPerf, no undervolting:
CPU 95, GPU 90
*** 3DMark06, CPU+GPU HighPerf, no undervolting:
CPU 75, GPU 84 Score 1034
Then, I started RMClock. Still without undervolting, I set up the Performance on Demand profile to allow the CPU to go full speed while the Vista power profile officially remained "Power Saving"; this ensured that the GPU stayed on the Balanced profile.
*** Intel TAT, CPU Power on Demand, GPU Balanced, no undervolting:
CPU 91, GPU 89
Next, I started undervolting, to determine how much the lower voltage reduces the full-blast temperatures.
*** Intel TAT, CPU Power on Demand, GPU Balanced, -0.1 Volts at max multiplier (no IDA), interpolated values for the rest of the multipliers:
CPU 81, GPU 82
*** Intel TAT, CPU Power on Demand, GPU Balanced, -0.2 Volts at max multiplier (no IDA), interpolated values for the rest of the multipliers:
CPU 76, GPU 80
As you can see, the gains from undervolting are quite significant for both the CPU temp and the GPU temp (why the GPU? because heat from the CPU is transmitted through the heatsink to the GPU die).
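To put the full-blast gains in one place, here is a quick tally of the Intel TAT numbers above, relative to the no-undervolt run with the GPU on Balanced (just a throwaway Python snippet over the figures already listed):

```python
# Full-blast (Intel TAT) readings from the runs above, in degrees Celsius.
baseline = {"cpu": 91, "gpu": 89}          # no undervolting, GPU Balanced
undervolted = {
    "-0.10 V": {"cpu": 81, "gpu": 82},
    "-0.20 V": {"cpu": 76, "gpu": 80},
}
for label, temps in undervolted.items():
    drop = {part: baseline[part] - temps[part] for part in temps}
    print(f"{label}: CPU -{drop['cpu']} C, GPU -{drop['gpu']} C")
```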
Downclocking the GPU (as far as possible...)
Next, I tweaked the GPU's HighPerformance power profile to have clocks as low as possible (200 core, 200 memory, using Riva Tuner). Why did I put HighPerf into the reverse of its intended role? Because Vista only uses the Balanced and HighPerformance GPU profiles (see Preliminaries above), and the Balanced profile cannot be edited with Riva Tuner 2.09 (I don't know why; for some reason the sliders are frozen). The clocks for Power Savings (Standard 2D) go as low as 80/80, but unfortunately I can't get Vista to use those.
To force the GPU profile to HighPerformance, I started Power4Gear and switched it to High Performance. Since RMClock is in control of the CPU, this power profile will not push the CPU to full speed.
*** Intel TAT, CPU Power on Demand, GPU HighPerf w/ clocks to minimum, -0.2 Volts at max multiplier (no IDA), interpolated values for the rest of the multipliers:
CPU 77, GPU 81
So dropping the high-performance clocks to their minimum of 200/200 doesn't help with the GPU or CPU temperature. With RivaTuner, the only way to improve on this would be to control the Balanced setting, which can actually go lower; but as explained earlier, in my version of Riva Tuner the sliders are frozen on the Balanced setting. Another option would be to force Vista to use the Power Saving GPU profile; I have not found a way to do that (see also the next paragraph).
I have checked the Vista power plan settings, and it seems that in both Quiet Office and Battery Saving, the PCI Express (GPU) ASPM setting is set to Maximum Power Savings. This, however, gives mid-range clocks comparable to the Balanced clocks in XP. I tested by setting the PCI Express setting to Moderate Power Savings on Quiet Office, and there is no difference, I repeat, no difference, in GPU clocks between Moderate and Maximum Power Savings. So either it's a bug, or it's a design feature, in which case we got shafted by Vista yet again.
Next, let's see whether SuperLFM can make a difference. SuperLFM is a power saving feature of the Santa Rosa platform that can drop the CPU speed even lower than the lowest multiplier setting allows (a quick arithmetic check follows the idle numbers below).
*** Idle, CPU Power Saving (SuperLFM, 600MHz clock speed at 0.9 volts), GPU HighPerf w/ clocks to minimum:
CPU 60, GPU 74
*** Idle, CPU Power Saving (6x multiplier, 1.2GHz at 0.9 volts), GPU HighPerf w/ clocks to minimum:
CPU 60, GPU 73
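For clarity, the two idle clock speeds above follow directly from the multiplier and the bus speed. As far as I understand it, SuperLFM works by halving the 200 MHz Santa Rosa FSB while keeping the lowest multiplier; here is a throwaway Python check of that arithmetic (the bus-halving detail is my assumption, not something I've measured):

```python
FSB_MHZ = 200          # Santa Rosa front-side bus (800 MT/s quad-pumped)
MIN_MULTIPLIER = 6

normal_minimum = MIN_MULTIPLIER * FSB_MHZ          # 6 x 200 = 1200 MHz (the "normal 6x" case)
super_lfm      = MIN_MULTIPLIER * (FSB_MHZ // 2)   # 6 x 100 =  600 MHz, assuming SuperLFM halves the bus
print(normal_minimum, super_lfm)
```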
Using undervolting and keeping the GPU on the Balanced profile, it was possible to drop the full-blast CPU temps by 15 degrees Celsius, and the GPU temps by 10 degrees. However, I have been unable to drop the idle temps, mainly due to a poor implementation of GPU power management in Windows Vista. To get lower idle temps, the GPU has to be downclocked significantly, and that can only be done in XP. So the only smart solution for keeping the idle temps down on the W7S is this: use Windows XP. The gains from undervolting will be the same in XP; in addition, by dropping the GPU clocks significantly, one should be able both to decrease the full-blast temps further and to decrease the idle temps.
I have updated the GPU driver to 175.16. Since I did not really trust the readings of RivaTuner and the power profiles of Vista, I ran some performance tests with 3DMark. With these drivers, RivaTuner still did not allow me to change the clocks in the Low-Power 3D mode, so when I say minimum clocks below, I only mean minimum clocks in the Standard 2D and Performance 3D modes. Note: ASPM = Active State Power Management, the PCI Express setting in the Vista power profile. I always set the CPU to Power on Demand, regardless of whether the laptop was plugged in or not.
I. Plugged in
A. Standard clocks:
ASPM = MaxPowerSavings: 1151 / 82 (3DMark06 score / max GPU temp)
ASPM = OFF (no power savings): 1144 / 82
ASPM = ModeratePowerSavings: (not tested)
B. Minimum clocks:
ASPM = MaxPowerSavings: 607 / 79
ASPM = OFF (no power savings): 611 / 78
ASPM = ModeratePowerSavings: 609 / 79
II. On battery
A. Standard clocks:
ASPM = MaxPowerSavings: 376 / 75 (3DMark06 score / max GPU temp)
ASPM = OFF (no power savings): 370 / 75
ASPM = ModeratePowerSavings: (not tested)
B. Minimum clocks
NOT RUN, display driver fails.
(the Not Tested experiments were not done to save time; their results would be quite obviously nearly identical to the nearby tests)
Attached: evolution of clocks and temps while plugged in, with standard clocks and ASPM = Maximum Power Savings. The red rectangle roughly encloses the interval corresponding to the last 3DMark test run. Clearly, the clocks increase when the GPU is pushed, and come back down to low values after the test is finished. The dip in the middle probably corresponds to loading from disk between tests.
So, we can draw some clear conclusions:
The score (and therefore the GPU clocks) is COMPLETELY INDEPENDENT of the Vista power profile. So, either (a) Vista COMPLETELY IGNORES the ASPM setting in the power profile, or (b) that setting does not actually refer to the power profile of the GPU, but to something else.
Instead, Vista adjusts the GPU power automatically according to the load when plugged in, and it caps it when on battery. By looking at graphs, I have also seen that the GPU is capped to the 2D Standard clocks when on battery.
Underclocking clearly has an effect on performance, which is basically halved at the minimum clocks, which are in turn roughly half of the maximum ones (see the quick check after these conclusions).
Unlike with the stock driver, where, as documented in the power management post above, the GPU only used two power profiles, with the updated driver it uses all three. This is good news, since it should run in Standard 2D most of the time, especially if Aero is disabled.
Looking at idle temps with the lowest possible clocks and Standard 2D, I have never seen the GPU temp drop below 70 degrees; this is with the CPU on Power Saving. So 70 degrees is the floor one can achieve on this laptop without physical mods. This will hold in Windows XP as well.
Reverting to standard clocks, the GPU temp at idle is still 70 degrees, even though the minimum clocks are a little more than half of the standard ones. So there is little benefit to underclocking the GPU. This can also be seen from the small temperature difference between underclocked and standard clocks when stressed with 3DMark. This will hold in Windows XP as well.
Combined with the experience with the CPU, I can say with pretty good confidence that the lowest average idle temperatures achievable on this version of the W7S, with stock cooling, are in the area of 57-60 degrees CPU / 70-72 degrees GPU. This will hold in Windows XP, as well.
All this assumes, of course, that Riva Tuner gives correct readings. Its readings do correspond with those of GPU-Z (up to some consistent errors in the core clocks), and also with the actual scores given by 3DMark06, so I am inclined to trust them.
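As a quick check on the "basically halved" observation above, using the plugged-in 3DMark06 scores from the table (a one-off Python calculation, nothing more):

```python
# Plugged-in 3DMark06 scores from the table above (ASPM = MaxPowerSavings).
standard_score, minimum_score = 1151, 607
print(f"minimum clocks give {minimum_score / standard_score:.0%} of the standard-clock score")
# Prints about 53%, consistent with clocks that are a bit over half of standard.
```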
A followup on this. I don't have the numbers with me, but I tried SuperLFM a bit more. It turns out it may actually help by one or two degrees at idle, but only when I'm using the Power Saving mode of RMClock, i.e. when I don't allow the computer to go any higher than that. This is all assuming those 2 degrees were not an accident... To get those 2 degrees I tested while forcing the computer to run on SuperLFM 6x, then on a normal 6x multiplier, and compared.
Where it does help is at full blast... well, hardly full blast, since the computer runs significantly slower on SuperLFM than on the first normal multiplier. Anyway, the CPU hardly heats up at all. But, as I said, the slowdown is significant.
So, it probably pays off to use SuperLFM with the Power Saving profile of RMClock. Don't use it with Performance on Demand though; it seems to be buggy when used like that.
I am a bit worried about my W7Sg temperatures... namely, the GPU temperature is just under 70 degrees at idle, and that drives the HDD above 40 at idle. Under load (either CPU or GPU), the HDD gets close to or above 50, and that is definitely out of my comfort range. The main culprit here is the GPU, which heats up too much at idle (it only heats up another 10-15 degrees under load, so the load temperatures are quite OK).
What I have done:
1. I have of course cleaned up the fan as much as I could without breaking the warranty seal.
2. I have tried to underclock the GPU with Riva Tuner to the lowest available clocks, but this (counterintuitively) seems to crash the notebook. I wouldn't want to dig into undervolting the GPU, as that requires VBIOS flashing etc.
So my question would be: does anyone have suggestions for other things to try? For instance, how to obtain a safe underclock for the GPU. I am also thinking about CPU undervolting, but there I have other problems, detailed here:
Originally Posted by E.B.E.
Hello people, one question.
I have a notebook (ASUS W7Sg) with temperature issues and undervolting wouldn't hurt... there are two problems.
1. I don't trust the default voltages that RMClock gives me. They don't match the voltage specifications on the CPU datasheet page: http://ark.intel.com/Product.aspx?id=33917
(unless I'm reading those incorrectly; the voltage at the minimum multiplier is lower than the minimum voltage given in the specs, and the voltage at the max multiplier is higher than the maximum voltage in the specs)
So, the question would be if anyone here knows the correct default voltages for a T9300, per multiplier (or at least for minimum and maximum multiplier; interpolating between those should give good results).
Similar values for a T9550 wouldn't hurt either, I would then be able to undervolt my F6Ve --- although it doesn't suffer too much from temperature issues.
2. I also don't really trust the behavior of RMClock on these newer chipsets/CPUs. I don't remember the details now, but I saw some inconsistencies when I last tried to use it. I would use CrystalCPUID, but that software doesn't give ANY default voltages at all; so with that I return to my original question about the default voltages.
PS: I am selecting "Mobile" CPU manually in RMClock, since it detects it as a desktop CPU, due to chipset incompatibilities I presume.
A notebook cooler would be impractical as the laptop gets moved around, although I may consider it if no other steps give results.
I did end up getting a notebook cooler, and it works quite well in tandem with the W7Sg. It decreased the temperatures by more than 5 degrees Celsius; I don't remember the exact number. Since the W7S layout is similar, it would probably have the same benefit for that model.
This is useful since the W7S(g) models tend to get quite hot due mostly to the GPU.
(It is useful to know that it works for this model, as that is by no means guaranteed. For instance, I also tried using the same cooler with the F6Ve, but the airflow there is different and I actually ended up INCREASING the temperatures. It all depends on the airflow and the location of the intakes, and it seems this cooler fits the W7S nicely.)