How Dell cripples performance, explained by Notebookcheck.net

Discussion in 'Hardware Components and Aftermarket Upgrades' started by Papusan, Sep 14, 2017.

  1. Brad331

    Brad331 Notebook Enthusiast

    Reputations:
    27
    Messages:
    44
    Likes Received:
    32
    Trophy Points:
    26
    Oh, you're talking about SpeedStep? That's just dynamic frequency scaling so the CPU doesn't idle at the highest clock and waste power doing nothing. You can read more about it in the datasheet. A system with SpeedStep can still give you full boost under a full load. In fact, virtually all Intel computers from the past decade come with SpeedStep; all it means is that your CPU is allowed to not run at the highest clock all the time. Setting the Minimum Processor State to 100% in your power plan effectively disables SpeedStep. Nothing to worry about here. Let's focus our criticism on the truly horrible things like EC-based power limiting (ahem, Razer), which Dell thankfully does not appear to use.

    EDIT: Oh wait, no, you're pointing to the lack of options in the BIOS for customization. I see what you mean now. Sorry for the misunderstanding.
     
  2. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    343
    Likes Received:
    159
    Trophy Points:
    56
    Some truly horrible things that should be outlawed:
    • BIOS whitelists so you can't upgrade Wi-Fi cards and other parts
    • Building a laptop with throttling as a "solution" to a problem they made
    • Not cooling VRMs on laptops with high power CPUs
    • Messing around with keyboard layouts cough cough Razer
    • Any limitation tied to the EC or otherwise inaccessible to end users
    • Laptops that are glued cough cough Microsoft and Apple
    • Proprietary power connectors built into the motherboard rather than being soldered to the motherboard
    • Putting a powerful CPU in a chassis that cannot handle the heat, even with extensive modification
    • Heatsink designs that are clearly bad but get used anyway
    • Sacrificing literally everything to shave off a millimeter of thickness
     
    bennyg and Brad331 like this.
  3. Larry Q

    Larry Q Newbie

    Reputations:
    9
    Messages:
    9
    Likes Received:
    6
    Trophy Points:
    6
    So here's my personal viewpoint and opinions on this whole throttling/laptop-performance situation. Not all (or any) of the below is guaranteed to be correct, but it is my experience or opinion, and suggestions are welcome. I'm writing this pretty late at night, so there are probably writing problems that wouldn't be there if I were blogging, but anyway:

    Most of my personal experience is with MacBook/MacBook Pros, generally because they're common and especially because they come with a lot of built-in instrumentation. Some systems I have played with before are the MSI GS70 Stealth Pro, Razer Blade 14, Aorus X7 V2, Surface Pro, Clevo P650RG, and MacBook Pro (Retina, 15 and 13, Touch Bar and pre-Touch Bar).
    In addition, I have opened many other systems over the years to work on them, and there are far too many to list here. Even though many of these platforms are many years old, electrically/mechanically, the concepts should still be valid.

    There have been many good points brought up in this thread. Most notebook performance issues revolve around the following three categories:

    1) Power supply and VRM
    2) Thermal management
    3) Vendor and OEM firmware configuration


    1) Power supply and VRM

    Power adapters:
    The power adapter limits a notebook's long-term performance sustainability. Portability is a major theme in the Ultrabook market, which is why vendors are increasingly shipping adapters sized for the 'anticipated use case' of the notebook rather than its actual power consumption. This, of course, results in battery drain under load. Ultrabook-type machines often ship with 40-60 W adapters, especially USB-C models.

    I think that the main impediments to improvement in this area are:
    - USB Type-C does not officially support more than 100 watts. Using multiple ports to supply additional power presents further challenges due to the power OR-ing problem (selecting between or combining two voltage sources). Type-C power supplies may not support all voltage levels, particularly weaker ones, so in order to draw power from more than one source, a multiple-input DC-DC converter would need to be present. This can be challenging to get working properly. Providing an additional DC jack on the laptop would be a possible workaround, but has the disadvantage of breaking the included adapter's compatibility with Type-C devices.

    - Power adapter size and weight: a certain amount of copper winding and magnetic material is required to convert a given amount of power. In a switching power adapter, which reverses the flow of current through the voltage-conversion transformer tens of thousands of times per second, the size of these power magnetics can be reduced somewhat by increasing the rate at which the current is switched. However, the transistors used to switch the current waste more power (producing more heat) at higher frequencies, because they spend more of their time switching. A newer technology, gallium nitride (GaN) FETs, might solve this, but these transistors are currently very expensive. I believe Anker is working on a GaN model right now.

    Dissipating the heat inside the sealed laptop power supply enclosure is the other challenge; making the device too small would make it difficult to keep the surface temperature under control. This could be alleviated through the use of a fan, but this would introduce noise and cost. If a manufacturer wanted to build a 600, 700, or even 1000 W power supply for a laptop that is the same size as many 200W adapters, they could easily use a design meant for 1U network servers. However, again, fan noise and cost continue to be issues, not to mention weight.
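
    To put rough numbers on that heat problem: the power wasted inside the sealed brick is just the output power times (1/efficiency - 1). A quick sketch, using assumed efficiency figures (typical adapters land somewhere around 87-92%):

    Code:
def adapter_loss_w(output_w, efficiency):
    """Heat dissipated inside the adapter for a given DC output power."""
    return output_w * (1.0 / efficiency - 1.0)

for output_w in (65, 200, 330):
    for eff in (0.88, 0.92):
        print(f"{output_w:3d} W out at {eff:.0%} efficiency -> "
              f"{adapter_loss_w(output_w, eff):5.1f} W of heat in the brick")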

    Perhaps the biggest issue with shipping powerful adapters is that it can be rather expensive to develop bespoke power supplies. I know Razer had a custom one made by Delta for the older Blade 14 units, which output 150 W in the size of a standard 90 W adapter. Most other OEMs are likely using reference designs from power supply manufacturers, and re-engineering these, particularly for high efficiency and low weight/size, would create a lot of non-recurring engineering (NRE) cost. Apple's adapters are obviously all custom, but they benefit from economies of scale because all of their laptop products use the same adapters. The 87 W Type-C adapter from Apple is significantly more compact than most other bricks, and teardowns show that it is well built, with very dense internal packaging.

    In light of these issues, it's not hard to see why manufacturers choose to design around 45, 65, or even 35W in newer Ultrabook laptops.

    VRMs:
    Modern CPU and GPU chips typically run at close to one volt. We'll assume exactly that, to simplify calculations a little. At one volt, from (P)ower = (V)olts * (A)mps, the power consumed by these components is numerically equal to the current we need to supply to them.

    For a Core i7 processor running at, say, 55 W, a current of 55 A is required. This is a significant amount of current. In fact, a very large number of the pins on the 1,023-pin processor are dedicated to supplying power and ground to the chip.
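
    As a quick sanity check of those numbers (the 1 V core voltage is the simplifying assumption from above):

    Code:
# Back-of-the-envelope: current needed by the CPU core rail at ~1 V.
# P = V * I  =>  I = P / V
vcore_volts = 1.0           # assumed round figure; real Vcore varies with load
package_power_watts = 55.0  # example sustained package power

current_amps = package_power_watts / vcore_volts
print(f"{package_power_watts:.0f} W at {vcore_volts:.1f} V is about {current_amps:.0f} A")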

    The VRM of a modern processor or GPU usually implements something called a multi-phase DC-DC buck converter. This means the load is split across multiple sections, or phases. The VRM takes in 12 volts DC, or whatever the notebook's "bus" voltage is (usually the battery voltage), and produces about 1 volt, effectively trading voltage for current, by switching the input on and off very rapidly (150 kHz or more) and passing it through an inductor and diode during the on and off parts of the cycle. The VCore voltage is adjusted or maintained by changing the ratio of the converter's "on" time to its "off" time (the duty cycle). This is a huge oversimplification, but it will do for our purposes.
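
    For the curious, here is a minimal sketch of the ideal (lossless) buck relationships behind that description; real multi-phase VRMs are far messier, and the 12 V input, 300 kHz switching frequency and 0.22 µH inductor below are assumed example values, not figures from any specific laptop:

    Code:
def buck_duty_cycle(v_in, v_out):
    """Ideal continuous-conduction buck converter: Vout = D * Vin."""
    return v_out / v_in

def inductor_ripple_amps(v_in, v_out, f_sw_hz, inductance_h):
    """Peak-to-peak ripple current in one phase's inductor."""
    d = buck_duty_cycle(v_in, v_out)
    return (v_in - v_out) * d / (f_sw_hz * inductance_h)

v_in, v_out = 12.0, 1.0   # example 'bus' and core voltages
print(f"duty cycle: {buck_duty_cycle(v_in, v_out):.1%}")   # ~8.3% 'on' time per cycle
print(f"ripple: {inductor_ripple_amps(v_in, v_out, 300e3, 0.22e-6):.1f} A per phase")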

    As with the power adapter, all this switching creates quite a lot of heat. A lot of this heat is dissipated in the transistors that switch the power, and they're small, so left unchecked they could easily exceed 150 degrees Celsius and cause damage (solder melts at around 210 °C).

    Interestingly enough, most laptop manufacturers don't appear to provide an (accessible) temperature sensor for the VRMs. As such, the processor is usually programmed to draw a limited amount of power to avoid putting extreme stress on this circuitry. An exception would likely be laptops that use transistors combined with controller circuitry, i.e. an IPM, or 'intelligent'/'integrated' power module. I have seen a model from Infineon that outputs a "hot" signal and an "overheat" signal, but not a temperature sensor signal.

    I suspect this was the cause of the 2018 MacBook Pro with Core i9's throttling behaviour. Likely, the "hot" signal was wired to assert the #PROCHOT signal on the Intel CPU, which is essentially the emergency throttle request line. Asserting this signal caused the processor to throttle to 800 MHz, its lowest performance level, possibly to avoid letting the temperature reach "overheat" levels, which would cut power to the CPU immediately and crash the system. As a result, the system was likely bouncing/oscillating against this limit, effectively 'bang-bang' controlling the temperature of the power supply circuits.
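
    To illustrate the oscillation I mean, here is a toy simulation of that kind of "hot"-signal bang-bang behaviour. Every number in it (thresholds, frequencies, the crude thermal model) is invented purely for illustration and is not taken from Apple's firmware:

    Code:
def simulate(steps=40, hot_on=105.0, hot_off=95.0):
    temp, prochot = 90.0, False   # starting VRM temperature, PROCHOT state
    for t in range(steps):
        if temp >= hot_on:
            prochot = True        # 'hot' comparator asserts PROCHOT
        elif temp <= hot_off:
            prochot = False       # released once things cool back down
        freq_mhz = 800 if prochot else 4300
        # crude thermal model: heating scales with clock speed, cooling is fixed
        temp += 0.004 * freq_mhz - 12.0
        print(f"t={t:2d}  {freq_mhz:4d} MHz  VRM ~{temp:5.1f} C  PROCHOT={prochot}")

simulate()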

    Usually it is the heat produced by the VRM that causes issues. I don't often see machines built with electrically undersized VRMs, because if that were the case you would likely see current-limit throttling in XTU, or 'electrical design point' throttling as Intel refers to it in the processor manuals.

    2) Thermal Management

    Keep in mind: Power input to the CPU = heat that needs to be removed. Calculations are not a form of energy and CPUs do not generate appreciable light or sound, so as per conservation of energy, this has to be true.

    CPU vs GPU:
    It looks like for "decent", full-turbo performance on recent (8th-generation) Whiskey Lake-U CPUs, you're looking at about 55-60 watts of heat from the processor, as reported by the CPU's own telemetry in Brad331's measurements. I have seen values quoted up to 70-80 W for the new hex-core i9 processors. Keep in mind the size of the processor die, which is usually only one or two square centimeters. This is likely why the CPU runs at higher temperatures than the GPU, which can often stay cooler even while dissipating more power, because its die is physically larger and the heat is less concentrated in one spot.
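
    A rough heat-flux comparison makes the point; the die areas below are assumed ballpark figures, not measured values:

    Code:
cpu_power_w, cpu_die_cm2 = 60.0, 1.2   # assumed quad-core mobile CPU die
gpu_power_w, gpu_die_cm2 = 90.0, 4.0   # assumed mid-size gaming GPU die

print(f"CPU: {cpu_power_w / cpu_die_cm2:.0f} W/cm^2")   # ~50 W/cm^2
print(f"GPU: {gpu_power_w / gpu_die_cm2:.0f} W/cm^2")   # ~22 W/cm^2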

    Heat pipes vs vapour chambers:
    Most modern notebook computers use either heat pipes or vapour chambers to move the CPU's heat to a more suitable location for dissipation. A heat pipe works much better than a solid copper rod or a copper pipe with water in it because of the principle that boiling water absorbs a lot of energy.

    Conveniently, we can make water boil at around room temperature by pulling all the air out of the pipe. Heating the pipe with a CPU boils the water into steam, absorbing a lot of heat. The steam travels to the other end of the pipe, where a heat sink and fan are attached; there the temperature falls and, without all of that extra energy, the steam condenses back into water, which travels along a wick made from copper foam inside the pipe back to the CPU, where the process repeats.
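
    A small calculation shows why evaporation is such an effective way to move heat (the latent-heat figure is an approximate textbook value):

    Code:
latent_heat_j_per_kg = 2.4e6   # water, approximate at heat-pipe-like temperatures
cpu_heat_w = 50.0              # example heat load

mass_flow_mg_per_s = cpu_heat_w / latent_heat_j_per_kg * 1e6
print(f"moving {cpu_heat_w:.0f} W takes only ~{mass_flow_mg_per_s:.0f} mg "
      f"of water evaporated per second")   # ~21 mg/s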

    The only real difference between a heat pipe and a vapour chamber is that a heat pipe is well, a pipe, and a vapour chamber is a flat space that's essentially a big flat pipe, and is better at spreading the heat across a greater area.

    Heat dissipation:
    The main ways through which laptops get rid of heat are convection, which means the heat gets transferred to the air, and radiation, which honestly doesn't work that well at temperatures you'd want to touch with your hands anyway.

    Using a fan to blow on something is *forced* convection. What you're really trying to do here is get lots of nice cool air in front of the surfaces of the radiator or heat sink to take away heat. There's an effect called the boundary layer, which means that the layer of air really close to the metal tends to stay put while the air on top keeps moving. One way to mitigate this is to disturb it with turbulence, or chaotic flow (basically, not smooth flow).

    Normally you don't want to cause turbulence, because when you think turbulence you think noise, less airflow, and so on. But in this case you want turbulence, because at the end of the day you're not trying to blow air around, you're trying to cool something down. The same principles apply to liquid cooling, actually; someone did a science fair project back in the day at school on which type of nozzle/water block texture would best cool the processor, and a lot of the shapes promoted turbulent flow.

    So, in short, things that help cool things down (a small numeric sketch follows this list):
    - Blow more air over it, and not super smoothly.
    - Make the surface area for dissipation bigger; the bigger the better, because you want maximum air-touching-metal.
    - Hotter things are easier to cool (make the heatsink hotter than the air; as close to CPU temp as possible).
    - Get the heat away from the processor chips and into the heatsink (heat pipes).
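
    Here's the small sketch promised above: Newton's law of cooling for forced convection, Q = h * A * dT, with purely illustrative values for the convection coefficient and fin area:

    Code:
def heat_removed_w(h_w_per_m2k, fin_area_m2, delta_t_k):
    """Forced-convection estimate: Q = h * A * dT."""
    return h_w_per_m2k * fin_area_m2 * delta_t_k

# doubling fin area or the fin-to-air temperature delta doubles dissipation
for h, area_m2, dT in ((60, 0.05, 25), (60, 0.10, 25), (60, 0.05, 50)):
    print(f"h={h}, A={area_m2} m^2, dT={dT} K -> {heat_removed_w(h, area_m2, dT):.0f} W")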

    Examples and notes about fans and heat sinks:
    Earlier in the thread I did see some debate about whether conjoined heat pipes are a great idea or not, and honestly, in my opinion joining them makes a decent amount of sense.
    Aside from the 'typical power imbalance' thing, where you might not load the CPU all the way while you play games, having more heat sinks usually means more fans, and more fans are a good idea even though they cost more. A useful thing to keep in mind when thinking about fans: power use by a fan increases with the cube (^3) of shaft speed, but airflow only scales linearly. Twice the fans, twice the flow, with no need to create more noise with more speed. Sure, you can twiddle the size of the fans, but there's a practical limit to the size you can fit in there. Sony put three in the Vaio Z Canvas tablet and managed to keep the processor cool under a 55 W load despite the cooling solution taking up a relatively small amount of space.
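
    A quick numeric sketch of the fan affinity laws mentioned above (the baseline numbers are made up):

    Code:
base_flow_cfm, base_power_w = 10.0, 1.0   # made-up baseline for one fan

# one fan at double speed: flow doubles, power goes up 2^3 = 8x
print(f"one fan at 2x speed : {2 * base_flow_cfm:.0f} CFM, {base_power_w * 2**3:.0f} W")
# two fans at the original speed: same total flow, only 2x the power
print(f"two fans at 1x speed: {2 * base_flow_cfm:.0f} CFM, {2 * base_power_w:.0f} W")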

    Asus advertises their machine (the Zephyrus) as having 12-volt fans (presumably super powerful). I think this is a good idea, but it is obviously very loud and really only works well for larger fans with space for larger motors.

    I think the biggest issue with cooling in laptops that I've seen over the years is that either there is insufficient heat sink area (the air doesn't have the ability to pick up much heat) or the heat pipes aren't big enough, and the heat sink is cool but the processor is at 99C. The Aorus X7 did this. I think Razer is on the right track with their vapour chamber forays on the Blade Pro 17 (GTX 1080) and the new Blade 15 Advanced, because they're able to spread the heat really, really well.

    The Apple MacBook Pro units are generally really, really good at spreading the heat out, actually. Usually with the CPU at 99C, the heat sink temperature (as measured by the SMC/EC, queried by iStat Menus) is around 80 C. This is a good result. Unfortunately the laptop is way too thin; the heat sink itself is about 5 mm thick and is just too small, even with the fans at 6,600 RPM.

    Lenovo's older machines were really good at dissipating heat because the heatsink was incredibly dense, and the air coming out was always very warm. Quite a far cry from how the Surface Book 2 15 behaves, blowing out mostly cold air.

    I have some other examples, but I'll leave them for next time since this is getting long. Of course, I've left out every single low-effort cooling solution I've seen to save space, because obviously anything with a single small heat pipe connected to a teeny, low-density fin stack and a fan with straight blades is going to be a budget, subpar solution unless the components in question are truly massive.

    As for VRM cooling, generally as long as you have a heat pipe going from the VRM parts to some radiator that's in front of a fan, you're good. I've only seen this rarely, though I do know that MSI seems to get the idea on a lot of their machines.


    3) Firmware configuration


    This is really getting long so I'm going to make this part quick.

    On Intel processors, thermal throttling (usually at 99C) as well as power throttling are supported. They're controlled through something called MSRs, or Model-Specific Registers. These are what XTU/ThrottleStop set, and they are often also set by hidden BIOS or UEFI settings.
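
    As a concrete (if Linux-specific) illustration, here is a minimal sketch of reading the package power limits that those tools adjust, via the kernel's msr module. It assumes root, `modprobe msr`, and the register layout from Intel's Software Developer's Manual; on Windows, XTU/ThrottleStop use their own drivers to reach the same registers:

    Code:
import struct

def read_msr(reg, cpu=0):
    """Read one 64-bit model-specific register via /dev/cpu/N/msr."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

power_unit_w = 1.0 / (1 << (read_msr(0x606) & 0xF))  # MSR_RAPL_POWER_UNIT, bits 3:0
pkg_limit = read_msr(0x610)                          # MSR_PKG_POWER_LIMIT
pl1_w = (pkg_limit & 0x7FFF) * power_unit_w          # long-term limit, bits 14:0
pl2_w = ((pkg_limit >> 32) & 0x7FFF) * power_unit_w  # short-term limit, bits 46:32
print(f"PL1 = {pl1_w:.1f} W, PL2 = {pl2_w:.1f} W")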

    There are a bunch of other mechanisms for which Intel refers you to the "Turbo Implementation Guide" (which seems to be under NDA). One of these is probably via the LPC bus, which the EC is connected to. The EC (SMC on a Mac) is like a little Arduino on the motherboard that controls things like fans, power and temperature. It is able to tell the CPU to throttle.

    It is quite difficult to reverse engineer and change the firmware on these without documentation or source code. It is usually an ITE chip with an 8032 ("8051-compatible") CPU core.

    Additionally, there is a software component that runs in Windows called the Intel Dynamic Platform and Thermal Framework (DPTF). This is a bunch of algorithms that are claimed to keep the device's skin temperature (casing temp) and the VRM temp from getting too high, based on some form of modelling involving Newton's law of cooling.
    Unfortunately, these parameters are often set too conservatively, or would be better served by real sensors instead of software estimates.
    Fortunately, you can disable it in a number of ways documented elsewhere (null drivers, uninstalling the drivers and locking them out with group policy, etc.).
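
    To make the idea concrete, here is a toy version of that kind of model-based skin-temperature limiting; every coefficient is invented for illustration and bears no relation to any vendor's real DPTF tables:

    Code:
def estimated_skin_temp_c(power_w, ambient_c=25.0, c_per_watt=0.9):
    """Invented steady-state model: skin temp rises linearly with power draw."""
    return ambient_c + c_per_watt * power_w

def allowed_power_w(skin_limit_c=48.0, ambient_c=25.0, c_per_watt=0.9):
    """Power budget that keeps the *estimated* skin temp under the limit."""
    return (skin_limit_c - ambient_c) / c_per_watt

print(f"estimated skin temp at 30 W: {estimated_skin_temp_c(30):.1f} C")   # 52.0 C
print(f"power budget for a 48 C skin limit: {allowed_power_w():.1f} W")    # ~25.6 W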

    For NVIDIA GPUs, the settings are usually part of the VBIOS. People have been able to flash this and change the settings before, with varying degrees of success. Please don't be the one that ends up with a blown VRM on the GPU board, though; I've seen an Alienware card where the PCB layers themselves ballooned from heat and pressure after something shorted.

    There are a number of reasons why, on a lot of platforms, these settings are locked down to begin with and set to rather conservative-feeling values; here's my best guess:

    1) The aforementioned power adapter issues. Even though machines nowadays do power sharing (drain the power adapter and the battery at the same time for more power), if you do that, the battery drains. And yes, people complain, like they did about the Surface Book 2, which does this despite throttling.
    2) To protect the VRM from thermal damage when it has no thermal monitoring, which is likely if they are not using an IPM or a dedicated sensor.
    3) To prevent the computer casing from getting too hot
    4) To prevent too much fan noise
    5) Because they are using a reference design and cannot be bothered to modify it.
    6) To prevent the processor from bouncing off the throttle temperature.
    7) To ensure stability after many years of operation, when the heat sink or fan is clogged?



    Honestly, if you were to ask me what the biggest reason is that laptops aren't reaching their potential these days, I don't think it's necessarily the fault of any one of the things in this list. I think it would take a very deliberate effort to solve the throttling issues most computers these days experience, and it would definitely be pushing the limits of engineering. I suppose to us, as hobbyists, tinkerers and makers, it would be a fun project to try to design a computer without these constraints, but a lot of the improvements I suggest would add a lot of cost, and for major OEM players I suppose this is the trade-off that we're stuck with.


    But anyway this seems to be sufficiently long for now, I know this might not be super super relevant but I hope someone might find it interesting as a bit of context.

    Thanks for reading!
     
    jclausius and Brad331 like this.
  4. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    343
    Likes Received:
    159
    Trophy Points:
    56
    I think that vapor chambers are better when you have large areas that you need to cool. If a heatpipe needs to cover a VRM group above the CPU, you either have to extend the heatpipes to make contact, or use some other means of getting the heat from one place to another. A downside is that everything touching the vapor chamber gets as hot as the hottest single part, due to the way heat spreads. Heatpipes are better at directing heat towards certain areas.

    I feel like the best way to cool would be a combination of both: a vapor chamber for the VRMs and other components, connected to a heatpipe shared with the CPU and GPU.

    My ideal configuration would be two thick heatpipes connected to a vapor chamber system that is attached to the motherboard. The heatpipes help direct heat to the cooling systems, while the vapor chambers direct heat to the heatpipes.
     
  5. Larry Q

    Larry Q Newbie

    Reputations:
    9
    Messages:
    9
    Likes Received:
    6
    Trophy Points:
    6
    I think the ideal way to cool VRMs would be to provide a dedicated heat pipe and a copper spreader plate, funneling heat to a heat sink that is separated from the rest of the components. A vapour chamber is likely not necessary for the several watts of heat generated by the VRMs.

    I think the biggest benefit of using a vapour chamber for the CPU and GPU would be easier integration of a much larger heat sink area into the notebook. I think a much higher fin density, as well as a 'longer' fin path for air to pass through, would also greatly improve the device's ability to dissipate heat.
     
  6. Papusan

    Papusan JOKEBOOKs Sucks! Dont waste your $$$ on FILTHY

    Reputations:
    28,060
    Messages:
    25,338
    Likes Received:
    45,368
    Trophy Points:
    931
    upload_2019-1-31_19-52-23.png
    Yeah, you finally hit the nail on the head. +rep


    See also my next reply...
    ---------------------------------------

    You're wrong! And I fixed it for yooo. @Ultra Male:oops: @Mr. Fox :hi:

    http://forum.notebookreview.com/thr...5-owners-lounge.815492/page-324#post-10849173

    http://forum.notebookreview.com/thr...enware-15-and-17.826796/page-34#post-10845555

    As you can see from my posts... Alienware, or better said DELL, ain't much better than Razer. They lock the BIOS down like Fort Knox and force you to use a *proprietary* Windows app for maintaining/adjusting power settings... On top of that, it's locked down just like Razer. CIGAR.gif
     
    Last edited: Jan 31, 2019
  7. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    343
    Likes Received:
    159
    Trophy Points:
    56
    VRMs do get hot though, like 98 degrees or more. For something that hot, I think a vapor chamber connecting it to the CPU and GPU would certainly be a good idea. Or maybe just a dedicated passive heatsink would work.

    I think just having more metal is the best solution, so higher fin density
     
  8. Papusan

    Papusan JOKEBOOKs Sucks! Dont waste your $$$ on FILTHY

    Reputations:
    28,060
    Messages:
    25,338
    Likes Received:
    45,368
    Trophy Points:
    931
    Lovely by Dell. First they push on you a notebook with so-and-so much performance, then kill the performance afterwards with a firmware update (crippling Nvidia graphics performance by adding a lower temperature throttle threshold). Damn nice. This is a scam!
    upload_2019-2-7_1-22-22.png
    upload_2019-2-7_1-42-25.png
    ---------------------------------------------------------

    More nasty from Dell...

    Unusual throttling behavior shows the Core i7-8565U running both faster and slower than the i7-8550U

    The new Dell Inspiron 17 7786 can run 20 percent faster than the last generation Inspiron 17 7773 for the first 10 to 20 minutes before clock rates begin to dip dramatically. At that point, CPU performance stabilizes at 10 percent slower than the older Core i7-8550U CPU. This "delayed" throttling behavior is uncommon on many laptops.
     
    Last edited: Feb 6, 2019
    Ashtrix, Vasudev, Maleko48 and 3 others like this.
  9. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    343
    Likes Received:
    159
    Trophy Points:
    56
    TL;DR: They basically crippled the performance of the XPS 15. I am pretty sure you can get around the limit, but it still sucks that they force you to work around it. The Inspiron 17 has a really poorly implemented power management system.
     
    Vasudev likes this.
  10. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,420
    Messages:
    3,409
    Likes Received:
    3,696
    Trophy Points:
    331
    There is no way to get around the 75C GPU limit, as it is set in the vbios. I know Papusan salivates at the idea of Dell having any issues, but it's true that they did what I consider a bait and switch. The previous 78C limit wasn't as bad, because that's where the CPU/GPU stayed, so performance wasn't bad. Now the CPU easily heats the GPU up to 74C and causes throttling below 1000 MHz. Sure, you could gimp your CPU to likely prevent some of that, but I'm not a fan of that. My X1 Extreme doesn't have that issue (although it has other issues).
     
    ALLurGroceries, maffle and Maleko48 like this.