

Update to PSU Testing 2015: A Minor Change


Today I want to discuss a minor change in our PSU testing procedures, and how they have evolved since our How We Test PSUs pipeline post from 2014.

To date, all of our testing has been done in accordance with Intel's Power Supply Design Guide for Desktop Form Factors and with the Generalized Test Protocol for Calculating the Energy Efficiency of Internal AC-DC and DC-DC Power Supplies. These two documents describe in detail how the equipment should be interconnected, how loading should be performed (as the power lines should not just be loaded randomly), and the basic methodology for the acquisition of each data set.

However, not all of our testing can be covered and/or endorsed by these guidelines.

Even though these documents are just a few years old, their methods fail to account for modern "enthusiast-grade" switched-mode computer power supplies. The industry has been making leaps in the creation of more energy-efficient devices, continuously lowering their power requirements. Nowadays, the vast majority of computers that require very powerful PSUs simply employ multiple power-hungry components, such as numerous graphics cards. As the majority of these components require a 12 V source, PSU manufacturers have been continuously driving the 12 V output of their units upwards, while the 3.3V/5V outputs have remained static or are even getting weaker. There are many design rules that modern "enthusiast-grade" PSUs no longer adhere to, such as the current safety limits and the maximum size of the chassis, but this particular change is what creates a problem with the generalized test protocol.

Furthermore, nearly all switched-mode power supplies with multiple voltage rails will exceed their maximum rated power output if all the rails are loaded to their maximum rated current. This includes nearly every PSU ever made for a PC: it is not possible to load every rail (3.3V, 5V, 12V, 5VSB, -12V) to its maximum rated current without severely overloading the PSU. This is why the derating factor D exists: it scales the contribution of each rail in relation to the maximum output of the PSU. The derating factor for a computer PSU always has a value lower than one. A lower derating factor indicates overly powerful lines in relation to the total output of the PSU, which in practice is a good thing. A value greater than one would suggest that fully loading every rail does not exceed the maximum power output of the PSU, which is never the case with a PC power supply.

According to the generalized test protocol, the derating factor D of the 3.3V/5V lines is, simply put, the maximum rated power output of the unit divided by the sum of the power output ratings of each individual power line.
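Written out, in our own shorthand where $P_{\text{rated}}$ is the unit's maximum rated output and $V_i I_{i,\max}$ is the power rating of each individual rail, that amounts to:

$$D = \frac{P_{\text{rated}}}{\sum_i V_i \, I_{i,\max}}$$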

However, this formula frequently leads to the overloading of the 3.3V/5V lines with >1 kW PSUs. The effect is particularly severe in some high-efficiency units, in which the designers moved the 3.3V/5V DC-to-DC conversion circuits onto the connector PCB, reducing their maximum power output significantly. Although some PSUs would operate normally even if their 3.3V/5V lines were overloaded, the continuous weakening of the 3.3V/5V lines in comparison to the 12 V line resulted in PSUs appearing in our labs that could not operate under such conditions.

The most extreme example of them all would be the Andyson Platinum R 1200W PSU that we reviewed just recently. This PSU has a lopsided design: its 3.3V/5V rails can output just 100W combined, which is nothing compared to the 1200W that the single 12V rail can output. Furthermore, the current rating of the 5V line alone can reach the maximum output reserved for both the 3.3V and 5V rails. This great imbalance creates an issue with the generalized PSU testing protocol, which was developed for PSUs that do adhere to the design guide standards. If we were to load that PSU using the standard derating factor formula, it would create a load of over 150 Watts on the 3.3V and 5V rails, which are rated for a combined output of just 100 Watts. Other units did work with their 3.3V and 5V rails slightly overloaded, but in this case the Platinum-rated unit failed long before it reached its maximum output. Therefore, it was obvious that the official derating factor calculation method could no longer be used for modern high-output PC PSUs.
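To put some rough numbers on that, here is a minimal sketch of how the standard derating factor translates into rail loads for a unit of this type. The rail ratings below are illustrative assumptions chosen to resemble such a lopsided 1200W design, not the Andyson's actual specification sheet:

```python
# Hypothetical 1200W unit with a ~100W-limited 3.3V/5V group (assumed ratings,
# not the actual Andyson Platinum R 1200W specifications).
rails = {
    "3.3V": (3.3, 25.0),    # (voltage, max rated current in A)
    "5V":   (5.0, 25.0),
    "12V":  (12.0, 100.0),
    "5VSB": (5.0, 3.0),
    "-12V": (12.0, 0.5),
}
rated_output = 1200.0  # W

# Standard derating factor: maximum rated output divided by the sum of the
# individual rail power ratings.
D = rated_output / sum(v * i for v, i in rails.values())

# Load the protocol would place on the 3.3V and 5V rails at 100% load.
minor_load = sum(D * v * i for name, (v, i) in rails.items() if name in ("3.3V", "5V"))
print(f"D = {D:.3f}, 3.3V/5V load at 100%: {minor_load:.0f} W (vs. ~100 W rated)")
```

With these assumed ratings the 3.3V/5V group ends up asked for roughly 170 W at full load, well past what such a design can deliver, which is exactly the kind of overload described above.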

As a result, we had to alter the derating factor formula in order to compensate for real-world conditions. Without at least two significant energy consumers, no modern system requires > 500 Watts; greater power demand suggests the presence of devices that load only the 12 V line (e.g. GPUs, CPUs, liquid cooling pumps, Peltier effect coolers, etc.). After certain calculations and research, for units with a rated power output over 500 Watts, we will be using a modified formula.
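The exact expression here is our reconstruction from the description that follows: the power ratings of the 3.3V and 5V lines are weighted at one half in the denominator, so that, in the same shorthand as above, the adjusted derating factor takes roughly this form:

$$D = \frac{P_{\text{rated}}}{0.5\,(P_{3.3\text{V}} + P_{5\text{V}}) + P_{12\text{V}} + P_{5\text{VSB}} + P_{-12\text{V}}}$$

where each $P$ term is that rail's rated power output.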

This effectively halves the impact of the 3.3V/5V lines on the calculation of the derating factor, imposing the difference on the 12V line. It does not mean that their load is being halved, only that their contribution to the total output of the PSU is now considered to be of lower importance. Furthermore, the loading criterion of the 3.3V/5V lines for a load rating X (as a percentage of the unit's maximum output) changes as well.
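Reconstructing again from the description, with $I_{\max}$ the rated current of the 3.3V or 5V rail, $X$ the load rating as a fraction of the unit's maximum output, and $D$ the adjusted derating factor above, the criterion presumably becomes something along the lines of:

$$I_{3.3\text{V}/5\text{V}} = 0.5 \cdot X \cdot D \cdot I_{\max}$$

carrying the halved weighting through to the current actually set on those rails.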

For the 12 V line(s), the loading criterion remains unchanged.

This formula results in a more realistic representation of the requirements of actual systems, at least up to the power outputs realizable today.
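Carrying the same hypothetical unit from the earlier sketch through the adjusted method (the 50% weighting and the halved current setting are, again, our reconstruction of the new formula), the 3.3V/5V group now lands below its combined rating while the 12V line absorbs the difference:

```python
# Same hypothetical 1200W unit as before, now with the adjusted derating factor.
rails = {
    "3.3V": (3.3, 25.0),    # assumed ratings, as in the earlier sketch
    "5V":   (5.0, 25.0),
    "12V":  (12.0, 100.0),
    "5VSB": (5.0, 3.0),
    "-12V": (12.0, 0.5),
}
rated_output = 1200.0  # W
minor = ("3.3V", "5V")  # rails whose contribution is weighted at one half

# Adjusted derating factor: 3.3V/5V power ratings halved in the denominator.
D = rated_output / sum((0.5 if n in minor else 1.0) * v * i for n, (v, i) in rails.items())

# Loads at 100% of the unit's rated output, with the halved criterion on 3.3V/5V.
minor_load = sum(0.5 * D * v * i for n, (v, i) in rails.items() if n in minor)
major_load = D * 12.0 * 100.0
print(f"D = {D:.3f}, 3.3V/5V load: {minor_load:.0f} W, 12V load: {major_load:.0f} W")
```

The loads across all rails still sum to the unit's rated output at 100%, but under these assumptions the 3.3V/5V group stays just below its combined rating instead of being pushed far past it.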

Furthermore, there are no guidelines on how transient tests should be performed, and the momentary power-up cross-load testing that Intel recommends is far too lenient: Intel recommends that the 12 V line should be loaded to < 0.1 A and the 3.3V/5V lines up to just 5 A. We also perform two cross-load tests of our own design:

In test CL1, we load the 12 V line up to 80% of its maximum capacity and the 3.3V/5V lines with 2 A each.
In test CL2, we load the 12 V line with 2 A and the 3.3V/5V lines up to 80% of their maximum combined capacity. (A numerical sketch of both tests follows.)
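For a concrete sense of what those two tests translate to on a given unit, here is a small sketch using the same assumed ratings as the examples above:

```python
# Cross-load tests CL1 and CL2 on the hypothetical unit from the earlier sketches.
v_33, v_5 = 3.3, 5.0
i_12_max = 100.0            # assumed 12V rail rating in A
minor_combined_max = 100.0  # assumed combined 3.3V/5V rating in W

# CL1: 12V at 80% of its maximum capacity, 3.3V and 5V at 2 A each.
cl1_12v_amps = 0.8 * i_12_max
cl1_minor_watts = 2.0 * v_33 + 2.0 * v_5
print(f"CL1: 12V at {cl1_12v_amps:.0f} A (~{12.0 * cl1_12v_amps:.0f} W), 3.3V/5V drawing {cl1_minor_watts:.1f} W")

# CL2: 12V at 2 A, 3.3V/5V at 80% of their maximum combined capacity.
cl2_minor_watts = 0.8 * minor_combined_max
print(f"CL2: 12V at 2 A (24 W), 3.3V/5V at {cl2_minor_watts:.0f} W combined")
```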

The End Result

If that all sounded like jargon, the takeaway is this: due to user demand for high-wattage power supplies, manufacturers have altered the design of their products beyond what the specification documents describe, in order to balance cost and engineering effort.

A power supply should maintain a balance between the 3.3V/5V and the 12V rails, such that when one is increased the other rises as well. However, this doesn't happen with high-wattage power supplies the way the specifications say it should. Normally the advertised power rating would be based on this balance, but it doesn't have to be, which means that some designs are not like others: the level of balance used to reach the power rating differs from unit to unit.

If the OEMs did adhere to the specifications, the cost of the end product would increase to accommodate the higher-wattage 3.3V/5V outputs, which is bad for a product that sells on margins. Meanwhile, the extra power that users actually need is all on the 12V rail, so keeping parity with the guidelines is perhaps a fruitless task. But this means the products do not follow the guidelines, much in the same way that some cars disregard emissions guidelines in various markets. The end result is that by testing strictly against the guidelines, the results become erroneous because the device isn't built to that strict specification.

Nevertheless, the design underneath still works for the user, just as a car with high emissions still drives like a car; you just can't test it like a normal one, because some of the guidelines no longer apply. As a result, we're going to adjust our testing on a sliding scale. If we didn't, some units that would work happily in a real system might fail on our test-bed well before we hit 100% load. The crux is that 'guidelines' are ultimately not 'rules', and these guidelines can be blurred without proper inspection and preparation.