Power Dissipation and Power Losses; Which One is Which???

Some time ago I wrote about power losses as a means to explain why a motor driver circuit gets hot. You can read the details in the “My Chip Is Running Hot” posting. In that article, I detail how we can compute power dissipation by using the I^2*R equation (from now on, ISR for I Squared R). This is all pretty easy to digest, as we can imagine a current “I” flowing through two FETs which happen to have an RDSon “R”. Hence, filling up the equation should be a piece of cake.

A reader posed an intriguing question: what happens when we are PWMing? How do we use that equation then? The truth is the ISR equation can be used as simply as it is, or we can complicate things gargantuanly. Well, in my previous post we kept things simple, so how about we make a big mess? Hold on to your brains as I will have no mercy! OK, seriously, I will try to make this as simple to understand as possible, so please bear with me…

The question explicitly involves PWMing, but for us to make sense of the ISR equation we need to understand what happens during such PWMing. For example, we know we are turning our main FETs ON and OFF as we apply this PWM. However, we should not see the FETs as simply being OFF, because the truth is we will need to let that current flow through somewhere else. Heck, current will not just stop flowing through an inductive load because you want it to, so you had BETTER give it another path! I have already discussed this in my “Decay” posting, but will summarize what happens in the following statements:

1. Either you let a flyback diode take the current, or

2. You enable another FET to take the current. It can either be both of the opposing FETs (as in fast decay) or the opposing low side FET (as in slow decay).

Option 1 is hardly ever used anymore, as it is not the most efficient mechanism. If you do use it, though, you will need two equations: ISR during the TIME ON and I*Vf during the TIME OFF, where Vf is the diode’s forward voltage.
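To make the two-equation idea concrete, here is a minimal sketch of the average conduction loss with asynchronous (diode) recirculation. The numbers plugged in at the bottom are purely illustrative, not from any particular datasheet:

```python
# Sketch: average conduction loss with asynchronous (diode) recirculation.
# ISR applies while the FETs conduct; I*Vf applies while the diode conducts.

def p_async(i, rdson, vf, duty):
    """i in amps, rdson in ohms (both conducting FETs), vf in volts,
    duty = TIME ON as a fraction of the PWM period (0..1)."""
    p_on = i**2 * rdson * duty        # ISR during TIME ON
    p_off = i * vf * (1.0 - duty)     # I*Vf during TIME OFF
    return p_on + p_off

# Illustrative: 1 A, 370 mOhm total RDSON, 0.8 V diode drop, 75% duty.
print(p_async(i=1.0, rdson=0.37, vf=0.8, duty=0.75))  # 0.4775 W
```

Notice how the diode term dominates even at a modest 25% off-time, which is exactly why the diode path fell out of favor.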

What you will find on a fair majority of today’s available goodies is the use of an alternate FET-based path to take on the current. The reasoning is that the opposing FETs give you controlled current flow, so you can optimize things: you choose when the FETs are enabled/disabled. It will become apparent later why this is important. Just to keep the nomenclature straight: if you use the diode mechanism, since you cannot control when the diode starts/stops conducting, we will call this current recirculation asynchronous, whereas in the opposing-FET scenario, since you control enablement/disablement, we will call it synchronous (it is synchronized to your controller).

Whether we use diodes or FETs does not change the fact that we are conducting current at all times. Granted, if we use diodes, we will be adding power dissipation, because I*Vf comes out greater than ISR. Why? Because the Vf on a diode is often pretty big, whereas the RDSON on a FET is made tiny precisely for this reason! If the RDSON were huge, you might as well use the diodes and go with asynchronous decay.

With all that being said, when we use the opposing FETs to recirculate current, even if we are PWMing the outputs, we can still use the ISR equation. Really??? Your favorite device’s datasheet will give you the RDSon for both FETs. If the FETs are ON at all times (no PWM is employed), you use the ISR equation with the continuous current and the two RDSON resistances added together. For example, on a DRV8818 at room temperature, the high side FET RDSON is 220 mOhms, whereas the low side FET RDSON is 150 mOhms. During TIME ON, you would use the ISR equation with each H Bridge having an R = 370 mOhms.
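As a quick sanity check, the static (no PWM) case is one line of arithmetic. Using the DRV8818 room-temperature numbers quoted above:

```python
# Static (no PWM) dissipation per H Bridge: current always flows through
# one high side and one low side FET, so their resistances simply add.
RDSON_HS = 0.220  # high side FET RDSON in ohms (DRV8818, room temp)
RDSON_LS = 0.150  # low side FET RDSON in ohms (DRV8818, room temp)

def p_bridge_static(i):
    return i**2 * (RDSON_HS + RDSON_LS)

print(p_bridge_static(1.0))  # 0.37 W per bridge at 1 A
print(p_bridge_static(2.0))  # 1.48 W per bridge at 2 A (note the I^2!)
```

The I^2 term is why doubling the current quadruples the dissipation.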

But what happens when we PWM? Sure, we can use the very same equation as before during TIME ON, but what about during the eventual occurrence of a TIME OFF period? Here is where you will need to know whether you are using FAST or SLOW decay. If you are using FAST decay, then you can use the exact same numbers in the ISR equation whether you are on TIME ON or TIME OFF, because as you utilize opposing FETs, the resistances will match in both cases. In our previous example, if the right high side and left low side FETs were ON during TIME ON, the left high side FET and right low side FET would be ON during TIME OFF.

If you are using slow decay, on the other hand, then you will need to take into consideration which FET is the one maintained ON, as the resistance term will in essence double (e.g. you will have either both low side FETs or both high side FETs conducting).

Now, some people may argue that a 50 to 100 mOhm difference is not going to change the equation results that much, so why even bother? Heck! Let’s swag it and assume 190 mOhms at all times! In my opinion that should work for a quick analysis, but what I prefer to do is assume the worst case scenario of 220 mOhms at all times. For those purists out there who want to know the power dissipation as neutrinos collide with reverse-polarized electrons and up quarks at each FET gate, you will need to take this to the next level. In that case, you will need the following equations:

P_HBRIDGE = (I^2 * RDSON_HighSideFET + I^2 * RDSON_LowSideFET) * (TIMEON% + TIMEOFF%) (if on fast decay, since the opposing pair presents the same resistances during TIME OFF)

P_HBRIDGE = (I^2 * RDSON_HighSideFET + I^2 * RDSON_LowSideFET) * TIMEON% + (I^2 * 2 * RDSON_SlowDecayFET) * TIMEOFF%

(if on slow decay), where RDSON_SlowDecayFET is the RDSON of the FET used for slow decay. You will need to check the device’s datasheet, as each H Bridge is different: some will use high side FET recirculation, while others will use low side FET recirculation during slow decay.
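The two equations above can be folded into one small function. This is a sketch, and it assumes slow decay recirculates through the low side FETs (check your device’s datasheet; some parts use the high side pair instead):

```python
def p_bridge(i, r_hs, r_ls, duty, decay, r_slow=None):
    """Per-H-Bridge conduction losses under PWM.
    i: amps, r_hs/r_ls: high/low side RDSON in ohms,
    duty: TIME ON fraction (0..1), decay: 'fast' or 'slow',
    r_slow: RDSON of the FET pair that recirculates during slow decay."""
    p_on = i**2 * (r_hs + r_ls) * duty
    if decay == 'fast':
        # The opposing pair has the same resistances during TIME OFF.
        p_off = i**2 * (r_hs + r_ls) * (1.0 - duty)
    else:
        # Two like FETs conduct, so the resistance term doubles.
        p_off = i**2 * (2 * r_slow) * (1.0 - duty)
    return p_on + p_off

# DRV8818-style numbers, 1 A, 70% duty cycle:
print(p_bridge(1.0, 0.22, 0.15, 0.70, 'fast'))               # 0.37 W
print(p_bridge(1.0, 0.22, 0.15, 0.70, 'slow', r_slow=0.15))  # 0.349 W
```

Note the fast decay result does not depend on the duty cycle at all, which is exactly what the text above said: the same resistances conduct during TIME ON and TIME OFF.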

Notice you will need to know the duty cycle, as this defines what TIME ON and TIME OFF are. To be honest, I personally would not even bother with this, because chances are that if you have a PWM, the duty cycle will be changing all the time. For example, what if you have a PID loop? What if you are microstepping? There is no way to easily model what the duty cycle will be doing at all times. Well, I can tell you it will be changing. But how? Beats me! Logically, this can all be modeled if you have the time and the resources. If you are going to go this route, make sure to rent some time on the Large Hadron Collider computer so you can take into consideration solar flares, floor vibrations as elephants stampede across the African terrain and, most importantly, quantum fluctuations caused by an alien invasion in a nearby parallel universe. Heck, those can mess up any simulation results in a flash!

OK, so now we have all the considerations (at least the ones I can think of) when it comes to TIME ON and TIME OFF. There is one dinky little problem. To think an H Bridge can only operate in TIME ON and TIME OFF is as naive as a 15 year old thinking their new Android App will surely make them an overnight millionaire. The truth is FETs cannot just be turned ON and OFF in no time. I mean, we could if we had infinite current. Do you happen to have some of that? Because that could truly make you an overnight millionaire! Forget about the Android App!

No, neither you nor I have the slightest idea how to get infinite current. And even if we had it, we shouldn’t use it, because when you use lots and lots of current, parasitics start to creep up from the very same fiery pits of electro-hell! Remember that ISR applies to everything, and that includes traces, PCB structures, bond wires, etc. If you push a heinous current into the transistor gate so you can enable it in no time, what was construed as GND will most likely no longer behave like the aforementioned GND. This is what is called ground bounce, and although there are ways to diminish the problem, the real solution is to ensure current does not change abruptly. In other words, we make sure the FETs are not turned ON and OFF as fast as we would want, i.e. in an infinitesimally small time.

NOTE: Another problem with these high current spikes is EMI (Electro Magnetic Interference), but that is Harry Potter Technology so I am not even going to go there…

OK, if we cannot switch our FETs ON and OFF in zero femtoseconds, then we have to incorporate yet another period of time to turn the FETs ON and OFF. In other words, what we need is a transition. And these transitions happen to have a name. And a last name too! Exaggerated in the image below (as in real life we try to make these times as small as siliconly possible), when the power output goes HI, the transition is referred to as the rising edge, and how long it takes for the FET to reach full enablement (and hence its lowest RDSON value) is the Rise Time. By the same token, when the output is made LO, we are looking at the falling edge, and how long it takes for the FET to be fully disabled is measured as the Fall Time. The device’s datasheet should tell you what these time parameters are, as you will need them to compute the next part of the puzzle.

Rise Fall Times

The truth is the ISR equation cannot be used for these transitions. In fact, it makes no sense to look at the FET as a resistor during this period of time, because chances are we do not know its resistance. We know it is somewhere between “infinity” (or its leakage resistance when not conducting) and its subsequent RDSON, but what exactly? Who knows! What we do know, however, is the voltage across it and the current through it. We also know how long we stay in this region, and as we look at the picture it all draws, we can recognize a triangle. I’ll be darned!!! My third grade math class is actually useful for something! If we compute the area of that triangle, we get the energy dissipated during the transition. You get one triangle for the Rise Time and one for the Fall Time, and you will need to account for these on a per FET basis. Finally, all of this mumbo jumbo happens as many times per second as you transition. You know your PWM frequency, so VOILA! The lovely equation forms below:

Computing Switching Losses

PSwitchingLosses = (1/2 * RT * I * V + 1/2 * FT * I * V) * FREQ

Where:

RT is the Rise Time in seconds

I is the average current flowing through the system, in Amps

V is the system’s voltage (often referred to as VM), in Volts

FT is the Fall Time in seconds

and FREQ is the Frequency in Hz
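Plugging some typical numbers into the equation above shows just how small this term usually is. The 50 ns edges, 1 A and 24 V below are illustrative values, not tied to any specific device:

```python
def p_switching(rt, ft, i, v, freq):
    """Switching losses: each triangle's area (1/2 * time * I * V) is the
    energy of one edge; multiply by edges per second (the PWM frequency)."""
    return (0.5 * rt * i * v + 0.5 * ft * i * v) * freq

# Illustrative: 50 ns rise/fall, 1 A, 24 V supply, 30 kHz PWM.
print(p_switching(rt=50e-9, ft=50e-9, i=1.0, v=24.0, freq=30e3))  # 0.036 W
```

Compare that 36 mW with the 370 mW conduction loss from the earlier DRV8818 example, and you can see why switching losses are so often swept under the rug at these frequencies.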

Also known as switching losses, or the losses associated with the power stage’s continuous switching, these are often ignored, as they are most of the time negligible compared with the actual ISR component. Why would that be true? For starters, really efficient power stages have Rise and Fall times so small (think nanoseconds) that the result of this equation is tiny. Granted, these power stages have to be designed with the utmost care or the ground bounce is going to knock you out of your socks, but if done properly, you can have a system for which the switching losses are literally not worth worrying about. Either way, even on an amazing power stage you should expect Rise and Fall times on the order of 50 ns, whereas more economical versions will be designed anywhere in the vicinity of 100 ns to 300 ns. So as you can see, this time is rather small.

The other reason switching losses are often ignored is that the switching frequency is not that high. Notice that the switching losses are directly proportional to the frequency, so clearly the higher the frequency, the worse the power dissipation gets! However, motor H Bridges are hardly ever PWM’d at anything more than 30 KHz. In some systems, 30 KHz is prohibitively large and you see frequencies lower than 20 KHz. Heck, if it is a cheapo toy, where the cost of the processor has been squeezed so much it is almost negative, I can assure you the frequency will most likely be 1 KHz or 2 KHz, if that. In applications of this sort, it really doesn’t matter that much.

With all that being said, it could come to be that the switching losses do matter. For example, in some applications you will need the rise and fall times to be huge. Think of a gargantuan application like a battery operated drill or an RC plane. You would not imagine it, but currents on these guys can go up to a few dozen amps! Switching this much current ON and OFF quickly is like pulling on your nose hair. Painful like hell! You just can’t! Hence, the rise and fall times may easily stretch to as much as 500 ns or more. This same phenomenon often plagues very large drives, such as industrial drives, in which heinous currents in the hundreds of amps must be switched. In an application of this sort, going above 16 KHz is like hitting your face with a shovel.

Then there are those applications suffering from an ego boost and wanting to run at frequencies so fast, even dogs squeal like puppies. For example, some time ago a customer was inquiring about using 400 KHz. The application in question is irrelevant. In my experience, for any frequency higher than 80 KHz, the switching losses start to become noticeably high and ignoring them is no longer a good idea.

All this mayhem and I haven’t mentioned voltage. Do we care about it? Unfortunately, yes. Do note that in our lovely switching losses equation, this power dissipation is in fact directly proportional to the application voltage! So say you are driving a stepper with a 12V power supply and then the same stepper with an 80V power supply. Maybe you think that because in both cases you are putting 1A across the stepper, the power dissipation should be pretty much the same. ERRR!!! That will not be the case. The ISR component may be quite similar, but the switching losses at 80V will super squash their 12V counterparts. This is one of the tradeoffs of using higher voltages to drive a stepper: yes, you can increase current through the winding faster and hence move the stepper faster, but the switching losses will also be larger.
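The 12V-versus-80V comparison is easy to see in numbers. The edge times, current and frequency below are illustrative; only the supply voltage changes between the two cases:

```python
# Switching losses scale linearly with VM: same 1 A, same 100 ns edges,
# same 30 kHz PWM; only the supply voltage differs.
rt = ft = 100e-9  # rise/fall times in seconds (illustrative)
i = 1.0           # winding current in amps
freq = 30e3       # PWM frequency in Hz

p = {vm: (0.5 * rt * i * vm + 0.5 * ft * i * vm) * freq for vm in (12.0, 80.0)}
for vm, watts in p.items():
    print(vm, watts)  # 12 V -> ~0.036 W, 80 V -> ~0.24 W, i.e. 80/12 = ~6.7x
```

Same current, same stepper, yet the 80V switching losses are nearly seven times larger, which is exactly the tradeoff described above.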

Are you still with me? Is your brain in pain? Don’t worry, I am done! Hopefully you now understand the implications of adding a PWM to your H Bridge. I truly hope I didn’t scare you away…

2 comments for “Power Dissipation and Power Losses; Which One is Which???”

  1. August 5, 2012 at 6:12 pm

Thank you! Does this phenomenon occur in BJTs too? If so, why are inverter BJT circuits used to improve the rise/fall times of the waveform? In my case, I was recommended to insert a BJT inverter between the output of the comparator and the input of the MOSFET. There is a resistor connected between the collector and power supply. This resistor ends up making an RC circuit with the input capacitance of the MOSFET and due to a time constant of 1uS, the rise/fall time is much faster. Does this make sense? Also, do slow rise/fall times affect the switch speed of a transistor? Exactly, an inverter circuit would be able to source/sink more current than the comparator, allowing the MOSFET to switch on and off more quickly, meaning that it will be in the linear range for a shorter amount of time, thus reducing switching losses.

  2. avayan
    August 6, 2012 at 11:36 pm

I don’t think this phenomenon applies to BJTs, but to be honest I am not a BJT expert. What I know is this: a bipolar transistor is not modeled as a resistor whose resistance value you control by opening and closing the N (or the P) channel. That is how FETs work. A bipolar transistor works by creating an area of polarization which enhances current flow across a structure that would otherwise be a reversed diode. With that being said, however, bipolar transistors still have losses, or better stated, power dissipation. The saturation voltage VCE is about 0.2V, and if you multiply that by the current, that gives you the amount of Watts dissipated in the BJT.

The BJT inverter is basically a pre-driver, and it is necessary to supply large currents into the FET gate. The FET can only turn on as fast as the current you pump into its structure, which is in essence a cap, allows. The more current you pump in, the faster the FET turns on. This is why this pre-drive stage is precisely what defines the rise and fall times.

The resistor at the gate is also crucial, as just like you described, it will form an RC filter and slow the turn-on response. In fact, with external FET drivers, many designers will put a resistor in series with the gate precisely to be able to tune the rise and fall times. If these times are too fast, the amount of current can cause ground bounce and bring all sorts of havoc into the design.
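To put a number on that RC time constant, here is a quick sketch. The resistor and capacitance values are illustrative guesses, not taken from your circuit:

```python
# Rough RC sketch: the series gate resistance and the FET input
# capacitance set the time constant that dominates the edge speed.
r_gate = 100.0  # gate resistor in ohms (illustrative)
c_iss = 10e-9   # FET input capacitance in farads, 10 nF (illustrative)

tau = r_gate * c_iss
print(tau)  # 1e-06, i.e. a 1 us time constant like the one you described
```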

The rise and fall times ARE the switch speed of that transistor. Basically, you want to tune this time to be as short as possible without causing other problems such as EMI, ground bounce, etc.

Whether an inverter circuit has more current capability than a comparator will depend on the inverter circuit and the comparator. Some comparators can supply lots of current very fast. It all depends on how the comparator output has been designed. Keep in mind that a comparator is often a general purpose device. There are high bandwidth comparators and operational amplifiers which switch faster, so I imagine they should be able to drive a FET faster as well (as long as they can supply the current the power FET requires!).

But yes, if the MOSFET is enabled/disabled quicker, then the switching losses are reduced. Do note you will increase EMI and ground bounce by definition. Not to mention you will need more current, and the power supply will need to be able to supply it without incurring a voltage droop.

    Nothing is free in this Universe 😉
