A reader poses an intriguing question: what happens when we are PWMing? How do we use that equation then? The truth is the I²R equation can be used as simply as it is, or we can complicate things to gargantuan proportions. Well, in my previous post we kept things simple, so how about we make a big mess? Hold on to your brains as I will have no mercy! OK, seriously, I will try to make this as simple to understand as possible, so please bear with me…
The question explicitly involves PWMing, but for us to make sense of the I²R equation we need to understand what happens during such PWMing. For example, we know we are turning our main FETs ON and OFF as we apply this PWM. However, we should not see the FETs as being OFF, because the truth is we will need to let that current flow somewhere else. Heck, current will not just stop flowing through an inductive load because you want it to, so you had BETTER give it another path! I have already discussed this in my “Decay” posting, but will summarize what happens in the following statements:
1. Either you let a flyback diode take the current, or
2. You enable another FET to take the current. It can either be both of the opposing FETs (as in fast decay) or the opposing low side FET (as in slow decay).
Option 1 is hardly ever used anymore as it is not the most efficient mechanism. If you use it, though, you will need to use two equations: I²R during the TIME ON and I*Vf during the TIME OFF, where Vf is the diode’s forward voltage.
What you will find on a fair majority of today’s available goodies is the use of an alternate FET-based path to take on the current. The reason behind this logic is that if you use the opposing FETs you can obtain controlled current flow and optimize things. In other words, you can choose when the FETs are enabled/disabled. It will become apparent later why this is important. Just to keep with the nomenclature: if you use the diode mechanism, since you cannot control when the diode starts/stops conducting, we will call this current recirculation asynchronous, whereas in the opposing FET scenario, since you control enablement/disablement, we will call this current recirculation synchronous (it is synchronized to your controller).
Whether we use diodes or FETs does not change the fact that we are conducting current at all times. Granted, if we use diodes we will be adding power dissipation, in the form of I*Vf being greater than I²R. Why? Well, because the Vf on a diode is often pretty big, whereas the RDSON on a FET is made to be tiny precisely because of this! If the RDSON was huge, you might as well use the diodes and go with the asynchronous decay.
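Just to put some rough numbers behind that claim, here is a quick sketch. The 0.8 V diode drop, 150 mOhm RDSON and 1 A of current are placeholder values I picked for illustration, not figures from any particular datasheet:

```python
# Rough comparison of recirculation losses: flyback diode (asynchronous)
# vs opposing FET (synchronous). Values below are illustrative assumptions.
I = 1.0        # winding current, Amps
VF = 0.8       # assumed diode forward voltage, Volts
RDSON = 0.150  # assumed FET on-resistance, Ohms (150 mOhms)

p_diode = I * VF         # conduction loss through the diode: I * Vf
p_fet = I ** 2 * RDSON   # conduction loss through the FET: I^2 * R

print(f"Diode path: {p_diode:.2f} W, FET path: {p_fet:.2f} W")
# With these numbers the diode burns roughly five times more power than the FET.
```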
With all that being said, when we use the opposing FETs to recirculate current, even if we are PWMing the outputs, we can still use the I²R equation. Really??? On your favorite device’s datasheet you will be given the RDSON for both FETs. If you are assuming the FETs are ON at all times (no PWM is employed), you use the I²R equation with the continuous current and the two RDSON resistances added together. For example, on a DRV8818 at room temp, the high side FET RDSON is 220 mOhms whereas the low side FET RDSON is 150 mOhms. During TIME ON, you would use the I²R equation with each H Bridge having an R = 370 mOhms.
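As a quick sanity check, here are those TIME ON numbers crunched in a few lines of Python. The RDSON values are the room temperature DRV8818 figures quoted above; the 1.5 A is just an example current I made up:

```python
# I^2 * R conduction loss for one H Bridge with both FETs ON (no PWM).
RDSON_HS = 0.220   # DRV8818 high side FET RDSON at room temp, Ohms
RDSON_LS = 0.150   # DRV8818 low side FET RDSON at room temp, Ohms
I = 1.5            # example winding current, Amps (made-up value)

p_on = I ** 2 * (RDSON_HS + RDSON_LS)   # 2.25 * 0.370 = ~0.83 W
print(f"Conduction loss per H Bridge: {p_on:.2f} W")
```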
But what happens when we PWM? Sure, we can use the very same equation as before during TIME ON, but what about during the eventual occurrence of a TIME OFF period? Here is where you will need to know whether you are using FAST or SLOW decay. If you are using FAST decay, then you can use the exact same numbers on the I²R equation whether you are on TIME ON or TIME OFF, because as you utilize opposing FETs, the resistance will match in both cases. In our previous example, if the right high side and left low side FETs were ON during TIME ON, the left high side FET and right low side FET would be ON during TIME OFF.
If you are using slow decay, on the other hand, then you will need to take into consideration which FET is the one maintained ON, as the resistance term will in essence be twice that FET’s RDSON (e.g. you will have either both low side FETs or both high side FETs conducting).
Now, some people may argue that a 50 to 100 mOhms difference is not going to change the equation results that much, so why even bother? Heck! Let’s swag it and assume 190 mOhms per FET at all times! In my opinion that should work for a quick analysis, but what I prefer to do is assume the worst case scenario of 220 mOhms at all times. For those purists out there, who want to know what the power dissipation is as neutrinos collide with reverse polarized electrons and up quarks at each FET gate, you will need to take this to the next level. In this case, you will need to use the following equations:
P_HBRIDGE = (I^2*RDSON_HighSideFET + I^2*RDSON_LowSideFET)*TIMEON% + (I^2*RDSON_HighSideFET + I^2*RDSON_LowSideFET)*TIMEOFF% (if on fast decay)
P_HBRIDGE = (I^2*RDSON_HighSideFET + I^2*RDSON_LowSideFET)*TIMEON% + (I^2*RDSON_SlowDecayFET)*TIMEOFF%*2
(if on slow decay), where RDSON_SlowDecayFET is the RDSON of the FET that is used during slow decay. You will need to check the device’s datasheet as each H Bridge is different and some will use high side FET recirculation, while others will use low side FET recirculation during slow decay.
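If you want to see those two equations in action, here is a small sketch reusing the DRV8818 room temperature numbers from before. I am assuming low side FET recirculation during slow decay, and the 1.5 A current and 70% duty cycle are made-up example values:

```python
# Duty-cycle weighted conduction losses per H Bridge, fast vs slow decay.
# Assumes low side FET recirculation during slow decay (check your datasheet).
RDSON_HS = 0.220            # high side FET RDSON, Ohms (DRV8818, room temp)
RDSON_LS = 0.150            # low side FET RDSON, Ohms (DRV8818, room temp)
I = 1.5                     # example winding current, Amps
TIME_ON = 0.70              # example duty cycle: 70% ON
TIME_OFF = 1.0 - TIME_ON    # 30% OFF

# Fast decay: an HS + LS pair conducts during both TIME ON and TIME OFF.
p_fast = (I**2 * RDSON_HS + I**2 * RDSON_LS) * TIME_ON \
       + (I**2 * RDSON_HS + I**2 * RDSON_LS) * TIME_OFF

# Slow decay: HS + LS pair during TIME ON, two low side FETs during TIME OFF.
p_slow = (I**2 * RDSON_HS + I**2 * RDSON_LS) * TIME_ON \
       + (I**2 * RDSON_LS) * TIME_OFF * 2

print(f"Fast decay: {p_fast:.2f} W, slow decay: {p_slow:.2f} W")
```

Notice the two results are not wildly different, which is why the quick swag from before is usually good enough.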
Notice you will need to know what the duty cycle is, as this will define what TIME ON and TIME OFF are. To be honest, I personally would not even bother with this, because chances are that if you have a PWM, the duty cycle will be changing at all times. For example, what if you have a PID loop? What if you are microstepping? There is no way to easily model what the duty cycle will be doing at all times. Well, I can tell you it will be changing. But how? Beats me! Logically, this can all be modeled if you have the time and the resources. If you are going to go this route, make sure to rent some time at the Large Hadron Collider computer so you can take into consideration solar flares, floor vibrations as elephants stampede across the African terrain and, most importantly, quantum fluctuations caused by an alien invasion in a nearby parallel universe. Heck, those can mess up any simulation results in a flash!
OK, so now we have all the considerations (at least the ones I can think of) when it comes to TIME ON and TIME OFF. There is one dinky little problem. To think an H Bridge can only operate in TIME ON and TIME OFF is as naive as a 15 year old thinking their new Android App will surely make them an overnight millionaire. The truth is FETs cannot just be turned ON and OFF in no time. I mean, we could if we had infinite current. Do you happen to have some of that? Because that could truly make you an overnight millionaire! Forget about the Android App!
No, neither you nor I have the slightest idea of how to get infinite current. And even if we had it, we shouldn’t use it, because when you use lots and lots of current, parasitics start to creep up from the very same fiery pits of electro-hell! Remember that I²R applies to everything, and that includes traces, PCB structures, bond wires, etc. If you have a heinous current flowing into the transistor gate so you can enable it in no time, what was construed as GND will most likely no longer behave like the aforementioned GND. This is what is called Ground Bounce, and although there are ways to diminish this problem, the real solution is to ensure current does not change abruptly. In other words, we make sure the FETs are not turned ON and OFF as fast as we would want, i.e. in an infinitesimally small time.
NOTE: Another problem with these high current spikes is EMI (Electro Magnetic Interference), but this is Harry Potter Technology so I am not even going to go there…
OK, if we cannot switch our FETs ON and OFF in as little as zero femtoseconds, then this means we have to incorporate yet another period of time to turn the FETs ON and OFF. In other words, what we need is a transition. And these transitions happen to have a name. And a last name too! Exaggerated in the image below (as in real life we try to make these times as small as siliconly possible), when the power output goes HI, such transition will be referred to as the rising edge, and how long it takes for the FET to reach full enablement (and hence its lowest RDSON value) is the Rise Time. By the same token, when the output is made LO, then we are looking at the falling edge, and how long it takes for the FET to be fully disabled is measured as the Fall Time. The device’s datasheet should tell you what these time parameters are, as you will need them to compute the next part of the puzzle.
The truth is the I²R equation cannot be used for these transitions. In fact, it makes no sense to look at the FET as a resistor during this period of time, because chances are we do not know its resistance. We know it is something in between “infinity” (or its leakage resistance when not conducting) and its subsequent RDSON, but what? Who knows! What we know, however, is the voltage across it, and the current. We also know how long we are in this region, and as we look at the picture all of this draws, we can recognize a triangle. I’ll be darned!!! My third grade math class is actually useful for something! What we have here is a triangle, and if we compute the area below the triangle, that is the energy dissipated during this transition. So you will get one triangle for the Rise Time and one for the Fall Time. Then you will need to account for two of these on a per FET basis. And finally, you will get all of this mumbo jumbo as many times as you transition on a per second basis, which turns that energy into power. You know your PWM frequency, so VOILA! The lovely equation forms below:
PSwitchingLosses = (1/2*RT*I*V + 1/2*FT*I*V)*FREQ
Where:
RT is the Rise Time in seconds
I is the average current flowing through the system, in Amps
V is the system’s voltage (often referred to as VM), in Volts
FT is the Fall Time in seconds
and FREQ is the Frequency in Hz
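Here is the same equation wrapped in a few lines of Python so you can plug in your own numbers. The 100 ns edges, 1 A, 24 V and 30 KHz below are assumed, typical-looking values, not figures from any specific device:

```python
# Switching losses: the rise and fall "triangle" energies times the PWM frequency.
def switching_losses(rt, ft, i, v, freq):
    """PSwitchingLosses = (1/2*RT*I*V + 1/2*FT*I*V)*FREQ, in Watts."""
    return (0.5 * rt * i * v + 0.5 * ft * i * v) * freq

# Example with assumed values: 100 ns edges, 1 A, 24 V, 30 KHz PWM.
p_sw = switching_losses(rt=100e-9, ft=100e-9, i=1.0, v=24.0, freq=30e3)
print(f"Switching losses: {p_sw * 1000:.0f} mW")   # about 72 mW
```

Compare that against the I²R numbers from before and you can already guess why these losses are so often ignored.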
Also known as Switching Losses, or the losses associated with the power stage’s continuous switching, these are often ignored as they are most of the time negligible when compared with the actual I²R component. Why would that be true? For starters, really efficient power stages will have Rise and Fall times that are so small (think nanoseconds), the result of this equation is tiny. Granted, these power stages have to be designed with the utmost care or the ground bounce is going to knock you out of your sockets, but if done properly, you can have a system for which the switching losses are literally not a problem you need to worry about. Either way, even on an amazing power stage you should expect Rise and Fall times in the order of 50 ns, whereas more economical versions will be designed with anywhere in the vicinity of 100 ns to 300 ns. So as you can see, this time is rather small.
The other reason why switching losses are often ignored is because the switching frequency is not that high. Notice that the switching losses are directly proportional to the frequency, so clearly the higher the frequency, the worse the power dissipation gets! However, motor H Bridges are hardly ever PWM’d at anything more than 30 KHz. In some systems, 30 KHz is prohibitively large and you see frequencies lower than 20 KHz. Heck, if it is a cheapo toy, where the cost of the processor has been squeezed so much it is almost negative, I can assure you the frequency will most likely be 1 KHz or 2 KHz, if at all. On applications of this sort, it really doesn’t matter that much.
With all that being said, it could come to be that the switching losses do matter. For example, on some applications you will need the rise and the fall time to be huge. Think, for example, of a gargantuan application like a battery operated drill or an RC plane. You would not imagine this, but currents on these guys can go up to a few dozen amps! To switch this much current ON and OFF quickly is like pulling on your nose hair. Painful like hell! You just can’t! Hence, the rise and the fall times may easily be taken to as much as 500 ns or more. This same phenomenon often plagues very large drives, such as industrial drives, in which heinous currents in the hundreds of amps must be switched. In an application of this sort, to go above 16 KHz is like hitting your face with a shovel.
Then there are those applications suffering from an ego boost and wanting to run at frequencies so high, even dogs squeal like puppies. For example, some time ago a customer was inquiring about using 400 KHz. What the application in question was is irrelevant. In my experience, for any frequency higher than 80 KHz, the switching losses start to become noticeably high and ignoring them is no longer a good idea.
All this mayhem and I haven’t mentioned voltage. Do we care about it? Unfortunately, yes. Do note that in our lovely Switching Losses equation, this power dissipation is in fact directly proportional to the application voltage! So say you are driving a stepper with a 12V power supply and then the same stepper with an 80V power supply. Maybe you think that because in both cases you are putting 1A through the stepper, the power dissipation should be pretty much the same. ERRR!!! That will not be the case. The I²R component may be quite similar, but the switching losses at 80V will super squash the 12V counterpart losses. This is one of the tradeoffs of using higher voltages to drive a stepper: yes, you can increase current through the winding faster and hence move the stepper faster, but the switching losses will also be larger.
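A quick back-of-the-envelope comparison makes the point. Same 1 A of current, same assumed 100 ns edges and 30 KHz PWM; only the supply voltage changes:

```python
# Switching losses scale directly with the supply voltage
# (same current, same edges, same frequency).
RT = FT = 100e-9   # assumed rise and fall times, seconds
I = 1.0            # winding current, Amps
FREQ = 30e3        # PWM frequency, Hz

for vm in (12.0, 80.0):
    p_sw = (0.5 * RT * I * vm + 0.5 * FT * I * vm) * FREQ
    print(f"VM = {vm:>4.0f} V -> switching losses = {p_sw * 1000:.0f} mW")
# Prints about 36 mW at 12 V and about 240 mW at 80 V: same ratio as the voltages.
```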
Are you still with me? Is your brain in pain? Don’t worry, I am done! Hopefully you now understand the implications of adding a PWM to your H Bridge. I truly hope I didn’t scare you away…