Monolithic Power Systems, Inc. (MPWR)

Status Update

Jan 17, 2024

Julian Meier
Marketing Manager, MPS

Hello, everyone. Thank you very much for joining our webinar today. We are going to wait for just a little bit to allow more attendees to dial in. Okay, it looks like people are joining steadily, and as we do not want those of you who were on time to wait, we will get started now. Again, thank you very much for joining our webinar, Constant On-Time Control for FPGA High Current Power Supplies. For your information, we hold webinars on a regular basis, and all recordings can be found on the MPS website. I'm Julian Meier, Marketing Manager at MPS. In this webinar, Tomás Hudson, Applications Engineer at MPS, will present constant on-time control versus current mode control, multi-phase COT to enable fast transients for large current steps, and MPS advanced power blocks for powering FPGA core rails.

Now, a bit of housekeeping before we start the webinar. All attendees are on mute. If you have any question, you can ask the question via the button located on the bottom of the Zoom meeting interface. We will answer at the end of the presentation. We are recording this webinar, and it will be available on demand within the next couple of days on our MPS website. We will also provide a PDF of the actual presentation itself. Now, I'm handing over to Tomás. Tomás, the stage is yours.

Tomás Hudson
Applications Engineer, MPS

Hi, Julian. Well, hi, everyone. Thanks a lot for the introduction, Julian. As he mentioned, my name is Tomás Hudson. I'm an applications engineer at MPS, and today I will be talking about constant on-time control for FPGA high current power supplies. Here's a quick overview of what we're going to look at today. I'm going to start with a more commonly known way of controlling power converters, which is peak current mode control. We're then going to look at why constant on-time control has taken over, especially for FPGA and ASIC power supplies.

We'll look at the advantages and challenges that this form of control entails, then go into more detail on what sets MPS's constant on-time control apart from others on the market, and a bit of what we can offer specifically for FPGA and ASIC design. So, as an introduction, let's start with peak current mode control, which is the one most of you will be familiar with, and the one we've generally been using over the years to power our supplies. Generally, current mode controls use two loops. One is the voltage loop, which is also known as the slow loop.

In the voltage loop, we measure the output voltage and use a resistive divider to generate a signal that goes through our linear compensator, and this generates the compensation signal, which you see in blue here. Then, in our second loop, the fast loop, we sense the current in the converter; this triangular waveform here in green is our inductor current, which we're sensing. From there, we go into the PWM generation circuit, which has a latch, a comparator, and a clock. So we compare both signals, and we use the clock to generate the PWM.

As you can see, the clock determines when the PWM signal goes high, when the driving signal for this MOSFET goes high. Then we compare the inductor current with the compensation signal, and once these two meet, the latch is reset and the PWM goes low. This way, we generate our PWM signal by modifying the fall time and therefore changing the duty cycle, but we maintain a steady frequency thanks to using the clock to set the frequency of the PWM signal. This is a perfectly fine way of controlling our power converters, and we can modify the duty cycle to deliver more energy to the load as necessary.

So this is fine, and it has been working for many years in the power electronics industry. However, as the loads that we want to power change, we need to find new solutions. The problem is that FPGAs, ASICs, and other similar loads require very fast transient responses. This is due to the way we use them and the way they are designed. In FPGAs, depending on how we design the circuit inside the FPGA, the load can suddenly go from a minimum current to quite a high current at a moment's notice, and we need to deliver that energy to the load instantaneously without affecting our output voltage stability. So basically, what we need is a faster transient response.

The problem we have with current mode control is twofold. The first is that we have to wait for a clock, and the second is that it's difficult to make the response as fast as we would like. So the thought process for developing a faster control method is pretty straightforward: remove the elements that give us this delay. If we eliminate the compensator, and therefore its delay, we can connect our output voltage straight into the comparator. And since we stop generating the compensation signal, we can substitute it with a reference. The second issue I mentioned is that we have to wait for a clock.

We just remove the clock signal and change the way we generate the PWM, changing the latch to a one-shot timer. And this is basically how we arrive at the idea of constant on-time control. As you can see in the change in these waveforms, we no longer change the duty cycle by changing the on-time. We fix the on-time, and all we do is modify the frequency, or the total switching period. So our one-shot timer will always drive the MOSFET for the same amount of time. We have a comparator as well, comparing the output voltage to a reference.
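The valley-comparator-plus-one-shot behavior described above can be sketched as a tiny simulation. This is a minimal illustrative model with assumed component values (1 µH, 100 µF, 5 mΩ ESR, 100 ns on-time), not a model of any MPS part:

```python
# Minimal behavioral sketch of constant on-time (COT) control of a buck
# converter, using plain Euler integration. All component values are
# illustrative assumptions, not taken from the webinar or any MPS part.
VIN, VREF = 12.0, 1.0            # input voltage and target output (V)
L, C, ESR = 1e-6, 100e-6, 5e-3   # inductor (H), output cap (F), its ESR (ohm)
T_ON, T_OFF_MIN = 100e-9, 50e-9  # fixed on-time and minimum off-time (s)
DT = 1e-9                        # simulation time step (s)

def average_period(i_load, steps=200_000):
    """Run the COT loop and return the mean switching period."""
    v_c, i_l = VREF, i_load      # start near steady state
    t_on_left = t_off_left = 0.0
    periods, last_on, t = [], None, 0.0
    for _ in range(steps):
        if t_on_left > 0:        # one-shot pulse active: high-side FET on
            v_sw = VIN
            t_on_left -= DT
            if t_on_left <= 0:
                t_off_left = T_OFF_MIN   # enforce the minimum off-time
        else:
            v_sw = 0.0
            t_off_left -= DT
            v_out = v_c + ESR * (i_l - i_load)   # output includes ESR ripple
            if t_off_left <= 0 and v_out < VREF:
                # comparator trips at the ripple valley -> fire the one-shot
                if last_on is not None:
                    periods.append(t - last_on)
                last_on = t
                t_on_left = T_ON
        i_l += (v_sw - v_c) / L * DT      # inductor current integration
        v_c += (i_l - i_load) / C * DT    # capacitor voltage integration
        t += DT
    return sum(periods) / len(periods)

f_sw = 1 / average_period(i_load=5.0)
print(f"steady-state switching frequency ≈ {f_sw/1e3:.0f} kHz")
```

With these numbers the duty cycle is Vout/Vin ≈ 1/12, so with a 100 ns on-time the loop should settle somewhere around 800 kHz; note that nothing in the code sets that frequency explicitly, which is the point of COT.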

So while the output voltage is above the reference, that's fine; once they meet, we switch the high-side MOSFET on for a specific on-time, delivering energy to the output, charging the output capacitors, and increasing the output voltage. The advantage of this is that when we see an increase in the load current, instead of having to wait for the next period, we can deliver these short bursts of energy as frequently as needed, limited of course by a minimum off-time in between pulses. This allows us to deliver energy to the output more quickly and therefore reduce the drop in output voltage, giving us the faster transient responses that are required for FPGAs and ASICs.

So just to be clear, the on-time is constant, and what we vary, in effect, is the off-time and therefore the total switching period. This has several advantages, which we'll go into in more depth in the following slides. But just as a quick overview: we improve the transient performance, which is one of our main objectives; we simplify the architecture; we have a more seamless transition between light load and heavy load; and we don't require an internal oscillator, because we don't use a clock. However, the challenges are that we depend on the ESR of the output capacitors for stability, and we need a periodic signal on the feedback voltage to maintain proper control.

I'll explain more on this later on. Also, the switching frequency varies, so we don't have a constant switching frequency, which can be an issue in certain applications. And finally, the output filter design is slightly more complex with this control system than with traditionally used ones. Looking at the advantages in more depth: this is similar to what I said before, but here you can see a direct comparison. When we have our load step here, the feedback voltage in peak current mode control will drop because we have to wait for the clock signal, and we can't deliver energy to the inductor and the output as fast as we would like.

Whereas in constant on-time, we can instantly change the frequency and deliver much more energy to the output, and therefore reduce the drop in the feedback as quickly as possible. In a real-life example, we compared two equivalent converters, one using peak current mode and the other using constant on-time control, with the same input voltage and output voltage, a 3.5 A load step, the same inductor, and the same output capacitance. During the rise and fall of the load current step, the peak current mode converter had an error of about ±8% on the desired output voltage.

Whereas with constant on-time, because we can deliver the energy much faster, the voltage drop is much smaller, and we were limited to just 2.5%. This value is important, as many of you may know, when designing power supplies for FPGAs, because the requirements on output voltage stability are quite strict. For example, this is from our reference design, and I'll touch more on this later on. For these main core rails, we're very often required to stay within a 3% margin around the desired output voltage. So if this rail is a 1 V rail, that gives us about 30 mV peak-to-peak when dealing with these load steps.

So this means we need to design a converter that's capable of going up to 20 A, maybe from as low as 1 A or 2 A, without pushing our voltage outside this 30 mV margin. The parts that I'm showing today are MPS's power modules, which are power converters that also integrate the inductor. I'll explain more on this later, but the one we use in our reference design for the Virtex-7, for example, is the MPM3695-25, which can deliver up to 20 A of continuous current, has PMBus control, and uses this constant on-time control. This way, we can deliver our voltage within the requirements set by the FPGA.
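As a rough back-of-the-envelope check on those numbers, here's how the 30 mV window constrains the output capacitance for a large load step. The loop response time is a hypothetical placeholder for illustration, not a figure from the talk:

```python
# Rough output-capacitor sizing for a load step, assuming the capacitors
# alone must supply the step current until the loop reacts after t_resp.
# The step size and voltage window are the figures quoted in the talk;
# t_resp is a hypothetical assumption.
di_step = 19.0     # A, load step from ~1 A up to 20 A
dv_max = 0.030     # V, ~30 mV window on a 1 V rail
t_resp = 1e-6      # s, assumed loop response time (hypothetical)

c_min = di_step * t_resp / dv_max   # from i = C * dv/dt
print(f"C_out >= {c_min * 1e6:.0f} µF for this response time")
```

A faster control loop (smaller t_resp) shrinks the required capacitance proportionally, which is exactly the capacitor-count saving claimed for COT later in the talk.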

The second advantage of constant on-time is that, as I said before, we remove the compensator. More often than not, we need to tune the values of these compensators, the capacitors and the resistors, and this can be a very time-consuming process, soldering and desoldering different values. By removing this, we also reduce the design time. So that's one sort of delay that we eliminate. The other is that in current mode, we've got the error amplifier and the latch that we use to generate the PWM signal. With constant on-time control, since we eliminate this and go straight into our comparator, we can eliminate the delay from the error amplifier.

Furthermore, because we just use a high-speed comparator to generate the PWM signal, our delay is very significantly reduced. Another of the main aspects of constant on-time control is that, because we only turn the MOSFET on when the voltage drops below the reference voltage, we can have a sort of seamless transition between light load operation and heavy load operation. In other forms of control, for example current mode control, we need to implement alternative control methods when operating at light load. At MPS, for example, we use our advanced asynchronous mode, which does pulse skipping during light load.

The issue with this is that when we're transitioning from a very light load to where the converter can start operating in normal DCM or CCM operation, this change in the control method can lead to slight instability, depending on the application and the total design, and it can cause a drop in efficiency in this transition region. However, with constant on-time, we don't have to change the control method, so we can maintain high efficiency throughout the whole current range.

As an example of this, our lower current modules, the 38 family, have up to a 5.5 V input voltage range, and we have versions that can deliver up to 600 mA, 1 A, 2 A, or 3 A. All of these have constant on-time control and come in very small packages. These are just 2.5 mm by 2.5 mm, can deliver up to 3 A, and come with an integrated inductor. So all we really need is our voltage divider and input and output caps, and then we have our low current power supply. This greatly simplifies design and also saves a lot of space on our boards.

As you can see here, we have stable, high efficiency throughout the current range. Now that we've gone over the advantages of constant on-time, I would like to discuss the challenges. One of the first challenges is the variable frequency. This can be an issue in certain applications, especially regarding EMI, because we like to know what frequencies we have in our circuit, right? EMI is hard enough to manage when we have a constant frequency, and the fact that the frequency is going to depend on the conversion ratio and the load current is worrying for some designers.

So what we've done at MPS is modify our one-shot timer to also consider the conversion ratio, by sensing the input and output voltage in our circuit. This way, we can set a quasi-fixed switching frequency during steady state, which means it's independent of any variations in the input voltage. Frequency variations with the load current are the whole point of constant on-time; what we don't want is the frequency to also depend on potential variations in our input voltage. So during steady state, by modifying our one-shot timer, we can ensure that we have a practically fixed frequency.
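The quasi-fixed frequency follows directly from the buck steady-state relations; this small sketch (the timer constant k is an arbitrary choice for illustration, not an MPS value) shows how scaling the on-time with Vout/Vin cancels the input-voltage dependence:

```python
# In steady state a buck satisfies D = Vout/Vin and D = Ton * fsw, so
# fsw = Vout / (Vin * Ton). If the one-shot scales its on-time as
# Ton = k * Vout / Vin, then fsw = 1/k regardless of Vin.
K = 1 / 800e3   # timer constant for a ~800 kHz target (arbitrary choice)

def adaptive_on_time(vin, vout, k=K):
    return k * vout / vin

def steady_state_fsw(vin, vout, ton):
    return vout / (vin * ton)

for vin in (5.0, 12.0, 17.0):
    ton = adaptive_on_time(vin, 1.0)
    fsw = steady_state_fsw(vin, 1.0, ton)
    print(f"Vin = {vin:4.1f} V -> Ton = {ton*1e9:6.1f} ns, fsw = {fsw/1e3:.0f} kHz")
```

Every input voltage lands on the same steady-state frequency, while load transients are still free to compress the off-time, which is the behavior the talk describes.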

We can do this without affecting our control loop's ability to react when faced with a load current step. One of the main challenges we often see when designing with constant on-time control is that, of course, we've lost our dual loop control. In dual loop control, as you saw, we have the inductor current, that triangular waveform, which is periodic, and it helps us a lot in maintaining the stability of the controller. The way we do it in constant on-time is by reading the output voltage, so we're basically using the output voltage ripple itself to maintain this periodicity and therefore the stability of the controller. Now, let's have a quick look at the different components that make up this feedback voltage, or output ripple.

These components basically come from our output capacitor; we can see them by looking at the output capacitor. The first component is the ESR ripple. The ESR, or equivalent series resistance, is a parasitic element inherent to any capacitor. As the current flows through the capacitor, and therefore through this equivalent series resistance, it generates a voltage drop, which we can see here. The good thing about this voltage drop is that it has the same shape as, and no delay with regard to, the inductor current. However, the other element, the capacitor's voltage ripple, which is due to the charging and discharging of the capacitor itself, does have a delay with regard to the ESR ripple.

As you can see here, the ESR ripple reaches its maximum point just when we turn our high-side MOSFET off, whereas the capacitor ripple is delayed. Now, if we're lucky and the ESR ripple is the dominant factor in the voltage ripple, then we have a signal which is periodic. This happens when we use capacitors with large ESRs, such as POSCAPs. However, using POSCAPs is something we're trying to reduce as much as possible, because it affects the frequency response and, very importantly, the final cost of our solution.

Unfortunately, if we use capacitors with low ESR, such as MLCCs, which are the ones we generally use for power conversion applications in this voltage range, we run the risk that the capacitor ripple component is larger than our ESR ripple. Then, because of this delay, we can enter a subharmonic oscillation in our control: the feedback signal becomes non-periodic, risking our ability to control the circuit as we intend. So basically, using low ESR capacitors can be a problem for stability. However, there are some solutions to this, of course.
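The stability argument can be made concrete with the standard buck output-ripple formulas. The capacitor and ESR values below are illustrative picks, not figures from the talk:

```python
# Two components of the output ripple: the ESR term (in phase with the
# inductor current) and the capacitive term (delayed ~90°). Whichever is
# larger tends to dominate the shape of the feedback signal.
def ripple_components(di_l, fsw, c, esr):
    v_esr = di_l * esr               # ESR ripple
    v_cap = di_l / (8 * fsw * c)     # capacitive ripple of a buck
    return v_esr, v_cap

DI_L, FSW = 1.0, 800e3               # 1 A p-p inductor ripple at 800 kHz (assumed)
for name, c, esr in [("POSCAP 330 µF / 25 mΩ", 330e-6, 25e-3),
                     ("4x MLCC 22 µF / ~1 mΩ", 88e-6, 1e-3)]:
    v_esr, v_cap = ripple_components(DI_L, FSW, c, esr)
    dominant = "ESR (periodic, stable)" if v_esr > v_cap else "capacitive (delayed, risky)"
    print(f"{name}: ESR {v_esr*1e3:.2f} mV vs cap {v_cap*1e3:.2f} mV -> {dominant}")
```

With these example values the POSCAP's ESR term dominates (periodic feedback, stable COT), while the MLCC bank's capacitive term dominates, which is the subharmonic-oscillation risk described above.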

The first one, obviously, is to just increase the ESR, which is a very simple way of improving stability. As I said, that means using POSCAPs or even electrolytic capacitors, or reducing the number of capacitors and using larger packages. But this solution is often simply not good enough. An alternative is to fix this subharmonic oscillation the same way we fix subharmonic oscillation in current-mode converters, which is by adding an external ramp signal into our feedback. That way, regardless of the shape of our feedback voltage, we can maintain periodicity and avoid entering subharmonic oscillation.

However, adding this ramp externally adds unnecessary complexity to our design and is therefore not the ideal solution. The one we implement in our power modules, which, as I said before, integrate not only the control and the switching elements but also the inductor, is to integrate an RC circuit in parallel with the inductor. This generates a slope voltage with the same frequency as the inductor current. So basically, we're generating the same signal as the ESR ripple would provide, but regardless of the capacitors we use outside.

In this way, we ensure that regardless of which capacitors we're using, their placement, and the ratio between the ESR ripple and the capacitor ripple, the inductor current shape always dominates the final ripple voltage. Therefore, we can maintain stability in our controller, make sure the feedback is in phase with the inductor current, and properly control our output voltage to meet the transient specifications for which constant on-time control was selected in these FPGA applications. To show you where we implement all of this technology, I wanted to take a quick look at the MPM3695-100. This is our 100 A power module.

You can see it up here. It's in a 15 mm by 30 mm package, delivering up to 100 A of continuous current. Some of the main aspects I wanted to highlight and then go into in a bit more depth: first of all, as you can see here, it's a BGA package, and by integrating the inductors, the control, and the power MOSFETs inside the same package, we greatly simplify the layout of the board itself. For example, here we just need three layers for a basic layout for our 100 A output. This, again, allows us to save board space, save design time, and speed up design iterations.

We avoid someone routing signals under the inductor and then running into trouble with crosstalk or the many other issues we see when designing these high current power supplies. Our power supply can deliver 100 A because it has four phases inside. I'm going to talk a bit more about this later on. And I mention this because I also wanted to tell you that although constant on-time doesn't have a fixed frequency, MPS's constant on-time modules can be placed in multi-phase operation. So we can keep the advantages of constant on-time, the varying frequency and the ability to deliver energy to the load quickly, and also implement it in multi-phase operation.

I'll explain more, as I said, later on. The fact that we can deliver this very fast transient, that we can deliver this energy very quickly, also reduces the output capacitance requirement, because the output capacitors are the ones that need to deliver energy instantaneously to the load when the load demands it. With a very fast control system, we can deliver that energy from the input rather than mostly through the capacitors. This allows us to reduce the output capacitance requirement by about 50% while meeting the same transient specifications as before.

The final aspect I wanted to show is that many of our high current modules are designed for these FPGA applications and therefore have digital interfaces such as PMBus and I2C, and we even have some for the newer FPGAs that demand Smart VID. We can communicate with these FPGAs to further optimize the power efficiency and behavior of our converters, as well as simplify the prototyping stage. In some cases we even integrate the output voltage divider within the module, further reducing the BOM count and the rework time, and generally increasing the simplicity of designing and prototyping with these modules. Now, about the auto interleaving.

As many of you know, if we need to deliver large amounts of current, we can place several power converters in parallel and use them to deliver current simultaneously to our load. In the case of the MPM3695-100, for example, we have evaluation boards, orderable on our website, with several options: 100 A, 200 A, 400 A, up to 800 A using these 100 A power modules. Basically, what we're doing here is interleaving these eight power modules to reduce the current ripple and increase the speed with which we can deliver energy to the load.

If you've ever designed multi-phase power supplies, you know this is not such a simple task. We need to consider the phase difference between the different modules, make sure we distribute the phases evenly, and make sure the current is distributed equally along each phase. If we're delivering, say, 200 A using two of these converters, we want each to deliver 100 A, not one delivering 150 A and the other 50 A, because that's going to lead us into big trouble. So with our constant on-time control and our advanced control, we have automatic interleaving and automatic active current balancing.

We can ensure that all our phases are delivering efficiently without having to go through the very painstaking process of trimming and modifying the control loop. Here's a small schematic of how this auto interleaving works. We have three pins dedicated to controlling the interleaving: the take pin, the pass pin, and the run pin. I'll explain more on this later, but what I wanted to show you is how we can connect these eight modules together and simplify the interleaving process through automatic detection of how many phases we have. The way we design this interleaving is by setting one module as the master and the others as slaves.

So we set the first phase, count how many phases we have, and then distribute the phase differences accordingly. To designate the master module, we simply connect its take pin to VCC. The rest of the modules, and as I said, we can connect up to eight, will notice that their take pin is not connected to VCC at startup, and therefore they know they are slave modules. Then, knowing how many modules there are in total, the master module sends the appropriate PWM information using the pass and run pins to the rest of the modules, implementing this multi-phase operation. There are two options.

That's why we have the pass and the run pin to set up this multi-phase operation. If we use option 1, we use the run pin. In this case, and as I said before, inside each of these modules there are four phases, so what we're doing is connecting the modules sort of in parallel, so to speak. We actually have four phases with a 90° phase difference, and the run and take pins connected ensure that these phases work hand in hand to deliver the energy to our load faster.

This means we can deliver larger amounts of current per phase, but it also means that, due to the 90° phase difference, as compared to option 2 where we have a 45° phase difference, our ripple will be slightly larger. However, we'll be able to deliver larger peak currents and therefore have slightly faster transients. With option 2, using pass and take, we connect them in a sort of eight-phase operation, so we have a 45° phase difference between the different converters inside these two modules. To connect four or eight phases, we use a combination of the run and pass pins for both of these options, and this is all available in our evaluation board guides and in the datasheets for these modules.
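The two options map to straightforward phase assignments for two four-phase modules. A tiny sketch (the phase numbering is assumed for illustration, not taken from the datasheet):

```python
# Phase angles for the two interleaving options described: option 1 pairs
# the two modules phase-by-phase (4 effective phases, 90° apart, higher
# current per phase); option 2 runs all 8 phases (45° apart, lower ripple).
def phase_angles(n_phases):
    return [i * 360 / n_phases for i in range(n_phases)]

option_1 = phase_angles(4)   # 0°, 90°, 180°, 270°
option_2 = phase_angles(8)   # 0°, 45°, 90°, ... 315°
print("option 1 (run):      ", option_1)
print("option 2 (pass/take):", option_2)
```

Evenly spread angles are what cancel ripple at the input and output; the trade-off the talk describes is finer spacing (option 2) for lower ripple versus coarser spacing (option 1) for larger per-phase peak current.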

The final aspect I mentioned before is the ease of programming. We have our Virtual Bench Pro software, which I'm not sure how many of you are familiar with. This software covers many of our parts that have digital interfaces and allows users to program basic parameters such as the output voltage, the switching frequency, or simply enabling or disabling different parts. In the more advanced parameter sections, we can modify anything from the protections to, in these advanced modules for advanced FPGAs, selecting the fast communication bus we want to use. We can also use advanced control methods to further increase our transient response, such as active voltage positioning or load line configuration.

We can also modify how we do the current monitoring, so we can meet different manufacturer standards for current monitoring, and we can modify the way we generate the PWM: the on-time, the minimum off-time, several aspects that are going to allow you to trim your design without having to do rework on the board itself. So the final challenge I wanted to talk about is the DC offset. As you may have noticed, the output voltage drops to the reference voltage and then rises again once we activate the high-side MOSFET. This, of course, is going to generate a small offset in our output voltage. The way we fix this is by integrating an error amplifier before our comparator, on the reference voltage side.

This is going to allow us to bring our output voltage to where we want our reference to be, so we reduce this steady state error. But most importantly, we can do this without affecting the capability of our constant on-time control to meet the transient requirements, because this error amplifier only affects the reference voltage. The signal we're comparing it to still comes straight from our output voltage ripple, and therefore we're still able to deliver this very fast transient response. So just as a last overview of what we've gone over regarding the advantages and challenges of constant on-time control: we still have our very good transient performance.
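The valley-regulation offset and the reference-side correction can be illustrated numerically. The ripple figure here is an assumed example, not a number from the talk:

```python
# Valley regulation: the comparator holds the *bottom* of the ripple at
# Vref, so the average output sits about half a ripple high. Trimming the
# internal reference (what the slow error amplifier effectively does)
# removes the offset without touching the fast comparator path.
V_REF = 1.000        # target output (V)
V_RIPPLE = 0.010     # assumed 10 mV p-p output ripple

avg_uncorrected = V_REF + V_RIPPLE / 2          # valley sits at Vref
v_ref_trimmed = V_REF - V_RIPPLE / 2            # slow loop shifts the reference
avg_corrected = v_ref_trimmed + V_RIPPLE / 2    # valley now at trimmed reference

print(f"uncorrected average: {avg_uncorrected:.3f} V "
      f"(+{(avg_uncorrected - V_REF)*1e3:.0f} mV offset)")
print(f"corrected average:   {avg_corrected:.3f} V")
```

Because only the reference is shifted, the fast ripple-to-comparator path is untouched, which is why the transient performance survives the correction.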

I think that's the key takeaway for when we design power supplies for FPGAs. But we also simplify the architecture, not only due to the constant on-time itself, but due to many other aspects of our modules, such as the auto interleaving and the digital programmability. We have increased efficiency in the transition between light load and heavy load operation. We also resolved the issue with the ESR, so we can maintain stability regardless of the output capacitors that we place. And, as I said before, we can very significantly reduce the number of capacitors at the output. This is going to allow you to save more board space and also save cost on the BOM.

The frequency stability is, of course, an issue in certain applications, such as radio, but for many other applications where the main worry is general EMI, for example industrial applications, we can maintain a quasi-stable frequency during steady state operation by modifying our one-shot timer. So having laid out all of these advantages and, hopefully, resolved all of these challenges, I'd like to quickly show you our portfolio, which is very wide for FPGAs. Just a very quick overview: we have a wide range from 10 up to 100+ A. We have single outputs, from this 10 A output up to the 100 A output I showed before. We have multiple outputs.

We have dual 13 A modules, which can also be connected as a single 26 A output, up to dual 25 A or a single 50 A. Some of them are digitally programmable, such as the 3695-50D. Some even have high-speed buses, such as the 3698, for communicating with advanced FPGAs such as the Agilex, for example. We even have this quad 25 A module, where you can connect the outputs together or separately, so you can have four 25 A outputs, two 50 A outputs, or one 75 A and one 25 A. This adds a lot of flexibility to designs: if you need several medium or high current rails, you can use just one single part number for different power rail requirements.

And for applications that don't require digital programming, we have the 3683 family, which is fully analog. All you need to do is set the output voltage using an output voltage divider, and you can set several of the other key parameters, such as the switching frequency, with a couple of resistors in the circuit. The other thing I wanted to mention is that on our webpage, you will be able to find several reference designs for the main FPGA manufacturers on the market, such as AMD Xilinx, Intel (formerly Altera), and Lattice FPGAs.

So you just go to our website, to the design page, and you will find our reference designs for all these series. Some of these come with schematics, and you can even purchase evaluation boards for some of them. And finally, just as a reminder of the advantages of our power modules: MPS has been developing power converters, monolithic power converters, for a very long time; it's what we're best known for, integrating the switches with the control. The next step in integration, which is where we're going now, is also integrating some of the passive elements to further increase the power density of our designs.

The main aspect in this case is that we're integrating the power inductor onto the same lead frame, in the same SMD package. Manufacturing is as simple as with a standard controller or converter, but we also integrate several passive elements, such as the bootstrap capacitor or some of the decoupling capacitors at the input, to improve EMI performance. The main advantages, as I said: we can integrate several outputs into a single package, so we reduce the total solution size, we reduce the board layout complexity, and we minimize the BOM, which also makes production handling easier later on.

Then for EMI, for those of you who may have applications in the medical, industrial, or automotive sectors, there are several key benefits to using the modules. One of the main culprits behind EMI issues is improper layout design. For example, we want to keep the switch node, which acts as a plate that radiates noise, as small as possible. It's very hard to get a smaller switch node than by placing the inductor inside the package.

Also, as I said before, by integrating the decoupling capacitors at the input into the package as well, we keep these hot loops as small as possible, and therefore they radiate as little as possible. This has allowed us to achieve very good EMI performance in many of our modules, and we've even qualified many of them as complete power supplies, passing Class B emissions tests. So by using our MPS power modules, you can also have less uncertainty when going into testing on your final product. And that is all I wanted to share with you today. Julian, I think we can move along to the Q&A section.

Julian Meier
Marketing Manager, MPS

Yes, let's start. Thanks again, Tomás, for the great presentation. For all participants, a reminder: if you mouse over the bottom of your Zoom webinar interface, you will see the Q&A button. Please type in any question you might have. While we wait for more questions to come in, I want to remind you that we are recording this session, and we will send you a link early next week. With the link, you can watch today's webinar on demand, and we will also send you a link to the PDF itself. Please also check our MPS website and webinar schedule for additional sessions. Now let's get started with the first question: How does the varying switching frequency affect EMI?

Tomás Hudson
Applications Engineer, MPS

Yeah, well, as I mentioned at some point in the presentation, when we have a fixed switching frequency, the switching frequency itself is not often the main issue with EMI; it's the harmonics above it. But we want to know what the base frequency is, so we can identify culprits when debugging for EMI. Many people are worried about the varying frequency in constant on-time control, because during load transients, we can find spurs that can be problematic in the design.

However, with the right layout, and by using these modules to reduce our emissions as much as possible, we can deliver very good transient response without being too harmful to our EMI performance. Additionally, the use of our quasi-fixed frequency during steady state also reduces the worry about this varying frequency.

Julian Meier
Marketing Manager, MPS

Thanks a lot, Tomás. Someone is asking if the presentation and the recording will be available. Yes, as I mentioned before, we will send you an email with a link to the recording and to the PDF presentation. Next question: How do you compensate for stability over the temperature range, -40°C to 60°C, when the capacitor ESR changes?

Tomás Hudson
Applications Engineer, MPS

Well, as I said before, if we have this ramp signal within our control, we become much less dependent on the ESR for stability. Of course, this is something that you need to test in your complete solution. For our parts, we test in both very low and very high temperature conditions to ensure that the system is stable, and all of this information is in our datasheets. It may still depend on the total solution, but by using this ramp signal, we can maintain stability regardless of the ESR.

Well, not regardless, but we try to be as independent of the ESR as possible.

Julian Meier
Marketing Manager, MPS

Thanks a lot, Tomás. I'm not sure which slide this question was referring to; perhaps you know, Tomás. The question is: Will this type of converter work well on a high-resistance power source, for example, a lithium-ion battery?

Tomás Hudson
Applications Engineer, MPS

Hmm. That's a very interesting question, but I'm not sure I can give you an answer; I wouldn't want to answer now and get something wrong. I will look into it, and we can either make a post or send you the information via email. I think it could even be an interesting topic for an application note. But unfortunately, I'll have to look more into it first.

Julian Meier
Marketing Manager, MPS

Thanks a lot, Tomás. Next one is: Are the integrated passive components susceptible to mechanical issues, or is the package protecting them?

Tomás Hudson
Applications Engineer, MPS

Yes. Many modules on the market, and even some of ours, are open frame. But the modules I was talking about today, in fact I think all of them, come in SMD format. All the passive components, so the inductors and the capacitors, are on the lead frame inside the mold, and therefore outside corrosion shouldn't be an issue for the components. Some of our modules are also AEC-Q qualified, so the entire system goes through vibration validation and the usual AEC-Q testing. The others are validated for industrial use. So no, they're not open frame.

They're inside the mold of the module.

Julian Meier
Marketing Manager, MPS

Okay, next one is: What are the minimum and the maximum switching frequency of the COT controller?

Tomás Hudson
Applications Engineer, MPS

It depends; it varies on a controller-by-controller basis, and you can modify the switching frequency within a range. That's another important aspect. Going back to your previous question about EMI, we can also limit the switching frequency range by modifying the maximum and minimum off-times, that is, the minimum time between each PWM on-time. That will vary from controller to controller. As a ballpark figure, we can go up to 2.2 MHz, and the lowest I've seen is down to about 400 kHz. But this is off the top of my head.

As I said, this is going to depend from controller to controller, and all that information is on the datasheet and in the selection table on our website. I think it's one of the parameters you can filter by.
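The relationship Tomás describes can be sketched numerically: in a COT converter one PWM cycle is the on-time plus the off-time, so clamping the minimum and maximum off-times bounds the switching frequency range. The on-time and off-time values below are illustrative assumptions chosen only to reproduce the ballpark figures from the talk, not limits of any specific MPS part:

```python
def switching_freq(t_on: float, t_off: float) -> float:
    """One PWM cycle lasts t_on + t_off, so f = 1 / (t_on + t_off)."""
    return 1.0 / (t_on + t_off)

# Illustrative values only (not from a specific datasheet):
t_on = 100e-9          # constant on-time, 100 ns
t_off_min = 355e-9     # minimum enforced off-time
t_off_max = 2.4e-6     # maximum enforced off-time

f_max = switching_freq(t_on, t_off_min)  # ≈ 2.2 MHz
f_min = switching_freq(t_on, t_off_max)  # ≈ 400 kHz
print(f"{f_min/1e3:.0f} kHz to {f_max/1e6:.2f} MHz")
```

This is also why limiting the off-time range keeps the converter out of problematic EMI bands: the frequency can only vary between `f_min` and `f_max`.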

Julian Meier
Marketing Manager, MPS

Thanks a lot, Tomás. Next one is: What are the possibilities, after the design is done, to compensate for the FB resistor tolerance (±1%)?

Tomás Hudson
Applications Engineer, MPS

Sorry, could you repeat the question?

Julian Meier
Marketing Manager, MPS

What are the possibilities, after the design is done, to compensate for the FB resistor tolerance (±1%)?

Tomás Hudson
Applications Engineer, MPS

Well, it depends. So if we use, for example, an external-

Julian Meier
Marketing Manager, MPS

He means feedback resistor tolerance.

Tomás Hudson
Applications Engineer, MPS

Yeah, so for the feedback resistor tolerance: if you're using an external resistor, then inside many of our converters, especially the programmable ones, there are several options. One is using the internal resistive divider, where you can set the output voltage and even trim it; the steps can be as low as 20 mV, I think, for the output. So you can do the trimming there via I²C or PMBus.
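As a rough worked example of why a 20 mV trim step is enough to absorb ±1% divider tolerance, here is the worst-case arithmetic. The 0.6 V reference and 10 kΩ resistor values are illustrative assumptions, not from a specific part:

```python
# Worst-case output error from ±1% feedback resistors, and how many
# 20 mV trim steps (the resolution mentioned in the talk) would correct it.
V_REF = 0.6        # assumed reference voltage (V), for illustration only
TRIM_STEP = 0.020  # 20 mV trim resolution

def vout(r_top: float, r_bottom: float, v_ref: float = V_REF) -> float:
    """Standard feedback-divider output: Vout = Vref * (1 + Rtop/Rbottom)."""
    return v_ref * (1 + r_top / r_bottom)

nominal = vout(10e3, 10e3)               # 1.2 V target
worst = vout(10e3 * 1.01, 10e3 * 0.99)   # resistors at opposite tolerance extremes
error = worst - nominal                  # worst-case deviation, ≈ 12 mV
steps = round(error / TRIM_STEP)         # trim steps to pull it back
print(f"error = {error*1000:.1f} mV, trim steps = {steps}")
```

So even in the worst case, a single trim step recovers most of the deviation; the residual error is smaller than the original tolerance contribution.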

The other option is, even if you're using an external resistive divider and you see an unexpected variation in the tolerance, or you want to trim the output voltage regulation a bit, there are also parameters which allow you to modify the gain of the loop, and therefore fine-tune your output voltage. This is often done in some telecom applications as well. Much of the technology we use, and the way FPGAs are now becoming more and more advanced, means they are becoming similar to what we've been seeing in the computing and data center space.

So we can use these methods to design our power supplies for these advanced FPGAs or other smaller computing systems.

Julian Meier
Marketing Manager, MPS

Okay, next one is: Are tantalum capacitors recommended in a COT power supply, considering the ESR? What ESR range is recommended?

Tomás Hudson
Applications Engineer, MPS

Well, giving a specific ESR range is complex, as usual. It's going to depend on many things: not only the operating point, but also where we place these capacitors, the distance between the bulk capacitors and the voltage regulator, and how noisy our load is. As for tantalum capacitors, the ESR shouldn't be an issue, because as I mentioned before, with constant on-time control we try to make our converters as independent of the ESR as possible.

But there are many things in play when designing these high-current power supplies in particular, relating to many aspects of the power delivery network. We are, however, developing some articles and content that go into more depth on the differences between capacitor types, capacitor placements, and the distance from the converter. This will hopefully be uploaded to our website soon, and there will be more in-depth information there, which can be a useful reference for designing these power supplies.

Julian Meier
Marketing Manager, MPS

Thanks a lot, Tomás. The next question is also related to ESR: A COT power supply requires a higher ESR from the capacitors, but that reduces efficiency. How can we manage this issue?

Tomás Hudson
Applications Engineer, MPS

Well, a higher ESR is probably going to cause a slight loss in efficiency, that is true. That's why we place so much emphasis on ensuring that we can deliver these transient responses while being less dependent on the ESR. The main point is that by using adaptive constant on-time control with the internal RC circuit, we can minimize the ESR that we actually need to make the converter stable.

I have seen applications with incredible amounts of output capacitance, but by using our constant on-time control, the right designs, and an optimized PDN, we can reduce this capacitance by 50%, and therefore we also reduce this loss in efficiency. To be fair, there's another aspect of efficiency: when we increase the load current, we also increase the switching frequency, and during these steps the much higher switching frequency means we're going to have a slight loss due to switching losses.

So regarding efficiency, constant on-time is a slight setback, but in many of these applications, when we're dealing with 100-amp current rails, we want to be as efficient as possible. That's why, if you look at the efficiencies of our power modules, we aim to be in the high-90s region. Efficiency is definitely going to be an aspect to consider in the ESR design, but as I said, we can try to reduce this impact as much as possible and optimize these designs for efficiency.

Julian Meier
Marketing Manager, MPS

Okay, thanks a lot. Next one: Can a COT module be configured to work as a current- and voltage-limited supply, for example, 1.2 V, 600 A?

Tomás Hudson
Applications Engineer, MPS

Using current mode control? Sorry, you mean using it as a constant-current power supply?

Julian Meier
Marketing Manager, MPS

I'll repeat the question. It was: Can a COT module be configured to work as a current- and voltage-limited supply, for example, 1.2 V, 600 A?

Tomás Hudson
Applications Engineer, MPS

I don't think any of the parts we have right now enable this, but I'll have to check with the team. It's not an application I've worked on, but there may be one. My first impression, though, is that I doubt it.

Julian Meier
Marketing Manager, MPS

Yeah, for the person who asked the question, perhaps you can send us an email; the address is shown here on the slide. Send an email to Tomás, and we can check for you. Next one: What is the new family name based on the MPQ8632?

Tomás Hudson
Applications Engineer, MPS

The new family name?

Julian Meier
Marketing Manager, MPS

Yes.

Tomás Hudson
Applications Engineer, MPS

MPQ8632?

Julian Meier
Marketing Manager, MPS

MPQ8632.

Tomás Hudson
Applications Engineer, MPS

I'm not sure. The MPQ8632 is not in my department, so I'm afraid I don't know.

Julian Meier
Marketing Manager, MPS

Okay. Thanks, Tomás. We can also check and come back to you; you can also send an email to us, and we will forward it to the right people. Next question: Do you have evaluation boards for multi-phase applications?

Tomás Hudson
Applications Engineer, MPS

Yes. For example, the one I showed before, with up to eight phases; all of these are available online. That one is an eight-phase board, and we also have two-phase and four-phase options. As I said, for our FPGA reference designs, some even have full evaluation boards with the power modules already placed for all the rails, with sequencing, so you can go straight to evaluating the entire solution. And for our multi-phase modules, we also offer eval boards for multi-phase operation. These are, as I said, orderable on the website.

Julian Meier
Marketing Manager, MPS

Okay. Thanks a lot, Tomás. I see one last question, and it is: Can we limit the maximum switching frequency to avoid entering certain frequency bands?

Tomás Hudson
Applications Engineer, MPS

Yeah. As I said, this is something that we can modify on our digitally programmable modules. We can limit the off-time between PWM pulses, and in this way limit what our maximum and minimum frequencies are going to be, and therefore avoid entering these problematic bands.

Julian Meier
Marketing Manager, MPS

Okay. Thanks a lot. So at the moment, I'm not seeing any further question, so we will pause here for a little bit and give you the chance to type in any question which you still might have.

Tomás Hudson
Applications Engineer, MPS

Furthermore, if anybody has any questions that come to mind afterwards, you can send me an email at the address that you see on the screen, and I'll be happy to help with what I can.

Julian Meier
Marketing Manager, MPS

Okay, great. Thanks a lot, Tomás. I'm not seeing any further questions. As mentioned before, we will send you an email with a link to the on-demand version of this session, and you can also review the PDF. I want to thank you for your time and remind you to look out for future webinars that we are producing. Hopefully, we will see you back here again in the near future. Thanks a lot to everyone, and see you soon.

Tomás Hudson
Applications Engineer, MPS

Thank you very much. Goodbye.
