Delta-sigma demystified: Basics behind high-precision conversion

Delta-sigma (ΔΣ) converters may sound complex, but at their core, they are all about precision. In this post, we will peel back the layers and uncover the fundamentals behind their elegant design.
At the heart of many precision measurement systems lies the delta-sigma converter, an architecture engineered for accuracy. By trading speed for resolution, it excels in low-frequency applications where precision matters most, including instrumentation, audio, and industrial sensing. And it’s worth noting that delta-sigma and sigma-delta are interchangeable terms for the same signal conversion architecture.
Sigma-delta classic: The enduring AD7701
Let us begin with a nod to the venerable AD7701, a 16-bit sigma-delta ADC that sets a high bar for precision conversion. At its core, the device employs a continuous-time analog modulator whose average output duty cycle tracks the input signal. This modulated stream feeds a six-pole Gaussian digital filter, delivering 16-bit updates to the output register at rates up to 4 kHz.
Timing parameters—including sampling rate, filter corner, and output word rate—are governed by a master clock, sourced either externally or via an on-chip crystal oscillator. The converter’s linearity is inherently robust, and its self-calibration engine ensures endpoint accuracy by adjusting zero and full-scale references on demand. This calibration can also be extended to compensate for system-level offset and gain errors.
Data access is handled through a flexible serial interface supporting asynchronous UART-compatible mode and two synchronous modes for seamless integration with shift registers or standard microcontroller serial ports.
Introduced in the early 1990s, Analog Devices’ AD7701 helped pioneer low-power, high-resolution sigma-delta conversion for instrumentation and industrial sensing. While newer ADCs have since expanded on its capabilities, the AD7701 remains in production and continues to serve in legacy systems and precision applications where its simplicity and reliability still resonate.
The following figure illustrates the functional block diagram of this enduring 16-bit sigma-delta ADC.

Figure 1 Functional block diagram of AD7701 showcases its key architectural elements. Source: Analog Devices Inc.
Delta-sigma ADCs and DACs
Delta-sigma converters—both analog-to-digital converters (ADCs) and digital-to-analog converters (DACs)—leverage oversampling and noise shaping to achieve high-resolution signal conversion with relatively simple analog circuitry.
In a delta-sigma ADC, the input signal is sampled at a much higher rate than the Nyquist frequency and passed through a modulator that emphasizes quantization noise at higher frequencies. A digital filter then removes this noise and decimates the signal to the desired resolution.
Conversely, delta-sigma DACs take high-resolution digital data, shape the noise spectrum, and output a high-rate bitstream that is smoothed by an analog low-pass filter. This architecture excels in audio and precision measurement applications due to its ability to deliver robust linearity and dynamic range with minimal analog complexity.
Note that from here onward, the focus is exclusively on delta-sigma ADCs. While DACs share similar architectural elements, their operational context and signal flow differ significantly. To maintain clarity and relevance, DACs are omitted from this discussion—perhaps a topic for a future segment.
Inside the delta-sigma ADC
A delta-sigma ADC typically consists of two core elements: a delta-sigma modulator, which generates a high-speed bitstream, and a low-pass filter that extracts the usable signal. The modulator outputs a one-bit serial stream at a rate far exceeding the converter’s data rate.
To recover the average signal level encoded in this stream, a low-pass filter is essential; it suppresses high-frequency quantization noise and reveals the underlying low-frequency content. At the heart of every delta-sigma ADC lies the modulator itself; its output bitstream represents the input signal’s amplitude through its average value.
A block diagram of a simple analog first-order delta-sigma modulator is shown below.

Figure 2 The block diagram of a simple analog first-order delta-sigma modulator illustrates its core components. Source: Author
This modulator operates through a negative feedback loop composed of an integrator, a comparator, and a 1-bit DAC. The integrator accumulates the difference between the input signal and the DAC’s output. The comparator then evaluates this integrated signal against a reference voltage, producing a 1-bit data stream. This stream is fed back through the DAC, closing the loop and enabling continuous refinement of the output.
Following the delta-sigma modulator, the 1-bit data stream undergoes decimation via a digital filter (decimation filter). This process involves data averaging and sample rate reduction, yielding a multi-bit digital output. Decimation concentrates the signal’s relevant information into a narrower bandwidth, enhancing resolution while suppressing quantization noise within the band of interest.
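To make the loop concrete, here is a minimal behavioral sketch in Python of a first-order modulator followed by a crude averaging decimator. The modulator clock, oversampling ratio, and test tone are illustrative assumptions rather than values from any particular converter.

```python
import numpy as np

def first_order_dsm(x, vref=1.0):
    """Behavioral first-order delta-sigma modulator.
    x: input samples in the range +/-vref. Returns a +/-vref one-bit stream."""
    integ = 0.0
    bits = np.zeros_like(x)
    for n, xn in enumerate(x):
        fb = bits[n - 1] if n else 0.0           # 1-bit DAC output fed back
        integ += xn - fb                         # "delta" then "sigma": integrate the difference
        bits[n] = vref if integ >= 0 else -vref  # comparator decision
    return bits

osr = 256                                        # oversampling ratio (assumed)
fs = 1_000_000                                   # modulator clock in Hz (assumed)
t = np.arange(400 * osr) / fs
x = 0.5 * np.sin(2 * np.pi * 50 * t)             # slow 50-Hz test tone
stream = first_order_dsm(x)
decimated = stream.reshape(-1, osr).mean(axis=1) # boxcar average + sample-rate reduction
print(decimated[:8])                             # multi-bit samples tracking the input average
```

A real converter replaces the boxcar average with a proper sinc or FIR decimation filter, but the principle is the same: averaging the one-bit stream recovers a multi-bit, lower-rate representation of the input.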
It’s no secret to most engineers that second-order delta-sigma ADCs push noise shaping further by using two integrators in the modulator loop. This deeper shaping shifts quantization noise farther into high frequencies, improving in-band resolution at a given oversampling ratio.
While the design adds complexity, it enhances signal fidelity and eases post-filtering demands. Second-order modulators are common in precision applications like audio and instrumentation, though stability and loop tuning become more critical as order increases.
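To quantify that benefit, the textbook approximation for an ideal Lth-order, single-bit modulator can be evaluated directly. The snippet below is only a sketch: it assumes an ideal loop and ignores thermal noise and other nonidealities, so real parts will fall short of these figures.

```python
import math

def ideal_sqnr_db(order, osr, quantizer_bits=1):
    """Ideal in-band SQNR of an Lth-order delta-sigma modulator (textbook approximation)."""
    l = order
    return (6.02 * quantizer_bits + 1.76
            + 10 * math.log10((2 * l + 1) / math.pi ** (2 * l))
            + 10 * (2 * l + 1) * math.log10(osr))

for order in (1, 2):
    sqnr = ideal_sqnr_db(order, osr=256)
    enob = (sqnr - 1.76) / 6.02              # effective number of bits
    print(f"order {order}, OSR 256: ~{sqnr:.0f} dB SQNR, ~{enob:.1f} ENOB")
```

Under these assumptions, moving from a first-order to a second-order loop at an oversampling ratio of 256 lifts the ideal figure from roughly 12 to roughly 19 effective bits, which is why higher-order loops are worth their added stability headaches.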
Well, at its core, the delta-sigma ADC represents a seamless integration of analog and digital processing. Its ability to achieve high-resolution conversion stems from the coordinated use of oversampling, noise shaping, and decimation—striking a delicate balance between speed and precision.
Delta-sigma ADCs made approachable
Although delta-sigma conversion is a complex process, several prewired ADC modules—built around popular, low-cost ICs like the HX711, ADS1232/34, and CS1237/38—make experimentation remarkably accessible. These chips offer high-resolution conversion with minimal external components, ideal for precision sensing and weighing applications.

Figure 3 A few widely used modules simplify delta-sigma ADC practice, even for those just starting out. Source: Author
Delta-sigma vs. flash ADCs vs. SAR
Most of you already know this, but flash ADCs are the speed demons of the converter world—using parallel comparators to achieve ultra-fast conversion, typically at the expense of resolution.
Flash ADCs and delta-sigma architectures serve distinct roles, with conversion rates differing by up to two orders of magnitude. Delta-sigma ADCs are ideal for low-bandwidth applications—typically below 1 MHz—where high resolution (12 to 24 bits) is required. Their oversampling approach trades speed for precision, followed by filtering to suppress quantization noise. This also simplifies anti-aliasing requirements.
While delta-sigma ADCs excel in resolution, they are less efficient for multichannel systems. The architecture may use either sampled-data modulators or continuous-time filters. The latter shows promise for higher conversion rates—potentially reaching hundreds of Msps—but with lower resolution (6 to 8 bits). Still in early R&D, continuous-time delta-sigma designs may challenge flash ADCs in mid-speed applications.
Interestingly, flash ADCs can also serve as internal building blocks within delta-sigma circuits to boost conversion rates.
Also, successive approximation register (SAR) ADCs sit comfortably between flash and delta-sigma designs, offering a practical blend of speed, resolution, and efficiency. Unlike flash ADCs, which prioritize raw speed using parallel comparators, SAR converters use a binary search approach that is slower but far more power-efficient.
Compared to delta-sigma ADCs, SAR designs avoid oversampling and complex filtering, making them ideal for moderate-resolution, real-time applications. Each architecture has its sweet spot: flash for ultra-fast, low-resolution tasks; delta-sigma for high-precision, low-bandwidth needs; and SAR for balanced performance across a wide range of embedded systems.
Delta-sigma converters elegantly bridge the analog and digital worlds, offering high-resolution performance through clever noise shaping and oversampling. Whether you are designing precision instrumentation or exploring audio fidelity, understanding their principles unlocks a deeper appreciation for modern signal processing.
Curious how these concepts translate into real-world design choices? Join the conversation—share your favorite delta-sigma use case or challenge in the comments. Let us map the noise floor together and surface the insights that matter.
T.K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Delta-sigma ADCs in a nutshell
- Delta-sigma ADC basics: How the digital filter works
- Recent Developments for SAR and Sigma Delta ADCs
- Understanding sigma delta ADCs: A non-mathematical approach
- 24-Bit, 16-Channel Delta-Sigma ADC Simplifies Front-End Signal Conditioning
The post Delta-sigma demystified: Basics behind high-precision conversion appeared first on EDN.
Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback

Efficient battery management becomes increasingly important as demand for portable power continues to rise, especially since balanced cells help ensure safety, high performance, and a longer battery life. When cells are mismatched, the battery pack’s total capacity decreases, leading to the overcharging of some cells and undercharging of others—conditions that accelerate degradation and reduce overall efficiency. The challenge is how to maintain an equal voltage and charge among the individual cells.
Typically, it’s possible to achieve cell balancing through either passive or active methods. Passive balancing, the more common approach because of its simplicity and low cost, equalizes cell voltages by dissipating excess energy from higher-voltage cells through resistor or FET networks. While effective, this process wastes energy as heat.
In contrast, active cell balancing redistributes excess energy from higher-voltage cells to lower-voltage ones, improving efficiency and extending battery life. Implementing active cell balancing involves an isolated, bidirectional power converter capable of both charging and discharging individual cells.
This Power Tip presents an active cell-balancing design based on a bidirectional flyback topology and outlines the control circuitry required to achieve a reliable, high-performance solution.
System architecture
In a modular battery system, each module contains multiple cells and a corresponding bidirectional converter (the left side of Figure 1). This arrangement enables any cell within Module 1 to charge or discharge any cell in another module, and vice versa. Each cell connects to an array of switches and control circuits that regulate individual charge and discharge cycles.
Figure 1 A modular battery system block diagram with multiple cells and a bidirectional converter, where any cell within Module 1 can charge/discharge any cell in another module. Each cell connects to an array of switches and control circuits that regulate individual charge/discharge cycles. Source: Texas Instruments
The block diagram in Figure 2 illustrates the design of a bidirectional flyback converter for active cell balancing. One side of the converter connects to the bus voltage (18 V to 36 V), which could be the top of the battery cell stack, while the other side connects to a single battery cell (3.0 V to 4.2 V). Both the primary and secondary sides employ flyback controllers, allowing the circuit to operate bidirectionally, charging or discharging the cell as required.

Figure 2 A bidirectional flyback for active cell balancing reference design. Source: Texas Instruments
A single control signal defines the power-flow direction, ensuring that both flyback integrated circuits (ICs) never operate simultaneously. The design delivers up to 5 A of charge or discharge current, protecting the cell while maintaining efficiency above 80% in both directions (Figure 3).
Figure 3 Efficiency data for charging (left) and discharging (right). Source: Texas Instruments
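As a rough feel for what a 5-A path with better than 80% efficiency means in practice, the sketch below estimates balancing time and bus-side energy for an assumed cell-to-cell mismatch. The 2-Ah imbalance and 3.7-V nominal cell voltage are placeholders, not figures from the TI design.

```python
def balancing_estimate(imbalance_ah, cell_v=3.7, balance_current_a=5.0, efficiency=0.80):
    """Rough balancing time and bus-side energy. cell_v, efficiency, and the
    imbalance passed in below are illustrative assumptions."""
    hours = imbalance_ah / balance_current_a        # cell is charged (or discharged) at 5 A
    cell_energy_wh = imbalance_ah * cell_v          # energy moved into (or out of) the cell
    bus_energy_wh = cell_energy_wh / efficiency     # energy drawn from the bus in charge mode
    return hours, cell_energy_wh, bus_energy_wh

hours, cell_wh, bus_wh = balancing_estimate(imbalance_ah=2.0)   # assumed 2-Ah mismatch
print(f"~{hours * 60:.0f} min at 5 A; {cell_wh:.1f} Wh into the cell, ~{bus_wh:.1f} Wh from the bus")
```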
Charge mode (power from Vbus to Vcell)
In charge mode, the control signal enables the charge controller, allowing Q1 to act as the primary FET. D1 is unused. On the secondary side, the discharge controller is disabled, and Q2 is unused. D2 serves as the output diode providing power to the cell. The secondary side implements constant-current and constant-voltage loops to charge the cell at 5 A until reaching the programmed voltage (3.0 V to 4.2 V).
Discharge mode (power from Vcell to Vbus)
Just the opposite happens in discharge mode; the control signal enables the discharge controller and disables the charge controller. Q2 is now the primary FET, and D2 is inactive. D1 serves as the output diode while Q1 is unused. The cell side enforces an input current limit to prevent discharging the cell at more than 5 A. The Vbus side features a constant-voltage loop to ensure that Vbus remains at its setpoint.
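Because the same transformer is driven from either side, the ideal continuous-conduction flyback relation Vout = Vin · (Ns/Np) · D/(1 − D) gives a feel for the duty cycles involved in each direction. The turns ratio in the sketch below is an assumed value for illustration, not one taken from the reference design, and diode and FET drops are ignored.

```python
def ccm_flyback_duty(v_in, v_out, n_pri_to_sec):
    """Ideal CCM flyback duty cycle from Vout = Vin * (Ns/Np) * D / (1 - D).
    n_pri_to_sec = Np/Ns; semiconductor drops are ignored."""
    k = v_out * n_pri_to_sec / v_in
    return k / (1 + k)

NP_NS = 6.0   # assumed bus-to-cell turns ratio for an 18-36 V bus and 3.0-4.2 V cell

# Charge mode: bus winding is the primary, cell winding is the secondary
for vbus in (18, 36):
    d = ccm_flyback_duty(vbus, 3.6, NP_NS)
    print(f"charge    Vbus={vbus:>2} V -> D ~ {d:.2f}")

# Discharge mode: cell winding is the primary, so the turns ratio inverts
for vcell in (3.0, 4.2):
    d = ccm_flyback_duty(vcell, 24.0, 1 / NP_NS)
    print(f"discharge Vcell={vcell} V -> D ~ {d:.2f}")
```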
Auxiliary power and bias circuits
The design also integrates two auxiliary DC/DC converters to maintain control functionality under all operating conditions. On the bus side, a buck regulator generates 10 V to bias the flyback IC and the discrete control logic that determines the charge and discharge direction. On the cell side, a boost regulator steps the cell voltage up to 10 V to power its controller and ensure that the control circuit is operational even at low cell voltages.
Multimodule operation
Figure 4 illustrates how multiple battery modules interconnect through the reference design’s units. The architecture allows an overcharged cell from a higher-voltage module, shown at the top of the figure, to transfer energy to an undercharged cell in any other module. The modules do not need to be connected adjacently. Energy can flow between any combination of cells across the pack.

Figure 4 Interconnection of battery modules using TI’s reference design for bidirectional balancing. Source: Texas Instruments
Future improvements
For higher-power systems (20 W to 100 W), adopting synchronous rectification on the secondary and an active-clamp circuit on the primary will reduce losses and improve efficiency, thus enhancing performance.
For systems exceeding 100 W, consider alternative topologies such as forward or inductor-inductor-capacitor (LLC) converters. Regardless of topology, you must ensure stability across the wide-input and cell-voltage ranges characteristic of large battery systems.
Modern multicell battery systems
The bidirectional flyback-based active cell balancing approach offers a compact, efficient, and scalable solution for modern multicell battery systems. By recycling energy between cells rather than dissipating this energy as heat, the design improves both energy efficiency and battery longevity. Through careful control-loop optimization and modular scalability, this architecture enables high-performance balancing in portable, automotive, and renewable energy applications.
Sarmad Abedin is a systems engineer on the power design services (PDS) team at Texas Instruments, where he works on both automotive and industrial power supplies. He has been designing power supplies for the past 14 years and has experience in both isolated and non-isolated topologies. He graduated from Rochester Institute of Technology in 2011 with his bachelor’s degree.
Related Content
- Active balancing: How it works and what are its advantages
- Achieving cell balancing for lithium-ion batteries
- Lithium cell balancing: When is enough, enough?
- Product How-To: Active balancing solutions for series-connected batteries
The post Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback appeared first on EDN.
Does (wearing) an Oura (smart ring) a day keep the doctor away?

Before diving into my on-finger impressions of Oura’s Gen3 smart ring, as I’d promised I’d do back in early September, I thought I’d start off by revisiting some of the business-related topics I mentioned in that initial post in the series. First off, I mentioned at the end of that post that Oura had just obtained a favorable final judgment from the United States International Trade Commission (ITC) that both China-based RingConn and India-based Ultrahuman had infringed on its patent portfolio. In the absence of licensing agreements or other compromises, both Oura competitors would be banned from further product shipments to and sales of their products in the US after a final 60-day review period ended on October 21, although retailer partners could continue to sell their existing inventory until it was depleted.
Product evolutions and competition developments
I’m writing these words 10 days later, on Halloween, and there’ve been some interesting developments. I’d intentionally waited until after October 21 to see how both RingConn and Ultrahuman would react, as well as to assess whether patent challenges would pan out. As for Ultrahuman, a blog post published shortly before the deadline (and updated the day after) made it clear that the company wasn’t planning on caving:
- A new ring design is already in development and will launch in the U.S. as soon as possible.
- We’re actively seeking clarity on U.S. manufacturing from our Texas facility, which could enable a “Made in USA” Ring AIR in the near future.
- We also eagerly await the U.S. Patent and Trademark Office’s review of the validity of Oura’s ‘178 patent, which it acquired in 2023, and is central to the ITC ruling. A decision is expected in December.
To wit, per a screenshot I captured the day after the deadline, Wednesday, October 22, sales through the manufacturer’s website to US customers had ceased.

And surprisingly, inventory wasn’t listed as available for sale on Amazon’s website, either.
RingConn conversely took a different tack. On October 22, again, when I checked, the company was still selling its products to US customers both from its own website and Amazon’s:

This situation baffled me until I hit up the company subreddit and saw the following:
Dear RingConn Family,
We’d like to share some positive news with you: RingConn, a leading smart ring innovator, has reached a settlement with ŌURA regarding a patent dispute. Under the terms of the agreement, RingConn’s software and hardware products will remain available in the U.S. market, without affecting its market presence.
See the company’s Reddit post for the rest of the message. And here’s the official press release.
Secondly, as I’d noted in my initial coverage:
One final factor to consider, which I continue to find both surprising and baffling, is the fact that none of the three manufacturers I’ve mentioned here seems to support having more than one ring actively associated with an account (and therefore cloud-logging and archiving data) at the same time. To press a second ring into service, you need to manually delete the first one from your account. The lack of multi-ring support is a frequent cause of complaints on Reddit and elsewhere, from folks who want to accessorize multiple smart rings just as they do with normal rings, varying color and style to match outfits and occasions. And the fiscal benefit to the manufacturers of such support is intuitively obvious, yes?
It turns out I just needed to wait a few weeks. On October 1, Oura announced that multiple Oura Ring 4 styles would soon be supported under a single account. Quoting the press release, “Pairing and switching among multiple Oura Ring 4 devices on a single account will be available on iOS starting Oct. 1, 2025, and on Android starting Oct. 20, 2025.” That said, a crescendo of complaints on Reddit and elsewhere suggests an implementation delay; I’m 11 days past October 20 at this point and haven’t seen the promised Android app update yet, and at least some iOS users have waited a month at this point. Oura PR told me that I should be up and running by November 5; I’ll follow up in the comments as to whether this actually happened.
Charging options
That same day, by the way, Oura also announced its own branded battery-inclusive charger case, an omission that I’d earlier noted versus competitor RingConn:

That said, again quoting from the October 1 press release (with bolded emphasis mine), the “Oura Ring 4 Charging Case is $99 USD and will be available to order in the coming months.” For what it’s worth, the $28.99 (as I write these words) Doohoeek charging case for my Gen3 Horizon:

is working like a charm:


Behind it, by the way, is the upgraded Doohoeek $33.29 charging case for my Oura Ring 4, whose development story (which I got straight from the manufacturer) was not only fascinating in its own right but also gave me insider insight into how Oura has evolved its smart ring charging scheme over time. More about that soon, likely next month.

And here’s my Gen3 on the factory-supplied, USB-C-fed standard charger, again with its Ring 4 sibling behind it:

As for the ring itself, here’s what it looks like on my left index finger, with my wedding band two digits over from it on the same hand:


And here again are all three rings I’ve covered in in-depth writeups to date: the Oura Gen3 Horizon at left, Ultrahuman Ring AIR in the middle and RingConn Gen 2 at right:

Like RingConn’s product:

both the Heritage:

and my Horizon variant of the Oura Gen3:


include physical prompting to achieve and maintain proper placement: sensor-inclusive “bump” guides on either side of the inner backside, which the Oura Ring 4 notably dispenses with:

I’ve already shown you what the red glow of the Gen3 intermediary SpO2 (oxygen saturation) sensor looks like when in operation, specifically when I’m able to snap a photo of it soon enough after waking to catch it still in action before it discerns that I’ve stirred and turns off:

And here’s what the two green-color pulse rate sensors, one on either side of their SpO2 sibling:

look like in action:

Generally speaking, the Oura Gen3 feels a lot like the Ultrahuman Ring AIR; both drop 15% to 20% of battery charge every 24 hours, leading to a sub-week operating life between recharges. That said, I will give Oura well-deserved kudos for its software user interface, which is notably more informative, more intuitive, and broadly easier to use than its RingConn and Ultrahuman counterparts. Then again, Oura’s been around the longest and has the largest user base, so it’s had more time (and more feedback) to fine-tune things. And cynically speaking, given Oura’s $5.99/month or $69.99/year subscription fee versus its competitors’ free apps, it’d better be better!
Software insights
In closing, and in fairness, regarding that subscription, it’s not strictly required to use an Oura smart ring. That said, the information supplied without it:

is a pale subset of the norm:

What I’m showing in the overview screen images is a fraction of the total information captured and reported, but it’s all well-organized and intuitive. And as you can see on that last one, the Oura smart ring is adept at sensing even brief catnaps.
With that, and as I’ve already alluded, I now have an Oura Ring 4 on-finger—two of them, in fact, one of which I’ll eventually be tearing down—which I aspire to write up shortly, sharing my impressions both versus its Gen3 predecessor and its competitors. Until then, I as-always welcome your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- RingConn: Smart, svelte, and econ(omical)
- Can a smart ring make me an Ultrahuman being?
- Smarty Ring Fails to Impress
- Smart ring allows wearer to “air-write” messages with a fingertip
The post Does (wearing) an Oura (smart ring) a day keep the doctor away? appeared first on EDN.
Inside the battery: A quick look at internal resistance

Ever wondered why a battery that reads full voltage still struggles to power your device? The answer often lies in its internal resistance. This hidden factor affects how efficiently a battery delivers current, especially under load.
In this post, we will briefly examine the basics of internal resistance—and why it’s a critical factor in real-world performance, from handheld flashlights to high-power EV drivetrains.
What’s internal resistance and why it matters
Every battery has some resistance to the flow of current within itself—this is called internal resistance. It’s not a design flaw, but a natural consequence of the materials and construction. The electrolyte, electrodes, and even the connectors all contribute to it.
Internal resistance causes voltage to drop when the battery delivers current. The higher the current draw, the more noticeable the drop. That is why a battery might read 1.5 V at rest but dip below 1.2 V under load—and why devices sometimes shut off even when the battery seems “full.”
Here is what affects it:
- Battery type: Alkaline, lithium-ion, and NiMH cells all have different internal resistances.
- Age and usage: Resistance increases as the battery wears out.
- Temperature: Cold conditions raise resistance, reducing performance.
- State of charge: A nearly empty battery often shows higher resistance.
Building on that, internal resistance gradually increases as batteries age. This rise is driven by chemical wear, electrode degradation, and the buildup of reaction byproducts. As resistance climbs, the battery becomes less efficient, delivers less current, and shows more voltage drop under load—even when the resting voltage still looks healthy.
Digging a little deeper—focusing on functional behavior under load—internal resistance is not just a single value; it’s often split into two components. Ohmic resistance comes from the physical parts of the battery, like the electrodes and electrolyte, and tends to stay relatively stable.
Polarization resistance, on the other hand, reflects how the battery’s chemical reactions respond to current flow. It’s more dynamic, shifting with temperature, charge level, and discharge rate. Together, these resistances shape how a battery performs under load, which is why two batteries with identical voltage readings might behave very differently in real-world use.
Internal resistance in practice
Internal resistance is a key factor in determining how much current a battery can deliver. When internal resistance is low, the battery can supply a large current. But if the resistance is high, the current it can provide drops significantly. Also, the higher the internal resistance, the greater the energy loss—this loss manifests as heat. That heat not only wastes energy but also accelerates the battery’s degradation over time.
The figure below illustrates a simplified electrical model of a battery. Ideally, internal resistance would be zero, enabling maximum current flow without energy loss. In practice, however, internal resistance is always present and affects performance.

Figure 1 Illustration of a battery’s internal configuration highlights the presence of internal resistance. Source: Author
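A minimal numerical sketch of the Figure 1 model, an ideal EMF in series with an internal resistance, shows how loading sags the terminal voltage and turns part of the delivered power into heat. The cell values used here are illustrative assumptions only.

```python
def loaded_battery(emf_v, r_int_ohm, load_current_a):
    """Ideal EMF in series with R_int, per the simplified model of Figure 1."""
    v_terminal = emf_v - load_current_a * r_int_ohm   # terminal voltage sags under load
    p_heat_w = load_current_a ** 2 * r_int_ohm        # power dissipated inside the cell
    return v_terminal, p_heat_w

# Fresh vs. aged cell, with assumed example values
for r_int in (0.05, 0.20):
    v, p = loaded_battery(emf_v=3.7, r_int_ohm=r_int, load_current_a=2.0)
    print(f"R_int = {r_int:.2f} ohm: {v:.2f} V at the terminals, {p:.2f} W lost as heat")
```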
Here is a quick side note regarding resistance breakdown. Focusing on material-level transport mechanisms, battery internal resistance comprises two primary contributors: electronic resistance, driven by electron flow through conductive paths, and ionic resistance, governed by ion transport within the electrolyte.
The total effective resistance reflects their combined influence, along with interfacial and contact resistances. Understanding this layered structure is key to diagnosing performance losses and carrying out design improvements.
As observed nowadays, elevated internal resistance in EV batteries hampers performance by increasing heat generation during acceleration and fast charging, ultimately reducing driving range and accelerating cell degradation.
Fortunately, several techniques are available for measuring a battery’s internal resistance, each suited to different use cases and levels of diagnostic depth. Common methods include direct current internal resistance (DCIR), alternating current internal resistance (ACIR), and electrochemical impedance spectroscopy (EIS).
And there is a two-tier variation of the standard DCIR technique, which applies two sequential discharge loads with distinct current levels and durations. The battery is first discharged at a low current for several seconds, followed by a higher current for a shorter interval. Resistance values are calculated using Ohm’s law, based on the voltage drops observed during each load phase.
Analyzing the voltage response under these conditions can reveal more nuanced resistive behavior, particularly under dynamic loads. However, the results remain strictly ohmic and do not provide direct information about the battery’s state of charge (SoC) or capacity.
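The arithmetic behind the two-tier method is simply Ohm’s law applied to the voltage drop observed in each load phase. The sketch below illustrates that calculation with placeholder voltages and currents rather than real measurements.

```python
def dcir_two_tier(v_open, v_load1, i_load1, v_load2, i_load2):
    """Ohm's-law resistance from the voltage drop seen in each of the two load phases.
    All numbers passed in below are placeholders, not measured data."""
    r1 = (v_open - v_load1) / i_load1   # light, longer-duration load step
    r2 = (v_open - v_load2) / i_load2   # heavier, shorter load step
    return r1, r2

r1, r2 = dcir_two_tier(v_open=4.15, v_load1=4.08, i_load1=0.5, v_load2=3.95, i_load2=2.5)
print(f"phase 1: ~{r1 * 1000:.0f} mohm, phase 2: ~{r2 * 1000:.0f} mohm")
```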
Many branded battery testers, such as some product series from Hioki, apply a constant AC current at a measurement frequency of 1 kHz and determine the battery’s internal resistance by measuring the resulting voltage with an AC voltmeter (AC four-terminal method).

Figure 2 The Hioki BT3554-50 employs the AC-IR method to achieve high-precision internal resistance measurement. Source: Hioki
The 1,000-hertz (1 kHz) ohm test is a widely used method for measuring internal resistance. In this approach, a small 1-kHz AC signal is applied to the battery, and resistance is calculated using Ohm’s law based on the resulting voltage-to-current ratio.
It’s important to note that AC and DC methods often yield different resistance values due to the battery’s reactive components. Both readings are valid—AC impedance primarily reflects the instantaneous ohmic resistance, while DC measurements capture additional effects such as charge transfer and diffusion.
Notably, the DC load method remains one of the most enduring—and nostalgically favored—approaches for measuring a battery’s internal resistance. Despite the rise of impedance spectroscopy and other advanced techniques, its simplicity and hands-on familiarity continue to resonate with seasoned engineers.
It involves briefly applying a load—typically for a second or longer—while measuring the voltage drop between the open-circuit voltage and the loaded voltage. The internal resistance is then calculated using Ohm’s law by dividing the voltage drop by the applied current.
A quick calculation: To estimate a battery’s internal resistance, you can use a simple voltage-drop method when the open-circuit voltage, loaded voltage, and current draw are known. For example, if a battery reads 9.6 V with no load and drops to 9.4 V under a 100-mA load:
Internal resistance = (9.6 V − 9.4 V) / 0.1 A = 2 Ω
This method is especially useful in field diagnostics, where direct resistance measurements may not be practical, but voltage readings are easily obtained.
In simplified terms, internal resistance can be estimated using several proven techniques. However, the results are influenced by the test method, measurement parameters, and environmental conditions. Therefore, internal resistance should be viewed as a general diagnostic indicator—not a precise predictor of voltage drop in any specific application.
Bonus blueprint: A closing hardware pointer
For internal resistance testing, consider the adaptable e-load concept shown below. It forms a simple, reliable current sink for controlled battery discharge, offering a practical starting point for further refinement. As you know, the DC load test method allows an electronic load to estimate a battery’s internal resistance by observing the voltage drop during a controlled current draw.

Figure 3 The blueprint presents an electronic load concept tailored for internal resistance measurement, pairing a low-RDS(on) MOSFET with a precision load resistor to form a controlled current sink. Source: Author
Now it’s your turn to build, tweak, and test. If you have got refinements, field results, or alternate load strategies, share them in the comments. Let us keep the circuit conversation flowing.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- All About Batteries
- What Causes Batteries to Fail?
- Power Consumption and Battery Life Analysis
- Resistivity is the key to measuring electrical resistance
- Cell balancing maximizes the capacity of multi-cell batteries
The post Inside the battery: A quick look at internal resistance appeared first on EDN.
NB-IoT module adds built-in geolocation capabilities

The ST87M01-1301 NB-IoT wireless module from ST provides narrowband cellular connectivity along with both GNSS and Wi-Fi–based positioning for outdoor and indoor geolocation. Its integrated GNSS receiver enables precise location tracking using GPS constellations, while the Wi-Fi positioning engine delivers fast, low-power indoor location services by scanning nearby 802.11b access points and leveraging third-party geocoding providers.

As the latest member of the ST87M01 series of NB-IoT (LTE Cat NB2) industrial modules, this variant supports multi-frequency bands with extended multi-regional coverage. Its compact, low-power design makes it well suited for smart IoT applications such as asset tracking, environmental monitoring, smart metering, and remote healthcare. A 10.6×12.8-mm, 51-pin LGA package further enables miniaturization in space-constrained designs.
ST provides an evaluation kit that includes a ready-to-use Conexa IoT SIM card and two SMA antennas, helping developers quickly prototype and validate NB-IoT connectivity in real-world conditions. This is supported by an expanding ecosystem featuring the Easy-Connect software library and design examples.
The post NB-IoT module adds built-in geolocation capabilities appeared first on EDN.
Boost controller powers brighter automotive displays

A 60-V boost controller from Diodes, the AL3069Q packs four 80-V current-sink channels for driving LED backlights in automotive displays. Its adaptive boost-voltage control allows operation from a 4.5-V to 60-V input range—covering common automotive power rails at 12 V, 24 V, and 48 V—and its switching frequency is adjustable from 100 kHz to 1 MHz.

The AL3069Q’s four current-sink channels are set using an external resistor, providing typical ±0.5% current matching between channels and devices to ensure uniform brightness across the display. Each channel delivers 250 mA continuous or up to 400 mA pulsed, enabling support for a range of display sizes and LED panels up to 32-inch diagonal, such as those used in infotainment systems, instrument clusters, and head-up displays. PWM-to-analog dimming, with a minimum duty cycle of 1/5000 at 100 Hz, improves brightness control while minimizing LED color shift.
Diodes’ AL3069Q offers robust protection and fault diagnostics, including cycle-by-cycle current limit, soft-start, UVLO, programmable OVP, OTP, and LED-open/-short detection. Additional safeguards cover sense resistor, Schottky diode, inductor, and VOUT faults, with a dedicated pin to signal any fault condition.
The automotive-compliant controller costs $0.54 each in 1000-unit quantities.
The post Boost controller powers brighter automotive displays appeared first on EDN.
Hybrid device elevates high-energy surge protection

TDK’s G series integrates a metal oxide varistor and a gas discharge tube into a single device to provide enhanced surge protection. The two elements are connected in series, combining the strengths of both technologies to deliver greater protection than either component can offer on its own. This hybrid configuration also reduces leakage current to virtually zero, helping extend the overall lifetime of the device.

The G series comprises two leaded variants—the G14 and G20—with disk diameters of 14 mm and 20 mm, respectively. G14 models support AC operating voltages from 50 V to 680 V, while G20 versions extend this range to 750 V. They can handle maximum surge currents of 6,000 A (G14) and 10,000 A (G20) for a single 8/20-µs pulse, and absorb up to 200 J (G14) or 490 J (G20) of energy.
Operating over a temperature range of –40 °C to +105 °C, the G series is suitable for use in power supplies, chargers, appliances, smart metering, communication systems, and surge protection devices. Integrating both protection elements into a single, epoxy-coated 2-pin package simplifies design and reduces board space compared to using discrete components.
To access the datasheets for the G14 series (ordering code B72214G) and the G20 series (B72220G), click here.
The post Hybrid device elevates high-energy surge protection appeared first on EDN.
Power supplies enable precise DC testing

R&S has launched the NGT3600 series of DC power supplies, delivering up to 3.6 kW for a wide range of test and measurement applications. This versatile line provides clean, stable power with low voltage and current ripple and noise. With a resolution of 100 µA for current and 1 mV for voltage, as well as adjustable output voltages up to 80 V, the supplies offer both precision and flexibility.

The dual-channel NGT3622 combines two fully independent 1800-W outputs in a single compact instrument. Its channels can be connected in series or parallel, allowing either the voltage or the current to be doubled. For applications requiring even more power, up to three units can be linked to provide as much as 480 V or 300 A across six channels. The NGT3622 supports current and voltage testing under load, efficiency measurements, and thermal characterization of components such as DC/DC converters, power supplies, motors, and semiconductors.
Engineers can use the NGT3600 series to test high-current prototypes such as base stations, validate MPPT algorithms for solar inverters, and evaluate charging-station designs. In the automotive sector, the series supports the transition to 48-V on-board networks by simulating these networks and powering communication systems, sensors, and control units during testing.
All models in the NGT3600 series are directly rack-mountable with no adapter required. They will be available beginning January 13, 2026, from R&S and selected distribution partners. For more information, click here.
The post Power supplies enable precise DC testing appeared first on EDN.
Space-ready Ethernet PHYs achieve QML Class P

Microchip’s two radiation-tolerant Ethernet PHY transceivers are the company’s first devices to earn QML Class P/ESCC 9000P qualification. The single-port VSC8541RT and quad-port VSC8574RT support data rates up to 1 Gbps, enabling dependable data links in mission-critical space applications.

Achieving QML Class P/ESCC 9000P certification involves rigorous testing—such as Total Ionizing Dose (TID) and Single Event Effects (SEE) assessments—to verify that devices tolerate the harsh radiation conditions of space. The certification also ensures long-term availability, traceability, and consistent performance.
The VSC8541RT and VSC8574RT withstand 100 krad(Si) TID and show no single-event latch-up at LET levels below 78 MeV·cm²/mg at 125 °C. The VSC8541RT integrates a single Ethernet copper port supporting MII, RMII, RGMII, and GMII MAC interfaces, while the VSC8574RT includes four dual-media copper/fiber ports with SGMII and QSGMII MAC interfaces. Their low power consumption and wide operating temperature ranges make them well-suited for missions where thermal constraints and power efficiency are key design considerations.
The post Space-ready Ethernet PHYs achieve QML Class P appeared first on EDN.
Active current mirror

Current mirrors are a commonly useful circuit function, and sometimes high precision is essential. The challenge of getting current mirrors to be precise has created a long list of tricks and techniques. The list includes matched transistors, monolithic transistor multiples, emitter degeneration, fancy topologies with extra transistors, e.g., Wilson, cascode, etc.
But when all else fails and precision just can’t suffer any compromise, Figure 1 shows the nuclear option. Just add a rail-to-rail I/O (RRIO) op-amp!
Figure 1 An active current sink mirror. Assuming resistor equality and negligible A1 offset error, A1 feedback forces Q1 to maintain accurate current sink I/O equality I2 = I1.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The theory of operation of the active current mirror (ACM) couldn’t be more straightforward. Vr, which is equal to I1·R, is wired to A1’s noninverting input, forcing it to drive Q1 to conduct I2 such that I2·R = I1·R.
Therefore, if the resistors are equal, A1’s accuracy limiting parameters, like offset voltage, gain-bandwidth, bias and offset currents, etc., are adequate, and Q1 doesn’t saturate, I1 can be equal to I2 just as precisely as you like.
Obviously, Vr must be >> Voffset, and A1’s output span must be >> Q1’s threshold even after subtracting Vr.
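To put rough numbers on those accuracy limits, the sketch below estimates the mirror error contributed by A1’s input offset voltage and by resistor mismatch, using the relation the loop enforces (I2·R2 = I1·R1 + Vos). The 10-mA level, 100-Ω resistors, and 100-µV offset are assumed example values, not part of the original design.

```python
def mirror_error_pct(i1_a, r1_ohm, r2_ohm, vos_v):
    """Fractional I2-vs-I1 error when the loop enforces I2*R2 = I1*R1 + Vos."""
    i2_a = (i1_a * r1_ohm + vos_v) / r2_ohm
    return 100 * (i2_a / i1_a - 1)

# Assumed example: 10 mA mirrored through nominally equal 100-ohm resistors
print(mirror_error_pct(10e-3, 100.0, 100.0, vos_v=100e-6))  # offset only: ~ +0.01 %
print(mirror_error_pct(10e-3, 100.0, 100.1, vos_v=100e-6))  # add 0.1% resistor mismatch: ~ -0.09 %
```

As the second case shows, resistor matching quickly dominates over op-amp offset at practical current levels, which is exactly why the trimming option of Figure 3 earns its place.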
Substitute a PFET for Figure 1’s NFET, and a current-sourcing mirror results, as shown in Figure 2.

Figure 2 An active current source mirror. This is identical to Figure 1, except this Q1 is a PFET and the polarities are swapped.
Active current mirror (ACM) precision can be better than that of easily available sense resistors. So, a bit of post-assembly trimming, as illustrated in Figure 3, might be useful.

Figure 3 If adequately accurate resistors aren’t handy, a trimmer pot might be useful for post-assembly trimming.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A current mirror reduces Early effect
- A two-way mirror—current mirror that is
- A two-way Wilson current mirror
- Current mirror improves PWM regulator’s performance
The post Active current mirror appeared first on EDN.
Charting the course for a truly multi-modal device edge

The world is witnessing an artificial intelligence (AI) tsunami. While the initial waves of this technological shift focused heavily on the cloud, a powerful new surge is now building at the edge. This rapid infusion of AI is set to redefine Internet of Things (IoT) devices and applications, from sophisticated smart homes to highly efficient industrial environments.
This evolution, however, has created significant fragmentation in the market. Many existing silicon providers have adopted a strategy of bolting on AI capabilities to legacy hardware originally designed for their primary end markets. This piecemeal approach has resulted in inconsistent performance, incompatible toolchains, and a confusing landscape for developers trying to deploy edge AI solutions.
To unlock the transformative potential of edge AI, the industry must pivot. We must move beyond retrofitted solutions and embrace a purpose-built, AI-native approach that integrates hardware and software right from the foundational design.
The AI-native mandate
“AI-native” is more than a marketing term; it’s a fundamental architectural commitment where AI is the central consideration, not an afterthought. Here’s what it looks like.
- The hardware foundation: Purpose-built silicon
As IoT workloads evolve to handle data across multiple modalities, from vision and voice to audio and time series, the underlying silicon must present itself as a flexible, secure platform capable of efficient processing. Core to such design considerations are NPU architectures that can scale, supported by highly integrated vision, voice, video, and display pipelines.
- The software ecosystem: Openness and portability
To accelerate innovation and combat fragmentation for IoT AI, the industry needs to embrace open standards. While the ‘language’ of model formats and frameworks is becoming more industry-standard, the ecosystem of edge AI compilers is largely being built from vendor-specific and proprietary offerings. Efficient execution of AI workloads is heavily dependent on optimized data movement and processing across scalar, vector, and matrix accelerator domains.
By open-sourcing compilers, companies encourage faster innovation through broader community adoption, providing flexibility to developers and ultimately facilitating more robust device-to-cloud developer journeys. Synaptics is encouraging broader adoption from the community by open-sourcing edge AI tooling and software, including Synaptics’ Torq edge AI platform, developed in partnership with Google Research.
- The dawn of a new device landscape
AI-native silicon will fuel the creation of entirely new device categories. We are currently seeing the emergence of a new class of devices truly geared around AI, such as wearables—smart glasses, smartwatches, and wristbands. Crucially, many of these devices are designed to operate without being constantly tethered to a smartphone.
Instead, they soon might connect to a small, dedicated computing element, perhaps carried in a pocket like a puck, providing intelligence and outcomes without requiring the user to look at a traditional phone display. This marks the beginning of a more distributed intelligence ecosystem.
The need for integrated solutions
This evolving landscape is complex, demanding a holistic approach. Intelligent processing capabilities must be tightly coupled with secure, reliable connectivity to deliver a seamless end-user experience. Connected IoT devices need to leverage a broad range of technologies from the latest Wi-Fi and Bluetooth standards to Thread and ZigBee.
Chip, device and system-level security are also vital, especially considering multi-tenant deployments of sensitive AI models. For intelligent IoT devices, particularly those that are battery-powered or wearable, security must be maintained consistently as the device transitions in and out of different power states. The combination of processing, security, and power must all work together effectively.
Navigating this new era of the AI edge requires a fundamental shift in mindset, a change from retrofitting existing technology to building products with a clear, AI-first mission. Take the case of Synaptics SL2610 processor, one of the industry’s first AI-native, transformer-capable processors designed specifically for the edge. It embodies the core hardware and software principles needed for the future of intelligent devices, running on a Linux platform.
By embracing purpose-built hardware, rallying around open software frameworks, and maintaining a strategy of self-reliance and strategic partnerships, the industry can move past the current market noise and begin building the next generation of truly intelligent, powerful, and secure devices.
Mehul Mehta is a Senior Director of Product Marketing at Synaptics Inc., where he is responsible for defining the Edge AI IoT SoC roadmap and collaborating with lead customers. Before joining Synaptics, Mehul held leadership roles at DSP Group spanning product marketing, software development, and worldwide customer support.
Related Content
- Edge AI: Bringing Intelligence Closer to the Source
- An edge AI processor’s pivot to the open-source world
- Edge AI powers the next wave of industrial intelligence
- Synaptics, Google partnership targets edge AI for the IoT
- How Advanced Packaging is Unleashing Possibilities for Edge AI
The post Charting the course for a truly multi-modal device edge appeared first on EDN.
Infused concrete yields greatly improved structural supercapacitor

A few years ago, a team at MIT researched and published a paper on using concrete as an energy-storage supercapacitor (MIT engineers create an energy-storing supercapacitor from ancient materials) (also called an ultracapacitor), which is a battery based on electric fields rather than electrochemical principles. Now, the same group has developed a battery with ten times the storage per volume of that earlier version, by using concrete infused with various materials and electrolytes such as (but not limited to) nano-carbon black.
Concrete is the world’s most common building material and has many virtues, including basic strength, ruggedness, and longevity, and few restrictions on final shape and form. The idea of also being able to use it as an almost-free energy storage system is very attractive.
By combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, their electron-conducting carbon concrete (EC3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy, Figure 1.
Figure 1 As with most batteries, the schematic diagram and physical appearance are simple, and it’s the details that are the challenge. Source: Massachusetts Institute of Technology
This greatly improved energy density was made possible by their deeper understanding of how the nanocarbon black network inside EC3 functions and interacts with electrolytes, as determined using some sophisticated instrumentation. By using focused ion beams for the sequential removal of thin layers of the EC3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the joint EC³ Hub and MIT Concrete Sustainability Hub team was able to reconstruct the conductive nanonetwork at the highest resolution yet. The analysis showed that the network is essentially a fractal-like “web” that surrounds EC3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system.
A cubic meter of this version of EC3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy, which is enough to power an actual modest-sized refrigerator for a day. Via extrapolation (always the tricky aspect of these investigations), they say that 45 cubic meters of the earlier EC3 formulation, with its energy density of 0.22 kWh/m³—a typical house-sized foundation—would have enough capacity to store about 10 kilowatt-hours of energy, the average daily electricity usage for a household, Figure 2.

Figure 2 These are just a few of the many performance graphs that the team developed. Source: Massachusetts Institute of Technology
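The extrapolation is easy to reproduce. The sketch below divides a 10-kWh daily household target by the two energy densities quoted above (0.22 kWh/m³ for the earlier material and roughly 2 kWh/m³ for the improved EC3, the latter rounded from “over 2”) to compare the required concrete volumes.

```python
def volume_for_storage_m3(target_kwh, density_kwh_per_m3):
    """Concrete volume needed to store target_kwh at a given energy density."""
    return target_kwh / density_kwh_per_m3

daily_household_kwh = 10.0                        # average daily usage cited in the article
for label, density in [("earlier carbon-cement", 0.22), ("improved EC3", 2.0)]:
    vol = volume_for_storage_m3(daily_household_kwh, density)
    print(f"{label} at {density} kWh/m^3: ~{vol:.0f} m^3 for {daily_household_kwh:.0f} kWh")
```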
They achieved the highest performance with organic electrolytes, especially those that combined quaternary ammonium salts—found in everyday products like disinfectants—with acetonitrile, a clear, conductive liquid often used in industry, Figure 3.

Figure 3 They also identified needed properties for the electrolyte and investigated many possibilities for this critical component. Source: Massachusetts Institute of Technology
If this all sounds only like speculation from a small-scale benchtop lab project, it is, and it isn’t. Much of the work was done in cooperation with the American Concrete Institute, a research and promotional organization that studies all aspects of concrete, including formulation, application, standardized tests, long-term performance, and more.
While the MIT team, perhaps not surprisingly, is positioning this development as the next great thing—and it certainly gets a lot of attention in the mainstream media due to its tantalizing keywords of “concrete” and “battery”—there are genuine long-term factors to evaluate related to scaling up to a foundation-sized mass:
- Does the final form of the concrete matter, such as a large cube versus flat walls?
- What are the partial and large-scale failure modes?
- What are the long-term effects of weather exposure, as this material is concrete (which is well understood) but with an additive?
- What happens when an EC3 foundation degrades or fails—do you have to lift the house and replace the foundation?
- What are the short and long-term influences on performance, and how does the formulation affect that performance?
The performance and properties of the many existing concrete formulations have been tested in the lab and in the field over decades, and “improvements” are not done casually, especially in consideration of the end application.
Since demonstrating this concrete battery in structural mode lacks visual impact, the MIT team built a more attention-grabbing demonstration battery of stacked cells providing 12 V of power. They used this to operate a 12-V computer fan and a 5-V USB output (via a buck regulator) for a handheld gaming console, Figure 4.

Figure 4 A 12-V concrete battery powering a small fan and game console provides a visual image which is more dramatic and attention-grabbing. Source: Massachusetts Institute of Technology
The work is detailed in their paper “High energy density carbon–cement supercapacitors for architectural energy storage,” published in Proceedings of the National Academy of Sciences (PNAS). It’s behind a paywall, but there is a posted student thesis, “Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases.” Finally, there’s also a very informative 18-slide, 21-minute PowerPoint presentation at YouTube (with audio), “Carbon-cement supercapacitors: A disruptive technology for renewable energy storage,” that was developed by the MIT team for the ACI.
What’s your view? Is this a truly disruptive energy-storage development? Or will the realities of scaling up in physical volume and long-term performance, as well as “replacement issues,” make this yet another interesting advance that falls short in the real world?
Check back in five to ten years to find out. If nothing else, this research reminds us that there is potential for progress in power and energy beyond the other approaches we hear so much about.
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- New Cars Make Tapping Battery Power Tough
- What If Battery Progress Is Approaching Its End?
- Battery-Powered Large Home Appliances: Good Idea or Resource Misuse?
- Is a concrete rechargeable battery in your future?
The post Infused concrete yields greatly improved structural supercapacitor appeared first on EDN.
A simpler circuit for characterizing JFETs
The circuit presented by Cor Van Rij for characterizing JFETs is a clever solution. Noteworthy is the use of a five-pin test socket wired to accommodate all of the possible JFET pinout arrangements.
This idea uses that socket arrangement in a simpler circuit. The only requirement is the availability of two digital multimeters (DMMs), which add the benefit of having a hold function to the measurements. In addition to accuracy, the other goals in developing this tester were:
- It must be simple enough to allow construction without a custom printed circuit board, as only one tester was required.
- Use components on hand as much as possible.
- Accommodate both N- and P-channel devices while using a single voltage supply.
- Use a wide range of supply voltages.
- Incorporate a current limit with LED indication when the limit is reached.
The resulting circuit is shown in Figure 1.
Figure 1 Characterizing JFETs using a socket arrangement. The fixture requires the use of two DMMs.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Q1, Q2, R1, R3, R5, D2, and TEST pushbutton S3 comprise the simple current limit circuit (R4 is a parasitic Q-killer).
S3 supplies power to S1, the polarity reversal switch, and S2 selects the measurement. J1 and J2 are banana jacks for the DMM set to read the drain current. J3 and J4 are banana jacks for the DMM set to read Vgs(off).
Note the polarities of the DMM jacks. They are arranged so that the drain current and Vgs(off) read correctly for the type of JFET being tested—positive IDSS and negative Vgs(off) for N-channel devices and negative IDSS and positive Vgs(off) for P-channel devices.
R2 and D1 indicate the incoming power, while R6 provides a minimum load for the current limiter. Resistor R8 isolates the DUT from the effects of DMM-lead parasitics, and R9 provides a path to earth ground for static dissipation.
Testing JFETs
Figure 2 shows the tester setup measuring Vgs(off) and IDSS for an MPF102, an N-channel device. The specified values for this device are a Vgs(off) of -8 V maximum and an IDSS of 2 to 20 mA. Note that the hold function of the meters was used to maintain the measurements for the photograph. The supply for this implementation is a nominal 12-V “wall wart” salvaged from a defunct router.

Figure 2 The test of an MPF102 N-channel JFET using the JFET characterization circuit.
Figure 3 shows the current limit in action by setting the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102. The limit is 52.2 mA, and the I-LIMIT LED is brightly lit.

Figure 3 The current limit test that sets the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102.
John L. Waugaman’s love of electronics began when he built a crystal set at age 10 with his father’s help. Earning a BSEE from Carnegie-Mellon University led to a 30-year career in industry designing product inspection equipment, along with four patents. After being RIF’d, he spent the next 20 years as a consultant specializing in analog design for industrial and military projects. Now he’s retired, sort of, but still designing. It’s in his blood, he guesses.
Related Content
- A simple circuit to let you characterize JFETs more accurately
- A Conversation with Mike Engelhardt on QSPICE
- Characteristics of Junction Field Effect Transistors (JFET)
- Simple circuit lets you characterize JFETs
- Building a JFET voltage-tuned Wien bridge oscillator
- Discrete JFETs still prominent in design: Low input voltage power supply
The post A simpler circuit for characterizing JFETs appeared first on EDN.
Gold-plated PWM-control of linear and switching regulators
“Gold-plated” without the gold plating
Alright, I admit that the title is a bit over the top. So, what do I mean by it? I mean that:
(1) The application of PWM control to a regulator does not significantly degrade the inherent DC accuracy of its output voltage,
(2) Any ability of the regulator’s output voltage to reach below that of its internal reference is supported, and
(3) This is accomplished without the addition of a new reference voltage.
Refer to Figure 1.

Figure 1 This circuit meets the requirements of “Gold-Plated PWM control” as stated above.
Wow the engineering world with your unique design: Design Ideas Submission Guide
How it works
The values of components Cin, Cout, Cf, and L1 are obtained from the regulator’s datasheet. (Note that if the regulator is linear, L1 is replaced with a short.)
The datasheet typically specifies a preferred value of Rg, a single resistor between ground and the feedback pin FB.
Taking the DC voltage VFB of the regulator’s FB pin into account, R3 is selected so that U2a supplies a Vsup voltage greater than or equal to 3.0 V. C7 and R3 ensure that the composite is non-oscillatory, even with decoupling capacitor C6 in place. C6 is required for the proper operation of the SN74AC04 IC U1.
The following equations govern the circuit’s performance, where Vmax is the desired maximum regulator output voltage:
R3 = ( Vsup / VFB – 1 ) · 10k
Rg1 = Rg / ( 1 – ( VFB / Vsup ) / ( 1 – VFB/Vmax ))
Rg2 = Rg · Rg1 / ( Rg1 – Rg )
Rf = Rg · ( Vmax / VFB – 1 )
They enable the regulator output to reach zero volts (if it is capable of such) when the PWM inputs are at their highest possible duty cycle.
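As a quick numerical sanity check, the design equations above can be dropped into a few lines of code. The input values below are purely hypothetical examples (a 0.6-V VFB, a 3.3-V Vsup, a 5.0-V Vmax, and a 10-kΩ datasheet Rg); substitute your regulator’s actual numbers.

```python
# Minimal sketch of the design equations above. All input values are
# hypothetical examples, not taken from any particular regulator datasheet.
V_FB  = 0.6    # regulator feedback (reference) voltage, volts
V_sup = 3.3    # U2a output voltage feeding the PWM chain, volts (>= 3.0 V)
V_max = 5.0    # desired maximum regulator output voltage, volts
Rg    = 10e3   # datasheet-preferred FB-to-ground resistor, ohms

R3  = (V_sup / V_FB - 1) * 10e3                       # first equation above
Rg1 = Rg / (1 - (V_FB / V_sup) / (1 - V_FB / V_max))  # second equation above
Rg2 = Rg * Rg1 / (Rg1 - Rg)                           # third equation above
Rf  = Rg * (V_max / V_FB - 1)                         # fourth equation above

print(f"R3  = {R3 / 1e3:.2f} kΩ")
print(f"Rg1 = {Rg1 / 1e3:.2f} kΩ")
print(f"Rg2 = {Rg2 / 1e3:.2f} kΩ")
print(f"Rf  = {Rf / 1e3:.2f} kΩ")
# Note that the parallel combination of Rg1 and Rg2 works out to the original
# Rg, preserving the loading the datasheet expects at the FB pin.
print(f"Rg1 || Rg2 = {Rg1 * Rg2 / (Rg1 + Rg2) / 1e3:.2f} kΩ")
```

With these example inputs, Rg1 works out to roughly 12.6 kΩ, Rg2 to roughly 48.5 kΩ, and Rf to roughly 73.3 kΩ, with the Rg1-Rg2 parallel combination returning the original 10 kΩ.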
U1 is part of two separate PWMs whose composite output can provide up to 16 bits of resolution. Ra and Rb + Rc establish a factor of 256 for the relative significance of the PWMs.
If eight bits or less of resolution is required, Rb and Rc, and the least significant PWM, can be eliminated, and all six inverters can be paralleled.
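To make the factor-of-256 weighting concrete, here is a rough model of my own (not taken from the article’s schematic) that treats the two inverter banks as ideal rail-to-rail sources summed resistively through Ra and Rb + Rc into the filter input:

```python
# Rough behavioral model of the two-PWM composite. It assumes ideal 0-to-Vsup
# inverter outputs summed resistively through Ra and (Rb + Rc); the component
# values are placeholders chosen only to illustrate the 256:1 weighting.
V_sup = 3.3      # volts
Ra    = 1e3      # ohms, path from the most-significant PWM (placeholder value)
Rbc   = 256e3    # ohms, Rb + Rc = 256 * Ra, path from the least-significant PWM

def composite_dc(duty_msb, duty_lsb):
    """Average voltage at the summing node for two duty cycles in the range 0..1."""
    g_a, g_b = 1 / Ra, 1 / Rbc
    return V_sup * (duty_msb * g_a + duty_lsb * g_b) / (g_a + g_b)

span_msb = composite_dc(1, 0) - composite_dc(0, 0)
span_lsb = composite_dc(0, 1) - composite_dc(0, 0)
print(f"MSB PWM full-scale contribution: {span_msb * 1e3:.1f} mV")
print(f"LSB PWM full-scale contribution: {span_lsb * 1e3:.2f} mV")
print(f"weighting ratio: {span_msb / span_lsb:.0f}")   # 256
```

Sweeping the least-significant PWM through its full range moves the summing node by 1/256 of the swing produced by the most-significant one, which is what extends the composite to roughly 16 bits of resolution.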
The PWMs’ minimum frequency requirements shown are important: when they are met, the subsequent filter passes a peak-to-peak ripple of less than 2⁻¹⁶ of the composite PWM’s full-scale range. This filter consists of Ra, Rb + Rc, R5 to R7, C3 to C5, and U2b.
Errors
The most stringent need to minimize errors comes from regulators with low and highly accurate reference voltages. Let’s consider 600 mV and 0.5%, from which we arrive at a 3-mV maximum output error inherent to the regulator. (This is overly restrictive, of course, because it assumes zero-tolerance resistors to set the output voltage. If 0.1% resistors were considered, we’d add 0.2% to arrive at 0.7% and more than 4 mV.)
Broadly, errors come from imperfect resistor ratios and component tolerances, op-amp input offset voltages and bias currents, and non-linear SN74AC04 output resistances. The 0.1% resistors are reasonably cheap.
Resistor ratios
If nominally equal in value, such resistors, forming a ratio, contribute a worst-case error of ±0.1%. For those of different values, the worst is ±0.2%. Important ratios involve:
- Rg1, Rg2, and Rf
- R3 and R4
- Ra and Rb + Rc
Various Rf, Rg ratios are inherent to regulator operation.
The Rg1, Rg2; R3, R4; and Ra, Rb + Rc pairs have been introduced as requirements for PWM control.
The Ra / (Rb + Rc) error is ±0.2%, but since this involves a ratio of 8-bit PWMs at most, it incurs less than 1 least significant bit (LSbit) of error.
The Rg1, Rg2 pair introduces an error of ±0.2% at most.
The R3, R4 pair is responsible for a worst-case ±0.2%. All are less than the 0.5% mentioned earlier.
Temperature drift
The OPA2376 has a worst-case input offset voltage of 25 µV over temperature. Even if U2a has a gain of 5 to convert FB’s 600 mV to 3 V, this becomes only 125 µV.
Bias current is 10 pA maximum at 25°C, but we are given only a typical value of 250 pA at 125°C.
Of the two op-amps, U2b sees the higher input resistance. But its current would have to exceed 6 nA to produce even 1 mV of offset, so these op-amps are blameless.
To determine U1’s output resistance, its spec shows that its minimum logic high voltage for a 3-V supply is 2.46 V under a 12-mA load. This means that the maximum for each inverter is 45 Ω, which gives us 9 Ω for five in parallel. (The maximum voltage drop is lower for a logic low 12 mA, resulting in a lower resistance, but we don’t know how much lower, so we are forced to worst-case it at a ridiculous 0 V!)
Counting C3 as a short under dynamic conditions, the five inverters see a 35-kΩ load, leading to a less than 0.03% error.
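Those figures are easy to reproduce. The short check below simply re-runs the arithmetic using the SN74AC04 numbers quoted above and the 35-kΩ load mentioned in the text:

```python
# Re-running the output-resistance arithmetic quoted above.
V_supply   = 3.0      # volts, U1 supply
V_oh_min   = 2.46     # volts, minimum logic-high output at 12 mA
I_load     = 12e-3    # amps
n_parallel = 5        # paralleled inverters

r_each = (V_supply - V_oh_min) / I_load   # ~45 ohms per inverter
r_bank = r_each / n_parallel              # ~9 ohms for five in parallel

R_load = 35e3                             # load seen by the inverters (from the text)
error  = r_bank / (r_bank + R_load)       # fractional error from the source resistance

print(f"per-inverter output resistance: {r_each:.0f} Ω")
print(f"five in parallel:               {r_bank:.0f} Ω")
print(f"resulting error:                {error * 100:.3f} %")   # under 0.03 %
```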
Wrapping up
The regulator and its output range might need an even higher voltage, but the input voltage IN has been required to exceed 3.2 V. This is because U1 is only spec’d to swing to within 80 mV of its supply rails under loads of 2 kΩ or more. (I’ve added some margin, but it’s needed only for the case of maximum output voltage.)
You should specify Vmax to be slightly higher than needed so that U2b needn’t swing all the way to ground. This means that a small negative supply for U2 is unnecessary. IN must also be less than 5.5 V to avoid exceeding U2’s spec. If a larger value of IN is required by the regulator, an inexpensive LDO can provide an appropriate U2 supply.
I grant that this design might be overkill, but I wanted to see what might be required to meet the goals I set. But who knows, someone might find it or some aspect of it useful.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- PWM buck regulator interface generalized design equations
- Improve PWM controller-induced ripple in voltage regulators
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Negative time-constant and PWM program a versatile ADC front end
- Another PWM controls a switching voltage regulator
The post Gold-plated PWM-control of linear and switching regulators appeared first on EDN.
The role of AI processor architecture in power consumption efficiency

From 2005 to 2017—the pre-AI era—the electricity flowing into U.S. data centers remained remarkably stable. This was true despite the explosive demand for cloud-based services. Social networks such as Facebook, streaming services such as Netflix, real-time collaboration tools, online commerce, and the mobile-app ecosystem all grew at unprecedented rates. Yet continual improvements in server efficiency kept total energy consumption essentially flat.
In 2017, AI profoundly altered this course. The escalating adoption of deep learning triggered a shift in data-center design. Facilities began filling with power-hungry accelerators, mainly GPUs, chosen for their ability to crank through massive tensor operations at extraordinary speed. As AI training and inference workloads proliferated across industries, energy demand surged.
By 2023, U.S. data centers had doubled their electricity consumption relative to a decade earlier, with an estimated 4.4% of all U.S. electricity now feeding data-center racks, cooling systems, and power-delivery infrastructure.
According to the Berkeley Lab report, data-center load growth has tripled over the past decade and is projected to double or triple again by 2028. The report estimates that AI workloads alone could, by that time, consume as much electricity annually as 22% of all U.S. households—a scale comparable to powering tens of millions of homes.

Total U.S. data-center electricity consumption is projected to increase ten-fold from 2014 through 2028. Source: 2024 U.S. Data Center Energy Usage Report, Berkeley Lab
This trajectory raises a question: What makes modern AI processors so energy-intensive? Whether rooted in semiconductor physics, parallel-compute structures, memory-bandwidth bottlenecks, or data-movement inefficiencies, understanding the causes becomes a priority. Analyzing the architectural foundations of today’s AI hardware may lead to corrective strategies to ensure that computational progress does not come at the expense of unsustainable energy demand.
What’s driving energy consumption in AI processors
Unlike traditional software systems—where instructions execute in a largely sequential fashion, one clock cycle and one control-flow branch at a time—large language models (LLMs) demand massively parallel processing of multi-dimensional tensors. Matrices many gigabytes in size must be fetched from memory, multiplied, accumulated, and written back at staggering rates. In state-of-the-art models, this process encompasses hundreds of billions to trillions of parameters, each of which must be evaluated repeatedly during training.
Training models at this scale requires feeding enormous datasets through racks of GPU servers running continuously for weeks or even months. The computational intensity is extreme, and so is the energy footprint. For example, the training run for OpenAI’s GPT-4 is estimated to have consumed around 50 gigawatt-hours of electricity. That’s roughly equivalent to powering the entire city of San Francisco for three days.
This immense front-loaded investment in energy and capital defines the economic model of leading-edge AI. Model developers must absorb stunning training costs upfront, hoping to recover them later through widespread use of the trained model for inference.
Profitability hinges on the efficiency of inference, the phase during which users interact with the model to generate answers, summaries, images, or decisions. “For any company to make money out of a model—that only happens on inference,” notes Esha Choukse, a Microsoft Azure researcher who investigates methods for improving the efficiency of large-scale AI inference systems. The quote appeared in the May 20, 2025, MIT Technology Review article “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.”
Indeed, experts across the industry consistently emphasize that inference, not training, is becoming the dominant driver of AI’s total energy consumption. This shift is driven by the proliferation of real-time AI services—millions of daily chat sessions, continuous content generation pipelines, AI copilots embedded into productivity tools, and ever-expanding recommender and ranking systems. Together, these workloads operate around the clock, in every region, across thousands of data centers.
As a result, it’s now estimated that 80–90% of all compute cycles serve AI inference. Models continue to grow, user demand accelerates, and applications diversify, further widening this imbalance. The challenge is no longer merely reducing the cost of training but fundamentally rethinking the processor architectures and memory systems that underpin inference at scale.
Deep dive into semiconductor engineering
Understanding energy consumption in modern AI processors requires examining two fundamental factors: data processing and data movement. In simple terms, this is the difference between computing data and transporting data across a chip and its surrounding memory hierarchy.
At first glance, the computational side seems conceptually straightforward. In any AI accelerator, sizeable arrays of digital logic—multipliers, adders, accumulators, activation units—are orchestrated to execute quadrillions of operations per second. Peak theoretical performance is now measured in petaFLOPS, with major vendors pushing toward exaFLOP-class systems for AI training.
However, the true engineering challenge lies elsewhere. The overwhelming contributor to energy consumption is not arithmetic—it is the movement of data. Every time a processor must fetch a tensor from cache or DRAM, shuffle activations between compute clusters, or synchronize gradients across devices, it expends orders of magnitude more energy than performing the underlying math.
A foundational 2014 analysis by Professor Mark Horowitz at Stanford University quantified this imbalance with remarkable clarity. Basic Boolean operations require only tiny amounts of energy—on the order of picojoules (pJ). A 32-bit integer addition consumes roughly 0.1 pJ, while a 32-bit multiplication uses approximately 3 pJ.
By contrast, memory operations are dramatically more energy hungry. Reading or writing a single bit in a register costs around 6 pJ, and accessing 64 bits from DRAM can require roughly 2 nJ. This represents nearly a 10,000× energy differential between simple computation and off-chip memory access.
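A back-of-the-envelope calculation using those published per-operation energies makes the imbalance tangible. The workload below is a hypothetical example (a single 4096 × 4096 matrix-vector product with every operand fetched from DRAM), not a measurement of any real accelerator:

```python
# Back-of-the-envelope comparison using the per-operation energies cited above.
E_ADD32  = 0.1e-12   # joules, 32-bit integer add
E_MUL32  = 3.0e-12   # joules, 32-bit multiply
E_DRAM64 = 2.0e-9    # joules, 64-bit DRAM access

# Hypothetical layer: a 4096 x 4096 matrix times a 4096-entry vector, assuming
# each 32-bit operand is fetched from DRAM once (no cache reuse).
N        = 4096
macs     = N * N                          # multiply-accumulate operations
operands = N * N + N                      # 32-bit matrix and vector elements

E_compute = macs * (E_MUL32 + E_ADD32)
E_dram    = operands * (E_DRAM64 / 2)     # two 32-bit words per 64-bit access

print(f"arithmetic energy:  {E_compute * 1e6:8.1f} µJ")
print(f"DRAM-access energy: {E_dram * 1e6:8.1f} µJ")
print(f"movement / compute: {E_dram / E_compute:.0f}x")
```

Even in this crude model, moving the operands costs a few hundred times more energy than the multiply-accumulates themselves, and the gap widens further once cache-miss traffic, gradient synchronization, and inter-device transfers are included.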
This discrepancy grows even more pronounced at scale. The deeper a memory request must travel—from L1 cache to L2, from L2 to L3, from L3 to high-bandwidth memory (HBM), and finally out to DRAM—the higher the energy cost per bit. For AI workloads, which depend on massive, bandwidth-intensive layers of tensor multiplications, the cumulative energy consumed by memory traffic considerably outstrips the energy spent on arithmetic.
In the transition from traditional, sequential instruction processing to today’s highly parallel, memory-dominated tensor operations, data movement—not computation—has emerged as the principal driver of power consumption in AI processors. This single fact shapes nearly every architectural decision in modern AI hardware, from enormous on-package HBM stacks to complex interconnect fabrics like NVLink, Infinity Fabric, and PCIe Gen5/Gen6.
Today’s computing horsepower: CPUs vs. GPUs
To gauge how these engineering principles affect real hardware, consider the two dominant processor classes in modern computing:
- CPUs, the long-standing general-purpose engines of software execution
- GPUs, the massively parallel accelerators that dominate AI training and inference today
A flagship CPU such as AMD’s Ryzen Threadripper PRO 9995WX (96 cores, 192 threads) consumes roughly 350 W under full load. These chips are engineered for versatility—branching logic, cache coherence, system-level control—not raw tensor throughput.
AI processors, in contrast, are in a different league. Nvidia’s latest B300 accelerator draws around 1.4 kW on its own. A full Nvidia DGX B300 rack unit, housing eight accelerators plus supporting infrastructure, can reach 14 kW. Even in the most favorable comparison, this represents a 4× increase in power consumption per chip—and when comparing full server configurations, the gap can expand to 40× or more.
Crucially, these raw power numbers are only part of the story. The dramatic increases in energy usage are multiplied by AI deployments in data centers where tens of thousands of such GPUs are running around the clock.
Yet hidden beneath these amazing numbers lies an even more consequential industry truth, rarely discussed in public and almost never disclosed by vendors.
The well-kept industry secret
To the best of my knowledge, no major GPU or AI accelerator vendor publishes the delivered compute efficiency of their processors, defined as the ratio of actual throughput achieved during AI workloads to the chip’s theoretical peak FLOPS.
Vendors justify this absence by noting that efficiency depends heavily on the software workload; memory access patterns, model architecture, batch size, parallelization strategy, and kernel implementation can all impact utilization. This is true, and LLMs place extreme demands on memory bandwidth, causing utilization to drop substantially.
Even acknowledging these complexities, vendors still refrain from providing any range, estimate, or context for typical real-world efficiency. The result is a landscape where theoretical performance is touted loudly, while effective performance remains opaque.
The reality, widely understood among system architects but seldom stated plainly, is simple: “Modern GPUs deliver surprisingly low real-world utilization for AI workloads—often well below 10%.”
A processor advertised at 1 petaFLOP of peak AI compute may deliver only ~100 teraFLOPS of effective throughput when running a frontier-scale model such as GPT-4. The remaining 900 teraFLOPS are not simply unused—they are dissipated as heat, requiring extensive cooling systems that further compound total energy consumption.
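Translating that into energy terms takes only a line or two of arithmetic; the 1.4-kW and 1-petaFLOP figures are the ones quoted earlier, and the 10% utilization is the illustrative value used above:

```python
# Effective throughput and energy per useful operation, using the figures above.
P_board     = 1.4e3    # watts, accelerator board power
peak_flops  = 1e15     # FLOP/s, advertised peak
utilization = 0.10     # illustrative delivered fraction

effective_flops = peak_flops * utilization
print(f"effective throughput:      {effective_flops / 1e12:.0f} TFLOPS")
print(f"energy per peak FLOP:      {P_board / peak_flops * 1e12:.1f} pJ")
print(f"energy per delivered FLOP: {P_board / effective_flops * 1e12:.1f} pJ")
```

Every lost point of utilization shows up directly as extra joules per useful operation, which is why system architects treat data movement, rather than raw arithmetic capability, as the real budget.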
In effect, much of the silicon in today’s AI processors is idle most of the time, stalled on memory dependencies, synchronization barriers, or bandwidth bottlenecks rather than constrained by arithmetic capability.
This structural inefficiency is the direct consequence of the imbalance described earlier: arithmetic is cheap, but data movement is extraordinarily expensive. As models grow and memory footprints balloon, this imbalance worsens.
Without a fundamental rethinking of processor architecture—and especially of the memory hierarchy—the energy profile of AI systems will continue to scale unsustainably.
Rethinking AI processors
The implications of this analysis point to a clear conclusion: the architecture of AI processors must be fundamentally rethought. CPUs and GPUs each excel in their respective domains—CPUs in general-purpose control-heavy computation, GPUs in massively parallel numeric workloads. Neither was designed for the unprecedented data-movement demands imposed by modern large-scale AI.
Hierarchical memory caches, the cornerstone of traditional CPU design, were originally engineered as layers to mask the latency gap between fast compute units and slow external memory. They were never intended to support the terabyte-scale tensor operations that dominate today’s AI workloads.
GPUs inherited versions of these cache hierarchies and paired them with extremely wide compute arrays, but the underlying architectural mismatch remains. The compute units can generate far more demand for data than any cache hierarchy can realistically supply.
As a result, even the most advanced AI accelerators operate at embarrassingly low utilization. Their theoretical petaFLOP capabilities remain mostly unrealized—not because the math is difficult, but because the data simply cannot be delivered fast enough or close enough to the compute units.
What is required is not another incremental patch layered atop conventional designs. Instead, a new class of AI-oriented processor architecture must emerge, one that treats data movement as the primary design constraint rather than an afterthought. Such an architecture must be built around the recognition that computation is cheap, but data movement is expensive by orders of magnitude.
Processors of the future will not be defined by the size of their multiplier arrays or peak FLOPS ratings, but by the efficiency of their data pathways.
Lauro Rizzatti is a business advisor at VSORA, a company offering silicon solutions for AI inference. He is a verification consultant and industry expert on hardware emulation.
Related Content
- Solving AI’s Power Struggle
- The Challenges of Powering Big AI Chips
- AI Power and Cooling Spawn Forecasting Frenzy
- Benchmarking AI Processors: Measuring What Matters
- Breaking Through Memory Bottlenecks: The Next Frontier for AI Performance
The post The role of AI processor architecture in power consumption efficiency appeared first on EDN.
Resonant inductors offer a wide inductance range

ITG Electronics launches the RL858583 Series of resonant inductors, delivering a wide inductance range, high current, and high efficiency in a compact DIP package. The family of ferrite-based, high-current inductors target demanding power electronics applications.
(Source: ITG Electronics)
The RL858583 Series features an inductance range of 6.8 μH to 22.0 μH with a tight 5% tolerance. Custom inductance values are available.
The series supports currents up to 39 A, with approximately 30% roll-off, in a compact 21.5 × 21.0 × 21.5-mm package. This provides exceptional current handling in a compact DIP package, ITG said.
Designed for reliability in high-stress operating conditions, the inductors offer a rated voltage of 600 VAC/1,000 VDC and dielectric strength up to 4,500 VDC. The devices feature low DC resistance (DCR) from 3.94 mΩ to 17.40 mΩ and AC resistance (ACR) values from 70 mΩ to 200 mΩ, which helps to minimize power losses and to ensure high efficiency across a range of frequencies. The operating temperature ranges from -55°C to 130°C.
The combination of high current capability, compact design, and customizable inductance options makes them suited for resonant converters, inverters, and other high-performance power applications, according to ITG Electronics. The RL858583 Series resonant inductors are RoHS-compliant and halogen-free.
The post Resonant inductors offer a wide inductance range appeared first on EDN.
Power resistors handle high-energy pulse applications

Bourns, Inc. releases its Riedon BRF Series of precision power foil resistors for high-energy pulse applications. These power resistors offer power ratings up to 2,500 W and a temperature coefficient of resistance (TCR) as low as ±15 ppm/°C, making them suited as energy dissipation solutions for circuits that require high precision. Applications include current sensing, power management, industrial power control, and energy storage.
(Source: Bourns, Inc.)
The power resistor series is available in two- and four-terminal options with termination current ratings up to 150 A. This enables developers to tailor the resistors to their exact design requirements, Bourns said.
Other key specifications include a resistance range from 0.001 to 500 Ω, low inductance of <50 nH, and load stability to 0.1%. The operating temperature range is -40°C to 130°C.
The BRF Series of power resistors is built using metal foil technology housed in an aluminum heat sink and a low-profile package. These precision power resistors are designed to meet the rugged and space-constrained requirements of high-energy pulse applications such as power converters, battery energy storage systems, industrial power supplies, inverters, and motor drives.
Available now, the Riedon BRF series is RoHS compliant. Click here for Bourns’ portfolio of metal foil resistors.
The post Power resistors handle high-energy pulse applications appeared first on EDN.
The Linksys MX4200C: A retailer-branded router with memory deficiencies

How timely! My teardown of Linksys’ VLP01 router, submitted in late September, was published one day prior to when I started working on this write-up in late October.

What’s the significance, aside from the chronological cadence? Well, at the end of that earlier piece, I wrote:
There’s another surprise waiting in the wings, but I’ll save that for another teardown another (near-future, I promise) day.
That day is today. And if you’ve already read my earlier piece (which you have, right?), you know that I actually spent the first few hundred words of it talking about a different Linksys router, the LN1301, also known as the MX4300:

I bought a bunch of ‘em on closeout from Woot (yep, the same place that the refurbished VLP01 two-pack came from), and I even asked my wife to pick up one too, with the following rationale:
That’ll give me plenty of units for both my current four-node mesh topology and as-needed spares…and eventually I may decide to throw caution to the wind and redirect one of the spares to a (presumed destructive) teardown, too.
Last month’s bigger brother
Hold that thought. Today’s teardown victim was another refurbished Linksys router two-pack from Woot, purchased a few months later, this February to be exact. Woot promotion-titled the product page as a “Linksys AX4200 Velop Mesh Wi-Fi 6 System”, and the specs further indicated that it was a “Linksys MX8400-RM2 AX4200 Velop Mesh Wi-Fi 6 Router System 2-Pack”. It cost me $19.99 plus tax (with free shipping) after another $5 promotion-code discount, and I figured that, as with the two-VLP01 kit, I’d tear down one of the two routers for your enjoyment and hold onto the other for use as a mesh node. Here’s its stock image on Woot’s website:

Looks kinda like the MX4300, doesn’t it? I admittedly didn’t initially notice the physical similarity, in part because of the MX8400 product name replicated on the outer box label:

When I started working on the sticker holding the lid in place, I noticed a corner of a piece of literature sticking out, which turned out to be the warranty brochure. Nice packing job, Linksys!

Lifting the lid:

You’ll find both routers inside, along with two Ethernet cable strands rattling around loose. Underneath the thick blue cardstock piece labeled “Setup Guide” to the right:

are the two power supplies, along with…umm…the setup guide plus a support document:

Some shots of the wall wart follow:

including the specs:

and finally, our patient, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front view:

left side:

back, both an overview and a closeup of the various connectors: power, WAN, three LAN, and USB-A. Hmm…where have I seen that combo before?


right side:

top, complete with the status LED:

and…wait. What’s this?

In addition to the always-informative K7S-03580 FCC ID, check out that MX4200C product name. When I saw it, I realized two key things:
- Linksys was playing a similar naming game to what they’d done with the VLP01. Quoting from my earlier teardown: “…an outer box shot of what I got…which, I’ve just noticed, claims that it’s an AC2400 configuration (I’m guessing this is because Linksys is mesh-adding the two devices’ theoretical peak bandwidths together? Lame, Linksys, lame…)” This time, they seemingly added the numbers in the two MX4200 device names together to come up with the “bigger is better” MX8400 moniker.
- The MX4200 (C, in this case) is mighty close to the MX4300. Now also realizing the physical similarity, I suspected I had a near-clone (and much less expensive, not to mention more widely available) sibling to the no-longer-available router I’d discussed a month earlier, which, being rare, I was therefore so reticent to (presumably destructively) disassemble.
Some background from my online research before proceeding:
- The MX4200 came in two generational versions, both of them integrating 512 Mbytes of flash memory for firmware storage. V1 of the MX4200 included 512 Mbytes of RAM and measured 18.5 cm (7.3 inches) high and 7.9 cm (3.1 inches) wide. The larger V2 MX4200, at 24.3 cm (9.57 inches) high and 11 cm (4.45 inches) wide, also doubled the internal RAM capacity to 1 GByte.
- This MX4200C is supposedly a Costco-only variant (meaning what beyond the custom bottom sticker? Dunno), conceptually reminiscent of the Walmart-only VLP01 I’d taken apart last month. I can’t find any specs on it, but given its dimensional commonality with the V2 MX4200, I’ll be curious to peer inside and see if it embeds 1 GByte of RAM, too.
- And the MX4300? It’s also dimensionally reminiscent of the V2 MX4200. But this time, there are 2 GBytes of RAM inside it. Last month, I’d mentioned that the MX4300 also bumps up the flash memory to 1 GByte, but the online source I’d gotten that info from was apparently incorrect. It’s 512 Mbytes, the same as in both versions of the MX4200.
Clearly, now that I’m aware of the commonality between this MX4200C and the MX4300, I’m going to be more careful (but still comprehensive) than I might otherwise be with my dissection, in the hope of a subsequent full resurrection. To wit, here we go, following the same initial steps I used for the much smaller VLP01 a month ago. The only top groove I was able to punch through was the back edge, and even then, I had to switch to a flat-head screwdriver to make tangible disassembly progress (without permanently creasing the spudger blade in the process):

Voila:


Next to go, again as before, are those four screws:


And now for a notable deviation from last month’s disassembly scheme. That time, there were also screws under the bottom rubber “feet” that needed to be removed before I could gain access to the insides. This time, conversely, when I picked up the assembly in preparation for turning it upside-down…

Alrighty, then!

Behold our first glimpses of the insides. Referencing the earlier outer case equivalents (with the qualifier that, visually obviously, the PCB is installed diagonally), here’s the front:

Left side:

Back, along with another accompanying connectors closeup (note, by the way, the two screws at the bottom of the exposed portion of the PCB):


And right side:

Let’s next get rid of the plastic shield around the connectors, which, as was the case last month, lifted away straightaway:

And next, the finned heatsink to its left (in the earlier photo) and the rear right half of the assemblage (when viewed from the front):



We have liftoff:


Oh, goodie, Faraday cages! Hold that thought:

Rotating the assemblage around exposes the other (front left) half and its metal plate, which, with the just-seen four heatsink screws also no longer holding it in place, lifts right off as well:




You probably already noticed the colored wires in the prior shots. Here are the up-top antennas and LED assembly where they end up:


And here’s where at least some of them originate:



Unhooking the wire harness running up the side of the assemblage, along with removing the two screws noted earlier at the bottom of the PCB, enables the board’s subsequent release:

Here’s what I’m calling the PCB backside (formerly in the rear right region) which the finned heatsink previously partially covered and which you’ve already seen:

And here’s the newly-exposed-to-view frontside (formerly front left, to be precise), with even more Faraday cages awaiting my pry-off attention:

I’m happy to oblige. Upper left corner first:

Temporarily bending the tab away (because, as previously mentioned, I aspire to put everything back together in functionally resurrected form later), and with thanks to Google Image search results for the tip, a Silicon Labs EFR32MG21 Series 2 Multiprotocol Wireless SoC, supporting the Bluetooth, Thread, and Zigbee mesh protocols, comes into view. The previously shown single-lead antenna connection on the other side of the PCB is presumably associated with it:

To its left, uncaged, is a Fidelix FMND4G08S3J-ID 512 Mbyte NAND flash memory, presumably for holding the system firmware.
Most of the rest of the cages’ contents are bland, unless you’re into lots of passives; as you’ll soon see, their associated ICs on the other side are more exciting:




Note in all these cases so far, as well as the remainder, that thermal tape is employed for heat transfer purposes, not paste. Linksys’ decision not only makes it easier to see what’s underneath but also increases the likelihood of a successful tape-back-in-place reassembly:

And after all those passives, the final cage at bottom left ended up being IC-inclusive again, this time containing a Qualcomm PMP8074 power management controller:

Now for a revisit of the other side of the PCB, starting with the top-most cage and working our way to the bottom. The first one, with two antenna connectors notably above it, encompasses a portion of the wireless networking subsystem and is based on two Qualcomm Wi-Fi SoCs, the QCN5024 for 2.4 GHz and QCN5054 for 5 GHz. Above the former are two Skyworks SKY85340-11 front-end modules (FEMs); the latter is topped off by two Skyworks SKY85755-11s:


The next cage is for the processor, a quad-core 1.4 GHz Qualcomm IPQ8174, the same SoC and speed bin as in the Linksys MX4300 I discussed last month, and the volatile memory, two ESMT M15T2G16128A 2 Gbit DDR3-933 SDRAMs. I guess we now know how the MX4200C differs from the V2 MX4200; Linksys halved the RAM to 512 Mbytes total, reminiscent of the V1 MX4200’s allocation, to come up with this Costco-special product spin.



The third one, this time with four antenna connectors below it, houses the remainder of the (5 GHz-only, in this case) Wi-Fi subsystem: four more Qualcomm QCN5054s, each with a mated Skyworks SKY85755-11 FEM:


And last but not least, at bottom right is the final cage, containing a Qualcomm QCA8075 five-port 10/100/1000 Mbps Ethernet transceiver, only four ports’ worth of which are seemingly leveraged in this design (one WAN, three LAN, if you’ll recall from earlier). Its function is unsurprising given its layout proximity to the two Bothhand LG2P109RN dual-port magnetic transformers to its right:


And with that, I’ll wrap up for today. More info on the MX4200 (V1, to be precise) can be found at WikiDevi. Over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- A fresh gander at a mesh router
- The pros and cons of mesh networking
- Teardown: The router that took down my wireless network
- Is it time to upgrade to mesh networking?
The post The Linksys MX4200C: A retailer-branded router with memory deficiencies appeared first on EDN.
Understand quadrature encoders with a quick technical recap

An unexpected revisit to my earlier post on mouse encoder hacking sparked a timely opportunity to reexamine quadrature encoders, this time with a clearer lens and a more targeted focus on their signal dynamics and practical integration. So, let’s make a fresh start and dive straight into the quadrature signal magic.
Starting with a bit of theory, a quadrature signal refers to a pair of sinusoidal waveforms—typically labeled I (in-phase) and Q (quadrature)—that share the same frequency but are offset by 90° in phase. These orthogonal signals do not interfere with each other and together form the foundation for representing complex signals in systems ranging from communications to control.

Figure 1 A visualization illustrates the idealized output from a quadrature encoder, highlighting the phase relationship. Source: Author
In the context of quadrature encoders, the term describes two square wave signals, known as A and B channels, which are also 90° out of phase. This phase offset enables the system to detect the direction of rotation, count discrete steps or pulses for accurate position tracking, and enhance resolution through edge detection techniques.
As you may already be aware, encoders are essential components in motion control systems and are generally classified into two primary types: incremental and absolute. A common configuration within incremental encoders is the quadrature encoder, which uses two output channels offset in phase to detect both direction and position with greater precision, making it ideal for tracking relative motion.
Standard incremental encoders also generate pulses as the shaft rotates, providing movement data; however, they lose positional reference when power is interrupted. In contrast, absolute encoders assign a unique digital code to each shaft position, allowing them to retain exact location information even after a power loss—making them well-suited for applications that demand high reliability and accuracy.
Note that while quadrature encoders are often mentioned alongside incremental and absolute types, they are technically a subtype of incremental encoders rather than a separate category.
Oh, I almost forgot: The Z output of an ABZ incremental encoder plays a crucial role in precision positioning. Unlike the A and B channels, which continuously pulse to indicate movement and direction, the Z channel—also known as the index or marker pulse—triggers just once per revolution.
This single pulse serves as a reference point, especially useful during initialization or calibration, allowing systems to accurately identify a home or zero position. That is to say, the index pulse lets you reset to a known position and count full rotations; it’s handy for multi-turn setups or recovery after power loss.

Figure 2 A sample drawing depicts the encoder signals, with the index pulse clearly marked. Source: Author
Hands-on with a real-world quadrature rotary encoder
A quadrature rotary encoder detects rotation and direction via two offset signals; it’s used in motors, knobs, and machines for fine-tuned control. Below is the circuit diagram of a quadrature encoder I designed for a recent project using a couple of optical sensors.

Figure 3 Circuit diagram shows a simple quadrature encoder setup that employs optical sensors. Source: Author
Before we proceed, it’s worth taking a moment to reflect on a few essential points.
- A rotary encoder is an electromechanical device used to measure the rotational motion of a motor shaft or the position of a dial or knob. It commonly utilizes quadrature encoding, an incremental signaling technique that conveys both positional changes and the direction of rotation. A linear encoder, on the other hand, measures displacement along a straight path and is commonly used in applications requiring high-precision linear motion.
- Quadrature encoders feature two output channels, typically designated as channel A and channel B. By monitoring the pulse count and identifying which channel leads, the encoder interface can determine both the distance and direction of rotation.
- Many encoders also incorporate a third channel, known as the index channel (or Z channel), which emits a single pulse per full revolution. This pulse serves as a reference point, enabling the system to identify the encoder’s absolute position in addition to its relative movement.
- Each complete cycle of the A and B channels in a quadrature encoder generates square wave signals that are offset by 90 degrees in phase. This cycle produces four distinct signal transitions—A rising, B rising, A falling, and B falling—allowing for higher resolution in position tracking. The direction of rotation is determined by the phase relationship between the channels: if channel A leads channel B, the rotation is typically clockwise; if B leads A, it indicates counterclockwise motion.
- To interpret the pulse data generated by a quadrature encoder, it must be connected to an encoder interface. This interface translates the encoder’s output signals into a series of counts or cycles, which can then be converted into a number of rotations based on the encoder’s cycles per revolution (CPR) count. Some manufacturers also specify pulses per revolution (PPR), which typically refers to the number of electrical pulses generated on a single channel per full rotation and may differ from CPR depending on the decoding method used. (A short software decoding sketch follows this list.)
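For completeness, the classic way to act on those four transitions in software is a small lookup table indexed by the previous and current (A, B) states. The sketch below is a generic 4x decoder of my own, not tied to the optical-sensor circuit of Figure 3; the 360-count CPR is just an example value.

```python
# Generic 4x decoding of quadrature A/B signals via a transition lookup table.
# The CPR value is an example only; use your encoder's actual specification.
CPR = 360  # counts per revolution after 4x decoding (example value)

# (prev_A, prev_B, A, B) -> +1 for one rotation direction, -1 for the other,
# 0 for no movement or an invalid double transition (e.g., from bounce).
_STEP = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.prev = (0, 0)
        self.count = 0

    def update(self, a, b):
        """Call on every sample or on every A/B edge interrupt."""
        self.count += _STEP.get((*self.prev, a, b), 0)
        self.prev = (a, b)
        return self.count

    def revolutions(self):
        return self.count / CPR

# One full quadrature cycle in one direction yields four counts.
dec = QuadratureDecoder()
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:
    dec.update(a, b)
print(dec.count, f"{dec.revolutions():.4f} rev")
```

Which sign corresponds to clockwise rotation depends on how the A and B sensors are wired, so it is worth verifying once on the bench; the same table structure also makes it easy to reject invalid transitions caused by contact bounce or noise.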

Figure 4 The above diagram offers a concise summary of quadrature encoding basics. Source: Author
That’s all; now, back to the schematic diagram.
In the previously illustrated quadrature rotary encoder design, transmissive (through-beam) sensors work in tandem with a precisely engineered shaft encoder wheel to detect rotational movement. Once everything is correctly wired and tuned, your quadrature rotary encoder is ready for use. It outputs two phase-shifted signals, enabling direction and speed detection.
In practice, most quadrature encoders rely on one of three sensor technologies: optical, magnetic, or capacitive. Among these, optical encoders are the most commonly used. They operate by utilizing a light source and a photodetector array to detect the passage or reflection of light through an encoder disk.
A note for custom-built encoder wheels: When designing your own encoder wheel, precision is everything. Ensure the slot spacing and width are consistent and suited to your sensor’s resolution requirements. And do not overlook alignment; accurate positioning with the beam path is essential for generating clean, reliable signals.
Layers beneath the spin
So, once again we circled back to quadrature encoders—this time with a bit more intent and (hopefully) a deeper dive. Whether you are just starting to explore them or already knee-deep in decoding signals, it’s clear these seemingly simple components carry a surprising amount of complexity.
From pulse counting and direction sensing to the quirks of noisy environments, there is a whole layer of subtleties that often go unnoticed. And let us be honest—how often do we really consider debounce logic or phase shift errors until they show up mid-debug and throw everything off?
That is the beauty of it: the deeper you dig, the more layers you uncover.
If this stirred up curiosity or left you with more questions than answers, let us keep the momentum going. Share your thoughts, drop your toughest questions, or suggest what you would like to explore next. Whether it’s hardware oddities, decoding strategies, or real-world implementation hacks—we are all here to learn from each other.
Leave a comment below or reach out with your own encoder war stories. The conversation—and the learning—is far from over.
Let us keep pushing the boundaries of what we think we know, together.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Decode a quadrature encoder in software
- Understanding Incremental Encoder Signals
- AVR takes under 1µs to process quadrature encoder
- Linear position sensor/encoder offers analog and digital evaluation
- How to use FPGAs for quadrature encoder-based motor control applications
The post Understand quadrature encoders with a quick technical recap appeared first on EDN.
Motor drivers advance with new features

Industrial automation, robotics, and electric mobility are increasingly driving demand for improved motor driver ICs as well as solutions that make it easier to design motor drives. With energy consumption being a key factor in these applications, developers are looking for motor drivers that offer higher efficiency and lower power consumption.
At the same time, integrating motor drivers into existing systems is becoming more challenging, as they need to work seamlessly with a variety of motors and control algorithms such as trapezoidal, sinusoidal, and field-oriented control (FOC), according to Global Market Insights Inc.
The average electric vehicle uses 15–20 motor drivers across a variety of systems, including traction motors, power steering, and brake systems, compared with eight to 12 units in internal-combustion-engine vehicles, and industrial robots typically use six to eight motor drivers for joint articulation, positioning, and end-effector control, according to Emergen Research.
The motor driver IC market is expected to grow at a compound annual growth rate of 6.8% from 2024 to 2034, according to Emergen Research, driven by industrial automation, EVs, and smart consumer electronics. Part of this growth is attributed to Industry 4.0 initiatives that drive the demand for more advanced motor control solutions, including the use of artificial intelligence and machine-learning algorithms in motor control systems.
Emergen Research also reports that silicon carbide and gallium nitride (GaN) materials are gaining traction in high-power applications thanks to their higher switching characteristics compared with silicon-based solutions.
Other trends include the growing demand for precise motor control, the integration of advanced sensorless control, and low electromagnetic interference (EMI), according to the market research firms.
Here are a few examples of new motor drivers for industrial and automotive applications, as well as development solutions such as software, reference designs, and evaluation kits that help ease the development of motor drives.
Motor drivers
Melexis recently launched the MLX81339, a configurable motor driver with a pulse-width modulation (PWM)/serial interface for a range of industrial applications. This motor driver IC is designed for compact, three-phase brushless DC (BLDC) and stepper motor control up to 40 W in industrial applications such as fans, pumps, and positioning systems.
The motor driver targets a range of markets, including smart industrial and consumer sectors, in applications such as positioning motors, thermal valves, robotic actuators, residential and industrial ventilation systems, and dishwashing pumps. The MLX81339 is also qualified for automotive fan and blower applications.
A key feature of this motor control IC is the programmable flash memory, which enables full application customization. Designed for three-phase BLDC or bipolar stepper motors, the driver uses silent FOC and delivers reliable startup, stopping, and precise speed control from low to maximum speed, Melexis said.
The MLX81339 motor driver supports control up to 20 W at 12 V and 40 W at 24 V, integrating a three-phase driver with a configurable current limit up to 3 A, as well as under-/overvoltage, overcurrent, and overtemperature protection. Other key specifications include a wide supply voltage range of 6 V to 26 V and an operating temperature range of –40°C to 125°C (junction temperature up to 150°C).
The MLX81339 also incorporates 8× general-purpose I/Os and several interfaces, including PWM/FG, I2C, UART, and SPI, for easy integration into both legacy and smart systems. It also supports both sensor-based and sensorless control.
Melexis offers the Melexis StartToRun web tool to accelerate motor driver prototyping, eliminating engineering tasks by generating configuration files based on simple user inputs. In addition to the motor and electrical parameters, the tool includes prefilled mechanical values.
The MLX81339, housed in QFN24 and SO8-EP packages, is available now. A code-free and configurable MLX80339 for rapid deployment will be released in the first quarter of 2026.
Melexis’s MLX81339 motor driver (Source: Melexis)
Earlier this year, STMicroelectronics introduced the VNH9030AQ, an integrated full-bridge DC motor driver with high-side and low-side MOSFET gate drivers, real-time diagnostics, and protection against overvoltage transients, undervoltage, short-circuit conditions, and cross-conduction, aimed at reducing design complexity and cost. Delivering greater flexibility to system designers, the MOSFETs can be configured either in parallel or in series, allowing them to be used in systems with multiple motors or to meet other specific requirements.
The integrated non-dissipative current-sense circuitry monitors the current flowing through the device to distinguish each motor phase, contributing to the driver’s efficiency. The standby power consumption is very low over the full operating temperature range, easing use in zonal controller platforms, ST said.
This DC motor driver can be used in a range of automotive applications, including functional safety. The driver also provides a dedicated pin for real-time output status, easing the design into functional-safety and general-purpose low-/mid-power DC-motor-driven applications while reducing the requirements for external circuitry.
With an RDS(on) of 30 mΩ per leg, the VNH9030AQ can handle mid- and low-power DC-motor-driven applications such as door-control modules, washer pumps, powered lift gates, powered trunks, and seat adjusters.
The driver is part of a family of devices that leverage ST’s latest VIPower M0-9 technology, which permits monolithic integration of power and logic circuitry. All products, including the VNH9030AQ, are housed in a 6 × 6-mm, thermally enhanced triple-pad QFN package. The package is designed for optimal underside cooling and shares a common pinout to ease layout and software reuse.
The VNH9030AQ is available now. ST also offers a ready-to-use VNH9030AQ evaluation board and the TwisterSim dynamic electro-thermal simulator to simulate the motor driver’s behavior under various operating conditions, including electrical and thermal stresses.
STMicroelectronics’ VNH9030AQ full-bridge DC motor driver (Source: STMicroelectronics)
Targeting both automotive and industrial applications, the Qorvo Inc. 160-V three-phase BLDC motor driver also aims to reduce solution size, design time, and cost with an integrated power manager and configurable analog front end (AFE). The ACT72350 160-V gate driver can replace as many as 40 discrete components in a BLDC motor control system, and the configurable AFE enables designers to configure their exact sensing and position detection requirements.
The ACT72350 includes a configurable power manager with an internal DC/DC buck converter and LDOs to support internal components and serve as an optional supply for the host microcontroller (MCU). In addition, by offering a wide, 25-V to 160-V input range, designers can reuse the same design for a variety of battery-operated motor control applications, including power and garden tools, drones, EVs, and e-bikes.
The ACT72350 provides the analog circuitry needed to implement a BLDC motor control system and can be paired with a variety of MCUs, Qorvo said. It provides high efficiency via programmable propagation delay, precise current sensing, and BEMF feedback, as well as differentiated features for safety-critical applications.
The SOI-based motor driver is available now in a 9.0 × 9.0-mm, 57-pin QFN package. An evaluation kit is available, along with a model of the ACT72350 in Qorvo’s QSPICE circuit simulation software at www.qspice.com.
Qorvo’s ACT72350 three-phase BLDC motor driver (Source: Qorvo Inc.)
Software, reference designs, and evaluation kits
Motor driver IC and power semiconductor manufacturers also deliver software suites, reference designs, and development kits to simplify motor drive design and development. A few examples include Power Integrations’ MotorXpert software, Efficient Power Conversion Corp.’s (EPC’s) GaN-based motor driver reference design, and a modular motor driver evaluation kit developed by Würth Elektronik and Nexperia.
Power Integrations continues to enhance its MotorXpert software for its BridgeSwitch and BridgeSwitch-2 half-bridge motor driver ICs. The latest version, MotorXpert v3.0, enables FOC without shunts and their associated sensors. It also adds support for advanced modulation schemes and features V/F and I/F control to ensure startup under any load condition.
Designed to simplify single- and three-phase sensorless motor drive designs, the v3.0 release adds a two-phase modulation scheme, suited for high-temperature environments, reducing inverter switching losses by 33%, according to the company. It allows developers to trade off the temperature of the inverter versus torque ripple, particularly useful in applications such as hot water circulation pumps, reducing heat-sink requirements and enclosure cost, the company said.
The software also delivers a five-fold improvement to the waveform visualization tool and an enhanced zoom function, providing more data for motor tuning and debugging. The host-side application includes a graphical user interface with Power Integrations’ digital oscilloscope visualization tool to make it easy to design and configure parameters and operation and to simplify debugging. Also easing development are parameter tool tips and a tuning assistant.
The software suite is MCU-agnostic and includes a porting guide to simplify deployment with a range of MCUs. It is implemented in the C language to MISRA standards.
Power Integrations said development time is greatly reduced by the included single- and three-phase code libraries with sensorless support, reference designs, and other tools such as a power supply design and analysis tool. Applications include air conditioning fans, refrigerator compressors, fluid pumps, washing machine and dryer drums, range hoods, industrial fans, and heat pumps.
Power Integrations’ MotorXpert software suite (Source: Power Integrations)
EPC claims the first GaN-based motor driver reference design for humanoid robots with the launch of the EPC91118 reference design for motor joints. The EPC91118 delivers up to 15 ARMS per phase from a wide input DC voltage, ranging from 15 V to 55 V, in an ultra-compact, circular form factor.
The reference design is optimized for space-constrained and weight-sensitive applications such as humanoid limbs and drone propulsion. It shrinks inverter size by 66% versus silicon, EPC said, and eliminates the need for electrolytic capacitors due to the GaN ICs and high-frequency operation. The high switching frequency instead allows the use of smaller MLCCs.
The reference design is centered around the EPC23104 ePower stage IC, a monolithic GaN IC that enables higher switching frequencies and reduced losses. The power stage is combined with current sensing, a rotor shaft magnetic encoder, an MCU, RS-485 communications, and 5-V and 3.3-V power supplies on a single board that fits within a 32-mm-diameter footprint (55-mm-diameter outer frame; 32-mm-diameter inverter).
EPC’s EPC91118 motor driver reference design (Source: Efficient Power Conversion Corp.)
Aimed at faster development of motor controllers, Würth Elektronik and Nexperia have collaborated on the NEVB-MTR1-KIT1 modular motor driver evaluation kit. The kit can be configured for use in under two minutes and is powered via USB-C.
The companies highlight the modularity of the evaluation board that can be adapted to a wide range of motors, control algorithms, and test setups, enabling faster optimization as well as faster iterations and testing. With an open architecture, the kit enables MCUs and components to be easily exchanged, and the open-source firmware allows developers to quickly adapt and develop motor controllers under real-world conditions, according to the companies.
The kit includes a three-phase inverter board, a motor controller board, an MCU development board, pre-wired motor connections, and a BLDC motor. A key feature is the high-current connectors integrated by Würth Elektronik, which enable evaluations up to 1 kW at 48 V.
The demands on dynamics, fault tolerance, and energy efficiency in drive systems are rising steadily, resulting in increasingly more complex motor control system design, according to the companies. The selection of the right switches (MOSFETs and IGBTs), gate drivers, and protection circuits is critical to ensure lower switching losses, better thermal behavior, and stable dynamics.
The behavior of the components must be carefully validated under real-world conditions, taking into consideration factors such as parasitic elements, switching transients, and EMI, according to the companies. The modular kit helps with this by enabling different motors and control concepts to be evaluated.
The Würth Elektronik and Nexperia NEVB-MTR1-KIT1 motor drive evaluation kit (Source: Würth Elektronik)
The post Motor drivers advance with new features appeared first on EDN.




