Does (wearing) an Oura (smart ring) a day keep the doctor away?

Before diving into my on-finger impressions of Oura’s Gen3 smart ring, as I’d promised I’d do back in early September, I thought I’d start off by revisiting some of the business-related topics I mentioned in that initial post in the series. First off, I mentioned at the end of that post that Oura had just obtained a favorable final judgment from the United States International Trade Commission (ITC) finding that both China-based RingConn and India-based Ultrahuman had infringed on its patent portfolio. In the absence of licensing agreements or other compromises, both Oura competitors would be banned from shipping their products to, and selling them in, the US after a final 60-day review period ended on October 21, although retailer partners could continue to sell their existing inventory until it was depleted.
Product evolutions and competition developments
I’m writing these words 10 days later, on Halloween, and there’ve been some interesting developments. I’d intentionally waited until after October 21 in order to see how both RingConn and Ultrahuman would react, as well as to assess whether patent challenges would pan out. As for Ultrahuman, a blog post published shortly before the deadline (and updated the day after) made it clear that the company wasn’t planning on caving:
- A new ring design is already in development and will launch in the U.S. as soon as possible.
- We’re actively seeking clarity on U.S. manufacturing from our Texas facility, which could enable a “Made in USA” Ring AIR in the near future.
- We also eagerly await the U.S. Patent and Trademark Office’s review of the validity of Oura’s ‘178 patent, which it acquired in 2023, and is central to the ITC ruling. A decision is expected in December.
To wit, per a screenshot I captured the day after the deadline, Wednesday, October 22, sales through the manufacturer’s website to US customers had ceased.

And surprisingly, inventory wasn’t listed as available for sale on Amazon’s website, either.
RingConn, conversely, took a different tack. When I checked on October 22, the company was still selling its products to US customers both from its own website and from Amazon’s:

This situation baffled me until I hit up the company subreddit and saw the following:
Dear RingConn Family,
We’d like to share some positive news with you: RingConn, a leading smart ring innovator, has reached a settlement with ŌURA regarding a patent dispute. Under the terms of the agreement, RingConn’s software and hardware products will remain available in the U.S. market, without affecting its market presence.
See the company’s Reddit post for the rest of the message. And here’s the official press release.
Secondly, as I’d noted in my initial coverage:
One final factor to consider, which I continue to find both surprising and baffling, is the fact that none of the three manufacturers I’ve mentioned here seems to support having more than one ring actively associated with an account (thereby cloud-logging and archiving data) at the same time. To press a second ring into service, you first need to manually delete the original one from your account. The lack of multi-ring support is a frequent cause of complaints on Reddit and elsewhere, from folks who want to accessorize with multiple smart rings just as they do with normal rings, varying color and style to match outfits and occasions. And the fiscal benefit to the manufacturers of such support is intuitively obvious, yes?
It turns out I just needed to wait a few weeks. On October 1, Oura announced that multiple Oura Ring 4 styles would soon be supported under a single account. Quoting the press release, “Pairing and switching among multiple Oura Ring 4 devices on a single account will be available on iOS starting Oct. 1, 2025, and on Android starting Oct. 20, 2025.” That said, a crescendo of complaints on Reddit and elsewhere suggests an implementation delay; I’m 11 days past October 20 as I write this and haven’t seen the promised Android app update yet, and at least some iOS users have been waiting a month at this point. Oura PR told me that I should be up and running by November 5; I’ll follow up in the comments as to whether this actually happened.
Charging options
That same day, by the way, Oura also announced its own branded battery-inclusive charger case, an omission that I’d earlier noted versus competitor RingConn:

That said, again quoting from the October 1 press release (with bolded emphasis mine), the “Oura Ring 4 Charging Case is $99 USD and will be available to order in the coming months.” For what it’s worth, the $28.99 (as I write these words) Doohoeek charging case for my Gen3 Horizon:

is working like a charm:


Behind it, by the way, is the upgraded Doohoeek $33.29 charging case for my Oura Ring 4, whose development story (which I got straight from the manufacturer) was not only fascinating in its own right but also gave me insider insight into how Oura has evolved its smart ring charging scheme over time. More about that soon, likely next month.

And here’s my Gen3 on the factory-supplied, USB-C-fed standard charger, again with its Ring 4 sibling behind it:

As for the ring itself, here’s what it looks like on my left index finger, with my wedding band two digits over from it on the same hand:


And here again are all three rings I’ve covered in in-depth writeups to date: the Oura Gen3 Horizon at left, Ultrahuman Ring AIR in the middle and RingConn Gen 2 at right:

Like RingConn’s product:

both the Heritage:

and my Horizon variant of the Oura Gen3:


include physical prompting to achieve and maintain proper placement: sensor-inclusive “bump” guides on both sides of the inside of the ring’s underside, which the Oura Ring 4 notably dispenses with:

I’ve already shown you what the red glow of the Gen3 intermediary SpO2 (oxygen saturation) sensor looks like when in operation, specifically when I’m able to snap a photo of it soon enough after waking to catch it still in action before it discerns that I’ve stirred and turns off:

And here’s what the two green-color pulse rate sensors, one on either side of their SpO2 sibling:

look like in action:

Generally speaking, the Oura Gen3 feels a lot like the Ultrahuman Ring AIR; both drop 15-20% of battery charge every 24 hours, leading to a sub-week operating life between recharges. That said, I will give Oura well-deserved kudos for its software user interface, which is notably more informative, more intuitive, and generally easier to use than its RingConn and Ultrahuman counterparts. Then again, Oura’s been around the longest and has the largest user base, so it’s had more time (and more feedback) to fine-tune things. And cynically speaking, given Oura’s $5.99/month or $69.99/year subscription fee, versus its competitors’ free offerings, it’d better be better!
Software insights
In closing, and in fairness, regarding that subscription, it’s not strictly required to use an Oura smart ring. That said, the information supplied without it:

is a pale subset of the norm:

What I’m showing in the overview screen images is a fraction of the total information captured and reported, but it’s all well-organized and intuitive. And as you can see in that last one, the Oura smart ring is adept at sensing even brief catnaps.
With that, and as I’ve already alluded to, I now have an Oura Ring 4 on-finger—two of them, in fact, one of which I’ll eventually be tearing down—which I aspire to write up shortly, sharing my impressions versus both its Gen3 predecessor and its competitors. Until then, I, as always, welcome your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- RingConn: Smart, svelte, and econ(omical)
- Can a smart ring make me an Ultrahuman being?
- Smarty Ring Fails to Impress
- Smart ring allows wearer to “air-write” messages with a fingertip
Inside the battery: A quick look at internal resistance

Ever wondered why a battery that reads full voltage still struggles to power your device? The answer often lies in its internal resistance. This hidden factor affects how efficiently a battery delivers current, especially under load.
In this post, we will briefly examine the basics of internal resistance—and why it’s a critical factor in real-world performance, from handheld flashlights to high-power EV drivetrains.
What’s internal resistance and why it matters
Every battery has some resistance to the flow of current within itself—this is called internal resistance. It’s not a design flaw, but a natural consequence of the materials and construction. The electrolyte, electrodes, and even the connectors all contribute to it.
Internal resistance causes voltage to drop when the battery delivers current. The higher the current draw, the more noticeable the drop. That is why a battery might read 1.5 V at rest but dip below 1.2 V under load—and why devices sometimes shut off even when the battery seems “full.”
Here is what affects it:
- Battery type: Alkaline, lithium-ion, and NiMH cells all have different internal resistances.
- Age and usage: Resistance increases as the battery wears out.
- Temperature: Cold conditions raise resistance, reducing performance.
- State of charge: A nearly empty battery often shows higher resistance.
Building on that, internal resistance gradually increases as batteries age. This rise is driven by chemical wear, electrode degradation, and the buildup of reaction byproducts. As resistance climbs, the battery becomes less efficient, delivers less current, and shows more voltage drop under load—even when the resting voltage still looks healthy.
Digging a little deeper—focusing on functional behavior under load—internal resistance is not just a single value; it’s often split into two components. Ohmic resistance comes from the physical parts of the battery, like the electrodes and electrolyte, and tends to stay relatively stable.
Polarization resistance, on the other hand, reflects how the battery’s chemical reactions respond to current flow. It’s more dynamic, shifting with temperature, charge level, and discharge rate. Together, these resistances shape how a battery performs under load, which is why two batteries with identical voltage readings might behave very differently in real-world use.
Internal resistance in practice
Internal resistance is a key factor in determining how much current a battery can deliver. When internal resistance is low, the battery can supply a large current. But if the resistance is high, the current it can provide drops significantly. Also, the higher the internal resistance, the greater the energy loss—and this loss manifests as heat. That heat not only wastes energy but also accelerates the battery’s degradation over time.
The figure below illustrates a simplified electrical model of a battery. Ideally, internal resistance would be zero, enabling maximum current flow without energy loss. In practice, however, internal resistance is always present and affects performance.

Figure 1 Illustration of a battery’s internal configuration highlights the presence of internal resistance. Source: Author
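To make the model concrete, here is a minimal Python sketch of that ideal-EMF-plus-series-resistance view. The 1.5-V EMF and 0.5-Ω internal resistance are illustrative assumptions, not measurements of any particular cell.

```python
# Quick illustration of the simplified model in Figure 1: an ideal EMF in series
# with an internal resistance. Values below are assumptions for illustration only.

def terminal_voltage(emf, r_internal, i_load):
    """Terminal voltage (V) of the cell while delivering i_load amperes."""
    return emf - i_load * r_internal

emf, r_int = 1.5, 0.5             # a hypothetical alkaline cell
for i_load in (0.01, 0.1, 0.5, 1.0):
    v = terminal_voltage(emf, r_int, i_load)
    print(f"{i_load:4.2f} A load -> {v:.2f} V at the terminals")
```

Even this crude model reproduces the behavior described earlier: a cell that reads full at rest sags noticeably once a heavy load is applied.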
Here is a quick side note regarding resistance breakdown. Focusing on material-level transport mechanisms, battery internal resistance comprises two primary contributors: electronic resistance, driven by electron flow through conductive paths, and ionic resistance, governed by ion transport within the electrolyte.
The total effective resistance reflects their combined influence, along with interfacial and contact resistances. Understanding this layered structure is key to diagnosing performance losses and carrying out design improvements.
In today’s EVs, elevated internal resistance hampers battery performance by increasing heat generation during acceleration and fast charging, ultimately reducing driving range and accelerating cell degradation.
Fortunately, several techniques are available for measuring a battery’s internal resistance, each suited to different use cases and levels of diagnostic depth. Common methods include direct current internal resistance (DCIR), alternating current internal resistance (ACIR), and electrochemical impedance spectroscopy (EIS).
And there is a two-tier variation of the standard DCIR technique, which applies two sequential discharge loads with distinct current levels and durations. The battery is first discharged at a low current for several seconds, followed by a higher current for a shorter interval. Resistance values are calculated using Ohm’s law, based on the voltage drops observed during each load phase.
Analyzing the voltage response under these conditions can reveal more nuanced resistive behavior, particularly under dynamic loads. However, the results remain strictly ohmic and do not provide direct information about the battery’s state of charge (SoC) or capacity.
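As a rough illustration of the two-tier idea, here is a short sketch that computes a per-phase Ohm’s-law resistance, as described above, plus the common two-point (differential) variant. The exact formulation and the example readings are my assumptions, not taken from a specific standard or datasheet.

```python
# Minimal sketch of a two-tier DCIR calculation. A per-phase Ohm's-law value is
# computed for each load step, plus the common differential (two-point) estimate.
# All voltage and current readings below are made-up example values.

def dcir_two_tier(v_rest, v_low, i_low, v_high, i_high):
    """Return (R at low load, R at high load, differential R) in ohms."""
    r_low = (v_rest - v_low) / i_low              # drop during the low-current phase
    r_high = (v_rest - v_high) / i_high           # drop during the high-current phase
    r_diff = (v_low - v_high) / (i_high - i_low)  # differential two-point estimate
    return r_low, r_high, r_diff

# Example: 3.70 V at rest, 3.68 V at 0.5 A (several seconds), 3.64 V at 2.0 A (shorter pulse)
print(dcir_two_tier(3.70, 3.68, 0.5, 3.64, 2.0))   # ~(0.040, 0.030, 0.027) ohm
```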
Many branded battery testers, such as some product series from Hioki, apply a constant AC current at a measurement frequency of 1 kHz and determine the battery’s internal resistance by measuring the resulting voltage with an AC voltmeter (AC four-terminal method).

Figure 2 The Hioki BT3554-50 employs the AC-IR method to achieve high-precision internal resistance measurement. Source: Hioki
The 1,000-hertz (1 kHz) ohm test is a widely used method for measuring internal resistance. In this approach, a small 1-kHz AC signal is applied to the battery, and resistance is calculated using Ohm’s law based on the resulting voltage-to-current ratio.
It’s important to note that AC and DC methods often yield different resistance values due to the battery’s reactive components. Both readings are valid—AC impedance primarily reflects the instantaneous ohmic resistance, while DC measurements capture additional effects such as charge transfer and diffusion.
Notably, the DC load method remains one of the most enduring—and nostalgically favored—approaches for measuring a battery’s internal resistance. Despite the rise of impedance spectroscopy and other advanced techniques, its simplicity and hands-on familiarity continue to resonate with seasoned engineers.
It involves briefly applying a load—typically for a second or longer—while measuring the voltage drop between the open-circuit voltage and the loaded voltage. The internal resistance is then calculated using Ohm’s law by dividing the voltage drop by the applied current.
A quick calculation: To estimate a battery’s internal resistance, you can use a simple voltage-drop method when the open-circuit voltage, loaded voltage, and current draw are known. For example, if a battery reads 9.6 V with no load and drops to 9.4 V under a 100-mA load:
Internal resistance = (9.6 V − 9.4 V) / 0.1 A = 2 Ω
This method is especially useful in field diagnostics, where direct resistance measurements may not be practical, but voltage readings are easily obtained.
In simplified terms, internal resistance can be estimated using several proven techniques. However, the results are influenced by the test method, measurement parameters, and environmental conditions. Therefore, internal resistance should be viewed as a general diagnostic indicator—not a precise predictor of voltage drop in any specific application.
Bonus blueprint: A closing hardware pointer
For internal resistance testing, consider the adaptable e-load concept shown below. It forms a simple, reliable current sink for controlled battery discharge, offering a practical starting point for further refinement. As you know, the DC load test method allows an electronic load to estimate a battery’s internal resistance by observing the voltage drop during a controlled current draw.

Figure 3 The blueprint presents an electronic load concept tailored for internal resistance measurement, pairing a low-RDS(on) MOSFET with a precision load resistor to form a controlled current sink. Source: Author
Now it’s your turn to build, tweak, and test. If you’ve got refinements, field results, or alternate load strategies, share them in the comments. Let’s keep the circuit conversation flowing.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- All About Batteries
- What Causes Batteries to Fail?
- Power Consumption and Battery Life Analysis
- Resistivity is the key to measuring electrical resistance
- Cell balancing maximizes the capacity of multi-cell batteries
NB-IoT module adds built-in geolocation capabilities

The ST87M01-1301 NB-IoT wireless module from ST provides narrowband cellular connectivity along with both GNSS and Wi-Fi–based positioning for outdoor and indoor geolocation. Its integrated GNSS receiver enables precise location tracking using GPS constellations, while the Wi-Fi positioning engine delivers fast, low-power indoor location services by scanning nearby 802.11b access points and leveraging third-party geocoding providers.

As the latest member of the ST87M01 series of NB-IoT (LTE Cat NB2) industrial modules, this variant supports multi-frequency bands with extended multi-regional coverage. Its compact, low-power design makes it well suited for smart IoT applications such as asset tracking, environmental monitoring, smart metering, and remote healthcare. A 10.6×12.8-mm, 51-pin LGA package further enables miniaturization in space-constrained designs.
ST provides an evaluation kit that includes a ready-to-use Conexa IoT SIM card and two SMA antennas, helping developers quickly prototype and validate NB-IoT connectivity in real-world conditions. This is supported by an expanding ecosystem featuring the Easy-Connect software library and design examples.
Boost controller powers brighter automotive displays

A 60-V boost controller from Diodes, the AL3069Q packs four 80-V current-sink channels for driving LED backlights in automotive displays. Its adaptive boost-voltage control allows operation from a 4.5-V to 60-V input range—covering common automotive power rails at 12 V, 24 V, and 48 V—and its switching frequency is adjustable from 100 kHz to 1 MHz.

The AL3069Q’s four current-sink channels are set using an external resistor, providing typical ±0.5% current matching between channels and devices to ensure uniform brightness across the display. Each channel delivers 250 mA continuous or up to 400 mA pulsed, enabling support for a range of display sizes and LED panels up to 32-inch diagonal, such as those used in infotainment systems, instrument clusters, and head-up displays. PWM-to-analog dimming, with a minimum duty cycle of 1/5000 at 100 Hz, improves brightness control while minimizing LED color shift.
Diodes’ AL3069Q offers robust protection and fault diagnostics, including cycle-by-cycle current limit, soft-start, UVLO, programmable OVP, OTP, and LED open/short detection. Additional safeguards cover sense-resistor, Schottky-diode, inductor, and VOUT faults, with a dedicated pin to signal any fault condition.
The automotive-compliant controller costs $0.54 each in 1000-unit quantities.
Hybrid device elevates high-energy surge protection

TDK’s G series integrates a metal oxide varistor and a gas discharge tube into a single device to provide enhanced surge protection. The two elements are connected in series, combining the strengths of both technologies to deliver greater protection than either component can offer on its own. This hybrid configuration also reduces leakage current to virtually zero, helping extend the overall lifetime of the device.

The G series comprises two leaded variants—the G14 and G20—with disk diameters of 14 mm and 20 mm, respectively. G14 models support AC operating voltages from 50 V to 680 V, while G20 versions extend this range to 750 V. They can handle maximum surge currents of 6,000 A (G14) and 10,000 A (G20) for a single 8/20-µs pulse, and absorb up to 200 J (G14) or 490 J (G20) of energy.
Operating over a temperature range of –40 °C to +105 °C, the G series is suitable for use in power supplies, chargers, appliances, smart metering, communication systems, and surge protection devices. Integrating both protection elements into a single, epoxy-coated 2-pin package simplifies design and reduces board space compared to using discrete components.
Datasheets for the G14 series (ordering code B72214G) and the G20 series (B72220G) are available from TDK.
Power supplies enable precise DC testing

R&S has launched the NGT3600 series of DC power supplies, delivering up to 3.6 kW for a wide range of test and measurement applications. This versatile line provides clean, stable power with low voltage and current ripple and noise. With a resolution of 100 µA for current and 1 mV for voltage, as well as adjustable output voltages up to 80 V, the supplies offer both precision and flexibility.

The dual-channel NGT3622 combines two fully independent 1800-W outputs in a single compact instrument. Its channels can be connected in series or parallel, allowing either the voltage or the current to be doubled. For applications requiring even more power, up to three units can be linked to provide as much as 480 V or 300 A across six channels. The NGT3622 supports current and voltage testing under load, efficiency measurements, and thermal characterization of components such as DC/DC converters, power supplies, motors, and semiconductors.
Engineers can use the NGT3600 series to test high-current prototypes such as base stations, validate MPPT algorithms for solar inverters, and evaluate charging-station designs. In the automotive sector, the series supports the transition to 48-V on-board networks by simulating these networks and powering communication systems, sensors, and control units during testing.
All models in the NGT3600 series are directly rack-mountable with no adapter required. They will be available beginning January 13, 2026, from R&S and selected distribution partners.
Space-ready Ethernet PHYs achieve QML Class P

Microchip’s two radiation-tolerant Ethernet PHY transceivers are the company’s first devices to earn QML Class P/ESCC 9000P qualification. The single-port VSC8541RT and quad-port VSC8574RT support data rates up to 1 Gbps, enabling dependable data links in mission-critical space applications.

Achieving QML Class P/ESCC 9000P certification involves rigorous testing—such as Total Ionizing Dose (TID) and Single Event Effects (SEE) assessments—to verify that devices tolerate the harsh radiation conditions of space. The certification also ensures long-term availability, traceability, and consistent performance.
The VSC8541RT and VSC8574RT withstand 100 krad(Si) TID and show no single-event latch-up at LET levels below 78 MeV·cm²/mg at 125 °C. The VSC8541RT integrates a single Ethernet copper port supporting MII, RMII, RGMII, and GMII MAC interfaces, while the VSC8574RT includes four dual-media copper/fiber ports with SGMII and QSGMII MAC interfaces. Their low power consumption and wide operating temperature ranges make them well-suited for missions where thermal constraints and power efficiency are key design considerations.
Active current mirror

Current mirrors are a common and useful circuit function, and sometimes high precision is essential. The challenge of making current mirrors precise has spawned a long list of tricks and techniques: matched transistors, monolithic transistor multiples, emitter degeneration, and fancier topologies with extra transistors (Wilson, cascode, etc.).
But when all else fails and precision just can’t suffer any compromise, Figure 1 shows the nuclear option. Just add a rail-to-rail I/O (RRIO) op-amp!
Figure 1 An active current sink mirror. Assuming resistor equality and negligible A1 offset error, A1 feedback forces Q1 to maintain accurate current sink I/O equality I2 = I1.
The theory of operation of the active current mirror (ACM) couldn’t be more straightforward. Vr, which is equal to I1*R, is applied to A1’s noninverting input, forcing A1 to drive Q1 to conduct I2 such that I2*R = I1*R.
Therefore, if the resistors are equal, if A1’s accuracy-limiting parameters (offset voltage, gain-bandwidth, bias and offset currents, etc.) are adequate, and if Q1 doesn’t saturate, then I2 can equal I1 just as precisely as you like.
Obviously, Vr must be >> Voffset, and A1’s output span must be >> Q1’s threshold even after subtracting Vr.
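To put rough numbers on those accuracy limits, here is a small back-of-the-envelope sketch combining resistor tolerance and op-amp offset into a worst-case mirror error. The 1-mA, 1-kΩ, 0.1%, and 25-µV figures are illustrative assumptions rather than values from the schematic.

```python
# Back-of-the-envelope worst-case error for the active current mirror.
# All numeric inputs below are illustrative assumptions, not schematic values.

def acm_worst_case_error(i1, r, r_tol, v_os):
    """Approximate worst-case fractional error of I2 relative to I1.

    i1    : programmed current, A
    r     : nominal value of each mirror resistor, ohms
    r_tol : fractional resistor tolerance (0.001 = 0.1%)
    v_os  : op-amp input offset voltage, V
    """
    vr = i1 * r                # sense voltage at A1's noninverting input
    ratio_error = 2 * r_tol    # two resistors; tolerances can add
    offset_error = v_os / vr   # Vos appears in series with Vr
    return ratio_error + offset_error

err = acm_worst_case_error(i1=1e-3, r=1e3, r_tol=0.001, v_os=25e-6)
print(f"Worst-case mirror error: {err * 100:.3f}%")   # ~0.2%
```

The takeaway matches the text: with Vr comfortably larger than the offset voltage, the resistor ratio, not the op-amp, sets the precision floor.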
Substitute a PFET for Figure 1’s NFET, and a current-sourcing mirror results, as shown in Figure 2.

Figure 2 An active current source mirror. This is identical to Figure 1, except this Q1 is a PFET and the polarities are swapped.
Active current mirror (ACM) precision can be better than that of easily available sense resistors. So, a bit of post-assembly trimming, as illustrated in Figure 3, might be useful.

Figure 3 If adequately accurate resistors aren’t handy, a trimmer pot might be useful for post-assembly trimming.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A current mirror reduces Early effect
- A two-way mirror—current mirror that is
- A two-way Wilson current mirror
- Current mirror improves PWM regulator’s performance
Charting the course for a truly multi-modal device edge

The world is witnessing an artificial intelligence (AI) tsunami. While the initial waves of this technological shift focused heavily on the cloud, a powerful new surge is now building at the edge. This rapid infusion of AI is set to redefine Internet of Things (IoT) devices and applications, from sophisticated smart homes to highly efficient industrial environments.
This evolution, however, has created significant fragmentation in the market. Many existing silicon providers have adopted a strategy of bolting on AI capabilities to legacy hardware originally designed for their primary end markets. This piecemeal approach has resulted in inconsistent performance, incompatible toolchains, and a confusing landscape for developers trying to deploy edge AI solutions.
To unlock the transformative potential of edge AI, the industry must pivot. We must move beyond retrofitted solutions and embrace a purpose-built, AI-native approach that integrates hardware and software right from the foundational design.
The AI-native mandate
“AI-native” is more than a marketing term; it’s a fundamental architectural commitment where AI is the central consideration, not an afterthought. Here’s what it looks like.
- The hardware foundation: Purpose-built silicon
As IoT workloads evolve to handle data across multiple modalities, from vision and voice to audio and time series, the underlying silicon must present itself as a flexible, secure platform capable of efficient processing. Core to such designs are scalable NPU architectures, supported by highly integrated vision, voice, video, and display pipelines.
- The software ecosystem: Openness and portability
To accelerate innovation and combat fragmentation for IoT AI, the industry needs to embrace open standards. While the ‘language’ of model formats and frameworks is becoming more industry-standard, the ecosystem of edge AI compilers is largely being built from vendor-specific and proprietary offerings. Efficient execution of AI workloads is heavily dependent on optimized data movement and processing across scalar, vector, and matrix accelerator domains.
By open-sourcing compilers, companies encourage faster innovation through broader community adoption, providing flexibility to developers and ultimately facilitating more robust device-to-cloud developer journeys. Synaptics is encouraging broader adoption from the community by open-sourcing edge AI tooling and software, including Synaptics’ Torq edge AI platform, developed in partnership with Google Research.
- The dawn of a new device landscape
AI-native silicon will fuel the creation of entirely new device categories. We are currently seeing the emergence of a new class of devices truly geared around AI, such as wearables—smart glasses, smartwatches, and wristbands. Crucially, many of these devices are designed to operate without being constantly tethered to a smartphone.
Instead, they soon might connect to a small, dedicated computing element, perhaps carried in a pocket like a puck, providing intelligence and outcomes without requiring the user to look at a traditional phone display. This marks the beginning of a more distributed intelligence ecosystem.
The need for integrated solutions
This evolving landscape is complex, demanding a holistic approach. Intelligent processing capabilities must be tightly coupled with secure, reliable connectivity to deliver a seamless end-user experience. Connected IoT devices need to leverage a broad range of technologies from the latest Wi-Fi and Bluetooth standards to Thread and ZigBee.
Chip, device and system-level security are also vital, especially considering multi-tenant deployments of sensitive AI models. For intelligent IoT devices, particularly those that are battery-powered or wearable, security must be maintained consistently as the device transitions in and out of different power states. The combination of processing, security, and power must all work together effectively.
Navigating this new era of the AI edge requires a fundamental shift in mindset, a change from retrofitting existing technology to building products with a clear, AI-first mission. Take the case of the Synaptics SL2610 processor, one of the industry’s first AI-native, transformer-capable processors designed specifically for the edge. It embodies the core hardware and software principles needed for the future of intelligent devices, running on a Linux platform.
By embracing purpose-built hardware, rallying around open software frameworks, and maintaining a strategy of self-reliance and strategic partnerships, the industry can move past the current market noise and begin building the next generation of truly intelligent, powerful, and secure devices.
Mehul Mehta is a Senior Director of Product Marketing at Synaptics Inc., where he is responsible for defining the Edge AI IoT SoC roadmap and collaborating with lead customers. Before joining Synaptics, Mehul held leadership roles at DSP Group spanning product marketing, software development, and worldwide customer support.
Related Content
- Edge AI: Bringing Intelligence Closer to the Source
- An edge AI processor’s pivot to the open-source world
- Edge AI powers the next wave of industrial intelligence
- Synaptics, Google partnership targets edge AI for the IoT
- How Advanced Packaging is Unleashing Possibilities for Edge AI
Infused concrete yields greatly improved structural supercapacitor

A few years ago, a team at MIT researched and published a paper on using concrete as an energy-storage supercapacitor (MIT engineers create an energy-storing supercapacitor from ancient materials), also called an ultracapacitor: a device that stores energy in electric fields rather than via electrochemical reactions. Now, the same group has developed a version with ten times the storage per volume of that earlier one, using concrete infused with various materials and electrolytes such as (but not limited to) nano-carbon black.
Concrete is the world’s most common building material and has many virtues, including basic strength, ruggedness, and longevity, and few restrictions on final shape and form. The idea of also being able to use it as an almost-free energy storage system is very attractive.
By combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, their electron-conducting carbon concrete (EC3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy, Figure 1.
Figure 1 As with most batteries, schematic diagram and physical appearance are simple, and it’s the details that are the challenge. Source: Massachusetts Institute of Technology
This greatly improved energy density was made possible by their deeper understanding of how the nanocarbon black network inside EC3 functions and interacts with electrolytes, as determined using some sophisticated instrumentation. By using focused ion beams for the sequential removal of thin layers of the EC3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the joint EC³ Hub and MIT Concrete Sustainability Hub team was able to reconstruct the conductive nanonetwork at the highest resolution yet. The analysis showed that the network is essentially a fractal-like “web” that surrounds EC3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system.
A cubic meter of this version of EC3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy, which is enough to power an actual modest-sized refrigerator for a day. Via extrapolation (always the tricky aspect of these investigations), they say that 45 cubic meters of EC3 with an energy density of 0.22 kWh/m3—a typical house-sized foundation—would have enough capacity to store about 10 kilowatt-hours of energy, the average daily electricity usage for a household, Figure 2.

Figure 2 These are just a few of the many performance graphs that the team developed. Source: Massachusetts Institute of Technology
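For what it’s worth, the extrapolation arithmetic above is easy to check; here is a two-line sketch reproducing it, using only the figures quoted in the text.

```python
# Reproducing the foundation-scale extrapolation quoted above.
energy_density = 0.22      # kWh per cubic meter of EC3 (as quoted)
foundation_volume = 45     # cubic meters, a typical house foundation (as quoted)
print(f"{energy_density * foundation_volume:.1f} kWh")   # ~9.9 kWh, i.e., about 10 kWh/day
```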
They achieved highest performance with organic electrolytes, especially those that combined quaternary ammonium salts—found in everyday products like disinfectants—with acetonitrile, a clear, conductive liquid often used in industry, Figure 3.

Figure 3 They also identified needed properties for the electrolyte and investigated many possibilities for this critical component. Source: Massachusetts Institute of Technology
If this all sounds only like speculation from a small-scale benchtop lab project, it is, and it isn’t. Much of the work was done in cooperation with the American Concrete Institute, a research and promotional organization that studies all aspects of concrete, including formulation, application, standardized tests, long-term performance, and more.
While the MIT team, perhaps not surprisingly, is positioning this development as the next great thing—and it certainly gets a lot of attention in the mainstream media due to its tantalizing keywords of “concrete” and “battery”—there are genuine long-term factors to evaluate related to scaling up to a foundation-sized mass:
- Does the final form of the concrete matter, such as a large cube versus flat walls?
- What are the partial and large-scale failure modes?
- What are the long-term effects of weather exposure, as this material is concrete (which is well understood) but with an additive?
- What happens when an EC3 foundation degrades or fails—do you have to lift the house and replace the foundation?
- What are the short- and long-term influences on performance, and how does the formulation affect that performance?
The performance and properties of the many existing concrete formulations have been tested in the lab and in the field over decades, and “improvements” are not done casually, especially in consideration of the end application.
Since demonstrating this concrete battery in structural mode lacks visual impact, the MIT team built a more attention-grabbing demonstration battery of stacked cells to provide 12 V of power. They used this to operate a 12-V computer fan and a 5-V USB output (via a buck regulator) for a handheld gaming console, Figure 4.

Figure 4 A 12-V concrete battery powering a small fan and game console provides a visual image which is more dramatic and attention-grabbing. Source: Massachusetts Institute of Technology
The work is detailed in their paper “High energy density carbon–cement supercapacitors for architectural energy storage,” published in Proceedings of the National Academy of Sciences (PNAS). It’s behind a paywall, but there is a posted student thesis, “Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases.” Finally, there’s also a very informative 18-slide, 21-minute PowerPoint presentation at YouTube (with audio), “Carbon-cement supercapacitors: A disruptive technology for renewable energy storage,” that was developed by the MIT team for the ACI.
What’s your view? Is this a truly disruptive energy-storage development? Or will the realities of scaling up in physical volume and long-term performance, as well as “replacement issues,” make this yet another interesting advance that falls short in the real world?
Check back in five to ten years to find out. If nothing else, this research reminds us that there is potential for progress in power and energy beyond the other approaches we hear so much about.
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- New Cars Make Tapping Battery Power Tough
- What If Battery Progress Is Approaching Its End?
- Battery-Powered Large Home Appliances: Good Idea or Resource Misuse?
- Is a concrete rechargeable battery in your future?
A simpler circuit for characterizing JFETs
The circuit presented by Cor Van Rij for characterizing JFETs is a clever solution. Noteworthy is the use of a five-pin test socket wired to accommodate all of the possible JFET pinout arrangements.
This idea uses that socket arrangement in a simpler circuit. The only requirement is the availability of two digital multimeters (DMMs), which add the benefit of having a hold function to the measurements. In addition to accuracy, the other goals in developing this tester were:
- It must be simple enough to allow construction without a custom printed circuit board, as only one tester was required.
- Use components on hand as much as possible.
- Accommodate both N- and P-channel devices while using a single voltage supply.
- Use a wide range of supply voltages.
- Incorporate a current limit with LED indication when the limit is reached.
The resulting circuit is shown in Figure 1.
Figure 1 Characterizing JFETs using a socket arrangement. The fixture requires the use of two DMMs.
Q1, Q2, R1, R3, R5, D2, and TEST pushbutton S3 comprise the simple current limit circuit (R4 is a parasitic Q-killer).
S3 supplies power to S1, the polarity reversal switch, and S2 selects the measurement. J1 and J2 are banana jacks for the DMM set to read the drain current. J3 and J4 are banana jacks for the DMM set to read Vgs(off).
Note the polarities of the DMM jacks. They are arranged so that the drain current and Vgs(off) read correctly for the type of JFET being tested—positive IDSS and negative Vgs(off) for N-channel devices and negative IDSS and positive Vgs(off) for P-channel devices.
R2 and D1 indicate the incoming power, while R6 provides a minimum load for the current limiter. Resistor R8 isolates the DUT from the effects of DMM-lead parasitics, and R9 provides a path to earth ground for static dissipation.
Testing JFETs
Figure 2 shows the tester setup measuring Vgs(off) and IDSS for an MPF102, an N-channel device. The specified values for this device are a maximum Vgs(off) of -8 V and an IDSS of 2 to 20 mA. Note that the hold function of the meters was used to maintain the measurements for the photograph. The supply for this implementation is a nominal 12-volt “wall wart” salvaged from a defunct router.

Figure 2 The test of an MPF102 N-channel JFET using the JFET characterization circuit.
Figure 3 shows the current limit in action by setting the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102. The limit is 52.2 mA, and the I-LIMIT LED is brightly lit.

Figure 3 The current limit test that sets the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102.
John L. Waugaman’s love of electronics began when he built a crystal set at age 10 with his father’s help. A BSEE from Carnegie-Mellon University led to a 30-year career in industry designing product inspection equipment, along with four patents. After being RIF’d, he spent the next 20 years as a consultant specializing in analog design for industrial and military projects. Now he’s retired, sort of, but still designing. It’s in his blood, he guesses.
Related Content
- A simple circuit to let you characterize JFETs more accurately
- A Conversation with Mike Engelhardt on QSPICE
- Characteristics of Junction Field Effect Transistors (JFET)
- Simple circuit lets you characterize JFETs
- Building a JFET voltage-tuned Wien bridge oscillator
- Discrete JFETs still prominent in design: Low input voltage power supply
Gold-plated PWM-control of linear and switching regulators
“Gold-plated” without the gold plating
Alright, I admit that the title is a bit over the top. So, what do I mean by it? I mean that:
(1) The application of PWM control to a regulator does not significantly degrade the inherent DC accuracy of its output voltage,
(2) Any ability of the regulator’s output voltage to reach below that of its internal reference is supported, and
(3) This is accomplished without the addition of a new reference voltage.
Refer to Figure 1.

Figure 1 This circuit meets the requirements of “Gold-Plated PWM control” as stated above.
How it works
The values of components Cin, Cout, Cf, and L1 are obtained from the regulator’s datasheet. (Note that if the regulator is linear, L1 is replaced with a short.)
The datasheet typically specifies a preferred value of Rg, a single resistor between ground and the feedback pin FB.
Taking the DC voltage VFB of the regulator’s FB pin into account, R3 is selected so that U2a supplies a Vsup voltage greater than or equal to 3.0 V. C7 and R3 ensure that the composite is non-oscillatory, even with decoupling capacitor C6 in place. C6 is required for the proper operation of the SN74AC04 IC U1.
The following equations govern the circuit’s performance, where Vmax is the desired maximum regulator output voltage:
R3 = ( Vsup / VFB – 1 ) · 10k
Rg1 = Rg / ( 1 – ( VFB / Vsup ) / ( 1 – VFB/Vmax ))
Rg2 = Rg · Rg1 / ( Rg1 – Rg )
Rf = Rg · ( Vmax / VFB – 1 )
They enable the regulator output to reach zero volts (if it is capable of such) when the PWM inputs are at their highest possible duty cycle.
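As a quick sanity check of these equations, here is a short Python sketch that evaluates them. The example inputs (VFB = 0.6 V, Vmax = 5 V, Vsup = 3 V, Rg = 10 kΩ) are illustrative assumptions only, not values from any particular regulator datasheet.

```python
# Sketch of the Design Idea's resistor equations. Example values are illustrative
# assumptions, not taken from a specific regulator datasheet.

def pwm_divider_values(v_fb, v_max, v_sup, rg, r4=10e3):
    r3 = (v_sup / v_fb - 1) * r4
    rg1 = rg / (1 - (v_fb / v_sup) / (1 - v_fb / v_max))
    rg2 = rg * rg1 / (rg1 - rg)
    rf = rg * (v_max / v_fb - 1)
    return r3, rg1, rg2, rf

r3, rg1, rg2, rf = pwm_divider_values(v_fb=0.6, v_max=5.0, v_sup=3.0, rg=10e3)
print(f"R3  = {r3/1e3:.2f} k")
print(f"Rg1 = {rg1/1e3:.2f} k, Rg2 = {rg2/1e3:.2f} k, Rf = {rf/1e3:.2f} k")
# Consistency check: Rg1 || Rg2 should reproduce the datasheet's Rg.
print(f"Rg1 || Rg2 = {(rg1 * rg2 / (rg1 + rg2))/1e3:.2f} k")
```

Note that, per these equations, Rg1 in parallel with Rg2 exactly equals the datasheet’s Rg, consistent with the goal of not degrading the regulator’s inherent DC accuracy.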
U1 is part of two separate PWMs whose composite output can provide up to 16 bits of resolution. Ra and Rb + Rc establish a factor of 256 for the relative significance of the PWMs.
If eight bits or less of resolution is required, Rb and Rc, and the least significant PWM, can be eliminated, and all six inverters can be paralleled.
The PWMs’ minimum frequency requirements shown are important because when those are met, the subsequent filter passes a peak-to-peak ripple less than 2⁻¹⁶ of the composite PWM’s full-scale range. This filter consists of Ra, Rb + Rc, R5 to R7, C3 to C5, and U2b.
Errors
The most stringent need to minimize errors comes from regulators with low and highly accurate reference voltages. Let’s consider 600 mV and 0.5% from which we arrive at a 3-mV output error maximum inherent to the regulator. (This is overly restrictive, of course, because it assumes zero-tolerance resistors to set the output voltage. If 0.1% resistors were considered, we’d add 0.2% to arrive at 0.7% and more than 4 mV.)
Broadly, errors come from imperfect resistor ratios and component tolerances, op-amp input offset voltages and bias currents, and non-linear SN74AC04 output resistances. The 0.1% resistors are reasonably cheap.
Resistor ratios
If nominally equal in value, such resistors, forming a ratio, contribute a worst-case error of ± 0.1%. For those of different values, the worst is ± 0.2%. Important ratios involve:
- Rg1, Rg2, and Rf
- R3 and R4
- Ra and Rb + Rc
Various Rf, Rg ratios are inherent to regulator operation.
The Rg1, Rg2; R3, R4; and Ra, Rb + Rc pairs have been introduced as requirements for PWM control.
The Ra / (Rb + Rc) error is ± 0.2%, but since this involves a ratio of 8-bit PWMs at most, it incurs less than 1 least significant bit (LSbit) of error.
The Rg1, Rg2 pair introduces an error of ±0.2 % at most.
The R3, R4 pair is responsible for a worst-case ±0.2 %. All are less than the 0.5% mentioned earlier.
Temperature drift
The OPA2376 has a worst-case input offset voltage of 25 µV over temperature. Even if U2a has a gain of 5 to convert FB’s 600 mV to 3 V, this becomes only 125 µV.
Bias current is 10 pA maximum at 25°C; at 125°C, we are given only a typical value of 250 pA.
Of the two op-amps, U2b sees the higher input resistance. But its current would have to exceed 6 nA to produce even 1-mV of offset, so these op-amps are blameless.
To determine U1’s output resistance, note that its spec shows a minimum logic-high output voltage of 2.46 V for a 3-V supply under a 12-mA load. This means that the maximum output resistance of each inverter is 45 Ω, which gives us 9 Ω for five in parallel. (The maximum voltage drop is lower for a 12-mA logic low, resulting in a lower resistance, but we don’t know how much lower, so we are forced to worst-case it at a ridiculous 0 V!)
Counting C3 as a short under dynamic conditions, the five inverters see a 35-kΩ load, leading to a less than 0.03% error.
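The arithmetic behind those figures is easy to reproduce; here is a minimal sketch using only the numbers quoted above.

```python
# Back-of-the-envelope check of the SN74AC04 output-resistance numbers above.
v_supply = 3.0      # V, logic supply
v_oh_min = 2.46     # V, minimum logic-high output at 12 mA (per the AC04 spec quoted)
i_load = 12e-3      # A
r_inv = (v_supply - v_oh_min) / i_load   # worst-case resistance per inverter
r_five = r_inv / 5                        # five inverters in parallel
r_load = 35e3                             # effective load with C3 treated as a short
print(f"Per-inverter: {r_inv:.0f} ohm, five in parallel: {r_five:.1f} ohm")
print(f"Resulting gain error: {r_five / r_load * 100:.3f}%")   # ~0.026%
```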
Wrapping up
The regulator and its output range might need an even higher voltage, but the input voltage IN has been required to exceed 3.2 V. This is because U1 is spec’d to swing to no further than 80 mV from its supply rails under loads of 2 kΩ or more. (I’ve added some margin, but it’s needed only for the case of maximum output voltage.)
You should specify Vmax to be slightly higher than needed so that U2b needn’t swing all the way to ground. This means that a small negative supply for U2 is unnecessary. IN must also be less than 5.5 V to avoid exceeding U2’s spec. If a larger value of IN is required by the regulator, an inexpensive LDO can provide an appropriate U2 supply.
I grant that this design might be overkill, but I wanted to see what might be required to meet the goals I set. But who knows, someone might find it or some aspect of it useful.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- PWM buck regulator interface generalized design equations
- Improve PWM controller-induced ripple in voltage regulators
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Negative time-constant and PWM program a versatile ADC front end
- Another PWM controls a switching voltage regulator
The role of AI processor architecture in power consumption efficiency

From 2005 to 2017—the pre-AI era—the electricity flowing into U.S. data centers remained remarkably stable. This was true despite the explosive demand for cloud-based services. Social networks such as Facebook, Netflix, real-time collaboration tools, online commerce, and the mobile-app ecosystem all grew at unprecedented rates. Yet continual improvements in server efficiency kept total energy consumption essentially flat.
In 2017, AI deeply altered this course. The escalating adoption of deep learning triggered a shift in data-center design. Facilities began filling with power-hungry accelerators, mainly GPUs, for their ability to crank through massive tensor operations at extraordinary speed. As AI training and inference workloads proliferated across industries, energy demand surged.
By 2023, U.S. data centers had doubled their electricity consumption relative to a decade earlier with an estimated 4.4% of all U.S. electricity now feeding data-center racks, cooling systems, and power-delivery infrastructure.
According to the Berkeley Lab report, data-center load growth has tripled over the past decade and is projected to double or triple again by 2028. The report estimates that AI workloads alone could by that time consume as much electricity annually as 22% of all U.S. households—a scale comparable to powering tens of millions of homes.

Total U.S. data-center electricity consumption is projected to increase ten-fold from 2014 through 2028. Source: 2024 U.S. Data Center Energy Usage Report, Berkeley Lab
This trajectory raises a question: What makes modern AI processors so energy-intensive? Whether rooted in semiconductor physics, parallel-compute structures, memory-bandwidth bottlenecks, or data-movement inefficiencies, understanding the causes becomes a priority. Analyzing the architectural foundations of today’s AI hardware may lead to corrective strategies to ensure that computational progress does not come at the expense of unsustainable energy demand.
What’s driving energy consumption in AI processors
Unlike traditional software systems—where instructions execute in a largely sequential fashion, one clock cycle and one control-flow branch at a time—large language models (LLMs) demand massively parallel processing of multi-dimensional tensors. Matrices many gigabytes in size must be fetched from memory, multiplied, accumulated, and written back at staggering rates. In state-of-the-art models, this process encompasses hundreds of billions to trillions of parameters, each of which must be evaluated repeatedly during training.
Training models at this scale requires feeding enormous datasets through racks of GPU servers running continuously for weeks or even months. The computational intensity is extreme, and so is the energy footprint. For example, the training run for OpenAI’s GPT-4 is estimated to have consumed around 50 gigawatt-hours of electricity. That’s roughly equivalent to powering the entire city of San Francisco for three days.
This immense front-loaded investment in energy and capital defines the economic model of leading-edge AI. Model developers must absorb stunning training costs upfront, hoping to recover them later through the widespread use of the inferred model.
Profitability hinges on the efficiency of inference, the phase during which users interact with the model to generate answers, summaries, images, or decisions. “For any company to make money out of a model—that only happens on inference,” notes Esha Choukse, a Microsoft Azure researcher who investigates methods for improving the efficiency of large-scale AI inference systems. The quote appeared in the May 20, 2025, MIT Technology Review article “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.”
Indeed, experts across the industry consistently emphasize that inference, not training, is becoming the dominant driver of AI’s total energy consumption. This shift is driven by the proliferation of real-time AI services—millions of daily chat sessions, continuous content generation pipelines, AI copilots embedded into productivity tools, and ever-expanding recommender and ranking systems. Together, these workloads operate around the clock, in every region, across thousands of data centers.
As a result, it’s now estimated that 80–90% of all compute cycles serve AI inference. As models grow, user demand accelerates, and applications diversify, this imbalance will only widen. The challenge is no longer merely reducing the cost of training but fundamentally rethinking the processor architectures and memory systems that underpin inference at scale.
Deep dive into semiconductor engineering
Understanding energy consumption in modern AI processors requires examining two fundamental factors: data processing and data movement. In simple terms, this is the difference between computing data and transporting data across a chip and its surrounding memory hierarchy.
At first glance, the computational side seems conceptually straightforward. In any AI accelerator, sizeable arrays of digital logic—multipliers, adders, accumulators, activation units—are orchestrated to execute quadrillions of operations per second. Peak theoretical performance is now measured in petaFLOPS with major vendors pushing toward exaFLOP-class systems for AI training.
However, the true engineering challenge lies elsewhere. The overwhelming contributor to energy consumption is not arithmetic—it is the movement of data. Every time a processor must fetch a tensor from cache or DRAM, shuffle activations between compute clusters, or synchronize gradients across devices, it expends orders of magnitude more energy than performing the underlying math.
A foundational 2014 analysis by Professor Mark Horowitz at Stanford University quantified this imbalance with remarkable clarity. Basic Boolean operations require only tiny amounts of energy—on the order of picojoules (pJ). A 32-bit integer addition consumes roughly 0.1 pJ, while a 32-bit multiplication uses approximately 3 pJ.
By contrast, memory operations are dramatically more energy hungry. Reading or writing a single bit in a register costs around 6 pJ, and accessing 64 bits from DRAM can require roughly 2 nJ. This represents nearly a 10,000× energy differential between simple computation and off-chip memory access.
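To see how lopsided this gets for a tensor workload, here is an illustrative Python sketch using the per-operation energies quoted above. The 1,024-square matrix size and the worst-case "every operand fetched from DRAM" access pattern are my own assumptions, chosen only to show orders of magnitude.

```python
# Illustrative energy comparison using the per-operation figures quoted above
# (Horowitz, 2014). The matrix size and the zero-reuse DRAM access pattern are
# deliberately simplistic assumptions intended to show orders of magnitude only.

E_MULT_32B = 3e-12     # J, 32-bit multiply
E_ADD_32B = 0.1e-12    # J, 32-bit add
E_DRAM_64B = 2e-9      # J, 64-bit DRAM access

n = 1024                                   # square matrix dimension
macs = n ** 3                              # multiply-accumulates in C = A x B
compute_energy = macs * (E_MULT_32B + E_ADD_32B)

# Pessimistic extreme: one 64-bit DRAM access worth of operand data per MAC,
# with no on-chip reuse at all.
dram_energy = macs * E_DRAM_64B

print(f"Arithmetic energy:   {compute_energy * 1e3:.2f} mJ")
print(f"DRAM-traffic energy: {dram_energy * 1e3:.2f} mJ")
print(f"Ratio (movement / math): {dram_energy / compute_energy:.0f}x")
```

Real accelerators do far better than this pessimistic extreme thanks to caches and operand reuse, but the hundreds-fold gap it exposes is exactly why data movement, not arithmetic, dominates the power budget.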
This discrepancy grows even more pronounced at scale. The deeper a memory request must travel—from L1 cache to L2, from L2 to L3, from L3 to high-bandwidth memory (HBM), and finally out to DRAM—the higher the energy cost per bit. For AI workloads, which depend on massive, bandwidth-intensive layers of tensor multiplications, the cumulative energy consumed by memory traffic considerably outstrips the energy spent on arithmetic.
In the transition from traditional, sequential instruction processing to today’s highly parallel, memory-dominated tensor operations, data movement—not computation—has emerged as the principal driver of power consumption in AI processors. This single fact shapes nearly every architectural decision in modern AI hardware, from enormous on-package HBM stacks to complex interconnect fabrics like NVLink, Infinity Fabric, and PCIe Gen5/Gen6.
Today’s computing horsepower: CPUs vs. GPUs
To gauge how these engineering principles affect real hardware, consider the two dominant processor classes in modern computing:
- CPUs, the long-standing general-purpose engines of software execution
- GPUs, the massively parallel accelerators that dominate AI training and inference today
A flagship CPU such as AMD’s Ryzen Threadripper PRO 9995WX (96 cores, 192 threads) consumes roughly 350 W under full load. These chips are engineered for versatility—branching logic, cache coherence, system-level control—not raw tensor throughput.
AI processors, in contrast, are in a different league. Nvidia’s latest B300 accelerator draws around 1.4 kW on its own. A full Nvidia DGX B300 rack unit, housing eight accelerators plus supporting infrastructure, can reach 14 kW. Even in the most favorable comparison, this represents a 4× increase in power consumption per chip—and when comparing full server configurations, the gap can expand to 40× or more.
Crucially, these raw power numbers are only part of the story. The dramatic increases in energy usage are multiplied by AI deployments in data centers where tens of thousands of such GPUs are running around the clock.
Yet hidden beneath these amazing numbers lies an even more consequential industry truth, rarely discussed in public and almost never disclosed by vendors.
The well-kept industry secret
To the best of my knowledge, no major GPU or AI accelerator vendor publishes the delivered compute efficiency of its processors, defined as the ratio of actual throughput achieved during an AI workload to the chip's theoretical peak FLOPS.
Vendors justify this absence by noting that efficiency depends heavily on the software workload: memory access patterns, model architecture, batch size, parallelization strategy, and kernel implementation all affect utilization. That is true, and LLMs in particular place extreme demands on memory bandwidth, causing utilization to drop substantially.
Even acknowledging these complexities, vendors still refrain from providing any range, estimate, or context for typical real-world efficiency. The result is a landscape where theoretical performance is touted loudly, while effective performance remains opaque.
The reality, widely understood among system architects but seldom stated plainly, is simple: modern GPUs deliver surprisingly low real-world utilization for AI workloads, often well below 10%.
A processor advertised at 1 petaFLOP of peak AI compute may deliver only ~100 teraFLOPS of effective throughput when running a frontier-scale model such as GPT-4. The remaining 900 teraFLOPS of capability go unused, yet the silicon behind them continues to draw power that is dissipated as heat, requiring extensive cooling systems that further compound total energy consumption.
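As a rough illustration of what that gap means in practice, the short Python sketch below uses the ballpark numbers discussed in this article (1 petaFLOP peak, roughly 10% utilization, and the ~1.4-kW board power cited earlier); none of these are vendor specifications.

```python
# Delivered throughput and power per useful unit of compute at low utilization.
peak_pflops = 1.0       # advertised peak, in petaFLOPS
utilization = 0.10      # assumed real-world utilization for a frontier-scale model
board_power_kw = 1.4    # accelerator board power, ~1.4 kW

delivered_tflops = peak_pflops * 1000 * utilization
kw_per_delivered_pflops = board_power_kw / (peak_pflops * utilization)

print(f"Delivered: {delivered_tflops:.0f} TFLOPS of {peak_pflops * 1000:.0f} TFLOPS peak")
print(f"Power per delivered PFLOP/s: {kw_per_delivered_pflops:.0f} kW")
```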
In effect, much of the silicon in today’s AI processors is idle most of the time, stalled on memory dependencies, synchronization barriers, or bandwidth bottlenecks rather than constrained by arithmetic capability.
This structural inefficiency is the direct consequence of the imbalance described earlier: arithmetic is cheap, but data movement is extraordinarily expensive. As models grow and memory footprints balloon, this imbalance worsens.
Without a fundamental rethinking of processor architecture—and especially of the memory hierarchy—the energy profile of AI systems will continue to scale unsustainably.
Rethinking AI processors
The implications of this analysis point to a clear conclusion: the architecture of AI processors must be fundamentally rethought. CPUs and GPUs each excel in their respective domains—CPUs in general-purpose control-heavy computation, GPUs in massively parallel numeric workloads. Neither was designed for the unprecedented data-movement demands imposed by modern large-scale AI.
Hierarchical memory caches, the cornerstone of traditional CPU design, were originally engineered as layers to mask the latency gap between fast compute units and slow external memory. They were never intended to support the terabyte-scale tensor operations that dominate today’s AI workloads.
GPUs inherited versions of these cache hierarchies and paired them with extremely wide compute arrays, but the underlying architectural mismatch remains. The compute units can generate far more demand for data than any cache hierarchy can realistically supply.
As a result, even the most advanced AI accelerators operate at embarrassingly low utilization. Their theoretical petaFLOP capabilities remain mostly unrealized—not because the math is difficult, but because the data simply cannot be delivered fast enough or close enough to the compute units.
What is required is not another incremental patch layered atop conventional designs. Instead, a new class of AI-oriented processor architecture must emerge, one that treats data movement as the primary design constraint rather than an afterthought. Such architecture must be built around the recognition that computation is cheap, but data movement is expensive by orders of magnitude.
Processors of the future will not be defined by the size of their multiplier arrays or peak FLOPS ratings, but by the efficiency of their data pathways.
Lauro Rizzatti is a business advisor at VSORA, a company offering silicon solutions for AI inference. He is a verification consultant and industry expert on hardware emulation.
Related Content
- Solving AI’s Power Struggle
- The Challenges of Powering Big AI Chips
- AI Power and Cooling Spawn Forecasting Frenzy
- Benchmarking AI Processors: Measuring What Matters
- Breaking Through Memory Bottlenecks: The Next Frontier for AI Performance
The post The role of AI processor architecture in power consumption efficiency appeared first on EDN.
Resonant inductors offer a wide inductance range

ITG Electronics launches the RL858583 Series of resonant inductors, delivering a wide inductance range, high current, and high efficiency in a compact DIP package. The family of ferrite-based, high-current inductors target demanding power electronics applications.
(Source: ITG Electronics)
The RL858583 Series features an inductance range of 6.8 μH to 22.0 μH with a tight 5% tolerance. Custom inductance values are available.
The series supports currents up to 39 A, with approximately 30% roll-off, in a compact 21.5 × 21.0 × 21.5-mm footprint. This provides exceptional current handling in a compact DIP package, ITG said.
Designed for reliability in high-stress operating conditions, the inductors offer a rated voltage of 600 VAC/1,000 VDC and dielectric strength up to 4,500 VDC. The devices feature low DC resistance (DCR) from 3.94 mΩ to 17.40 mΩ and AC resistance (ACR) values from 70 mΩ to 200 mΩ, which helps to minimize power losses and to ensure high efficiency across a range of frequencies. The operating temperature ranges from –55°C to 130°C.
The combination of high current capability, compact design, and customizable inductance options makes them suited for resonant converters, inverters, and other high-performance power applications, according to ITG Electronics. The RL858583 Series resonant inductors are RoHS-compliant and halogen-free.
The post Resonant inductors offer a wide inductance range appeared first on EDN.
Power resistors handle high-energy pulse applications

Bourns, Inc. releases its Riedon BRF Series of precision power foil resistors for high-energy pulse applications. These power resistors offer power ratings up to 2,500 W and a temperature coefficient of resistance (TCR) as low as ±15 ppm/°C, making them suited as energy dissipation solutions for circuits that require high precision. Applications include current sensing, power management, industrial power control, and energy storage.
(Source: Bourns, Inc.)
The power resistor series is available in two- and four-terminal options with termination current ratings up to 150 A. This enables developers to tailor the resistors to their exact design requirements, Bourns said.
Other key specifications include a resistance range from 0.001 to 500 Ω, low inductance of <50 nH, and load stability to 0.1%. The operating temperature range is -40°C to 130°C.
The BRF Series of power resistors is built using metal foil technology housed in an aluminum heat sink and a low-profile package. These precision power resistors are designed to meet the rugged and space-constrained requirements of high-energy pulse applications such as power converters, battery energy storage systems, industrial power supplies, inverters, and motor drives.
Available now, the Riedon BRF series is RoHS compliant. Click here for Bourns’ portfolio of metal foil resistors.
The post Power resistors handle high-energy pulse applications appeared first on EDN.
The Linksys MX4200C: A retailer-branded router with memory deficiencies

How timely! My teardown of Linksys’ VLP01 router, submitted in late September, was published one day prior to when I started working on this write-up in late October.

What’s the significance, aside from the chronological cadence? Well, at the end of that earlier piece, I wrote:
There’s another surprise waiting in the wings, but I’ll save that for another teardown another (near-future, I promise) day.
That day is today. And if you’ve already read my earlier piece (which you have, right?), you know that I actually spent the first few hundred words of it talking about a different Linksys router, the LN1301, also known as the MX4300:

I bought a bunch of ‘em on closeout from Woot (yep, the same place that the refurbished VLP01 two-pack came from), and I even asked my wife to pick up one too, with the following rationale:
That’ll give me plenty of units for both my current four-node mesh topology and as-needed spares…and eventually I may decide to throw caution to the wind and redirect one of the spares to a (presumed destructive) teardown, too.
Last month’s bigger brother
Hold that thought. Today’s teardown victim was another refurbished Linksys router two-pack from Woot, purchased a few months later, this February to be exact. Woot promotion-titled the product page as a “Linksys AX4200 Velop Mesh Wi-Fi 6 System”, and the specs further indicated that it was a “Linksys MX8400-RM2 AX4200 Velop Mesh Wi-Fi 6 Router System 2-Pack”. It cost me $19.99 plus tax (with free shipping) after another $5 promotion-code discount, and I figured that, as with the two-VLP01 kit, I’d tear down one of the two routers for your enjoyment and hold onto the other for use as a mesh node. Here’s its stock image on Woot’s website:

Looks kinda like the MX4300, doesn’t it? I admittedly didn’t initially notice the physical similarity, in part because of the MX8400 product name replicated on the outer box label:

When I started working on the sticker holding the lid in place, I noticed a corner of a piece of literature sticking out, which turned out to be the warranty brochure. Nice packing job, Linksys!

Lifting the lid:

You’ll find both routers inside, along with two Ethernet cable strands rattling around loose. Underneath the thick blue cardstock piece labeled “Setup Guide” to the right:

are the two power supplies, along with…umm…the setup guide plus a support document:

Some shots of the wall wart follow:

including the specs:

and finally, our patient, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front view:

left side:

back, both an overview and a closeup of the various connectors: power, WAN, three LAN, and USB-A. Hmm…where have I seen that combo before?


right side:

top, complete with the status LED:

and…wait. What’s this?

In addition to the always-informative K7S-03580 FCC ID, check out that MX4200C product name. When I saw it, I realized two key things:
- Linksys was playing a similar naming game to what they’d done with the VLP01. Quoting from my earlier teardown: “…an outer box shot of what I got…which, I’ve just noticed, claims that it’s an AC2400 configuration (I’m guessing this is because Linksys is mesh-adding the two devices’ theoretical peak bandwidths together? Lame, Linksys, lame…)” This time, they seemingly added the numbers in the two MX4200 device names together to come up with the “bigger is better” MX8400 moniker.
- The MX4200 (C variant, in this case) is mighty close to the MX4300. Now also recognizing the physical similarity, I suspected I had a near-clone (and much less expensive, not to mention more widely available) sibling to the no-longer-available router I’d discussed a month earlier, which, being rare, I’d been so reluctant to (presumably destructively) disassemble.
Some background from my online research before proceeding:
- The MX4200 came in two generational versions, both of them integrating 512 Mbytes of flash memory for firmware storage. V1 of the MX4200 included 512 Mbytes of RAM and had dimensions of 18.5 cm (7.3 inches) high and 7.9 cm (3.1 inches) wide. The larger V2 MX4200, at 24.3 cm (9.57 inches) high and 11 cm (4.45 inches) wide, also doubled the internal RAM capacity to 1 GByte.
- This MX4200C is supposedly a Costco-only variant (meaning what beyond the custom bottom sticker? Dunno), conceptually reminiscent of the Walmart-only VLP01 I’d taken apart last month. I can’t find any specs on it, but given its dimensional commonality with the V2 MX4200, I’ll be curious to peer inside and see if it embeds 1 GByte of RAM, too.
- And the MX4300? It’s also dimensionally reminiscent of the V2 MX4200. But this time, there are 2 GBytes of RAM inside it. Last month, I’d mentioned that the MX4300 also bumps up the flash memory to 1 GByte, but the online source I’d gotten that info from was apparently incorrect. It’s 512 Mbytes, the same as in both versions of the MX4200.
Clearly, now that I’m aware of the commonality between this MX4200C and the MX4300, I’m going to be more careful (but still comprehensive) than I might otherwise be with my dissection, in the hope of a subsequent full resurrection. To wit, here we go, following the same initial steps I used for the much smaller VLP01 a month ago. The only top groove I was able to punch through was the back edge, and even then, I had to switch to a flat-head screwdriver to make tangible disassembly progress (without permanently creasing the spudger blade in the process):

Voila:


Next to go, again as before, are those four screws:


And now for a notable deviation from last month’s disassembly scheme. That time, there were also screws under the bottom rubber “feet” that needed to be removed before I could gain access to the insides. This time, conversely, when I picked up the assembly in preparation for turning it upside-down…

Alrighty, then!

Behold our first glimpses of the insides. Referencing the earlier outer case equivalents (with the qualifier that, visually obviously, the PCB is installed diagonally), here’s the front:

Left side:

Back, along with another accompanying connectors closeup (note, by the way, the two screws at the bottom of the exposed portion of the PCB):


And right side:

Let’s next get rid of the plastic shield around the connectors, which, as was the case last month, lifted away straightaway:

And next, the finned heatsink to its left (in the earlier photo) and the rear right half of the assemblage (when viewed from the front):



We have liftoff:


Oh, goodie, Faraday cages! Hold that thought:

Rotating the assemblage around exposes the other (front left) half and its metal plate, which, with the just-seen four heatsink screws also no longer holding it in place, lifts right off as well:




You probably already noticed the colored wires in the prior shots. Here are the up-top antennas and LED assembly where they end up:


And here’s where at least some of them originate:



Unhooking the wire harness running up the side of the assemblage, along with removing the two screws noted earlier at the bottom of the PCB, enables the board’s subsequent release:

Here’s what I’m calling the PCB backside (formerly in the rear right region) which the finned heatsink previously partially covered and which you’ve already seen:

And here’s the newly-exposed-to-view frontside (formerly front left, to be precise), with even more Faraday cages awaiting my pry-off attention:

I’m happy to oblige. Upper left corner first:

Temporarily (because, as previously mentioned, I aspire to put everything back together in functionally resurrected form later) bend the tab away, and with thanks to Google Image search results for the tip, a Silicon Labs EFR32MG21 Series 2 Multiprotocol Wireless SoC, supporting Bluetooth, Thread, and Zigbee mesh protocols, comes into view. The previously shown single-lead antenna connection on the other side of the PCB is presumably associated with it:

To its left, uncaged, is a Fidelix FMND4G08S3J-ID 512 Mbyte NAND flash memory, presumably for holding the system firmware.
Most of the rest of the cages’ contents are bland, unless you’re into lots of passives; as you’ll soon see, their associated ICs on the other side are more exciting:




Note that in all these cases so far, as well as the remainder, thermal tape is employed for heat transfer purposes, not paste. Linksys’ decision not only makes it easier to see what’s underneath, it also increases the likelihood that I’ll be able to press the tape back into place during reassembly and end up with a fully functional unit:

And after all those passives, the final cage at bottom left ended up being IC-inclusive again, this time containing a Qualcomm PMP8074 power management controller:

Now for a revisit of the other side of the PCB, starting with the top-most cage and working our way to the bottom. The first one, with two antenna connectors notably above it, encompasses a portion of the wireless networking subsystem and is based on two Qualcomm Wi-Fi SoCs, the QCN5024 for 2.4 GHz and QCN5054 for 5 GHz. Above the former are two Skyworks SKY85340-11 front-end modules (FEMs); the latter is topped off by two Skyworks SKY85755-11s:


The next cage is for the processor, a quad-core 1.4 GHz Qualcomm IPQ8174 (the same SoC and speed bin as in the Linksys MX4300 I discussed last month), and the volatile memory, two ESMT M15T2G16128A 2-Gbit DDR3-933 SDRAMs. I guess we now know how the MX4200C differs from the V2 MX4200: Linksys halved the RAM to 512 Mbytes total, reminiscent of the V1 MX4200’s allocation, to come up with this Costco-special product spin.



The third one, this time with four antenna connectors below it, houses the remainder of the (5 GHz-only, in this case) Wi-Fi subsystem, comprising four more Qualcomm QCN5054s, each with a mated Skyworks SKY85755-11 FEM:


And last but not least, at bottom right is the final cage, containing a Qualcomm QCA8075 five-port 10/100/1000 Mbps Ethernet transceiver, only four ports’ worth of which are seemingly leveraged in this design (one WAN, three LAN, if you’ll recall from earlier). Its function is unsurprising given its layout proximity to the two Bothhand LG2P109RN dual-port magnetic transformers to its right:


And with that, I’ll wrap up for today. More info on the MX4200 (V1, to be precise) can be found at WikiDevi. Over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- A fresh gander at a mesh router
- The pros and cons of mesh networking
- Teardown: The router that took down my wireless network
- Is it time to upgrade to mesh networking?
The post The Linksys MX4200C: A retailer-branded router with memory deficiencies appeared first on EDN.
Understand quadrature encoders with a quick technical recap

An unexpected revisit to my earlier post on mouse encoder hacking sparked a timely opportunity to reexamine quadrature encoders, this time with a clearer lens and a more targeted focus on their signal dynamics and practical integration. So, let’s make a fresh start and dive straight into the quadrature signal magic.
Starting with a bit of theory, a quadrature signal refers to a pair of sinusoidal waveforms—typically labeled I (in-phase) and Q (quadrature)—that share the same frequency but are offset by 90° in phase. These orthogonal signals do not interfere with each other and together form the foundation for representing complex signals in systems ranging from communications to control.

Figure 1 A visualization illustrates the idealized output from a quadrature encoder, highlighting the phase relationship. Source: Author
In the context of quadrature encoders, the term describes two square wave signals, known as A and B channels, which are also 90° out of phase. This phase offset enables the system to detect the direction of rotation, count discrete steps or pulses for accurate position tracking, and enhance resolution through edge detection techniques.
As you may already be aware, encoders are essential components in motion control systems and are generally classified into two primary types: incremental and absolute. A common configuration within incremental encoders is the quadrature encoder, which uses two output channels offset in phase to detect both direction and position with greater precision, making it ideal for tracking relative motion.
Standard incremental encoders also generate pulses as the shaft rotates, providing movement data; however, they lose positional reference when power is interrupted. In contrast, absolute encoders assign a unique digital code to each shaft position, allowing them to retain exact location information even after a power loss—making them well-suited for applications that demand high reliability and accuracy.
Note that while quadrature encoders are often mentioned alongside incremental and absolute types, they are technically a subtype of incremental encoders rather than a separate category.
Oh, I almost forgot: The Z output of an ABZ incremental encoder plays a crucial role in precision positioning. Unlike the A and B channels, which continuously pulse to indicate movement and direction, the Z channel—also known as the index or marker pulse—triggers just once per revolution.
This single pulse serves as a reference point, especially useful during initialization or calibration, allowing systems to accurately identify a home or zero position. That is to say, the index pulse lets you reset to a known position and count full rotations; it’s handy for multi-turn setups or recovery after power loss.

Figure 2 A sample drawing depicts the encoder signals, with the index pulse clearly marked. Source: Author
Hands-on with a real-world quadrature rotary encoder
A quadrature rotary encoder detects rotation and direction via two offset signals; it’s used in motors, knobs, and machines for fine-tuned control. Below is the circuit diagram of a quadrature encoder I designed for a recent project using a couple of optical sensors.

Figure 3 Circuit diagram shows a simple quadrature encoder setup that employs optical sensors. Source: Author
Before we proceed, it’s worth taking a moment to reflect on a few essential points.
- A rotary encoder is an electromechanical device used to measure the rotational motion of a motor shaft or the position of a dial or knob. It commonly utilizes quadrature encoding, an incremental signaling technique that conveys both positional changes and the direction of rotation. A linear encoder, on the other hand, measures displacement along a straight path and is commonly used in applications requiring high-precision linear motion.
- Quadrature encoders feature two output channels, typically designated as channel A and channel B. By monitoring the pulse count and identifying which channel leads, the encoder interface can determine both the distance and direction of rotation.
- Many encoders also incorporate a third channel, known as the index channel (or Z channel), which emits a single pulse per full revolution. This pulse serves as a reference point, enabling the system to identify the encoder’s absolute position in addition to its relative movement.
- Each complete cycle of the A and B channels in a quadrature encoder generates square wave signals that are offset by 90 degrees in phase. This cycle produces four distinct signal transitions—A rising, B rising, A falling, and B falling—allowing for higher resolution in position tracking. The direction of rotation is determined by the phase relationship between the channels: if channel A leads channel B, the rotation is typically clockwise; if B leads A, it indicates counterclockwise motion. (A minimal decoding sketch follows Figure 4 below.)
- To interpret the pulse data generated by a quadrature encoder, it must be connected to an encoder interface. This interface translates the encoder’s output signals into a series of counts or cycles, which can then be converted into a number of rotations based on the encoder’s cycles per revolution (CPR) counts. Some manufacturers also specify pulses per revolution (PPR), which typically refers to the number of electrical pulses generated on a single channel per full rotation and may differ from CPR depending on the decoding method used.

Figure 4 The above diagram offers a concise summary of quadrature encoding basics. Source: Author
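To make the A/B phase logic concrete, here is a minimal, hypothetical x4 decoder sketch in Python. It models a polling loop with idealized, already-debounced inputs; on a real MCU, this job usually falls to an interrupt handler or a hardware quadrature counter.

```python
# Minimal x4 quadrature decoder: counts every edge of A and B (4 counts per A/B cycle).
# A-leads-B is treated as clockwise (+1), B-leads-A as counterclockwise (-1).

# Valid Gray-code transitions of the packed state (A << 1) | B.
_CW  = [(0b00, 0b10), (0b10, 0b11), (0b11, 0b01), (0b01, 0b00)]  # A leads B
_CCW = [(0b00, 0b01), (0b01, 0b11), (0b11, 0b10), (0b10, 0b00)]  # B leads A
_STEP = {t: +1 for t in _CW}
_STEP.update({t: -1 for t in _CCW})

class QuadratureDecoder:
    def __init__(self) -> None:
        self.count = 0       # position in quadrature counts (4 x PPR per revolution)
        self._prev = 0b00

    def update(self, a: int, b: int) -> None:
        state = (a << 1) | b
        # Unknown pairs (no change, or an illegal two-bit jump caused by noise) add 0.
        self.count += _STEP.get((self._prev, state), 0)
        self._prev = state

    def on_index(self) -> None:
        # Z/index pulse: re-reference the count to the home position once per revolution.
        self.count = 0

# One full A/B cycle with A leading B advances the count by 4.
dec = QuadratureDecoder()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    dec.update(a, b)
print(dec.count)  # 4
```

Dividing the accumulated count by 4 × PPR yields whole revolutions, which is exactly the CPR-versus-PPR bookkeeping noted in the list above.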
That’s all; now, back to the schematic diagram.
In the previously illustrated quadrature rotary encoder design, transmissive (through-beam) sensors work in tandem with a precisely engineered shaft encoder wheel to detect rotational movement. Once everything is correctly wired and tuned, your quadrature rotary encoder is ready for use. It outputs two phase-shifted signals, enabling direction and speed detection.
In practice, most quadrature encoders rely on one of three sensor technologies: optical, magnetic, or capacitive. Among these, optical encoders are the most commonly used. They operate by utilizing a light source and a photodetector array to detect the passage or reflection of light through an encoder disk.
A note for custom-built encoder wheels: When designing your own encoder wheel, precision is everything. Ensure the slot spacing and width are consistent and suited to your sensor’s resolution requirements. And do not overlook alignment; accurate positioning with the beam path is essential for generating clean, reliable signals.
Layers beneath the spin
So, once again we circled back to quadrature encoders—this time with a bit more intent and (hopefully) a deeper dive. Whether you are just starting to explore them or already knee-deep in decoding signals, it’s clear these seemingly simple components carry a surprising amount of complexity.
From pulse counting and direction sensing to the quirks of noisy environments, there is a whole layer of subtleties that often go unnoticed. And let us be honest—how often do we really consider debounce logic or phase shift errors until they show up mid-debug and throw everything off?
That is the beauty of it: the deeper you dig, the more layers you uncover.
If this stirred up curiosity or left you with more questions than answers, let us keep the momentum going. Share your thoughts, drop your toughest questions, or suggest what you would like to explore next. Whether it’s hardware oddities, decoding strategies, or real-world implementation hacks—we are all here to learn from each other.
Leave a comment below or reach out with your own encoder war stories. The conversation—and the learning—is far from over.
Let us keep pushing the boundaries of what we think we know, together.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Decode a quadrature encoder in software
- Understanding Incremental Encoder Signals
- AVR takes under 1µs to process quadrature encoder
- Linear position sensor/encoder offers analog and digital evaluation
- How to use FPGAs for quadrature encoder-based motor control applications
The post Understand quadrature encoders with a quick technical recap appeared first on EDN.
Motor drivers advance with new features

Industrial automation, robotics, and electric mobility are increasingly driving demand for improved motor driver ICs as well as solutions that make it easier to design motor drives. With energy consumption being a key factor in these applications, developers are looking for motor drivers that offer higher efficiency and lower power consumption.
At the same time, integrating motor drivers into existing systems is becoming more challenging, as they need to work seamlessly with a variety of motors and control algorithms such as trapezoidal, sinusoidal, and field-oriented control (FOC), according to Global Market Insights Inc.
The average electric vehicle uses 15–20 motor drivers across a variety of systems, including traction motors, power steering, and brake systems, compared with eight to 12 units in internal-combustion-engine vehicles, and industrial robots typically use six to eight motor drivers for joint articulation, positioning, and end-effector control, according to Emergen Research.
The motor driver IC market is expected to grow at a compound annual growth rate of 6.8% from 2024 to 2034, according to Emergen Research, driven by industrial automation, EVs, and smart consumer electronics. Part of this growth is attributed to Industry 4.0 initiatives that drive the demand for more advanced motor control solutions, including the use of artificial intelligence and machine-learning algorithms in motor control systems.
Emergen Research also reports that silicon carbide and gallium nitride (GaN) materials are gaining traction in high-power applications thanks to their superior switching characteristics compared with silicon-based solutions.
Other trends include the growing demand for precise motor control, the integration of advanced sensorless control, and low electromagnetic interference (EMI), according to the market research firms.
Here are a few examples of new motor drivers for industrial and automotive applications, as well as development solutions such as software, reference designs, and evaluation kits that help ease the development of motor drives.
Motor drivers
Melexis recently launched the MLX81339, a configurable motor driver with a pulse-width modulation (PWM)/serial interface for a range of industrial applications. This motor driver IC is designed for compact, three-phase brushless DC (BLDC) and stepper motor control up to 40 W in industrial applications such as fans, pumps, and positioning systems.
The motor driver targets a range of markets, including smart industrial and consumer sectors, in applications such as positioning motors, thermal valves, robotic actuators, residential and industrial ventilation systems, and dishwashing pumps. The MLX81339 is also qualified for automotive fan and blower applications.
A key feature of this motor control IC is the programmable flash memory, which enables full application customization. Designed for three-phase BLDC or bipolar stepper motors, the driver uses silent FOC and delivers reliable startup, stopping, and precise speed control from low to maximum speed, Melexis said.
The MLX81339 motor driver supports control up to 20 W at 12 V and 40 W at 24 V, integrating a three-phase driver with a configurable current limit up to 3 A, as well as under-/overvoltage, overcurrent, and overtemperature protection. Other key specifications include a wide supply voltage range of 6 V to 26 V and an operating temperature range of –40°C to 125°C (junction temperature up to 150°C).
The MLX81339 also incorporates 8× general-purpose I/Os and several interfaces, including PWM/FG, I2C, UART, and SPI, for easy integration into both legacy and smart systems. It also supports both sensor-based and sensorless control.
Melexis offers the Melexis StartToRun web tool to accelerate motor driver prototyping, eliminating engineering tasks by generating configuration files based on simple user inputs. In addition to the motor and electrical parameters, the tool includes prefilled mechanical values.
The MLX81339, housed in QFN24 and SO8-EP packages, is available now. A code-free and configurable MLX80339 for rapid deployment will be released in the first quarter of 2026.
Melexis’s MLX81339 motor driver (Source: Melexis)
Earlier this year, STMicroelectronics introduced the VNH9030AQ, an integrated full-bridge DC motor driver with high-side and low-side MOSFET gate drivers, real-time diagnostics, and protection against overvoltage transients, undervoltage, short-circuit conditions, and cross-conduction, aimed at reducing design complexity and cost. Delivering greater flexibility to system designers, the MOSFETs can be configured either in parallel or in series, allowing them to be used in systems with multiple motors or to meet other specific requirements.
The integrated non-dissipative current-sense circuitry monitors the current flowing through the device to distinguish each motor phase, contributing to the driver’s efficiency. The standby power consumption is very low over the full operating temperature range, easing use in zonal controller platforms, ST said.
This DC motor driver can be used in a range of automotive applications, including functional safety. The driver also provides a dedicated pin for real-time output status, easing the design into functional-safety and general-purpose low-/mid-power DC-motor-driven applications while reducing the requirements for external circuitry.
With an RDS(on) of 30 mΩ per leg, the VNH9030AQ can handle mid- and low-power DC-motor-driven applications such as door-control modules, washer pumps, powered lift gates, powered trunks, and seat adjusters.
The driver is part of a family of devices that leverage ST’s latest VIPower M0-9 technology, which permits monolithic integration of power and logic circuitry. All products, including the VNH9030AQ, are housed in a 6 × 6-mm, thermally enhanced triple-pad QFN package. The package is designed for optimal underside cooling and shares a common pinout to ease layout and software reuse.
The VNH9030AQ is available now. ST also offers a ready-to-use VNH9030AQ evaluation board and the TwisterSim dynamic electro-thermal simulator to simulate the motor driver’s behavior under various operating conditions, including electrical and thermal stresses.
STMicroelectronics’ VNH9030AQ full-bridge DC motor driver (Source: STMicroelectronics)
Targeting both automotive and industrial applications, the Qorvo Inc. 160-V three-phase BLDC motor driver also aims to reduce solution size, design time, and cost with an integrated power manager and configurable analog front end (AFE). The ACT72350 160-V gate driver can replace as many as 40 discrete components in a BLDC motor control system, and the configurable AFE enables designers to configure their exact sensing and position detection requirements.
The ACT72350 includes a configurable power manager with an internal DC/DC buck converter and LDOs to support internal components and serve as an optional supply for the host microcontroller (MCU). In addition, by offering a wide, 25-V to 160-V input range, designers can reuse the same design for a variety of battery-operated motor control applications, including power and garden tools, drones, EVs, and e-bikes.
The ACT72350 provides the analog circuitry needed to implement a BLDC motor control system and can be paired with a variety of MCUs, Qorvo said. It provides high efficiency via programmable propagation delay, precise current sensing, and BEMF feedback, as well as differentiated features for safety-critical applications.
The SOI-based motor driver is available now in a 9.0 × 9.0-mm, 57-pin QFN package. An evaluation kit is available, along with a model of the ACT72350 in Qorvo’s QSPICE circuit simulation software at www.qspice.com.
Qorvo’s ACT72350 three-phase BLDC motor driver (Source: Qorvo Inc.)
Software, reference designs, and evaluation kits
Motor driver IC and power semiconductor manufacturers also deliver software suites, reference designs, and development kits to simplify motor drive design and development. A few examples include Power Integrations’ MotorXpert software, Efficient Power Conversion Corp.’s (EPC’s) GaN-based motor driver reference design, and a modular motor driver evaluation kit developed by Würth Elektronik and Nexperia.
Power Integrations continues to enhance its MotorXpert software for its BridgeSwitch and BridgeSwitch-2 half-bridge motor driver ICs. The latest version, MotorXpert v3.0, enables FOC without shunts and their associated sensors. It also adds support for advanced modulation schemes and features V/F and I/F control to ensure startup under any load condition.
Designed to simplify single- and three-phase sensorless motor drive designs, the v3.0 release adds a two-phase modulation scheme, suited for high-temperature environments, reducing inverter switching losses by 33%, according to the company. It allows developers to trade off the temperature of the inverter versus torque ripple, particularly useful in applications such as hot water circulation pumps, reducing heat-sink requirements and enclosure cost, the company said.
The software also delivers a five-fold improvement to the waveform visualization tool and an enhanced zoom function, providing more data for motor tuning and debugging. The host-side application includes a graphical user interface with Power Integrations’ digital oscilloscope visualization tool to make it easy to design and configure parameters and operation and to simplify debugging. Also easing development are parameter tool tips and a tuning assistant.
The software suite is MCU-agnostic and includes a porting guide to simplify deployment with a range of MCUs. It is implemented in the C language to MISRA standards.
Power Integrations said development time is greatly reduced by the included single- and three-phase code libraries with sensorless support, reference designs, and other tools such as a power supply design and analysis tool. Applications include air conditioning fans, refrigerator compressors, fluid pumps, washing machine and dryer drums, range hoods, industrial fans, and heat pumps.
Power Integrations’ MotorXpert software suite (Source: Power Integrations)
EPC claims the first GaN-based motor driver reference design for humanoid robots with the launch of the EPC91118 reference design for motor joints. The EPC91118 delivers up to 15 A RMS per phase from a wide input DC voltage, ranging from 15 V to 55 V, in an ultra-compact, circular form factor.
The reference design is optimized for space-constrained and weight-sensitive applications such as humanoid limbs and drone propulsion. It shrinks inverter size by 66% versus silicon, EPC said, and eliminates the need for electrolytic capacitors due to the GaN ICs and high-frequency operation. The high switching frequency instead allows the use of smaller MLCCs.
The reference design is centered around the EPC23104 ePower stage IC, a monolithic GaN IC that enables higher switching frequencies and reduced losses. The power stage is combined with current sensing, a rotor shaft magnetic encoder, an MCU, RS-485 communications, and 5-V and 3.3-V power supplies on a single board that fits within a 32-mm-diameter footprint (55-mm-diameter outer frame; 32-mm-diameter inverter).
EPC’s EPC91118 motor driver reference design (Source: Efficient Power Conversion Corp.)
Aimed at faster development of motor controllers, Würth Elektronik and Nexperia have collaborated on the NEVB-MTR1-KIT1 modular motor driver evaluation kit. The kit can be configured for use in under two minutes and is powered via USB-C.
The companies highlight the modularity of the evaluation board that can be adapted to a wide range of motors, control algorithms, and test setups, enabling faster optimization as well as faster iterations and testing. With an open architecture, the kit enables MCUs and components to be easily exchanged, and the open-source firmware allows developers to quickly adapt and develop motor controllers under real-world conditions, according to the companies.
The kit includes a three-phase inverter board, a motor controller board, an MCU development board, pre-wired motor connections, and a BLDC motor. A key feature is the high-current connectors integrated by Würth Elektronik, which enable evaluations up to 1 kW at 48 V.
The demands on dynamics, fault tolerance, and energy efficiency in drive systems are rising steadily, resulting in increasingly more complex motor control system design, according to the companies. The selection of the right switches (MOSFETs and IGBTs), gate drivers, and protection circuits is critical to ensure lower switching losses, better thermal behavior, and stable dynamics.
The behavior of the components must be carefully validated under real-world conditions, taking into consideration factors such as parasitic elements, switching transients, and EMI, according to the companies. The modular kit helps with this by enabling different motors and control concepts to be evaluated.
The Würth Elektronik and Nexperia NEVB-MTR1-KIT1 motor drive evaluation kit (Source: Würth Elektronik)
The post Motor drivers advance with new features appeared first on EDN.
A 0-20mA source current to 4-20mA loop current converter
A 4 to 20 mA loop current is familiar terminology to instrumentation and electronics engineers in the process industries. Field transmitters for pressure, temperature, flow, etc., give out 4 to 20 mA current signals corresponding to the respective process parameters.
Industrial equipment, such as plant control rooms (situated at a distance from the field), will house a distributed control system (DCS) or programmable logic controller (PLC) to monitor, record, and control these process parameters. This equipment will supply 24 VDC to a typical transmitter through one wire and receive current proportional to the process parameter through another wire.
Conventionally, two wires would be needed for the supply voltage and ground and two more for the current signal; because the loop carries both power and signal on the same pair, a two-wire system cuts cable cost by 50%. Hence, field devices in process industries conform to this two-wire scheme. The DCS/PLC should receive a current in the range of 4 to 20 mA; a reading of zero indicates that the cable has been cut.
Still, there is equipment, such as gas analyzers, that gives out a conventional 0 to 20 mA current output. These signals must be converted into the 4 to 20 mA loop current format to feed the DCS/PLC in the control room.
Figure 1’s circuit does exactly this.
Figure 1 A 0 to 20 mA current source to a 4 to 20 mA loop current converter module circuit. The SPAN & ZERO potentiometers can be multiturn PCB mountable types for precision adjustment. Q1 should have a heatsink.
Connect the 24-V power supply, digital ammeter, and a load resistor to J2 as shown in Figure 1.
Then, connect a current generator to the J1 connector. This current flows through R3 and is converted to a voltage.
The output of U1B is this voltage multiplied by (1+(R10/R11)), which is nearly one. Let us call this Vspan. The output of U3 is Vreg.
There are three currents summing at pin 3 of U1A: one set by Vspan, one set by Vreg through R2 and the ZERO potentiometer, and a third through R4. Both U1A and Q1 adjust the current flow through R6 so that this current balance is satisfied in closed-loop control; the ratio R4/R6 is chosen to be 99, which sets the required scaling between the summed input currents and the output loop current. U3 generates 5 VDC from the 24 VDC input for circuit operation.
R12 loads the regulator to draw a small current. Q2 and R1 limit the output current to around 26 mA.
How to calibrate this circuit
Connect a 24-VDC power supply, a 200-Ω load resistor, and a digital ammeter to J2 as shown in Figure 1. Connect a current generator to J1 as shown.
Set the current generator output to zero. Adjust Rzero until Ioutput reads 4 mA.
Now, set the current generator to 20 mA. Adjust Rspan until Ioutput shows 20 mA.
Repeat this a few times to get the correct values. Now this current converter is calibrated.
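For reference, the ZERO and SPAN adjustments above aim the module at the ideal end-to-end mapping sketched below. This short Python check describes only the target transfer function, not the analog loop itself, but it is a convenient way to see what the calibrated output should read at intermediate inputs.

```python
# Ideal 0-20 mA to 4-20 mA mapping that the ZERO/SPAN calibration targets:
# 0 mA in -> 4 mA out, 20 mA in -> 20 mA out, linear in between.
def loop_current_ma(i_in_ma: float) -> float:
    return 4.0 + (16.0 / 20.0) * i_in_ma

for i_in in (0.0, 5.0, 10.0, 20.0):
    print(f"{i_in:5.1f} mA in -> {loop_current_ma(i_in):5.1f} mA out")
# 0.0 -> 4.0, 5.0 -> 8.0, 10.0 -> 12.0, 20.0 -> 20.0
```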
How to improve accuracy
This circuit gives an accuracy of <1%. To improve accuracy, select components with close tolerances.
You may introduce a 2.5-V reference IC after U3 and connect R2 and Rzero to this reference. In this case, R2 will be 50 kΩ and Rzero will be 20 kΩ.
Figure 2 illustrates how this current converter module is connected between the field transmitter and the control room’s DCS/PLC. Make sure to introduce a suitable surge suppressor in the line going to the field.
This module does not need a separate power supply. This can be kept in the field near the equipment giving out 0 to 20 mA.

Figure 2 A block diagram that shows the connection of the current converter in process industries.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- A two-wire temperature transmitter using an RTD sensor
- Two-wire interface has galvanic isolation
- Low-cost NiCd battery charger with charge level indicator
- Single phase mains cycle skipping controller sans harmonics
- Two-wire remote sensor preamp
The post A 0-20mA source current to 4-20mA loop current converter appeared first on EDN.
Top 10 AC/DC power supplies

AC/DC power supply manufacturers have focused their latest designs on meeting the increased demand for higher efficiency and miniaturization in industrial and medical systems. A few of them are also leveraging wide-bandgap (WBG) technologies such as gallium nitride (GaN) and silicon carbide (SiC) to achieve gains in efficiency in their latest-generation power supplies.
It is understood that these power supplies need to meet a range of safety certifications for industrial and medical applications. They must also be rugged enough to operate in harsh environments.
Here are 10 top AC/DC power supplies introduced over the past year for industrial and medical applications. In some cases, these AC/DC power supplies meet certifications for both medical and industrial markets, allowing them to be used in both applications.
Medical and industrial power supplies
GaN technology is making its way into AC/DC power supplies for industrial and medical applications, helping to improve performance and shrink designs. Bel Fuse Inc. recently introduced its 65-W GaN-based AC/DC power supplies in a compact footprint. The latest additions to the Bel Power Solutions portfolio are the MDP65 for medical applications and the HDP65 for industrial and ITE, both offering up to 92% efficiency.
The series is available in two mechanical mount options: printed-circuit-board (PCB) mount or open frame. The compact package size of 1 × 3 inches offers 50% real-estate savings compared with 2 × 3-inch devices for increased power density in lower-power applications.
The MDP65 series is a cost-effective option for the medical market while providing critical safety. Suited for Type BF medical applications, it is compliant with the IEC/EN 60601-1 safety standard and features 2 × Means of Patient Protection (MOPP) isolation. The HDP65 devices meet safety standards IEC 62368-1, EN 62368-1, UL 62368-1, and C-UL (equivalent to CAN/CSA-C22.2 No.62368-1). Both series are safety-agency-certified, meeting the latest regulatory requirements with UL and Nemko approvals.
Both series output 65-W power, offer a universal, 90- to 264-VAC input voltage range, and deliver a high power density of 17.20 W/in.3. They also feature an operating temperature range of –20°C to 70°C, ensuring reliable performance even when incorporated into compact, sealed diagnostic or portable monitoring units where heat dissipation is a challenge, the company said.
Bel Fuse’s HDP65 and MDP65 power supplies (Source: Bel Fuse Inc.)
Claiming to set new standards in power density and on-board intelligence, XP Power has introduced its FLXPro series of chassis-mount AC/DC power supplies to address space constraints and the need for increased power. The FLXPro series is also designed with SiC/GaN, achieving efficiencies up to 93%, which helps to reduce system operating costs, cooling requirements, and system size.
The FLX1K3 fully digital configurable modular power supply delivers power levels of 1.3 kW at high-line conditions and 1 kW at low-line conditions with a power density of up to 23.2 W/in.3. It is housed in a compact 1U form factor, measuring 254.0 × 88.9 × 40.6 mm (10.0 × 3.50 × 1.6 inches) and is designed to simplify power systems in healthcare, industrial, semiconductor manufacturing, analytical instrumentation, automation, renewable energy systems, and robotics applications.
The FLXPro design features up to four customer-selected, inherently flexible output modules with selectable outputs from 9 VDC to 66 VDC and a wide adjustment range (+10% to –40%), which can be configured under live conditions to form part of a customer’s active control system, XP Power said. The output modules can be combined into multiple parallel and series configurations, and multiple FLXPro units can also be combined in parallel for higher-power applications.
XP Power said this flexibility optimizes application performance and control, addressing requirements for fixed and variable loads.
A unique feature of the FLXPro series is the fully digital architecture for both the input stage and output modules. It is the foundation for XP Power’s new iPSU Intelligent Power technology, which converts internal data into usable information for quick decisions that improve application safety and reduce operating costs.
The FLXPro series also provides extensive diagnostics, including a new Black Box Snapshot feature that reduces troubleshooting time after shutdown events by recording in-depth system status at, and prior to, shutdown; tri-color LEDs that indicate power supply health with a truth table incorporated on the chassis for simple interpretation without manuals or digital communications; and multiple internal temperature measurements for fast status checks through temperature diagnostics that drive intelligent fan control and overtemperature warnings and alarms.
FLXPro also features built-in user-defined digital controls, signals, alarms, and output controllability. Inputs, outputs, and firmware can be configured through the user interface or directly over direct digital communications. It supports ES1 isolated digital communications and uses PMBus over I2C for digital communications, enabling real-time control, monitoring, and data logging. The operating temperature range is –20°C to 70°C.
XP Power’s FLXPro series (Source: XP Power)
Also addressing industrial and medical applications with an efficient and power-dense design is Murata Manufacturing Co. Ltd.’s PQC600 open-frame AC/DC power supplies. Target markets include hospital beds, dentist chairs, medical equipment, and industrial process machinery.
The industrial-grade PQC600 offers 600 W of power in a package that is less than 1U in height. It leverages the Murata Power Solutions transformer design with an optimized layout and package design. With a 600-W forced-air cooling design, it achieves an efficiency of 95% at full load. Key features include an optimized interleaved power-factor correction, back-end synchronous rectification, and a droop-current-sharing feature, enabling multiple units to be configured in parallel for greater power scalability.
The PQC600 is certified to the IEC 60601-1 Edition 3 medical safety standard, which includes 2 × MOPP from primary to secondary, 1 MOPP from the chassis to ground, and 1 MOPP from output to chassis. It also complies with the IEC 60601-1-2 4th Edition for electromagnetic compatibility (EMC) standards and is suitable for use with medical devices that have Type B or Type BF applied parts.
Also targeting the need for high efficiency and miniaturization is the NSP-75/100/150/200/320 series of AC/DC enclosed-type power supplies from Mean Well Enterprises Co. Ltd. The NSP series surpasses Mean Well’s RSP series, which has been on the market for over 10 years, with a higher cost-performance ratio. It offers a wider, 85- to 305-VAC input range; an extended temperature range of –40°C to 85°C with full load operation possible up to 60°C, making it suitable for harsher environments; and a smaller footprint, ranging from 28% to 46% smaller than the RSP series.
The NSP series offers high efficiency of 90% to 94.5% and low no-load power consumption (<0.3 W to 0.5 W), depending on the model, along with 200% peak-power-output capability. Other features include short, overload, overvoltage, and overtemperature protection; programmable output voltage; ultra-low leakage of <350 µA; and operation at altitudes up to 5,000 meters.
The AC/DC power supplies also offer safety certifications in multiple industries, including ICT, industrial, medical, household, and green energy applications, and meet OVC III requirements. Safety certifications include CB/DEKRA/UL/RCM/BSMI/CCC/EAC/BIS/KC/CE/UKCA, and IEC/EN/UL 62368-1, 61010-1, 61558-1, 62477-1, and SEMI 47 for semiconductor equipment. They meet 2 × MOPP and medical BF-grade applications.
Mean Well’s NSP-320 power supply (Source: Mean Well Enterprises Co. Ltd.)
Medical power supplies
P-Duke Technology Co. Ltd. launched the MAD150 medical-grade AC/DC power supply series, capable of delivering up to 150 W of continuous output power and 200-W peak power for five seconds. The compact, 3 × 2-inch package is available in open-frame, enclosed, and DIN-rail options, with connection types including JST connectors, Molex connectors, and screw terminals.
Suited for most industries worldwide, the series features a universal input range from 85 to 264 VAC and supports DC input voltages from 88 to 370 VDC. The MAD150 series provides single-output options for medical devices at 12, 15, 18, 24, 28, 36, 48, and 54 VDC, with up to 7% output adjustability.
Designed for medical applications and suited for BF-type parts, it offers less than 100-μA patient leakage current, 2 × MOPP, and 4,000-VAC input-to-output isolation. Applications include portable medical devices, diagnostic equipment, monitoring equipment, hospital beds, and medical carts.
These devices reduce thermal generation, offer an extended temperature range of –40°C to 85°C, and provide a conversion efficiency up to 94%. It operates at altitudes up to 5,000 meters.
The MAD150 is certified to IEC/EN/ANSI/AAMI ES 60601-1 (Medical electrical equipment – Part 1: General requirements for basic safety and essential performance) and IEC/EN/UL 62368-1 (Audio/video, information and communication technology equipment – Part 1: Safety requirements).
Advanced Energy Industries Inc. has introduced the NCF425 series of 425-W cardiac floating (CF)-rated medical open-frame AC/DC power supplies with CF-level isolation and leakage current. These standard, off-the-shelf power supplies, simplifying isolation and speeding time to market, are certified to IEC 60601-1 and streamline critical medical device product development.
Advanced Energy said it is one of the few companies that provides standard, off-the-shelf CF-rated power products. The system-level CF rating is the most stringent medical device electrical safety classification, with certification needed for equipment that has direct contact with the heart or bloodstream, the company explained.
The company’s CF-rated portfolio was initially launched in September 2024 with the introduction of the NCF150, followed by the NCF250 and NCF600. The NCF series achieves a sub-10-µA leakage current and integrates the high levels of isolation required in critical medical devices.
This latest release offers additional options and helps reduce the number of isolation components required, translating into a smaller system size and lower cost.
The NCF family is designed to simplify thermal and electromagnetic interference (EMI) management, reduce system size and weight, and reduce the bill of materials. It also includes functionality typically provided at the system level, which reduces time and complexity in the development process, the company said.
The NCF425 is certified to the medical safety standard IEC 60601-1 and meets 2 × MOPP. Key features include a maximum output power of 425 W in a 3.5 × 6 × 1.5-inch form factor and a 5-kV defibrillator pulse protection. Applications include surgical generators, RF ablation, pulsed field ablation, cardiac-assist devices and monitors, and cardiac-mapping systems.
Advanced Energy’s NCF425 series (Source: Advanced Energy Industries Inc.)
Industrial power supplies
Delivering a high level of programmability and flexibility, XP Power’s 1.5-kW HDA1500 series suits a variety of applications across a range of industries. For example, the HDA1500 can be used in applications such as robotics, lasers, LED heating, and semiconductor manufacturing, providing benefits in digital control, communication, and status LEDs.
Rated for 1.5 kW of power with no minimum load requirement, the HDA1500 power supplies offer efficiency up to 93%, allowing for a more compact form factor as well as reducing operating costs. The HDA1500 units can be operated in parallel with active current sharing when more power is required in a rack.
Advanced digital control in power solutions has not always been widely available, according to XP Power, with the HDA1500 offering precise digital adjustment of both output current and output voltage from 0% to 105% for greater user flexibility.
The standard advanced digital control is key to the flexibility of the HDA1500, the company said. Driven by a graphical user interface, the power supply can be adjusted via several digital protocols, including PMBus, RS-485/-232, Modbus, and Ethernet, which also allow for easy integration into more advanced power control schemes.
The HDA1500 units operate from a universal single-phase mains input (90 to 264 VAC) and are reported to offer one of the widest single-rail output selections on the market, covering popular voltages between 12 VDC and 400 VDC in a portfolio of 11 units. At low-line operation, the power supplies can deliver more power than many competitive offerings, the company said.
With an operating temperature range of –25°C to 60°C, the units require no derating below 50°C. Other features include built-in overtemperature, overload, overvoltage, and short-circuit protection; a 5-VDC/1-A standby supply rail that keeps external circuitry alive when the main supply is powered down; and remote sense, which is particularly useful in applications with extended power cables.
The power supplies meet a range of ITE-related approvals, including EN55032 Class A and EN61000-3-x for emissions, as well as EN61000-4-x for immunity. Safety approvals include IEC/UL/EN62368-1 as well as all applicable CE and UKCA directives. Applications include test and measurement, factory automation, process control, semiconductor fabrication, and renewable energy systems.
XP Power’s HDA1500 series (Source: XP Power)
Targeting space-constrained industrial applications is the CBM300S series of 300-W fanless AC/DC power supplies from Cincon Electronics Co. Ltd. The series is housed in a brick package that measures 106.7 × 85.0 mm (4.2 × 3.35 inches) with an ultra-slim profile of 19.7 mm (0.78 inches). The device delivers 300-W-rated power with a peak power capability of 360 W.
The CBM300S operates with an AC input range of 90 to 264 VAC and accepts a DC input ranging from 120 to 370 VDC. Seven output voltage options are available (12, 15, 24, 28, 36, 48, and 54 VDC), with all models classified as Class I.
The series comes with safety approvals for IEC/UL/EN 62368-1 3rd edition and is EMC-compliant with EN 55032 Class B and CISPR/FCC Class B standards.
A key feature of the CBM300S is its exceptionally low leakage current of 0.75 mA maximum. It also delivers efficiency of up to 94% and operates across a wide temperature range of –40°C to 90°C, making it suitable for harsh environments.
This power supply can function at altitudes up to 5,000 meters and maintains a no-load input power consumption of less than 0.5 W. The MTBF is rated at 240,000 hours. Protection features include output overcurrent, output overvoltage, overtemperature, and continuous short-circuit protection.
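For a fanless design, the published efficiency and MTBF figures translate directly into thermal and reliability budgets. The short sketch below works through that arithmetic using only the stated numbers (300 W, 94% peak efficiency, 240,000-hour MTBF) and the common constant-failure-rate assumption; it is illustrative only.

```python
import math

P_OUT_W = 300.0        # rated output power
EFFICIENCY = 0.94      # published peak efficiency
MTBF_HOURS = 240_000.0 # published MTBF

# Heat the enclosure must shed by convection and conduction alone (no fan).
p_loss = P_OUT_W / EFFICIENCY - P_OUT_W
print(f"Dissipation at full load: {p_loss:.1f} W")

# With a constant failure rate (lambda = 1/MTBF), reliability over one year
# of continuous operation is R(t) = exp(-t/MTBF).
hours_per_year = 24 * 365
reliability_1yr = math.exp(-hours_per_year / MTBF_HOURS)
print(f"Failure rate: {1e9 / MTBF_HOURS:.0f} FIT")
print(f"Probability of surviving one year of 24/7 use: {reliability_1yr:.1%}")
```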
The CBM300S power supplies can be used in a variety of industrial/ITE applications, including automation equipment, test and measurement instruments, commercial equipment, telecom and network devices, and other industrial applications.
Recom Power GmbH has introduced a series of flexible, highly efficient AC/DC power supplies in a small form factor for new energy applications, including energy management and monitoring and powering actuators, as well as general-purpose uses.
The 20-W RAC20NE-K/277 series is available in board-mount or open-frame options. The board-mount, encapsulated power supplies measure 52.5 × 27.6 × 23.0 mm, and the open-frame devices with Molex connections measure 80.0 × 23.8 × 22.5 mm.
AC/DC power supplies increasingly must operate over nominal supply voltages from 100 VAC to 277 VAC, Recom said, and the RAC20NE-K/277 meets this requirement, delivering 20 W at optional 12-, 24-, or 36-VDC outputs. The series comprises encapsulated versions with constant-voltage or constant-current-limiting characteristics, as well as a constant-voltage open-frame type with a 12- or 24-VDC output.
The RAC20NE-K/277 series is highly efficient, Recom said, allowing reliable operation at full load to 60°C ambient and to 85°C with derating. It also offers <100-mW no-load power consumption.
The parts are Class II–insulated and OVC III–rated up to 5,000 meters and meet EN 55032 Class B EMC requirements with a floating or grounded output. Standby and no-load power dissipation meet eco-design requirements.
Recom’s RAC20NE-K/277 (Source: Recom Power GmbH)
If you’re looking for greater flexibility with more options, TDK Corp.’s ZWS-C series of 10- to 50-W industrial power supplies offers new mounting and protection options. The TDK-Lambda brand ZWS-C series of 10-, 15-, 30-, and 50-W-rated industrial AC/DC power supplies was initially launched in an open-frame configuration. Four additional options are now available: a metal L-bracket (with or without a cover), pins for PCB mounting, and two-sided board coating for all voltage and power levels.
These options can provide additional operator protection, lower the cost of wiring harnesses, or reduce the impact of dust and contamination in harsh environments, TDK said.
The ZWS-C series is available with 5-, 12-, 15-, 24-, and 48-V (50 W only) output voltages. The ZWS10C and ZWS15C models measure 63.5 × 45.7 × 22.1 mm, the ZWS30C package measures 76.2 × 50.8 × 24.2 mm, and the ZWS50C footprint measures 76.2 × 50.8 × 26.7 mm. The operating temperature with convection cooling and standard mounting ranges from –10°C to 70°C, derating linearly to 50% load between 50°C and 70°C.
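That derating behavior is simple enough to capture in a few lines: 100% load up to 50°C, falling linearly to 50% at 70°C. The helper below is a sketch based solely on that published figure; the datasheet curve remains the authoritative reference.

```python
def zws_c_available_load_fraction(ambient_c: float) -> float:
    """Approximate usable load fraction vs. ambient temperature for the ZWS-C
    (convection cooling, standard mounting), per the published derating:
    100% load up to 50 degC, linear to 50% at 70 degC; outside -10..70 degC
    is out of spec. Illustrative only; the datasheet curve is authoritative."""
    if ambient_c < -10 or ambient_c > 70:
        raise ValueError("outside the -10 to +70 degC operating range")
    if ambient_c <= 50:
        return 1.0
    return 1.0 - 0.5 * (ambient_c - 50) / 20.0   # 2.5% less load per degC above 50 degC

if __name__ == "__main__":
    for t in (-10, 25, 50, 60, 70):
        print(f"{t:>4} degC -> {zws_c_available_load_fraction(t):.0%} of rated load")
```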
The power supplies can operate at full load with an external airflow of 0.8 m/s, and no-load power consumption is typically less than 0.3 W. Other features include a 3-kVAC input-to-output, 2-kVAC input-to-ground, and 750-VAC output-to-ground (Class I) isolation. The models meet EN55011/EN55032-B conducted and radiated EMI in either Class I or Class II (double-insulated) construction, without the need for external filtering or shielding.
All models are also certified to IEC/UL/CSA/EN 62368-1 for AV, information, and communication technology equipment; EN 60335-1 for household electrical equipment; and IEC/EN 61558-1 and IEC/EN 61558-2-16. They also comply with IEC 61000-3-2 (harmonics) and IEC 61000-4 (immunity) and carry the CE and UKCA marks for the Low Voltage, EMC, and RoHS Directives.
Thanks to electrolytic capacitor lifetimes of up to 15 years, the ZWS-C models can be used in factory automation, robotics, semiconductor manufacturing, and test and measurement equipment.
TDK’s ZWS15C model (Source: TDK Corp.)