EDN Network
Power resistors handle high-energy pulse applications

Bourns, Inc. releases its Riedon BRF Series of precision power foil resistors for high-energy pulse applications. These power resistors offer power ratings up to 2,500 W and a temperature coefficient of resistance (TCR) as low as ±15 ppm/°C, making them suited as energy dissipation solutions for circuits that require high precision. Applications include current sensing, power management, industrial power control, and energy storage.
(Source: Bourns, Inc.)
The power resistor series is available in two- and four-terminal options with termination current ratings up to 150 A. This enables developers to tailor the resistors to their exact design requirements, Bourns said.
Other key specifications include a resistance range from 0.001 to 500 Ω, low inductance of <50 nH, and load stability to 0.1%. The operating temperature range is -40°C to 130°C.
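Since current sensing is a named application, a quick back-of-envelope I²R check shows how a designer might sanity-check steady-state dissipation against a derated power rating. The numbers below are illustrative only, not figures from the Bourns BRF datasheet:

```python
# Hypothetical sizing check for a low-ohmic current-sense resistor.
# Values are illustrative only, not taken from the Bourns BRF datasheet.
def sense_resistor_check(i_amps, r_ohms, p_rated_w, derate=0.5):
    """Return (dissipation_w, ok): steady-state I^2*R dissipation and
    whether it stays under the derated power rating."""
    p = i_amps ** 2 * r_ohms
    return p, p <= p_rated_w * derate

# 150 A through a 1-mOhm part, against a (hypothetical) 100 W rating
# derated to 50%:
p, ok = sense_resistor_check(150, 0.001, 100)  # p = 22.5 W, ok = True
```

For pulse applications, the same arithmetic would be done on pulse energy and the part's pulse rating rather than continuous power.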
The BRF Series of power resistors is built using metal foil technology housed in an aluminum heat sink and a low-profile package. These precision power resistors are designed to meet the rugged and space-constrained requirements of high-energy pulse applications such as power converters, battery energy storage systems, industrial power supplies, inverters, and motor drives.
Available now, the Riedon BRF series is RoHS compliant. Click here for Bourns’ portfolio of metal foil resistors.
The post Power resistors handle high-energy pulse applications appeared first on EDN.
The Linksys MX4200C: A retailer-branded router with memory deficiencies

How timely! My teardown of Linksys’ VLP01 router, submitted in late September, was published one day prior to when I started working on this write-up in late October.

What’s the significance, aside from the chronological cadence? Well, at the end of that earlier piece, I wrote:
There’s another surprise waiting in the wings, but I’ll save that for another teardown another (near-future, I promise) day.
That day is today. And if you’ve already read my earlier piece (which you have, right?), you know that I actually spent the first few hundred words of it talking about a different Linksys router, the LN1301, also known as the MX4300:

I bought a bunch of ‘em on closeout from Woot (yep, the same place that the refurbished VLP01 two-pack came from), and I even asked my wife to pick up one too, with the following rationale:
That’ll give me plenty of units for both my current four-node mesh topology and as-needed spares…and eventually I may decide to throw caution to the wind and redirect one of the spares to a (presumed destructive) teardown, too.
Last month’s bigger brother
Hold that thought. Today’s teardown victim was another refurbished Linksys router two-pack from Woot, purchased a few months later, this February to be exact. Woot promotion-titled the product page as a “Linksys AX4200 Velop Mesh Wi-Fi 6 System”, and the specs further indicated that it was a “Linksys MX8400-RM2 AX4200 Velop Mesh Wi-Fi 6 Router System 2-Pack”. It cost me $19.99 plus tax (with free shipping) after another $5 promotion-code discount, and I figured that, as with the two-VLP01 kit, I’d tear down one of the two routers for your enjoyment and hold onto the other for use as a mesh node. Here’s its stock image on Woot’s website:

Looks kinda like the MX4300, doesn’t it? I admittedly didn’t initially notice the physical similarity, in part because of the MX8400 product name replicated on the outer box label:

When I started working on the sticker holding the lid in place, I noticed a corner of a piece of literature sticking out, which turned out to be the warranty brochure. Nice packing job, Linksys!

Lifting the lid:

You’ll find both routers inside, along with two Ethernet cable strands rattling around loose. Underneath the thick blue cardstock piece labeled “Setup Guide” to the right:

are the two power supplies, along with…umm…the setup guide plus a support document:

Some shots of the wall wart follow:

including the specs:

and finally, our patient, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front view:

left side:

back, both an overview and a closeup of the various connectors: power, WAN, three LAN, and USB-A. Hmm…where have I seen that combo before?


right side:

top, complete with the status LED:

and…wait. What’s this?

In addition to the always-informative K7S-03580 FCC ID, check out that MX4200C product name. When I saw it, I realized two key things:
- Linksys was playing a similar naming game to what they’d done with the VLP01. Quoting from my earlier teardown: “…an outer box shot of what I got…which, I’ve just noticed, claims that it’s an AC2400 configuration (I’m guessing this is because Linksys is mesh-adding the two devices’ theoretical peak bandwidths together? Lame, Linksys, lame…)” This time, they seemingly added the numbers in the two MX4200 device names together to come up with the “bigger is better” MX8400 moniker.
- The MX4200 (C, in this case) is mighty close to MX4300. Now also realizing the physical similarity, I suspected I had a near-clone (and much less expensive, not to mention more widely available) sibling to the no-longer-available router I’d discussed a month earlier, which, being rare, I was therefore so reticent to (presumably destructively) disassemble.
Some background from my online research before proceeding:
- The MX4200 came in two generational versions, both of them integrating 512 Mbytes of flash memory for firmware storage. The V1 MX4200 included 512 Mbytes of RAM and measured 18.5 cm (7.3 inches) high by 7.9 cm (3.1 inches) wide. The larger V2 MX4200, at 24.3 cm (9.57 inches) high by 11 cm (4.45 inches) wide, also doubled the internal RAM capacity to 1 GByte.
- This MX4200C is supposedly a Costco-only variant (meaning what beyond the custom bottom sticker? Dunno), conceptually reminiscent of the Walmart-only VLP01 I’d taken apart last month. I can’t find any specs on it, but given its dimensional commonality with the V2 MX4200, I’ll be curious to peer inside and see if it embeds 1 GByte of RAM, too.
- And the MX4300? It’s also dimensionally reminiscent of the V2 MX4200. But this time, there are 2 GBytes of RAM inside it. Last month, I’d mentioned that the MX4300 also bumps up the flash memory to 1 GByte, but the online source I’d gotten that info from was apparently incorrect. It’s 512 Mbytes, the same as in both versions of the MX4200.
Clearly, now that I’m aware of the commonality between this MX4200C and the MX4300, I’m going to be more careful (but still comprehensive) than I might otherwise be with my dissection, in the hope of a subsequent full resurrection. To wit, here we go, following the same initial steps I used for the much smaller VLP01 a month ago. The only top groove I was able to punch through was the back edge, and even then, I had to switch to a flat-head screwdriver to make tangible disassembly progress (without permanently creasing the spudger blade in the process):

Voila:


Next to go, again as before, are those four screws:


And now for a notable deviation from last month’s disassembly scheme. That time, there were also screws under the bottom rubber “feet” that needed to be removed before I could gain access to the insides. This time, conversely, when I picked up the assembly in preparation for turning it upside-down…

Alrighty, then!

Behold our first glimpses of the insides. Referencing the earlier outer case equivalents (with the qualifier that, visually obviously, the PCB is installed diagonally), here’s the front:

Left side:

Back, along with another accompanying connectors closeup (note, by the way, the two screws at the bottom of the exposed portion of the PCB):


And right side:

Let’s next get rid of the plastic shield around the connectors, which, as was the case last month, lifted away straightaway:

And next, the finned heatsink to its left (in the earlier photo) and the rear right half of the assemblage (when viewed from the front):



We have liftoff:


Oh, goodie, Faraday cages! Hold that thought:

Rotating the assemblage around exposes the other (front left) half and its metal plate, which, with the just-seen four heatsink screws also no longer holding it in place, lifts right off as well:




You probably already noticed the colored wires in the prior shots. Here are the up-top antennas and LED assembly where they end up:


And here’s where at least some of them originate:



Unhooking the wire harness running up the side of the assemblage, along with removing the two screws noted earlier at the bottom of the PCB, enables the board’s subsequent release:

Here’s what I’m calling the PCB backside (formerly in the rear right region) which the finned heatsink previously partially covered and which you’ve already seen:

And here’s the newly-exposed-to-view frontside (formerly front left, to be precise), with even more Faraday cages awaiting my pry-off attention:

I’m happy to oblige. Upper left corner first:

Temporarily (because, as previously mentioned, I aspire to put everything back together in functionally resurrected form later) bend the tab away, and with thanks to Google Image search results for the tip, a Silicon Labs EFR32MG21 Series 2 Multiprotocol Wireless SoC, supporting Bluetooth, Thread, and Zigbee mesh protocols, comes into view. The previously shown single-lead antenna connection on the other side of the PCB is presumably associated with it:

To its left, uncaged, is a Fidelix FMND4G08S3J-ID 512 Mbyte NAND flash memory, presumably for holding the system firmware.
Most of the rest of the cages’ contents are bland, unless you’re into lots of passives; as you’ll soon see, their associated ICs on the other side are more exciting:




Note in all these so-far cases, as well as the remainder, that thermal tape is employed for heat transfer purposes, not paste. Linksys’ decision not only makes it easier to see what’s underneath but will also increase the subsequent likelihood of tape-back-in-place reassembly functional success:

And after all those passives, the final cage at bottom left ended up being IC-inclusive again, this time containing a Qualcomm PMP8074 power management controller:

Now for a revisit of the other side of the PCB, starting with the top-most cage and working our way to the bottom. The first one, with two antenna connectors notably above it, encompasses a portion of the wireless networking subsystem and is based on two Qualcomm Wi-Fi SoCs, the QCN5024 for 2.4 GHz and QCN5054 for 5 GHz. Above the former are two Skyworks SKY85340-11 front-end modules (FEMs); the latter is topped off by two Skyworks SKY85755-11s:


The next cage is for the processor, a quad-core 1.4 GHz Qualcomm IPQ8174, the same SoC and speed bin as in the Linksys MX4300 I discussed last month, and the volatile memory, two ESMT M15T2G16128A 2 Gbit DDR3-933 SDRAMs. I guess we now know how the MX4200C differs from the V2 MX4200; Linksys halved the RAM to 512 Mbytes total, reminiscent of the V1 MX4200’s allocation, to come up with this Costco-special product spin.



The third one, this time with four antenna connectors below it, houses the remainder of the (5 GHz-only, in this case) Wi-Fi subsystem: four more Qualcomm QCN5054s, each with a mated Skyworks SKY85755-11 FEM:


And last but not least, at bottom right is the final cage, containing a Qualcomm QCA8075 five-port 10/100/1000 Mbps Ethernet transceiver, only four ports’ worth of which are seemingly leveraged in this design (one WAN, three LAN, if you’ll recall from earlier). Its function is unsurprising given its layout proximity to the two Bothhand LG2P109RN dual-port magnetic transformers to its right:


And with that, I’ll wrap up for today. More info on the MX4200 (V1, to be precise) can be found at WikiDevi. Over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- A fresh gander at a mesh router
- The pros and cons of mesh networking
- Teardown: The router that took down my wireless network
- Is it time to upgrade to mesh networking?
The post The Linksys MX4200C: A retailer-branded router with memory deficiencies appeared first on EDN.
Understand quadrature encoders with a quick technical recap

An unexpected revisit to my earlier post on mouse encoder hacking sparked a timely opportunity to reexamine quadrature encoders, this time with a clearer lens and a more targeted focus on their signal dynamics and practical integration. So, let’s make a fresh start and dive straight into the quadrature signal magic.
Starting with a bit of theory, a quadrature signal refers to a pair of sinusoidal waveforms—typically labeled I (in-phase) and Q (quadrature)—that share the same frequency but are offset by 90° in phase. These orthogonal signals do not interfere with each other and together form the foundation for representing complex signals in systems ranging from communications to control.
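That orthogonality is easy to verify numerically. The sketch below samples an idealized I/Q pair over exactly one period and shows that their inner product vanishes; it's a generic plain-Python illustration, not tied to any particular encoder:

```python
import math

# Idealized I/Q pair: same frequency, 90 degrees apart, sampled over
# exactly one period.
N = 1000
i_sig = [math.sin(2 * math.pi * n / N) for n in range(N)]
q_sig = [math.cos(2 * math.pi * n / N) for n in range(N)]  # leads I by 90 deg

# Orthogonality: the inner product over a full period is numerically zero.
dot = sum(a * b for a, b in zip(i_sig, q_sig))
print(f"inner product over one period: {dot:.2e}")
```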

Figure 1 A visualization illustrates the idealized output from a quadrature encoder, highlighting the phase relationship. Source: Author
In the context of quadrature encoders, the term describes two square wave signals, known as A and B channels, which are also 90° out of phase. This phase offset enables the system to detect the direction of rotation, count discrete steps or pulses for accurate position tracking, and enhance resolution through edge detection techniques.
As you may already be aware, encoders are essential components in motion control systems and are generally classified into two primary types: incremental and absolute. A common configuration within incremental encoders is the quadrature encoder, which uses two output channels offset in phase to detect both direction and position with greater precision, making it ideal for tracking relative motion.
Standard incremental encoders also generate pulses as the shaft rotates, providing movement data; however, they lose positional reference when power is interrupted. In contrast, absolute encoders assign a unique digital code to each shaft position, allowing them to retain exact location information even after a power loss—making them well-suited for applications that demand high reliability and accuracy.
Note that while quadrature encoders are often mentioned alongside incremental and absolute types, they are technically a subtype of incremental encoders rather than a separate category.
Oh, I almost forgot: The Z output of an ABZ incremental encoder plays a crucial role in precision positioning. Unlike the A and B channels, which continuously pulse to indicate movement and direction, the Z channel—also known as the index or marker pulse—triggers just once per revolution.
This single pulse serves as a reference point, especially useful during initialization or calibration, allowing systems to accurately identify a home or zero position. That is to say, the index pulse lets you reset to a known position and count full rotations; it’s handy for multi-turn setups or recovery after power loss.
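That reset-and-count idea can be sketched in a few lines of code. The class name and CPR value below are hypothetical, purely for illustration:

```python
# Sketch of index-pulse (Z channel) homing and revolution counting.
# The class name and CPR value are hypothetical, for illustration only.
CPR = 2000  # counts per revolution after 4x decoding (500-line disk assumed)

class IndexedCounter:
    def __init__(self):
        self.count = 0       # quadrature counts since the last Z pulse
        self.turns = 0       # completed revolutions since homing
        self.homed = False   # becomes True at the first Z pulse

    def on_edge(self, direction):
        """Call with +1 or -1 for each decoded quadrature edge."""
        self.count += direction

    def on_index(self):
        """Z pulse: once per revolution, snap back to a known position."""
        if self.homed:
            self.turns += 1 if self.count > 0 else -1
        self.count = 0
        self.homed = True

c = IndexedCounter()
c.on_index()              # first Z pulse establishes the home position
for _ in range(CPR):
    c.on_edge(+1)         # one full forward revolution of decoded edges
c.on_index()              # second Z pulse: one turn completed, count reset
```

In a real system the edge and index callbacks would be driven by hardware interrupts or a counter peripheral, but the bookkeeping is the same.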

Figure 2 A sample drawing depicts the encoder signals, with the index pulse clearly marked. Source: Author
Hands-on with a real-world quadrature rotary encoder
A quadrature rotary encoder detects rotation and direction via two offset signals; it’s used in motors, knobs, and machines for fine-tuned control. Below is the circuit diagram of a quadrature encoder I designed for a recent project using a couple of optical sensors.

Figure 3 Circuit diagram shows a simple quadrature encoder setup that employs optical sensors. Source: Author
Before we proceed, it’s worth taking a moment to reflect on a few essential points.
- A rotary encoder is an electromechanical device used to measure the rotational motion of a motor shaft or the position of a dial or knob. It commonly utilizes quadrature encoding, an incremental signaling technique that conveys both positional changes and the direction of rotation. On the other hand, a linear encoder measures displacement along a straight path and is commonly used in applications requiring high-precision linear motion.
- Quadrature encoders feature two output channels, typically designated as channel A and channel B. By monitoring the pulse count and identifying which channel leads, the encoder interface can determine both the distance and direction of rotation.
- Many encoders also incorporate a third channel, known as the index channel (or Z channel), which emits a single pulse per full revolution. This pulse serves as a reference point, enabling the system to identify the encoder’s absolute position in addition to its relative movement.
- Each complete cycle of the A and B channels in a quadrature encoder generates square wave signals that are offset by 90 degrees in phase. This cycle produces four distinct signal transitions—A rising, B rising, A falling, and B falling—allowing for higher resolution in position tracking. The direction of rotation is determined by the phase relationship between the channels: if channel A leads channel B, the rotation is typically clockwise; if B leads A, it indicates counterclockwise motion.
- To interpret the pulse data generated by a quadrature encoder, it must be connected to an encoder interface. This interface translates the encoder’s output signals into a series of counts or cycles, which can then be converted into a number of rotations based on the encoder’s cycles per revolution (CPR) counts. Some manufacturers also specify pulses per revolution (PPR), which typically refers to the number of electrical pulses generated on a single channel per full rotation and may differ from CPR depending on the decoding method used.
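The four-transitions-per-cycle decoding described above can be captured in a small lookup-table decoder. In this sketch, the 2048 CPR value and the convention that A leading B means clockwise are assumptions for illustration:

```python
# Minimal 4x quadrature decoder: a lookup keyed on (previous, current)
# A/B states gives +1 or -1 per valid transition; anything else (no
# change, or an invalid two-bit jump) contributes 0.
TRANSITION = {
    # A leads B (clockwise, by the convention assumed here):
    # 00 -> 10 -> 11 -> 01 -> 00
    (0b00, 0b10): +1, (0b10, 0b11): +1, (0b11, 0b01): +1, (0b01, 0b00): +1,
    # B leads A (counterclockwise): the reverse sequence
    (0b00, 0b01): -1, (0b01, 0b11): -1, (0b11, 0b10): -1, (0b10, 0b00): -1,
}

def decode(samples):
    """samples: iterable of (A, B) bit pairs; returns the signed count."""
    count, prev = 0, None
    for a, b in samples:
        cur = (a << 1) | b
        if prev is not None:
            count += TRANSITION.get((prev, cur), 0)
        prev = cur
    return count

# One full A/B cycle yields four counts, hence the "4x" resolution boost:
cw_cycle = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]  # A rises first
counts = decode(cw_cycle)          # -> 4

# Counts to shaft angle, given a (hypothetical) CPR of 2048:
CPR = 2048
angle_deg = counts * 360.0 / CPR
```

A real interface would sample the pins fast enough (or use hardware edge detection) that no two-bit jumps occur; the table silently drops any that do.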

Figure 4 The above diagram offers a concise summary of quadrature encoding basics. Source: Author
That’s all; now, back to the schematic diagram.
In the previously illustrated quadrature rotary encoder design, transmissive (through-beam) sensors work in tandem with a precisely engineered shaft encoder wheel to detect rotational movement. Once everything is correctly wired and tuned, your quadrature rotary encoder is ready for use. It outputs two phase-shifted signals, enabling direction and speed detection.
In practice, most quadrature encoders rely on one of three sensor technologies: optical, magnetic, or capacitive. Among these, optical encoders are the most commonly used. They operate by utilizing a light source and a photodetector array to detect the passage or reflection of light through an encoder disk.
A note for custom-built encoder wheels: When designing your own encoder wheel, precision is everything. Ensure the slot spacing and width are consistent and suited to your sensor’s resolution requirements. And do not overlook alignment; accurate positioning with the beam path is essential for generating clean, reliable signals.
Layers beneath the spin
So, once again we circled back to quadrature encoders—this time with a bit more intent and (hopefully) a deeper dive. Whether you are just starting to explore them or already knee-deep in decoding signals, it’s clear these seemingly simple components carry a surprising amount of complexity.
From pulse counting and direction sensing to the quirks of noisy environments, there is a whole layer of subtleties that often go unnoticed. And let us be honest—how often do we really consider debounce logic or phase shift errors until they show up mid-debug and throw everything off?
That is the beauty of it: the deeper you dig, the more layers you uncover.
If this stirred up curiosity or left you with more questions than answers, let us keep the momentum going. Share your thoughts, drop your toughest questions, or suggest what you would like to explore next. Whether it’s hardware oddities, decoding strategies, or real-world implementation hacks—we are all here to learn from each other.
Leave a comment below or reach out with your own encoder war stories. The conversation—and the learning—is far from over.
Let us keep pushing the boundaries of what we think we know, together.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Decode a quadrature encoder in software
- Understanding Incremental Encoder Signals
- AVR takes under 1µs to process quadrature encoder
- Linear position sensor/encoder offers analog and digital evaluation
- How to use FPGAs for quadrature encoder-based motor control applications
The post Understand quadrature encoders with a quick technical recap appeared first on EDN.
Motor drivers advance with new features

Industrial automation, robotics, and electric mobility are increasingly driving demand for improved motor driver ICs as well as solutions that make it easier to design motor drives. With energy consumption being a key factor in these applications, developers are looking for motor drivers that offer higher efficiency and lower power consumption.
At the same time, integrating motor drivers into existing systems is becoming more challenging, as they need to work seamlessly with a variety of motors and control algorithms such as trapezoidal, sinusoidal, and field-oriented control (FOC), according to Global Market Insights Inc.
The average electric vehicle uses 15–20 motor drivers across a variety of systems, including traction motors, power steering, and brake systems, compared with eight to 12 units in internal-combustion-engine vehicles, and industrial robots typically use six to eight motor drivers for joint articulation, positioning, and end-effector control, according to Emergen Research.
The motor driver IC market is expected to grow at a compound annual growth rate of 6.8% from 2024 to 2034, according to Emergen Research, driven by industrial automation, EVs, and smart consumer electronics. Part of this growth is attributed to Industry 4.0 initiatives that drive the demand for more advanced motor control solutions, including the use of artificial intelligence and machine-learning algorithms in motor control systems.
Emergen Research also reports that silicon carbide and gallium nitride (GaN) materials are gaining traction in high-power applications thanks to their higher switching characteristics compared with silicon-based solutions.
Other trends include the growing demand for precise motor control, the integration of advanced sensorless control, and low electromagnetic interference (EMI), according to the market research firms.
Here are a few examples of new motor drivers for industrial and automotive applications, as well as development solutions such as software, reference designs, and evaluation kits that help ease the development of motor drives.
Motor drivers
Melexis recently launched the MLX81339, a configurable motor driver with a pulse-width modulation (PWM)/serial interface for a range of industrial applications. This motor driver IC is designed for compact, three-phase brushless DC (BLDC) and stepper motor control up to 40 W in industrial applications such as fans, pumps, and positioning systems.
The motor driver targets a range of markets, including smart industrial and consumer sectors, in applications such as positioning motors, thermal valves, robotic actuators, residential and industrial ventilation systems, and dishwashing pumps. The MLX81339 is also qualified for automotive fan and blower applications.
A key feature of this motor control IC is the programmable flash memory, which enables full application customization. Designed for three-phase BLDC or bipolar stepper motors, the driver uses silent FOC and delivers reliable startup, stopping, and precise speed control from low to maximum speed, Melexis said.
The MLX81339 motor driver supports control up to 20 W at 12 V and 40 W at 24 V, integrating a three-phase driver with a configurable current limit up to 3 A, as well as under-/overvoltage, overcurrent, and overtemperature protection. Other key specifications include a wide supply voltage range of 6 V to 26 V and an operating temperature range of –40°C to 125°C (junction temperature up to 150°C).
The MLX81339 also incorporates 8× general-purpose I/Os and several interfaces, including PWM/FG, I2C, UART, and SPI, for easy integration into both legacy and smart systems. It also supports both sensor-based and sensorless control.
Melexis offers the Melexis StartToRun web tool to accelerate motor driver prototyping, eliminating engineering tasks by generating configuration files based on simple user inputs. In addition to the motor and electrical parameters, the tool includes prefilled mechanical values.
The MLX81339, housed in QFN24 and SO8-EP packages, is available now. A code-free and configurable MLX80339 for rapid deployment will be released in the first quarter of 2026.
Melexis’s MLX81339 motor driver (Source: Melexis)
Earlier this year, STMicroelectronics introduced the VNH9030AQ, an integrated full-bridge DC motor driver with high-side and low-side MOSFET gate drivers, real-time diagnostics, and protection against overvoltage transients, undervoltage, short-circuit conditions, and cross-conduction, aimed at reducing design complexity and cost. Delivering greater flexibility to system designers, the MOSFETs can be configured either in parallel or in series, allowing them to be used in systems with multiple motors or to meet other specific requirements.
The integrated non-dissipative current-sense circuitry monitors the current flowing through the device to distinguish each motor phase, contributing to the driver’s efficiency. The standby power consumption is very low over the full operating temperature range, easing use in zonal controller platforms, ST said.
This DC motor driver can be used in a range of automotive applications, including functional safety. The driver also provides a dedicated pin for real-time output status, easing the design into functional-safety and general-purpose low-/mid-power DC-motor-driven applications while reducing the requirements for external circuitry.
With an RDS(on) of 30 mΩ per leg, the VNH9030AQ can handle mid- and low-power DC-motor-driven applications such as door-control modules, washer pumps, powered lift gates, powered trunks, and seat adjusters.
The driver is part of a family of devices that leverage ST’s latest VIPower M0-9 technology, which permits monolithic integration of power and logic circuitry. All products, including the VNH9030AQ, are housed in a 6 × 6-mm, thermally enhanced triple-pad QFN package. The package is designed for optimal underside cooling and shares a common pinout to ease layout and software reuse.
The VNH9030AQ is available now. ST also offers a ready-to-use VNH9030AQ evaluation board and the TwisterSim dynamic electro-thermal simulator to simulate the motor driver’s behavior under various operating conditions, including electrical and thermal stresses.
STMicroelectronics’ VNH9030AQ full-bridge DC motor driver (Source: STMicroelectronics)
Targeting both automotive and industrial applications, the Qorvo Inc. 160-V three-phase BLDC motor driver also aims to reduce solution size, design time, and cost with an integrated power manager and configurable analog front end (AFE). The ACT72350 160-V gate driver can replace as many as 40 discrete components in a BLDC motor control system, and the configurable AFE enables designers to configure their exact sensing and position detection requirements.
The ACT72350 includes a configurable power manager with an internal DC/DC buck converter and LDOs to support internal components and serve as an optional supply for the host microcontroller (MCU). In addition, the wide 25-V to 160-V input range lets designers reuse the same design for a variety of battery-operated motor control applications, including power and garden tools, drones, EVs, and e-bikes.
The ACT72350 provides the analog circuitry needed to implement a BLDC motor control system and can be paired with a variety of MCUs, Qorvo said. It provides high efficiency via programmable propagation delay, precise current sensing, and BEMF feedback, as well as differentiated features for safety-critical applications.
The SOI-based motor driver is available now in a 9.0 × 9.0-mm, 57-pin QFN package. An evaluation kit is available, along with a model of the ACT72350 in Qorvo’s QSPICE circuit simulation software at www.qspice.com.
Qorvo’s ACT72350 three-phase BLDC motor driver (Source: Qorvo Inc.)
Software, reference designs, and evaluation kits
Motor driver IC and power semiconductor manufacturers also deliver software suites, reference designs, and development kits to simplify motor drive design and development. A few examples include Power Integrations’ MotorXpert software, Efficient Power Conversion Corp.’s (EPC’s) GaN-based motor driver reference design, and a modular motor driver evaluation kit developed by Würth Elektronik and Nexperia.
Power Integrations continues to enhance its MotorXpert software for its BridgeSwitch and BridgeSwitch-2 half-bridge motor driver ICs. The latest version, MotorXpert v3.0, enables FOC without shunts and their associated sensors. It also adds support for advanced modulation schemes and features V/F and I/F control to ensure startup under any load condition.
Designed to simplify single- and three-phase sensorless motor drive designs, the v3.0 release adds a two-phase modulation scheme, suited for high-temperature environments, reducing inverter switching losses by 33%, according to the company. It allows developers to trade off the temperature of the inverter versus torque ripple, particularly useful in applications such as hot water circulation pumps, reducing heat-sink requirements and enclosure cost, the company said.
The software also delivers a five-fold improvement to the waveform visualization tool and an enhanced zoom function, providing more data for motor tuning and debugging. The host-side application includes a graphical user interface with Power Integrations’ digital oscilloscope visualization tool to make it easy to design and configure parameters and operation and to simplify debugging. Also easing development are parameter tool tips and a tuning assistant.
The software suite is MCU-agnostic and includes a porting guide to simplify deployment with a range of MCUs. It is implemented in the C language to MISRA standards.
Power Integrations said development time is greatly reduced by the included single- and three-phase code libraries with sensorless support, reference designs, and other tools such as a power supply design and analysis tool. Applications include air conditioning fans, refrigerator compressors, fluid pumps, washing machine and dryer drums, range hoods, industrial fans, and heat pumps.
Power Integrations’ MotorXpert software suite (Source: Power Integrations)
EPC claims the first GaN-based motor driver reference design for humanoid robots with the launch of the EPC91118 reference design for motor joints. The EPC91118 delivers up to 15 A RMS per phase from a wide input DC voltage, ranging from 15 V to 55 V, in an ultra-compact, circular form factor.
The reference design is optimized for space-constrained and weight-sensitive applications such as humanoid limbs and drone propulsion. It shrinks inverter size by 66% versus silicon, EPC said, and eliminates the need for electrolytic capacitors due to the GaN ICs and high-frequency operation. The high switching frequency instead allows the use of smaller MLCCs.
The reference design is centered around the EPC23104 ePower stage IC, a monolithic GaN IC that enables higher switching frequencies and reduced losses. The power stage is combined with current sensing, a rotor shaft magnetic encoder, an MCU, RS-485 communications, and 5-V and 3.3-V power supplies on a single board that fits within a 32-mm-diameter footprint (55-mm-diameter outer frame; 32-mm-diameter inverter).
EPC’s EPC91118 motor driver reference design (Source: Efficient Power Conversion Corp.)
Aimed at faster development of motor controllers, Würth Elektronik and Nexperia have collaborated on the NEVB-MTR1-KIT1 modular motor driver evaluation kit. The kit can be configured for use in under two minutes and is powered via USB-C.
The companies highlight the modularity of the evaluation board that can be adapted to a wide range of motors, control algorithms, and test setups, enabling faster optimization as well as faster iterations and testing. With an open architecture, the kit enables MCUs and components to be easily exchanged, and the open-source firmware allows developers to quickly adapt and develop motor controllers under real-world conditions, according to the companies.
The kit includes a three-phase inverter board, a motor controller board, an MCU development board, pre-wired motor connections, and a BLDC motor. A key feature is the high-current connectors integrated by Würth Elektronik, which enable evaluations up to 1 kW at 48 V.
The demands on dynamics, fault tolerance, and energy efficiency in drive systems are rising steadily, resulting in increasingly more complex motor control system design, according to the companies. The selection of the right switches (MOSFETs and IGBTs), gate drivers, and protection circuits is critical to ensure lower switching losses, better thermal behavior, and stable dynamics.
The behavior of the components must be carefully validated under real-world conditions, taking into consideration factors such as parasitic elements, switching transients, and EMI, according to the companies. The modular kit helps with this by enabling different motors and control concepts to be evaluated.
The Würth Elektronik and Nexperia NEVB-MTR1-KIT1 motor drive evaluation kit (Source: Würth Elektronik)
The post Motor drivers advance with new features appeared first on EDN.
A 0-20mA source current to 4-20mA loop current converter
A 4 to 20 mA loop current is familiar terminology among instrumentation/electronics engineers in the process industries. Field transmitters for pressure, temperature, flow, and other parameters output 4 to 20 mA current signals proportional to the respective process variables.
Industrial equipment, such as plant control rooms (situated at a distance from the field), will house a distributed control system (DCS) or programmable logic controller (PLC) to monitor, record, and control these process parameters. This equipment will supply 24 VDC to a typical transmitter through one wire and receive current proportional to the process parameter through another wire.
In a conventional arrangement, two wires carry the supply voltage and ground, and two more carry the current signal. A two-wire loop combines these functions, cutting cable cost by 50%, so all field devices in process industries conform to this two-wire system. The DCS/PLC should receive a current in the range of 4 to 20 mA; a current of zero indicates that the cable has been cut.
Still, some equipment, such as gas analyzers, outputs a conventional 0 to 20 mA current. These signals must be converted into the 4 to 20 mA loop format before feeding the DCS/PLC in the control room.
Figure 1’s circuit does exactly this.
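Before looking at the hardware, note that the intended transfer function is a simple linear map from the 0-to-20-mA source range onto the 4-to-20-mA loop range: a 4-mA offset (the zero adjustment) plus a gain of 16 mA/20 mA = 0.8 (the span adjustment). A quick numeric sketch (the function name is illustrative, not from the article):

```python
def to_loop_current(i_in_ma: float) -> float:
    """Map a 0-20 mA source current to the 4-20 mA loop range.

    The zero adjustment sets the 4-mA offset; the span adjustment
    sets the 16 mA / 20 mA = 0.8 gain.
    """
    if not 0.0 <= i_in_ma <= 20.0:
        raise ValueError("input current out of the 0-20 mA range")
    return 4.0 + 0.8 * i_in_ma

# 0 mA in -> 4 mA out, 10 mA -> 12 mA, 20 mA -> 20 mA
print(to_loop_current(0.0), to_loop_current(10.0), to_loop_current(20.0))
```

The calibration procedure for the circuit amounts to trimming this 4.0-mA offset (Rzero) and 0.8 gain (Rspan) against a reference current generator.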
Figure 1 A 0 to 20 mA current source to a 4 to 20 mA loop current converter module circuit. The SPAN & ZERO potentiometers can be multiturn PCB mountable types for precision adjustment. Q1 should have a heatsink.
Connect the 24-V power supply, digital ammeter, and a load resistor to J2 as shown in Figure 1.
Then, connect a current generator to the J1 connector. This current flows through R3 and is converted to a voltage.
The output of U1B is this voltage multiplied by (1 + (R10/R11)), a gain that is nearly unity. Let us call this voltage Vspan. The output of U3 is Vreg.
There are three currents at pin 3 of U1A: the zero-offset current drawn from Vreg through R2 and Rzero, the span current set by Vspan through Rspan, and the feedback current through R4, derived from the drop across the current-sense resistor R6. U1A and Q1 adjust the current through R6 in closed-loop control until these currents balance, so the output loop current is scaled by the ratio R4/R6. In this circuit, R4/R6 is chosen to be 99; the output loop current is therefore approximately 100 times the sum of the zero and span currents. U3 generates 5 VDC from the 24-VDC input for circuit operation.
R12 loads the regulator to draw a small current. Q2 and R1 limit the output current to around 26 mA.
How to calibrate this circuit
Connect a 24-VDC power supply, a 200-Ω load resistor, and a digital ammeter to J2 as shown in Figure 1. Connect a current generator to J1 as shown.
Set the current generator to zero. Adjust Rzero until Ioutput reads 4 mA.
Now set the current generator to 20 mA. Adjust Rspan until Ioutput reads 20 mA.
Repeat these two adjustments a few times until both readings hold. The current converter is now calibrated.
How to improve accuracy
This circuit gives an accuracy of <1%. To improve accuracy, select components with close tolerances.
You may introduce a 2.5-V reference IC after U3 and connect R2 and Rzero to this reference. In this case, R2 will be 50 kΩ and Rzero will be 20 kΩ.
Figure 2 illustrates how this current converter module is connected between the field transmitter and the control room’s DCS/PLC. Make sure to introduce a suitable surge suppressor in the line going to the field.
This module does not need a separate power supply; it can be installed in the field near the equipment that outputs 0 to 20 mA.

Figure 2 A block diagram that shows the connection of the current converter in process industries.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- A two-wire temperature transmitter using an RTD sensor
- Two-wire interface has galvanic isolation
- Low-cost NiCd battery charger with charge level indicator
- Single phase mains cycle skipping controller sans harmonics
- Two-wire remote sensor preamp
The post A 0-20mA source current to 4-20mA loop current converter appeared first on EDN.
Top 10 AC/DC power supplies

AC/DC power supply manufacturers have focused their latest designs on meeting the increased demand for higher efficiency and miniaturization in industrial and medical systems. A few of them are also leveraging wide-bandgap (WBG) technologies such as gallium nitride (GaN) and silicon carbide (SiC) to achieve gains in efficiency in their latest-generation power supplies.
It is understood that these power supplies need to meet a range of safety certifications for industrial and medical applications. They must also be rugged enough to operate in harsh environments.
Here are 10 top AC/DC power supplies introduced over the past year for industrial and medical applications. In some cases, these AC/DC power supplies meet certifications for both medical and industrial markets, allowing them to be used in both applications.
Medical and industrial power supplies
GaN technology is making its way into AC/DC power supplies for industrial and medical applications, helping to improve performance and shrink designs. Bel Fuse Inc. recently introduced its 65-W GaN-based AC/DC power supplies in a compact footprint. The latest additions to the Bel Power Solutions portfolio are the MDP65 for medical applications and the HDP65 for industrial and ITE, both offering up to 92% efficiency.
The series is available in two mechanical mount options: printed-circuit-board (PCB) mount or open frame. The compact package size of 1 × 3 inches offers 50% real-estate savings compared with 2 × 3-inch devices for increased power density in lower-power applications.
The MDP65 series is a cost-effective option for the medical market while providing critical safety. Suited for Type BF medical applications, it is compliant with the IEC/EN 60601-1 safety standard and features 2 × Means of Patient Protection (MOPP) isolation. The HDP65 devices meet safety standards IEC 62368-1, EN 62368-1, UL 62368-1, and C-UL (equivalent to CAN/CSA-C22.2 No.62368-1). Both series are safety-agency-certified, meeting the latest regulatory requirements with UL and Nemko approvals.
Both series output 65-W power, offer a universal, 90- to 264-VAC input voltage range, and deliver a high power density of 17.20 W/in.3. They also feature an operating temperature range of –20°C to 70°C, ensuring reliable performance even when incorporated into compact, sealed diagnostic or portable monitoring units where heat dissipation is a challenge, the company said.
Bel Fuse’s HDP65 and MDP65 power supplies (Source: Bel Fuse Inc.)
Claiming to set new standards in power density and on-board intelligence, XP Power has introduced its FLXPro series of chassis-mount AC/DC power supplies to address space constraints and the need for increased power. The FLXPro series is also designed with SiC/GaN, achieving efficiencies up to 93%, which helps to reduce system operating costs, cooling requirements, and system size.
The FLX1K3 fully digital configurable modular power supply delivers power levels of 1.3 kW at high-line conditions and 1 kW at low-line conditions with a power density of up to 23.2 W/in.3. It is housed in a compact 1U form factor, measuring 254.0 × 88.9 × 40.6 mm (10.0 × 3.50 × 1.6 inches) and is designed to simplify power systems in healthcare, industrial, semiconductor manufacturing, analytical instrumentation, automation, renewable energy systems, and robotics applications.
The FLXPro design features up to four customer-selected, inherently flexible output modules with selectable outputs from 9 VDC to 66 VDC and a wide adjustment range (+10% to –40%), which can be configured under live conditions to form part of a customer’s active control system, XP Power said. The output modules can be combined into multiple parallel and series configurations, and multiple FLXPro units can also be combined in parallel for higher-power applications.
XP Power said this flexibility optimizes application performance and control, addressing requirements for fixed and variable loads.
A unique feature of the FLXPro series is the fully digital architecture for both the input stage and output modules. It is the foundation for XP Power’s new iPSU Intelligent Power technology, which converts internal data into usable information for quick decisions that improve application safety and reduce operating costs.
The FLXPro series also provides extensive diagnostics, including a new Black Box Snapshot feature that reduces troubleshooting time after shutdown events by recording in-depth system status at, and prior to, shutdown; tri-color LEDs that indicate power supply health with a truth table incorporated on the chassis for simple interpretation without manuals or digital communications; and multiple internal temperature measurements for fast status checks through temperature diagnostics that drive intelligent fan control and overtemperature warnings and alarms.
FLXPro also features built-in user-defined digital controls, signals, alarms, and output controllability. Inputs, outputs, and firmware can be configured through the user interface or directly over direct digital communications. It supports ES1 isolated digital communications and uses PMBus over I2C for digital communications, enabling real-time control, monitoring, and data logging. The operating temperature range is –20°C to 70°C.
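PMBus telemetry values such as temperature and current are commonly encoded in the standard LINEAR11 format: an 11-bit two's-complement mantissa and a 5-bit two's-complement exponent packed into one 16-bit word, giving Y × 2^N. A minimal decoder sketch, assuming a raw word already read over I2C (the helper name and sample word are illustrative, not specific to the FLXPro):

```python
def decode_linear11(word: int) -> float:
    """Decode a PMBus LINEAR11 value: mantissa * 2**exponent.

    Bits 15..11 hold the 5-bit two's-complement exponent;
    bits 10..0 hold the 11-bit two's-complement mantissa.
    """
    exponent = (word >> 11) & 0x1F
    if exponent > 0x0F:          # sign-extend the 5-bit exponent
        exponent -= 0x20
    mantissa = word & 0x7FF
    if mantissa > 0x3FF:         # sign-extend the 11-bit mantissa
        mantissa -= 0x800
    return mantissa * 2.0 ** exponent

# Example word: exponent -1, mantissa 100 -> 50.0 (e.g., 50 degC)
raw = (0b11111 << 11) | 100
print(decode_linear11(raw))
```

Output voltages use the related LINEAR16 format instead, with the exponent taken from the VOUT_MODE register rather than the data word.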
XP Power’s FLXPro series (Source: XP Power)
Also addressing industrial and medical applications with an efficient and power-dense design is Murata Manufacturing Co. Ltd.’s PQC600 open-frame AC/DC power supplies. Target markets include hospital beds, dentist chairs, medical equipment, and industrial process machinery.
The industrial-grade PQC600 offers 600 W of power in a package that is less than 1U in height. It leverages the Murata Power Solutions transformer design with an optimized layout and package design. With a 600-W forced-air cooling design, it achieves an efficiency of 95% at full load. Key features include an optimized interleaved power-factor correction, back-end synchronous rectification, and a droop-current-sharing feature, enabling multiple units to be configured in parallel for greater power scalability.
The PQC600 is certified to the IEC 60601-1 Edition 3 medical safety standard, which includes 2 × MOPP from primary to secondary, 1 MOPP from the chassis to ground, and 1 MOPP from output to chassis. It also complies with the IEC 60601-1-2 4th Edition for electromagnetic compatibility (EMC) standards and is suitable for use with medical devices that have Type B or Type BF applied parts.
Also targeting the need for high efficiency and miniaturization is the NSP-75/100/150/200/320 series of AC/DC enclosed-type power supplies from Mean Well Enterprises Co. Ltd. The NSP series surpasses Mean Well’s RSP series, which has been on the market for over 10 years, with a higher cost-performance ratio. It offers a wider, 85- to 305-VAC input range; an extended temperature range of –40°C to 85°C, with full-load operation possible up to 60°C, making it suitable for harsher environments; and a footprint 28% to 46% smaller than the RSP series.
The NSP series offers high efficiency of 90% to 94.5% and low no-load power consumption (<0.3 W to 0.5 W), depending on the model, as well as 200% peak-power-output capability. Other features include short-circuit, overload, overvoltage, and overtemperature protection; programmable output voltage; ultra-low leakage of <350 µA; and operation at altitudes up to 5,000 meters.
The AC/DC power supplies also offer safety certifications in multiple industries, including ICT, industrial, medical, household, and green energy applications, and meet OVC III requirements. Safety certifications include CB/DEKRA/UL/RCM/BSMI/CCC/EAC/BIS/KC/CE/UKCA, and IEC/EN/UL 62368-1, 61010-1, 61558-1, 62477-1, and SEMI 47 for semiconductor equipment. They meet 2 × MOPP and medical BF-grade applications.
Mean Well’s NSP-320 power supply (Source: Mean Well Enterprises Co. Ltd.)
Medical power supplies
P-Duke Technology Co. Ltd. launched the MAD150 medical-grade AC/DC power supply series, capable of delivering up to 150 W of continuous output power and 200-W peak power for five seconds. The compact, 3 × 2-inch package is available in open-frame, enclosed, and DIN-rail options, with connection types including JST connectors, Molex connectors, and screw terminals.
Suited for most industries worldwide, the series features a universal input range from 85 to 264 VAC and supports DC input voltages from 88 to 370 VDC. The MAD150 series provides single-output options for medical devices at 12, 15, 18, 24, 28, 36, 48, and 54 VDC, with up to 7% output adjustability.
Designed for medical applications and suited for BF-type parts, it offers less than 100-μA patient leakage current, 2 × MOPP, and 4,000-VAC input-to-output isolation. Applications include portable medical devices, diagnostic equipment, monitoring equipment, hospital beds, and medical carts.
The devices generate less heat, offer an extended temperature range of –40°C to 85°C, and provide conversion efficiency up to 94%. They operate at altitudes up to 5,000 meters.
The MAD150 is certified to IEC/EN/ANSI/AAMI ES 60601-1 (Medical electrical equipment – Part 1: General requirements for basic safety and essential performance) and IEC/EN/UL 62368-1 (Audio/video, information and communication technology equipment – Part 1: Safety requirements).
Advanced Energy Industries Inc. has introduced the NCF425 series of 425-W cardiac floating (CF)-rated medical open-frame AC/DC power supplies with CF-level isolation and leakage current. These standard, off-the-shelf power supplies are certified to IEC 60601-1, simplifying isolation design, speeding time to market, and streamlining critical medical device development.
Advanced Energy said it is one of the few companies that provides standard, off-the-shelf CF-rated power products. The system-level CF rating is the most stringent medical device electrical safety classification, with certification needed for equipment that has direct contact with the heart or bloodstream, the company explained.
The company’s CF-rated portfolio was initially launched in September 2024 with the introduction of the NCF150, followed by the NCF250 and NCF600. The NCF series achieves a sub-10-µA leakage current and integrates the high levels of isolation required in critical medical devices.
This latest release offers additional options and helps reduce the number of isolation components required, translating into a smaller system size and lower cost.
The NCF family is designed to simplify thermal and electromagnetic interference (EMI) management, reduce system size and weight, and reduce the bill of materials. It also includes functionality typically provided at the system level, which reduces time and complexity in the development process, the company said.
The NCF425 is certified to the medical safety standard IEC 60601-1 and meets 2 × MOPP. Key features include a maximum output power of 425 W in a 3.5 × 6 × 1.5-inch form factor and a 5-kV defibrillator pulse protection. Applications include surgical generators, RF ablation, pulsed field ablation, cardiac-assist devices and monitors, and cardiac-mapping systems.
Advanced Energy’s NCF425 series (Source: Advanced Energy Industries Inc.)
Industrial power supplies
Delivering a high level of programmability and flexibility, XP Power’s 1.5-kW HDA1500 series suits a variety of applications across a range of industries. For example, the HDA1500 can be used in applications such as robotics, lasers, LED heating, and semiconductor manufacturing, providing benefits in digital control, communication, and status LEDs.
Rated for 1.5 kW of power with no minimum load requirement, the HDA1500 power supplies offer efficiency up to 93%, allowing for a more compact form factor as well as reducing operating costs. The HDA1500 units can be operated in parallel with active current sharing when more power is required in a rack.
Advanced digital control in power solutions has not always been widely available, according to XP Power, with the HDA1500 offering precise digital adjustment of both output current and output voltage from 0% to 105% for greater user flexibility.
The standard advanced digital control is key to the flexibility of the HDA1500, the company said. Driven by a graphical user interface, the power supply can be adjusted via several digital protocols, including PMBus, RS-485/-232, Modbus, and Ethernet, which also allow for easy integration into more advanced power control schemes.
The HDA1500 units operate from a universal single-phase mains input (90 to 264 VAC) and are reported to offer one of the widest single-rail output selections on the market, covering popular voltages between 12 VDC and 400 VDC in a portfolio of 11 units. At low-line operation, the power supplies can deliver more power than many competitive offerings, the company said.
With an operating temperature range of –25°C to 60°C, the units require no derating below 50°C. Other features include built-in protection, including overtemperature, overload, overvoltage, and short-circuit; a 5-VDC/1-A standby supply rail that keeps external circuitry alive when the main supply is powered down; and remote sense, particularly for applications in which power cables are extended.
The power supplies meet a range of ITE-related approvals, including EN55032 Class A and EN61000-3-x for emissions, as well as EN61000-4-x for immunity. Safety approvals include IEC/UL/EN62368-1 as well as all applicable CE and UKCA directives. Applications include test and measurement, factory automation, process control, semiconductor fabrication, and renewable energy systems.
XP Power’s HDA1500 series (Source: XP Power)
Targeting space-constrained industrial applications is the CBM300S series of 300-W fanless AC/DC power supplies from Cincon Electronics Co. Ltd. The series is housed in a brick package that measures 106.7 × 85.0 mm (4.2 × 3.35 inches) with an ultra-slim profile of 19.7 mm (0.78 inches). The device delivers 300-W-rated power with a peak power capability of 360 W.
The CBM300S operates with an input range of 90 to 264 VAC and accepts DC input ranging from 120 to 370 VDC. Seven output voltage options are available: 12, 15, 24, 28, 36, 48, and 54 VDC, all classified as Class I.
The series comes with safety approvals for IEC/UL/EN 62368-1 3rd edition and is EMC-compliant with EN 55032 Class B and CISPR/FCC Class B standards.
A key feature of the CBM300S is its exceptionally low leakage current of 0.75 mA maximum. It also delivers efficiency of up to 94% and operates across a wide temperature range of –40°C to 90°C, making it suitable for harsh environments.
This power supply can function at altitudes up to 5,000 meters and maintains a low no-load input power consumption of less than 0.5 W. The MTBF is rated at 240,000 hours. It also offers protection features, including output overcurrent, output overvoltage, overtemperature, and continuous short-circuit protections.
The CBM300S power supplies can be used in a variety of industrial/ITE applications, including automation equipment, test and measurement instruments, commercial equipment, telecom and network devices, and other industrial applications.
Recom Power GmbH introduced a series of flexible and highly efficient AC/DC power supplies in a small form factor for new energy applications. Applications include energy management and monitoring and powering actuators, as well as general-purpose applications.
The 20-W RAC20NE-K/277 series is available in board-mount or open-frame options. The board-mount, encapsulated power supplies measure 52.5 × 27.6 × 23.0 mm, and the open-frame devices with Molex connections measure 80.0 × 23.8 × 22.5 mm.
AC/DC power supplies increasingly must operate over nominal supply values from 100 VAC to 277 VAC, Recom said, and the RAC20NE-K/277 matches this requirement with 20 W available at optional 12-, 24-, or 36-VDC outputs. The series is available in encapsulated versions with constant-voltage or constant-current-limiting characteristics and in a constant-voltage open-frame type with 12- or 24-VDC output.
The RAC20NE-K/277 series is highly efficient, Recom said, allowing reliable operation at full load to 60°C ambient and to 85°C with derating. It also offers <100-mW no-load power consumption.
The parts are Class II–insulated and OVC III–rated up to 5,000 meters and meet EN 55032 Class B EMC requirements with a floating or grounded output. Standby and no-load power dissipation meet eco-design requirements.
Recom’s RAC20NE-K/277 (Source: Recom Power GmbH)
If you’re looking for greater flexibility with more options, TDK Corp.’s ZWS-C series of 10- to 50-W industrial power supplies offers new mounting and protection options. The TDK-Lambda brand ZWS-C series of 10-, 15-, 30-, and 50-W-rated industrial AC/DC power supplies was initially launched in an open-frame configuration. Four additional options are now available: a metal L-bracket (with or without a cover), pins for PCB mounting, and two-sided board coating for all voltage and power levels.
These options can provide additional operator protection, lower the cost of wiring harnesses, or reduce the impact of dust and contamination in harsh environments, TDK said.
The ZWS-C series is available with 5-, 12-, 15-, 24-, and 48-V (50 W only) output voltages. The ZWS10C and ZWS15C models measure 63.5 × 45.7 × 22.1 mm, the ZWS30C package measures 76.2 × 50.8 × 24.2 mm, and the ZWS50C footprint measures 76.2 × 50.8 × 26.7 mm. The operating temperature with convection cooling and standard mounting ranges from –10°C to 70°C, derating linearly to 50% load between 50°C and 70°C.
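The derating rule quoted above (full load to 50°C, falling linearly to 50% load at 70°C with convection cooling) can be expressed as a small helper; the function name is illustrative:

```python
def zwsc_load_fraction(ambient_c: float) -> float:
    """Allowed load fraction vs. ambient with convection cooling.

    Full load up to 50 degC, derating linearly to 50% at 70 degC,
    within the quoted -10 to 70 degC operating range.
    """
    if not -10.0 <= ambient_c <= 70.0:
        raise ValueError("outside the -10 to 70 degC operating range")
    if ambient_c <= 50.0:
        return 1.0
    return 1.0 - 0.5 * (ambient_c - 50.0) / 20.0

# 40 degC -> 100% load, 60 degC -> 75%, 70 degC -> 50%
print(zwsc_load_fraction(40.0), zwsc_load_fraction(60.0), zwsc_load_fraction(70.0))
```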
The power supplies can operate at full load with an external airflow of 0.8 m/s, and no-load power consumption is typically less than 0.3 W. Other features include a 3-kVAC input-to-output, 2-kVAC input-to-ground, and 750-VAC output-to-ground (Class I) isolation. The models meet EN55011/EN55032-B conducted and radiated EMI in either Class I or Class II (double-insulated) construction, without the need for external filtering or shielding.
All models are also certified to the IEC/UL/CSA/EN62368-1 for AV, information, and communication equipment standards; EN60335-1 for household electrical equipment; IEC/EN61558-1; and IEC/EN61558-2-16. They also comply with IEC 61000-3-2 (harmonics) and IEC 61000-4 (immunity) and carry the CE and UKCA marks for the Low Voltage, EMC, and RoHS Directives.
Thanks to electrolytic capacitor lifetimes of up to 15 years, the ZWS-C models can be used in factory automation, robotics, semiconductor fabrication, and test and measurement equipment.
TDK’s ZWS15C model (Source: TDK Corp.)
The post Top 10 AC/DC power supplies appeared first on EDN.
SiC power modules gain low-resistance options

SemiQ expands its 1200-V Gen3 SiC MOSFET family with SOT-227 modules offering on-resistance values of 7.4 mΩ, 14.5 mΩ, and 34 mΩ. GCMS models are co-packaged with a Schottky barrier diode (SBD), while GCMX types rely on the intrinsic body diode.

The modules are designed for medium-voltage, high-power systems such as battery chargers, photovoltaic inverters, server power supplies, and energy storage units. Each device undergoes wafer-level gate-oxide burn-in testing above 1400 V and avalanche testing to 800 mJ (330 mJ for 34-mΩ types).

The 7.4-mΩ GCMX007C120S1-E1 reduces switching losses to 4.66 mJ (3.72 mJ turn-on, 0.94 mJ turn-off) and features a body-diode reverse-recovery charge of 593 nC. Junction-to-case thermal resistance ranges from 0.23 °C/W for the 7.4-mΩ device to 0.70 °C/W for the 34-mΩ module.
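Per-cycle switching energies translate into dissipated power in proportion to switching frequency: P_sw = (E_on + E_off) × f_sw. A quick check using the GCMX007C120S1-E1 figures above, evaluated at an assumed 20-kHz switching frequency (the frequency is an example, not a datasheet operating point):

```python
def switching_loss_w(e_on_mj: float, e_off_mj: float, f_sw_hz: float) -> float:
    """Switching-loss power in watts from per-cycle energies in millijoules."""
    return (e_on_mj + e_off_mj) * 1e-3 * f_sw_hz

# 3.72 mJ turn-on + 0.94 mJ turn-off (4.66 mJ total) at 20 kHz
print(switching_loss_w(3.72, 0.94, 20e3))
```

This is why lower total switching energy matters so directly in hard-switched converters: the loss scales linearly with frequency, so halving E_on + E_off halves P_sw at any given f_sw.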
All models have a rugged, isolated backplate for direct heat-sink mounting. Samples and volume pricing are available upon request. For more information about the 1200-V Gen3 SiC MOSFET modules, click here.
The post SiC power modules gain low-resistance options appeared first on EDN.
SiC modules boost power cycling performance

Wolfspeed’s YM 1200-V six-pack power modules deliver up to 3× the power cycling capability of comparable devices in the same industry-standard footprint. The company reports that the modules also provide 15% higher inverter current.

Built with Gen 4 SiC MOSFETs, the modules are suited for e-mobility propulsion systems, automotive traction inverters, and hybrid electric vehicles. Their YM package incorporates a direct-cooled pin fin baseplate, sintered die attach, hard epoxy encapsulant, and copper clip interconnects. An optimized power terminal layout minimizes package inductance, reducing overshoot voltage and lowering switching losses.
In addition to their 1200-V blocking voltage, YM module variants offer current ratings of 700 A, 540 A, and 390 A, with corresponding RDS(on) values at 25°C of 1.6 mΩ, 2.1 mΩ, and 3.1 mΩ. According to Wolfspeed, the modules achieve a 22% improvement in RDS(on) at 125°C over the previous generation and reduce turn-on energy by roughly 60% across operating temperatures. An integrated soft-body diode further cuts switching losses by 30% and VDS overshoot by 50% during reverse recovery compared to the prior generation.
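Conduction loss per device scales as I_rms² × R_DS(on), which is why the 22% high-temperature R_DS(on) improvement is significant. A quick comparison of the three YM variants using their 25°C ratings from the text (operating each part at its full current rating is an illustrative worst case, not a recommendation):

```python
def conduction_loss_w(i_rms_a: float, rds_on_mohm: float) -> float:
    """Per-device conduction loss: I_rms^2 * R_DS(on), with R in milliohms."""
    return i_rms_a ** 2 * rds_on_mohm * 1e-3

# 25 degC pairings from the text: (current rating in A, RDS(on) in mOhm)
for amps, mohm in [(700, 1.6), (540, 2.1), (390, 3.1)]:
    print(f"{amps} A / {mohm} mOhm -> {conduction_loss_w(amps, mohm):.1f} W")
```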
The 1200‑V SiC six‑pack power modules are now available for customer sampling and will reach full distributor availability in early 2026.
The post SiC modules boost power cycling performance appeared first on EDN.
Power switch offers smart overload control

Joining ST’s lineup of safety switches, the IPS1050LQ is a low-side switch featuring smart overload protection with configurable inrush and current limits. Three pins allow selection between static and dynamic modes and set the operating current limit. In dynamic mode, connecting a capacitor enables an initial inrush of up to 25 A, which then steps down in stages to the programmed limit.

The output stage of the IPS1050LQ supports up to 65 V, making it suitable for industrial equipment such as PLCs and CNC machines. Its typical on-resistance of just 25 mΩ ensures energy-efficient switching for resistive, capacitive, or inductive loads, with active clamping enabling fast demagnetization of inductive loads at turn-off. Comprehensive safety features include undervoltage, overvoltage, overload, short-circuit, ground disconnection, VCC disconnection, and an overtemperature indicator pin that provides thermal protection.
Now in production, the IPS1050LQ in a 6×6-mm QFN32L package starts at $2.19 each in 1000-unit quantities.
The post Power switch offers smart overload control appeared first on EDN.
Rad-tolerant MCUs cut space-grade costs

Vorago has announced four rad-tolerant MCUs for LEO missions, which it says cost far less than conventional space-grade components. Part of the VA4 series of rad-hardened MCUs, these new chips provide an economical alternative to high-risk upscreened COTS components.

Based on Arm Cortex-M4 processors, the Radiation-Tolerant by Design (RTbD) MCUs are priced nearly 75% lower than Vorago’s HARDSIL radiation-hardened products. The RTbD lineup includes the extended-mission VA42620 and VA42630, as well as the cost-optimized VA42628 and VA42629 for short- or lower-orbit missions. By embedding radiation protection directly into the silicon, these MCUs tackle the reliability challenges of satellite constellations and provide a more efficient solution than conventional multi-chip redundancy approaches.
All four MCUs provide >30 krad(Si) TID tolerance, with the VA42630 integrating 256 KB of nonvolatile memory. Extended-mission devices are designed for harsher orbits and primary flight control, while the cost-optimized MCUs target thermal regulation and localized power management. These chips can be dropped into existing architectures with no redesign, enabling rapid deployment.
Vorago will begin shipping its first rad-tolerant chips in early Q1 2026.
The post Rad-tolerant MCUs cut space-grade costs appeared first on EDN.
Module streamlines smart home device connectivity

The KGM133S, the first in a range of Matter over Thread modules from Quectel, enables seamless interoperability for smart home devices like door locks, sensors, and lighting. Powered by Silicon Labs’ EFR32MG24 wireless chip, the module uses Matter 1.4 to connect devices across multiple ecosystems, including Apple Home, Google Home, Amazon Alexa, and Samsung SmartThings. Thread 1.4 support ensures compatibility with IPv6 addressing.

The KGM133S features an Arm Cortex-M33 processor running at up to 78 MHz, with 256 KB of SRAM and up to 3.5 MB of flash memory. With a receive sensitivity better than -105 dBm and a maximum transmit power of 19.5 dBm, the module ensures reliable signal transmission. In addition to Matter over Thread, the KGM133S also supports Zigbee 3.0 and Bluetooth LE 6.0 connectivity.
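The quoted transmit power and receive sensitivity set the link budget, i.e., the maximum path loss a link between two such radios can tolerate: P_tx − sensitivity. A quick check (antenna gains and margins ignored):

```python
def link_budget_db(tx_power_dbm: float, sensitivity_dbm: float) -> float:
    """Maximum allowable path loss in dB for a closed link, ignoring antenna gain."""
    return tx_power_dbm - sensitivity_dbm

# 19.5 dBm transmit power against -105 dBm sensitivity
print(link_budget_db(19.5, -105.0))
```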
Two LGA packaging options are available for the KGM133S to accommodate both compact and slim terminal designs. The first option (12.5×13.2×2.2 mm) features a fourth-generation IPEX or pin antenna, while the second option (12.5×16.6×2.2 mm) comes with an onboard PCB antenna.
A timeline for availability of the KGM133S wireless module was not disclosed at the time of this announcement.
The post Module streamlines smart home device connectivity appeared first on EDN.
Compute: Powering the transition from Industry 4.0 to 5.0

Industry 4.0 has transformed manufacturing, connecting machines, automating processes, and changing how factories think and operate. But its success has revealed a new constraint: compute. As automation, AI, and data-driven decision-making scale exponentially, the world’s factories are facing a compute challenge that extends far beyond performance. The next industrial era—Industry 5.0—will bring even more compute demand as it builds on the IoT to improve collaboration between humans and machines, industry, and the environment.
Progress in this next wave of industrial development is dependent on advances at the semiconductor level. Advances in chip design, materials science, and process innovation are essential. Alongside this, there needs to be a reimagining of how we power industrial intelligence, not just in terms of the processing capability but in how that capability is designed, sourced, and sustained.
Rethinking compute for a connected future
The exponential rise of data and compute has placed intense pressure on the chips that drive industrial automation. AI-enabled systems, predictive maintenance, and real-time digital twins all require compute to move closer to where data is created: at the edge. However, edge environments come with tight energy, size, and cooling constraints, creating a growing imbalance between compute demand and power availability.
AI and digital triplets, which build on traditional digital twin models by leveraging agentic AI to continuously learn and analyze data in the field, have moved the requirement for processing to be closer to where the data is created. In use cases such as edge computing, where computing takes place within sensing and measuring devices directly, this can be intensive. That decentralization introduces new power and efficiency pressures on infrastructure that wasn’t designed for such intensity.
The result is a growing imbalance between performance and the limitations of semiconductor manufacturing. Businesses must think more broadly about energy consumption, heat management, power balance, and raw-materials sourcing. Sustainability can no longer be treated as an unwarranted cost or a compliance exercise; it is becoming a new indicator of competitiveness, where energy-efficient, low-emission compute enables manufacturers to meet growing data reliance without exceeding environmental limits.
Businesses must take these challenges seriously, as the demand for compute will only escalate with Industry 5.0. AI will become more embedded, and the data it relies on will grow in scale and sophistication.
If manufacturing designers dismiss these issues, they run the risk of bottlenecking their productivity with poor efficiency and sustainability. This means that when chip designers optimize for Industry 5.0 applications, they should consider responsibility, efficiency, and longevity alongside performance and cost. The challenge is no longer just “can we build faster systems?” It’s now “can we build systems that endure environmentally, economically, and geopolitically?”
Innovation starts at the material level
The semiconductor revolution of Industry 5.0 won’t be defined solely by faster chips but by the science and sustainability embedded in how those chips are made. For decades, semiconductor progress has been measured in nanometers; the next leap forward will be measured in materials. Advances in compounds such as silicon carbide and gallium nitride are improving chip performance and transforming how the industry approaches sustainability, supply chain resilience, and sovereignty.
Advances in chip design, materials science, and process innovation are essential in the next wave of industrial development. (Source: Adobe Stock)
These materials allow for higher power efficiency and longer lifespans, reducing energy consumption across industrial systems. Combined with cleaner fabrication techniques such as ambient temperature processing and hydrogen-based chemistries, they mark a significant step toward sustainable compute. The result is a new paradigm where sustainability no longer comes at an artificial premium but is an inherent feature of technological progress.
Process innovations, such as ambient temperature fabrication and green hydrogen, offer new ways to reduce environmental footprint while improving yield and reliability. Beyond the technology itself and material innovations, more focus should be placed on decentralization and alternative sources of raw materials. This will empower businesses and the countries they operate in to navigate geopolitical and supply chain challenges.
Collaboration is the new competitive edge
The compute challenge that Industry 5.0 presents isn’t an isolated problem to solve. The demand and responsibility for change doesn’t lie with a single company, government, or research body. It requires an ecosystem mindset, where collaboration is encouraged, replacing competition in key areas of innovation and infrastructure.
Collaboration between semiconductor manufacturers, industrial original equipment manufacturers, policymakers, and researchers is important to accelerate energy-efficient design and responsible sourcing. Interconnected and shared platforms within the semiconductor ecosystem de-risk tech investments. This ensures that the gains in sustainability and resilience extend across the entire industrial ecosystem, not just to individual players.
The next era of industrial progress will see the most competitive organizations collaborate toward shared innovation and progress.
Powering compute in the Industry 5.0 transition
The evolution from Industry 4.0 to Industry 5.0 is more than a technological upgrade; it represents a change in attitude around how digital transformation is approached in industrial settings. This new era will see new approaches to technological sustainability, sovereignty, and collaboration that prioritize productivity and speed. Compute will be the central driver of this transition. Materials, processes, and partnerships will determine whether the industrial sector can grow without outpacing its own energy and sustainability limits.
Industry 5.0 presents a vision of industrialization that gives back more than it takes, amplifying both productivity and possibility. The transition is already underway. Now, businesses need to ensure innovation, efficiency, and resilience evolve together to power a truly sustainable era of compute.
The post Compute: Powering the transition from Industry 4.0 to 5.0 appeared first on EDN.
A holiday shopping guide for engineers: 2025 edition

As of this year, EDN has consecutively published my odes to holiday-excused consumerism for more than a half-decade straight (and intentionally ahead of Black Friday, if you hadn’t already deduced), now nearing ten editions in total. Here are the 2019, 2020, 2021, 2022, 2023, and 2024 versions; I skipped a few years between 2014 and its successors.
As usual, I’ve included up-front links to prior-year versions of the Holiday Shopping Guide for Engineers because I’ve done my best here to not regurgitate any past recommendations; the stuff I’ve previously suggested largely remains valid, after all. That said, it gets increasingly difficult each year not to repeat myself! And as such, I’ve “thrown in the towel” this year, at least to some degree…you’ll find a few repeat categories this time, albeit with new product suggestions within them.
Without any further ado, and as usual, ordered solely in the order in which they initially came out of my cranium…
A Windows 11-compatible (or alternative O/S-based) computer
Microsoft’s general support for Windows 10 ended nearly a month ago (on October 14, to be exact) as I’m writing these words. For you Windows users out there, options exist for extending Windows 10 security updates (ESUs) for another year on consumer-licensed systems, both paid (spending $30 or redeeming 1,000 Microsoft Rewards points, with both ESU options covering up to 10 devices) and free (after syncing your PC settings).
If you’re an IT admin, the corporate license ESU program specifics are different; see here. And, as I covered in hands-on detail a few months back, (unsanctioned) options also exist for upgrading officially unsupported systems to Windows 11, although I don’t recommend relying on them for long-term use (assuming the hardware-hack attempt is successful at all, that is). As I wrote back in June:
The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.
You could also convert your existing PC over to run a different O/S, such as ChromeOS Flex (originally Neverware’s CloudReady, then acquired and now maintained by Google) or a Linux distro of your preference. For that matter, you could also just “get a Mac”. That said, any of these options will likely also compel conversions to new apps for the new O/S foundation. The aggregate learning curve from all these software transitions can end up being a “bridge too far”.

Instead, I’d suggest you just “bite the bullet” and buy a new PC for yourself and/or others for the holidays, before CPUs, DRAM, SSDs, and other building block components become even more supply-constrained and tariff-encumbered than they are now, and to ease the inevitable eventual transition to Windows 11.
Then donate your old hardware to charity for someone else to O/S-convert and extend its useful life. That’s what I’ll be doing, for example, with my wife’s Dell Inspiron 5570, which, as it turns out, wasn’t Windows 11-upgradeable after all.
Between now and next October, when the Windows 10 ESU runs out (unless the deadline gets extended again), we’ll replace it with the Dell 16 Plus (formerly Inspiron 16 Plus) in the above photo.
An AI-enhanced mobile device
The new Dell laptop I just mentioned, which we’d bought earlier this summer (ironically just prior to Microsoft’s unveiling of the free Windows 10 ESU option), is compatible with Microsoft’s Copilot+ specifications for AI-enhanced PCs by virtue of the system’s Intel Core Ultra 7 256V CPU with an integrated 47 TOPS NPU.
That said, although its support for local (vs conventional cloud) AI inference is nice from a future-proofing standpoint, there’s not much evidence of compelling on-client AI benefits at this early stage, save perhaps for low-latency voice interface capabilities (not to mention broader uninterrupted AI-based functionality when broadband goes down).
The current situation is very different when it comes to fully mobile devices. Yes, I know, laptops also have built-in batteries, but they often still spend much of their operating life AC-tethered, and anyway, their battery packs are much beefier than the ones in the smartphones and tablets I’m talking about here.
Local AI processing is not only faster than to-and-back-from-cloud roundtrip delays (particularly lengthy over cellular networks), but it also doesn’t gobble up precious limited-monthly-allocation data. Then there’s the locally stored-and-processed data enhanced privacy factor to consider, along with the oft-substantial power saving accrued by not needing to constantly leverage the mobile device’s Wi-Fi and cellular data subsystems.
You may indeed believe (as, full disclosure, I do) that AI features are of limited-at-best benefit at the moment, at least for the masses. But I think we can also agree that ongoing, widespread, and intensifying industry attention on AI will sooner or later cultivate compelling capabilities.
That’s why I’ve showcased mobile devices’ AI attributes in recent years’ announcement coverage (such as that of Google’s Pixel 10 series shown in the photo above), and why I recommend them, again from a future-proofing angle if nothing else, if you (and/or yours) are due for a gadget upgrade this year. Meanwhile, I’ll soldier on with my Pixel 7s…
Audio education resources
As regular readers likely already realize, audio has received particular showcase attention in my blog posts and teardowns this past year-plus (a trend which will admittedly also likely extend into at least next year). This provided, among other things, an opportunity for me to refresh and expand my intellectual understanding of the topic.
I kept coming across references to Bob Cordell, mentioning both his informative website and his classic tomes, Designing Audio Power Amplifiers (make sure you purchase the latest 2nd edition, published in 2019, whose front cover is shown above) and the newer Designing Audio Circuits and Systems, released just last year.
Fair warning: neither book is inexpensive, whether in hardback or even paperback, and neither is available in a lower-priced Kindle version, either. That said, based on both reviews I’ve seen from others and my own impressions, they’re well worth the investment.
Another worthwhile read, this time complete with plenty of humor scattered throughout, is Schiit Happened: The Story of the World’s Most Improbable Start-Up, in this case available in both inexpensive paperback and even more cost-effective Kindle formats. Written by Jason Stoddard and Mike Moffat, the founders of Schiit Audio, whom I’ve already mentioned several times this year, it’s also available for free on the Head-Fi Forum, where Jason has continued his writing. But c’mon, folks, drop $14.99 (or $4.99) to support a scrappy U.S. audio success story.
As far as audio-related magazines go, I first off highly recommend a subscription to audioXpress. Generalist electronics design publications like EDN are great, of course, but topic-focused coverage like that offered by audioXpress for audio design makes for an effective information companion.
On the other end of the product development chain, where gear is purchased and used by owners, there’s Stereophile, for which I’ve also been a faithful reader for more years than I care to remember. And as for the creation, capture, mastering, and duplication of the music played on those systems, I highly recommend subscriptions to Sound on Sound and, if your budget allows for a second publication, Recording. Consistently great stuff, all of it.
Finally, as an analogy to my earlier EDN-plus-audioXpress pairing, back in 2021 I recommended memberships to generalist ACM and/or IEEE professional societies. This time, I’ll supplement that suggestion with an audio-focused companion, the AES (Audio Engineering Society).
Back when I was a full-time press guy with EDN, I used to be able to snag complimentary admission to the twice-yearly AES conventions along with other organization events, which were always rich sources of information and networking connection cultivation.
To my dying day, I will always remember one particularly fascinating lecture, which correlated Ludwig van Beethoven’s progressive hearing degradation and its (presenter-presumed) emotional and psychological effects to the evolution of the music styles that he composed over time. Then there were the folks from Fraunhofer that I first-time met at an AES convention, kicking off a longstanding professional collaboration. And…
Audio gear
For a number of years, my Drop- (formerly Massdrop)-sourced combo of the Drop x Grace Design Standard DAC and Objective 2 Headphone Amp Desktop Edition afforded me a sonically enhanced alternative to my computer’s built-in DAC and amp for listening to music over plugged-in headphones and powered speakers:

As I’ve “teased” in a recent writeup, however, I recently upgraded this unbalanced-connection setup to a four-component Schiit stack, complete with a snazzy aluminum-and-acrylic rack:

Why?
Part of the reason is that I wanted to sonically experience a tube-based headphone amp for myself, both in an absolute sense and relative to solid-state Schiit amplifiers also in my possession.
Part of it is that all these Schiit-sourced amps also integrate preamp outputs for alternative-listening connection to an external power amp-plus-passive speaker set:

Another part of the reason is that I’ve now got a hardware equalizer as an alternative to software EQ, the latter (obviously) only relevant for computer-sourced audio. And relatedly, part of it is that I’ve also now got a hardware-based input switcher, enabling me to listen to audio coming not only from my PC but also from another external source. What source, you might ask?
Why, one of the several turntables that I also acquired and more broadly pressed into service this past year, of course!

I’ve really enjoyed reconnecting with vinyl and accumulating an LP collection (although my wallet has admittedly taken a beating in the process), and I encourage you (and yours) to do the same. Stand by for a more detailed description of my expanded office audio setup, including its balanced “stack” counterpart, in an upcoming dedicated piece to be published shortly.
For sonically enhancing the rest of the house, where a computer isn’t the primary audio source, companies such as Bluesound and WiiM sell various all-in-one audio streamers, both power amplifier-inclusive (for use with traditional passive speakers) and amp-less (for pairing with powered speakers or intermediary connection to a standalone external amp).
A Bluesound Node N130, for example, has long resided at the “man cave” half of my office:

And the class D amplifier inside the “Pro” version of the WiiM Amp, which I plan to press into service soon in my living room, even supports the PFFB feature I recently discussed:

(Apple-reminiscent Space Gray shown and self-owned; Dark Gray and Silver also available)
More developer hardware
Here’s the other area where, as I alluded to in the intro, I’m going to overlap a bit with a past-year Holiday Shopping Guide. Two years ago, I recommended some developer kits from both the Raspberry Pi Foundation and NVIDIA, including the latter’s then-$499 Jetson Orin Nano:

It’s subsequently been “replaced”, as well as notably price-reduced, by the Orin Nano Super Developer Kit at $249.
Why the quotes around “replaced”? That’s because, as good news for anyone who acted on my earlier recommendation, the hardware’s exactly the same as before: “Super” is solely reflective of an enhanced software suite delivering claimed generative AI performance gains of up to 1.7x, and freely available to existing Jetson Orin Nano owners.
More recently, last month, NVIDIA unveiled the diminutive $3,999 DGX Spark:

with compelling potential, both per company claims and initial hands-on experiences:
As a new class of computer, DGX Spark delivers a petaflop of AI performance and 128GB of unified memory in a compact desktop form factor, giving developers the power to run inference on AI models with up to 200 billion parameters and fine-tune models of up to 70 billion parameters locally. In addition, DGX Spark lets developers create AI agents and run advanced software stacks locally.
albeit along with, it should also be noted, an irregular development history and some troubling early reviews. The system was initially referred to as Project DIGITS when unveiled publicly at the January 2025 CES. Its application processor, originally referred to as the N1X, is now renamed the GB10. Co-developed by NVIDIA (who contributed the Grace Blackwell GPU subsystem) and MediaTek (who supplied the multi-core CPU cluster and reportedly also handled full SoC integration duties), it was originally intended for—and may eventually still show up in—Arm-based Windows PCs.
But repeated development hurdles have (reportedly) delayed the actualization of both SoC and system shipment aspirations, and lingering functional bugs preclude Windows compatibility (therefore explaining the DGX Spark’s Linux O/S foundation).
More generally, just a few days ago as I write these words, MAKE Magazine’s latest issue showed up in my mailbox, containing the most recent iteration of the publication’s yearly “Guide to Boards” insert. Check it out for more hardware ideas for your upcoming projects.
A smart ring
Regular readers have likely also noticed my recent series of writeups on smart rings, comprising both an initial overview and subsequent reviews based on fingers-on evaluations.
As I write these words in mid-November, Ultrahuman’s products have been pulled from the U.S. market due to patent-infringement rulings, although they’re still available elsewhere in the world. RingConn conversely concluded a last-minute licensing agreement, enabling ongoing sales of its devices worldwide, including in the United States.
And as for the instigator of the patent infringement actions, market leader Oura, my review of the company’s Gen3 smart ring will appear at EDN shortly after you read these words, with my eval of the latest-generation Ring 4 (shown above) to follow next month.
Smart rings’ Li-ion batteries, like those of any device with fully integrated cells, won’t last forever, so you need to go into your experience with one of them eyes-open to the reality that it’ll ultimately be disposable (or, in my case, transform into a teardown project).
That said, the technology is sufficiently mature at this point that I feel comfortable recommending them to the masses. They provide useful health insights, even though they tend to notably overstate step counts for those who use computer keyboards a lot. And unlike a smart watch or other wrist-based fitness tracker, you don’t need to worry (so much, at least) about color- and style-coordinating a smart ring with the rest of your outfit ensemble.
(Not yet a) pair of smart glasses
Conversely, alas, I still can’t yet recommend smart glasses to anyone but early adopters (like me; see above). Meta’s latest announced device suite, along with various products from numerous (and a growing list of) competitors, suggests that this product category is still relatively immature, therefore dynamic in its evolutionary nature. I’d hate to suggest something for you to buy for others that’ll be obsolete in short order. For power users like you, on the other hand…
Happy holidays!
And with that, having just passed through 2,500 words, I’ll close here. Upside: plenty of additional presents-to-others-and/or-self ideas are now littering the cutting-room floor, so I’ve already got no shortage of topics for next year’s edition! Until then, sound off in the comments, and happy holidays!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- A holiday shopping guide for consumer tech
- A holiday gift wish list for 2020
- A holiday shopping guide for engineers: 2021 edition
- A holiday shopping guide for engineers: 2022 edition
- A holiday shopping guide for engineers: 2023 edition
- A holiday shopping guide for engineers: 2024 edition
The post A holiday shopping guide for engineers: 2025 edition appeared first on EDN.
Pulse-density modulation (PDM) audio explained in a quick primer

Pulse-density modulation (PDM) is a compact digital audio format used in devices like MEMS microphones and embedded systems. This compact primer eases you into the essentials of PDM audio.
Let’s begin by revisiting a ubiquitous PDM MEMS microphone module based on MP34DT01-M—an omnidirectional digital MEMS audio sensor that continues to serve as a reliable benchmark in embedded audio design.

Figure 1 A MEMS microphone mounted on a minuscule module detects sound and produces a 1-bit PDM signal. Source: Author
When properly implemented, PDM can digitally encode high-quality audio while remaining cost-effective and easy to integrate. As a result, PDM streams are now widely adopted as the standard data output format for MEMS microphones.
On paper, the anatomy of a PDM microphone boils down to a few essential building blocks:
- MEMS microphone element, typically a capacitive MEMS structure, unlike the electret capsules found in analog microphones.
- Analog preamplifier boosts the low-level signal from the MEMS element for further processing.
- PDM modulator converts the analog signal into a high-frequency, 1-bit pulse-density modulated stream, effectively acting as an integrated ADC.
- Digital interface logic handles timing, clock synchronization, and data output to the host system.
Next is the function block diagram of T3902, a digital MEMS microphone that integrates a microphone element, impedance converter amplifier, and fourth-order sigma-delta (Σ-Δ) modulator. Its PDM interface enables time multiplexing of two microphones on a single data line, synchronized by a shared clock.

Figure 2 Functional block diagram outlines the internal segments of the T3902 digital MEMS microphone. Source: TDK
The analog signal generated by the MEMS sensing element in a PDM microphone—sometimes referred to as a digital microphone—is first amplified by an internal analog preamplifier. This amplified signal is then sampled at a high rate and quantized by the PDM modulator, which combines the processes of quantization and noise shaping. The result is a single-bit output stream at the system’s sampling rate.
Noise shaping plays a critical role by pushing quantization noise out of the audible frequency range, concentrating it at higher frequencies where it can be more easily filtered out. This ensures relatively low noise within the audio band and higher noise outside it.
The microphone’s interface logic accepts a master clock signal from the host device—typically a microcontroller (MCU) or a digital signal processor (DSP)—and uses it to drive the sampling and bitstream transmission. The master clock determines both the sampling rate and the bit transmission rate on the data line.
Each 1-bit sample is asserted on the data line at either the rising or falling edge of the master clock. Most PDM microphones support stereo operation by using edge-based multiplexing: one microphone transmits data on the rising edge, while the other transmits on the falling edge.
During the opposite edge, the data output enters a high-impedance state, allowing both microphones to share a single data line. The PDM receiver is then responsible for demultiplexing the combined stream and separating the two channels.
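In software terms, this demultiplexing reduces to splitting the captured bit sequence by clock edge. Here is a minimal sketch (the helper name and interleaving convention are illustrative assumptions, not tied to any particular part): it assumes the receiver has already sampled the shared data line on every clock edge into one interleaved list, with rising-edge bits at even indices.

```python
def demux_stereo_pdm(interleaved_bits):
    """Split an interleaved stereo PDM capture into two channels.

    Assumes the receiver sampled the data line on every clock edge,
    so even indices hold the rising-edge microphone's bits and odd
    indices hold the falling-edge microphone's bits.
    """
    left = interleaved_bits[0::2]   # rising-edge channel
    right = interleaved_bits[1::2]  # falling-edge channel
    return left, right

# Example: 8 interleaved samples yield 4 bits per channel
left, right = demux_stereo_pdm([1, 0, 1, 1, 0, 0, 1, 1])
```

In a real system this splitting is usually done in hardware by the PDM receiver or I²S/PDM peripheral, but the principle is the same.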
As a side note, the aforesaid microphone module is hardwired to treat data as valid when the clock signal is low.
The magic behind 1-bit audio streams
Now, back to basics. PDM is a clever way to represent a sampled signal using just a stream of single bits. It relies on delta-sigma conversion, also known as sigma-delta, and it’s the core technology behind many oversampling ADCs and DACs.
At first glance, a one-bit stream seems hopelessly noisy. But here is the trick: by sampling at very high rates and applying noise-shaping techniques, most of that noise is pushed out of the audible range—above 20 kHz—where it no longer interferes with the listening experience. That is how PDM preserves audio fidelity despite its minimalist encoding.
There is a catch, though. You cannot properly dither a 1-bit stream, which means a small amount of distortion from quantization error is always present. Still, for many applications, the trade-off is worth it.
Diving into PDM conversion and reconstruction, we begin with the direct sampling of an analog signal at a high rate—typically several megahertz or more. This produces a pulse-density modulation stream, where the density of 1s and 0s reflects the amplitude of the original signal.

Figure 3 An example that renders a single cycle of a sine wave as a digital signal using pulse density modulation. Source: Author
Naturally, the encoding relies on 1-bit delta-sigma modulation: a process that uses a one-bit quantizer to output either a 1 or a 0 depending on the instantaneous amplitude. A 1 represents a signal driven fully high, while a 0 corresponds to fully low.
And, because the audio frequencies of interest are much lower than the PDM’s sampling rate, reconstruction is straightforward. Passing the PDM stream through a low-pass filter (LPF) effectively restores the analog waveform. This works because the delta-sigma modulator shapes quantization noise into higher frequencies, which the low-pass filter attenuates, preserving the desired low-frequency content.
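To make this concrete, here is a toy first-order delta-sigma modulator in pure Python, a deliberately simplified sketch rather than production DSP: it encodes a sampled sine into a 1-bit stream, and a crude moving-average low-pass filter recovers an approximation of the original waveform.

```python
import math

def delta_sigma_1bit(samples):
    """First-order delta-sigma modulation of samples in [-1, 1]."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback       # accumulate error vs. last output
        bit = 1 if integrator >= 0 else 0
        feedback = 1.0 if bit else -1.0  # 1-bit DAC: map bit back to +/-1
        bits.append(bit)
    return bits

def moving_average(bits, window):
    """Crude low-pass filter: local density of 1s, rescaled to [-1, 1]."""
    return [
        2.0 * sum(bits[i:i + window]) / window - 1.0
        for i in range(len(bits) - window + 1)
    ]

# Heavily oversampled single sine cycle, as in Figure 3
n = 1024
sine = [0.8 * math.sin(2 * math.pi * i / n) for i in range(n)]
pdm = delta_sigma_1bit(sine)          # density of 1s tracks the amplitude
recovered = moving_average(pdm, 32)   # low-pass filtering restores the shape
```

The filtered output tracks the 0.8-amplitude sine closely, with residual ripple from the shaped quantization noise; a real decimation filter would suppress that noise far better than a simple moving average.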
Inside digital audio: Formats at a glance
It goes without saying that in digital audio systems, PCM, I²S, PWM, and PDM each serve distinct roles tailored to specific needs. Pulse code modulation (PCM) remains the most widely used format for representing audio signals as discrete amplitude samples. Inter-IC Sound (I²S) excels in precise, low-latency audio data transmission and supports flexible stereo and multichannel configurations, making it a popular choice for inter-device communication.
Though not typically used for audio signal representation, pulse width modulation (PWM) plays a vital role in audio amplification—especially in Class D amplifiers—by encoding amplitude through duty cycle variation, enabling efficient speaker control with minimal power loss.
On a side note, you can convert a PCM signal to PDM by first increasing its sample rate (interpolation), then reducing its resolution to a single bit. Conversely, a PDM signal can be converted back to PCM by reducing its sampling rate (decimation) and increasing its word length. In both cases, the ratio of the PDM bit rate to the PCM sample rate is known as the oversampling ratio (OSR).
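As a rough numeric illustration (the clock values are a common example configuration, not tied to any specific device): a PDM line clocked at 3.072 MHz feeding 48-kHz PCM gives an OSR of 64, and decimation can be sketched, naively, as averaging each block of OSR bits into one multibit sample.

```python
def oversampling_ratio(pdm_bit_rate_hz, pcm_sample_rate_hz):
    """OSR = PDM bit rate / PCM sample rate (integer ratio assumed here)."""
    osr, rem = divmod(pdm_bit_rate_hz, pcm_sample_rate_hz)
    assert rem == 0, "choose rates with an integer ratio"
    return osr

def decimate_naive(pdm_bits, osr):
    """Toy decimator: average each OSR-bit block into one sample in [-1, 1].

    Real decimators use CIC/FIR filter chains; block averaging just shows
    the rate-versus-word-length trade underlying the OSR relationship.
    """
    return [
        2.0 * sum(pdm_bits[i:i + osr]) / osr - 1.0
        for i in range(0, len(pdm_bits) - osr + 1, osr)
    ]

osr = oversampling_ratio(3_072_000, 48_000)     # -> 64
pcm = decimate_naive([1] * 64 + [0] * 64, osr)  # -> [1.0, -1.0]
```

The block of all-1s decimates to full-scale positive and the block of all-0s to full-scale negative, which is exactly the density-to-amplitude mapping PDM relies on.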
Crisp audio for makers: PDM to power simplified
Cheerfully compact and maker-friendly PDM input Class D audio power amplifier ICs simplify the path from microphone to speaker. By accepting digital PDM signals directly—often from MEMS mics—they scale down both complexity and component count. Their efficient Class D architecture keeps the power draw low and heat minimal, which is ideal for battery-powered builds.
That is to say, audio ICs like MAX98358 require minimal external components, making prototyping a pleasure. With filterless Class D output and built-in features, they simplify audio design, freeing makers to focus on creativity rather than complexity.
Side note: For those eager to experiment, ample example code is available online for SoCs like the ESP32-S3, which use a sigma-delta driver to produce modulated output on a GPIO pin. With a passive or active low-pass filter, this output can then be shaped into a clean, usable analog signal.
Well, the blueprint below shows an active low-pass filter using the Sallen & Key topology, arguably the simplest active two-pole filter configuration you will find.

Figure 4 Circuit blueprint outlines a simple active low-pass filter. Source: Author
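For the unity-gain Sallen & Key low-pass, the cutoff frequency works out to f_c = 1/(2π√(R1·R2·C1·C2)). A quick sketch for sanity-checking component choices follows; the example values are illustrative, not taken from Figure 4.

```python
import math

def sallen_key_lpf_cutoff(r1, r2, c1, c2):
    """Cutoff frequency (Hz) of a unity-gain Sallen & Key low-pass filter."""
    return 1.0 / (2.0 * math.pi * math.sqrt(r1 * r2 * c1 * c2))

# Equal components of 1 kOhm and 8.2 nF put the corner near 19 kHz,
# just above the audio band -- a plausible target for PDM reconstruction.
fc = sallen_key_lpf_cutoff(1e3, 1e3, 8.2e-9, 8.2e-9)
```

With equal Rs and Cs the expression collapses to the familiar 1/(2πRC), and component Q/damping is then set by the amplifier gain or capacitor ratio.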
Echoes and endings
As usual, I feel there is so much more to cover, but let’s jump to a quick wrap-up.
Whether you are decoding microphone specs or sketching out a signal chain, understanding PDM is a quiet superpower. It is not just about 1-bit streams; it’s about how digital sound travels, transforms, and finds its voice in your design. If this primer helped demystify the basics, you are already one step closer to building smarter, cleaner audio systems.
Let’s keep listening, learning, and simplifying.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Fundamentals of USB Audio
- Audio design by graphical tool
- Hands-on review: Is a premium digital audio player worth the price?
- Understanding superior professional audio design: A block-by-block approach
- Edge AI Game Changer: Actions Technology Is Redefining the Future of Audio Chips
The post Pulse-density modulation (PDM) audio explained in a quick primer appeared first on EDN.
MES meets the future

Industry 4.0 focuses on how automation and connectivity could transform the manufacturing canvas. Manufacturing execution systems (MES) with strong automation and connectivity capabilities thrived under the Industry 4.0 umbrella. With the recent expansion of AI usage through large language models (LLMs), Model Context Protocol, agentic AI, etc., we are facing a new era where MES and automation are no longer enough. Data produced on the shop floor can provide insights and lead to better decisions, and patterns can be analyzed and used as suggestions to overcome issues.
As factories become smarter, more connected, and increasingly autonomous, the intersection of MES, digital twins, AI-enabled robotics, and other innovations will reshape how operations are designed and optimized. This convergence is not just a technological evolution but a strategic inflection point. MES, once seen as the transactional layer of production, is transforming into the intelligence core of digital manufacturing, orchestrating every aspect of the shop floor.
MES as the digital backbone of smart manufacturing
Traditionally, MES is the operational execution king: tracking production orders, managing work in progress, and ensuring compliance and traceability. But today’s factories demand more. Static, transactional systems no longer suffice when decisions are required in near-real time and production lines operate with little margin for error.
The modern MES is evolving and assuming a role as an intelligent orchestrator, connecting data from machines, people, and processes. It is not just about tracking what happened; it can explain why it happened and provide recommendations on what to do next.
Modern MES ecosystems will become the digital nervous system of the enterprise, combining physical and digital worlds and handling and contextualizing massive streams of shop-floor data. Advanced technologies such as digital twins, AI robotics, and LLMs can thrive by having the new MES capabilities as a foundation.
A data-centric MES delivers contextualized information critical for digital twins to operate, and together, they enable instant visibility of changes in production, equipment conditions, or environmental parameters, contributing to smarter factories. (Source: Critical Manufacturing)
Digital twins: the virtual mirror of the factory
A digital twin is more than a 3D model; it is a dynamic, data-driven representation of the real-world factory, continuously synchronized with live operational data. It enables users to simulate scenarios and test improvements before they ever touch the physical production line. It’s easy to understand how dependent on meaningful data these systems are.
Performing simulations of complex systems such as a production line is impossible when relying on poor or, even worse, unreliable data. This is where a data-driven MES comes to the rescue. MES sits at the crossroads of every operational transaction: It knows what’s being produced, where, when, and by whom. It integrates human activities, machine telemetry, quality data, and performance metrics into one consistent operational narrative. A data-centric MES delivers the abundance of contextualized information crucial for digital twins to operate.
Several key elements have made it possible for MES ecosystems to evolve beyond their transactional heritage into a data-centric architecture built for interoperability and analytics. These include:
- Unified/canonical data model: MES consolidates and contextualizes data from diverse systems (ERP, SCADA, quality, maintenance) into a single model, maintaining consistency and traceability. This common model ensures that the digital twin always reflects accurate, harmonized information.
- Event-driven data streaming: Real-time updates are critical. An event-driven MES architecture continuously streams data to the digital twin, enabling instant visibility of changes in production, equipment conditions, or environmental parameters.
- Edge and cloud integration: MES acts as the intelligent gateway between the edge (where data is generated) and the cloud (where digital twins and analytics reside). Edge nodes pre-process data for latency-sensitive scenarios, while MES ensures that only contextual, high-value data is passed to higher layers for simulation and visualization.
- API-first and semantic connectivity: Modern MES systems expose data through well-defined APIs and semantic frameworks, allowing digital twin tools to query MES data dynamically. This flexibility provides the capability to “ask questions,” such as machine utilization trends or product genealogy, and receive meaningful answers in a timely manner.
It is an established fact that automation is crucial for manufacturing optimization. However, AI is taking automation to a new level. Robotics is no longer limited to executing predefined movements; robots can now learn and adapt their behavior through data.
Traditional industrial robots operate within rigidly predefined boundaries. Their movements, cycles, and tolerances are programmed in advance, and deviations are handled manually. Robots can deliver precision, but they lack adaptability: A robot cannot determine why a deviation occurs or how to overcome it. Cameras, sensors, and built-in machine-learning models provide robots with capabilities to detect anomalies in early stages, interpret visual cues, provide recommendations, or even act autonomously. This represents a shift from reactive quality control to proactive process optimization.
But for that intelligence to drive improvement at scale, it must be based on operational context. And that’s precisely where MES comes in. As in the case of digital twins, AI-enabled robots are highly dependent on “good” data, i.e., operational context. A data-centric MES ecosystem provides the context and coordination that AI alone cannot. This functionality includes:
- Operational context: MES can provide information such as the product, batch, production order, process parameters, and their tolerances to the robot. All of this information provides the required context for better decisions, aligned with process definition and rules.
- Real-time feedback: Robots send performance data back to the MES, validating it against known thresholds, and log results for traceability and future usage.
- Closed-loop control: MES can authorize adaptive changes (speed, temperature, or torque) based on recommendations inferred from past patterns while maintaining compliance.
- Human collaboration: Through MES dashboards and alerts, operators can monitor and oversee AI recommendations, combining human judgment with machine precision.
For this synergy to work, modern MES ecosystems must support:
- High-volume data ingestion from sensors and vision systems
- Edge analytics to pre-process robotic data close to the source
- API-based communication for real-time interaction between control systems and enterprise layers
- Centralized and contextualized data lakes storing both structured and unstructured contextualized information essential for AI model training
Every day, we see how incredibly fast technology evolves and how instantly its applications reshape entire industries. The wave of innovation fueled by AI, LLMs, and agentic systems is redefining the boundaries of manufacturing.
MES, digital twins, and robotics can be better interconnected, contributing to smarter factories. There is no crystal ball to predict where this transformation will lead, but one thing is undeniable: Data sits at the heart of it all—not just raw data but meaningful, contextualized, and structured information. On the shop floor, this kind of data is pure gold.
MES, by its very nature, occupies a privileged position: It is becoming the bridge between operations, intelligence, and strategy. Yet to leverage that position, the modern MES must evolve beyond its transactional roots to become a true, data-driven ecosystem: open, scalable, intelligent, and adaptive. It must interpret context, enable real-time decisions, augment human expertise, and serve as the foundation upon which digital twins simulate, AI algorithms learn, and autonomous systems act.
This is not about replacing people with technology. When an MES provides workers with AI-driven insights grounded in operational reality, and when it translates strategic intent into executable actions, it amplifies human judgment rather than diminishing it.
The convergence is here. Technology is maturing. The competitive pressure is mounting. Manufacturers now face a defining choice: Evolve the MES into the intelligent heart of their operations or risk obsolescence as smarter, more agile competitors pull ahead.
Those who make this leap, recognizing that the future belongs to factories where human ingenuity and AI work as a team, will not just modernize their operations; they will secure their place in the future of manufacturing.
The post MES meets the future appeared first on EDN.
How to design a digital-controlled PFC, Part 1
Shifting from analog to digital control
An AC/DC power supply with input power greater than 75 W requires power factor correction (PFC) to:
- Take the universal AC input (90 V to 264 V) and rectify that input to a DC voltage.
- Maintain the output voltage at a constant level (usually 400 V) with a voltage control loop.
- Force the input current to follow the input voltage, using a current control loop, so that the electronic load appears to be a pure resistor.
Designing an analog-controlled PFC is relatively easy because the voltage and current control loops are already built into the controller, making it almost plug-and-play. The power-supply industry is currently transitioning from analog control to digital control, especially in high-performance power-supply design. In fact, nearly all newly designed power supplies in data centers use digital control.
Compared to analog control, digital-controlled PFC provides lower total harmonic distortion (THD), a better power factor, and higher efficiency, along with integrated housekeeping functions.
Switching from analog control to digital control is not easy, however; you will face new challenges because continuous signals are now represented in a discrete format. And unlike an analog controller, the MCU used in digital control is essentially a “blank” chip; you must write firmware to implement the control algorithms.
Writing the correct firmware can be a headache for someone who has never done this before. To help you learn digital control, in this article series, I’ll provide a step-by-step guide on how to design a digital-controlled PFC, using totem-pole bridgeless PFC as a design example to illustrate the advantages of digital control.
A digital-controlled PFC system
Among all PFC topologies, totem-pole bridgeless PFC provides the best efficiency. Figure 1 shows a typical totem-pole bridgeless PFC structure.
Figure 1 Totem-pole bridgeless PFC where Q1 and Q2 are high-frequency switches and will work as either a PFC boost switch or synchronous switch based on the VAC polarity. Source: Texas Instruments
Q1 and Q2 are high-frequency switches. Based on VAC polarity, Q1 and Q2 work as a PFC boost switch or synchronous switch, alternatively.
At a positive AC cycle (where the AC line is higher than neutral), Q2 is the boost switch, while Q1 works as a synchronous switch. The pulse-width modulation (PWM) signal for Q1 and Q2 are complementary: Q2 is controlled by D (the duty cycle from the control loop), while Q1 is controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle.
At a negative AC cycle (where the AC neutral is higher than line), the functionality of Q1 and Q2 swaps: Q1 becomes the boost switch, while Q2 works as a synchronous switch. The PWM signal for Q1 and Q2 are still complementary, but D now controls Q1 and 1-D controls Q2. Q3 remains on and Q4 remains off for the whole negative AC half cycle.
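The polarity-dependent switch assignment described above can be sketched in C as follows. This is an illustrative sketch, not vendor driver code; the structure, names, and thresholds are assumptions.

```c
#include <assert.h>

/* Duty assignment for the four totem-pole switches: PWM duty cycles
   for the high-frequency leg (Q1/Q2) and on/off states for the
   line-frequency leg (Q3/Q4). Illustrative only. */
typedef struct {
    float q1_duty, q2_duty;
    int   q3_on, q4_on;
} tp_pwm_t;

/* vac is the sensed line-to-neutral voltage; d is the duty cycle D
   from the control loop. */
static tp_pwm_t totem_pole_assign(float vac, float d)
{
    tp_pwm_t p;
    if (vac >= 0.0f) {            /* positive half cycle: Q2 boosts */
        p.q2_duty = d;
        p.q1_duty = 1.0f - d;     /* complementary synchronous switch */
        p.q4_on = 1;
        p.q3_on = 0;
    } else {                      /* negative half cycle: Q1 boosts */
        p.q1_duty = d;
        p.q2_duty = 1.0f - d;
        p.q3_on = 1;
        p.q4_on = 0;
    }
    return p;
}
```

In a real design, the polarity decision would also include hysteresis around the zero crossing to avoid chattering between the two assignments.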
Figure 2 shows a typical digital-controlled PFC system block diagram with three major function blocks:
- An ADC to sense the VAC voltage, VOUT voltage, and inductor current for conversion into digital signals.
- A firmware-based average current-mode controller.
- A digital PWM generator.

Figure 2 Block diagram of a typical digital-controlled PFC system with three major function blocks. Source: Texas Instruments
I’ll introduce these function blocks one by one.
The ADC
An ADC is the fundamental element for an MCU; it senses an analog input signal and converts it to a digital signal. For a 12-bit ADC with a 3.3-V reference, Equation 1 expresses the ADC result for a given input signal Vin as:

ADC result = Vin × 4095 / 3.3

Conversely, based on a given ADC conversion result, Equation 2 expresses the corresponding analog input signal as:

Vin = ADC result × 3.3 / 4095
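In firmware, Equations 1 and 2 reduce to a pair of scale conversions. A minimal sketch, assuming a 12-bit result register and a 3.3-V reference (the names are illustrative):

```c
#include <assert.h>

#define ADC_VREF 3.3f
#define ADC_FS   4095u   /* full-scale code for a 12-bit converter */

/* Equation 1: expected ADC code for a given input voltage. */
static unsigned int volts_to_code(float vin)
{
    return (unsigned int)(vin / ADC_VREF * ADC_FS + 0.5f);
}

/* Equation 2: analog input voltage for a given ADC code. */
static float code_to_volts(unsigned int code)
{
    return (float)code * ADC_VREF / ADC_FS;
}
```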
To obtain an accurate measurement, the ADC sampling rate must follow the Nyquist theorem, which states that a continuous analog signal can be perfectly reconstructed from its samples if the signal is sampled at a rate greater than twice its highest frequency component.
This minimum sampling rate, known as the Nyquist rate, prevents aliasing, a phenomenon where higher frequencies appear as lower frequencies after sampling, thus losing information about the original signal. For this reason, the ADC sampling rate is set at a much higher rate (tens of kilohertz) than the AC frequency (50 or 60 Hz).
Input AC voltage sensing
The AC input is high voltage; it cannot connect to the ADC pin directly. You must use a voltage divider, as shown in Figure 3, to reduce the AC input magnitude.

Figure 3 Input voltage sensing that allows you to connect the high AC input voltage to the ADC pin. Source: Texas Instruments
The input signal to the ADC pin should be within the measurement range of the ADC (0 V to 3.3 V). But to obtain a better signal-to-noise ratio, the input signal should be as large as possible. Hence, the voltage divider for VAC should follow Equation 3:

divider ratio = 3.3 V / VAC_MAX

where VAC_MAX is the peak value of the maximum VAC voltage that you want to measure.
Adding a small capacitor (C) with low equivalent series resistance (ESR) in the voltage divider can remove any potential high-frequency noise; however, you should place C as close as possible to the ADC pin.
Two ADCs measure the AC line and neutral voltages; subtracting the two readings using firmware will obtain the VAC signal.
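That firmware subtraction can be sketched as follows, assuming identical divider ratios on both inputs (the 375-V peak range and the names are hypothetical, chosen only to make the scaling concrete):

```c
#include <assert.h>
#include <math.h>

#define ADC_VREF 3.3f
#define ADC_FS   4095u
#define K_DIV    (3.3f / 375.0f)   /* hypothetical divider ratio: 375-V peak maps to 3.3 V */

/* Reconstruct the instantaneous AC voltage from the line and neutral
   ADC readings, both taken through identical dividers. */
static float vac_sense(unsigned int code_line, unsigned int code_neutral)
{
    float v_line = (float)code_line    * ADC_VREF / ADC_FS;
    float v_neut = (float)code_neutral * ADC_VREF / ADC_FS;
    return (v_line - v_neut) / K_DIV;  /* undo the divider attenuation */
}
```

Because the two readings are subtracted, any common-mode offset that appears equally on both divider outputs cancels in the result.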
Output voltage sensing
Similarly, resistor dividers will attenuate the output voltage, as shown in Figure 4, then connect to an ADC pin. Again, adding C with low ESR in the voltage divider removes any potential high-frequency noise, with C placed as close as possible to the ADC pin.

Figure 4 Resistor divider for output voltage sensing, where C removes any potential high-frequency noise. Source: Texas Instruments
To fully harness the ADC measurement range, the voltage divider for VOUT should follow Equation 4:

divider ratio = 3.3 V / VOUT_OVP

where VOUT_OVP is the output overvoltage protection threshold.
AC current sensing
In a totem-pole bridgeless PFC, the inductor current is bidirectional, requiring a bidirectional current sensor such as a Hall-effect sensor. With a Hall-effect sensor, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 5.

Figure 5 The bidirectional Hall-effect current sensor output is a sine wave with a DC offset when the input is a sine wave. Source: Texas Instruments
The Hall-effect sensor you use may have an output range that is less than what the ADC can measure. Scaling the Hall-effect sensor output to match the ADC measurement range using the circuit shown in Figure 6 will fully harness the ADC measurement range.
Figure 6 Hall-effect sensor output amplifier used to scale the Hall-effect sensor output to match the ADC measurement range. Source: Texas Instruments
Equation 5 expresses the amplification of the Hall-effect sensor output:
As I mentioned earlier, because the digital controller MCU is a blank chip, you must write firmware to mimic the PFC control algorithm used in the analog controller. This includes voltage loop implementation, current reference generation, current loop implementation, and system protection. I’ll go over these implementations in Part 2 of this article series.
Digital compensator
In Figure 7, GV and GI are compensators for the voltage loop and current loop. One difference between analog control and digital control is that in analog control, the compensator is usually implemented through an operational amplifier, whereas digital control uses a firmware-based proportional-integral-derivative (PID) compensator.
For PFC, its small-signal model is a first-order system; therefore, a proportional-integral (PI) compensator is enough to obtain good bandwidth and phase margin. Figure 7 shows a typical digital PI controller structure.

Figure 7 A digital PI compensator where r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. Source: Texas Instruments
In Figure 7, r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. The compensator output, u(k), clamps to a specific range. The compensator also contains an anti-windup reset logic that allows the integral path to recover from saturation.
Figure 8 shows a C code implementation example for this digital PI compensator.

Figure 8 C code example for a digital PI compensator. Source: Texas Instruments
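A minimal sketch of such a clamped PI compensator with anti-windup looks like the following; this is not the Figure 8 listing itself, and the structure, names, and limit handling are assumptions.

```c
#include <assert.h>

typedef struct {
    float kp, ki;          /* proportional and integral gains */
    float out_min, out_max;
    float integ;           /* integrator state */
} pi_ctrl_t;

/* One control-loop iteration: r is the reference, y the feedback. */
static float pi_update(pi_ctrl_t *pi, float r, float y)
{
    float e = r - y;                    /* error */
    float u = pi->kp * e + pi->integ;   /* unclamped output */
    float uc = u;

    if (uc > pi->out_max) uc = pi->out_max;   /* clamp to range */
    if (uc < pi->out_min) uc = pi->out_min;

    /* anti-windup: freeze the integrator while the output is saturated,
       so the integral path can recover quickly */
    if (uc == u)
        pi->integ += pi->ki * e;

    return uc;
}
```

Freezing the integrator is only one anti-windup scheme; back-calculation (bleeding the excess back into the integrator) is a common alternative.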
For other digital compensators such as PID, nonlinear PID, and first-, second-, and third-order compensators, see reference [1].
S/Z domain conversion
If you have an analog compensator that works well, and you want to use the same compensator in digital-controlled PFC, you can convert it through S/Z domain conversion. Assume that you have a type II compensator, as shown in Equation 6:
Replace s with the bilinear transformation (Equation 7):

s = (2/Ts) × (z − 1)/(z + 1)

where Ts is the ADC sampling period.
Then H(s) is converted to H(z), as shown in Equation 8:

Rewrite Equation 8 as Equation 9:

un = a1·un-1 + a2·un-2 + b0·en + b1·en-1 + b2·en-2

To implement Equation 9 in a digital controller, store the two previous control outputs, un-1 and un-2, and the two previous errors, en-1 and en-2. Then use the current error, en, and Equation 9 to calculate the current control output, un.
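One way to code that second-order difference equation is shown below; the coefficient names (a1, a2, b0, b1, b2) are illustrative, and their values come from your particular H(z).

```c
#include <assert.h>

typedef struct {
    float a1, a2;          /* feedback (output-history) coefficients */
    float b0, b1, b2;      /* feedforward (error-history) coefficients */
    float u1, u2;          /* u[n-1], u[n-2] */
    float e1, e2;          /* e[n-1], e[n-2] */
} comp2_t;

/* u[n] = a1*u[n-1] + a2*u[n-2] + b0*e[n] + b1*e[n-1] + b2*e[n-2] */
static float comp2_update(comp2_t *c, float e)
{
    float u = c->a1 * c->u1 + c->a2 * c->u2
            + c->b0 * e    + c->b1 * c->e1 + c->b2 * c->e2;
    c->u2 = c->u1;  c->u1 = u;   /* shift output history */
    c->e2 = c->e1;  c->e1 = e;   /* shift error history */
    return u;
}
```

As with the PI compensator, a production implementation would also clamp u and handle integrator windup.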
Digital PWM generation
A digital controller generates a PWM signal much like an analog controller, with the exception that a clock counter generates the RAMP signal; therefore, the PWM signal has limited resolution. The RAMP counter is configurable as up count, down count, or up-down count.
Figure 9 shows the generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation.

Figure 9 Generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation. Source: Texas Instruments
Programming the PERIOD register of the PWM generator determines the switching frequency. For up-count and down-count modes, Equation 10 calculates the PERIOD register value as:

PERIOD = fclk / fsw

where fclk is the counter clock frequency and fsw is the desired switching frequency.
For the up-down count mode, Equation 11 calculates the PERIOD register value as:

PERIOD = fclk / (2 × fsw)
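The two PERIOD calculations can be sketched as follows, assuming the simplified forms PERIOD = fclk/fsw and PERIOD = fclk/(2·fsw), without the −1 counter adjustment some PWM peripherals require:

```c
#include <assert.h>
#include <stdint.h>

/* Equation 10: PERIOD for up-count or down-count RAMP mode. */
static uint32_t period_single_slope(uint32_t fclk_hz, uint32_t fsw_hz)
{
    return fclk_hz / fsw_hz;
}

/* Equation 11: PERIOD for up-down (triangular) RAMP mode; the counter
   traverses the range twice per switching cycle, hence the factor of 2. */
static uint32_t period_updown(uint32_t fclk_hz, uint32_t fsw_hz)
{
    return fclk_hz / (2u * fsw_hz);
}
```

For example, a 100-MHz counter clock and a 100-kHz switching frequency give a PERIOD of 1,000 counts in single-slope mode and 500 counts in up-down mode, which also bounds the achievable duty-cycle resolution.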
Figure 10 shows an example of using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC.

Figure 10 Using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC. Source: Texas Instruments
Equation 12 shows that the COMP equals the current loop GI output multiplied by the switching period:
The higher the COMP value, the bigger the D.
To prevent shoot-through between the top switch and the bottom switch, a delay added to the rising edge of PWMA and the rising edge of PWMB inserts dead time between PWMA and PWMB. This delay is programmable, which means that it’s possible to dynamically adjust the dead time to optimize performance.
Blocks in digital-controlled PFC
Now that you have learned about the blocks used in digital-controlled PFC, it’s time to close the control loop. In the next installment, I’ll discuss how to write firmware to implement an average current-mode controller.
Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.
Reference
- “C2000 Digital Control Library User’s Guide.” TI literature No. SPRUID3, January 2017.
Related Content
- Digital control for power factor correction
- Digital control unveils a new epoch in PFC design
- Power Tips #124: How to improve the power factor of a PFC
- Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC
- Power Tips #116: How to reduce THD of a PFC
The post How to design a digital-controlled PFC, Part 1 appeared first on EDN.
Optical combs yield extreme-accuracy gigahertz RF oscillator

It may seem at times that there is a divide between the optical/photonic domain and the RF one, with the terahertz zone between them as a demarcation. If you need to make a transition between the photonic and RF worlds, you use electro-optical devices such as LEDs and photodetectors of various types. Now, all-optical or mostly optical systems are being used to perform functions in the optical band where electronic components can’t fulfill the needs, even pushing electronic approaches out of the picture.
In recent years, this divide has also been bridged by newer, advanced technologies such as integrated photonics where optical functions such as lasers, waveguides, tunable elements, filters, and splitters are fabricated on an optically friendly substrate such as lithium niobate (LiNbO3). There are even on-chip integrated transceivers and interconnects such as the ones being developed by Ayar Labs. The capabilities of some of these single- or stacked-chip electro-optical devices are very impressive.
However, there is another way in which electronics and optics are working together with a synergistic outcome. The optical frequency comb (OFC), also called optical comb, was originally developed about 25 years ago—for which John Hall and Theodor Hänsch received the 2005 Nobel Prize in Physics—to count the cycles from optical atomic clocks and for precision laser-based spectroscopy.
It has since found many other uses, of course, as it offers outstanding phase stability at optical frequencies for tuning or as a local oscillator (LO). Some of the diverse applications include X-ray and attosecond pulse generation, trace gas sensing in the oil and gas industry, tests of fundamental physics with atomic clocks, long-range optical links, calibration of atomic spectrographs, precision time/frequency transfer over fiber and through free space, and precision ranging.
Use of optical components is not limited to the optical-only domain. In the last few years, researchers have devised ways to use the incredible precision of the OFC to generate highly stable RF carriers in the 10-GHz range. Phase jitter in the optical signal is actually reduced as part of the down-conversion process, so the RF local oscillator has better performance than its source comb.
This is not an intuitive down-conversion scheme (Figure 1).

Figure 1 Two semiconductor lasers are injection-locked to chip-based spiral resonators. The optical modes of the spiral resonators are aligned, using temperature control, to the modes of the high-finesse Fabry-Perot (F-P) cavity for Pound–Drever–Hall (PDH) locking (a). A microcomb is generated in a coupled dual-ring resonator and is heterodyned with the two stabilized lasers. The beat notes are mixed to produce an intermediate frequency, fIF, which is phase-locked by feedback to the current supply of the microcomb seed laser (b). A modified uni-traveling carrier (MUTC) photodetector chip is used to convert the microcomb’s optical output to a 20-GHz microwave signal; a MUTC photodetector has response to hundreds of GHz (c). Source: Nature
But this simplified schematic diagram does not reveal the true complexity and sophistication of the approach, which is illustrated in Figure 2.

Figure 2 Two distributed-feedback (DFB) lasers at 1557.3 and 1562.5 nm are self-injection-locked (SIL) to Si3N4 spiral resonators, amplified and locked to the same miniature F-P cavity. A 6-nm broad-frequency comb with an approximately 20 GHz repetition rate is generated in a coupled-ring resonator. The microcomb is seeded by an integrated DFB laser, which is self-injection-locked to the coupled-ring microresonator. The frequency comb passes through a notch filter to suppress the central line and is then amplified to 60 mW total optical power. The frequency comb is split to beat with each of the PDH-locked SIL continuous wave references. Two beat notes are amplified, filtered and then mixed to produce fIF, which is phase-locked to a reference frequency. The feedback for microcomb stabilization is provided to the current supply of the microcomb seed laser. Lastly, part of the generated microcomb is detected in an MUTC detector to extract the low-noise 20-GHz RF signal. Source: Nature
At present, this is not implemented as a single-chip device or even as a system with just a few discrete optical components; many of the needed precision functions are only available on individual substrates. A complete high-performance system occupies a rack-mount chassis fitting in a single-height bay.
However, there has been significant progress on putting multiple functional blocks onto a single-chip substrate, so it wouldn’t be surprising to see a monolithic (or nearly so) device within a decade, or perhaps just a few years.
What sort of performance can such a system deliver? There are lots of numbers and perspectives to consider, and testing these systems—at these levels of performance—to assess their capabilities is as much of a challenge as fabricating them. It’s the metrology dilemma: how do you test a precision device? And how do you validate the testing arrangement itself?
One test result indicates that for a 10-GHz carrier, the phase noise is −102 dBc/Hz at 100 Hz offset and decreases to −141 dBc/Hz at 10 kHz offset. Another characterization compares this performance to that of other available techniques (Figure 3).

Figure 3 The platforms are all scaled to 10-GHz carrier and categorized based on the integration capability of the microcomb generator and the reference laser source, excluding the interconnecting optical/electrical parts. Filled (blank) squares are based on the optical frequency division (OFD) standalone microcomb approach: 22-GHz silica microcomb (i); 5-GHz Si3N4 microcomb (ii); 10.8-GHz Si3N4 microcomb (iii); 22-GHz microcomb (iv); MgF2 microcomb (v); 100-GHz Si3N4 microcomb (vi); 22-GHz fiber-stabilized SiO2 microcomb (vii); MgF2 microcomb (viii); 14-GHz MgF2 microcomb pumped by an ultrastable laser (ix); and 14-GHz microcomb-based transfer oscillator (x). Source: Nature
There are many good online resources available that explain in detail the use of optical combs for RF-carrier generation. Among these are “Photonic chip-based low-noise microwave oscillator” (Nature); “Compact and ultrastable photonic microwave oscillator” (Optics Letters via ResearchGate); and “Photonic Microwave Sources Divide Noise and Shift Paradigms” (Photonics Spectra).
In some ways, it seems there’s a “frenemy” relationship between today’s advanced photonics and the conventional world of RF-based signal processing. But as has usually been the case, the best technology will win out, and it will borrow from and collaborate with others. Photonics and electronics each have their unique attributes and bring something to the party, while their integrated pairing will undoubtedly enable functions we can’t fully envision—at least not yet.
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- Is Optical Computing in Our Future?
- Use optical fiber as an isolated current sensor?
- Analog Optical Fiber Forges RF-Link Alternative
- Silicon yields phased-arrays for optics, not just RF
- Attosecond laser pulses drive petahertz optical transistor switching
The post Optical combs yield extreme-accuracy gigahertz RF oscillator appeared first on EDN.
High-performance MCUs target industrial applications

STMicroelectronics raises the performance bar for embedded edge AI with its new STM32V8 high-performance microcontrollers (MCUs), aimed at demanding industrial applications such as factory automation, motor control, and robotics. The STM32V8 is the first MCU built on ST’s 18-nm fully depleted silicon-on-insulator (FD-SOI) process technology with embedded phase-change memory (PCM).
ST claims the STM32V8’s embedded phase-change memory (PCM) has the smallest cell size on the market, enabling 4 MB of embedded non-volatile memory (NVM).
(Source: STMicroelectronics)
In addition, the STM32V8 is ST’s fastest STM32 MCU to date, designed for high reliability and harsh environments in embedded and edge AI applications, and can handle complex applications and maintain high energy efficiency. The STM32V8 achieves clock speeds of up to 800 MHz, thanks to the Arm Cortex-M85 core and the 18-nm FD-SOI process with embedded PCM. The FD-SOI technology delivers high energy efficiency and supports a maximum junction temperature of up to 140°C.
The MCU integrates special accelerators, including graphics and crypto/hash engines, and comes with a large selection of IP, including 1-Gb Ethernet, digital interfaces (FD-CAN, octo/hexa xSPI, I2C, UART/USART, and USB), analog peripherals, and timers. It also features state-of-the-art security with the STM32 Trust framework and the latest cryptographic algorithms and lifecycle-management standards. It targets PSA Certified Level 3 and SESIP certification to meet compliance with the upcoming Cyber Resilience Act (CRA).
The STM32V8 has been selected for the SpaceX Starlink constellation, which uses it in a mini laser system that connects satellites traveling at extremely high speeds in low Earth orbit (LEO), ST said. This is thanks in part to the 18-nm FD-SOI technology, which provides a higher level of reliability and robustness.
The STM32V8 supports bare-metal or RTOS-based development. It is supported by ST’s development resources, including STM32Cube software development and turnkey hardware including Discovery kits and Nucleo evaluation boards.
The STM32V8 is in early-stage access for selected customers. Key OEM availability will start in the first quarter of 2026, followed by broader availability.
The post High-performance MCUs target industrial applications appeared first on EDN.
FIR temperature sensor delivers high accuracy

Melexis claims the first automotive-grade surface-mount (SMD) far-infrared (FIR) temperature sensor designed for temperature monitoring of critical components in electric vehicle (EV) powertrain applications. These include inverters, motors, and heating, ventilation, and air conditioning (HVAC) systems.
(Source: Melexis)
The MLX90637 offers several advantages over negative temperature coefficient (NTC) thermistors that have traditionally been used in these systems, where speed and accuracy are critical, Melexis said.
These advantages include eliminating the need for manual labor associated with NTC solutions thanks to the SMD packaging, which supports automated PCB assembly and delivers cost savings. In addition, the FIR temperature sensor with non-contact measurement ensures intrinsic galvanic isolation that helps to enhance EV safety by separating high- and low-voltage circuits, while the inherent electromagnetic compatibility (EMC) eliminates typical noise challenges associated with NTC wires, the company said.
Key features include a 50° field of view, 0.02°C resolution, and a fast response time, making the sensor suited for applications such as inverter busbar monitoring where temperature must be carefully managed. Sleep current is less than 2.5 μA, and the ambient operating temperature range is -40°C to 125°C.
The MLX90637 also simplifies system integration with a 3.3-V supply, factory calibration (including post calibration), and an I2C interface for communication with a host microcontroller, including a software-definable I2C address via an external pin. The AEC-Q100-qualified sensor is housed in a 3 × 3-mm package.
The post FIR temperature sensor delivers high accuracy appeared first on EDN.
Accuracy loss from PWM sub-Vsense regulator programming

I’ve recently published Design Ideas (DIs) showing circuits for linear PWM programming of standard bucking-type regulators in applications requiring an output span that can swing below the regulator’s sense voltage (Vsense or Vs). For example: “Simple PWM interface can program regulators for Vout < Vsense.”
Wow the engineering world with your unique design: Design Ideas Submission Guide
Objections have been raised, however, that such circuits entail a significant loss of analog programming accuracy because they rely on adding a voltage term derived from a convenient available source (e.g., the logic rail), and that they should therefore be avoided.
The argument relies on the fact that such sources generally have accuracy and stability that are significantly worse (e.g., ±5%) than those of regulator internal references (e.g., ±1%).
But is this objection actually true, and if so, how serious is the problem? How much of an accuracy penalty is actually incurred? This DI addresses these questions.
Figure 1 shows a basic topology for sub-Vs regulator programming with current expressions as follows:
A = Dpwm × Vs/R1
B = (1 – Dpwm)(Vl – Vs)/(R1 + R4)
Where A is the primary programming current and B is the sub-Vs programming current giving an output voltage:
Vout = R2(A + B) + Vs
Figure 1 Basic PWM regulator programming topology.
Inspection of the A and B current expressions shows that when the PWM duty factor (Dpwm) is set to full-scale 100% (Dpwm = 1), then B = 0. This is due to the (1 – Dpwm) term.
Therefore, there can be no error contribution from the logic rail Vl at full-scale.
At other Dpwm values, however, this happy circumstance no longer applies, and B becomes nonzero. Thus, Vl tolerance and noise degrade accuracy, at least to some extent. But, by how much?
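The full-scale immunity to Vl is easy to verify numerically. The short sketch below evaluates the B expression from Figure 1; the component values here are placeholders chosen only for illustration, not values from the article:

```python
# Sub-Vs programming current from Figure 1: B = (1 - Dpwm)(Vl - Vs)/(R1 + R4).
# Component values below are illustrative placeholders only.
def b_current(dpwm, vl, vs=1.25, r1=1000.0, r4=100000.0):
    """Sub-Vs programming current B, in amps, for duty factor dpwm in [0, 1]."""
    return (1.0 - dpwm) * (vl - vs) / (r1 + r4)

# At full scale, the (1 - Dpwm) factor forces B to zero, so the
# logic-rail tolerance cannot contribute any error there.
for vl in (3.0, 3.3, 3.6):          # logic rail anywhere in a +/-5% band
    print(b_current(1.0, vl))       # always 0.0
```

Below full scale, the same function returns a nonzero B, which is where the logic-rail tolerance enters the picture.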
The simplest way to address this crucial question is to evaluate it as a plausible example of Figure 1’s general topology. Figure 2 provides some concrete groundwork for that by adding some example values.

Figure 2 Putting some meat on Figure 1’s bare bones, adding example values to work with.
Assuming perfect resistors, nominal R1 currents are then:
A = Dpwm Vs/3300
B = (1 – Dpwm)(Vl – Vs)/123300
Vout = R2(A + B) + Vs = 75000(A + B) + 1.25
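As a quick sanity check, a few lines of Python evaluate the nominal expressions above, exactly as written, at the duty-factor endpoints (Figure 2's example values, ideal resistors assumed):

```python
# Nominal output of the Figure 2 example, assuming perfect resistors.
VS, VL = 1.25, 3.3                         # sense voltage and logic rail, volts
R1, R14, R2 = 3300.0, 123300.0, 75000.0    # R14 = R1 + R4, ohms

def vout(dpwm):
    """Nominal Vout (volts) for a PWM duty factor dpwm in [0, 1]."""
    a = dpwm * VS / R1                     # primary programming current
    b = (1.0 - dpwm) * (VL - VS) / R14     # sub-Vs programming current
    return R2 * (a + b) + VS

print(round(vout(1.0), 2))   # full-scale output: 29.66
print(round(vout(0.0), 2))   # bottom-of-range output: 2.5
```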
Then, making the (highly pessimistic) assumption that reference errors stack up as the sum of absolute values:
Aerr = Dpwm × 1% × Vs/3300 = Dpwm × 3.8 µA
Berr = (1 – Dpwm)(5% × 3.3 V + 1% × 1.25 V)/123300 = (1 – Dpwm) × 1.44 µA
Vout total error = 75000(Dpwm × 3.8 µA + (1 – Dpwm) × 1.44 µA) + 1% × Vs
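The worst-case stack-up is straightforward to tabulate in code. This sketch uses the Figure 2 values and the same pessimistic sum-of-absolute-values assumption, with the ±1% reference and ±5% logic-rail tolerances taken from the text:

```python
# Worst-case Vout error stack-up for the Figure 2 example.
VS, R2 = 1.25, 75000.0   # sense voltage (volts) and feedback resistor (ohms)

def vout_error(dpwm):
    """Worst-case Vout error (volts): reference and logic-rail tolerances
    summed as absolute values, per the pessimistic assumption above."""
    a_err = dpwm * 0.01 * VS / 3300.0                            # ~Dpwm x 3.8 uA
    b_err = (1.0 - dpwm) * (0.05 * 3.3 + 0.01 * VS) / 123300.0   # ~(1-Dpwm) x 1.44 uA
    return R2 * (a_err + b_err) + 0.01 * VS

for d in (0.0, 0.5, 1.0):
    print(f"Dpwm = {d:.1f}: worst-case error = {vout_error(d) * 1000:.0f} mV")
```

Note that the roughly 0.3-V worst-case error at full scale is about 1% of the ~29.7-V full-scale output, i.e., the reference tolerance alone, while the ~0.12-V error at the bottom of the range is a much larger fraction of the ~2.5-V output there.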
The resulting Vout error plots are shown in Figure 3.

Figure 3 Vout error plots, where the x-axis is Dpwm and the y-axis is Vout error. The black line corresponds to the Vout = Vs at Dpwm = 0 case, and the red line to the Vout = 0 at Dpwm = 0 case.
Conclusion: Error does increase in the lower range of Vout when the Vout < Vsense feature is incorporated, but any difference completely disappears at the top end. So, the choice turns on the utility of Vout < Vsense.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Simple PWM interface can program regulators for Vout < Vsense
- Three discretes suffice to interface PWM to switching regulators
- Revisited: Three discretes suffice to interface PWM to switching regulators
- PWM nonlinearity that software can’t fix
- Another PWM controls a switching voltage regulator
The post Accuracy loss from PWM sub-Vsense regulator programming appeared first on EDN.



