EDN Network

Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 55 min ago

Infused concrete yields greatly improved structural supercapacitor

Tue, 11/25/2025 - 22:01

A few years ago, a team at MIT researched and published a paper on using concrete as an energy-storage supercapacitor (MIT engineers create an energy-storing supercapacitor from ancient materials), also called an ultracapacitor: an energy-storage device based on electric fields rather than electrochemical reactions. Now, the same group has developed a version with ten times the storage per unit volume of that earlier one, by using concrete infused with various materials and electrolytes such as (but not limited to) nano-carbon black.

Concrete is the world’s most common building material and has many virtues, including basic strength, ruggedness, and longevity, and few restrictions on final shape and form. The idea of also being able to use it as an almost-free energy storage system is very attractive.

By combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, their electron-conducting carbon concrete (EC3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy, Figure 1.

Figure 1 As with most batteries, the schematic diagram and physical appearance are simple; the details are the challenge. Source: Massachusetts Institute of Technology

This greatly improved energy density was made possible by their deeper understanding of how the nanocarbon black network inside EC3 functions and interacts with electrolytes, as determined using some sophisticated instrumentation. By using focused ion beams for the sequential removal of thin layers of the EC3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the joint EC³ Hub and MIT Concrete Sustainability Hub team was able to reconstruct the conductive nanonetwork at the highest resolution yet. The analysis showed that the network is essentially a fractal-like “web” that surrounds EC3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system. 

A cubic meter of this version of EC3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy, which is enough to power an actual modest-sized refrigerator for a day. Via extrapolation (always the tricky aspect of these investigations), they say that 45 cubic meters of EC3 with an energy density of 0.22 kWh/m³ (a typical house-sized foundation) would have enough capacity to store about 10 kilowatt-hours of energy, the average daily electricity usage for a household, Figure 2.
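
The extrapolation arithmetic is easy to check. Here is a minimal sketch, using only the figures stated above:

```python
# Back-of-envelope check of the article's extrapolation, using its stated figures.
foundation_volume_m3 = 45          # typical house-sized foundation
energy_density_kwh_per_m3 = 0.22   # reported energy density for this scenario
stored_kwh = foundation_volume_m3 * energy_density_kwh_per_m3
print(f"{stored_kwh:.1f} kWh")     # ~9.9 kWh, about one day of household usage
```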

Figure 2 These are just a few of the many performance graphs that the team developed. Source: Massachusetts Institute of Technology

They achieved the highest performance with organic electrolytes, especially those that combined quaternary ammonium salts—found in everyday products like disinfectants—with acetonitrile, a clear, conductive liquid often used in industry, Figure 3.

Figure 3 They also identified needed properties for the electrolyte and investigated many possibilities for this critical component. Source: Massachusetts Institute of Technology

If this all sounds only like speculation from a small-scale benchtop lab project, it is, and it isn’t. Much of the work was done in cooperation with the American Concrete Institute, a research and promotional organization that studies all aspects of concrete, including formulation, application, standardized tests, long-term performance, and more.

While the MIT team, perhaps not surprisingly, is positioning this development as the next great thing—and it certainly gets a lot of attention in the mainstream media due to its tantalizing keywords of “concrete” and “battery”—there are genuine long-term factors to evaluate related to scaling up to a foundation-sized mass:

  • Does the final form of the concrete matter, such as a large cube versus flat walls?
  • What are the partial and large-scale failure modes?
  • What are the long-term effects of weather exposure, as this material is concrete (which is well understood) but with an additive?
  • What happens when an EC3 foundation degrades or fails—do you have to lift the house and replace the foundation?
  • What are the short- and long-term influences on performance, and how does the formulation affect that performance?

The performance and properties of the many existing concrete formulations have been tested in the lab and in the field over decades, and “improvements” are not done casually, especially in consideration of the end application.

Since demonstrating this concrete battery in structural mode lacks visual impact, the MIT team built a more attention-grabbing demonstration battery of stacked cells providing 12 V. They used this to operate a 12-V computer fan and a 5-V USB output (via a buck regulator) for a handheld gaming console, Figure 4.

Figure 4 A 12-V concrete battery powering a small fan and game console provides a more dramatic, attention-grabbing visual. Source: Massachusetts Institute of Technology

The work is detailed in their paper “High energy density carbon–cement supercapacitors for architectural energy storage,” published in Proceedings of the National Academy of Sciences (PNAS). It’s behind a paywall, but there is a posted student thesis, “Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases.” Finally, there’s also a very informative 18-slide, 21-minute PowerPoint presentation on YouTube (with audio), “Carbon-cement supercapacitors: A disruptive technology for renewable energy storage,” that was developed by the MIT team for the ACI.

What’s your view? Is this a truly disruptive energy-storage development? Or will the realities of scaling up in physical volume and long-term performance, as well as “replacement issues,” make this yet another interesting advance that falls short in the real world?

Check back in five to ten years to find out. If nothing else, this research reminds us that there is potential for progress in power and energy beyond the other approaches we hear so much about.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


A simpler circuit for characterizing JFETs

Tue, 11/25/2025 - 15:00

The circuit presented by Cor Van Rij for characterizing JFETs is a clever solution. Noteworthy is the use of a five-pin test socket wired to accommodate all of the possible JFET pinout arrangements.

This idea uses that socket arrangement in a simpler circuit. The only requirement is the availability of two digital multimeters (DMMs), which add the benefit of having a hold function to the measurements. In addition to accuracy, the other goals in developing this tester were:

  • It must be simple enough to allow construction without a custom printed circuit board, as only one tester was required.
  • Use components on hand as much as possible.
  • Accommodate both N- and P-channel devices while using a single voltage supply.
  • Use a wide range of supply voltages.
  • Incorporate a current limit with LED indication when the limit is reached.
The circuit

The resulting circuit is shown in Figure 1.

Figure 1 Characterizing JFETs using a socket arrangement. The fixture requires the use of two DMMs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Q1, Q2, R1, R3, R5, D2, and TEST pushbutton S3 comprise the simple current limit circuit (R4 is a parasitic Q-killer).

S3 supplies power to S1, the polarity reversal switch, and S2 selects the measurement. J1 and J2 are banana jacks for the DMM set to read the drain current. J3 and J4 are banana jacks for the DMM set to read Vgs(off). 

Note the polarities of the DMM jacks. They are arranged so that the drain current and Vgs(off) read correctly for the type of JFET being tested—positive IDSS and negative Vgs(off) for N-channel devices and negative IDSS and positive Vgs(off) for P-channel devices.

R2 and D1 indicate the incoming power, while R6 provides a minimum load for the current limiter. Resistor R8 isolates the DUT from the effects of DMM-lead parasitics, and R9 provides a path to earth ground for static dissipation.

Testing JFETs

Figure 2 shows the tester setup measuring Vgs(off) and IDSS for an MPF102, an N-channel device. The specified values for this device are a Vgs(off) of −8 V maximum and an IDSS of 2 to 20 mA. Note that the hold function of the meters was used to maintain the measurements for the photograph. The supply for this implementation is a nominal 12-volt “wall wart” salvaged from a defunct router.

Figure 2 The test of an MPF102 N-channel JFET using the JFET characterization circuit.

Figure 3 shows the current limit in action by setting the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102. The limit is 52.2 mA, and the I-LIMIT LED is brightly lit. 

Figure 3 Testing the current limit by setting the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102.

John L. Waugaman’s love of electronics began when he built a crystal set at age 10 with his father’s help. A BSEE from Carnegie-Mellon University led to a 30-year career in industry designing product inspection equipment, along with four patents. After being RIF’d, he spent the next 20 years as a consultant specializing in analog design for industrial and military projects. Now he’s retired, sort of, but still designing. It’s in his blood.


Gold-plated PWM-control of linear and switching regulators

Tue, 11/25/2025 - 15:00
“Gold-plated” without the gold plating

Alright, I admit that the title is a bit over the top. So, what do I mean by it? I mean that:

(1) The application of PWM control to a regulator does not significantly degrade the inherent DC accuracy of its output voltage,

(2) Any ability of the regulator’s output voltage to reach below its internal reference is preserved, and

(3) This is accomplished without the addition of a new reference voltage.

Refer to Figure 1.

Figure 1 This circuit meets the requirements of “Gold-Plated PWM control” as stated above.

Wow the engineering world with your unique design: Design Ideas Submission Guide

How it works

The values of components Cin, Cout, Cf, and L1 are obtained from the regulator’s datasheet. (Note that if the regulator is linear, L1 is replaced with a short.)

The datasheet typically specifies a preferred value of Rg, a single resistor between ground and the feedback pin FB. 

Taking the DC voltage VFB of the regulator’s FB pin into account, R3 is selected so that U2a supplies a Vsup voltage greater than or equal to 3.0 V. C7 and R3 ensure that the composite is non-oscillatory, even with decoupling capacitor C6 in place. C6 is required for the proper operation of the SN74AC04 IC U1.

The following equations govern the circuit’s performance, where Vmax is the desired maximum regulator output voltage:

R3 = (Vsup / VFB − 1) · 10k
Rg1 = Rg / (1 − (VFB / Vsup) / (1 − VFB / Vmax))
Rg2 = Rg · Rg1 / (Rg1 − Rg)
Rf = Rg · (Vmax / VFB − 1)

They enable the regulator output to reach zero volts (if it is capable of such) when the PWM inputs are at their highest possible duty cycle. 
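
As a worked illustration, here is a small Python sketch that evaluates the four design equations. The input values (VFB = 0.6 V, Vsup = 3.0 V, Vmax = 5.0 V, Rg = 10 kΩ) are arbitrary assumptions chosen for demonstration, not values from this design:

```python
# Sketch: evaluates the four design equations above.
# The example inputs are illustrative assumptions, not values from the article.

def pwm_control_resistors(v_fb, v_sup, v_max, r_g):
    """Return (R3, Rg1, Rg2, Rf) per the design equations."""
    r3 = (v_sup / v_fb - 1) * 10e3   # 10k is the fixed resistor from the equation
    r_g1 = r_g / (1 - (v_fb / v_sup) / (1 - v_fb / v_max))
    r_g2 = r_g * r_g1 / (r_g1 - r_g)
    r_f = r_g * (v_max / v_fb - 1)
    return r3, r_g1, r_g2, r_f

r3, rg1, rg2, rf = pwm_control_resistors(v_fb=0.6, v_sup=3.0, v_max=5.0, r_g=10e3)
print(f"R3={r3:.0f} Ω, Rg1={rg1:.0f} Ω, Rg2={rg2:.0f} Ω, Rf={rf:.0f} Ω")
# -> R3=40000 Ω, Rg1=12941 Ω, Rg2=44000 Ω, Rf=73333 Ω
```

In practice, the computed values would then be rounded to the nearest standard resistor values.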

U1 is part of two separate PWMs whose composite output can provide up to 16 bits of resolution. Ra and Rb + Rc establish a factor of 256 for the relative significance of the PWMs.
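
A sketch of the intended weighting, assuming the ideal 256:1 scaling set by Ra versus Rb + Rc (the exact scale factor in hardware depends on the actual resistor values):

```python
# Composite value from two 8-bit PWMs weighted 256:1 (Ra vs. Rb + Rc).
def composite_fraction(msb_code: int, lsb_code: int) -> float:
    """Normalized composite output, 0..1, with up to 16 bits of resolution."""
    return (msb_code * 256 + lsb_code) / (256 * 256 - 1)

print(composite_fraction(255, 255))  # 1.0 (full scale)
print(composite_fraction(0, 1))      # ~1.5e-5, one 16-bit LSB
```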

If eight bits or less of resolution is required, Rb and Rc, and the least significant PWM, can be eliminated, and all six inverters can be paralleled.

The PWMs’ minimum frequency requirements shown are important because when those are met, the subsequent filter passes a peak-to-peak ripple of less than 2⁻¹⁶ of the composite PWM’s full-scale range. This filter consists of Ra, Rb + Rc, R5 to R7, C3 to C5, and U2b.

Errors

The most stringent need to minimize errors comes from regulators with low and highly accurate reference voltages. Let’s consider 600 mV and 0.5%, from which we arrive at a 3-mV maximum output error inherent to the regulator. (This is overly restrictive, of course, because it assumes zero-tolerance resistors to set the output voltage. If 0.1% resistors were considered, we’d add 0.2% to arrive at 0.7% and more than 4 mV.)

Broadly, errors come from imperfect resistor ratios and component tolerances, op-amp input offset voltages and bias currents, and non-linear SN74AC04 output resistances. The 0.1% resistors are reasonably cheap.

Resistor ratios

If nominally equal in value, such resistors, forming a ratio, contribute a worst-case error of ±0.1%. For those of different values, the worst case is ±0.2%. Important ratios involve:

  • Rg1, Rg2, and Rf
  • R3 and R4
  • Ra and Rb + Rc

Various Rf, Rg ratios are inherent to regulator operation.

The Rg1, Rg2; R3, R4; and Ra, Rb + Rc pairs have been introduced as requirements for PWM control.

The Ra / (Rb + Rc) error is ± 0.2%, but since this involves a ratio of 8-bit PWMs at most, it incurs less than 1 least significant bit (LSbit) of error.

The Rg1, Rg2 pair introduces an error of ±0.2 % at most.

The R3, R4 pair is responsible for a worst-case ±0.2 %. All are less than the 0.5% mentioned earlier.

Temperature drift

The OPA2376 has a worst-case input offset voltage of 25 µV over temperature. Even if U2a has a gain of 5 to convert FB’s 600 mV to 3 V, this becomes only 125 µV.

Bias current is 10 pA maximum at 25°C; at 125°C, only a typical value of 250 pA is given.

Of the two op-amps, U2b sees the higher input resistance. But its bias current would have to exceed 6 nA to produce even 1 mV of offset, so these op-amps are blameless.

To determine U1’s output resistance, its spec shows that its minimum logic high voltage for a 3-V supply is 2.46 V under a 12-mA load. This means that the maximum for each inverter is 45 Ω, which gives us 9 Ω for five in parallel. (The maximum voltage drop is lower for a logic low 12 mA, resulting in a lower resistance, but we don’t know how much lower, so we are forced to worst-case it at a ridiculous 0 V!)

Counting C3 as a short under dynamic conditions, the five inverters see a 35-kΩ load, leading to a less than 0.03% error.
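
The arithmetic in the last two paragraphs is easy to reproduce. A quick sketch, using the SN74AC04 spec point quoted above:

```python
# Reproduces the inverter output-resistance estimate and the resulting gain error.
v_supply, v_oh_min, i_load = 3.0, 2.46, 12e-3   # spec point quoted above
r_inverter = (v_supply - v_oh_min) / i_load      # 45 ohms max per inverter
r_parallel = r_inverter / 5                      # 9 ohms for five in parallel
r_load = 35e3                                    # load seen by the paralleled inverters
error = r_parallel / (r_parallel + r_load)       # divider error vs. an ideal source
print(f"{r_inverter:.0f} ohm, {r_parallel:.0f} ohm, {error:.3%}")  # 45, 9, ~0.026%
```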

Wrapping up

The regulator and its output range might need an even higher voltage, but the input voltage IN has been required to exceed 3.2 V. This is because U1 is spec’d to swing to no further than 80 mV from its supply rails under loads of 2 kΩ or more. (I’ve added some margin, but it’s needed only for the case of maximum output voltage.)

You should specify Vmax to be slightly higher than needed so that U2b needn’t swing all the way to ground. This means that a small negative supply for U2 is unnecessary. IN must also be less than 5.5 V to avoid exceeding U2’s spec. If a larger value of IN is required by the regulator, an inexpensive LDO can provide an appropriate U2 supply.

I grant that this design might be overkill, but I wanted to see what might be required to meet the goals I set. But who knows, someone might find it or some aspect of it useful.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.


The role of AI processor architecture in power consumption efficiency

Tue, 11/25/2025 - 10:27

From 2005 to 2017—the pre-AI era—the electricity flowing into U.S. data centers remained remarkably stable. This was true despite the explosive demand for cloud-based services. Social networks such as Facebook, streaming services such as Netflix, real-time collaboration tools, online commerce, and the mobile-app ecosystem all grew at unprecedented rates. Yet continual improvements in server efficiency kept total energy consumption essentially flat.

In 2017, AI deeply altered this course. The escalating adoption of deep learning triggered a shift in data-center design. Facilities began filling with power-hungry accelerators, mainly GPUs, prized for their ability to crank through massive tensor operations at extraordinary speed. As AI training and inference workloads proliferated across industries, energy demand surged.

By 2023, U.S. data centers had doubled their electricity consumption relative to a decade earlier, with an estimated 4.4% of all U.S. electricity now feeding data-center racks, cooling systems, and power-delivery infrastructure.

According to the Berkeley Lab report, data-center load growth has tripled over the past decade and is projected to double or triple again by 2028. The report estimates that AI workloads alone could by that time consume as much electricity annually as 22% of all U.S. households—a scale comparable to powering tens of millions of homes.

Total U.S. data-center electricity consumption is projected to increase ten-fold from 2014 through 2028. Source: 2024 U.S. Data Center Energy Usage Report, Berkeley Lab

This trajectory raises a question: What makes modern AI processors so energy-intensive? Whether rooted in semiconductor physics, parallel-compute structures, memory-bandwidth bottlenecks, or data-movement inefficiencies, understanding the causes becomes a priority. Analyzing the architectural foundations of today’s AI hardware may lead to corrective strategies to ensure that computational progress does not come at the expense of unsustainable energy demand.

What’s driving energy consumption in AI processors

Unlike traditional software systems—where instructions execute in a largely sequential fashion, one clock cycle and one control-flow branch at a time—large language models (LLMs) demand massively parallel processing of multidimensional tensors. Matrices many gigabytes in size must be fetched from memory, multiplied, accumulated, and written back at staggering rates. In state-of-the-art models, this process encompasses hundreds of billions to trillions of parameters, each of which must be evaluated repeatedly during training.

Training models at this scale requires feeding enormous datasets through racks of GPU servers running continuously for weeks or even months. The computational intensity is extreme, and so is the energy footprint. For example, the training run for OpenAI’s GPT-4 is estimated to have consumed around 50 gigawatt-hours of electricity. That’s roughly equivalent to powering the entire city of San Francisco for three days.

This immense front-loaded investment in energy and capital defines the economic model of leading-edge AI. Model developers must absorb stunning training costs upfront, hoping to recover them later through widespread use of the model for inference.

Profitability hinges on the efficiency of inference, the phase during which users interact with the model to generate answers, summaries, images, or decisions. “For any company to make money out of a model—that only happens on inference,” notes Esha Choukse, a Microsoft Azure researcher who investigates methods for improving the efficiency of large-scale AI inference systems. Her quote appeared in the May 20, 2025, MIT Technology Review article “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.”

Indeed, experts across the industry consistently emphasize that inference, not training, is becoming the dominant driver of AI’s total energy consumption. This shift is driven by the proliferation of real-time AI services—millions of daily chat sessions, continuous content generation pipelines, AI copilots embedded into productivity tools, and ever-expanding recommender and ranking systems. Together, these workloads operate around the clock, in every region, across thousands of data centers.

As a result, it’s now estimated that 80–90% of all AI compute cycles serve inference. As models continue to grow, user demand accelerates, and applications diversify, this imbalance will only widen further. The challenge is no longer merely reducing the cost of training but fundamentally rethinking the processor architectures and memory systems that underpin inference at scale.

Deep dive into semiconductor engineering

Understanding energy consumption in modern AI processors requires examining two fundamental factors: data processing and data movement. In simple terms, this is the difference between computing data and transporting data across a chip and its surrounding memory hierarchy.

At first glance, the computational side seems conceptually straightforward. In any AI accelerator, sizeable arrays of digital logic—multipliers, adders, accumulators, activation units—are orchestrated to execute quadrillions of operations per second. Peak theoretical performance is now measured in petaFLOPS, with major vendors pushing toward exaFLOP-class systems for AI training.

However, the true engineering challenge lies elsewhere. The overwhelming contributor to energy consumption is not arithmetic—it is the movement of data. Every time a processor must fetch a tensor from cache or DRAM, shuffle activations between compute clusters, or synchronize gradients across devices, it expends orders of magnitude more energy than performing the underlying math.

A foundational 2014 analysis by Professor Mark Horowitz at Stanford University quantified this imbalance with remarkable clarity. Basic Boolean operations require only tiny amounts of energy—on the order of picojoules (pJ). A 32-bit integer addition consumes roughly 0.1 pJ, while a 32-bit multiplication uses approximately 3 pJ.

By contrast, memory operations are dramatically more energy-hungry. Reading or writing a single bit in a register costs around 6 pJ, and accessing 64 bits from DRAM can require roughly 2 nJ. That is an energy differential of roughly four orders of magnitude between simple computation and off-chip memory access.

This discrepancy grows even more pronounced at scale. The deeper a memory request must travel—from L1 cache to L2, from L2 to L3, from L3 to high-bandwidth memory (HBM), and finally out to DRAM—the higher the energy cost per bit. For AI workloads, which depend on massive, bandwidth-intensive layers of tensor multiplications, the cumulative energy consumed by memory traffic considerably outstrips the energy spent on arithmetic.
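
Putting the figures quoted above side by side makes the imbalance concrete (which operations you pair affects the exact ratio, but the gap is on the order of 10⁴):

```python
# Energy-per-operation comparison using the figures quoted above.
pJ, nJ = 1e-12, 1e-9
e_add32 = 0.1 * pJ     # 32-bit integer add
e_mul32 = 3.0 * pJ     # 32-bit multiply
e_dram64 = 2.0 * nJ    # 64-bit DRAM access
print(f"DRAM access vs. 32-bit add:      {e_dram64 / e_add32:,.0f}x")  # 20,000x
print(f"DRAM access vs. 32-bit multiply: {e_dram64 / e_mul32:,.0f}x")  # 667x
```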

In the transition from traditional, sequential instruction processing to today’s highly parallel, memory-dominated tensor operations, data movement—not computation—has emerged as the principal driver of power consumption in AI processors. This single fact shapes nearly every architectural decision in modern AI hardware, from enormous on-package HBM stacks to complex interconnect fabrics like NVLink, Infinity Fabric, and PCIe Gen5/Gen6.

Today’s computing horsepower: CPUs vs. GPUs

To gauge how these engineering principles affect real hardware, consider the two dominant processor classes in modern computing:

  • CPUs, the long-standing general-purpose engines of software execution
  • GPUs, the massively parallel accelerators that dominate AI training and inference today

A flagship CPU such as AMD’s Ryzen Threadripper PRO 9995WX (96 cores, 192 threads) consumes roughly 350 W under full load. These chips are engineered for versatility—branching logic, cache coherence, system-level control—not raw tensor throughput.

AI processors, in contrast, are in a different league. Nvidia’s latest B300 accelerator draws around 1.4 kW on its own. A full Nvidia DGX B300 rack unit, housing eight accelerators plus supporting infrastructure, can reach 14 kW. Even in the most favorable comparison, this represents a 4× increase in power consumption per chip—and when comparing full server configurations, the gap can expand to 40× or more.

Crucially, these raw power numbers are only part of the story. The dramatic increases in energy usage are multiplied by AI deployments in data centers where tens of thousands of such GPUs are running around the clock.

Yet hidden beneath these staggering numbers lies an even more consequential industry truth, rarely discussed in public and almost never disclosed by vendors.

The well-kept industry secret

To the best of my knowledge, no major GPU or AI accelerator vendor publishes the delivered compute efficiency of its processors, defined as the ratio of actual throughput achieved during AI workloads to the chip’s theoretical peak FLOPS.

Vendors justify this absence by noting that efficiency depends heavily on the software workload; memory access patterns, model architecture, batch size, parallelization strategy, and kernel implementation can all impact utilization. This is true, and LLMs in particular place extreme demands on memory bandwidth, causing utilization to drop substantially.

Even acknowledging these complexities, vendors still refrain from providing any range, estimate, or context for typical real-world efficiency. The result is a landscape where theoretical performance is touted loudly, while effective performance remains opaque.

The reality, widely understood among system architects but seldom stated plainly, is simple: “Modern GPUs deliver surprisingly low real-world utilization for AI workloads—often well below 10%.”

A processor advertised at 1 petaFLOP of peak AI compute may deliver only ~100 teraFLOPS of effective throughput when running a frontier-scale model such as GPT-4. The remaining 900 teraFLOPS are not simply unused—the power the chip draws is still dissipated as heat, requiring extensive cooling systems that further compound total energy consumption.
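
This delivered-to-peak ratio is what system architects often call model FLOPS utilization (MFU); with the article’s illustrative numbers:

```python
# Effective utilization from the illustrative numbers above.
peak_tflops = 1000.0       # 1 petaFLOP advertised peak
delivered_tflops = 100.0   # effective throughput on a frontier-scale model
print(f"utilization = {delivered_tflops / peak_tflops:.0%}")  # 10%
```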

In effect, much of the silicon in today’s AI processors is idle most of the time, stalled on memory dependencies, synchronization barriers, or bandwidth bottlenecks rather than constrained by arithmetic capability.

This structural inefficiency is the direct consequence of the imbalance described earlier: arithmetic is cheap, but data movement is extraordinarily expensive. As models grow and memory footprints balloon, this imbalance worsens.

Without a fundamental rethinking of processor architecture—and especially of the memory hierarchy—the energy profile of AI systems will continue to scale unsustainably.

Rethinking AI processors

The implications of this analysis point to a clear conclusion: the architecture of AI processors must be fundamentally rethought. CPUs and GPUs each excel in their respective domains—CPUs in general-purpose control-heavy computation, GPUs in massively parallel numeric workloads. Neither was designed for the unprecedented data-movement demands imposed by modern large-scale AI.

Hierarchical memory caches, the cornerstone of traditional CPU design, were originally engineered as layers to mask the latency gap between fast compute units and slow external memory. They were never intended to support the terabyte-scale tensor operations that dominate today’s AI workloads.

GPUs inherited versions of these cache hierarchies and paired them with extremely wide compute arrays, but the underlying architectural mismatch remains. The compute units can generate far more demand for data than any cache hierarchy can realistically supply.

As a result, even the most advanced AI accelerators operate at embarrassingly low utilization. Their theoretical petaFLOP capabilities remain mostly unrealized—not because the math is difficult, but because the data simply cannot be delivered fast enough or close enough to the compute units.

What is required is not another incremental patch layered atop conventional designs. Instead, a new class of AI-oriented processor architecture must emerge, one that treats data movement as the primary design constraint rather than an afterthought. Such an architecture must be built around the recognition that computation is cheap, but data movement is expensive by orders of magnitude.

Processors of the future will not be defined by the size of their multiplier arrays or peak FLOPS ratings, but by the efficiency of their data pathways.

Lauro Rizzatti is a business advisor at VSORA, a company offering silicon solutions for AI inference. He is a verification consultant and industry expert on hardware emulation.


Resonant inductors offer a wide inductance range

Mon, 11/24/2025 - 23:30

ITG Electronics launches the RL858583 Series of resonant inductors, delivering a wide inductance range, high current, and high efficiency in a compact DIP package. The family of ferrite-based, high-current inductors targets demanding power electronics applications.

ITG Electronics' RL858583 Series of resonant inductors (Source: ITG Electronics)

The RL858583 Series features an inductance range of 6.8 μH to 22.0 μH with a tight 5% tolerance. Custom inductance values are available.

The series supports currents up to 39 A, with approximately 30% inductance roll-off, in a compact 21.5 × 21.0 × 21.5-mm package. This provides exceptional current handling in a small DIP package, ITG said.

Designed for reliability in high-stress operating conditions, the inductors offer a rated voltage of 600 VAC/1,000 VDC and dielectric strength up to 4,500 VDC. The devices feature low DC resistance (DCR) from 3.94 mΩ to 17.40 mΩ and AC resistance (ACR) values from 70 mΩ to 200 mΩ, which helps to minimize power losses and to ensure high efficiency across a range of frequencies. The operating temperature ranges from -55°C to 130°C.

The combination of high current capability, compact design, and customizable inductance options makes them suited for resonant converters, inverters, and other high-performance power applications, according to ITG Electronics. The RL858583 Series resonant inductors are RoHS-compliant and halogen-free.


Power resistors handle high-energy pulse applications

Mon, 11/24/2025 - 23:18

Bourns, Inc. releases its Riedon BRF Series of precision power foil resistors for high-energy pulse applications. These power resistors offer power ratings up to 2,500 W and a temperature coefficient of resistance (TCR) as low as ±15 ppm/°C, making them suited as energy dissipation solutions for circuits that require high precision. Applications include current sensing, power management, industrial power control, and energy storage.

Bourns' Riedon BRF Series power resistors (Source: Bourns, Inc.)

The power resistor series is available in two- and four-terminal options with termination current ratings up to 150 A. This enables developers to tailor the resistors to their exact design requirements, Bourns said.

Other key specifications include a resistance range from 0.001 to 500 Ω, low inductance of <50 nH, and load stability to 0.1%. The operating temperature range is -40°C to 130°C.

The BRF Series of power resistors is built using metal foil technology housed in an aluminum heat sink and a low-profile package. These precision power resistors are designed to meet the rugged and space-constrained requirements of high-energy pulse applications such as power converters, battery energy storage systems, industrial power supplies, inverters, and motor drives.

Available now, the Riedon BRF Series is RoHS-compliant. Bourns also offers a broader portfolio of metal foil resistors.


The Linksys MX4200C: A retailer-branded router with memory deficiencies

Mon, 11/24/2025 - 15:00

How timely! My teardown of Linksys’ VLP01 router, submitted in late September, was published one day prior to when I started working on this write-up in late October.

What’s the significance, aside from the chronological cadence? Well, at the end of that earlier piece, I wrote:

There’s another surprise waiting in the wings, but I’ll save that for another teardown another (near-future, I promise) day.

That day is today. And if you’ve already read my earlier piece (which you have, right?), you know that I actually spent the first few hundred words of it talking about a different Linksys router, the LN1301, also known as the MX4300:

I bought a bunch of ‘em on closeout from Woot (yep, the same place that the refurbished VLP01 two-pack came from), and I even asked my wife to pick up one too, with the following rationale:

That’ll give me plenty of units for both my current four-node mesh topology and as-needed spares…and eventually I may decide to throw caution to the wind and redirect one of the spares to a (presumed destructive) teardown, too.

Last month’s bigger brother

Hold that thought. Today’s teardown victim was another refurbished Linksys router two-pack from Woot, purchased a few months later, this February to be exact. Woot promotion-titled the product page as a “Linksys AX4200 Velop Mesh Wi-Fi 6 System”, and the specs further indicated that it was a “Linksys MX8400-RM2 AX4200 Velop Mesh Wi-Fi 6 Router System 2-Pack”. It cost me $19.99 plus tax (with free shipping) after another $5 promotion-code discount, and I figured that, as with the two-VLP01 kit, I’d tear down one of the two routers for your enjoyment and hold onto the other for use as a mesh node. Here’s its stock image on Woot’s website:

Looks kinda like the MX4300, doesn’t it? I admittedly didn’t initially notice the physical similarity, in part because of the MX8400 product name replicated on the outer box label:

When I started working on the sticker holding the lid in place, I noticed a corner of a piece of literature sticking out, which turned out to be the warranty brochure. Nice packing job, Linksys!

Lifting the lid:

You’ll find both routers inside, along with two Ethernet cable strands rattling around loose. Underneath the thick blue cardstock piece labeled “Setup Guide” to the right:

are the two power supplies, along with…umm…the setup guide plus a support document:

Some shots of the wall wart follow:

including the specs:

and finally, our patient, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front view:

left side:

back, both an overview and a closeup of the various connectors: power, WAN, three LAN, and USB-A. Hmm…where have I seen that combo before?

right side:

top, complete with the status LED:

and…wait. What’s this?

More than one way to “skin a cat”

In addition to the always-informative K7S-03580 FCC ID, check out that MX4200C product name. When I saw it, I realized two key things:

  • Linksys was playing a similar naming game to what they’d done with the VLP01. Quoting from my earlier teardown: “…an outer box shot of what I got…which, I’ve just noticed, claims that it’s an AC2400 configuration 🤷‍♂️ (I’m guessing this is because Linksys is mesh-adding the two devices’ theoretical peak bandwidths together? Lame, Linksys, lame…)” This time, they seemingly added the numbers in the two MX4200 device names together to come up with the “bigger is better” MX8400 moniker.
  • The MX4200(C, in this case) is mighty close to MX4300. Now also realizing the physical similarity, I suspected I had a near-clone (and much less expensive, not to mention more widely available) sibling to the no-longer-available router I’d discussed a month earlier, which, being rare, I was therefore so reticent to (presumably destructively) disassemble.

Some background from my online research before proceeding:

  • The MX4200 came in two generational versions, both of them integrating 512 Mbytes of flash memory for firmware storage. V1 of the MX4200 included 512 Mbytes of RAM and measured 18.5 cm (7.3 inches) high and 7.9 cm (3.1 inches) wide. The larger V2 MX4200, at 24.3 cm (9.57 inches) high and 11 cm (4.45 inches) wide, also doubled the internal RAM capacity to 1 GByte.
  • This MX4200C is supposedly a Costco-only variant (meaning what beyond the custom bottom sticker? Dunno), conceptually reminiscent of the Walmart-only VLP01 I’d taken apart last month. I can’t find any specs on it, but given its dimensional commonality with the V2 MX4200, I’ll be curious to peer inside and see if it embeds 1 GByte of RAM, too.
  • And the MX4300? It’s also dimensionally reminiscent of the V2 MX4200. But this time, there are 2 GBytes of RAM inside it. Last month, I’d mentioned that the MX4300 also bumps up the flash memory to 1 GByte, but the online source I’d gotten that info from was apparently incorrect. It’s 512 Mbytes, the same as in both versions of the MX4200.
Diving in

Clearly, now that I’m aware of the commonality between this MX4200C and the MX4300, I’m going to be more careful (but still comprehensive) than I might otherwise be with my dissection, in the hope of a subsequent full resurrection. To wit, here we go, following the same initial steps I used for the much smaller VLP01 a month ago. The only top groove I was able to punch through was the back edge, and even then, I had to switch to a flat-head screwdriver to make tangible disassembly progress (without permanently creasing the spudger blade in the process):

Voila:

Next to go, again as before, are those four screws:

And now for a notable deviation from last month’s disassembly scheme. That time, there were also screws under the bottom rubber “feet” that needed to be removed before I could gain access to the insides. This time, conversely, when I picked up the assembly in preparation for turning it upside-down…

Alrighty, then!

 

Behold our first glimpses of the insides. Referencing the earlier outer case equivalents (with the qualifier that, visually obviously, the PCB is installed diagonally), here’s the front:

Left side:

Back, along with another accompanying connectors closeup (note, by the way, the two screws at the bottom of the exposed portion of the PCB):

And right side:

Let’s next get rid of the plastic shield around the connectors, which, as was the case last month, lifted away straightaway:

Heat-removal hardware removal

And next, the finned heatsink to its left (in the earlier photo) and the rear right half of the assemblage (when viewed from the front):

We have liftoff:

Oh, goodie, Faraday cages! Hold that thought:

Rotating the assemblage around exposes the other (front left) half and its metal plate, which, with the just-seen four heatsink screws also no longer holding it in place, lifts right off as well:

You probably already noticed the colored wires in the prior shots. Here are the up-top antennas and LED assembly where they end up:

And here’s where at least some of them originate:

Unhooking the wire harness running up the side of the assemblage, along with removing the two screws noted earlier at the bottom of the PCB, enables the board’s subsequent release:

Here’s what I’m calling the PCB backside (formerly in the rear right region) which the finned heatsink previously partially covered and which you’ve already seen:

And here’s the newly-exposed-to-view frontside (formerly front left, to be precise), with even more Faraday cages awaiting my pry-off attention:

Dissecting cage contents

I’m happy to oblige. Upper left corner first:

Temporarily (because, as previously mentioned, I aspire to put everything back together in functionally resurrected form later) bend the tab away, and with thanks to Google Image search results for the tip, a Silicon Labs EFR32MG21 Series 2 Multiprotocol Wireless SoC, supporting Bluetooth, Thread, and Zigbee mesh protocols, comes into view. The previously shown single-lead antenna connection on the other side of the PCB is presumably associated with it:

To its left, uncaged, is a Fidelix FMND4G08S3J-ID 512 Mbyte NAND flash memory, presumably for holding the system firmware.

Most of the rest of the cages’ contents are bland, unless you’re into lots of passives; as you’ll soon see, their associated ICs on the other side are more exciting:

Note in all these so-far cases, as well as the remainder, that thermal tape is employed for heat transfer purposes, not paste. Linksys’ decision not only makes it easier to see what’s underneath but will also increase the subsequent likelihood of tape-back-in-place reassembly functional success:

And after all those passives, the final cage at bottom left ended up being IC-inclusive again, this time containing a Qualcomm PMP8074 power management controller:

Now for a revisit of the other side of the PCB, starting with the top-most cage and working our way to the bottom. The first one, with two antenna connectors notably above it, encompasses a portion of the wireless networking subsystem and is based on two Qualcomm Wi-Fi SoCs, the QCN5024 for 2.4 GHz and QCN5054 for 5 GHz. Above the former are two Skyworks SKY85340-11 front-end modules (FEMs); the latter is topped off by two Skyworks SKY85755-11s:

C = Costco = capacity-reduced?

The next cage is for the processor, a quad-core 1.4 GHz Qualcomm IPQ8174, the same SoC and speed bin as in the Linksys MX4300 I discussed last month, and the volatile memory, two ESMT M15T2G16128A 2 Gbit DDR3-933 SDRAMs. I guess we now know how the MX4200C differs from the V2 MX4200; Linksys halved the RAM to 512 Mbytes total, reminiscent of the V1 MX4200’s allocation, to come up with this Costco-special product spin.

The third one, this time with four antenna connectors below it, houses the remainder of the (5 GHz-only, in this case) Wi-Fi subsystem: four more Qualcomm QCN5054s, each with a mated Skyworks SKY85755-11 FEM:

And last but not least, at bottom right is the final cage, containing a Qualcomm QCA8075 five-port 10/100/1000 Mbps Ethernet transceiver, only four ports’ worth of which are seemingly leveraged in this design (one WAN, three LAN, if you’ll recall from earlier). Its function is unsurprising given its layout proximity to the two Bothhand LG2P109RN dual-port magnetic transformers to its right:

And with that, I’ll wrap up for today. More info on the MX4200 (V1, to be precise) can be found at WikiDevi. Over to you for your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


Understand quadrature encoders with a quick technical recap

Mon, 11/24/2025 - 14:26

An unexpected revisit to my earlier post on mouse encoder hacking sparked a timely opportunity to reexamine quadrature encoders, this time with a clearer lens and a more targeted focus on their signal dynamics and practical integration. So, let’s make a fresh start and dive straight into the quadrature signal magic.

Starting with a bit of theory, a quadrature signal refers to a pair of sinusoidal waveforms—typically labeled I (in-phase) and Q (quadrature)—that share the same frequency but are offset by 90° in phase. These orthogonal signals do not interfere with each other and together form the foundation for representing complex signals in systems ranging from communications to control.

Figure 1 A visualization illustrates the idealized output from a quadrature encoder, highlighting the phase relationship. Source: Author

In the context of quadrature encoders, the term describes two square wave signals, known as A and B channels, which are also 90° out of phase. This phase offset enables the system to detect the direction of rotation, count discrete steps or pulses for accurate position tracking, and enhance resolution through edge detection techniques.

As you may already be aware, encoders are essential components in motion control systems and are generally classified into two primary types: incremental and absolute. A common configuration within incremental encoders is the quadrature encoder, which uses two output channels offset in phase to detect both direction and position with greater precision, making it ideal for tracking relative motion.

Standard incremental encoders also generate pulses as the shaft rotates, providing movement data; however, they lose positional reference when power is interrupted. In contrast, absolute encoders assign a unique digital code to each shaft position, allowing them to retain exact location information even after a power loss—making them well-suited for applications that demand high reliability and accuracy.

Note that while quadrature encoders are often mentioned alongside incremental and absolute types, they are technically a subtype of incremental encoders rather than a separate category.

Oh, I almost forgot: The Z output of an ABZ incremental encoder plays a crucial role in precision positioning. Unlike the A and B channels, which continuously pulse to indicate movement and direction, the Z channel—also known as the index or marker pulse—triggers just once per revolution.

This single pulse serves as a reference point, especially useful during initialization or calibration, allowing systems to accurately identify a home or zero position. That is to say, the index pulse lets you reset to a known position and count full rotations; it’s handy for multi-turn setups or recovery after power loss.

Figure 2 A sample drawing depicts the encoder signals, with the index pulse clearly marked. Source: Author

Hands-on with a real-world quadrature rotary encoder

A quadrature rotary encoder detects rotation and direction via two offset signals; it’s used in motors, knobs, and machines for fine-tuned control. Below is the circuit diagram of a quadrature encoder I designed for a recent project using a couple of optical sensors.

Figure 3 Circuit diagram shows a simple quadrature encoder setup that employs optical sensors. Source: Author

Before we proceed, it’s worth taking a moment to reflect on a few essential points.

  • A rotary encoder is an electromechanical device used to measure the rotational motion of a motor shaft or the position of a dial or knob. It commonly utilizes quadrature encoding, an incremental signaling technique that conveys both positional changes and the direction of rotation. A linear encoder, on the other hand, measures displacement along a straight path and is commonly used in applications requiring high-precision linear motion.
  • Quadrature encoders feature two output channels, typically designated as channel A and channel B. By monitoring the pulse count and identifying which channel leads, the encoder interface can determine both the distance and direction of rotation.
  • Many encoders also incorporate a third channel, known as the index channel (or Z channel), which emits a single pulse per full revolution. This pulse serves as a reference point, enabling the system to identify the encoder’s absolute position in addition to its relative movement.
  • Each complete cycle of the A and B channels in a quadrature encoder generates square wave signals that are offset by 90 degrees in phase. This cycle produces four distinct signal transitions—A rising, B rising, A falling, and B falling—allowing for higher resolution in position tracking. The direction of rotation is determined by the phase relationship between the channels: if channel A leads channel B, the rotation is typically clockwise; if B leads A, it indicates counterclockwise motion. A minimal decoding sketch follows this list.
  • To interpret the pulse data generated by a quadrature encoder, it must be connected to an encoder interface. This interface translates the encoder’s output signals into a series of counts or cycles, which can then be converted into a number of rotations based on the encoder’s cycles per revolution (CPR) counts. Some manufacturers also specify pulses per revolution (PPR), which typically refers to the number of electrical pulses generated on a single channel per full rotation and may differ from CPR depending on the decoding method used.
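
To make the decoding concrete, below is a minimal, table-driven 4x decoder sketch in Python; the same lookup-table logic typically runs on an MCU in an edge interrupt, or in a hardware quadrature-decoder peripheral. It assumes clean, debounced A and B samples (real signals may need filtering, as noted later):

```python
# Minimal 4x quadrature decoder (a sketch; assumes clean, debounced A/B inputs).
# State = (A << 1) | B; table index = (previous_state << 2) | current_state.
# Entries are +1/-1 for valid transitions, 0 for no change or an invalid jump.
TRANSITION = [ 0, -1, +1,  0,
              +1,  0,  0, -1,
              -1,  0,  0, +1,
               0, +1, -1,  0]

class QuadratureDecoder:
    def __init__(self):
        self.state = 0   # last sampled (A, B) pair
        self.count = 0   # position in counts; 4 counts per full A/B cycle

    def sample(self, a: int, b: int) -> int:
        new_state = (a << 1) | b
        self.count += TRANSITION[(self.state << 2) | new_state]
        self.state = new_state
        return self.count

# Two full clockwise cycles (A leads B by 90 degrees) -> +8 counts.
dec = QuadratureDecoder()
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)] * 2:
    dec.sample(a, b)
print(dec.count)  # 8
```

Each valid transition contributes ±1 count (four counts per A/B cycle, hence 4x decoding), while an invalid two-bit jump maps to 0; a robust implementation would flag such jumps as noise.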

Figure 4 The above diagram offers a concise summary of quadrature encoding basics. Source: Author

That’s all; now, back to the schematic diagram.

In the previously illustrated quadrature rotary encoder design, transmissive (through-beam) sensors work in tandem with a precisely engineered shaft encoder wheel to detect rotational movement. Once everything is correctly wired and tuned, your quadrature rotary encoder is ready for use. It outputs two phase-shifted signals, enabling direction and speed detection.

In practice, most quadrature encoders rely on one of three sensor technologies: optical, magnetic, or capacitive. Among these, optical encoders are the most commonly used. They operate by utilizing a light source and a photodetector array to detect the passage or reflection of light through an encoder disk.

A note for custom-built encoder wheels: When designing your own encoder wheel, precision is everything. Ensure the slot spacing and width are consistent and suited to your sensor’s resolution requirements. And do not overlook alignment; accurate positioning with the beam path is essential for generating clean, reliable signals.

Layers beneath the spin

So, once again we circled back to quadrature encoders—this time with a bit more intent and (hopefully) a deeper dive. Whether you are just starting to explore them or already knee-deep in decoding signals, it’s clear these seemingly simple components carry a surprising amount of complexity.

From pulse counting and direction sensing to the quirks of noisy environments, there is a whole layer of subtleties that often go unnoticed. And let us be honest—how often do we really consider debounce logic or phase shift errors until they show up mid-debug and throw everything off?

That is the beauty of it: the deeper you dig, the more layers you uncover.

If this stirred up curiosity or left you with more questions than answers, let us keep the momentum going. Share your thoughts, drop your toughest questions, or suggest what you would like to explore next. Whether it’s hardware oddities, decoding strategies, or real-world implementation hacks—we are all here to learn from each other.

Leave a comment below or reach out with your own encoder war stories. The conversation—and the learning—is far from over.

Let us keep pushing the boundaries of what we think we know, together.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.


Motor drivers advance with new features

Fri, 11/21/2025 - 16:00
The Würth Elektronik and Nexperia NEVB-MTR1-KIT1 motor drive evaluation kit.

Industrial automation, robotics, and electric mobility are increasingly driving demand for improved motor driver ICs as well as solutions that make it easier to design motor drives. With energy consumption being a key factor in these applications, developers are looking for motor drivers that offer higher efficiency and lower power consumption.

At the same time, integrating motor drivers into existing systems is becoming more challenging, as they need to work seamlessly with a variety of motors and control algorithms such as trapezoidal, sinusoidal, and field-oriented control (FOC), according to Global Market Insights Inc.

The average electric vehicle uses 15–20 motor drivers across a variety of systems, including traction motors, power steering, and brake systems, compared with eight to 12 units in internal-combustion-engine vehicles, and industrial robots typically use six to eight motor drivers for joint articulation, positioning, and end-effector control, according to Emergen Research.

The motor driver IC market is expected to grow at a compound annual growth rate of 6.8% from 2024 to 2034, according to Emergen Research, driven by industrial automation, EVs, and smart consumer electronics. Part of this growth is attributed to Industry 4.0 initiatives that drive the demand for more advanced motor control solutions, including the use of artificial intelligence and machine-learning algorithms in motor control systems.

Emergen Research also reports that silicon carbide and gallium nitride (GaN) materials are gaining traction in high-power applications thanks to their superior switching characteristics compared with silicon-based solutions.

Other trends include the growing demand for precise motor control, the integration of advanced sensorless control, and low electromagnetic interference (EMI), according to the market research firms.

Here are a few examples of new motor drivers for industrial and automotive applications, as well as development solutions such as software, reference designs, and evaluation kits that help ease the development of motor drives.

Motor drivers

Melexis recently launched the MLX81339, a configurable motor driver with a pulse-width modulation (PWM)/serial interface for a range of industrial applications. This motor driver IC is designed for compact, three-phase brushless DC (BLDC) and stepper motor control up to 40 W in industrial applications such as fans, pumps, and positioning systems.

The motor driver targets a range of markets, including smart industrial and consumer sectors, in applications such as positioning motors, thermal valves, robotic actuators, residential and industrial ventilation systems, and dishwashing pumps. The MLX81339 is also qualified for automotive fan and blower applications.

A key feature of this motor control IC is the programmable flash memory, which enables full application customization. Designed for three-phase BLDC or bipolar stepper motors, the driver uses silent FOC, delivering reliable startup, stopping, and precise speed control from low to maximum speed, Melexis said.

The MLX81339 motor driver supports control up to 20 W at 12 V and 40 W at 24 V, integrating a three-phase driver with a configurable current limit up to 3 A, as well as under-/overvoltage, overcurrent, and overtemperature protection. Other key specifications include a wide supply voltage range of 6 V to 26 V and an operating temperature range of –40°C to 125°C (junction temperature up to 150°C).

The MLX81339 also incorporates 8× general-purpose I/Os and several interfaces, including PWM/FG, I2C, UART, and SPI, for easy integration into both legacy and smart systems. It also supports both sensor-based and sensorless control.

Melexis offers the Melexis StartToRun web tool to accelerate motor driver prototyping, eliminating engineering tasks by generating configuration files based on simple user inputs. In addition to the motor and electrical parameters, the tool includes prefilled mechanical values.

The MLX81339, housed in QFN24 and SO8-EP packages, is available now. A code-free and configurable MLX80339 for rapid deployment will be released in the first quarter of 2026.

Melexis’s MLX81339 motor driver (Source: Melexis)

Earlier this year, STMicroelectronics introduced the VNH9030AQ, an integrated full-bridge DC motor driver with high-side and low-side MOSFET gate drivers, real-time diagnostics, and protection against overvoltage transients, undervoltage, short-circuit conditions, and cross-conduction, aimed at reducing design complexity and cost. Delivering greater flexibility to system designers, the MOSFETs can be configured either in parallel or in series, allowing them to be used in systems with multiple motors or to meet other specific requirements.

The integrated non-dissipative current-sense circuitry monitors the current flowing through the device to distinguish each motor phase, contributing to the driver’s efficiency. The standby power consumption is very low over the full operating temperature range, easing use in zonal controller platforms, ST said.

This DC motor driver can be used in a range of automotive applications, including functional safety. The driver also provides a dedicated pin for real-time output status, easing the design into functional-safety and general-purpose low-/mid-power DC-motor-driven applications while reducing the requirements for external circuitry.

With an RDS(on) of 30 mΩ per leg, the VNH9030AQ can handle mid- and low-power DC-motor-driven applications such as door-control modules, washer pumps, powered lift gates, powered trunks, and seat adjusters.

The driver is part of a family of devices that leverage ST’s latest VIPower M0-9 technology, which permits monolithic integration of power and logic circuitry. All products, including the VNH9030AQ, are housed in a 6 × 6-mm, thermally enhanced triple-pad QFN package. The package is designed for optimal underside cooling and shares a common pinout to ease layout and software reuse.

The VNH9030AQ is available now. ST also offers a ready-to-use VNH9030AQ evaluation board and the TwisterSim dynamic electro-thermal simulator to simulate the motor driver’s behavior under various operating conditions, including electrical and thermal stresses.

STMicroelectronics’ VNH9030AQ full-bridge DC motor driver (Source: STMicroelectronics)

Targeting both automotive and industrial applications, the Qorvo Inc. 160-V three-phase BLDC motor driver also aims to reduce solution size, design time, and cost with an integrated power manager and configurable analog front end (AFE). The ACT72350 160-V gate driver can replace as many as 40 discrete components in a BLDC motor control system, and the configurable AFE enables designers to configure their exact sensing and position detection requirements.

The ACT72350 includes a configurable power manager with an internal DC/DC buck converter and LDOs to support internal components and serve as an optional supply for the host microcontroller (MCU). In addition, by offering a wide, 25-V to 160-V input range, designers can reuse the same design for a variety of battery-operated motor control applications, including power and garden tools, drones, EVs, and e-bikes.

The ACT72350 provides the analog circuitry needed to implement a BLDC motor control system and can be paired with a variety of MCUs, Qorvo said. It provides high efficiency via programmable propagation delay, precise current sensing, and BEMF feedback, as well as differentiated features for safety-critical applications.

The SOI-based motor driver is available now in a 9.0 × 9.0-mm, 57-pin QFN package. An evaluation kit is available, along with a model of the ACT72350 in Qorvo’s QSPICE circuit simulation software at www.qspice.com.

Qorvo’s ACT72350 three-phase BLDC motor driver (Source: Qorvo Inc.)

Software, reference designs, and evaluation kits

Motor driver IC and power semiconductor manufacturers also deliver software suites, reference designs, and development kits to simplify motor drive design and development. A few examples include Power Integrations’ MotorXpert software, Efficient Power Conversion Corp.’s (EPC’s) GaN-based motor driver reference design, and a modular motor driver evaluation kit developed by Würth Elektronik and Nexperia.

Power Integrations continues to enhance its MotorXpert software for its BridgeSwitch and BridgeSwitch-2 half-bridge motor driver ICs. The latest version, MotorXpert v3.0, enables FOC without shunts and their associated sensors. It also adds support for advanced modulation schemes and features V/F and I/F control to ensure startup under any load condition.

Designed to simplify single- and three-phase sensorless motor drive designs, the v3.0 release adds a two-phase modulation scheme suited for high-temperature environments; because one phase is held clamped at any instant, only two of the three inverter legs switch, reducing inverter switching losses by 33%, according to the company. The scheme lets developers trade off inverter temperature against torque ripple, which is particularly useful in applications such as hot-water circulation pumps, reducing heat-sink requirements and enclosure cost, the company said.

The software also delivers a five-fold improvement to the waveform visualization tool and an enhanced zoom function, providing more data for motor tuning and debugging. The host-side application includes a graphical user interface with Power Integrations’ digital oscilloscope visualization tool to make it easy to design and configure parameters and operation and to simplify debugging. Also easing development are parameter tool tips and a tuning assistant.

The software suite is MCU-agnostic and includes a porting guide to simplify deployment with a range of MCUs. It is implemented in the C language to MISRA standards.
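As an aside on what the FOC math in such a library boils down to, the sketch below shows the textbook Clarke and Park transforms in plain C. It is purely illustrative and is not Power Integrations’ MISRA-qualified code; the types and function names here are invented for the example.

```c
#include <math.h>

/* Textbook field-oriented-control (FOC) coordinate transforms, for
 * illustration only (not Power Integrations' implementation). Phase
 * currents ia, ib are in amperes; theta is the electrical rotor angle
 * in radians. */
typedef struct { float alpha, beta; } clarke_t;
typedef struct { float d, q; } park_t;

/* Clarke transform: project three-phase currents (ia + ib + ic = 0)
 * onto the stationary alpha-beta frame. */
static clarke_t clarke(float ia, float ib)
{
    clarke_t s;
    s.alpha = ia;
    s.beta  = (ia + 2.0f * ib) / sqrtf(3.0f);
    return s;
}

/* Park transform: rotate alpha-beta into the rotor-fixed d-q frame.
 * The FOC current loops then regulate flux (d) and torque (q). */
static park_t park(clarke_t s, float theta)
{
    float c = cosf(theta), sn = sinf(theta);
    park_t r;
    r.d =  s.alpha * c + s.beta * sn;
    r.q = -s.alpha * sn + s.beta * c;
    return r;
}
```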

Power Integrations said development time is greatly reduced by the included single- and three-phase code libraries with sensorless support, reference designs, and other tools such as a power supply design and analysis tool. Applications include air conditioning fans, refrigerator compressors, fluid pumps, washing machine and dryer drums, range hoods, industrial fans, and heat pumps.

Power Integrations’ MotorXpert software suite (Source: Power Integrations)

EPC claims the first GaN-based motor driver reference design for humanoid robots with the launch of the EPC91118, targeting robotic motor joints. The EPC91118 delivers up to 15 ARMS per phase over a wide DC input voltage range of 15 V to 55 V, in an ultra-compact, circular form factor.

The reference design is optimized for space-constrained and weight-sensitive applications such as humanoid limbs and drone propulsion. It shrinks inverter size by 66% versus silicon, EPC said, and eliminates the need for electrolytic capacitors due to the GaN ICs and high-frequency operation. The high switching frequency instead allows the use of smaller MLCCs.

The reference design is centered around the EPC23104 ePower stage IC, a monolithic GaN IC that enables higher switching frequencies and reduced losses. The power stage is combined with current sensing, a rotor shaft magnetic encoder, an MCU, RS-485 communications, and 5-V and 3.3-V power supplies on a single board whose 32-mm-diameter inverter section sits within a 55-mm-diameter outer frame.

EPC’s EPC91118 motor driver reference design (Source: Efficient Power Conversion Corp.)

Aimed at faster development of motor controllers, Würth Elektronik and Nexperia have collaborated on the NEVB-MTR1-KIT1 modular motor driver evaluation kit. The kit can be configured for use in under two minutes and is powered via USB-C.

The companies highlight the modularity of the evaluation board, which can be adapted to a wide range of motors, control algorithms, and test setups, enabling faster optimization, iteration, and testing. With an open architecture, the kit enables MCUs and components to be easily exchanged, and the open-source firmware allows developers to quickly adapt and develop motor controllers under real-world conditions, according to the companies.

The kit includes a three-phase inverter board, a motor controller board, an MCU development board, pre-wired motor connections, and a BLDC motor. A key feature is the high-current connectors integrated by Würth Elektronik, which enable evaluations up to 1 kW at 48 V.

The demands on dynamics, fault tolerance, and energy efficiency in drive systems are rising steadily, resulting in increasingly more complex motor control system design, according to the companies. The selection of the right switches (MOSFETs and IGBTs), gate drivers, and protection circuits is critical to ensure lower switching losses, better thermal behavior, and stable dynamics.

The behavior of the components must be carefully validated under real-world conditions, taking into consideration factors such as parasitic elements, switching transients, and EMI, according to the companies. The modular kit helps with this by enabling different motors and control concepts to be evaluated.

The Würth Elektronik and Nexperia NEVB-MTR1-KIT1 motor drive evaluation kit (Source: Würth Elektronik)

The post Motor drivers advance with new features appeared first on EDN.

A 0-20 mA source current to 4-20 mA loop current converter

Fri, 11/21/2025 - 15:00

The 4 to 20 mA current loop is familiar terminology to instrumentation and electronics engineers in the process industries. Field transmitters for pressure, temperature, flow, and other variables output 4 to 20 mA current signals corresponding to their respective process parameters.

Plant control rooms, situated at a distance from the field, house industrial equipment such as a distributed control system (DCS) or programmable logic controller (PLC) to monitor, record, and control these process parameters. This equipment supplies 24 VDC to a typical transmitter through one wire and receives a current proportional to the process parameter through another wire.

In a conventional arrangement, two wires are needed for the supply voltage and ground, and two more for the current signal. By carrying both power and signal on the same pair, the two-wire loop cuts cable cost by 50%, which is why all field devices in process industries conform to this two-wire system. The DCS/PLC should receive a current in the range of 4 to 20 mA; a current of zero indicates that the cable has been cut.

Still, some equipment, such as gas analyzers, outputs a conventional 0 to 20 mA current signal. These signals must be converted into the 4 to 20 mA loop current format to feed the DCS/PLC in the control room.
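The required conversion is a simple linear mapping: a 4-mA offset plus a gain of 0.8, so that 0 mA in gives 4 mA out and 20 mA in gives 20 mA out:

$$I_{\text{out}} = 4\ \text{mA} + \frac{20-4}{20-0}\,I_{\text{in}} = 4\ \text{mA} + 0.8\,I_{\text{in}}$$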

Figure 1’s circuit does exactly this.

Figure 1 A 0 to 20 mA current source to a 4 to 20 mA loop current converter module circuit. The SPAN and ZERO potentiometers can be multiturn, PCB-mountable types for precision adjustment. Q1 should have a heatsink.

How it works

Connect the 24-V power supply, digital ammeter, and a load resistor to J2 as shown in Figure 1.

Then, connect a current generator to the J1 connector. This current flows through R3 and is converted to a voltage.

The output of U1B is this voltage multiplied by the gain (1 + R10/R11), which is close to unity. Let us call this voltage Vspan. The output of U3 is Vreg.

Three currents sum at pin 3 of U1A: a span current derived from Vspan, a zero-offset current derived from Vreg through R2 and Rzero, and a feedback current through R4. In this circuit, the ratio R4/R6 is chosen to be 99.
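A minimal sketch of the resulting closed-loop relation, assuming the feedback current through R4 is derived from the voltage that the output loop current develops across sense resistor R6 (the exact connections follow Figure 1):

$$I_{\text{span}} + I_{\text{zero}} = \frac{I_{\text{out}}\,R_6}{R_4} \;\Longrightarrow\; I_{\text{out}} = \frac{R_4}{R_6}\,(I_{\text{span}} + I_{\text{zero}}) = 99\,(I_{\text{span}} + I_{\text{zero}})$$

Under this reading, the zero current sets the 4-mA offset, while the span current, proportional to the 0 to 20 mA input, contributes the remaining 0 to 16 mA.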

Both U1A and Q1 adjust the current flow through R6, satisfying the above equation in closed-loop control. U3 generates 5 VDC from the 24 VDC input for circuit operation.

R12 lightly loads the regulator so that it always draws a small current. Q2 and R1 limit the output current to around 26 mA.
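The limit follows from standard transistor current-clamp arithmetic: once the drop across R1 reaches Q2’s base-emitter turn-on voltage, Q2 diverts Q1’s base drive. Assuming a typical turn-on voltage of about 0.65 V, the stated 26-mA limit implies an R1 value in the neighborhood of 25 Ω:

$$I_{\text{limit}} \approx \frac{V_{BE(\text{on})}}{R_1} \approx \frac{0.65\ \text{V}}{25\ \Omega} \approx 26\ \text{mA}$$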

How to calibrate this circuit

Connect a 24-VDC power supply, a 200-Ω load resistor, and a digital ammeter to J2 as shown in Figure 1. Connect a current generator to J1 as shown.

Set the input current to zero. Adjust Rzero until Ioutput reads 4 mA.

Now, set the current generator to 20 mA. Adjust Rspan until Ioutput shows 20 mA.

Repeat these two adjustments a few times, as the zero and span settings interact. The current converter is then calibrated.

How to improve accuracy

This circuit achieves an accuracy of better than 1%. To improve accuracy further, select components with close tolerances.

You may introduce a 2.5-V reference IC after U3 and connect R2 and Rzero to this reference. In this case, R2 will be 50 kΩ and Rzero will be 20 kΩ.

Figure 2 illustrates how this current converter module is connected between the field transmitter and the control room’s DCS/PLC. Make sure to introduce a suitable surge suppressor in the line going to the field.

This module does not need a separate power supply, and it can be installed in the field near the equipment that outputs 0 to 20 mA.

Figure 2 A block diagram that shows the connection of the current converter in process industries.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post A 0-20 mA source current to 4-20 mA loop current converter appeared first on EDN.

Top 10 AC/DC power supplies

Fri, 11/21/2025 - 00:13

AC/DC power supply manufacturers have focused their latest designs on meeting the increased demand for higher efficiency and miniaturization in industrial and medical systems. A few of them are also leveraging wide-bandgap (WBG) technologies such as gallium nitride (GaN) and silicon carbide (SiC) to achieve gains in efficiency in their latest-generation power supplies.

It is understood that these power supplies need to meet a range of safety certifications for industrial and medical applications. They must also be rugged enough to operate in harsh environments.

Here are 10 top AC/DC power supplies introduced over the past year for industrial and medical applications. In some cases, these AC/DC power supplies meet certifications for both medical and industrial markets, allowing them to be used in both applications.

Medical and industrial power supplies

GaN technology is making its way into AC/DC power supplies for industrial and medical applications, helping to improve performance and shrink designs. Bel Fuse Inc. recently introduced its 65-W GaN-based AC/DC power supplies in a compact footprint. The latest additions to the Bel Power Solutions portfolio are the MDP65 for medical applications and the HDP65 for industrial and ITE, both offering up to 92% efficiency.

The series is available in two mechanical mount options: printed-circuit-board (PCB) mount or open frame. The compact package size of 1 × 3 inches offers 50% real-estate savings compared with 2 × 3-inch devices for increased power density in lower-power applications.

The MDP65 series is a cost-effective option for the medical market while providing critical safety. Suited for Type BF medical applications, it is compliant with the IEC/EN 60601-1 safety standard and features 2 × Means of Patient Protection (MOPP) isolation. The HDP65 devices meet safety standards IEC 62368-1, EN 62368-1, UL 62368-1, and C-UL (equivalent to CAN/CSA-C22.2 No.62368-1). Both series are safety-agency-certified, meeting the latest regulatory requirements with UL and Nemko approvals.

Both series deliver 65 W of output power, offer a universal 90- to 264-VAC input voltage range, and achieve a high power density of 17.20 W/in.3. They also feature an operating temperature range of –20°C to 70°C, ensuring reliable performance even when incorporated into compact, sealed diagnostic or portable monitoring units where heat dissipation is a challenge, the company said.

Bel Fuse’s HDP65 and MDP65 power supplies (Source: Bel Fuse Inc.)

Claiming to set new standards in power density and on-board intelligence, XP Power has introduced its FLXPro series of chassis-mount AC/DC power supplies to address space constraints and the need for increased power. The FLXPro series is also designed with SiC/GaN, achieving efficiencies up to 93%, which helps to reduce system operating costs, cooling requirements, and system size.

The FLX1K3 fully digital configurable modular power supply delivers power levels of 1.3 kW at high-line conditions and 1 kW at low-line conditions with a power density of up to 23.2 W/in.3. It is housed in a compact 1U form factor, measuring 254.0 × 88.9 × 40.6 mm (10.0 × 3.50 × 1.6 inches) and is designed to simplify power systems in healthcare, industrial, semiconductor manufacturing, analytical instrumentation, automation, renewable energy systems, and robotics applications.

The FLXPro design features up to four customer-selected, inherently flexible output modules with selectable outputs from 9 VDC to 66 VDC and a wide adjustment range (+10% to –40%), which can be configured under live conditions to form part of a customer’s active control system, XP Power said. The output modules can be combined into multiple parallel and series configurations, and multiple FLXPro units can also be combined in parallel for higher-power applications.

XP Power said this flexibility optimizes application performance and control, addressing requirements for fixed and variable loads.

A unique feature of the FLXPro series is the fully digital architecture for both the input stage and output modules. It is the foundation for XP Power’s new iPSU Intelligent Power technology, which converts internal data into usable information for quick decisions that improve application safety and reduce operating costs.

The FLXPro series also provides extensive diagnostics, including a new Black Box Snapshot feature that reduces troubleshooting time after shutdown events by recording in-depth system status at, and prior to, shutdown; tri-color LEDs that indicate power supply health with a truth table incorporated on the chassis for simple interpretation without manuals or digital communications; and multiple internal temperature measurements for fast status checks through temperature diagnostics that drive intelligent fan control and overtemperature warnings and alarms.

FLXPro also features built-in user-defined digital controls, signals, alarms, and output controllability. Inputs, outputs, and firmware can be configured through the user interface or directly over direct digital communications. It supports ES1 isolated digital communications and uses PMBus over I2C for digital communications, enabling real-time control, monitoring, and data logging. The operating temperature range is –20°C to 70°C.
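To make the PMBus point concrete, here is a minimal Linux host-side sketch that reads a supply’s output voltage with the standard PMBus READ_VOUT command and LINEAR16 decoding. This is generic PMBus practice, not XP Power’s documented interface: the bus number and 7-bit device address below are placeholders, and the helper functions come from the common libi2c (i2c-tools) library.

```c
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

/* PMBus standard command codes (from the PMBus spec, not vendor-specific). */
#define PMBUS_VOUT_MODE 0x20
#define PMBUS_READ_VOUT 0x8B

int main(void)
{
    /* Bus number and 7-bit address are placeholders; consult the
     * supply's documentation for the real values. */
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x58) < 0) {
        perror("i2c setup");
        return 1;
    }

    /* READ_VOUT returns a LINEAR16 mantissa; the exponent comes from
     * the low 5 bits of VOUT_MODE, interpreted as two's complement. */
    int32_t mode = i2c_smbus_read_byte_data(fd, PMBUS_VOUT_MODE);
    int32_t raw  = i2c_smbus_read_word_data(fd, PMBUS_READ_VOUT);
    if (mode < 0 || raw < 0) {
        perror("pmbus read");
        return 1;
    }

    int exp = mode & 0x1F;
    if (exp > 15)
        exp -= 32;  /* sign-extend the 5-bit exponent */
    double vout = (double)(uint16_t)raw *
                  (exp >= 0 ? (double)(1 << exp) : 1.0 / (double)(1 << -exp));

    printf("VOUT = %.3f V\n", vout);
    close(fd);
    return 0;
}
```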

XP Power’s FLXPro series (Source: XP Power)

Also addressing industrial and medical applications with an efficient and power-dense design is Murata Manufacturing Co. Ltd.’s PQC600 open-frame AC/DC power supplies. Target markets include hospital beds, dentist chairs, medical equipment, and industrial process machinery.

The industrial-grade PQC600 offers 600 W of power in a package that is less than 1U in height. It leverages the Murata Power Solutions transformer design with an optimized layout and package design. With a 600-W forced-air cooling design, it achieves an efficiency of 95% at full load. Key features include an optimized interleaved power-factor correction, back-end synchronous rectification, and a droop-current-sharing feature, enabling multiple units to be configured in parallel for greater power scalability.

The PQC600 is certified to the IEC 60601-1 Edition 3 medical safety standard, which includes 2 × MOPP from primary to secondary, 1 MOPP from the chassis to ground, and 1 MOPP from output to chassis. It also complies with the IEC 60601-1-2 4th Edition for electromagnetic compatibility (EMC) standards and is suitable for use with medical devices that have Type B or Type BF applied parts.

Also targeting the need for high efficiency and miniaturization is the NSP-75/100/150/200/320 series of AC/DC enclosed-type power supplies from Mean Well Enterprises Co. Ltd. The NSP series surpasses Mean Well’s RSP series, which has been on the market for over 10 years, with a higher cost-performance ratio. It offers a wider, 85- to 305-VAC input range; an extended temperature range of –40°C to 85°C with full load operation possible up to 60°C, making it suitable for harsher environments; and a smaller footprint, ranging from 28% to 46% smaller than the RSP series.

The NSP series offers high efficiency of 90% to 94.5%, depending on the model, with low no-load power consumption (<0.3 W to 0.5 W) and 200% peak-power-output capability. Other features include short-circuit, overload, overvoltage, and overtemperature protection; programmable output voltage; ultra-low leakage of <350 µA; and operation at altitudes up to 5,000 meters.

The AC/DC power supplies also offer safety certifications in multiple industries, including ICT, industrial, medical, household, and green energy applications, and meet OVC III requirements. Safety certifications include CB/DEKRA/UL/RCM/BSMI/CCC/EAC/BIS/KC/CE/UKCA, and IEC/EN/UL 62368-1, 61010-1, 61558-1, 62477-1, and SEMI 47 for semiconductor equipment. They meet 2 × MOPP and medical BF-grade applications.

Mean Well’s NSP-320 power supply (Source: Mean Well Enterprises Co. Ltd.)

Medical power supplies

P-Duke Technology Co. Ltd. launched the MAD150 medical-grade AC/DC power supply series, capable of delivering up to 150 W of continuous output power and 200-W peak power for five seconds. The compact, 3 × 2-inch package is available in open-frame, enclosed, and DIN-rail options, with connection types including JST connectors, Molex connectors, and screw terminals.

Suited for most industries worldwide, the series features a universal input range from 85 to 264 VAC and supports DC input voltages from 88 to 370 VDC. The MAD150 series provides single-output options for medical devices at 12, 15, 18, 24, 28, 36, 48, and 54 VDC, with up to 7% output adjustability.

Designed for medical applications and suited for BF-type parts, it offers less than 100-μA patient leakage current, 2 × MOPP, and 4,000-VAC input-to-output isolation. Applications include portable medical devices, diagnostic equipment, monitoring equipment, hospital beds, and medical carts.

These devices reduce thermal generation, offer an extended temperature range of –40°C to 85°C, and provide a conversion efficiency of up to 94%. They also operate at altitudes up to 5,000 meters.

The MAD150 is certified to IEC/EN/ANSI/AAMI ES 60601-1 (Medical electrical equipment – Part 1: General requirements for basic safety and essential performance) and IEC/EN/UL 62368-1 (Audio/video, information and communication technology equipment – Part 1: Safety requirements).

Advanced Energy Industries Inc. has introduced the NCF425 series of 425-W cardiac floating (CF)-rated medical open-frame AC/DC power supplies with CF-level isolation and leakage current. These standard, off-the-shelf power supplies are certified to IEC 60601-1, simplifying isolation design, speeding time to market, and streamlining critical medical device product development.

Advanced Energy said it is one of the few companies that provides standard, off-the-shelf CF-rated power products. The system-level CF rating is the most stringent medical device electrical safety classification, with certification needed for equipment that has direct contact with the heart or bloodstream, the company explained.

The company’s CF-rated portfolio was initially launched in September 2024 with the introduction of the NCF150, followed by the NCF250 and NCF600. The NCF series achieves a sub-10-µA leakage current and integrates the high levels of isolation required in critical medical devices.

This latest release offers additional options and helps reduce the number of isolation components required, translating into a smaller system size and lower cost.

The NCF family is designed to simplify thermal and electromagnetic interference (EMI) management, reduce system size and weight, and reduce the bill of materials. It also includes functionality typically provided at the system level, which reduces time and complexity in the development process, the company said.

The NCF425 is certified to the medical safety standard IEC 60601-1 and meets 2 × MOPP. Key features include a maximum output power of 425 W in a 3.5 × 6 × 1.5-inch form factor and a 5-kV defibrillator pulse protection. Applications include surgical generators, RF ablation, pulsed field ablation, cardiac-assist devices and monitors, and cardiac-mapping systems.

Advanced Energy’s NCF425 series (Source: Advanced Energy Industries Inc.)

Industrial power supplies

Delivering a high level of programmability and flexibility, XP Power’s 1.5-kW HDA1500 series suits a variety of applications across a range of industries. For example, the HDA1500 can be used in applications such as robotics, lasers, LED heating, and semiconductor manufacturing, providing benefits in digital control, communication, and status LEDs.

Rated for 1.5 kW of power with no minimum load requirement, the HDA1500 power supplies offer efficiency up to 93%, allowing for a more compact form factor as well as reducing operating costs. The HDA1500 units can be operated in parallel with active current sharing when more power is required in a rack.

Advanced digital control in power solutions has not always been widely available, according to XP Power, with the HDA1500 offering precise digital adjustment of both output current and output voltage from 0% to 105% for greater user flexibility.

The standard advanced digital control is key to the flexibility of the HDA1500, the company said. Driven by a graphical user interface, the power supply can be adjusted via several digital protocols, including PMBus, RS-485/-232, Modbus, and Ethernet, which also allow for easy integration into more advanced power control schemes.

The HDA1500 units operate from a universal single-phase mains input (90 to 264 VAC) and are reported to offer one of the widest single-rail output selections on the market, covering popular voltages between 12 VDC and 400 VDC in a portfolio of 11 units. At low-line operation, the power supplies can deliver more power than many competitive offerings, the company said.

With an operating temperature range of –25°C to 60°C, the units require no derating below 50°C. Other features include built-in protection, including overtemperature, overload, overvoltage, and short-circuit; a 5-VDC/1-A standby supply rail that keeps external circuitry alive when the main supply is powered down; and remote sense, particularly for applications in which power cables are extended.

The power supplies meet a range of ITE-related approvals, including EN55032 Class A and EN61000-3-x for emissions, as well as EN61000-4-x for immunity. Safety approvals include IEC/UL/EN62368-1 as well as all applicable CE and UKCA directives. Applications include test and measurement, factory automation, process control, semiconductor fabrication, and renewable energy systems.

XP Power’s HDA1500 series (Source: XP Power)

Targeting space-constrained industrial applications is the CBM300S series of 300-W fanless AC/DC power supplies from Cincon Electronics Co. Ltd. The series is housed in a brick package that measures 106.7 × 85.0 mm (4.2 × 3.35 inches) with an ultra-slim profile of 19.7 mm (0.78 inches). The device delivers 300-W-rated power with a peak power capability of 360 W.

The CBM300S operates with an input range of 90 to 264 VAC and accepts DC input ranging from 120 to 370 VDC. Seven output voltage options are available: 12, 15, 24, 28, 36, 48, and 54 VDC, all classified as Class I.

The series comes with safety approvals for IEC/UL/EN 62368-1 3rd edition and is EMC-compliant with EN 55032 Class B and CISPR/FCC Class B standards.

A key feature of the CBM300S is its exceptionally low leakage current of 0.75 mA maximum. It also delivers efficiency of up to 94% and operates across a wide temperature range of –40°C to 90°C, making it suitable for harsh environments.

This power supply can function at altitudes up to 5,000 meters and maintains a low no-load input power consumption of less than 0.5 W. The MTBF is rated at 240,000 hours. It also offers protection features, including output overcurrent, output overvoltage, overtemperature, and continuous short-circuit protections.

The CBM300S power supplies can be used in a variety of industrial/ITE applications, including automation equipment, test and measurement instruments, commercial equipment, telecom and network devices, and other industrial applications.

Recom Power GmbH introduced a series of flexible and highly efficient AC/DC power supplies in a small form factor for new energy applications. Applications include energy management and monitoring and powering actuators, as well as general-purpose applications.

The 20-W RAC20NE-K/277 series is available in board-mount or open-frame options. The board-mount, encapsulated power supplies measure 52.5 × 27.6 × 23.0 mm, and the open-frame devices with Molex connections measure 80.0 × 23.8 × 22.5 mm.

AC/DC power supplies increasingly must operate over nominal supply values from 100 VAC to 277 VAC, Recom said, and the RAC20NE-K/277 matches this requirement with 20 W available at optional 12-, 24-, or 36-VDC outputs. The series is available in encapsulated versions with constant-voltage or constant-current-limiting characteristics, as well as in a constant-voltage open-frame type with a 12- or 24-VDC output.

The RAC20NE-K/277 series is highly efficient, Recom said, allowing reliable operation at full load to 60°C ambient and to 85°C with derating. It also offers <100-mW no-load power consumption.

The parts are Class II–insulated and OVC III–rated up to 5,000 meters and meet EN 55032 Class B EMC requirements with a floating or grounded output. Standby and no-load power dissipation meet eco-design requirements.

Recom’s RAC20NE-K/277 (Source: Recom Power GmbH)

If you’re looking for greater flexibility with more options, TDK Corp.’s ZWS-C series of 10- to 50-W industrial power supplies offers new mounting and protection options. The TDK-Lambda brand ZWS-C series of 10-, 15-, 30-, and 50-W-rated industrial AC/DC power supplies was initially launched in an open-frame configuration. Four additional options are now available: a metal L-bracket (with or without a cover), pins for PCB mounting, and two-sided board coating for all voltage and power levels.

These options can provide additional operator protection, lower the cost of wiring harnesses, or reduce the impact of dust and contamination in harsh environments, TDK said.

The ZWS-C series is available with 5-, 12-, 15-, 24-, and 48-V (50 W only) output voltages. The ZWS10C and ZWS15C models measure 63.5 × 45.7 × 22.1 mm, the ZWS30C package measures 76.2 × 50.8 × 24.2 mm, and the ZWS50C footprint measures 76.2 × 50.8 × 26.7 mm. The operating temperature with convection cooling and standard mounting ranges from –10°C to 70°C, derating linearly to 50% load between 50°C and 70°C.

The power supplies can operate at full load with an external airflow of 0.8 m/s, and no-load power consumption is typically less than 0.3 W. Other features include a 3-kVAC input-to-output, 2-kVAC input-to-ground, and 750-VAC output-to-ground (Class I) isolation. The models meet EN55011/EN55032-B conducted and radiated EMI in either Class I or Class II (double-insulated) construction, without the need for external filtering or shielding.

All models are also certified to the IEC/UL/CSA/EN62368-1 for AV, information, and communication equipment standards; EN60335-1 for household electrical equipment; IEC/EN61558-1; and IEC/EN61558-2-16. They also comply with IEC 61000-3-2 (harmonics) and IEC 61000-4 (immunity) and carry the CE and UKCA marks for the Low Voltage, EMC, and RoHS Directives.

Thanks to electrolytic capacitor lifetimes of up to 15 years, the ZWS-C models can be used in factory automation, robotics, semiconductor fabrication, and test and measurement equipment.

TDK’s ZWS15C model (Source: TDK Corp.)

The post Top 10 AC/DC power supplies appeared first on EDN.

SiC power modules gain low-resistance options

Thu, 11/20/2025 - 19:40

SemiQ expands its 1200-V Gen3 SiC MOSFET family with SOT-227 modules offering on-resistance values of 7.4 mΩ, 14.5 mΩ, and 34 mΩ. GCMS models are co-packaged with a Schottky barrier diode (SBD), while GCMX types rely on the intrinsic body diode.

The modules are designed for medium-voltage, high-power systems such as battery chargers, photovoltaic inverters, server power supplies, and energy storage units. Each device undergoes wafer-level gate-oxide burn-in testing above 1400 V and avalanche testing to 800 mJ (330 mJ for 34-mΩ types).

The 7.4-mΩ GCMX007C120S1-E1 reduces switching losses to 4.66 mJ (3.72 mJ turn-on, 0.94 mJ turn-off) and features a body-diode reverse-recovery charge of 593 nC. Junction-to-case thermal resistance ranges from 0.23 °C/W for the 7.4-mΩ device to 0.70 °C/W for the 34-mΩ module.

All models have a rugged, isolated backplate for direct heat-sink mounting. Samples and volume pricing are available upon request. For more information about the 1200-V Gen3 SiC MOSFET modules, click here.

SemiQ

The post SiC power modules gain low-resistance options appeared first on EDN.

SiC modules boost power cycling performance

Thu, 11/20/2025 - 19:39

Wolfspeed’s YM 1200-V six-pack power modules deliver up to 3× the power cycling capability of comparable devices in the same industry-standard footprint. The company reports that the modules also provide 15% higher inverter current.

Built with Gen 4 SiC MOSFETs, the modules are suited for e-mobility propulsion systems, automotive traction inverters, and hybrid electric vehicles. Their YM package incorporates a direct-cooled pin fin baseplate, sintered die attach, hard epoxy encapsulant, and copper clip interconnects. An optimized power terminal layout minimizes package inductance, reducing overshoot voltage and lowering switching losses.

In addition to their 1200-V blocking voltage, YM module variants offer current ratings of 700 A, 540 A, and 390 A, with corresponding RDS(on) values at 25°C of 1.6 mΩ, 2.1 mΩ, and 3.1 mΩ. According to Wolfspeed, the modules achieve a 22% improvement in RDS(on) at 125°C over the previous generation and reduce turn-on energy by roughly 60% across operating temperatures. An integrated soft-body diode further cuts switching losses by 30% and VDS overshoot by 50% during reverse recovery compared to the prior generation.

The 1200-V SiC six-pack power modules are now available for customer sampling and will reach full distributor availability in early 2026.

Wolfspeed

The post SiC modules boost power cycling performance appeared first on EDN.

Power switch offers smart overload control

Thu, 11/20/2025 - 19:39

Joining ST’s lineup of safety switches, the IPS1050LQ is a low-side switch featuring smart overload protection with configurable inrush and current limits. Three pins allow selection between static and dynamic modes and set the operating current limit. In dynamic mode, connecting a capacitor enables an initial inrush of up to 25 A, which then steps down in stages to the programmed limit.

The output stage of the IPS1050LQ supports up to 65 V, making it suitable for industrial equipment such as PLCs and CNC machines. Its typical on-resistance of just 25 mΩ ensures energy-efficient switching for resistive, capacitive, or inductive loads, with active clamping enabling fast demagnetization of inductive loads at turn-off. Comprehensive safety features include undervoltage, overvoltage, overload, short-circuit, ground disconnection, VCC disconnection, and an overtemperature indicator pin that provides thermal protection.

Now in production, the IPS1050LQ in a 6 × 6-mm QFN32L package starts at $2.19 each in 1,000-unit quantities.

IPS1050LQ product page

STMicroelectronics

The post Power switch offers smart overload control appeared first on EDN.

Rad-tolerant MCUs cut space-grade costs

Thu, 11/20/2025 - 19:39

Vorago has announced four rad-tolerant MCUs for LEO missions, which it says cost far less than conventional space-grade components. Part of the VA4 series of rad-hardened MCUs, these new chips provide an economical alternative to high-risk upscreened COTS components.

Based on Arm Cortex-M4 processors, the Radiation-Tolerant by Design (RTbD) MCUs are priced nearly 75% lower than Vorago’s HARDSIL radiation-hardened products. The RTbD lineup includes the extended-mission VA42620 and VA42630, as well as the cost-optimized VA42628 and VA42629 for short- or lower-orbit missions. By embedding radiation protection directly into the silicon, these MCUs tackle the reliability challenges of satellite constellations and provide a more efficient solution than conventional multi-chip redundancy approaches.

All four MCUs provide >30 krad(Si) TID tolerance, with the VA42630 integrating 256 KB of nonvolatile memory. Extended-mission devices are designed for harsher orbits and primary flight control, while cost-optimized MCUs target thermal regulation and localized power management. These chips can be dropped into existing architectures with no redesign, enabling rapid deployment.

Vorago will begin shipping its first rad-tolerant chips in early Q1 2026.

VA4 product page

Vorago Technologies 

The post Rad-tolerant MCUs cut space-grade costs appeared first on EDN.

Module streamlines smart home device connectivity

Thu, 11/20/2025 - 19:39

The KGM133S, the first in a range of Matter over Thread modules from Quectel, enables seamless interoperability for smart home devices like door locks, sensors, and lighting. Powered by Silicon Labs’ EFR32MG24 wireless chip, the module uses Matter 1.4 to connect devices across multiple ecosystems, including Apple Home, Google Home, Amazon Alexa, and Samsung SmartThings. Thread 1.4 support ensures compatibility with IPv6 addressing.

The KGM133S features an Arm Cortex-M33 processor running at up to 78 MHz, with 256 KB of SRAM and up to 3.5 MB of flash memory. With a receive sensitivity better than –105 dBm and a maximum transmit power of 19.5 dBm, the module ensures reliable signal transmission. In addition to Matter over Thread, the KGM133S also supports Zigbee 3.0 and Bluetooth LE 6.0 connectivity.

Two LGA packaging options are available for the KGM133S to accommodate both compact and slim terminal designs. The first option (12.5 × 13.2 × 2.2 mm) features a fourth-generation IPEX or pin antenna, while the second option (12.5 × 16.6 × 2.2 mm) comes with an onboard PCB antenna.

A timeline for availability of the KGM133S wireless module was not disclosed at the time of this announcement.

KGM133S product page  

Quectel

The post Module streamlines smart home device connectivity appeared first on EDN.

Compute: Powering the transition from Industry 4.0 to 5.0

Thu, 11/20/2025 - 16:00

Industry 4.0 has transformed manufacturing, connecting machines, automating processes, and changing how factories think and operate. But its success has revealed a new constraint: compute. As automation, AI, and data-driven decision-making scale exponentially, the world’s factories are facing a compute challenge that extends far beyond performance. The next industrial era—Industry 5.0—will bring even more compute demand as it builds on the IoT to improve collaboration between humans and machines, industry, and the environment.

Progress in this next wave of industrial development is dependent on advances at the semiconductor level. Advances in chip design, materials science, and process innovation are essential. Alongside this, there needs to be a reimagining of how we power industrial intelligence, not just in terms of the processing capability but in how that capability is designed, sourced, and sustained.

Rethinking compute for a connected future

The exponential rise of data and compute has placed intense pressure on the chips that drive industrial automation. AI-enabled systems, predictive maintenance, and real-time digital twins all require compute to move closer to where data is created: at the edge. However, edge environments come with tight energy, size, and cooling constraints, creating a growing imbalance between compute demand and power availability.

AI and digital triplets, which build on traditional digital twin models by leveraging agentic AI to continuously learn and analyze data in the field, have pushed the processing requirement closer to where the data is created. In use cases such as edge computing, where processing takes place directly within sensing and measuring devices, this demand can be intensive. That decentralization introduces new power and efficiency pressures on infrastructure that wasn’t designed for such intensity.

The result is a growing imbalance between performance demands and the limitations of semiconductor manufacturing. Businesses must think more broadly about energy consumption, heat management, power balance, and raw-materials sourcing. Sustainability can no longer be treated as an unwarranted cost or a compliance exercise; it’s becoming a new indicator of competitiveness, where energy-efficient, low-emission compute enables manufacturers to meet growing data reliance without exceeding environmental limits.

Businesses must take these challenges seriously, as the demand for compute will only escalate with Industry 5.0. AI will become more embedded, and the data it relies on will grow in scale and sophistication.

If manufacturing designers dismiss these issues, they run the risk of bottlenecking their productivity with poor efficiency and sustainability. This means that when chip designers optimize for Industry 5.0 applications, they should consider responsibility, efficiency, and longevity alongside performance and cost. The challenge is no longer just “can we build faster systems?” It’s now “can we build systems that endure environmentally, economically, and geopolitically?”

Innovation starts at the material level

The semiconductor revolution of Industry 5.0 won’t be defined solely by faster chips but by the science and sustainability embedded in how those chips are made. For decades, semiconductor progress has been measured in nanometers; the next leap forward will be measured in materials. Advances in compounds such as silicon carbide and gallium nitride are improving chip performance and transforming how the industry approaches sustainability, supply chain resilience, and sovereignty.

Advances in chip design, materials science, and process innovation are essential in the next wave of industrial development. (Source: Adobe Stock)

These materials allow for higher power efficiency and longer lifespans, reducing energy consumption across industrial systems. Combined with cleaner fabrication techniques such as ambient temperature processing and hydrogen-based chemistries, they mark a significant step toward sustainable compute. The result is a new paradigm where sustainability no longer comes at an artificial premium but is an inherent feature of technological progress.

Process innovations, such as ambient temperature fabrication and green hydrogen, offer new ways to reduce environmental footprint while improving yield and reliability. Beyond the technology itself and material innovations, more focus should be placed on decentralization and alternative sources of raw materials. This will empower businesses and the countries they operate in to navigate geopolitical and supply chain challenges.

Collaboration is the new competitive edge

The compute challenge that Industry 5.0 presents isn’t an isolated problem to solve. The demand and responsibility for change don’t lie with a single company, government, or research body. It requires an ecosystem mindset, in which collaboration replaces competition in key areas of innovation and infrastructure.

Collaboration among semiconductor manufacturers, industrial original equipment manufacturers, policymakers, and researchers is important to accelerate energy-efficient design and responsible sourcing. Interconnected and shared platforms within the semiconductor ecosystem de-risk technology investments, ensuring that the gains in sustainability and resilience flow to the entire industrial ecosystem, not just individual players.

The next era of industrial progress will see the most competitive organizations collaborate and work together, with the goal of shared innovation and progress.

Powering compute in the Industry 5.0 transition

The evolution from Industry 4.0 to Industry 5.0 is more than a technological upgrade; it represents a change in attitude around how digital transformation is approached in industrial settings. This new era will see new approaches to technological sustainability, sovereignty, and collaboration prioritized alongside productivity and speed. Compute will be the central driver of this transition. Materials, processes, and partnerships will determine whether the industrial sector can grow without outpacing its own energy and sustainability limits.

Industry 5.0 presents a vision of industrialization that gives back more than it takes, amplifying both productivity and possibility. The transition is already underway. Now, businesses need to ensure innovation, efficiency, and resilience evolve together to power a truly sustainable era of compute.

The post Compute: Powering the transition from Industry 4.0 to 5.0 appeared first on EDN.

A holiday shopping guide for engineers: 2025 edition

Thu, 11/20/2025 - 15:00

As of this year, EDN has consecutively published my odes to holiday-excused consumerism for more than a half-decade straight (and intentionally ahead of Black Friday, if you hadn’t already deduced), now nearing ten editions in total. Here are the 2019, 2020, 2021, 2022, 2023, and 2024 versions; I skipped a few years between 2014 and its successors.

As usual, I’ve included up-front links to prior-year versions of the Holiday Shopping Guide for Engineers because I’ve done my best here to not regurgitate any past recommendations; the stuff I’ve previously suggested largely remains valid, after all. That said, it gets increasingly difficult each year not to repeat myself! And as such, I’ve “thrown in the towel” this year, at least to some degree…you’ll find a few repeat categories this time, albeit with new product suggestions within them.

Without any further ado, and as usual, ordered solely in the order in which they initially came out of my cranium…

A Windows 11-compatible (or alternative O/S-based) computer

Microsoft’s general support for Windows 10 ended nearly a month ago (on October 14, to be exact) as I’m writing these words. For you Windows users out there, options exist for Windows 10 Extended Security Updates (ESUs) for another year on consumer-licensed systems, both paid (spending $30 or redeeming 1,000 Microsoft Rewards points, with both ESU options covering up to 10 devices) and free (after syncing your PC settings).

If you’re an IT admin, the corporate license ESU program specifics are different; see here. And, as I covered in hands-on detail a few months back, (unsanctioned) options also exist for upgrading officially unsupported systems to Windows 11, although I don’t recommend relying on them for long-term use (assuming the hardware-hack attempt is successful at all, that is). As I wrote back in June:

The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.

You could also convert your existing PC over to run a different O/S, such as ChromeOS Flex (originally Neverware’s CloudReady, then acquired and now maintained by Google) or a Linux distro of your preference. For that matter, you could also just “get a Mac”. That said, any of these options will likely also compel conversions to new apps for the new O/S foundation. The aggregate learning curve from all these software transitions can end up being a “bridge too far”.

Instead, I’d suggest you just “bite the bullet” and buy a new PC for yourself and/or others for the holidays, before CPUs, DRAM, SSDs, and other building block components become even more supply-constrained and tariff-encumbered than they are now, and to ease the inevitable eventual transition to Windows 11.

Then donate your old hardware to charity for someone else to O/S-convert and extend its useful life. That’s what I’ll be doing, for example, with my wife’s Dell Inspiron 5570, which, as it turns out, wasn’t Windows 11-upgradeable after all.

Between now and next October, when the Windows 10 ESU runs out (unless the deadline gets extended again), we’ll replace it with the Dell 16 Plus (formerly Inspiron 16 Plus) in the above photo.

An AI-enhanced mobile device

The new Dell laptop I just mentioned, which we’d bought earlier this summer (ironically just prior to Microsoft’s unveiling of the free Windows 10 ESU option), is compatible with Microsoft’s Copilot+ specifications for AI-enhanced PCs by virtue of the system’s Intel Core Ultra 7 256V CPU with an integrated 47 TOPS NPU.

That said, although its support for local (vs conventional cloud) AI inference is nice from a future-proofing standpoint, there’s not much evidence of compelling on-client AI benefits at this early stage, save perhaps for low-latency voice interface capabilities (not to mention broader uninterrupted AI-based functionality when broadband goes down).

The current situation is very different when it comes to fully mobile devices. Yes, I know, laptops also have built-in batteries, but they often still spend much of their operating life AC-tethered, and anyway, their battery packs are much beefier than the ones in the smartphones and tablets I’m talking about here.

Local AI processing is not only faster than to-and-back-from-cloud roundtrip delays (particularly lengthy over cellular networks), but it also doesn’t gobble up precious limited-monthly-allocation data. Then there’s the locally stored-and-processed data enhanced privacy factor to consider, along with the oft-substantial power saving accrued by not needing to constantly leverage the mobile device’s Wi-Fi and cellular data subsystems.

You may indeed believe (as, full disclosure, I do) that AI features are of limited-at-best benefit at the moment, at least for the masses. But I think we can also agree that ongoing widespread-and-expanding and intense industry attention on AI will sooner or later cultivate compelling capabilities.

Therefore, I’ve showcased mobile devices’ AI attributes in recent years’ announcement coverage (such as that of Google’s Pixel 10 series shown in the photo above) and explained why I recommend them, again from a future-proofing angle if nothing else, if you (and/or yours) are due for a gadget upgrade this year. Meanwhile, I’ll soldier on with my Pixel 7s.

Audio education resources

As regular readers likely already realize, audio has received particular showcase attention in my blog posts and teardowns this past year-plus (a trend which will admittedly also likely extend into at least next year). This provided, among other things, an opportunity for me to refresh and expand my intellectual understanding of the topic.

I kept coming across references to Bob Cordell, mentioning both his informative website and his classic tomes, Designing Audio Power Amplifiers (make sure you purchase the latest 2nd edition, published in 2019, whose front cover is shown above) and the newer Designing Audio Circuits and Systems, released just last year.

Fair warning: neither book is inexpensive, especially in hardback but even in paperback, and neither is available in a lower-priced Kindle version. That said, per both reviews I’ve seen from others and my own impressions, they’re well worth the investment.

Another worthwhile read, this time complete with plenty of humor scattered throughout, is Schiit Happened: The Story of the World’s Most Improbable Start-Up, in this case available in both inexpensive paperback and even more cost-effective Kindle formats. Written by Jason Stoddard and Mike Moffat, the founders of Schiit Audio, whom I’ve already mentioned several times this year, it’s also available for free on the Head-Fi Forum, where Jason has continued his writing. But c’mon, folks, drop $14.99 (or $4.99) to support a scrappy U.S. audio success story.

As far as audio-related magazines go, I first off highly recommend a subscription to audioXpress. Generalist electronics design publications like EDN are great, of course, but topic-focused coverage like that offered by audioXpress for audio design makes for an effective information companion.

On the other end of the product development chain, where gear is purchased and used by owners, there’s Stereophile, for which I’ve also been a faithful reader for more years than I care to remember. And as for the creation, capture, mastering, and duplication of the music played on those systems, I highly recommend subscriptions to Sound on Sound and, if your budget allows for a second publication, Recording. Consistently great stuff, all of it.

Finally, as an analogy to my earlier EDN-plus-audioXpress pairing, back in 2021 I recommended memberships to generalist ACM and/or IEEE professional societies. This time, I’ll supplement that suggestion with an audio-focused companion, the AES (Audio Engineering Society).

Back when I was a full-time press guy with EDN, I used to be able to snag complimentary admission to the twice-yearly AES conventions along with other organization events, which were always rich sources of information and networking connection cultivation.

To my dying day, I will always remember one particularly fascinating lecture, which correlated Ludwig van Beethoven’s progressive hearing degradation and its (presenter-presumed) emotional and psychological effects with the evolution of the music styles that he composed over time. Then there were the folks from Fraunhofer whom I first met at an AES convention, kicking off a longstanding professional collaboration. And…

Audio gear

For a number of years, my Drop (formerly Massdrop)-sourced combo of the Drop x Grace Design Standard DAC and the Objective 2 Headphone Amp Desktop Edition afforded me a sonically enhanced alternative to my computer’s built-in DAC and amp for listening to music over plugged-in headphones and powered speakers:

As I’ve “teased” in a recent writeup, however, I recently upgraded this unbalanced-connection setup to a four-component Schiit stack, complete with a snazzy aluminum-and-acrylic rack:

Why?

Part of the reason is that I wanted to sonically experience a tube-based headphone amp for myself, both in an absolute sense and relative to solid-state Schiit amplifiers also in my possession.

Part of it is that all these Schiit-sourced amps also integrate preamp outputs for alternative-listening connection to an external power amp-plus-passive speaker set:

Another part of the reason is that I’ve now got a hardware equalizer as an alternative to software EQ, the latter (obviously) only relevant for computer-sourced audio. And relatedly, part of it is that I’ve also now got a hardware-based input switcher, enabling me to listen to audio coming not only from my PC but also from another external source. What source, you might ask?

Why, one of the several turntables that I also acquired and more broadly pressed into service this past year, of course!

I’ve really enjoyed reconnecting with vinyl and accumulating an LP collection (although my wallet has admittedly taken a beating in the process), and I encourage you (and yours) to do the same. Stand by for a more detailed description of my expanded office audio setup, including its balanced “stack” counterpart, in an upcoming dedicated topic to be published shortly.

For sonically enhancing the rest of the house, where a computer isn’t the primary audio source, companies such as Bluesound and WiiM sell various all-in-one audio streamers, both power amplifier-inclusive (for use with traditional passive speakers) and amp-less (for pairing with powered speakers or intermediary connection to a standalone external amp).

A Bluesound Node N130, for example, has long resided at the “man cave” half of my office:

And the class D amplifier inside the “Pro” version of the WiiM Amp, which I plan to press into service soon in my living room, even supports the PFFB feature I recently discussed:

(Apple-reminiscent Space Gray shown and self-owned; Dark Gray and Silver also available)

More developer hardware

Here’s the other area where, as I alluded to in the intro, I’m going to overlap a bit with a past-year Holiday Shopping Guide. Two years ago, I recommended some developer kits from both the Raspberry Pi Foundation and NVIDIA, including the latter’s then-$499 Jetson Orin Nano:

It’s subsequently been “replaced”, as well as notably price-reduced, by the Orin Nano Super Developer Kit at $249.

Why the quotes around “replaced”? That’s because, as good news for anyone who acted on my earlier recommendation, the hardware’s exactly the same as before: “Super” is solely reflective of an enhanced software suite delivering claimed generative AI performance gains of up to 1.7x, and freely available to existing Jetson Orin Nano owners.

More recently, last month, NVIDIA unveiled the diminutive $3,999 DGX Spark:

with compelling potential, both per company claims and initial hands-on experiences:

As a new class of computer, DGX Spark delivers a petaflop of AI performance and 128GB of unified memory in a compact desktop form factor, giving developers the power to run inference on AI models with up to 200 billion parameters and fine-tune models of up to 70 billion parameters locally. In addition, DGX Spark lets developers create AI agents and run advanced software stacks locally.

albeit along with, it should also be noted, an irregular development history and some troubling early reviews. The system was initially referred to as Project DIGITS when unveiled publicly at the January 2025 CES. Its application processor, originally referred to as the N1X, is now renamed the GB10. Co-developed by NVIDIA (who contributed the Grace Blackwell GPU subsystem) and MediaTek (who supplied the multi-core CPU cluster and reportedly also handled full SoC integration duties), it was originally intended for—and may eventually still show up in—Arm-based Windows PCs.

But repeated development hurdles have (reportedly) delayed the actualization of both SoC and system shipment aspirations, and lingering functional bugs preclude Windows compatibility (therefore explaining the DGX Spark’s Linux O/S foundation).

More generally, just a few days ago as I write these words, MAKE Magazine’s latest issue showed up in my mailbox, containing the most recent iteration of the publication’s yearly “Guide to Boards” insert. Check it out for more hardware ideas for your upcoming projects.

A smart ring

Regular readers have likely also noticed my recent series of writeups on smart rings, comprising both an initial overview and subsequent reviews based on fingers-on evaluations.

As I write these words in mid-November, Ultrahuman’s products have been pulled from the U.S. market due to patent-infringement rulings, although they’re still available elsewhere in the world. RingConn conversely concluded a last-minute licensing agreement, enabling ongoing sales of its devices worldwide, including in the United States.

And as for the instigator of the patent infringement actions, market leader Oura, my review of the company’s Gen3 smart ring will appear at EDN shortly after you read these words, with my eval of the latest-generation Ring 4 (shown above) to follow next month.

Smart rings’ Li-ion batteries, like those of any device with fully integrated cells, won’t last forever, so you need to go in with eyes open to the reality that the ring will ultimately be disposable (or, in my case, transform into a teardown project).

That said, the technology is sufficiently mature at this point that I feel comfortable recommending them to the masses. They provide useful health insights, even though they tend to notably overstate step counts for those who use computer keyboards a lot. And unlike a smart watch or other wrist-based fitness tracker, you don’t need to worry (so much, at least) about color- and style-coordinating a smart ring with the rest of your outfit ensemble.

(Not yet a) pair of smart glasses

Conversely, alas, I still can’t yet recommend smart glasses to anyone but early adopters (like me; see above). Meta’s latest announced device suite, along with various products from numerous (and a growing list of) competitors, suggests that this product category is still relatively immature and therefore rapidly evolving. I’d hate to suggest something for you to buy for others that’ll be obsolete in short order. For power users like you, on the other hand…

Happy holidays!

And with that, having just passed through 2,500 words, I’ll close here. Upside: plenty of additional presents-to-others-and/or-self ideas are now littering the cutting-room floor, so I’ve already got no shortage of topics for next year’s edition! Until then, sound off in the comments, and happy holidays!

 Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

The post A holiday shopping guide for engineers: 2025 edition appeared first on EDN.

Pulse-density modulation (PDM) audio explained in a quick primer

Thu, 11/20/2025 - 09:57

Pulse-density modulation (PDM) is a compact digital audio format used in devices like MEMS microphones and embedded systems. This compact primer eases you into the essentials of PDM audio.

Let’s begin by revisiting a ubiquitous PDM MEMS microphone module based on MP34DT01-M—an omnidirectional digital MEMS audio sensor that continues to serve as a reliable benchmark in embedded audio design.

Figure 1 A MEMS microphone mounted on a minuscule module detects sound and produces a 1-bit PDM signal. Source: Author

When properly implemented, PDM can digitally encode high-quality audio while remaining cost-effective and easy to integrate. As a result, PDM streams are now widely adopted as the standard data output format for MEMS microphones.

On paper, the anatomy of a PDM microphone boils down to a few essential building blocks:

  • MEMS microphone element: typically a capacitive MEMS structure, unlike the electret capsules found in analog microphones.
  • Analog preamplifier: boosts the low-level signal from the MEMS element for further processing.
  • PDM modulator: converts the analog signal into a high-frequency, 1-bit pulse-density-modulated stream, effectively acting as an integrated ADC.
  • Digital interface logic: handles timing, clock synchronization, and data output to the host system.

Next is the functional block diagram of the T3902, a digital MEMS microphone that integrates a microphone element, an impedance converter amplifier, and a fourth-order sigma-delta (Σ-Δ) modulator. Its PDM interface enables time multiplexing of two microphones on a single data line, synchronized by a shared clock.

Figure 2 Functional block diagram outlines the internal segments of the T3902 digital MEMS microphone. Source: TDK

The analog signal generated by the MEMS sensing element in a PDM microphone—sometimes referred to as a digital microphone—is first amplified by an internal analog preamplifier. This amplified signal is then sampled at a high rate and quantized by the PDM modulator, which combines the processes of quantization and noise shaping. The result is a single-bit output stream at the system’s sampling rate.

Noise shaping plays a critical role by pushing quantization noise out of the audible frequency range, concentrating it at higher frequencies where it can be more easily filtered out. This ensures relatively low noise within the audio band and higher noise outside it.

The microphone’s interface logic accepts a master clock signal from the host device—typically a microcontroller (MCU) or a digital signal processor (DSP)—and uses it to drive the sampling and bitstream transmission. The master clock determines both the sampling rate and the bit transmission rate on the data line.

Each 1-bit sample is asserted on the data line at either the rising or falling edge of the master clock. Most PDM microphones support stereo operation by using edge-based multiplexing: one microphone transmits data on the rising edge, while the other transmits on the falling edge.

During the opposite edge, the data output enters a high-impedance state, allowing both microphones to share a single data line. The PDM receiver is then responsible for demultiplexing the combined stream and separating the two channels.
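
To make that concrete, here is a small, illustrative C sketch of receiver-side demultiplexing. The bit packing is hypothetical (the actual layout depends on your capture peripheral): assume the host samples the shared line on both clock edges and packs the bits MSB-first, alternating rising-edge and falling-edge samples.

#include <stdint.h>
#include <stddef.h>

/* Split an interleaved stereo PDM capture into two 1-bit streams.
 * Assumed layout: bit positions 7, 5, 3, 1 hold rising-edge (left)
 * samples; positions 6, 4, 2, 0 hold falling-edge (right) samples.
 * Caller must zero the output buffers, since bits are OR-ed in. */
void pdm_demux_stereo(const uint8_t *packed, size_t nbytes,
                      uint8_t *left, uint8_t *right)
{
    size_t out_bit = 0;
    for (size_t i = 0; i < nbytes; i++) {
        for (int b = 7; b >= 1; b -= 2) {
            uint8_t l = (packed[i] >> b) & 1u;       /* rising edge  */
            uint8_t r = (packed[i] >> (b - 1)) & 1u; /* falling edge */
            left[out_bit / 8]  |= (uint8_t)(l << (7 - (out_bit % 8)));
            right[out_bit / 8] |= (uint8_t)(r << (7 - (out_bit % 8)));
            out_bit++;
        }
    }
}

In practice, dedicated PDM peripherals or I²S blocks in MCUs perform this split in hardware, but the underlying logic is exactly this simple.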

As a side note, the aforesaid microphone module is hardwired to treat data as valid when the clock signal is low.

The magic behind 1-bit audio streams

Now, back to basics. PDM is a clever way to represent a sampled signal using just a stream of single bits. It relies on delta-sigma conversion (also known as sigma-delta), the core technology behind many oversampling ADCs and DACs.

At first glance, a one-bit stream seems hopelessly noisy. But here is the trick: by sampling at very high rates and applying noise-shaping techniques, most of that noise is pushed out of the audible range—above 20 kHz—where it no longer interferes with the listening experience. That is how PDM preserves audio fidelity despite its minimalist encoding.

There is a catch, though. You cannot properly dither a 1-bit stream, which means a small amount of distortion from quantization error is always present. Still, for many applications, the trade-off is worth it.

Diving into PDM conversion and reconstruction, we begin with the direct sampling of an analog signal at a high rate—typically several megahertz or more. This produces a pulse-density modulation stream, where the density of 1s and 0s reflects the amplitude of the original signal.

Figure 3 An example that renders a single cycle of a sine wave as a digital signal using pulse density modulation. Source: Author

Naturally, the encoding relies on 1-bit delta-sigma modulation: a process that uses a one-bit quantizer to output either a 1 or a 0 depending on the instantaneous amplitude. A 1 represents a signal driven fully high, while a 0 corresponds to fully low.

And, because the audio frequencies of interest are much lower than the PDM’s sampling rate, reconstruction is straightforward. Passing the PDM stream through a low-pass filter (LPF) effectively restores the analog waveform. This works because the delta-sigma modulator shapes quantization noise into higher frequencies, which the low-pass filter attenuates, preserving the desired low-frequency content.
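
To make the encode-and-reconstruct loop concrete, here is a minimal C sketch. It uses a first-order delta-sigma modulator purely for illustration (real PDM microphones use higher-order loops, such as the T3902’s fourth-order modulator) to encode one sine cycle into 1-bit samples, then a crude boxcar average stands in for the low-pass filter:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NPDM 4096 /* PDM bits per sine cycle (heavy oversampling) */
#define AVG  64   /* boxcar window standing in for the LPF */

int main(void)
{
    static int pdm[NPDM];
    double integrator = 0.0, feedback = 0.0;

    /* First-order delta-sigma: integrate the error between the input
     * and the level represented by the previous output bit, then
     * quantize to one bit. Dense 1s track positive swings of the
     * input; dense 0s track negative swings. */
    for (int n = 0; n < NPDM; n++) {
        double x = 0.8 * sin(2.0 * M_PI * n / NPDM);
        integrator += x - feedback;
        pdm[n] = (integrator >= 0.0);
        feedback = pdm[n] ? 1.0 : -1.0;
    }

    /* Crude reconstruction: average each block of AVG bits (mapped
     * to +/-1); a real design would use a proper decimating LPF. */
    for (int n = 0; n + AVG <= NPDM; n += AVG) {
        double acc = 0.0;
        for (int k = 0; k < AVG; k++)
            acc += pdm[n + k] ? 1.0 : -1.0;
        printf("%6.3f\n", acc / AVG);
    }
    return 0;
}

Printing the averages reveals a staircase approximation of the original sine; widening or narrowing AVG trades bandwidth against residual high-frequency noise, which is the decimation trade-off at the heart of every PDM receiver.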

Inside digital audio: Formats at a glance

It goes without saying that in digital audio systems, PCM, I²S, PWM, and PDM each serve distinct roles tailored to specific needs. Pulse code modulation (PCM) remains the most widely used format for representing audio signals as discrete amplitude samples. Inter-IC Sound (I²S) excels in precise, low-latency audio data transmission and supports flexible stereo and multichannel configurations, making it a popular choice for inter-device communication.

Though not typically used for audio signal representation, pulse width modulation (PWM) plays a vital role in audio amplification—especially in Class D amplifiers—by encoding amplitude through duty cycle variation, enabling efficient speaker control with minimal power loss.

On a side note, you can convert a PCM signal to PDM by first increasing its sample rate (interpolation), then reducing its resolution to a single bit. Conversely, a PDM signal can be converted back to PCM by reducing its sampling rate (decimation) and increasing its word length. In both cases, the ratio of the PDM bit rate to the PCM sample rate is known as the oversampling ratio (OSR).
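
For instance, a MEMS microphone clocked at 3.072 MHz feeding a 48-kHz PCM pipeline runs at an OSR of 3,072,000 / 48,000 = 64, a common figure in practice.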

Crisp audio for makers: PDM to power simplified

Cheerfully compact and maker-friendly, PDM-input Class D audio power amplifier ICs simplify the path from microphone to speaker. By accepting digital PDM signals directly—often from MEMS mics—they scale down both complexity and component count. Their efficient Class D architecture keeps power draw low and heat minimal, which is ideal for battery-powered builds.

That is to say, audio ICs like MAX98358 require minimal external components, making prototyping a pleasure. With filterless Class D output and built-in features, they simplify audio design, freeing makers to focus on creativity rather than complexity.

Sidebar: For those eager to experiment, ample example code is available online for SoCs like the ESP32-S3, which includes a sigma-delta driver that produces a modulated output on a GPIO pin. With a passive or active low-pass filter, this output can then be shaped into a clean, usable analog signal.
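
As a minimal sketch of that approach (written against ESP-IDF’s legacy driver/sigmadelta.h API, which IDF v5 supersedes with driver/sdm.h; the GPIO number, prescaler, and pacing below are illustrative assumptions, not a reference design):

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/sigmadelta.h"

void app_main(void)
{
    /* Route channel 0's 1-bit modulated stream to GPIO 18; an external
     * RC or Sallen-Key low-pass filter recovers the analog signal. */
    sigmadelta_config_t cfg = {
        .channel = SIGMADELTA_CHANNEL_0,
        .sigmadelta_duty = 0,      /* -128..127 sets output density */
        .sigmadelta_prescale = 80, /* divides the source clock      */
        .sigmadelta_gpio = 18,     /* illustrative pin choice       */
    };
    sigmadelta_config(&cfg);

    /* Slowly sweep the pulse density; after the filter, this
     * approximates a triangle wave. */
    for (;;) {
        for (int duty = -128; duty < 128; duty++) {
            sigmadelta_set_duty(SIGMADELTA_CHANNEL_0, (int8_t)duty);
            vTaskDelay(pdMS_TO_TICKS(5));
        }
    }
}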

Well, the blueprint below shows an active low-pass filter using the Sallen-Key topology, arguably the simplest active two-pole filter configuration you will find.

Figure 4 Circuit blueprint outlines a simple active low-pass filter. Source: Author
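
To put numbers to such a filter: for the unity-gain Sallen-Key low-pass, the standard design equations are fc = 1/(2π√(R1R2C1C2)) and Q = √(R1R2C1C2)/(C2(R1 + R2)), where C1 is the capacitor returned to the op-amp output and C2 the one to ground. The short C sketch below applies them to illustrative component values (not necessarily those of Figure 4):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    /* Illustrative component values, not tied to Figure 4 */
    double R1 = 10e3, R2 = 10e3;   /* ohms   */
    double C1 = 2.2e-9, C2 = 1e-9; /* farads */

    double fc = 1.0 / (2.0 * M_PI * sqrt(R1 * R2 * C1 * C2));
    double Q  = sqrt(R1 * R2 * C1 * C2) / (C2 * (R1 + R2));

    /* Prints roughly 10.7 kHz and Q of about 0.74, close to a
     * Butterworth response */
    printf("fc = %.0f Hz, Q = %.2f\n", fc, Q);
    return 0;
}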

Echoes and endings

As usual, I feel there is so much more to cover, but let’s jump to a quick wrap-up.

Whether you are decoding microphone specs or sketching out a signal chain, understanding PDM is a quiet superpower. It is not just about 1-bit streams; it’s about how digital sound travels, transforms, and finds its voice in your design. If this primer helped demystify the basics, you are already one step closer to building smarter, cleaner audio systems.

Let’s keep listening, learning, and simplifying.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

The post Pulse-density modulation (PDM) audio explained in a quick primer appeared first on EDN.

MES meets the future

Wed, 11/19/2025 - 23:08

Industry 4.0 focuses on how automation and connectivity can transform the manufacturing landscape. Manufacturing execution systems (MES) with strong automation and connectivity capabilities have thrived under the Industry 4.0 umbrella. With the recent expansion of AI usage through large language models (LLMs), Model Context Protocol, agentic AI, and related technologies, we are entering a new era in which MES and automation alone are no longer enough. Data produced on the shop floor can provide insights and lead to better decisions, and patterns can be analyzed and turned into suggestions for overcoming issues.

As factories become smarter, more connected, and increasingly autonomous, the intersection of MES, digital twins, AI-enabled robotics, and other innovations will reshape how operations are designed and optimized. This convergence is not just a technological evolution but a strategic inflection point. MES, once seen as the transactional layer of production, is transforming into the intelligence core of digital manufacturing, orchestrating every aspect of the shop floor.

MES as the digital backbone of smart manufacturing

Traditionally, MES is the operational execution king: tracking production orders, managing work in progress, and ensuring compliance and traceability. But today’s factories demand more. Static, transactional systems no longer suffice when decisions are required in near-real time and production lines operate with little margin for error.

The modern MES is evolving and assuming a role as an intelligent orchestrator, connecting data from machines, people, and processes. It is not just about tracking what happened; it can explain why it happened and provide recommendations on what to do next.

Modern MES ecosystems will become the digital nervous system of the enterprise, combining physical and digital worlds and handling and contextualizing massive streams of shop-floor data. Advanced technologies such as digital twins, AI robotics, and LLMs can thrive by having the new MES capabilities as a foundation.

A data-centric MES delivers contextualized information critical for digital twins to operate, and together, they enable instant visibility of changes in production, equipment conditions, or environmental parameters, contributing to smarter factories. (Source: Critical Manufacturing)

Digital twins: the virtual mirror of the factory

A digital twin is more than a 3D model; it is a dynamic, data-driven representation of the real-world factory, continuously synchronized with live operational data. It enables users to simulate scenarios and test improvements before they ever touch the physical production line. It’s easy to understand how dependent on meaningful data these systems are.

Performing simulations of a system as complex as a production line is impossible when relying on poor or, even worse, unreliable data. This is where a data-driven MES comes to the rescue. MES sits at the crossroads of every operational transaction: It knows what’s being produced, where, when, and by whom. It integrates human activities, machine telemetry, quality data, and performance metrics into one consistent operational narrative. A data-centric MES supplies the abundance of contextualized information crucial for digital twins to operate.

Several key elements have made it possible for MES ecosystems to evolve beyond their transactional heritage into a data-centric architecture built for interoperability and analytics. These include:

  • Unified/canonical data model: MES consolidates and contextualizes data from diverse systems (ERP, SCADA, quality, maintenance) into a single model, maintaining consistency and traceability. This common model ensures that the digital twin always reflects accurate, harmonized information.
  • Event-driven data streaming: Real-time updates are critical. An event-driven MES architecture continuously streams data to the digital twin, enabling instant visibility of changes in production, equipment conditions, or environmental parameters.
  • Edge and cloud integration: MES acts as the intelligent gateway between the edge (where data is generated) and the cloud (where digital twins and analytics reside). Edge nodes pre-process data for latency-sensitive scenarios, while MES ensures that only contextual, high-value data is passed to higher layers for simulation and visualization.
  • API-first and semantic connectivity: Modern MES systems expose data through well-defined APIs and semantic frameworks, allowing digital twin tools to query MES data dynamically. This flexibility provides the capability to “ask questions,” such as machine utilization trends or product genealogy, and receive meaningful answers in a timely manner.

Robotics: from automation to autonomous optimization

It is an established fact that automation is crucial for manufacturing optimization. However, AI is taking automation to a new level. Robotics is no longer limited to executing predefined movements; robots can now learn and adapt their behavior through data.

Traditional industrial robots operate within rigidly predefined boundaries. Their movements, cycles, and tolerances are programmed in advance, and deviations are handled manually. Robots can deliver precision, but they lack adaptability: A robot cannot determine why a deviation occurs or how to overcome it. Cameras, sensors, and built-in machine-learning models provide robots with capabilities to detect anomalies in early stages, interpret visual cues, provide recommendations, or even act autonomously. This represents a shift from reactive quality control to proactive process optimization.

But for that intelligence to drive improvement at scale, it must be based on operational context. And that’s precisely where MES comes in. As in the case of digital twins, AI-enabled robots are highly dependent on “good” data, i.e., operational context. A data-centric MES ecosystem provides the context and coordination that AI alone cannot. This functionality includes:

  • Operational context: MES can provide information such as the product, batch, production order, process parameters, and their tolerances to the robot. All of this information provides the required context for better decisions, aligned with process definition and rules.
  • Real-time feedback: Robots send performance data back to the MES, which validates it against known thresholds and logs the results for traceability and future use.
  • Closed-loop control: MES can authorize adaptive changes (speed, temperature, or torque) based on recommendations inferred from past patterns while maintaining compliance.
  • Human collaboration: Through MES dashboards and alerts, operators can monitor and oversee AI recommendations, combining human judgment with machine precision.

For this synergy to work, modern MES ecosystems must support:

  • High-volume data ingestion from sensors and vision systems
  • Edge analytics to pre-process robotic data close to the source
  • API-based communication for real-time interaction between control systems and enterprise layers
  • Centralized data lakes storing both structured and unstructured contextualized information essential for AI model training

MES in the center of innovation

Every day, we see how incredibly fast technology evolves and how instantly its applications reshape entire industries. The wave of innovation fueled by AI, LLMs, and agentic systems is redefining the boundaries of manufacturing.

MES, digital twins, and robotics can be better interconnected, contributing to smarter factories. There is no crystal ball to predict where this transformation will lead, but one thing is undeniable: Data sits at the heart of it all—not just raw data but meaningful, contextualized, and structured information. On the shop floor, this kind of data is pure gold.

MES, by its very nature, occupies a privileged position: It is becoming the bridge between operations, intelligence, and strategy. Yet to capitalize on that position, the modern MES must evolve beyond its transactional roots to become a true, data-driven ecosystem: open, scalable, intelligent, and adaptive. It must interpret context, enable real-time decisions, augment human expertise, and serve as the foundation upon which digital twins simulate, AI algorithms learn, and autonomous systems act.

This is not about replacing people with technology. When an MES provides workers with AI-driven insights grounded in operational reality, and when it translates strategic intent into executable actions, it amplifies human judgment rather than diminishing it.

The convergence is here. Technology is maturing. The competitive pressure is mounting. Manufacturers now face a defining choice: Evolve the MES into the intelligent heart of their operations or risk obsolescence as smarter, more agile competitors pull ahead.

Those who make this leap, recognizing that the future belongs to factories where human ingenuity and AI work as a team, will not just modernize their operations; they will secure their place in the future of manufacturing.

The post MES meets the future appeared first on EDN.
