SiC modules boost power cycling performance

Wolfspeed’s YM 1200-V six-pack power modules deliver up to 3× the power cycling capability of comparable devices in the same industry-standard footprint. The company reports that the modules also provide 15% higher inverter current.

Built with Gen 4 SiC MOSFETs, the modules are suited for e-mobility propulsion systems, automotive traction inverters, and hybrid electric vehicles. Their YM package incorporates a direct-cooled pin fin baseplate, sintered die attach, hard epoxy encapsulant, and copper clip interconnects. An optimized power terminal layout minimizes package inductance, reducing overshoot voltage and lowering switching losses.
In addition to their 1200-V blocking voltage, YM module variants offer current ratings of 700 A, 540 A, and 390 A, with corresponding RDS(on) values at 25°C of 1.6 mΩ, 2.1 mΩ, and 3.1 mΩ. According to Wolfspeed, the modules achieve a 22% improvement in RDS(on) at 125°C over the previous generation and reduce turn-on energy by roughly 60% across operating temperatures. An integrated soft-body diode further cuts switching losses by 30% and VDS overshoot by 50% during reverse recovery compared to the prior generation.
The 1200‑V SiC six‑pack power modules are now available for customer sampling and will reach full distributor availability in early 2026.
Power switch offers smart overload control

Joining ST’s lineup of safety switches, the IPS1050LQ is a low-side switch featuring smart overload protection with configurable inrush and current limits. Three pins allow selection between static and dynamic modes and set the operating current limit. In dynamic mode, connecting a capacitor enables an initial inrush of up to 25 A, which then steps down in stages to the programmed limit.

The output stage of the IPS1050LQ supports up to 65 V, making it suitable for industrial equipment such as PLCs and CNC machines. Its typical on-resistance of just 25 mΩ ensures energy-efficient switching for resistive, capacitive, or inductive loads, with active clamping enabling fast demagnetization of inductive loads at turn-off. Comprehensive safety features include undervoltage, overvoltage, overload, short-circuit, ground disconnection, VCC disconnection, and an overtemperature indicator pin that provides thermal protection.
Now in production, the IPS1050LQ in a 6×6-mm QFN32L package starts at $2.19 each in 1000-unit quantities.
Rad-tolerant MCUs cut space-grade costs

Vorago has announced four rad-tolerant MCUs for low-Earth-orbit (LEO) missions, which it says cost far less than conventional space-grade components. Part of the VA4 series of rad-hardened MCUs, these new chips provide an economical alternative to high-risk upscreened COTS components.

Based on Arm Cortex-M4 processors, the Radiation-Tolerant by Design (RTbD) MCUs are priced nearly 75% lower than Vorago’s HARDSIL radiation-hardened products. The RTbD lineup includes the extended-mission VA42620 and VA42630, as well as the cost-optimized VA42628 and VA42629 for short- or lower-orbit missions. By embedding radiation protection directly into the silicon, these MCUs tackle the reliability challenges of satellite constellations and provide a more efficient solution than conventional multi-chip redundancy approaches.
All four MCUs provide >30 krad(Si) total ionizing dose (TID) tolerance, with the VA42630 integrating 256 KB of nonvolatile memory. Extended-mission devices are designed for harsher orbits and primary flight control, while cost-optimized MCUs target thermal regulation and localized power management. These chips can be dropped into existing architectures with no redesign, enabling rapid deployment.
Vorago will begin shipping its first rad-tolerant chips in early Q1 2026.
Module streamlines smart home device connectivity

The KGM133S, the first in a range of Matter over Thread modules from Quectel, enables seamless interoperability for smart home devices like door locks, sensors, and lighting. Powered by Silicon Labs’ EFR32MG24 wireless chip, the module uses Matter 1.4 to connect devices across multiple ecosystems, including Apple Home, Google Home, Amazon Alexa, and Samsung SmartThings. Thread 1.4 support ensures compatibility with IPv6 addressing.

The KGM133S features an Arm Cortex-M33 processor running at up to 78 MHz, with 256 KB of SRAM and up to 3.5 MB of flash memory. With a receive sensitivity better than -105 dBm and a maximum transmit power of 19.5 dBm, the module ensures reliable signal transmission. In addition to Matter over Thread, the KGM133S also supports Zigbee 3.0 and Bluetooth LE 6.0 connectivity.
Two LGA packaging options are available for the KGM133S to accommodate both compact and slim terminal designs. The first option (12.5×13.2×2.2 mm) features a fourth-generation IPEX or pin antenna, while the second option (12.5×16.6×2.2 mm) comes with an onboard PCB antenna.
A timeline for availability of the KGM133S wireless module was not disclosed at the time of this announcement.
Compute: Powering the transition from Industry 4.0 to 5.0

Industry 4.0 has transformed manufacturing, connecting machines, automating processes, and changing how factories think and operate. But its success has revealed a new constraint: compute. As automation, AI, and data-driven decision-making scale exponentially, the world’s factories are facing a compute challenge that extends far beyond performance. The next industrial era—Industry 5.0—will bring even more compute demand as it builds on the IoT to improve collaboration between humans and machines, and between industry and the environment.
Progress in this next wave of industrial development is dependent on advances at the semiconductor level. Advances in chip design, materials science, and process innovation are essential. Alongside this, there needs to be a reimagining of how we power industrial intelligence, not just in terms of the processing capability but in how that capability is designed, sourced, and sustained.
Rethinking compute for a connected future
The exponential rise of data and compute has placed intense pressure on the chips that drive industrial automation. AI-enabled systems, predictive maintenance, and real-time digital twins all require compute to move closer to where data is created: at the edge. However, edge environments come with tight energy, size, and cooling constraints, creating a growing imbalance between compute demand and power availability.
AI and digital triplets, which build on traditional digital twin models by using agentic AI to continuously learn from and analyze data in the field, have pushed the processing requirement closer to where data is created. In use cases such as edge computing, where computation takes place directly within sensing and measuring devices, this demand can be intense. That decentralization introduces new power and efficiency pressures on infrastructure that wasn’t designed for such intensity.
The result is a growing imbalance between performance demands and the limitations of semiconductor manufacturing. Businesses must think more broadly about energy consumption, heat management, power balance, and raw materials sourcing. Sustainability can no longer be treated as an unwarranted cost or compliance exercise; it’s becoming a new indicator of competitiveness, where energy-efficient, low-emission compute enables manufacturers to meet growing data reliance without exceeding environmental limits.
Businesses must take these challenges seriously, as the demand for compute will only escalate with Industry 5.0. AI will become more embedded, and the data it relies on will grow in scale and sophistication.
If manufacturing designers dismiss these issues, they run the risk of bottlenecking their productivity with poor efficiency and sustainability. This means that when chip designers optimize for Industry 5.0 applications, they should consider responsibility, efficiency, and longevity alongside performance and cost. The challenge is no longer just “can we build faster systems?” It’s now “can we build systems that endure environmentally, economically, and geopolitically?”
Innovation starts at the material level
The semiconductor revolution of Industry 5.0 won’t be defined solely by faster chips but by the science and sustainability embedded in how those chips are made. For decades, semiconductor progress has been measured in nanometers; the next leap forward will be measured in materials. Advances in compounds such as silicon carbide and gallium nitride are improving chip performance and transforming how the industry approaches sustainability, supply chain resilience, and sovereignty.
Advances in chip design, materials science, and process innovation are essential in the next wave of industrial development. (Source: Adobe Stock)
These materials allow for higher power efficiency and longer lifespans, reducing energy consumption across industrial systems. Combined with cleaner fabrication techniques such as ambient temperature processing and hydrogen-based chemistries, they mark a significant step toward sustainable compute. The result is a new paradigm where sustainability no longer comes at an artificial premium but is an inherent feature of technological progress.
Process innovations, such as ambient temperature fabrication and green hydrogen, offer new ways to reduce environmental footprint while improving yield and reliability. Beyond the technology itself and material innovations, more focus should be placed on decentralization and alternative sources of raw materials. This will empower businesses and the countries they operate in to navigate geopolitical and supply chain challenges.
Collaboration is the new competitive edge
The compute challenge that Industry 5.0 presents isn’t an isolated problem to solve. The demand and responsibility for change doesn’t lie with a single company, government, or research body. It requires an ecosystem mindset, where collaboration is encouraged, replacing competition in key areas of innovation and infrastructure.
Collaboration between semiconductor manufacturers, industrial original equipment manufacturers, policymakers, and researchers is important to accelerate energy-efficient design and responsible sourcing. Interconnected and shared platforms within the semiconductor ecosystem de-risk tech investments. This ensures that the collective benefits of sustainability and resilience extend across the entire industrial ecosystem, not just to individual players.
The next era of industrial progress will see the most competitive organizations collaborating, with the goal of shared innovation and progress.
Powering compute in the Industry 5.0 transition
The evolution from Industry 4.0 to Industry 5.0 is more than a technological upgrade; it represents a change in attitude around how digital transformation is approached in industrial settings. This new era will see new approaches to technological sustainability, sovereignty, and collaboration that prioritize productivity and speed. Compute will be the central driver of this transition. Materials, processes, and partnerships will determine whether the industrial sector can grow without outpacing its own energy and sustainability limits.
Industry 5.0 presents a vision of industrialization that gives back more than it takes, amplifying both productivity and possibility. The transition is already underway. Now, businesses need to ensure innovation, efficiency, and resilience evolve together to power a truly sustainable era of compute.
A holiday shopping guide for engineers: 2025 edition

As of this year, EDN has consecutively published my odes to holiday-excused consumerism for more than a half-decade straight (and intentionally ahead of Black Friday, if you hadn’t already deduced), now nearing ten editions in total. Here are the 2019, 2020, 2021, 2022, 2023, and 2024 versions; I skipped a few years between 2014 and its successors.
As usual, I’ve included up-front links to prior-year versions of the Holiday Shopping Guide for Engineers because I’ve done my best here to not regurgitate any past recommendations; the stuff I’ve previously suggested largely remains valid, after all. That said, it gets increasingly difficult each year not to repeat myself! And as such, I’ve “thrown in the towel” this year, at least to some degree…you’ll find a few repeat categories this time, albeit with new product suggestions within them.
Without any further ado, and as usual, ordered solely in the order in which they initially came out of my cranium…
A Windows 11-compatible (or alternative O/S-based) computer
Microsoft’s general support for Windows 10 ended nearly a month ago (on October 14, to be exact) as I’m writing these words. For you Windows users out there, options exist for extending Windows 10 security updates (ESUs) for another year on consumer-licensed systems, both paid (spending $30 or redeeming 1,000 Microsoft Rewards points, with both ESU options covering up to 10 devices) and free (after syncing your PC settings).
If you’re an IT admin, the corporate license ESU program specifics are different; see here. And, as I covered in hands-on detail a few months back, (unsanctioned) options also exist for upgrading officially unsupported systems to Windows 11, although I don’t recommend relying on them for long-term use (assuming the hardware-hack attempt is successful at all, that is). As I wrote back in June:
The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.
You could also convert your existing PC over to run a different O/S, such as ChromeOS Flex (originally Neverware’s CloudReady, then acquired and now maintained by Google) or a Linux distro of your preference. For that matter, you could also just “get a Mac”. That said, any of these options will likely also compel conversions to new apps for the new O/S foundation. The aggregate learning curve from all these software transitions can end up being a “bridge too far”.

Instead, I’d suggest you just “bite the bullet” and buy a new PC for yourself and/or others for the holidays, before CPUs, DRAM, SSDs, and other building block components become even more supply-constrained and tariff-encumbered than they are now, and to ease the inevitable eventual transition to Windows 11.
Then donate your old hardware to charity for someone else to O/S-convert and extend its useful life. That’s what I’ll be doing, for example, with my wife’s Dell Inspiron 5570, which, as it turns out, wasn’t Windows 11-upgradeable after all.
Between now and next October, when the Windows 10 ESU runs out (unless the deadline gets extended again), we’ll replace it with the Dell 16 Plus (formerly Inspiron 16 Plus) in the above photo.
An AI-enhanced mobile device
The new Dell laptop I just mentioned, which we’d bought earlier this summer (ironically just prior to Microsoft’s unveiling of the free Windows 10 ESU option), is compatible with Microsoft’s Copilot+ specifications for AI-enhanced PCs by virtue of the system’s Intel Core Ultra 7 256V CPU with an integrated 47-TOPS NPU.
That said, although its support for local (vs conventional cloud) AI inference is nice from a future-proofing standpoint, there’s not much evidence of compelling on-client AI benefits at this early stage, save perhaps for low-latency voice interface capabilities (not to mention broader uninterrupted AI-based functionality when broadband goes down).
The current situation is very different when it comes to fully mobile devices. Yes, I know, laptops also have built-in batteries, but they often still spend much of their operating life AC-tethered, and anyway, their battery packs are much beefier than the ones in the smartphones and tablets I’m talking about here.
Local AI processing not only avoids to-and-back-from-cloud roundtrip delays (which are particularly lengthy over cellular networks), but it also doesn’t gobble up precious limited-monthly-allocation data. Then there’s the enhanced privacy of locally stored-and-processed data to consider, along with the oft-substantial power savings accrued by not needing to constantly leverage the mobile device’s Wi-Fi and cellular data subsystems.
You may indeed believe (as, full disclosure, I do) that AI features are of limited-at-best benefit at the moment, at least for the masses. But I think we can also agree that ongoing widespread-and-expanding and intense industry attention on AI will sooner or later cultivate compelling capabilities.
That’s why I’ve showcased mobile devices’ AI attributes in recent years’ announcement coverage (such as that of Google’s Pixel 10 series shown in the photo above), and why I recommend them, again from a future-proofing angle if nothing else, if you’re (and/or yours are) due for a gadget upgrade this year. Meanwhile, I’ll soldier on with my Pixel 7s…
Audio education resources
As regular readers likely already realize, audio has received particular showcase attention in my blog posts and teardowns this past year-plus (a trend which will admittedly also likely extend into at least next year). This provided, among other things, an opportunity for me to refresh and expand my intellectual understanding of the topic.
I kept coming across references to Bob Cordell, mentioning both his informative website and his classic tomes, Designing Audio Power Amplifiers (make sure you purchase the latest 2nd edition, published in 2019, whose front cover is shown above) and the newer Designing Audio Circuits and Systems, released just last year.
Fair warning: neither book is inexpensive, especially in hardback, but even in paperback, and neither is available in a lower-priced Kindle version, either. That said, per both reviews I’ve seen from others and my own impressions, they’re well worth the investments.
Another worthwhile read, this time complete with plenty of humor scattered throughout, is Schiit Happened: The Story of the World’s Most Improbable Start-Up, in this case available in both inexpensive paperback and even more cost-effective Kindle formats. Written by Jason Stoddard and Mike Moffat, the founders of Schiit Audio, whom I’ve already mentioned several times this year, it’s also available for free on the Head-Fi Forum, where Jason has continued his writing. But c’mon, folks, drop $14.99 (or $4.99) to support a scrappy U.S. audio success story.
As far as audio-related magazines go, I first off highly recommend a subscription to audioXpress. Generalist electronics design publications like EDN are great, of course, but topic-focused coverage like that offered by audioXpress for audio design makes for an effective information companion.
On the other end of the product development chain, where gear is purchased and used by owners, there’s Stereophile, for which I’ve also been a faithful reader for more years than I care to remember. And as for the creation, capture, mastering, and duplication of the music played on those systems, I highly recommend subscriptions to Sound on Sound and, if your budget allows for a second publication, Recording. Consistently great stuff, all of it.
Finally, as an analogy to my earlier EDN-plus-audioXpress pairing, back in 2021 I recommended memberships to generalist ACM and/or IEEE professional societies. This time, I’ll supplement that suggestion with an audio-focused companion, the AES (Audio Engineering Society).
Back when I was a full-time press guy with EDN, I used to be able to snag complimentary admission to the twice-yearly AES conventions along with other organization events, which were always rich sources of information and networking connection cultivation.
To my dying day, I will always remember one particularly fascinating lecture, which correlated Ludwig van Beethoven’s progressive hearing degradation and its (presenter-presumed) emotional and psychological effects to the evolution of the music styles that he composed over time. Then there were the folks from Fraunhofer whom I first met at an AES convention, kicking off a longstanding professional collaboration. And…
Audio gear
For a number of years, my Drop- (formerly Massdrop-) sourced combo of the Grace Design Standard DAC and the Objective 2 Headphone Amp Desktop Edition afforded me a sonically enhanced alternative to my computer’s built-in DAC and amp for listening to music over plugged-in headphones and powered speakers:

As I’ve “teased” in a recent writeup, however, I recently upgraded this unbalanced-connection setup to a four-component Schiit stack, complete with a snazzy aluminum-and-acrylic rack:

Why?
Part of the reason is that I wanted to sonically experience a tube-based headphone amp for myself, both in an absolute sense and relative to solid-state Schiit amplifiers also in my possession.
Part of it is that all these Schiit-sourced amps also integrate preamp outputs for alternative-listening connection to an external power amp-plus-passive speaker set:

Another part of the reason is that I’ve now got a hardware equalizer as an alternative to software EQ, the latter (obviously) only relevant for computer-sourced audio. And relatedly, part of it is that I’ve also now got a hardware-based input switcher, enabling me to listen to audio coming not only from my PC but also from another external source. What source, you might ask?
Why, one of the several turntables that I also acquired and more broadly pressed into service this past year, of course!

I’ve really enjoyed reconnecting with vinyl and accumulating an LP collection (although my wallet has admittedly taken a beating in the process), and encourage you (and yours) to do the same. Stand by for a more detailed description of my expanded office audio setup, including its balanced “stack” counterpart, in a dedicated write-up to be published shortly.
For sonically enhancing the rest of the house, where a computer isn’t the primary audio source, companies such as Bluesound and WiiM sell various all-in-one audio streamers, both power amplifier-inclusive (for use with traditional passive speakers) and amp-less (for pairing with powered speakers or intermediary connection to a standalone external amp).
A Bluesound Node N130, for example, has long resided at the “man cave” half of my office:

And the class D amplifier inside the “Pro” version of the WiiM Amp, which I plan to press into service soon in my living room, even supports the PFFB feature I recently discussed:

(Apple-reminiscent Space Gray shown and self-owned; Dark Gray and Silver also available)
More developer hardware
Here’s the other area where, as I alluded to in the intro, I’m going to overlap a bit with a past-year Holiday Shopping Guide. Two years ago, I recommended some developer kits from both the Raspberry Pi Foundation and NVIDIA, including the latter’s then-$499 Jetson Orin Nano:

It’s subsequently been “replaced”, as well as notably price-reduced, by the Orin Nano Super Developer Kit at $249.
Why the quotes around “replaced”? That’s because, as good news for anyone who acted on my earlier recommendation, the hardware’s exactly the same as before: “Super” is solely reflective of an enhanced software suite delivering claimed generative AI performance gains of up to 1.7x, and freely available to existing Jetson Orin Nano owners.
More recently, last month, NVIDIA unveiled the diminutive $3,999 DGX Spark:

with compelling potential, both per company claims and initial hands-on experiences:
As a new class of computer, DGX Spark delivers a petaflop of AI performance and 128GB of unified memory in a compact desktop form factor, giving developers the power to run inference on AI models with up to 200 billion parameters and fine-tune models of up to 70 billion parameters locally. In addition, DGX Spark lets developers create AI agents and run advanced software stacks locally.
albeit along with, it should also be noted, an irregular development history and some troubling early reviews. The system was initially referred to as Project DIGITS when unveiled publicly at the January 2025 CES. Its application processor, originally referred to as the N1X, is now renamed the GB10. Co-developed by NVIDIA (who contributed the Grace Blackwell GPU subsystem) and MediaTek (who supplied the multi-core CPU cluster and reportedly also handled full SoC integration duties), it was originally intended for—and may eventually still show up in—Arm-based Windows PCs.
But repeated development hurdles have (reportedly) delayed the actualization of both SoC and system shipment aspirations, and lingering functional bugs preclude Windows compatibility (therefore explaining the DGX Spark’s Linux O/S foundation).
More generally, just a few days ago as I write these words, MAKE Magazine’s latest issue showed up in my mailbox, containing the most recent iteration of the publication’s yearly “Guide to Boards” insert. Check it out for more hardware ideas for your upcoming projects.
A smart ring
Regular readers have likely also noticed my recent series of writeups on smart rings, comprising both an initial overview and subsequent reviews based on fingers-on evaluations.
As I write these words in mid-November, Ultrahuman’s products have been pulled from the U.S. market due to patent-infringement rulings, although they’re still available elsewhere in the world. RingConn conversely concluded a last-minute licensing agreement, enabling ongoing sales of its devices worldwide, including in the United States.
And as for the instigator of the patent infringement actions, market leader Oura, my review of the company’s Gen3 smart ring will appear at EDN shortly after you read these words, with my eval of the latest-generation Ring 4 (shown above) to follow next month.
Smart rings’ Li-ion batteries, like those of any device with fully integrated cells, won’t last forever, so you need to go into your experience with one of them eyes-open to the reality that it’ll ultimately be disposable (or, in my case, transform into a teardown project).
That said, the technology is sufficiently mature at this point that I feel comfortable recommending them to the masses. They provide useful health insights, even though they tend to notably overstate step counts for those who use computer keyboards a lot. And unlike a smart watch or other wrist-based fitness tracker, you don’t need to worry (so much, at least) about color- and style-coordinating a smart ring with the rest of your outfit ensemble.
(Not yet a) pair of smart glasses
Conversely, alas, I still can’t yet recommend smart glasses to anyone but early adopters (like me; see above). Meta’s latest announced device suite, along with various products from numerous (and a growing list of) competitors, suggests that this product category is still relatively immature, therefore dynamic in its evolutionary nature. I’d hate to suggest something for you to buy for others that’ll be obsolete in short order. For power users like you, on the other hand…
Happy holidays!
And with that, having just passed through 2,500 words, I’ll close here. Upside: plenty of additional presents-to-others-and/or-self ideas are now littering the cutting-room floor, so I’ve already got no shortage of topics for next year’s edition! Until then, sound off in the comments, and happy holidays!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- A holiday shopping guide for consumer tech
- A holiday gift wish list for 2020
- A holiday shopping guide for engineers: 2021 edition
- A holiday shopping guide for engineers: 2022 edition
- A holiday shopping guide for engineers: 2023 edition
- A holiday shopping guide for engineers: 2024 edition
Pulse-density modulation (PDM) audio explained in a quick primer

Pulse-density modulation (PDM) is a compact digital audio format used in devices like MEMS microphones and embedded systems. This compact primer eases you into the essentials of PDM audio.
Let’s begin by revisiting a ubiquitous PDM MEMS microphone module based on MP34DT01-M—an omnidirectional digital MEMS audio sensor that continues to serve as a reliable benchmark in embedded audio design.

Figure 1 A MEMS microphone mounted on a minuscule module detects sound and produces a 1-bit PDM signal. Source: Author
When properly implemented, PDM can digitally encode high-quality audio while remaining cost-effective and easy to integrate. As a result, PDM streams are now widely adopted as the standard data output format for MEMS microphones.
On paper, the anatomy of a PDM microphone boils down to a few essential building blocks:
- MEMS microphone element, typically a capacitive MEMS structure, unlike the electret capsules found in analog microphones.
- Analog preamplifier boosts the low-level signal from the MEMS element for further processing.
- PDM modulator converts the analog signal into a high-frequency, 1-bit pulse-density modulated stream, effectively acting as an integrated ADC.
- Digital interface logic handles timing, clock synchronization, and data output to the host system.
Next is the functional block diagram of the T3902, a digital MEMS microphone that integrates a microphone element, an impedance converter amplifier, and a fourth-order sigma-delta (Σ-Δ) modulator. Its PDM interface enables time multiplexing of two microphones on a single data line, synchronized by a shared clock.

Figure 2 Functional block diagram outlines the internal segments of the T3902 digital MEMS microphone. Source: TDK
The analog signal generated by the MEMS sensing element in a PDM microphone—sometimes referred to as a digital microphone—is first amplified by an internal analog preamplifier. This amplified signal is then sampled at a high rate and quantized by the PDM modulator, which combines the processes of quantization and noise shaping. The result is a single-bit output stream at the system’s sampling rate.
Noise shaping plays a critical role by pushing quantization noise out of the audible frequency range, concentrating it at higher frequencies where it can be more easily filtered out. This ensures relatively low noise within the audio band and higher noise outside it.
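For a first-order modulator, this shaping is easy to express in the z-domain (a standard result, not specific to any one part): the signal passes through essentially unchanged while the quantization error E(z) is first-differenced, giving it a high-pass characteristic:

$$Y(z) = X(z) + (1 - z^{-1})\,E(z)$$

(ignoring a one-sample delay on the signal path). The (1 − z⁻¹) factor attenuates the error at low, audible frequencies and amplifies it at high ones; higher-order modulators, such as the fourth-order Σ-Δ stage in the T3902 above, raise this factor to a higher power for even steeper shaping.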
The microphone’s interface logic accepts a master clock signal from the host device—typically a microcontroller (MCU) or a digital signal processor (DSP)—and uses it to drive the sampling and bitstream transmission. The master clock determines both the sampling rate and the bit transmission rate on the data line.
Each 1-bit sample is asserted on the data line at either the rising or falling edge of the master clock. Most PDM microphones support stereo operation by using edge-based multiplexing: one microphone transmits data on the rising edge, while the other transmits on the falling edge.
During the opposite edge, the data output enters a high-impedance state, allowing both microphones to share a single data line. The PDM receiver is then responsible for demultiplexing the combined stream and separating the two channels.
As a side note, the aforesaid microphone module is hardwired to treat data as valid when the clock signal is low.
The magic behind 1-bit audio streams
Now, back to basics. PDM is a clever way to represent a sampled signal using just a stream of single bits. It relies on delta-sigma conversion, also known as sigma-delta, and it’s the core technology behind many oversampling ADCs and DACs.
At first glance, a one-bit stream seems hopelessly noisy. But here is the trick: by sampling at very high rates and applying noise-shaping techniques, most of that noise is pushed out of the audible range—above 20 kHz—where it no longer interferes with the listening experience. That is how PDM preserves audio fidelity despite its minimalist encoding.
There is a catch, though. You cannot properly dither a 1-bit stream, which means a small amount of distortion from quantization error is always present. Still, for many applications, the trade-off is worth it.
Diving into PDM conversion and reconstruction, we begin with the direct sampling of an analog signal at a high rate—typically several megahertz or more. This produces a pulse-density modulation stream, where the density of 1s and 0s reflects the amplitude of the original signal.

Figure 3 An example that renders a single cycle of a sine wave as a digital signal using pulse density modulation. Source: Author
Naturally, the encoding relies on 1-bit delta-sigma modulation: a process that uses a one-bit quantizer to output either a 1 or a 0 depending on the instantaneous amplitude. A 1 represents a signal driven fully high, while a 0 corresponds to fully low.
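To make that concrete, here is a minimal first-order delta-sigma modulator in C. It’s an illustrative sketch, not code from any particular datasheet; the function name, the normalized ±1.0 input range, and the one-byte-per-bit output format are all assumptions chosen for clarity:

```c
#include <stddef.h>
#include <stdint.h>

/* First-order delta-sigma modulator (illustrative sketch): converts
 * normalized samples in the range -1.0 to +1.0 into a 1-bit PDM
 * stream, stored one bit per byte for readability. */
void pdm_encode(const float *in, uint8_t *pdm_out, size_t n)
{
    float integrator = 0.0f;  /* running error accumulator ("sigma") */
    float feedback   = 0.0f;  /* previous quantizer decision, as +/-1 */

    for (size_t i = 0; i < n; i++) {
        integrator += in[i] - feedback;  /* "delta", then "sigma" */
        if (integrator >= 0.0f) {        /* 1-bit quantizer */
            pdm_out[i] = 1;
            feedback = 1.0f;
        } else {
            pdm_out[i] = 0;
            feedback = -1.0f;
        }
    }
}
```

Feed it a slow sine wave and the output’s density of 1s rises and falls with the waveform, exactly as Figure 3 depicts.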
And, because the audio frequencies of interest are much lower than the PDM’s sampling rate, reconstruction is straightforward. Passing the PDM stream through a low-pass filter (LPF) effectively restores the analog waveform. This works because the delta-sigma modulator shapes quantization noise into higher frequencies, which the low-pass filter attenuates, preserving the desired low-frequency content.
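The reconstruction side can be sketched just as briefly. The boxcar averager below is the crudest usable low-pass filter; the OSR value and names are again illustrative assumptions, and a production decimator would use a sharper CIC or FIR filter:

```c
#include <stddef.h>
#include <stdint.h>

#define OSR 64  /* PDM bits averaged per PCM sample (assumed ratio) */

/* Boxcar low-pass filter plus decimation (illustrative sketch):
 * counts the 1s in each block of OSR bits and rescales the bit
 * density to a signed 16-bit PCM sample. */
size_t pdm_to_pcm(const uint8_t *pdm, size_t n_bits, int16_t *pcm)
{
    size_t n_samples = n_bits / OSR;

    for (size_t s = 0; s < n_samples; s++) {
        int ones = 0;
        for (int b = 0; b < OSR; b++)
            ones += pdm[s * OSR + b];
        /* Map bit density 0..OSR onto roughly -32767..+32767 */
        pcm[s] = (int16_t)(((ones * 2 - OSR) * 32767) / OSR);
    }
    return n_samples;
}
```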
Inside digital audio: Formats at a glance
It goes without saying that in digital audio systems, PCM, I²S, PWM, and PDM each serve distinct roles tailored to specific needs. Pulse code modulation (PCM) remains the most widely used format for representing audio signals as discrete amplitude samples. Inter-IC Sound (I²S) excels in precise, low-latency audio data transmission and supports flexible stereo and multichannel configurations, making it a popular choice for inter-device communication.
Though not typically used for audio signal representation, pulse width modulation (PWM) plays a vital role in audio amplification—especially in Class D amplifiers—by encoding amplitude through duty cycle variation, enabling efficient speaker control with minimal power loss.
On a side note, you can convert a PCM signal to PDM by first increasing its sample rate (interpolation), then reducing its resolution to a single bit. Conversely, a PDM signal can be converted back to PCM by reducing its sampling rate (decimation) and increasing its word length. In both cases, the ratio of the PDM bit rate to the PCM sample rate is known as the oversampling ratio (OSR).
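For example, a 3.072-MHz PDM bitstream decimated to 48-kHz PCM corresponds to an OSR of 3,072,000 / 48,000 = 64.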
Crisp audio for makers: PDM to power simplified
Cheerfully compact and maker-friendly PDM input Class D audio power amplifier ICs simplify the path from microphone to speaker. By accepting digital PDM signals directly—often from MEMS mics—they scale down both complexity and component count. Their efficient Class D architecture keeps the power draw low and heat minimal, which is ideal for battery-powered builds.
That is to say, audio ICs like MAX98358 require minimal external components, making prototyping a pleasure. With filterless Class D output and built-in features, they simplify audio design, freeing makers to focus on creativity rather than complexity.
Sidebar: For those eager to experiment, ample example code is available online for SoCs like the ESP32-S3, which use a sigma-delta driver to produce modulated output on a GPIO pin. Then, with a passive or active low-pass filter, this output can be shaped into a clean, usable analog signal.
Well, the blueprint below shows an active low-pass filter using the Sallen & Key topology, arguably the simplest active two-pole filter configuration you will find.

Figure 4 Circuit blueprint outlines a simple active low-pass filter. Source: Author
Echoes and endings
As usual, I feel there is so much more to cover, but let’s jump to a quick wrap-up.
Whether you are decoding microphone specs or sketching out a signal chain, understanding PDM is a quiet superpower. It is not just about 1-bit streams; it’s about how digital sound travels, transforms, and finds its voice in your design. If this primer helped demystify the basics, you are already one step closer to building smarter, cleaner audio systems.
Let’s keep listening, learning, and simplifying.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Fundamentals of USB Audio
- Audio design by graphical tool
- Hands-on review: Is a premium digital audio player worth the price?
- Understanding superior professional audio design: A block-by-block approach
- Edge AI Game Changer: Actions Technology Is Redefining the Future of Audio Chips
MES meets the future

Industry 4.0 focuses on how automation and connectivity could transform the manufacturing canvas. Manufacturing execution systems (MES) with strong automation and connectivity capabilities thrived under the Industry 4.0 umbrella. With the recent expansion of AI usage through large language models (LLMs), Model Context Protocol, agentic AI, etc., we are facing a new era where MES and automation are no longer enough. Data produced on the shop floor can provide insights and lead to better decisions, and patterns can be analyzed and used as suggestions to overcome issues.
As factories become smarter, more connected, and increasingly autonomous, the intersection of MES, digital twins, AI-enabled robotics, and other innovations will reshape how operations are designed and optimized. This convergence is not just a technological evolution but a strategic inflection point. MES, once seen as the transactional layer of production, is transforming into the intelligence core of digital manufacturing, orchestrating every aspect of the shop floor.
MES as the digital backbone of smart manufacturing
Traditionally, MES is the operational execution king: tracking production orders, managing work in progress, and ensuring compliance and traceability. But today’s factories demand more. Static, transactional systems no longer suffice when decisions are required in near-real time and production lines operate with little margin for error.
The modern MES is evolving and assuming a role as an intelligent orchestrator, connecting data from machines, people, and processes. It is not just about tracking what happened; it can explain why it happened and provide recommendations on what to do next.
Modern MES ecosystems will become the digital nervous system of the enterprise, combining physical and digital worlds and handling and contextualizing massive streams of shop-floor data. Advanced technologies such as digital twins, AI robotics, and LLMs can thrive by having the new MES capabilities as a foundation.
A data-centric MES delivers contextualized information critical for digital twins to operate, and together, they enable instant visibility of changes in production, equipment conditions, or environmental parameters, contributing to smarter factories. (Source: Critical Manufacturing)
Digital twins: the virtual mirror of the factory
A digital twin is more than a 3D model; it is a dynamic, data-driven representation of the real-world factory, continuously synchronized with live operational data. It enables users to simulate scenarios and test improvements before they ever touch the physical production line. It’s easy to understand how dependent on meaningful data these systems are.
Performing simulations of a system as complex as a production line is impossible when relying on poor or, even worse, unreliable data. This is where a data-driven MES comes to the rescue. MES sits at the crossroads of every operational transaction: It knows what’s being produced, where, when, and by whom. It integrates human activities, machine telemetry, quality data, and performance metrics into one consistent operational narrative. A data-centric MES provides the abundance of contextualized information crucial for digital twins to operate.
Several key elements made it possible for the MES ecosystems to evolve beyond their transactional heritage into a data-centric architecture built for interoperability and analytics. These include:
- Unified/canonical data model: MES consolidates and contextualizes data from diverse systems (ERP, SCADA, quality, maintenance) into a single model, maintaining consistency and traceability. This common model ensures that the digital twin always reflects accurate, harmonized information.
- Event-driven data streaming: Real-time updates are critical. An event-driven MES architecture continuously streams data to the digital twin, enabling instant visibility of changes in production, equipment conditions, or environmental parameters.
- Edge and cloud integration: MES acts as the intelligent gateway between the edge (where data is generated) and the cloud (where digital twins and analytics reside). Edge nodes pre-process data for latency-sensitive scenarios, while MES ensures that only contextual, high-value data is passed to higher layers for simulation and visualization.
- API-first and semantic connectivity: Modern MES systems expose data through well-defined APIs and semantic frameworks, allowing digital twin tools to query MES data dynamically. This flexibility provides the capability to “ask questions,” such as machine utilization trends or product genealogy, and receive meaningful answers in a timely manner.
It is an established fact that automation is crucial for manufacturing optimization. However, AI is bringing automation to a new level. Robotics is no longer limited to executing predefined movements; now, capable robots may learn and adapt their behavior through data.
Traditional industrial robots operate within rigidly predefined boundaries. Their movements, cycles, and tolerances are programmed in advance, and deviations are handled manually. Robots can deliver precision, but they lack adaptability: A robot cannot determine why a deviation occurs or how to overcome it. Cameras, sensors, and built-in machine-learning models provide robots with capabilities to detect anomalies in early stages, interpret visual cues, provide recommendations, or even act autonomously. This represents a shift from reactive quality control to proactive process optimization.
But for that intelligence to drive improvement at scale, it must be based on operational context. And that’s precisely where MES comes in. As in the case of digital twins, AI-enabled robots are highly dependent on “good” data, i.e., operational context. A data-centric MES ecosystem provides the context and coordination that AI alone cannot. This functionality includes:
- Operational context: MES can provide information such as the product, batch, production order, process parameters, and their tolerances to the robot. All of this information provides the required context for better decisions, aligned with process definition and rules.
- Real-time feedback: Robots send performance data back to the MES, validating it against known thresholds, and log results for traceability and future usage.
- Closed-loop control: MES can authorize adaptive changes (speed, temperature, or torque) based on recommendations inferred from past patterns while maintaining compliance.
- Human collaboration: Through MES dashboards and alerts, operators can monitor and oversee AI recommendations, combining human judgment with machine precision.
For this synergy to work, modern MES ecosystems must support:
- High-volume data ingestion from sensors and vision systems
- Edge analytics to pre-process robotic data close to the source
- API-based communication for real-time interaction between control systems and enterprise layers
- Centralized and contextualized data lakes storing both structured and unstructured contextualized information essential for AI model training
Every day, we see how incredibly fast technology evolves and how instantly its applications reshape entire industries. The wave of innovation fueled by AI, LLMs, and agentic systems is redefining the boundaries of manufacturing.
MES, digital twins, and robotics can be better interconnected, contributing to smarter factories. There is no crystal ball to predict where this transformation will lead, but one thing is undeniable: Data sits at the heart of it all—not just raw data but meaningful, contextualized, and structured information. On the shop floor, this kind of data is pure gold.
MES, by its very nature, occupies a privileged position: It is becoming the bridge between operations, intelligence, and strategy. Yet to leverage that position, the modern MES must evolve beyond its transactional roots to become a true, data-driven ecosystem: open, scalable, intelligent, and adaptive. It must interpret context, enable real-time decisions, augment human expertise, and serve as the foundation upon which digital twins simulate, AI algorithms learn, and autonomous systems act.
This is not about replacing people with technology. When an MES provides workers with AI-driven insights grounded in operational reality, and when it translates strategic intent into executable actions, it amplifies human judgment rather than diminishing it.
The convergence is here. Technology is maturing. The competitive pressure is mounting. Manufacturers now face a defining choice: Evolve the MES into the intelligent heart of their operations or risk obsolescence as smarter, more agile competitors pull ahead.
Those who make this leap, recognizing that the future belongs to factories where human ingenuity and AI work as a team, will not just modernize their operations; they will secure their place in the future of manufacturing.
How to design a digital-controlled PFC, Part 1
Shifting from analog to digital control
An AC/DC power supply with input power greater than 75 W requires power factor correction (PFC) to:
- Take the universal AC input (90 V to 264 V) and rectify that input to a DC voltage.
- Maintain the output voltage at a constant level (usually 400 V) with a voltage control loop.
- Force the input current to follow the input voltage such that the electronic load appears to be a pure resistor with a current control loop.
Designing an analog-controlled PFC is relatively easy because the voltage and current control loops are already built into the controller, making it almost plug-and-play. The power-supply industry is currently transitioning from analog control to digital control, especially in high-performance power-supply design. In fact, nearly all newly designed power supplies in data centers use digital control.
Compared to analog control, digital-controlled PFC provides lower total harmonic distortion (THD), a better power factor, and higher efficiency, along with integrated housekeeping functions.
Switching from analog control to digital control is not easy, however; you will face new challenges as continuous signals are represented in a discrete format. And unlike an analog controller, the MCU used in digital control is essentially a “blank” chip; you must write firmware to implement the control algorithms.
Writing the correct firmware can be a headache for someone who has never done this before. To help you learn digital control, in this article series, I’ll provide a step-by-step guide on how to design a digital-controlled PFC, using totem-pole bridgeless PFC as a design example to illustrate the advantages of digital control.
A digital-controlled PFC system
Among all PFC topologies, totem-pole bridgeless PFC provides the best efficiency. Figure 1 shows a typical totem-pole bridgeless PFC structure.
Figure 1 Totem-pole bridgeless PFC where Q1 and Q2 are high-frequency switches and will work as either a PFC boost switch or synchronous switch based on the VAC polarity. Source: Texas Instruments
Q1 and Q2 are high-frequency switches. Based on VAC polarity, Q1 and Q2 work as a PFC boost switch or synchronous switch, alternatively.
At a positive AC cycle (where the AC line is higher than neutral), Q2 is the boost switch, while Q1 works as a synchronous switch. The pulse-width modulation (PWM) signal for Q1 and Q2 are complementary: Q2 is controlled by D (the duty cycle from the control loop), while Q1 is controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle.
At a negative AC cycle (where the AC neutral is higher than line), the functionality of Q1 and Q2 swaps: Q1 becomes the boost switch, while Q2 works as a synchronous switch. The PWM signal for Q1 and Q2 are still complementary, but D now controls Q1 and 1-D controls Q2. Q3 remains on and Q4 remains off for the whole negative AC half cycle.
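Expressed as firmware logic, that half-cycle role swap looks roughly like the C sketch below. The function and driver-call names are illustrative placeholders standing in for a real PWM/GPIO driver, not a specific vendor API:

```c
#include <stdio.h>

typedef enum { Q1, Q2, Q3, Q4 } sw_t;

/* Stubs standing in for real PWM/GPIO driver calls (illustrative). */
static void set_pwm_duty(sw_t q, float d) { printf("Q%d duty=%.2f\n", q + 1, d); }
static void set_switch(sw_t q, int on)    { printf("Q%d %s\n", q + 1, on ? "on" : "off"); }

/* Map the control-loop duty cycle D onto the four switches according
 * to the AC polarity: Q1/Q2 get complementary high-frequency PWM,
 * while Q3/Q4 change state only at the AC zero crossing. */
void update_totem_pole_pwm(float vac, float duty)
{
    if (vac > 0.0f) {                   /* positive half cycle */
        set_pwm_duty(Q2, duty);         /* Q2: boost switch, D */
        set_pwm_duty(Q1, 1.0f - duty);  /* Q1: synchronous, 1-D */
        set_switch(Q4, 1);
        set_switch(Q3, 0);
    } else {                            /* negative half cycle: roles swap */
        set_pwm_duty(Q1, duty);
        set_pwm_duty(Q2, 1.0f - duty);
        set_switch(Q3, 1);
        set_switch(Q4, 0);
    }
}
```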
Figure 2 shows a typical digital-controlled PFC system block diagram with three major function blocks:
- An ADC to sense the VAC voltage, VOUT voltage, and inductor current for conversion into digital signals.
- A firmware-based average current-mode controller.
- A digital PWM generator.

Figure 2 Block diagram of a typical digital-controlled PFC system with three major function blocks. Source: Texas Instruments
I’ll introduce these function blocks one by one.
The ADC
An ADC is the fundamental element for an MCU; it senses an analog input signal and converts it to a digital signal. For a 12-bit ADC with a 3.3-V reference, Equation 1 expresses the ADC result for a given input signal Vin as:

$$\text{ADC result} = \frac{V_{in}}{3.3\ \text{V}} \times (2^{12} - 1) \qquad \text{(Equation 1)}$$

Conversely, based on a given ADC conversion result, Equation 2 expresses the corresponding analog input signal as:

$$V_{in} = \frac{\text{ADC result}}{2^{12} - 1} \times 3.3\ \text{V} \qquad \text{(Equation 2)}$$
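Translated into code, Equations 1 and 2 become a pair of one-line conversions; this sketch assumes the 12-bit, 3.3-V case discussed above, with macro and function names of my own choosing:

```c
#include <stdint.h>

#define ADC_FULL_SCALE 4095.0f  /* (2^12 - 1) counts for a 12-bit ADC */
#define ADC_VREF       3.3f     /* ADC reference voltage, volts */

/* Equation 1: analog input voltage to expected ADC counts */
static inline uint16_t volts_to_counts(float vin)
{
    return (uint16_t)(vin / ADC_VREF * ADC_FULL_SCALE + 0.5f);
}

/* Equation 2: ADC conversion result back to volts */
static inline float counts_to_volts(uint16_t counts)
{
    return (float)counts / ADC_FULL_SCALE * ADC_VREF;
}
```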
To obtain an accurate measurement, the ADC sampling rate must follow the Nyquist theorem, which states that a continuous analog signal can be perfectly reconstructed from its samples if the signal is sampled at a rate greater than twice its highest frequency component.
This minimum sampling rate, known as the Nyquist rate, prevents aliasing, a phenomenon where higher frequencies appear as lower frequencies after sampling, thus losing information about the original signal. For this reason, the ADC sampling rate is set at a much higher rate (tens of kilohertz) than the AC frequency (50 or 60 Hz).
Input AC voltage sensing
The AC input is high voltage; it cannot connect to the ADC pin directly. You must use a voltage divider, as shown in Figure 3, to reduce the AC input magnitude.

Figure 3 Input voltage sensing that allows you to connect the high AC input voltage to the ADC pin. Source: Texas Instruments
The input signal to the ADC pin should be within the measurement range of the ADC (0 V to 3.3 V). But to obtain a better signal-to-noise ratio, the input signal should be as big as possible. Hence, the voltage divider for VAC should follow Equation 3:

$$\frac{R_{bottom}}{R_{top} + R_{bottom}} = \frac{3.3\ \text{V}}{V_{AC\_MAX}} \qquad \text{(Equation 3)}$$

where VAC_MAX is the peak value of the maximum VAC voltage that you want to measure, and Rtop and Rbottom denote the divider’s upper and lower resistances in Figure 3.
Adding a small capacitor (C) with low equivalent series resistance (ESR) in the voltage divider can remove any potential high-frequency noise; however, you should place C as close as possible to the ADC pin.
Two ADCs measure the AC line and neutral voltages; subtracting the two readings using firmware will obtain the VAC signal.
Output voltage sensing
Similarly, resistor dividers will attenuate the output voltage, as shown in Figure 4, then connect to an ADC pin. Again, adding C with low ESR in the voltage divider removes any potential high-frequency noise, with C placed as close as possible to the ADC pin.

Figure 4 Resistor divider for output voltage sensing, where C removes any potential high-frequency noise. Source: Texas Instruments
To fully harness the ADC measurement range, the voltage divider for VOUT should follow Equation 4:

$$\frac{R_{bottom}}{R_{top} + R_{bottom}} = \frac{3.3\ \text{V}}{V_{OUT\_OVP}} \qquad \text{(Equation 4)}$$

where VOUT_OVP is the output overvoltage protection threshold, and Rtop and Rbottom again denote the divider’s upper and lower resistances.
AC current sensing
In a totem-pole bridgeless PFC, the inductor current is bidirectional, requiring a bidirectional current sensor such as a Hall-effect sensor. With a Hall-effect sensor, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 5.

Figure 5 The bidirectional Hall-effect current sensor output is a sine wave with a DC offset when the input is a sine wave. Source: Texas Instruments
The Hall-effect sensor you use may have an output range that is less than what the ADC can measure. Scaling the Hall-effect sensor output to match the ADC measurement range using the circuit shown in Figure 6 will fully harness the ADC measurement range.
Figure 6 Hall-effect sensor output amplifier used to scale the Hall-effect sensor output to match the ADC measurement range. Source: Texas Instruments
Equation 5 expresses the amplification of the Hall-effect sensor output:
$$\text{Gain} = \frac{3.3\ \text{V}}{V_{Hall\_max} - V_{Hall\_min}} \qquad \text{(Equation 5)}$$

where VHall_max and VHall_min bound the Hall-effect sensor’s output swing.
As I mentioned earlier, because the digital controller MCU is a blank chip, you must write firmware to mimic the PFC control algorithm used in the analog controller. This includes voltage loop implementation, current reference generation, current loop implementation, and system protection. I’ll go over these implementations in Part 2 of this article series.
Digital compensator
In Figure 7, GV and GI are compensators for the voltage loop and current loop. One difference between analog control and digital control is that in analog control, the compensator is usually implemented through an operational amplifier, whereas digital control uses a firmware-based proportional-integral-derivative (PID) compensator.
For PFC, its small-signal model is a first-order system; therefore, a proportional-integral (PI) compensator is enough to obtain good bandwidth and phase margin. Figure 7 shows a typical digital PI controller structure.

Figure 7 A digital PI compensator where r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. Source: Texas Instruments
In Figure 7, r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. The compensator output, u(k), clamps to a specific range. The compensator also contains an anti-windup reset logic that allows the integral path to recover from saturation.
Figure 8 shows a C code implementation example for this digital PI compensator.

Figure 8 C code example for a digital PI compensator. Source: Texas Instruments
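The essential structure is compact enough to restate as a minimal sketch: a PI update with output clamping and conditional-integration anti-windup. The type and variable names below are my own, and TI’s Digital Control Library [1] provides production-grade equivalents:

```c
typedef struct {
    float kp, ki;            /* proportional and integral gains */
    float integral;          /* integral accumulator state */
    float out_min, out_max;  /* output clamp range */
} pi_ctrl_t;

/* One PI update: r is the reference r(k), y the feedback y(k). */
float pi_update(pi_ctrl_t *c, float r, float y)
{
    float e = r - y;                            /* error */
    float u = c->kp * e + c->integral + c->ki * e;

    if (u > c->out_max) {
        u = c->out_max;            /* clamp high; hold the integrator */
    } else if (u < c->out_min) {
        u = c->out_min;            /* clamp low; hold the integrator */
    } else {
        c->integral += c->ki * e;  /* integrate only when unsaturated,
                                      so the integral path recovers
                                      quickly from saturation */
    }
    return u;
}
```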
For other digital compensators such as PID, nonlinear PID, and first-, second-, and third-order compensators, see reference [1].
S/Z domain conversion
If you have an analog compensator that works well, and you want to use the same compensator in digital-controlled PFC, you can convert it through S/Z domain conversion. Assume that you have a type II compensator, as shown in Equation 6:

$$H(s) = \frac{K\left(1 + \frac{s}{\omega_z}\right)}{s\left(1 + \frac{s}{\omega_p}\right)} \qquad \text{(Equation 6)}$$

Replace s with the bilinear transformation (Equation 7):

$$s = \frac{2}{T_s} \cdot \frac{1 - z^{-1}}{1 + z^{-1}} \qquad \text{(Equation 7)}$$
where Ts is the ADC sampling period.
Then H(s) is converted to H(z), as shown in Equation 8:

$$H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}} \qquad \text{(Equation 8)}$$

Rewrite Equation 8 as Equation 9:

$$u_n = b_0 e_n + b_1 e_{n-1} + b_2 e_{n-2} - a_1 u_{n-1} - a_2 u_{n-2} \qquad \text{(Equation 9)}$$
To implement Equation 9 in a digital controller, store two previous control output variables, un-1 and un-2, and two previous error histories, en-1 and en-2. Then use the current error en and Equation 9 to calculate the current control output, un.
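In C, that bookkeeping reduces to a few multiply-accumulates and a history shift. This sketch assumes the coefficient naming of Equations 8 and 9 as reconstructed above:

```c
/* Second-order compensator state: previous outputs and errors */
static float u1, u2;  /* u(n-1), u(n-2) */
static float e1, e2;  /* e(n-1), e(n-2) */

/* One control-law update per ADC sample: Equation 9 in code.
 * b0..b2 and a1, a2 come from the bilinear transform of H(s). */
float compensator_update(float en, float b0, float b1, float b2,
                         float a1, float a2)
{
    float un = b0 * en + b1 * e1 + b2 * e2 - a1 * u1 - a2 * u2;

    u2 = u1; u1 = un;  /* shift output history */
    e2 = e1; e1 = en;  /* shift error history */
    return un;
}
```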
Digital PWM generation
A digital controller generates a PWM signal much like an analog controller, with the exception that a clock counter generates the RAMP signal; therefore, the PWM signal has limited resolution. The RAMP counter is configurable as up count, down count, or up-down count.
Figure 9 shows the generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation.

Figure 9 Generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation. Source: Texas Instruments
Programming the PERIOD resistor of the PWM generator will determine the switching frequency. For up-count and down-count mode, Equation 10 calculates the PERIOD register value as:

where fclk is the counter clock frequency and fsw is the desired switching frequency.
For the up-down count mode, Equation 11 calculates the PERIOD register value as:
PERIOD = fclk/(2 × fsw)
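As a quick numeric sanity check, here is a short C fragment applying Equations 10 and 11. The 120-MHz counter clock and 100-kHz switching frequency are assumed example values, not figures from the article; some counters also define the period as PERIOD + 1 counts, so check the device reference manual.

```c
#include <stdint.h>

/* Worked PERIOD-register arithmetic; clock and switching frequency
 * are assumed example values. */
#define F_CLK_HZ 120000000UL /* PWM counter clock, fclk */
#define F_SW_HZ     100000UL /* desired switching frequency, fsw */

/* Up-count or down-count mode (Equation 10): PERIOD = fclk / fsw */
static const uint32_t period_sawtooth = F_CLK_HZ / F_SW_HZ;         /* 1200 */

/* Up-down count mode (Equation 11): PERIOD = fclk / (2 * fsw) */
static const uint32_t period_triangle = F_CLK_HZ / (2UL * F_SW_HZ); /* 600 */
```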
Figure 10 shows an example of using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC.

Figure 10 Using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC. Source: Texas Instruments
Equation 12 shows that COMP equals the current-loop GI output multiplied by the switching period:
COMP = GI × PERIOD
The higher the COMP value, the larger the duty cycle D.
To prevent shoot-through between the top and bottom switches, adding a delay on the rising edges of PWMA and PWMB inserts dead time between them. This delay is programmable, which means it's possible to dynamically adjust the dead time to optimize performance.
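The delay value is just the requested dead time expressed in counter ticks. A hedged C sketch of that conversion follows; the function name and the 120-MHz clock in the example are assumptions, not a specific device's API.

```c
#include <stdint.h>

/* Convert a requested dead time in nanoseconds into rising-edge
 * delay counts. Generic sketch; not tied to any particular PWM IP. */
static uint32_t deadtime_ticks(uint32_t dead_ns, uint32_t fclk_hz)
{
    /* ticks = dead_ns * fclk / 1e9, rounded up so the inserted dead
       time is never shorter than requested */
    return (uint32_t)(((uint64_t)dead_ns * fclk_hz + 999999999ULL)
                      / 1000000000ULL);
}

/* Example: deadtime_ticks(200, 120000000) returns 24 counts, which
   would be loaded into both the PWMA and PWMB rising-edge delays. */
```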
Blocks in digital-controlled PFC
Now that you have learned about the blocks used in digital-controlled PFC, it's time to close the control loop. In the next installment, I'll discuss how to write firmware to implement an average current-mode controller.
Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.
Reference
- “C2000 Digital Control Library User’s Guide,” TI literature No. SPRUID3, January 2017.
Related Content
- Digital control for power factor correction
- Digital control unveils a new epoch in PFC design
- Power Tips #124: How to improve the power factor of a PFC
- Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC
- Power Tips #116: How to reduce THD of a PFC
The post How to design a digital-controlled PFC, Part 1 appeared first on EDN.
Optical combs yield extreme-accuracy gigahertz RF oscillator

It may seem at times that there is a divide between the optical/photonic domain and the RF one, with the terahertz zone between them as a demarcation. If you need to make a transition between the photonic and RF worlds, you use electro-optical devices such as LEDs and photodetectors of various types. Increasingly, all-optical or mostly optical systems are being used to perform functions in the optical band where electronic components can't fulfill the needs, even pushing electronic approaches out of the picture.
In recent years, this divide has also been bridged by newer, advanced technologies such as integrated photonics where optical functions such as lasers, waveguides, tunable elements, filters, and splitters are fabricated on an optically friendly substrate such as lithium niobate (LiNbO3). There are even on-chip integrated transceivers and interconnects such as the ones being developed by Ayar Labs. The capabilities of some of these single- or stacked-chip electro-optical devices are very impressive.
However, there is another way in which electronics and optics are working together with a synergistic outcome. The optical frequency comb (OFC), also called optical comb, was originally developed about 25 years ago—for which John Hall and Theodor Hänsch received the 2005 Nobel Prize in Physics—to count the cycles from optical atomic clocks and for precision laser-based spectroscopy.
It has since found many other uses, of course, as it offers outstanding phase stability at optical frequencies for tuning or as a local oscillator (LO). Some of the diverse applications include X-ray and attosecond pulse generation, trace gas sensing in the oil and gas industry, tests of fundamental physics with atomic clocks, long-range optical links, calibration of atomic spectrographs, precision time/frequency transfer over fiber and through free space, and precision ranging.
Use of optical components is not limited to the optical-only domain. In the last few years, researchers have devised ways to use the incredible precision of the OFC to generate highly stable RF carriers in the 10-GHz range. Phase jitter in the optical signal is actually reduced as part of the down-conversion process, so the RF local oscillator has better performance than its source comb.
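The leverage comes from optical frequency division: dividing a carrier at optical frequency fopt down to an RF output at fRF reduces phase noise by the division ratio squared, i.e., by 20·log10(N) in dB. A rough back-of-envelope illustration, where the 193-THz (1550-nm-band) reference frequency is an assumed value rather than a figure from the work described here:

N = fopt/fRF = 193 THz/20 GHz ≈ 9650, so 20·log10(9650) ≈ 79.7 dB

In other words, the divided-down 20-GHz output can sit roughly 80 dB below the optical reference's phase noise, which is why the RF carrier outperforms its source comb.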
This is not an intuitive down-conversion scheme (Figure 1).

Figure 1 Two semiconductor lasers are injection-locked to chip-based spiral resonators. The optical modes of the spiral resonators are aligned, using temperature control, to the modes of the high-finesse Fabry-Perot (F-P) cavity for Pound–Drever–Hall (PDH) locking (a). A microcomb is generated in a coupled dual-ring resonator and is heterodyned with the two stabilized lasers. The beat notes are mixed to produce an intermediate frequency, fIF, which is phase-locked by feedback to the current supply of the microcomb seed laser (b). A modified uni-traveling carrier (MUTC) photodetector chip is used to convert the microcomb’s optical output to a 20-GHz microwave signal; a MUTC photodetector has response to hundreds of GHz (c). Source: Nature
But this simplified schematic diagram does not reveal the true complexity and sophistication of the approach, which is illustrated in Figure 2.

Figure 2 Two distributed-feedback (DFB) lasers at 1557.3 and 1562.5 nm are self-injection-locked (SIL) to Si3N4 spiral resonators, amplified, and locked to the same miniature F-P cavity. A 6-nm broad frequency comb with an approximately 20-GHz repetition rate is generated in a coupled-ring resonator. The microcomb is seeded by an integrated DFB laser, which is self-injection-locked to the coupled-ring microresonator. The frequency comb passes through a notch filter to suppress the central line and is then amplified to 60 mW total optical power. The frequency comb is split to beat with each of the PDH-locked SIL continuous-wave references. The two beat notes are amplified, filtered, and then mixed to produce fIF, which is phase-locked to a reference frequency. The feedback for microcomb stabilization is provided to the current supply of the microcomb seed laser. Lastly, part of the generated microcomb is detected in an MUTC detector to extract the low-noise 20-GHz RF signal. Source: Nature
At present, this is not implemented as a single-chip device or even as a system with just a few discrete optical components; many of the needed precision functions are only available on individual substrates. A complete high-performance system occupies a rack-sized chassis that fits in a single-height bay.
However, there has been significant progress on putting multiple functional blocks onto a single-chip substrate, so it wouldn't be surprising to see a monolithic (or nearly so) device within a decade, or perhaps in just a few years.
What sort of performance can such a system deliver? There are lots of numbers and perspectives to consider, and testing these systems—at these levels of performance—to assess their capabilities is as much of a challenge as fabricating them. It’s the metrology dilemma: how do you test a precision device? And how do you validate the testing arrangement itself?
One test result indicates that for a 10-GHz carrier, the phase noise is −102 dBc/Hz at 100 Hz offset and decreases to −141 dBc/Hz at 10 kHz offset. Another characterization compares this performance to that of other available techniques (Figure 3).

Figure 3 The platforms are all scaled to a 10-GHz carrier and categorized based on the integration capability of the microcomb generator and the reference laser source, excluding the interconnecting optical/electrical parts. Filled squares are based on the optical frequency division (OFD) approach, while blank squares are standalone microcombs: 22-GHz silica microcomb (i); 5-GHz Si3N4 microcomb (ii); 10.8-GHz Si3N4 microcomb (iii); 22-GHz microcomb (iv); MgF2 microcomb (v); 100-GHz Si3N4 microcomb (vi); 22-GHz fiber-stabilized SiO2 microcomb (vii); MgF2 microcomb (viii); 14-GHz MgF2 microcomb pumped by an ultrastable laser (ix); and 14-GHz microcomb-based transfer oscillator (x). Source: Nature
There are many good online resources available that explain in detail the use of optical combs for RF-carrier generation. Among these are “Photonic chip-based low-noise microwave oscillator” (Nature); “Compact and ultrastable photonic microwave oscillator” (Optics Letters via ResearchGate); and “Photonic Microwave Sources Divide Noise and Shift Paradigms” (Photonics Spectra).
In some ways, it seems there’s a “frenemy” relationship between today’s advanced photonics and the conventional world of RF-based signal processing. But as has usually been the case, the best technology will win out, and it will borrow from and collaborate with others. Photonics and electronics each have their unique attributes and bring something to the party, while their integrated pairing will undoubtedly enable functions we can’t fully envision—at least not yet.
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- Is Optical Computing in Our Future?
- Use optical fiber as an isolated current sensor?
- Analog Optical Fiber Forges RF-Link Alternative
- Silicon yields phased-arrays for optics, not just RF
- Attosecond laser pulses drive petahertz optical transistor switching
The post Optical combs yield extreme-accuracy gigahertz RF oscillator appeared first on EDN.
High-performance MCUs target industrial applications

STMicroelectronics raises the performance bar for embedded edge AI with its new STM32V8 high-performance microcontrollers (MCUs), aimed at demanding industrial applications such as factory automation, motor control, and robotics. The STM32V8 is the first MCU built on ST's 18-nm fully depleted silicon-on-insulator (FD-SOI) process technology with embedded phase-change memory (PCM).
The STM32V8's PCM claims the smallest cell size on the market, enabling 4 MB of embedded non-volatile memory (NVM).
(Source: STMicroelectronics)
In addition, the STM32V8 is ST's fastest STM32 MCU to date. Designed for high reliability in harsh environments and for embedded and edge AI applications, it can handle complex workloads while maintaining high energy efficiency. The STM32V8 achieves clock speeds of up to 800 MHz, thanks to the Arm Cortex-M85 core and the 18-nm FD-SOI process with embedded PCM. The FD-SOI technology delivers high energy efficiency and supports a maximum junction temperature of up to 140°C.
The MCU integrates dedicated accelerators, including graphics and crypto/hash engines, and comes with a large selection of IP, including 1-Gb Ethernet, digital interfaces (FD-CAN, octo/hexa xSPI, I2C, UART/USART, and USB), analog peripherals, and timers. It also features state-of-the-art security with the STM32 Trust framework and the latest cryptographic algorithms and lifecycle-management standards. It targets PSA Certified Level 3 and SESIP certification for compliance with the upcoming Cyber Resilience Act (CRA).
The STM32V8 has been selected for the SpaceX Starlink constellation, where it is used in a mini laser system that connects satellites traveling at extremely high speeds in low Earth orbit (LEO), ST said. This is thanks in part to the 18-nm FD-SOI technology, which provides a higher level of reliability and robustness.
The STM32V8 supports bare-metal or RTOS-based development. It is supported by ST’s development resources, including STM32Cube software development and turnkey hardware including Discovery kits and Nucleo evaluation boards.
The STM32V8 is in early-stage access for selected customers. Key OEM availability will start in the first quarter of 2026, followed by broader availability.
The post High-performance MCUs target industrial applications appeared first on EDN.
FIR temperature sensor delivers high accuracy

Melexis claims the first automotive-grade surface-mount (SMD) far-infrared (FIR) temperature sensor designed for temperature monitoring of critical components in electric vehicle (EV) powertrain applications. These include inverters, motors, and heating, ventilation, and air conditioning (HVAC) systems.
(Source: Melexis)
The MLX90637 offers several advantages over negative temperature coefficient (NTC) thermistors that have traditionally been used in these systems, where speed and accuracy are critical, Melexis said.
These advantages include eliminating the need for manual labor associated with NTC solutions thanks to the SMD packaging, which supports automated PCB assembly and delivers cost savings. In addition, the FIR temperature sensor with non-contact measurement ensures intrinsic galvanic isolation that helps to enhance EV safety by separating high- and low-voltage circuits, while the inherent electromagnetic compatibility (EMC) eliminates typical noise challenges associated with NTC wires, the company said.
Key features include a 50° field of view, 0.02°C resolution, and a fast response time, suited for applications such as inverter busbar monitoring where temperature must be carefully managed. Sleep current is less than 2.5 μA, and the ambient operating temperature range is -40°C to 125°C.
The MLX90637 also simplifies system integration with a 3.3-V supply, factory calibration (including post calibration), and an I2C interface for communication with a host microcontroller, including a software-definable I2C address via an external pin. The AEC-Q100-qualified sensor is housed in a 3 × 3-mm package.
The post FIR temperature sensor delivers high accuracy appeared first on EDN.
Accuracy loss from PWM sub-Vsense regulator programming

I’ve recently published Design Ideas (DIs) showing circuits for linear PWM programming of standard bucking-type regulators in applications requiring an output span that can swing below the regulator’s sense voltage (Vsense or Vs). For example: “Simple PWM interface can program regulators for Vout < Vsense.”
Wow the engineering world with your unique design: Design Ideas Submission Guide
Objections have been raised, however, that such circuits entail a significant loss of analog programming accuracy because they rely on adding a voltage term typically derived from an available voltage source (e.g., a logic rail), and that they should therefore be avoided.
The argument relies on the fact that such sources generally have accuracy and stability that are significantly worse (e.g., ±5%) than those of regulator internal references (e.g., ±1%).
But is this objection actually true, and if so, how serious is the problem? How much of an accuracy penalty is actually incurred? This DI addresses these questions.
Figure 1 shows a basic topology for sub-Vs regulator programming with current expressions as follows:
A = DpwmVs/R1
B = (1 – Dpwm)(Vl – Vs)/(R1 + R4)
Where A is the primary programming current and B is the sub-Vs programming current giving an output voltage:
Vout = R2(A + B) + Vs
Figure 1 Basic PWM regulator programming topology.
Inspection of the A and B current expressions shows that when the PWM duty factor (Dpwm) is set to full-scale 100% (Dpwm = 1), then B = 0. This is due to the (1 – Dpwm) term.
Therefore, there can be no error contribution from the logic rail Vl at full-scale.
At other Dpwm values, however, this happy circumstance no longer applies, and B becomes nonzero. Thus, Vl tolerance and noise degrade accuracy, at least to some extent. But, by how much?
The simplest way to address this crucial question is to evaluate it as a plausible example of Figure 1’s general topology. Figure 2 provides some concrete groundwork for that by adding some example values.

Figure 2 Putting some meat on Figure 1’s bare bones, adding example values to work with.
Assuming perfect resistors, nominal R1 currents are then:
A = Dpwm Vs/3300
B = (1 – Dpwm)(Vl – Vs)/123300
Vout = R2(A + B) + Vs = 75000(A + B) + 1.25
Then, making the (highly pessimistic) assumption that reference errors stack up as the sum of absolute values:
Aerr = Dpwm × (1% × Vs)/3300 = Dpwm × 3.8 µA
Berr = (1 – Dpwm) × (5% × 3.3 V + 1% × 1.25 V)/123300 = (1 – Dpwm) × 1.44 µA
Vout total error = 75000 × (Dpwm × 3.8 µA + (1 – Dpwm) × 1.44 µA) + 1% × Vs
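These expressions are easy to tabulate. The short C sketch below reproduces the error stack-up using the Figure 2 component values and the error terms derived above; it is a quick reproduction of the trend, not the plotting code behind Figure 3.

```c
#include <stdio.h>

/* Tabulate the stacked worst-case Vout error of Figure 2 versus Dpwm. */
int main(void)
{
    const double Vs = 1.25;         /* sense voltage, 1% tolerance */
    const double R2 = 75000.0;      /* output-setting resistor, ohms */
    const double Aerr_fs = 3.8e-6;  /* full-scale A-path error, amps */
    const double Berr_fs = 1.44e-6; /* full-scale B-path error, amps */

    for (int i = 0; i <= 10; i++) {
        double d = i / 10.0;        /* Dpwm from 0 to 1 */
        double verr = R2 * (d * Aerr_fs + (1.0 - d) * Berr_fs)
                    + 0.01 * Vs;    /* plus 1% of Vs */
        printf("Dpwm = %.1f  Vout error = %.1f mV\n", d, verr * 1e3);
    }
    return 0;
}
```

At Dpwm = 0 the stack-up works out to about 121 mV, dominated by the B-path (logic rail) term; at Dpwm = 1 the B term vanishes and only the reference-derived error remains.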
The resulting Vout error plots are shown in Figure 3.

Figure 3 Vout error plots where the x-axis is Dpwm and y-axis is Vout error. Black line is Vout = Vs at Dpwm = 0 and red line is Vout = 0 at Dpwm = 0.
Conclusion: Error does increase in the lower range of Vout when the Vout < Vsense feature is incorporated, but any difference completely disappears at the top end. So, the choice turns on the utility of Vout < Vsense.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Simple PWM interface can program regulators for Vout < Vsense
- Three discretes suffice to interface PWM to switching regulators
- Revisited: Three discretes suffice to interface PWM to switching regulators
- PWM nonlinearity that software can’t fix
- Another PWM controls a switching voltage regulator
The post Accuracy loss from PWM sub-Vsense regulator programming appeared first on EDN.
Signal integrity and power integrity analysis in 3D IC design

The relentless pursuit of higher performance and greater functionality has propelled the semiconductor industry through several transformative eras. The most recent shift is from traditional monolithic SoCs to heterogeneously integrated advanced-package ICs, including 3D integrated circuits (3D ICs). This emerging technology promises to help semiconductor companies sustain Moore's Law.
However, these advancements bring increasingly complex challenges, particularly in power integrity (PI) and signal integrity (SI). Once secondary, SI/PI have become critical disciplines in modern semiconductor development. As data rates ascend into multiple gigabits per second and power requirements become more stringent, error margins shrink dramatically, making SI/PI expertise indispensable. The fundamental challenge lies in ensuring clean and reliable signal transmissions and stable power delivery across intricate systems.

Figure 1 The above diagram highlights the basic signal integrity (SI) issues. Source: Siemens EDA
This article explains the unique SI/PI challenges in 3D IC designs by contrasting them with traditional SoCs. We will then explore a progressive verification strategy to address these complexities, examine the roles and interdependencies of stakeholders in the 3D IC ecosystem, and illustrate these concepts through a real-world success story. Finally, we will discuss how these innovations drive the future of semiconductor design.
Traditional SI/PI versus 3D IC approaches
In traditional SoC components destined for a PCB system, SI and PI analysis typically validates individual components before system integration. This often treats SoCs, packages, and PCBs as distinct entities, allowing sequential analysis and optimization. For instance, component-level power demand analysis can be performed on the monolithic SoC and its package, while signal integrity analysis validates individual channels.
The design process is often split between separate packaging and PCB teams working in parallel. These teams eventually collaborate to manage design trade-offs such as allocating timing or voltage margins between the package and PCB to accommodate routing constraints. While effective for traditional designs, this compartmentalized approach is inadequate for the inherent complexities of 3D ICs.
A 3D IC’s architecture is not merely a collection of components but a highly condensed system of mini subsystems, characterized by the vertical stacking of multiple dies. Inter-die interfaces, through-silicon vias (TSVs), and microbumps create a dense, highly interactive electrical environment where power and signal integrity issues are deeply intertwined and can propagate across multiple layers.
The tight integration and proximity of the dies introduce novel coupling mechanisms and power delivery challenges that cannot be effectively addressed by sequential, isolated analyses. Therefore, unlike a traditional flow, 3D ICs demand holistic, parallel validation from the outset, with SI and PI analyses commencing early and encompassing all constituent parts concurrently.
Progressive verification
To navigate the intricate landscape of 3D IC design, a progressive verification strategy is paramount. This principle acknowledges that design information is sparse in early stages and becomes progressively detailed.
The core idea behind progressive verification is to initiate analysis as early as possible with available inputs, guiding the design onto the correct path and transforming the final verification step into confirmation rather than a discovery of fundamental issues. Different analysis requirements are addressed as details become available, starting with minimal inputs and gradually incorporating more specific data.

Figure 2 Here is a view of a progressive verification flow. Source: Siemens EDA
Let’s summarize the various analyses involved and their timing in the design flow.
Early architectural feasibility and pre-layout analysis
At the initial design phase, before detailed layout information is available, the focus is on architectural feasibility studies. This involves estimating power budgets and defining high-level interfaces. Even with rough inputs, early analysis can commence. For instance, pre-layout signal integrity analysis can model representative interconnect structures, such as an interposer bridge.
By defining an “envelope” of achievable performance based on preliminary dimensions, designers can establish realistic expectations and guidelines for subsequent layout stages. This proactive approach helps identify potential bottlenecks and ensures a robust electrical foundation.
Floorplanning and implementation-driven analysis
As the design progresses to floorplanning and initial implementation, guidelines from early analysis are translated into a physical layout. At this stage, more in-depth analyses become possible. This includes detailed power delivery network (PDN) analysis to verify power distribution across stacked dies and the substrate.
Signal path verification with actual component interconnections can also begin, enabling early identification and optimization of critical signal routes. This iterative process of layout and analysis enables continuous refinement, ensuring physical implementation aligns with electrical performance targets.
Detailed electrical analysis with vendor-specific IP
The final stage of progressive verification involves comprehensive electrical analysis utilizing actual vendor-specific intellectual property (IP) models. Given the nascent state of 3D IC die-to-die standards—for instance UCIe, BoW, and AIB, which are less mature than established protocols like DDR or PCIe—this detailed analysis is even more critical.
Designers perform in-depth S-parameter modeling of impedance networks, feeding these models with precise current values obtained from die designers and other stakeholders. This granular analysis provides full closure on the design’s electrical performance, ensuring all critical signal paths and power delivery mechanisms meet specifications under real-world operating conditions.
The 3D IC ecosystem
The complexity of 3D IC designs necessitates a highly collaborative environment involving diverse stakeholders, each with unique perspectives and challenges. Effective communication and early engagement among these teams are crucial for successful integration.
- System architects are responsible for the high-level floorplanning, determining the number of chiplets, baseband dies, and the communication channels required between them. Their challenge lies in optimizing the overall system architecture for performance, power, and area, while considering the physical constraints imposed by 3D integration.
- Die designers focus on individual die architectures and oversee I/O planning and internal power distribution. They must communicate their power requirements and I/O characteristics accurately to ensure compatibility within the stacked system. Their primary challenge is to optimize the die-level performance while adhering to system-level constraints and ensuring robust power and signal delivery across the interfaces.
- Layout teams are responsible for the physical implementation, encompassing die-level layout, substrate layout, and silicon interconnects like interposers and bridges. Often different layout teams may handle different aspects of the implementation, requiring meticulous coordination. Their challenges include managing extreme density, minimizing parasitic effects, and ensuring manufacturability across multiple layers.
- SI/PI and verification teams act as technical consultants, providing guidelines and feedback at every level. They advise system architects on bump-out strategies for die floorplans and work with die designers to optimize power and ground bump counts. Their role is to proactively identify and mitigate potential SI/PI issues throughout the design cycle, ensuring that the electrical performance targets are met.
- Mechanical and thermal teams ensure structural integrity and manage heat dissipation, respectively. Both are critical for the long-term reliability and performance of designs because, beyond electrical considerations, 3D ICs introduce significant mechanical and thermal challenges. For example, the close proximity of dies can lead to localized hotspots and mechanical stresses due to differing coefficients of thermal expansion.
By employing a progressive verification methodology, these diverse stakeholders can engage in early and continuous communication, fostering a collaborative environment that makes it significantly easier to build a functional and reliable 3D IC design.
Chipletz’s proof of concept
The efficacy of a progressive verification strategy and collaborative ecosystem is best illustrated through real-world applications. Chipletz, a fabless substrate startup, exemplifies successful navigation of 3D IC design complexities in collaboration with an EDA partner. Chipletz is working closely with Siemens EDA for its Smart Substrate products, utilizing tools capable of supporting advanced 3D IC design requirements.

Figure 3 Smart Substrate uses cutting-edge chiplet integration technology that eliminates an interposer. Source: Siemens EDA
At the time, many industry-standard EDA tools were primarily tailored for traditional package and PCB architectures. Chipletz presented a formidable challenge: its designs featured massive floorplans with up to 50 million pin counts, demanding analysis tools with unprecedented capacity and layout tools capable of handling such intricate structures.
Siemens responded by engaging its R&D teams to enhance tool capacities and capabilities. This collaboration demonstrated not only the ability to handle these complex architectures but also to perform meaningful electrical analyses on such large designs. Initial efforts focused on fundamental aspects such as direct current (DC) IR drop analysis across the substrate and early PDN analysis.
Through these foundational steps, Siemens demonstrated its tools’ capabilities and, crucially, its commitment to working alongside Chipletz to overcome challenging roadblocks. This partnership enabled Chipletz to successfully tape out its initial demonstration vehicle, and it’s now progressing to the second revision of its design. This underscores the importance of adaptable EDA tools and strong collaboration in pushing the boundaries of 3D IC innovation.
Driving 3D IC innovation
3D ICs are unequivocally here to stay, with major semiconductor companies increasingly incorporating various forms of 3D packaging into their product roadmaps. This transition signifies a fundamental shift in how the industry approaches system design and integration. As the industry continues to embrace 3D IC integration as a key enabler for next-generation systems, the methodologies and collaborative approaches outlined in this article for SI and PI will only grow in importance.
The progressive verification strategy, coupled with close collaboration among diverse stakeholders, offers a robust framework for navigating the complex challenges inherent in 3D IC design. Companies and individuals who master these techniques will be exceptionally well-positioned to lead the next wave of semiconductor innovation, creating the high-performance, energy-efficient systems that will power our increasingly digital world.
Todd Burkholder is a senior editor at Siemens DISW. For over 25 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.
John Caka is a signal and power integrity applications engineer with over a decade of experience in high-speed digital design, modeling, and simulation. He earned his B.S. in electrical engineering from the University of Utah in 2013 and an MBA from the Quantic School of Business and Technology in 2024.
Related Content
- Putting 3D IC to work for you
- Making your architecture ready for 3D IC
- The multiphysics challenges of 3D IC designs
- Mastering multi-physics effects in 3D IC design
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
- Automating FOWLP design: A comprehensive framework for next-generation integration
The post Signal integrity and power integrity analysis in 3D IC design appeared first on EDN.
Norton amplifiers: Precision and power, the analog way we remember

The Norton amplifier topology brings back the essence of analog design by using clever circuit techniques to deliver strong performance with minimal components. It is not about a brand name—it’s about a timeless analog philosophy that continues to inspire engineers and hobbyists today. This approach shows why analog circuits remain powerful and relevant, even in our digital age.
In electronics, a Norton amplifier—also known as a current differencing amplifier (CDA)—is a specialized analog circuit that functions as a current-controlled voltage source. Its output voltage is directly proportional to the difference between two input currents, making it ideal for applications requiring precise current-mode signal processing.
Conceptually, it serves as the dual of an operational transconductance amplifier (OTA), offering a complementary approach to analog design and expanding the toolkit for engineers working with current-driven systems.
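In symbols, an idealized CDA obeys Vout = Rm × (I+ – I–), where Rm is the amplifier's transresistance gain; Rm is introduced here purely for illustration and is not a datasheet parameter.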
So, while most amplifier discussions orbit op-amps and voltage feedback, the Norton amplifier offers a subtler, current-mode alternative—elegant in its simplicity and quietly powerful in its departure from the norm. Let us go further.
Norton amplifier’s analog elegance
As shown in the LM2900 IC equivalent circuit below, the internal architecture is refreshingly straightforward. The most striking departure from a conventional op-amp—typically centered around a voltage-mode differential pair—lies in the input stage. Rather than employing the familiar long-tailed pair, this Norton amplifier features a current mirror followed by a common-emitter amplifier.

Figure 1 Equivalent circuit highlights the minimalist internal structure of the LM2900 Norton amplifier IC. Source: Texas Instruments
These devices have been around for decades, and they clearly continue to intrigue analog enthusiasts. Just recently, I picked up a batch of LM3900-HLF ICs from an online seller. The LM3900-HLF appears to be a Chinese-sourced variant of the classic LM3900—a quad Norton operational amplifier recognized for its current-differencing input and quietly unconventional topology. These low-cost quads are now widely used across analog systems, especially in medium-frequency and single-supply AC applications.

Figure 2 Pin connections of the LM3900-HLF support easy adoption in practical circuits. Source: HLF
In my view, the LM2900 and LM3900 series are more than just relics—they are reminders of a time when analog design embraced cleverness over conformity. Their current differencing architecture, once a quiet alternative to voltage-mode orthodoxy, still finds relevance in industrial signal chains where noise rejection, single-supply operation, and low-impedance interfacing matter.
You will not see these chips headlining new designs, but the principles they embody—robust, elegant, and quietly efficient—continue to shape sensor front-ends, motor drives, and telemetry systems. The ICs may have faded, but the technique endures, humming beneath the surface of modern infrastructure.
And, while it’s not as widely romanticized as the LM3900, the LM359 Norton amplifier remains a quietly powerful choice for analog enthusiasts who value speed with elegance. Purpose-built for video and fast analog signal processing, it stepped in with serious bandwidth and slewing muscle. As a dual high-speed Norton amplifier, it handles wideband signals with slew rates up to 60 V/μs and gain-bandwidth products reaching 400 MHz—a clear leap beyond its older cousins.
In industrial and instrumentation circles, LM359’s current-differencing input stage still commands respect for its low input bias, fast settling, and graceful handling of capacitive loads. Its legacy lives on in video distribution, pulse amplification, and high-speed analog comparators—especially in designs that prioritize speed and stability over rail-to-rail swing.
Wrapping up with a whiff of flux
There is not much more to say about Norton amplifiers for now, so we will wrap up this slightly off-the-beaten-path blog post here. As a parting gift, here is a practical LM3900-based circuit—just enough to satisfy those who find joy in the scent of solder smoke.

Figure 3 Bring this LM3900-based triangle/square waveform generator circuit to life and trace its quiet Norton-style elegance. Source: Author
Triangle waveforms are usually generated by an integrator, which receives first a positive DC input voltage, and then a negative DC input voltage. The LM3900 Norton amplifier facilitates this operation within systems powered by a single supply voltage, thanks to the current mirror present at its non-inverting (+) input. This feature enables triangle waveform generation without the need for a negative DC input.
In the above schematic diagram, amplifier IC1D functions as an integrator. It first operates with the current through R1 to generate a negative output voltage slope. When the IC1C amplifier—the Schmitt trigger—switches high, the current through R5 causes the output voltage to rise.
For optimal waveform symmetry, R1 should be set to twice the value of R5 (here, R1 = 1 MΩ and R5 = 470 kΩ, which is close enough). Note that the Schmitt circuit also provides a square-wave output at the same frequency.
Feeling inspired? Fire up your breadboard, test the circuit, or share your own twist. Whether you are a seasoned tinkerer or just rediscovering the joy of analog, let this be your spark to keep exploring.
Finally, I hope this odd topic sparked some interest. If I have misunderstood anything—or if there is a better way to approach it—please feel free to chime in with corrections or suggestions in the comments. Exploring new ground always comes with the risk of missteps, and I welcome the chance to learn and improve.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Op amps: the 10,000-foot view
- Op-Amp Circuits, Configurations, and Schematics
- A generalized amplifier and the Miller-effect paradox
- New op amps address new—and old—design challenges
- Introduction to Operational Amplifier Applications, Op-amp Basics
The post Norton amplifiers: Precision and power, the analog way we remember appeared first on EDN.
The Fosi Audio V3 Mono: A compelling power amp with a tendency to blow

One of the post-filter feedback (PFFB)-based Class D audio amplifiers showcased in a recent writeup of mine was Fosi Audio's V3 Mono, which will get sole billing today:

It interestingly (at least to me) originally launched as a Kickstarter project in April 2024:
As the name implies, it’s a monoblock unit, intended to drive only a single speaker, with both single-channel XLR balanced and RCA unbalanced input options.
I own four functional (for now, at least) devices, plus the nonfunctional one whose insides we’ll be seeing today. Why four? It’s not because I plan on driving both front left and right main speakers and a center speaker and subwoofer, or for that matter, the two main transducers plus two surrounds. Instead, it’s for spares, notably ones obtained pre-higher tariffs, and specifically to do with that dead fifth amp.
Design evolution, manufacturing, and reliability issues
Before I go all Debbie Downer on you, I'll begin with the good news. The V3 Mono is highly reviewer-rated (see, for example, the write-up from my long-time tech compatriot Amir Majidimehr) and has also garnered no shortage of enthusiastic feedback from owners like Tim Bray, who had heard about it from Archimago (here's part 2). Alas, amidst all that positive press are also a notable number of complaints from folks whose units let the magic smoke escape, sometimes on just the first use, or whose amplifiers had more modest but still annoying issues.
Mis-wired connections
I'll start with the most innocuous quirk and end with the worst. Initial units were mis-wired from the PCB to the speaker banana plugs (due, I actually suspect, to a fundamental PCB trace layout issue) in such a way that they ended up with inverted-polarity outputs, i.e., signals being 180° out of phase from how they should be.
This wasn’t particularly a problem if all the units in your setup exhibited the issue, because at least then the phase was consistently inverted. However, if one (or some, depending on your setup complexity) of them were in phase and other(s) were out of phase, the inconsistency resulted in a collapsed stereo image and overall decreased volume due to destructive interference between the in- and out-of-phase speakers.
The same goes if you mixed-and-combined out-of-phase V3 Monos with in-phase other devices, whether from other manufacturers or even from Fosi Audio itself. The fix is pretty easy; connect the red speaker wire to the black speaker terminal of the affected V3 Mono instead, and vice versa, to externally reinvert the phase back to how it should be. But from my experience with these units, it’s not possible to discern if a particular device is wired correctly without disassembling it; this guy’s sticker-based methodology, for example, didn’t pan out for me:
As commenter @TheirryG01210 wrote in response to the above video, “A better way to figure out if phase is correct is to check that cables are cross-connected (left solder pads cable goes to the right banana socket and vice versa).”
That's spot-on advice. Here, for example, is one of my functional units, which has the wires un-crossed and therefore operates in an inverted-output fashion. That said, the un-crossed approach looks like how it should be wired, right? Hence my conjecture that this is inherently a PCB layout issue, with wire-swapping the cheaper, easier workaround versus the costlier and otherwise more complicated alternative of a board "turn".

My photo also matches one of the two in this Audio Science Review discussion thread post:

The other picture in that post shows the wires crossed; it’s not clear to me whether this is something that the owner did post-purchase with a soldering iron or if Fosi Audio revamped units still in its inventory, after discovering the problem and prior to shipping them out:

Conceptually, it matches the from-factory crossed wiring of my other three functional devices, along with today’s teardown victim, although the wire colors are also swapped with my units:

But color doesn’t matter. A crossed-wires configuration is what’s key to a correct-phase output.
The next, more recently introduced issue involves gain-setting inconsistency. Look at the most recent version of the “stock” image for the product on Amazon’s website, for example:

And you’ll see that the two gain-switch options supported for the RCA input (the switch doesn’t affect the XLR input) are 19 dB and 25 dB. That said, the gain options shown in the online user manual are instead 25 dB and 31 dB, which match the original units, including all of mine:

Here’s the key excerpt from an email by Fosi Audio quoted in a relevant Audio Science Review post (bolded emphasis is mine):
We would like to confirm whether your V3 mono gain is the old version or the new version. Since V3mono does not have a volume adjustment knob. It has already obtained a large power output when it is turned on, so we have reduced the gain of 31db to 25db, and 25db to 19db in the new version, which can effectively ensure the stable output of V3mono, safe use and extend the service life.
Loud "pop" sound
Which leads to my last, and the most concerning, issue. After a seemingly random duration of operation, but sometimes just the first use, judging from comments I've seen on Audio Science Review, Amazon, Fosi's online store, and elsewhere, the amplifier emits a loud "pop" and the sound disappears, never to return.
The front panel light still glows, and you can still hear the “click” when the amp initially turns on or transitions out of standby in response to sensing an active input source (or when you transition from one input to another, for that matter), but as for the output…nothing but the sound(s) of silence. This very issue happened with one of the devices I purchased brand new, fortunately, within the return-for-full-refund period.
Several of the other V3 Monos I acquired “open box” off eBay also arrived already DOA. In one particularly mind (and amp)-blowing case, I bought a single-box two-device set. When I opened it up, one of the amps had a piece of blue tape stuck to the top with the word “good” scribbled on it. Yep, the other one was not “good”.
What the eBay seller explained to me in the process of issuing a ship-back-for-full-refund is that when large retailers get a return, they sometimes just turn around and resell it discounted to eBay sellers like her, apparently without remembering to test it first (or, more cynically, maybe just not caring about its current condition).
A blown-output case study
Today's victim (1,000+ words in) was another eBay-DOA example. In this case, the seller didn't ask me to return it prior to issuing a refund, and it therefore became a teardown candidate, hopefully enabling me to discern just where the Achilles' heel in this design is.
To Fosi Audio’s credit, by the way, the pace of complaints for this particular issue seems to have slowed down dramatically of late. When I first looked at the customer feedback on Amazon, etc., earlier this year, comments were overwhelmingly negative. Now, revisiting various feedback forums, I see the mix has notably shifted in the positive-percentage direction. That said, my cynical side wonders if Fosi and Amazon might just now be nuking negative posts, but hope springs eternal…
I’ll start with some overview shots of our patient, one of which you’ve already seen, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the V3 Mono, not including its bulbous suite of external power supply options, has dimensions of 105 x 35 x 142 mm and weighs 480 grams).




Remove the two side screws from the back panel:

And the front panel slides right out:



The “Aesthetic and Practical Dust-Proof Filter Screens” (I’m quoting from Fosi Audio’s website, though I concur that they both look cool and act cooling) also then slide right out if you wish:

Removing two more screws on the bottom:


Now allows for the extraction of the internal assembly (here again, you saw one photo already):



The front and back halves of the “Sturdy and Durable All-Aluminum Alloy Chassis” are identical (and an aside: pretty snazzy shots, eh?):

Returning to the PCB topside (with still-attached back panel), let’s take a closer look:

One thing I didn't notice at first is that none of the components are PCB-silkscreened as to their type (R for resistor, for example, C for capacitor, L for inductor, U for IC, etc.), let alone their specific-device identifying number (R1, C3, L5, U2…). Along the left side, top to bottom, are:
- The three-position switch for on, auto, and off operating modes
- The power status LED
- The two-position XLR-vs-RCA input selector switch, and
- A nifty two-contact spring-loaded switch that’s depressed when the front panel is in place. I suspect, but didn’t test for myself, that it prevents amplifier operation whenever the front panel is removed.
Note, too, the four screw heads in between the two multi-position switches, along with the ribbon cable. Look closely and you’ll realize that the first three items mentioned are actually located on a separate mini-PCB, connected to the main one mechanically via the screws (which, as you’ll see shortly, are actually bolts) and electrically via the ribbon cable.
And in fact, the silkscreen marking on the mini-PCB says (among other things) “SW PCB” (SW meaning switch, I assume) while the main PCB silkscreen in the lower left corner says…drumroll…”MAIN PCB”.
Why Fosi Audio went this multi-PCB route is frankly a mystery to me. Until I noticed the labeled silkscreen markings (admittedly just now, as I was writing this section) I’d thought that perhaps the main board was common to multiple amplifier product proliferations, with the front panel switches, etc. differentiating between them. But given that both boards’ silkscreens also say “Fosi Audio V3 MONO” on them, I can now toss that theory out the window. Readers’ ideas are welcome in the comments!
In the middle of the photo are two 8-pin DIP socketed chips, op-amps in fact, Texas Instruments NE5532P dual low-noise operational amplifiers to be precise.
They’re socketed because, as Fosi Audio promotes on the product page and akin to the two Douk Audio amplifiers I showcased in my prior coverage, too, they’re intended to be user-swappable, analogous to the “tube rolling” done by vacuum tube-based audio equipment enthusiasts.
Numerous (Elna) electrolytic and surface-mount capacitors (along with other SMD passives) dot the landscape, which is dominated by two massive Nichicon 63V/2200μF electrolytic filtering capacitors (explicitly identified as such, along with the Elna ones, by visual and text shout-outs on the V3 Mono product page, believe it or not). And one other, smaller Texas Instruments 8-lead IC (soldered SOP this time) on the bottom toward the right bears mentioning. It’s marked as follows:
N5532
TI41M
A9GG
Its first-line mark similarity to the previously mentioned NE5532P is notable, albeit potentially also coincidental. That said, Google Image search results also imply that it’s indeed another dual low-noise op amp. And it’s not the last of them we’ll see. Speaking of which, let’s next look at the other half of the PCB topside:

There it was at the bottom; another socketed TI NE5532P! Straddling it on either side are Omron G6K-2P-Y relays. At the top are even more relays, this time with functional symbol marks on top to eliminate any identity confusion: another white-color one, this time a Zhejiang HKE HRS3FTH-S-DC24V-A, and below it a dark grey HCP2-S-DC24V-A from the same supplier.
Remember when I mentioned earlier that after one V3 Mono stopped outputting amplified audio, I could still hear relay clicks when I toggled its power and input-select switches? Voila, the click-sound sources.
Those coupling capacitors are another curious component call-out on the V3 Mono product page; they’re apparently sourced from German supplier WIMA. The latter two, on either side of the aforementioned PCB solder pads that end up at the speaker’s banana plug connectors, are grey here but yellow colored at Fosi Audio’s website, so…

To the left of the red coupling caps is a grey metal box with two slits on top and copper-color contents visible through them; hold that thought. And last but not least, along the right edge of the PCB are (top to bottom) the power-input connector, two hefty resistors, the XLR input, and the RCA input. The two-wire harness in the lower corner goes to the aforementioned gain switch.
Insufficient thermal protection?
Now for the other side:

That IC at far left was quite a challenge to identify. To the right of an “AB” company logo is the following three-line mark:
TNJB0089A
UMG992
2349
Google searches on the text, either line-by-line or its entirety, were fruitless (at least to me). However, I found a photo of a chip with a matching first-line mark here. About the only thing on that page that I could read was the words “AB137A SOP16”, but that was the clue I needed.
The AB137A, is (more accurately was) from the company Shenzhen Bluetrum Technology, which Internet Archive snapshots suggest changed its name to Shenzhen Zhongke Lanxun Technology at the beginning of this year. The bluetrum.com/product/ab137a.html product page no longer seems to exist, nor does the link from there to the datasheet at bluetrum.com/upload/file/202411/1732257601186423.pdf. But again, thanks to the Internet Archive (the last valid snapshot of the product page that seems to exist there is from last November) I’ve been able to discern the following:
- CPU and Flexible IO
High-performance 32-bit RISC-V processor core with DSP instructions
RISC-V typical speed: 125 MHz
Program memory: internal 2-Mbit flash
Internal 60 KB RAM for data and program
Flexible GPIO pins with programmable pull-up and pull-down resistors
Support GPIO wakeup or interrupt
- Audio Interface
High-performance stereo DAC with 95 dB SNR
High-performance mono ADC with 90 dB SNR
Support flexible audio EQ adjust
MIC amplifier input
Support sample rates of 8, 11.025, 12, 16, 22.05, 32, 44.1, and 48 kHz
Four-channel stereo analog MUX
- Package
SOP16
- Temperature
Operating temperature: -40℃ to +85℃
Storage temperature: -65℃ to +150℃
So, there you have it (at least I think)!
The other half of this side of the PCB is less exciting, unless you’re into blobs of solder (along with, let’s not forget, another glimpse of those hefty resistors), that is:

But it’s what’s in the middle of this side of the PCB, therefore common to both of those PCB pictures, that had me particularly intrigued; you too, I suspect. Remove the two screws whose heads are on the PCB’s other side:


Lift off the plate:


Clean the thermal paste off the top of the IC, and what comes into view is what you’ve probably already suspected: Texas Instruments’ TPA3255, the design’s Class D amplification nexus:

At this point in the write-up, I’m going to offer my conjecture on what happened with this device. The inside of the metal plate, acting as a heatsink, paste-mates with the TPA3255:

while the outside, also thermal paste-augmented, is intended to further transfer the heat to the bottom of the aluminum case via the two screws I removed prior to pulling the PCB out of it:

Key to my theory are the words and phrases “bottom” and “thermal paste”. First off, it’s a bit odd to me that the TPA3255, the design’s obvious primary heat-generation source, is on the bottom of the PCB, given that (duh) heat rises. The tendency would then be for it to “cook” not only itself but also circuitry above it, on the other side of the PCB, although the metal plate-as-heatsink should at least somewhat mitigate this issue or at least spread it out.
This leads to my other observation: there’s scant thermal paste on either side of the plate for heat-transfer purposes, off the IC and ultimately to the outside world, and what exists is pockmarked. I’m therefore guessing that the TPA3255 thermally destroyed itself, and with that, the music died.
Wrapping up
Before I forget, let's detach that mini-PCB I mentioned earlier. Here are the backside nuts:

And the front-side bolt heads:

Disconnect the ribbon cable:

And you already know what comes next:



Not too exciting, but I’ve gotta be thorough, right?


At this point, it occurred to me that I hadn’t yet taken any main-PCB side shots. Front:

Left side:

The back:

The right side:

And after removing the two screws surrounding the XLR input:


I was able to lift the back panel away, exposing to view even more PCB circuitry:

In closing, remember that “grey box with two slits on top and copper-color contents visible through them” that I mentioned earlier? Had I looked closely enough at the V3 Mono product page before proceeding, I would have already realized what it was (although, in my slight defense, the photo is mis-captioned there):

Then again, I also could have identified it via the photo I included in my previous write-up:

Instead, I proceeded to use my flat-head screwdriver to rip it off the PCB in the process of attempting to more conservatively detach just its “lid”:


As I already suspected from the “copper-color contents visible through the two slits on top”, it’s a dual wirewound inductor:

from Sumida, offering “superior signal purity and noise reduction, elevating the amplifier’s sound performance,” per Fosi Audio’s website.
Crossing through 3,000 words, I’ll wrap up at this point and turn the keyboard over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Class D: Audio amplifier ascendancy
- Audio amplifiers: How much power (and at what tradeoffs) is really required?
- Class D audio power amplifiers: Adding punch to your sound design
- How Class D audio amplifiers work
The post The Fosi Audio V3 Mono: A compelling power amp with a tendency to blow appeared first on EDN.
Edge AI powers the next wave of industrial intelligence

Artificial intelligence is moving out of the cloud and into the operations that create and deliver products to us every day. Across manufacturing lines, logistics centers, and production facilities, AI at the edge is transforming industrial operations, bringing intelligence directly to the source of data. As the industrial internet of things (IIoT) matures, edge-based AI is no longer an optional enhancement; it’s the foundation for the next generation of productivity, quality, and safety in industrial environments.
This shift is driven by the need for real-time, contextually aware intelligence—systems that can see, hear, and even “feel” their surroundings, analyze sensor data instantly, and make split-second decisions without relying on distant cloud servers. From predictive maintenance and automated inspection to security monitoring and logistics optimization, edge AI is redefining how machines think and act.
Why industrial AI belongs at the edge
Traditional industrial systems rely heavily on centralized processing. Data from machines, sensors, and cameras is transmitted to the cloud for analysis before insights are sent back to the factory floor. While effective in some cases, this model is increasingly impractical and inefficient for modern, latency-sensitive operations.
Implementing AI at the edge addresses this. Instead of sending vast streams of data off-site, intelligence is brought closer to where data is created: within or around the machine, gateway, or local controller itself. This local processing offers three primary advantages:
- Low latency and real-time decision-making: In production lines, milliseconds matter. Edge-based AI can detect anomalies or safety hazards and trigger corrective actions instantly without waiting for a network round-trip.
- Enhanced security and privacy: Industrial environments often involve proprietary or sensitive operational data. Processing locally minimizes data exposure and vulnerability to network threats.
- Reduced power and connectivity costs: By limiting cloud dependency, edge systems conserve bandwidth and energy, a crucial benefit in large, distributed deployments such as logistics hubs or complex manufacturing centers.
These benefits have sparked a wave of innovation in AI-native embedded systems, designed to deliver high performance, low power consumption, and robust environmental resilience—all within compact, cost-optimized footprints.
Edge-based AI is the foundation for the next generation of productivity, quality, and safety in industrial environments, delivering low latency, real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs. (Source: Adobe AI Generated)
Localized intelligence for industrial applications
Edge AI’s success in IIoT is largely based on contextual awareness, which can be defined as the ability to interpret local conditions and act intelligently based on situational data. This requires multimodal sensing and inference across vision, audio, and even haptic inputs. In manufacturing, for example:
- Vision-based inspection systems equipped with local AI can detect surface defects or assembly misalignments in real time, reducing scrap rates and downtime.
- Audio-based diagnostics can identify early signs of mechanical failure by recognizing subtle deviations in sound signatures.
- Touch or vibration sensors help assess machine wear, contributing to predictive maintenance strategies that reduce unplanned outages.
In logistics and security, edge AI cameras provide real-time monitoring, object detection, and identity verification, enabling autonomous access control or safety compliance without constant cloud connectivity. A practical example of this approach is a smart license-plate-recognition system deployed in industrial zones, a compact unit capable of processing high-resolution imagery locally to grant or deny vehicle access in milliseconds.
In all of these scenarios, AI inference happens on-site, reducing latency and power consumption while maintaining operational autonomy even in network-constrained environments.
Low power, low latency, and local learning
Industrial environments are unforgiving. Devices must operate continuously, often in high-temperature or high-vibration conditions, while consuming minimal power. This has made energy-efficient AI accelerators and domain-specific system-on-chips (SoCs) critical to edge computing.
A good example of this trend is the early adoption of the Synaptics Astra SL2610 SoC platform by Grinn, which has already resulted in a production-ready system-on-module (SOM), Grinn AstraSOM-261x, and a single-board computer (SBC). By offering a compact, industrial-grade module with full software support, Grinn enables OEMs to accelerate the design of new edge AI devices and shorten time to market. This approach helps bridge the gap between advanced silicon capabilities and practical system deployment, ensuring that innovations can quickly translate into deployable industrial solutions.
The Grinn–Synaptics collaboration demonstrates how industrial AI systems can now run advanced vision, voice, and sensor fusion models within compact, thermally optimized modules.
These platforms combine:
- Embedded quad-core Arm processors for general compute tasks
- Dedicated neural processing units (NPUs) delivering multi-trillion operations per second for inference
- Comprehensive I/O for camera, sensor, and audio input
- Industrial-grade security
Equally important is support for custom small language models (SLMs) and on-device training capabilities. Industrial environments are unique. Each factory line, conveyor system, or inspection station may generate distinct datasets. Edge devices that can perform localized retraining or fine-tuning on new sensor patterns can adapt faster and maintain high accuracy without cloud retraining cycles.
The Grinn OneBox AI-enabled industrial SBC, designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. (Source: Grinn Global)
Emergence of compact multimodal platforms
The recent introduction of next-generation SoCs such as Synaptics’ SL2610 underscores the evolution of edge AI hardware. Built for embedded and industrial systems, these platforms offer integrated NPUs, vision digital-signal processors, and sensor fusion engines that allow devices to perceive multiple inputs simultaneously, such as camera feeds, audio signals, or even environmental readings.
Such capabilities enable richer human-machine interaction in industrial contexts. For instance, a line operator can use voice commands and gestures to control inspection equipment, while the system responds with real-time feedback through both visual indicators and audio prompts.
Because the processing happens on-device, latency is minimal, and the system remains responsive even if external networks are congested. Low-power design and adaptive performance scaling also make these platforms suitable for battery-powered or fanless industrial devices.
From the cloud to the floor: practical examples
Collaborations like the Grinn–Synaptics development have produced compact, power-efficient edge computing modules for industrial and smart city deployments. These modules integrate high-performance neural processing, customized AI implementations, and ruggedized packaging suitable for manufacturing and outdoor environments.
Deployed in use cases such as automated access control and vision-guided robotics, these systems demonstrate how localized AI can replace bulky servers and external GPUs. All inference, from image recognition to object tracking, is performed on a module the size of a matchbox, using only a few watts of power.
The results:
- Reduced latency from hundreds of milliseconds to under 10 ms
- Lower total system cost by eliminating cloud compute dependencies
- Improved reliability in areas with limited connectivity or strict privacy requirements
The same architecture supports multimodal sensing, enabling combined visual, auditory, and contextual awareness—key for applications such as worker safety systems that must recognize both spoken alerts and visual cues in noisy and complex factory environments.
Toward self-learning, sustainable intelligence
The evolution of edge AI is about more than just performance; it’s about autonomy and adaptability. With support for custom, domain-specific SLMs, industrial systems can evolve through continual learning. For example, an inspection model might retrain locally as lighting conditions or material types change, maintaining precision without manual recalibration.
Moreover, the combination of low-power processing and localized AI aligns with growing sustainability goals in industrial operations. Reducing data transmission, cooling needs, and cloud dependencies contributes directly to lower carbon footprints and energy costs, critical as industrial AI deployments scale globally.
Edge AI as the engine of industrial transformation
The rise of AI at the edge marks a turning point for IIoT. By merging context-aware intelligence with efficient, scalable compute, organizations can unlock new levels of operational visibility, flexibility, and resilience.
Edge AI is no longer about supplementing the cloud; it’s about bringing intelligence where it’s most needed, empowering machines and operators alike to act faster, safer, and smarter.
From the shop floor to the supply chain, localized, multimodal, and energy-efficient AI systems are redefining the digital factory. With continued innovation from technology partnerships that blend high-performance silicon with real-world design expertise, the industrial world is moving toward a future where every device is an intelligent, self-aware contributor to production excellence.
The post Edge AI powers the next wave of industrial intelligence appeared first on EDN.
The ecosystem view around an embedded system development

Like in nature, development tools for embedded systems form “ecosystems.” Some ecosystems are very self-contained, with little overlap with others, while other ecosystems are open and broad, supporting everything but the kitchen sink. Moreover, developers and engineers have strong opinions (to put it mildly) on this subject.
So, we developed a greenhouse that sustains multiple ecosystems: a demo in which multiple microcontrollers (MCUs) and their associated development ecosystems work together.
The greenhouse demo
The greenhouse demo is a simplified version of a greenhouse controller. The core premise of this implementation is to intelligently open/close the roof to allow rainwater into the greenhouse. This is implemented using a motorized canvas tarp mechanism. The canvas tarp was created from old promotional canvas tote bags and sewn into the required shape.
The mechanical guides and lead screw for the roof are repurposed from a 3D printer with a stepper motor drive. An evaluation board is used as a rain sensor. Finally, a user interface panel enables a manual override of the automatic (rain) controls.

Figure 1 The greenhouse demo is mounted on a tradeshow wedge. Source: Microchip
It’s implemented as four function blocks:
- A user interface: a capacitive touch controller built on the PIC32CM GC Curiosity Pro (EA36K74A), developed in VS Code
- A smart stepper motor controller reference design built on the AVR EB family of MCUs in MPLAB Code Configurator Melody
- A main application processor with SAM E54 on the Xplained Pro development kit (ATSAME54-XPRO), running Zephyr RTOS
- A liquid detector using the MTCH9010 evaluation kit
The greenhouse demo outlined in this article is based on a retractable roof developed by Microchip’s application engineering team in Romania. That reference design is implemented in a slightly different fashion from the greenhouse, with the smart stepper motor controller interfacing directly with the MTCH9010 evaluation board to control the roof position. This configuration is ideal for applications where the application processor does not need to be aware of the current state of the roof.

Figure 2 This retractable roof demo was developed by a design team in Romania. Source: Microchip
User interface controller
Since the control panel for this greenhouse normally would be in an area where water should be expected, it was important to take this into account when designing the user interface. Capacitive touch panels are attractive as they have no moving parts and can be sealed under a panel easily. However, capacitive touch can be vulnerable to false triggers from water.
To minimize these effects, an MCU with an enhanced peripheral touch controller (PTC) was used to contain the effects of any moisture present. Development of the capacitive touch interface was aided with MPLAB Harmony and the capacitive touch libraries, which greatly reduce the difficulty in developing touch applications.
The user interface for this demo is composed of a PIC32CM GC Curiosity Pro (EA36K74A) development kit connected to a QT7 Xplained Pro Extension (ATQT7-XPRO) kit to provide a (capacitive) slider and two touch buttons.

Figure 3 The QT7 Xplained extension kit comes with a self-capacitance slider and two self-capacitance buttons, alongside eight LEDs for button-state and slider-position feedback. Source: Microchip
The two buttons allow the user to fully open or close the tarp, while the slider enables partial open or closed configurations. When the user interface is idle for 30 seconds or more, the demo switches back to the MTCH9010 rain sensor to automatically determine whether the tarp should be opened or closed.
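To make that hand-off concrete, here is a minimal sketch of the override logic in C. The helper names (touch_poll(), tarp_set_target(), rain_detected()) and types are hypothetical stand-ins, not functions from the actual demo firmware:

```c
/* Hypothetical sketch of the manual-override logic: touch activity
 * drives the tarp directly, and after 30 s of inactivity control
 * reverts to the rain sensor. All names here are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define IDLE_TIMEOUT_MS   30000u
#define TARP_FULLY_OPEN   100u      /* percent open */
#define TARP_FULLY_CLOSED 0u

typedef enum { EV_BUTTON_OPEN, EV_BUTTON_CLOSE, EV_SLIDER } ev_kind_t;
typedef struct { ev_kind_t kind; uint8_t slider_percent; } touch_event_t;

/* Supplied elsewhere by the (hypothetical) touch, rain, and motor drivers. */
bool touch_poll(touch_event_t *ev);
bool rain_detected(void);
void tarp_set_target(uint8_t percent_open);

typedef enum { CTRL_MANUAL, CTRL_AUTO } ctrl_mode_t;
static ctrl_mode_t mode = CTRL_AUTO;
static uint32_t last_touch_ms;

/* Called periodically from the main loop with the current tick count. */
void ui_task(uint32_t now_ms)
{
    touch_event_t ev;

    if (touch_poll(&ev)) {                      /* button or slider activity */
        mode = CTRL_MANUAL;
        last_touch_ms = now_ms;
        if (ev.kind == EV_BUTTON_OPEN)
            tarp_set_target(TARP_FULLY_OPEN);
        else if (ev.kind == EV_BUTTON_CLOSE)
            tarp_set_target(TARP_FULLY_CLOSED);
        else
            tarp_set_target(ev.slider_percent); /* partial open/close */
    } else if (mode == CTRL_MANUAL &&
               (now_ms - last_touch_ms) >= IDLE_TIMEOUT_MS) {
        mode = CTRL_AUTO;                       /* fall back to rain control */
    }

    if (mode == CTRL_AUTO)                      /* let rain in: open the roof */
        tarp_set_target(rain_detected() ? TARP_FULLY_OPEN : TARP_FULLY_CLOSED);
}
```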
Smart stepper motor controller
The smart stepper motor controller is a reference design that utilizes the AVR EB family of MCUs to generate the waveforms required to perform stepping/half-stepping/microstepping of a stepper motor. By having the MCU generate the waveforms, the motor can behave independently, rather than requiring logic or interaction from the main application processor(s) elsewhere in the system. This independence also lets the controller respond directly to signals such as limit switches, mechanical stops, and quadrature encoders.

Figure 4 Smart stepper motor reference design uses core independent peripherals (CIPs) inside the MCUs to microstep a bipolar winding stepper motor. Source: Microchip
The MCU receives commands from the application processor and executes them to move the tarp to a specified location. One of the nice things about this being a “smart” stepper motor controller is that the functionality can be adjusted in software. For instance, if analog signals or limit switches are added, the firmware can be modified to account for these signals.
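As a rough illustration of that waveform generation (a sketch, not the reference design’s actual firmware), a bipolar stepper can be microstepped by driving its two windings with PWM duty cycles that follow a sine table, with the second winding offset by a quarter of the electrical cycle. The pwm_set_coil_*() hooks below are hypothetical driver functions; a duty of 128 represents zero winding current under locked-antiphase H-bridge drive:

```c
/* Illustrative microstepping sketch: advance one table entry per
 * microstep; coil B leads coil A by 90 electrical degrees. Duty is
 * 0..255, centered at 128 (zero current) for locked-antiphase
 * H-bridge drive. pwm_set_coil_a/b() are hypothetical driver hooks. */
#include <stdint.h>

#define MICROSTEPS 32u                  /* entries per electrical cycle */

void pwm_set_coil_a(uint8_t duty);
void pwm_set_coil_b(uint8_t duty);

/* One electrical revolution of 128 + 127*sin(), rounded. */
static const uint8_t sine_tab[MICROSTEPS] = {
    128, 153, 177, 199, 218, 234, 245, 253,
    255, 253, 245, 234, 218, 199, 177, 153,
    128, 103,  79,  57,  38,  22,  11,   3,
      1,   3,  11,  22,  38,  57,  79, 103,
};

static uint32_t phase;                  /* current microstep index */

void microstep_advance(int dir)         /* +1 forward, -1 reverse */
{
    phase = (phase + MICROSTEPS + (uint32_t)dir) % MICROSTEPS;
    pwm_set_coil_a(sine_tab[phase]);
    pwm_set_coil_b(sine_tab[(phase + MICROSTEPS / 4u) % MICROSTEPS]);
}
```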
While the PCB attached to the motor is custom, this function block can be replicated with the multi-phase power board (EV35Z86A), the AVR EB Curiosity Nano adapter (EV88N31A) and the AVR EB Curiosity Nano (EV73J36A).
Application processor and other ecosystems
The application processor in this demo is a SAM E54 MCU that runs Zephyr real-time operating system (RTOS). One of the biggest advantages of Zephyr over other RTOSes and toolchains is the way that the application programming interface (API) is kept uniform with clean divisions between the vendor-specific code and the abstracted, higher-level APIs. This allows developers to write code that works across multiple MCUs with minimal headaches.
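Zephyr’s devicetree-based GPIO API is a good example of that division. The minimal blinky-style sketch below runs unchanged on any board whose devicetree defines the led0 alias; every vendor- and board-specific pin detail stays in the devicetree rather than in the C source:

```c
/* Minimal Zephyr example: portable across boards because the pin and
 * port details come from the board's devicetree (the led0 alias),
 * not from vendor-specific code. */
#include <zephyr/kernel.h>
#include <zephyr/drivers/gpio.h>

static const struct gpio_dt_spec led =
    GPIO_DT_SPEC_GET(DT_ALIAS(led0), gpios);

int main(void)
{
    if (!gpio_is_ready_dt(&led))
        return -1;                      /* GPIO controller not ready */

    gpio_pin_configure_dt(&led, GPIO_OUTPUT_ACTIVE);

    while (1) {
        gpio_pin_toggle_dt(&led);       /* same call on every board */
        k_msleep(500);
    }
    return 0;
}
```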
Zephyr also has robust networking support and an ever-expanding list of capabilities that make it a must-have for complex applications. Zephyr is open source (Apache 2.0 licensing) with a very active user base and support for multiple different programming tools such as—but not limited to—OpenOCD, Segger J-Link and gdb.
Beyond the ecosystems used directly in the greenhouse demo, there are several other options. Some of the more popular examples include IAR Embedded Workbench, Arm Keil, MikroE’s Necto Studio and SEGGER Embedded Studio. These tools are premium offerings with advanced features and high-quality support to match.
For instance, I recently had an issue with booting Zephyr on an MCU where I could not access the usual debuggers and printf was not an option. I used SEGGER Ozone with a J-Link+ to troubleshoot this complex issue. Ozone is a special debug environment that eschews the usual IDE tabs to provide the developer with more specialized windows and screens.
In my case, the MCU would start up correctly from the debugger but not from a cold start. After some troubleshooting and testing, I eventually determined that one of the faults was a RAM initialization error in my code. I patched the issue with a tiny piece of startup assembly that ran before the main kernel started up. The snippet of assembly that I wrote is attached below for anyone interested.

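As a general illustration of the technique (the actual fix was assembly, shown above), the same idea expressed in C is to clear the .bss region before anything relies on zero-initialized statics. The linker symbol names below are common GNU ones and vary between linker scripts:

```c
/* Generic illustration (not the original assembly snippet): zero the
 * .bss region before any code relies on zero-initialized statics.
 * Symbol names follow common GNU linker scripts and vary by toolchain
 * (e.g., _sbss/_ebss in many CMSIS startup files). */
#include <stdint.h>

extern uint32_t __bss_start__[];        /* provided by the linker script */
extern uint32_t __bss_end__[];

/* Would be called from the reset handler, ahead of the kernel entry. */
void early_ram_init(void)
{
    for (uint32_t *p = __bss_start__; p < __bss_end__; p++)
        *p = 0u;
}
```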
The moral of the story is that development environments offer unique advantages. An example of this is IAR adding support for Zephyr to its IDE solution. In many ways, the choice of what ecosystem to develop in is up to personal preference.
There isn’t really a wrong answer if the ecosystem does what you need to make your design work. The greenhouse demo embodies this by showing multiple ecosystems and toolchains working together in a single system.
Robert Perkel is an application engineer at Microchip Technology. In this role, he develops technical content such as application notes, contributed articles, and design videos. He is also responsible for analyzing use-cases of peripherals and the development of code examples and demonstrations. Perkel is a graduate of Virginia Tech where he earned a Bachelor of Science degree in Computer Engineering.
Related Content
- Just What is an Embedded System?
- Making an embedded system safe and secure
- Developing Energy-Efficient Embedded Systems
- Building Embedded Systems that Survive the Edge
- Next Gen Embedded System Hardware, Software, Tools, and Operating
The post The ecosystem view around an embedded system development appeared first on EDN.
The role of motion sensors in the industrial market

The future of the industrial market is being shaped by groundbreaking technologies that promise to unlock new potential and redefine what is possible. These innovations range from collaborative robots (cobots) and artificial intelligence to the internet of things, digital twins, and cloud computing.
Cobots are not just tools but partners, empowering human workers to achieve greater creativity and productivity together. AI is ushering industries into a new era of intelligence, where data-driven insights accelerate innovation and transform challenges into opportunities.
The IoT is weaving vast, interconnected machines and systems that enable seamless communication and real-time responsiveness like never before. Digital twins bring imagination to life by creating virtual environments where ideas can be tested, refined, and perfected before they touch reality. Cloud computing serves as the backbone of this revolution, offering limitless power and connectivity to drive brave visions forward.
Together, these technologies are inspiring a new industrial renaissance, where innovation, sustainability, and human initiative converge to build a smarter, more resilient world.
The role of sensors
Sensors are the silent leaders driving the industrial market’s transformation into a realm of intelligence and possibility. Serving as the “eyes and ears” of smart factories, these devices unlock the power of real-time data, enabling industries to look beyond the surface and anticipate the future. By continuously sensing pressure, temperature, position, vibration, and more, sensors allow equipment and workers to be continuously monitored and bring machines to life, turning them into connected, responsive entities within the industrial IoT (IIoT).
This flow of information accelerates innovation, enables predictive maintenance, and enhances safety. Sensors do not just monitor; they usher in a new era where efficiency meets sustainability, where every process is optimized, and where industries embrace change with confidence. In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries.
Challenges for industrial motion sensing applications
Sensors in industrial environments face several significant challenges. They must operate continuously for years on battery power without failure. Additionally, it is crucial that they capture every critical event to ensure no incidents are missed. Sensors must provide accurate and precise tracking to manage processes effectively. Simultaneously, they need to be compact yet powerful, integrating multiple functions into a small device.
Most importantly, sensors must deliver reliable tracking and data collection in any environment—whether harsh, noisy, or complex—ensuring consistent performance regardless of external conditions. Overcoming these challenges is essential to making factories smarter and more efficient through connected technologies, such as the IIoT and MEMS motion sensors.
MEMS inertial sensors are essential devices that detect motion by measuring accelerations, vibrations, and angular rates, ensuring important events are never missed in an industrial environment. Customers need these motion sensors to work efficiently while saving power and to keep performing reliably even in tough conditions, such as high temperatures.
However, there are challenges to overcome. Sensors can saturate when an input exceeds their measurement range, causing them to miss important impact or vibration details. Using multiple sensors to cover different motion ranges can be complicated, and managing power consumption in an IIoT node is also a concern.
There is a tradeoff between accuracy and range: Sensors that measure small movements are very precise but can’t handle strong impacts, while those that detect strong impacts are less accurate. In industrial settings, sensors must be tough enough to handle harsh environments while still providing reliable and accurate data. Solving these challenges is key to making MEMS sensors more effective in many applications.
How the new ST industrial IMU can help
Inertial measurement units (IMUs) typically integrate accelerometers to measure linear acceleration and gyroscopes to detect angular velocity. These devices often deliver space and cost savings while reducing design complexity.
One example is ST’s new ISM6HG256X intelligent IMU. This MEMS sensor is the industry’s first IMU for the industrial market to integrate high-g and low-g sensing into a single package with advanced features such as sensor fusion and edge processing.
The ISM6HG256X addresses key industrial challenges by integrating a gyroscope with an accelerometer whose single mechanical structure spans a wide dynamic range, capturing both low-g vibrations (±16 g) and high-g shocks (±256 g). This effectively eliminates the need for multiple sensors and simplifies system architecture. The compact device leverages embedded edge processing and adaptive self-configurability to optimize performance while significantly reducing power consumption, thereby extending battery life.
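One way firmware might exploit such a dual-range part, sketched below with hypothetical driver calls rather than ST’s actual API, is to use the precise low-g channel until it nears full scale and then fall back to the high-g channel so that strong shocks are still captured:

```c
/* Hypothetical sketch (not ST's driver API): prefer the precise low-g
 * accelerometer channel; when it approaches its +/-16 g full scale,
 * switch to the +/-256 g channel so the shock isn't clipped. */
#include <math.h>
#include <stdbool.h>

#define LOW_G_FULL_SCALE  16.0f   /* g */
#define SATURATION_MARGIN 0.95f   /* treat >95% of full scale as clipped */

/* Supplied by a (hypothetical) sensor driver; both return acceleration in g. */
float accel_read_low_g(void);
float accel_read_high_g(void);

float accel_read_combined(bool *used_high_g)
{
    float a = accel_read_low_g();

    if (fabsf(a) < SATURATION_MARGIN * LOW_G_FULL_SCALE) {
        *used_high_g = false;
        return a;                 /* precise low-g measurement */
    }
    *used_high_g = true;
    return accel_read_high_g();   /* near clipping: take the high-g channel */
}
```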
Engineered to withstand harsh industrial environments, the IMU reliably operates at temperatures up to 105°C, ensuring consistent accuracy and durability under demanding conditions. Supporting Industry 5.0 initiatives, the sensor’s advanced sensing architecture and edge processing capabilities enable smarter, more autonomous industrial systems that drive innovation.
Unlocking smarter tracking and safety, this integrated MEMS motion sensor is designed to meet the demanding needs of the industrial sector. It enables real-time asset tracking for logistics and shipping, providing up-to-the-minute information on location, status, and potential damage. It also enhances worker safety through wearable devices that detect falls and impacts, instantly triggering emergency alerts to protect personnel.
Additionally, it supports condition monitoring by accurately tracking vibration, shock, and precise motion of industrial equipment, helping to prevent downtime and costly failures. In factory automation, the solution detects unusual vibrations or impacts in robotic systems instantly, ensuring smooth and reliable operation. By combining tracking, monitoring, and protection into one component, industrial operations can achieve higher efficiency, safety, and reliability with streamlined system design.
The ISM6HG256X IMU sensor combines simultaneous low-g (±16 g) and high-g (±256 g) acceleration detection with a high-performance precision gyroscope for angular rate measurement. (Source: STMicroelectronics)
As the industrial landscape evolves toward greater flexibility, sustainability, and human-centered innovation, industrial IMU solutions are aligned with the key drivers shaping the market’s future. IMUs can enable precise motion tracking, reliable condition monitoring, and energy-efficient edge processing while supporting the decentralization of production and enhancing resilience and agility within supply chains.
Additionally, the integration of advanced sensing technologies contributes to sustainability goals by optimizing resource use and minimizing waste. As manufacturers increasingly adopt AI-driven collaboration and advanced technology integration, IMU solutions provide the critical data and reliability needed to drive innovation, customization, and continuous improvement across the industry.
The post The role of motion sensors in the industrial market appeared first on EDN.
Lightning and trees

We’ve looked at lightning issues before. Please see “Ground strikes and lightning protection of buried cables.”
The headline below was found online at the URL hyperlinked here.

Recent headline from the local paper. Source: ABC7NY
This ABC NY article describes how a teenage boy tried to take refuge from the rain in a thunderstorm by getting under the canopy of a tree. In that article, we find this quote: “The teen had no way of knowing that the tree would be hit by lightning.”
This quote, apparently the opinion of the article’s author, is absolutely incorrect. It is total and unforgivable rubbish.
Even when I was knee-high to Jiminy Cricket, I was told over and over and over by my parents NEVER to try to get away from rain by hiding under a tree. Any tree that you come across will have its leaves reaching way up into the air, and those wet leaves are a prime target for a lightning strike, as illustrated in this screenshot:

Conceptual image of lightning striking tree. Source: Stockvault
Somebody didn’t impart this basic safety lesson to this teenager. It is miraculous that this teenager survived the event. The above article cites second-degree burns, but a radio item that I heard about this incident also cites nerve damage and a great deal of lingering pain.
Recovery is expected.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Ground strikes and lightning protection of buried cables
- Lightning rod ball
- Teardown: Zapped weather station
- No floating nodes
- Why do you never see birds on high-tension power lines?
- Birds on power lines, another look
- A tale about loose cables and power lines
- Shock hazard: filtering on input power lines
- Misplaced insulator proves fatal
The post Lightning and trees appeared first on EDN.