EDN Network
Custom hardware helps deliver safety and security for electric traction

Electric traction has become a critical part of a growing number of systems that need efficient motion and position control. Motors do not just provide the driving force for vehicles, from e-bikes to cars to industrial and agricultural machinery. They also enable a new generation of robots, whether they use wheels, propellers or legs for motion.
The other common thread for many of these systems lies in the way they are expected to operate in a highly connected environment. For instance, wireless connectivity has enabled novel business models for e-bike rental and delivers positioning and other vital data to robots as they move around.
But the same connections to the Internet open avenues of attack in ways that previous generations of motion-control systems have not had to deal with. It complicates the tasks of designing, certifying, and maintaining systems that ensure safe operation.
To guarantee that actuators do not cause injury, designers must build safeguards into their control systems and ensure those safeguards cannot be bypassed to create unsafe situations. They also need to ensure that corruption by hackers does not disrupt the system’s behavior. Security, therefore, now plays a major role in the design of motor-control subsystems.
Figure 1 Connectivity in warehouse robots also opens vulnerabilities in motor control systems. Source: EnSilica
Algorithmic demands drive architectural change
Complexity in motor control also arises from the novel algorithms that designers are using to improve energy efficiency and deliver more precise positioning. Drive algorithms have moved away from simple strategies, such as analog controllers that merely relate the power delivered to the motor windings to the motor’s rotational speed.
They now employ far more sophisticated techniques such as field-oriented control (FOC) that are better able to deliver precise changes in torque and rotor position. With FOC, a mathematical model predicts with high precision when power transistors should activate to supply power to each of the stator windings in order to control rotor torque.
The maximum torque results when the electric and magnetic fields are offset by 90°, delivering highly efficient motion control. It also ensures high positioning accuracy with no need for expensive sensors or encoders. Instead, the mathematical model uses voltage and current inputs from the motor windings to provide the data needed to estimate position and state accurately.
Figure 2 The use of techniques like FOC delivers highly efficient motion control, which ensures greater positioning accuracy without expensive sensors or encoders. Source: EnSilica
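The coordinate math at the heart of FOC can be sketched in a few lines. The following is an illustrative Python sketch of the standard amplitude-invariant Clarke and Park transforms, which map measured phase currents into the rotor-aligned frame where torque (q-axis) and flux (d-axis) components can be regulated independently; it is not any vendor's implementation, and a production version would run in fixed point on a DSP or hardware block:

```python
import math

def clarke(i_a, i_b):
    """Amplitude-invariant Clarke transform: three-phase currents to the
    stationary two-axis (alpha, beta) frame. The third phase current is
    implied by i_a + i_b + i_c = 0 for a balanced machine."""
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: rotate (alpha, beta) into the rotor-aligned (d, q)
    frame using the estimated rotor angle theta (radians)."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q
```

For a balanced sinusoidal current set aligned with the rotor angle, the transforms yield a constant d-axis value and zero q-axis value, which is what makes the downstream PI regulators tractable.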
In robotics, these algorithms are being supplemented by techniques such as reinforcement learning. Using machine learning to augment motion control has proven highly effective at delivering precise traction control for both wheeled vehicles and legged robots. Dusty or slippery surfaces can be problematic for any automated traction control systems. Training the system to cope with these difficult surfaces delivers greater stability than conventional model-based techniques.
Such control strategies often call for the use of extensive software-based algorithms running on digital signal processors (DSPs) and other accelerators alongside high-performance microprocessors in a layered architecture because of the different time horizons of each of the components.
An AI model trained using reinforcement learning, for example, will typically operate with a longer cycle time than the FOC algorithms and the pulse-width modulation (PWM) control signals below them that ensure the motors deliver the required response. As a result, DSP-based models with long time horizons are supported by lower-level algorithms and peripherals that use hardware assistance to meet the deadlines required for real-time operation.
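The layered timing described above amounts to a divided-down scheduler: each layer updates at an integer fraction of the base PWM rate. The rates in this minimal Python sketch are hypothetical, chosen purely for illustration:

```python
# Hypothetical loop rates for a layered motor-control architecture.
PWM_HZ = 20_000     # hardware timer domain (fastest)
FOC_HZ = 10_000     # current-loop update rate
POLICY_HZ = 100     # learned policy / trajectory layer (slowest)

def ticks_per_update(base_hz, layer_hz):
    """How many base-rate ticks elapse between updates of a slower layer."""
    return base_hz // layer_hz

def run(cycles):
    """Run `cycles` base (PWM-rate) ticks, invoking each slower layer
    every Nth tick, as a divided-down hardware scheduler would."""
    counts = {"pwm": 0, "foc": 0, "policy": 0}
    for t in range(cycles):
        counts["pwm"] += 1
        if t % ticks_per_update(PWM_HZ, FOC_HZ) == 0:
            counts["foc"] += 1
        if t % ticks_per_update(PWM_HZ, POLICY_HZ) == 0:
            counts["policy"] += 1
    return counts
```

At these example rates, the FOC loop runs every 2 PWM ticks and the policy layer every 200, which is why the slow layers can live in software while the fast ones need hardware assistance.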
The case for custom hardware
The hard real-time functions are those that have direct control over the power transistors that deliver power to the motor windings, usually implemented in an “inverter” comprising a half-bridge circuit for each of the motor phases. Traditionally, such half-bridge controllers have focused on the implementation of timing loops for PWM.
The switching frequencies are often too high to be supported reliably by software running even on a dedicated microprocessor without needing the processor to be clocked at excessive frequencies. The state machines used to implement PWM switching also take care of functions such as dead-time insertion, which is used to ensure that each transistor doesn’t turn on before its counterpart transistor in the half-bridge inverter is turned off.
The timing gap prevents the shoot-through of current that would result if both transistors were active at the same time. The excess current can damage the motor windings and the drive circuit board. These subsystems are so important that they are often provided as standard building blocks for industrial microcontrollers.
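The dead-time behavior described above is straightforward to model in software, even though it is implemented as a hardware state machine in practice. The sketch below is an illustrative Python model only; the tick-based interface and parameter names are assumptions made for clarity:

```python
def insert_dead_time(pwm, dead_ticks):
    """Model of dead-time insertion for one half-bridge.
    `pwm` is the ideal high-side command per tick (1 = high-side on,
    0 = low-side on). Returns (high, low) gate command lists in which
    each switch's turn-ON is delayed by `dead_ticks` after the other
    switch turns off, so the two are never on simultaneously."""
    high, low = [], []
    since_edge = dead_ticks  # ticks since the last command change
    prev = pwm[0]
    for cmd in pwm:
        if cmd != prev:
            since_edge = 0       # an edge starts the dead-time window
            prev = cmd
        if since_edge < dead_ticks:
            high.append(0)       # blanking interval: both switches off
            low.append(0)
            since_edge += 1
        else:
            high.append(cmd)
            low.append(1 - cmd)
    return high, low
```

The invariant worth checking is that the high-side and low-side gates are never asserted in the same tick; that is exactly the shoot-through condition the hardware exists to prevent.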
However, in the context of increased threats from hackers and the need to support advanced algorithms, the inverter controller can become a vital component in overall system resilience. By customizing the inverter controller, implementors can more easily guarantee safety and security, as well as protect core traction-control IP. Careful partitioning between the inverter and the rest of the drive subsystem not only supports all three aims but can also reduce the cost of implementation and verification.
A major advantage of hardware in terms of security is its relative immutability compared to software. Attackers cannot replace important parts of a hardware algorithm even if they gain access. This also simplifies some aspects of security certification: techniques such as formal verification can determine whether the circuitry can ever enter a particular state, and future updates to the system will not directly affect that circuitry.
It’s possible for code changes to alter the interactions between the microcontroller-based subsystems and the lower-level hardware. However, this relationship provides opportunities for the designer to improve their ability to guarantee safe operation, even under the worst-case conditions where a hacker has gained access and replaced the firmware.
Hardware-based lockout mechanisms and security checks can ensure that if the upper-level software of the system is compromised, the system will place itself into a safe state. The lockouts can include support for mechanisms such as secure boot. This ensures that only the software that passes the ASIC’s own checks can activate the motor.
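The lockout-plus-secure-boot behavior described above can be modeled abstractly. The following is a toy Python model, not any specific ASIC's logic; the digest check and the latched-fault mechanism are illustrative stand-ins for whatever attestation and interlock scheme a real design would use:

```python
class MotorLockout:
    """Toy model of a hardware lockout block: the motor drive can only
    be armed after firmware passes the ASIC's own boot check, and any
    integrity fault latches a safe state that software cannot clear
    (only a hardware reset could, which this model omits)."""

    def __init__(self, expected_digest):
        self.expected = expected_digest  # immutable reference digest
        self.armed = False
        self.latched_fault = False

    def secure_boot(self, firmware_digest):
        # Only firmware matching the stored digest may arm the drive.
        if not self.latched_fault and firmware_digest == self.expected:
            self.armed = True
        return self.armed

    def report_fault(self):
        # Brownout or integrity fault: force and latch the safe state.
        self.armed = False
        self.latched_fault = True

    def pwm_enabled(self):
        # PWM outputs are gated on both conditions in hardware.
        return self.armed and not self.latched_fault
```

The key property is that compromised firmware (a wrong digest) can never arm the drive, and a latched fault overrides even correctly signed firmware until the hardware itself is reset.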
Using hardware for safety and security protection can help reduce the cost of software assurance, which is now subject to legislation such as the European Union’s Cyber Resilience Act (CRA). The new law demands that manufacturers and service operators issue software updates for critically compromised systems.
By moving key elements of the system design into hardware and minimizing the implications of a hack, the designer can reduce the need for frequent updates if new vulnerabilities are found in upper-level software. Similarly, moving interlocks into hardware simplifies the task of demonstrating safe operation for standards such as ISO 26262 compared with purely software-based implementations.
Physical attacks often involve power interruptions, and an ASIC can be designed to protect against such tampering. For example, if power-monitoring circuitry detects a brownout, it can reset the microprocessor and place the rest of the system in a safe, quiescent state.
Hardware choices that support compliance and control
Alongside the additional functions, an ASIC inverter controller can host more extensive parts of the motor-control subsystem and reduce the cost of the microprocessor components. For example, FOC relies on trigonometric and other computationally expensive transforms.
Moving these into a coprocessor block in the ASIC can streamline the design. This combination can also reduce control latency by connecting inputs from current and voltage sensors to the low-level DSP functions.
The functions need not all be fixed. Modern ASICs may include configurable blocks such as programmable filters, gain stages, and parameterizable logic to offer a level of adaptability. The use of programmable functions can let a single ASIC design control various motor configurations across an entire product range.
The programming of these elements illustrates one of the many safety and security trade-offs that design teams can make. Incorporating non-volatile memory into the ASIC can provide the greatest security. Putting the programmable elements into an ASIC that can be locked by blowing fuses after manufacturing is more secure than a design where a host microcontroller writes configuration values during the boot process.
MCU-based control chips require a silicon process suitable for storing firmware, normally based on flash memory. This implies additional processing masks, which increase the cost of the final product, a factor that is especially sensitive at high production volumes.
If the design calls for the high-voltage capability offered by Bipolar-CMOS-DMOS (BCD) processes for the motor-drive circuitry, a second die may be needed for non-volatile memory. But the flash CMOS process will normally support a higher logic density than the BCD-based parts, which allows the overall cost to be optimized.
Thanks to its ability to support deterministic control loops and verification techniques that ease security and safety certification, custom hardware is becoming increasingly important to e-mobility and robotics designs.
Through careful architecture selection, such hardware can complement software’s flexibility and its ability to support novel control strategies as they evolve. The result is an environment where ASIC use offers design teams the best of both worlds.
David Tester, chief engineer at EnSilica, has 30+ years of experience in the development of analogue, digital and mixed-signal ICs across a wide range of semiconductor products.
Related Content
- Learning the Basics of Motor Control
- Optimizing motor control for energy efficiency
- Five trends to watch in automotive motor control
- MCUs specialize in motor control and power conversion systems
- High-Performance Motor Control Chip with Multi-Core Architecture
The post Custom hardware helps deliver safety and security for electric traction appeared first on EDN.
HV reed relays are customizable to 20 kV

Series 600 high-voltage reed relays from Pickering Electronics offer over 2500 combinations of rating and connection options. They are customizable from 3.5 kV to 12.5 kV, with standoff voltages from 5 kV to 20 kV and switching power up to 200 W. Switch-to-coil isolation reaches 25 kV, safely separating control circuitry from high-voltage paths even in demanding environments.
Built with vacuum-sealed, instrumentation-grade reed switches, the relays are available with 1 Form A (NO), 1 Form B (NC), and 1 Form C (Changeover) contacts and 5-V, 12-V, or 24-V coils. An optional diode or Zener-diode combination suppresses back EMF, while mu-metal screening reduces magnetic interference. Insulation resistance exceeds 10¹³ Ω, ensuring minimal leakage and maximum isolation.
A variety of case sizes, connection types (turrets, flying leads, PCB pins), and potting materials helps engineers meet thermal, mechanical, and environmental requirements. Series 600 relays support many high-voltage test and switching applications, including EV BMS and charge-point testing, inverter or insulation-resistance testing in solar systems, and isolation in medical equipment.
Request free pre-production samples, access the datasheet, or try the configuration tool via the product page link below.
WM-Bus modules enable flexible sub-GHz metering

Quectel has announced the KCMCA6S series of Wireless M‑Bus (WM‑Bus) modules, capable of sub-1 GHz operation for smart metering. Based on Silicon Labs’ EFR32FG23 wireless SoC, featuring a 73‑MHz Arm Cortex‑M33 processor, the modules operate in the 868‑MHz, 433‑MHz, and 169‑MHz bands.
The devices comply with EN 13757‑4, the European standard for wireless metering, and support the WM‑Bus protocol and other proprietary sub‑GHz protocols. Their built-in software stack and flexible configuration modes eliminate the need for third-party protocol integration.
Modules include an optional integrated SAW filter to limit interference from cellular signals, an important factor for devices combining WM-Bus with cellular technologies such as NB-IoT or LTE Cat 1. They feature 32 KB of RAM and 256 KB of flash memory.
Availability for the KCMCA6S series was not provided at the time of this announcement.
TOLL-packaged SiC MOSFETs cut size, losses

Three 650-V SiC MOSFETs from Toshiba come in compact surface-mount TOLL packages, boosting both power density and efficiency. The 9.9×11.68×2.3-mm package shrinks volume by more than 80% compared to through-hole TO-247 and TO-247-4L(X) types.
TOLL also provides lower parasitic impedance, reducing switching losses. As a 4-terminal package, it enables a Kelvin source connection for the gate drive, minimizing the impact of package inductance and supporting high-speed switching. For the TW048U65C 650-V SiC MOSFET, turn-on and turn-off losses are about 55% and 25% lower, respectively, than the same Toshiba products in the TO-247 package without Kelvin connection.
The third-generation MOSFETs in this launch target switch-mode power supplies in servers, communication gear, and data centers. They are also suited for EV charging stations, photovoltaic inverters, and UPS equipment.
Datasheets and device availability are accessible via the product page links below.
Toshiba Electronic Devices & Storage
Software verifies HDMI 2.2 electrical compliance

Keysight physical-layer test software provides compliance and performance validation for HDMI 2.2 transmitters and Cat 4 cables. The D9021HDMC electrical performance and compliance software and the N5992HPCD cable eye test software help engineers address the demands of UHD video and HDR content. Together, they improve signal integrity and support HDMI Forum compliance.
The recent release of the HDMI 2.2 test specification introduces more stringent compliance requirements for transmitters and cables, exposing gaps in conventional test coverage. As the HDMI ecosystem evolves to support higher resolutions, faster refresh rates, and greater bandwidth, the Keysight software provides a unified platform for automated electrical testing as defined by the specification.
Keysight’s platform combines high-bandwidth measurement hardware with automated compliance workflows to manage complex test scenarios across transmitters and cables. Its modular architecture enables flexible test configurations, and built-in diagnostics help identify the root causes of signal degradation. This allows design teams to verify compliance and optimize performance early in development.
GNSSDO modules ensure reliable PNT performance

Microchip’s GNSS-disciplined oscillator (GNSSDO) modules integrate positioning, navigation, and timing (PNT) for mission-critical aerospace and defense applications. Built with the company’s chip-scale atomic clock, miniature atomic clock, and OCXOs, the compact modules are well-suited for systems that operate in GNSS-denied environments.
The modules process reference signals from a GNSS or an alternative clock source to discipline the onboard oscillator, ensuring precise timing, stability, and holdover operation. They can function as a PNT subsystem within a larger system or as a stand-alone unit.
All modules output 1-PPS TTL and 10-MHz sine wave signals, with distinct features for different use cases:
- MD-013 ULTRA CLEAN – Highest-performance design with multi-constellation GNSS support, ultra-low phase noise, and short-term stability; optional dual-band receiver upgrades.
- MD-300 – Rugged 1.5×2.5-in. module with MEMS OCXO or TCXO for low g-sensitivity, shock/vibration tolerance, and low thermal response; suited for drones and manpacks.
- LM-010 – PPS-disciplined module for LEO applications requiring radiation tolerance, stability, and holdover; built with a digitally corrected OCXO or low-power CSAC.
The GNSSDO modules are available in production quantities.
The Smart Ring: Passing fad, or the next big health-monitoring thing?

The battery in my two-year-old first-gen Pixel Watch generally—unless I use GPS and/or LTE data services heavily—lasts 24 hours-plus until it hits the 15%-left Battery Saver threshold. And because sleep quality tracking is particularly important to me, I’ve more or less gotten in the habit of tossing it on the charger right before dinner, for maximum likelihood it’ll then robustly make it through the night. Inevitably, however, once (or more) every week or so, I forget about the charger-at-dinner bit and then, right when I’m planning on hitting the sack, find myself staring at a depleted watch that won’t make it until morning. First world problem. I know. Still…
Therein lies one (of several) of the key motivations behind my recent interest in the rapidly maturing smart ring product category. Such devices typically tout ~1 week (or more) of between-charges operating life, and they also recharge rapidly, courtesy of their diminutive integrated cells. A smart ring also affords flexibility regarding what watches (including traditional ones) I can then variously put on my wrist. And, as noted within my 2025 CES coverage:
This wearable health product category is admittedly more intriguing to me because unlike glasses (or watches, for that matter), rings are less obvious to others, therefore it’s less critical (IMHO, at least) for the wearer to perfectly match them with the rest of the ensemble…plus you have 10 options of where to wear one (that said, does anyone put a ring on their thumb?).
I’ve spent the last few months acquiring and testing smart rings from three leading companies: Oura (the Gen3 Horizon), Ultrahuman (the Ring AIR), and RingConn (the Gen 2). They’re shown left-to-right on my left-hand index finger in the following photo; that’s my wedding band on the ring finger. The results have been interesting, to say the least. I’ll save per-manufacturer and per-product specifics for follow-up write-ups to appear here in the coming months. For now, in the following sections, I’ll share some general comparisons that span multiple-to-all of them.
An important upfront note: back in April, I learned that Finland-based Oura (the product category’s volume shipment originator, and the current worldwide market leader) had successfully obtained a preliminary ruling from the United States ITC (International Trade Commission) that both China-based RingConn and India-based Ultrahuman had infringed on its patent portfolio. The final ITC judgment, released on Friday, August 22 (three days ago as I write these words), affirmed that earlier ruling, blocking (in coordination with U.S. Customs and Border Protection enforcement) further shipments of both RingConn and Ultrahuman products into the country and, more generally, further sales by either company after a 60-day review period ending on October 21. There’s one qualifier, apparently: retailers are allowed to continue selling past that point until their warehouse inventories are depleted.
I haven’t seen a formal response yet from RingConn, but Ultrahuman clearly hasn’t given up the fight. It’s already countersued Oura in its home country, also reporting that the disputed patent, which it claims combines existing components in an obvious way that renders it invalid, is being reviewed by the U.S. Patent and Trademark Office’s Patent Trial and Appeal Board. The company’s public statement reads:
We welcome the ITC’s recognition of consumer-protective exemptions and its rejection of attempts to block the access of U.S. consumers. Customers can continue purchasing and importing Ring AIR directly from us through October 21, 2025, and at retailers beyond this date.
What’s more, our software application and charging accessories remain fully available, after the Commission rejected Oura’s request to restrict them.
While we respectfully disagree with the Commission’s ruling on U.S. Patent No. 11,868,178, its validity is already under review by the USPTO’s Patent Trial and Appeal Board (PTAB) on the grounds of obviousness.
Public reporting has raised questions about Oura’s business practices, and its reliance on litigation to limit competition.
We are moving forward with confidence — doubling down on compliance while accelerating development of a next-generation ring built on a fundamentally new architecture. As many observers recognize, restricting competition risks fewer choices, higher prices, and slower innovation.
Ultrahuman remains energized by the road ahead, committed to championing consumer choice and pushing the frontier of health technology.
One perhaps-obvious note: the ITC’s actions only affect sales in the United States, not elsewhere. This also isn’t the first time that the ITC has gotten involved in a wearables dispute. Apple Watch owners, for example, may be familiar with the multi-year, ongoing litigation between Apple and Masimo regarding blood oxygen monitoring. Also, more specific to today’s topic, Samsung pre-emptively filed a lawsuit against Oura prior to entering the market with its Galaxy Ring in mid-2024, citing Oura’s claimed litigious history and striving to ensure that Samsung’s product launch wouldn’t be jeopardized by patent infringement lawsuits from Oura.
The lawsuit was eventually dismissed in March, with the judge noting a lack of evidence that Oura ever intended to sue Samsung, but Samsung is now appealing that ruling. And as I noted in recent Google product launch event coverage, this same litigious environment may at least partly explain why both Google/Fitbit and Apple haven’t entered the market…yet, at least.
Sizing prep is essential
Before you buy a smart ring, whatever company’s device you end up selecting, I strongly advise you to first purchase a sizing kit and figure out what size you need on whatever finger you plan to wear it. Sizing varies finger-to-finger and hand-to-hand for every person, first and foremost. Not to mention that if the ring enhances your fitness, leading to weight loss, you’ll probably need to buy a smaller replacement ring eventually—the battery and embedded circuitry preclude the resizing that a jeweler historically would do—hold that thought.
Smart ring sizing can also vary not only from traditional ring measurements’ results, but also from company to company and model to model. My Oura and RingConn rings are both size 11, for example, whereas the Ultrahuman one is a size 10. Sizing kits are inexpensive…usually less than $10, with the purchase price often then applicable as credit against the subsequent smart ring price. And in the RingConn case, the kits are free from the manufacturer’s online store. A sizing kit is upfront money well spent, regardless of the modest-at-worst cost.
One key differentiator between manufacturers that you’ll immediately run into involves charging schemes. Oura’s and Ultrahuman’s rings leverage close-proximity wireless inductive charging: both the battery and the entirety of the charging circuitry, including the charging coil, are fully embedded within the ring. RingConn’s approach, conversely, uses magnetized connection contacts (for proper auto-alignment) on both the ring itself and the associated charger.
(Ultrahuman inductive charging)
(RingConn conventional contacts-based charging)
I’ve yet to come across any published pros-and-cons positioning on the two approaches, but I have theories. Charging speed doesn’t seem to be one of the factors. Second-gen-and-beyond Google Pixel Watches with physical contacts reportedly recharge faster than my wireless-charging first-gen unit, especially after the latter’s firmware update-induced intentional slowdown. Conversely, I didn’t notice any statistically significant charge-speed variance between any of the smart rings I tested. Perhaps their diminutive battery capacities minimize any otherwise-evident variances?
What about fluid-intrusion resistance? I could imagine that, in line with its usage in rechargeable electric toothbrushes operated in water exposure-prone environments, inductive charging might make it possible, or at a minimum easier from a design standpoint, to achieve higher IP (ingress protection) ratings for smart rings. Conversely, however, there’s a consumer cost-and-convenience factor that favors RingConn’s more traditional approach. I’ve acquired two chargers per smart ring I tested—one for upstairs at my desk, the other in the bathroom—the latter so I can give the ring a quick charge boost while I’m in the shower.
Were I to go down or (heaven forbid) up a size-or-few with an Oura or Ultrahuman ring, my existing charger suite would also be rendered useless, since inductive charging requires a size-specific “mount”. RingConn’s approach, on the other hand (bad pun intended), is ring size-agnostic.
Speaking of RingConn, let’s talk about charging cases (and their absence in some cases). The company’s $199 Gen 2 “Air” model comes with the conventional charging dock shown earlier. Conversely, one of the added benefits (along with sleep apnea monitoring) of the $299 Gen 2 version is a battery-inclusive charging case, akin to those used by Bluetooth earbuds:
It’s particularly handy when traveling, since you don’t need to also pack a power cord and wall wart (conventional charger docks can also be purchased separately). Oura-compatible charging cases are, currently at least, only available from (unsanctioned-by-Oura, so use at your own risk) third parties and require a separate Oura-sourced dock.
And as for Ultrahuman, at least as far as I’ve found, there are only docks.
Internal and external form factors
In addition to the aforementioned charging circuitry, there is other integrated-electronics commonality between the various manufacturers’ offerings (leading to the aforementioned patent infringement claim—if you’re Oura—or “obviousness” claim—if you’re Ultrahuman). You’ll find multi-color status LEDs, for example, along with Bluetooth and/or NFC connectivity, accelerometers, body temperature monitoring, and pulse rate (green) and oximetry (red) plus infrared photoplethysmography sensors.
The finger is the preferable location for blood-related monitoring vs the wrist, actually (theoretically at least), thanks to higher comparative aggregate blood flow density. That said, however, sensor placement is particularly critical on the finger, as well as particularly difficult to achieve, due to the ring’s circular and easily rotated form factor.
Most smart rings are more or less round, for style reasons and akin to traditional non-electronic forebears, with some including flatter regions to guide the wearer in achieving ideal on-finger placement alignment. One extreme example is the Heritage version of the Oura Gen3 ring:
with a style-driven flatter frontside compared to its Gen3 Horizon sibling:
Interestingly, at least to me, Oura’s newest Ring 4 only comes in a fully round style:
as well as in an expanded suite of both color and size options, all specifically targeting a growing female audience, which Ultrahuman’s Rare line is also more obviously pursuing (I hadn’t realized this until my recent research, but the smart ring market was initially male-dominated):
The Ring 4 also touts new Smart Sensing technology with 18 optical signal paths (vs 8 in the Gen3) and a broader sensor array. I’m guessing that this enhancement was made in part to counterbalance the degraded-results effects of non-ideal finger placement. To wit, look at the ring interior and you’ll encounter another means by which manufacturers (Oura with the Gen3, as well as RingConn, shown here) include physical prompting to achieve and maintain proper placement: sensor-inclusive “bump” guides on both sides of the ring’s inner surface:
Some people apparently find them annoying, judging from Reddit commentary and reviews I’ve read, along with the fact that Ultrahuman rings’ interiors are smooth and that Oura comparably retracted the sensors on the Ring 4. The bumps don’t bother me; in fact, I appreciate their ongoing optimal-placement physical-guidance assistance.
Accuracy, or lack thereof
How did I test all these rings? Thanks for asking. At any point in time, I had one on each index finger, along with my Pixel Watch on my wrist (my middle fingers were also available, along with my right ring finger, but their narrower diameters led to loose fits that I feared would unfairly throw off measurement results).
I rotated through my three-ring inventory both intra- and inter-day, also repeatedly altering which hand’s index finger might have a given manufacturer’s device on it. And I kept ongoing data-point notes to supplement my oft-imperfect memory.
The good news? Cardio- and pulmonary-related data measurements, including sleep-cycle interpretations (which I realize also factor in the accelerometer; keep reading), seemed solid. In the absence of professional medical equipment to compare against, I have no way of knowing whether any of the output data sets (which needed to be viewed on the associated mobile apps, since unlike watches, these widgets don’t have built-in displays…duh…) were accurate. But the fact that they all at least roughly matched each other was reassuring in and of itself.
Step counting was a different matter, however. Two general trends became increasingly apparent as my testing and data collection continued:
- Smart ring step counts closely matched both each other and the Pixel Watch on weekends, but grossly overshot the smart watch’s numbers on weekdays, and
- During the week, whatever ring I had on my right hand’s index finger overshot the step-count numbers accumulated by its left-hand counterpart…consistently.
Before reading on, can you figure out what was going on? Don’t feel bad if you’re stumped; I thank my wife’s intellect (which, I might add, immediately discerned the root cause), not mine (sloth-like and, standalone, unsuccessful), for sorting out the situation. On the weekends, I do a better job of staying away from my computer keyboard; during the week, the smart rings’ accelerometers were counting key presses as steps. And I’m right-handed, therefore leading to additional right-hand movement (and phantom step counts) each time I accessed the trackpad.
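The phantom-step effect is easy to reproduce with the kind of naive detector a wearable's accelerometer pipeline approximates. The sketch below is illustrative only; the threshold and refractory values are invented, and no vendor's actual algorithm is this simple:

```python
def count_steps(accel_mag, threshold=1.15, refractory=5):
    """Naive step detector: count upward crossings of an acceleration-
    magnitude threshold (in g), with a refractory period of samples so
    one impact isn't counted twice. Note that ANY rhythmic hand motion
    exceeding the threshold (typing, trackpad taps) registers as steps."""
    steps, cooldown = 0, 0
    for mag in accel_mag:
        if cooldown > 0:
            cooldown -= 1
        elif mag > threshold:
            steps += 1
            cooldown = refractory
    return steps
```

Feed it a walking-like magnitude trace and a typing-like trace of jolts at a similar cadence, and it counts the same number of "steps" for both, which is exactly the weekday overshoot described above.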
By the way, each manufacturer’s app, with varying breadth, depth, and emphasis, not only reports raw data but also interpretations of stress level and the like by combining and analyzing multiple sensors’ outputs. To date, I’ve generally overlooked these additional results nuances, no matter that I’m sure I’d find the machinations of the underlying algorithms fascinating. More to come in the future; for now, with three rings tested, the raw data was overwhelming enough.
Battery life and broader reliability
As I dove into the smart ring product category, I kept coming across mentions of claimed differentiation between their “health” tracking and other wearables’ “fitness” tracking. It turns out that, as documented in at least some cases, smart rings aren’t continuously measuring and logging data coming from a portion of their sensor suites. I haven’t been able to find any info on this from RingConn, whose literature is in general comparatively deficient; I’d welcome reader direction toward published info to bolster my understanding here. That said, the company’s ring was the clear leader of the three, dropping only ~5% of charge per day (impressively translating to nearly 3 weeks of between-charges operating life until the battery is drained).
Oura’s rings only discern heart rate variability (HRV) during sleep (albeit logging the base heart rate more frequently), “to avoid the daytime ‘noise’ that can affect your data and make it harder to interpret”. Blood oxygen (SpO2) sensing also only happens while asleep (I took this photo right after waking up, right before the ring figured out I’d done so and shut off):
Selective, versus continuous, data measurement has likely obvious benefits when it comes to battery life. That said, my Oura ring (which, like its RingConn counterpart, I bought already lightly used; keep reading) saw its battery level drop by an average of ~15% per day.
And Ultrahuman? The first ring I acquired only lasted ~12 hours until drained, and took nearly a day to return to “full”, the apparent result of a firmware update gone awry (unrecoverable in this case, alas). To its credit, the company sent me a replacement ring (and told me to just keep the existing one; stay tuned for a future teardown!). At about that same time, Ultrahuman also added another Oura-reminiscent and battery life-extending operating mode called “Chill” to the app and ring settings, which it also made the default versus the prior-sole “Turbo”:
Chill Mode is designed to intelligently manage power while preserving the accuracy of your health data. It extends your Ring AIR battery life by up to 35% by tracking only what matters, when it matters. Chill Mode uses motion and context-based intelligence to track heart rate and temperature primarily during sleep and rest.
More generally, keep in mind that none of these devices are particularly inexpensive; the RingConn Gen 2 Air is most economical at $199, with the Oura Ring 4 the priciest mainstream option at between $349 and $499, depending on color (and discounting the up-to-$2,200 Ultrahuman Rare…ahem…). A smart ring that lasts a few years while retaining reasonable battery life across inevitable cycle-induced cell degradation is one thing. One that becomes essentially unusable after a few months is conversely problematic from a reputation standpoint.
Total cost, and other factors to consider
Keep in mind, too, that ongoing usage costs may significantly affect the total price you end up paying over a smart ring’s operating life. Ironically, RingConn is not only the least expensive option from an entry-cost standpoint but also over time; although the company offers optional extended warranty coverage for damage, theft, or loss, lifetime support of all health metrics is included at no extra charge.
On the other end of the spectrum is Oura; unless you pay $5.99/month or $69.99/year for a membership (first month free), “you’ll only be able to see your three daily Oura scores (Readiness, Activity, and Sleep), ring battery, basic profile information, app settings, and the Explore content.” Between these spectrum endpoints is Ultrahuman. Like RingConn, it offers extended warranties, this time including (to earlier comments) 2-year “Weight loss insurance”:
Achieved your weight loss goals? We’ll make resizing easy with a free Ultrahuman Ring AIR replacement, redeemable once during your UltrahumanX coverage period.
And, again, as with RingConn, although baseline data collection and reporting are lifetime-included, it also sells a suite of additional-function software plug-ins it calls PowerPlugs.
One final factor to consider, which I continue to find both surprising and baffling, is that none of the three manufacturers I’ve mentioned here seems to support having more than one ring actively associated with an account (and therefore cloud-logging and archiving data) at the same time. To press a second ring into service, you must first manually delete the first one from your account. The lack of multi-ring support is a frequent cause of complaints on Reddit and elsewhere, from folks who want to accessorize with multiple smart rings just as they do with conventional rings, varying color and style to match outfits and occasions. And the fiscal benefit to the manufacturers of such support is intuitively obvious, yes?
Looking back, having just crossed through 3,000 words, I’m sure glad I decided to split what was originally envisioned as a single write-up into a multi-post series. I’ll try to get the RingConn and Ultrahuman pieces published ahead of that October 21 deadline, for U.S. readers who might want to take the purchase plunge before inventory disappears. And until then, I welcome your thoughts in the comments on what I’ve written thus far!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch
- The 2025 CES: Safety, Longevity and Interoperability Remain a Mess
- If you made it through the schtick, Google’s latest products were pretty fantastic
- Wireless charging: The state of disunion
The post The Smart Ring: Passing fad, or the next big health-monitoring thing? appeared first on EDN.
A design guide for respiratory belt transducers

Curious about how respiratory belt transducers work—or how to design one yourself? This quick guide walks you through the essentials, from sensing principles to circuit basics. Whether you are a hobbyist, student, or engineer exploring wearable health technology, you will find practical insights to kickstart your own design.
Belly breathing, also known as diaphragmatic or abdominal breathing, involves deep inhalation that expands the stomach and allows the lungs to fully inflate. This technique engages the diaphragm—a dome-shaped muscle at the base of the lungs—which contracts downward during inhalation to create space for lung expansion and relaxes upward during exhalation to push air out.
In contrast, chest breathing (also called thoracic or shallow breathing) relies on upper chest muscles and produces shorter, less efficient breaths, limiting oxygen intake and often contributing to stress and tension. Belly breathing has been shown to lower heart rate and blood pressure, promote relaxation, and improve overall respiratory efficiency.
What if you could measure your breathing motion, capture it in real time, and receive meaningful feedback? A respiratory belt transducer offers a simple and effective solution. It detects changes in chest or abdominal diameter during breathing and converts that movement into a voltage signal, which can be recorded and analyzed to assess breathing patterns, rate, and depth.
First off, note that while piezoelectric, inductive, capacitive, and strain gauge sensors are commonly used in respiratory monitoring, this post highlights more accessible alternatives, namely conductive rubber cords and stretch sensors. These materials offer a low-cost, flexible solution for detecting abdominal or chest expansion, making them ideal for DIY builds, classroom experiments, and basic biofeedback systems.
Figure 1 A generic 2-mm diameter conductive rubber cord stretch sensor kit that makes breathing belt assembly easier. Source: Author
As observed, the standard 2-mm conductive rubber cord commonly available in the hobby electronics market exhibits a resistance of approximately 140 to 160 ohms per centimeter. Its stretch-dependent resistance makes it suitable for constructing a respiratory belt that generates a voltage in response to changes in thoracic or abdominal circumference during breathing.
Next, fabricate the transducer by securely bonding the flexible sensing element—the conductive rubber cord—to the inner surface of a suitably sized fabric belt. It should then be placed around the body at the level of maximum respiratory expansion.
A quick hint on design math: in its relaxed state, the conductive rubber cord (carbon-black impregnated) exhibits a resistance of approximately 140 ohms per centimeter. When stretched, the conductive particles disperse, increasing the resistance proportionally.
Once the force is removed, the rubber gradually returns to its original length, but not instantly. Full recovery may take a minute or two, depending on the material and conditions. You can typically stretch the cord to about 50–70% beyond its original length, but it must stay within that range to avoid damage. For example, a 15-cm piece should not be stretched beyond 25–26 cm.
Keep in mind, this conductive rubber cord stretch sensor does not behave in a perfectly linear way. Its resistance can change from one batch to another, so it’s best used to sense stretching motion in a general way, not for exact measurements.
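As a quick illustration of the numbers above, here is a minimal sketch. The ~140 Ω/cm figure and the 50–70% stretch limit come from this post; treat them as rough working assumptions rather than datasheet specs:

```python
# Rough working model of a conductive rubber cord stretch sensor.
# Assumptions (from this post, not a datasheet): ~140 ohms/cm relaxed,
# and a maximum safe elongation of roughly 70% of the relaxed length.

RELAXED_OHMS_PER_CM = 140.0   # assumed relaxed resistance per centimeter
MAX_STRETCH = 0.70            # stay within ~50-70% elongation to avoid damage

def relaxed_resistance(length_cm):
    """Approximate resistance of an unstretched cord of the given length."""
    return RELAXED_OHMS_PER_CM * length_cm

def safe_stretch_limit(length_cm):
    """Longest length the cord should ever be stretched to."""
    return length_cm * (1.0 + MAX_STRETCH)

if __name__ == "__main__":
    L = 15.0  # cm, the example length used above
    print(f"Relaxed resistance: {relaxed_resistance(L):.0f} ohms")
    print(f"Do not stretch a {L:.0f} cm piece beyond {safe_stretch_limit(L):.1f} cm")
```

For the 15-cm example, this lands at the post’s ~25–26 cm limit; remember that real cords vary batch to batch, so such numbers are only a starting point.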
To ensure accurate signal interpretation, custom electronic circuitry with a predictable response to changes in cord length is essential; otherwise, the data will be unreliable. The output connector on the adapter electronics should provide a voltage directly proportional to the extent of stretch in the sensing element.
Frankly, this post doesn’t delve into the mechanical construction of the respiratory belt transducer. Conductive rubber cords are relatively easy to use in a circuit, but they can be a bit tricky to attach to things, both mechanically and electrically.
The following diagram illustrates the proposed front-end electronics for the resistive stretch sensor (definitely not the final look). Optimized through voltage scaling and linearization, the setup yields an analog output suitable for most microcontroller ADCs.
Figure 2 The proposed sensor front-end circuitry reveals a simple analog approach. Source: Author
So, now you have the blueprint for a respiratory belt transducer, commonly known as a breathing belt. It incorporates a resistive stretch sensor to detect changes in chest or abdominal expansion during breathing. As the belt stretches, the system produces an analog output voltage that varies within a defined range. This voltage is approximately proportional to the amount of stretch, providing a continuous signal that mirrors the breathing pattern.
Quick detour: A ratiometric output refers to a sensor output voltage that varies in proportion to its supply voltage. In other words, the output signal scales with the supply itself, so any change in supply voltage results in a corresponding change in output. This behavior is common in unamplified sensors, where the output is typically expressed as a percentage of the supply voltage.
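To make the ratiometric idea concrete, here is a small sketch of a divider-plus-ADC reading. The 12-bit ADC and the resistor values are hypothetical, chosen only for illustration:

```python
# Ratiometric measurement sketch: when the sensor divider and the ADC
# reference share the same supply, the ADC code depends only on the
# resistance ratio, not on the absolute supply voltage.

ADC_BITS = 12
FULL_SCALE = (1 << ADC_BITS) - 1  # 4095 counts

def divider_code(r_fixed, r_sensor, vsup=3.3, vref=None):
    """ADC code for a divider (r_sensor on the bottom) powered from vsup,
    converted against vref; vref defaults to vsup (ratiometric)."""
    if vref is None:
        vref = vsup  # ratiometric: the reference tracks the supply
    v_out = vsup * r_sensor / (r_fixed + r_sensor)
    return round(FULL_SCALE * v_out / vref)

# Supply sags from 3.3 V to 3.0 V: the ratiometric code is unchanged,
# because the divider output and the reference scale together.
print(divider_code(2200, 2100, vsup=3.3))
print(divider_code(2200, 2100, vsup=3.0))
# With a fixed external 3.3 V reference, the same sag shifts the code.
print(divider_code(2200, 2100, vsup=3.0, vref=3.3))
```

This is why unamplified resistive sensors are usually read ratiometrically: supply drift cancels out of the measurement.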
Before wrapping up, I just came across another resistance-change strain sensor worth mentioning: GummiStra from Yamaha. It’s a rubber-like, stretchable sensor capable of detecting a wide range of small to large strains (up to twice its original length), both statically and dynamically. You can explore its capabilities in detail through Yamaha’s technology page.
Figure 3 GummiStra unlocks new use cases for resistive stretch sensing across wearables, robotics, and structural health monitoring. Source: Yamaha
We will leave it there for the moment. Got your own twist on respiratory belt transducer design? Share your ideas or questions in the comments.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Key design considerations, future of IoT sensors
- High-Performance Design for Ultrasound Sensors
- Improving Sensor to ADC Analog Interface Design
- Transducer Options for Safe, Precise Current Sensing
- Vacuum transducer features cost-effective OEM integration
A temperature-compensated, calibration-free anti-log amplifier

The basic anti-log amplifier looks like the familiar circuit of Figure 1.
Figure 1 The typical anti-log circuit has uncertainties related to the reverse current, Is, and is sensitive to temperature.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The approximate equation for V0 given in Figure 1 comes from the Ebers-Moll model. A more advanced model employed by many modern SPICE simulators, such as LTspice, is the Gummel-Poon model, which I won’t discuss here. It suffices for discussions in this Design Idea (DI) to work with Ebers-Moll and to let simulations benefit from the Gummel-Poon model.
The simple Figure 1 circuit is sensitive to both temperature and the value of Is. Unfortunately, the value and limits of Is are not specified in datasheets. Interestingly, SPICE models employ specific parametric values for each transistor, but still say nothing about the limits of these values. Transistors taken from different sections of the same silicon wafer can have different parametric values. The differences between different wafers from the same facility can be greater yet and can be even more noticeable when those from different facilities of the same manufacturer are considered. Factor in the products of the same part number from different manufacturers, and clear, plausible concerns about design repeatability are evident.
Addressing temperature and Is variations
There’s a need for a circuit that addresses these two banes of consistent performance. Fortunately, the circuit of Figure 2 is a known solution to the problem [1].
Figure 2 This circuit addresses variations in both temperature and Is. Key to its successful operation is that Q1a and Q1b constitute a matched pair, taken from adjacent locations on the same silicon wafer. Operating with the same VCEs is also beneficial for matching.
It works as follows. Given that Q1a and Q1b are taken from adjacent locations on the same silicon wafer, their characteristics (and specifically Is) are approximately identical (again, Is isn’t spec’d). And so, we can write that:
It’s also clear that:
Additionally,
So:
Therefore:
Substituting Ic expressions for the two VBEs,
And here’s some of the circuit’s “magic”: whatever their value, the matched Is’s cancel! From the properties of logarithms,
Again, from the properties of logarithms:
Exponentiating, substituting for the Ic’s, and solving for V0:
Note that Vi must be negative for proper operation.
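The Is cancellation at the heart of the derivation can be checked numerically. This sketch uses the plain Ebers-Moll VBE relation with placeholder currents and temperature, not the specific Figure 2 component values; the point is only that the difference of two matched-pair VBEs is independent of the shared, unspecified Is:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # electron charge, C

def vbe(ic, i_s, temp_k=300.0):
    """Ebers-Moll base-emitter voltage for collector current ic,
    saturation current i_s, at absolute temperature temp_k."""
    vt = K_B * temp_k / Q_E  # thermal voltage kT/q, ~25.85 mV at 300 K
    return vt * math.log(ic / i_s)

# Matched pair: the difference VBE1 - VBE2 depends only on the ratio
# of the two collector currents; the shared Is cancels out.
for i_s in (1e-15, 1e-13):  # two wildly different saturation currents
    delta = vbe(1e-3, i_s) - vbe(10e-6, i_s)
    print(f"Is = {i_s:.0e}: dVBE = {delta * 1000:.3f} mV")  # same either way
```

Both runs print the identical VT·ln(100) difference, which is exactly why the matched pair makes the circuit insensitive to the unspecified Is.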
Improving temperature compensation
Let’s now turn our attention to using a thermistor to deal with temperature compensation. Those I’m used to dealing with are negative temperature coefficient (NTC) devices. But they’ll do a poor job of canceling the “T” in the denominator of Equation (1). Was there an error in Reference [1]?
I exchanged the positions of R3 and the (NTC) thermistor in the circuit of Figure 2 and added a few resistors in various series and parallel combinations. Trying some resistor values, this met with some success. But the results were far better with the circuit as shown when a positive temperature coefficient (PTC) was used.
I settled on the readily available and inexpensive Vishay TFPT1206L1002FM. These are almost perfectly linear devices, especially in comparison to the highly non-linear NTCs. Figure 3 shows the differences between two such devices with resistances of 10 kΩ at 25°C. It makes sense that a properly situated nearly linear device would do a better job of canceling the linear temperature variation.
Figure 3 A comparison of a highly non-linear NTC and a nearly linear PTC.
To see if it would improve the overall temperature compensation in the Figure 2 circuit, I considered adding a fixed resistor in series with the TFPT1206L1002FM and another in parallel with that series combination.
Thinking intuitively that this three-component combination might work better in the feedback path of an inverting op amp whose input was another fixed resistor, I considered both the original non-inverting and this new inverting configurations. The question became how to find the fixed resistor values.
The argument of the exponent in Equation (1) (exclusive of Vi) provides the transfer function H(T, <resistors, PTC>), which would be ideally invariant with temperature T (with Th1 suitably modified to accommodate the series and parallel resistors).
For any given set of resistor values, the configurations apply some approximate, average attenuation α to the input voltage Vi. We need to find the values of the resistors and of α such that for each temperature Tk over a selected temperature range (I chose to work with the integer temperatures from -40°C to +85°C inclusive and used the PTC’s associated values), the following expression is minimized:
Excel’s Solver was the perfect tool for this job. (Drop me a note in this DI’s comments section if you’re interested in the details.)
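For readers who would rather script the search than run Excel’s Solver, here is a minimal stand-in. The linear PTC model (10 kΩ at 25°C, +3500 ppm/°C) and the candidate resistor grids are assumptions for illustration, not the TFPT1206L1002FM’s actual resistance table or the values found for Figure 4; the approach, picking the series/parallel resistors that make the network most nearly proportional to absolute temperature, is the same job Solver performs:

```python
# Coarse grid search standing in for Excel's Solver: find a series
# resistor (Rs) and parallel resistor (Rp) that make the network
# Rp || (Rs + Rptc(T)) as nearly proportional to absolute temperature
# as possible, so it can cancel the T in Equation (1)'s denominator.

def r_ptc(t_c):
    """Assumed linear PTC model: 10 kOhm at 25 degC, +3500 ppm/degC."""
    return 10_000.0 * (1.0 + 3500e-6 * (t_c - 25.0))

def parallel(a, b):
    return a * b / (a + b)

def worst_error(rs, rp, temps):
    """Worst-case fractional deviation of network(T)/T_abs from its mean."""
    ratios = [parallel(rs + r_ptc(t), rp) / (t + 273.15) for t in temps]
    mean = sum(ratios) / len(ratios)
    return max(abs(r / mean - 1.0) for r in ratios)

TEMPS = range(-40, 86)  # the article's -40 degC to +85 degC span
best = min(
    ((worst_error(rs, rp, TEMPS), rs, rp)
     for rs in range(0, 2001, 25)
     for rp in (10e3, 30e3, 100e3, 300e3, 1e6)),
    key=lambda item: item[0],
)
print(f"worst error {best[0]:.3%} at Rs = {best[1]} ohm, Rp = {best[2]:.0f} ohm")
```

With a perfectly linear PTC model the search drives the proportionality error well under 1%; a real device’s tabulated curve (and the op-amp configuration) would shift the winning values, which is where Solver earns its keep.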
The winning result
The configurations were found to work equally well (with different component values). I chose the inverter because it allows Vi to be a positive voltage. Figure 4 shows the winning result. The average value α was determined to be 1.1996.
Figure 4 The simulated circuit with R2a, R2b, and R3 chosen with the help of Excel’s Solver. A specific matched pair of transistors has been selected, along with values for resistors R1 and Rref, and a voltage source Vref.
For Figure 4, Equation (1) now becomes approximately:
The circuit in Figure 4 was simulated with 10° temperature steps from -40°C to +80°C and values for Vi of 100 µV, 1 mV, 10 mV, 100 mV, 1 V, and 6 V. These V0 values were divided by those given by Equation (2), which are the expected results for this circuit.
Over the industrial range of operating temperatures and more than four orders of magnitude of input voltages, Figure 5 shows a worst-case error of -4.5% / +1.0%.
Figure 5 Over the industrial range of operating temperatures and over 4.5 orders of magnitude of input voltages from 100 µV to 6 V, the Figure 4 circuit shows a worst-case error of better than -5.0% / + 1.0%. V0 ranges from 2.5 mV to 3 V.
Bonus
With a minor addition, this circuit can also support a current source output. Simply split Figure 4’s R1 into two resistors in series and add the circuit of Figure 6.
Figure 6 Split R1 of Figure 4 into R1a and R1b; also add U4, Rsense, and a 2N5089 transistor to produce a current source output.
Caveats
With all of this, the simulation does not account for variations between the Is’s of a matched pair’s transistors; I’m unaware of a source for any such information. I’ve not specified op amps for this circuit, but they will require positive and negative supplies, should be able to swing at least 1 V negative with respect to ground, and should have a common-mode input range that includes ground. Bias currents should not exceed 10 nA, and sub-1-mV offset voltages are recommended.
Temperature compensation for anti-log amp
Excel’s Solver has been used to design a temperature-compensation network for an anti-log amplifier around a nearly linear PTC thermistor. The circuit exhibits good temperature compensation over the industrial range. It operates within a signal range of more than three orders of magnitude. Voltage and current outputs are available.
References
- Jain, M. K. (n.d.). Antilog amplifiers. https://udrc.lkouniv.ac.in/Content/DepartmentContent/SM_6aac9272-bddd-4108-96ba-00a485a00155_57.pdf
Related Content
- Gain control
- Why modulate a power amplifier?—and how to do it
- Power amplifiers that oscillate—deliberately. Part 1: A simple start.
- Log and limiting amps tame rowdy communications signals
- Mutant op amp becomes instrumentation amp
Positive analog feedback linearizes 4 to 20 mA PRTD transmitter

I recently published a simple design for a platinum resistance temperature detector (PRTD) 4 to 20 mA transmitter circuit, illustrated in Figure 1.
Figure 1 The PRTD 4 to 20 mA loop transmitter with constant-current PRTD excitation that relies on 2nd-order software nonlinearity correction math, T°C = (−u + √(u² − 4wx))/(2w).
Wow the engineering world with your unique design: Design Ideas Submission Guide
The simplicity of Figure 1’s circuitry is somewhat compromised, however, by its need for PRTD nonlinearity correction in software:
where u and w are constants and x = R_PRTD@0°C − R_PRTD@T°C:
T°C = (−u + √(u² − 4wx))/(2w)
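For reference, that quadratic is the inverse of the Callendar–Van Dusen polynomial. Here is a sketch using the standard IEC 60751 PT100 coefficients (assumed here; substitute your sensor’s actual u, w, and R0):

```python
import math

# Inverse Callendar-Van Dusen for a PT100, standard IEC 60751
# coefficients: R(T) = R0 * (1 + A*T + B*T^2), so
#   T = (-A + sqrt(A^2 - 4*B*(1 - R/R0))) / (2*B)
# Strictly valid for T >= 0 degC; below zero the standard adds a
# third (C) term, so the quadratic is only an approximation there.
R0 = 100.0       # ohms at 0 degC
A = 3.9083e-3    # per degC
B = -5.775e-7    # per degC^2

def prtd_temperature(r_ohms):
    """Temperature (degC) from PRTD resistance via the quadratic inverse."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohms / R0))) / (2.0 * B)

print(f"{prtd_temperature(100.0):.2f} degC")     # ~0 degC
print(f"{prtd_temperature(138.5055):.2f} degC")  # ~100 degC
```

Even this short routine pulls in a square root and floating-point division, which is exactly the overhead the next paragraph is worried about on small embedded targets.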
Unfortunately, implementing such quadratic floating-point arithmetic in a small system might be inconveniently costly in code complexity, program memory requirements, and processing time.
But fortunately, there’s a cool, clever, comparably accurate, code-ware-lite, and still (reasonably) uncomplicated alternative (analog) solution. It’s explained in this article “Design Note 45: Signal Conditioning for Platinum Temperature Transducers,” by (whom else?) famed designer Jim Williams.
Figure 2, shamelessly copied from Williams’ article, showcases his analog solution to PRTD nonlinearity.
Figure 2 A platinum RTD bridge where feedback to the bridge from A3 linearizes the circuit. Source: Jim Williams
Williams explains: The nonlinearity could cause several degrees of error over the circuit’s 0°C to 400°C operating range. The bridge’s output is fed to instrumentation amplifier A3, which provides differential gain while simultaneously supplying nonlinearity correction. The correction is implemented by feeding a portion of A3’s output back to A1’s input via the 10k to 250k divider. This causes the current supplied to Rp to slightly shift with its operating point, compensating sensor nonlinearity to within ±0.05°C.
Figure 3 shows Williams’ basic idea melded onto Figure 1’s current transmitter concept.
Figure 3 A PRTD transmitter based on the classic LM10 op-amp plus a 200 mV precision reference combo.
R5 provides PRTD-linearizing positive feedback to sensor excitation over the temperature range of -130 °C to +380 °C.
Here, linearity correction is routed through R5 to the LM10 internal voltage reference, where it is inverted to become positive feedback. The resulting “slight shift in operating point” (about 4% over the full temperature range) duplicates Williams’ basic idea to achieve the measurement linearity plotted in Figure 4.
Figure 4 Positive feedback reduces linearity error to < ±0.05°C over −127°C to +380°C. The x-axis = Io (mA), left y-axis = PRTD temperature, right y-axis = linearity error. T°C = 31.7(Io − 8 mA).
Of course, consistently achieving this ppm level of accuracy and linearity probably needs an iterative calibration process like the one Williams describes. Figure 5 shows the modified circuit from Figure 3, which includes three additional trims to enable post-assembly tweaking using his procedure.
Figure 5 Linearized temperature transmitter modified for post-assembly tweaking using Williams’ procedure.
Substituting selected precision resistors for the PRTD at chosen calibration points is vital to making the round-robin process feasible. Using actual variable temperatures would take impossibly long! Unfortunately, super-precise decade boxes like the one Williams describes are also super-scarce commodities. So, three suitable standard-value resistors, along with the corresponding simulated temperatures and 4-20 mA loop currents, are suggested in Figure 5. They are:
- 51.7 Ω = −121°C = 4.183 mA
- 100 Ω = 0°C = 8.000 mA
- 237 Ω = 371°C = 19.70 mA
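As a cross-check (not part of the design procedure), the three calibration points above are consistent with Figure 4’s nominal transfer function, T°C = 31.7(Io − 8 mA), to within a few microamps:

```python
# Sanity-check the suggested calibration points against the transmitter's
# nominal transfer function, T degC = 31.7 * (Io - 8 mA), rearranged to
# give the expected loop current for each simulated temperature.

def loop_current_ma(t_c):
    """Nominal loop current (mA) for a given PRTD temperature (degC)."""
    return t_c / 31.7 + 8.0

CAL_POINTS = ((51.7, -121.0), (100.0, 0.0), (237.0, 371.0))  # ohms, degC

for r_ohms, t_c in CAL_POINTS:
    print(f"{r_ohms:6.1f} ohm -> {t_c:6.1f} degC -> {loop_current_ma(t_c):.3f} mA")
```

The computed currents land within a few microamps of the 4.183 / 8.000 / 19.70 mA values listed above, a useful cross-check before trimming real hardware.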
Happy tweaking!
Oh yeah: to avoid overheating, Q1 should ideally be in a TO-220 or similar package if Vloop > 15 V.
Related Content
- Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs
- The power of practical positive feedback to perfect PRTDs
- Improved PRTD circuit is product of EDN DI teamwork
- Platinum-RTD-based circuit provides high performance with few components
- DIY RTD for a DMM
EMI fundamentals for spacecraft avionics & satellite applications

OEMs must ensure their avionics are electromagnetically clean and do not pollute other sub-systems with unwelcome radiated, conducted, or coupled emissions. Similarly, integrators must ensure their space electronics are not susceptible to radio-frequency interference (RFI) from external sources, as this could impact performance or even damage hardware.
As a product provider, how do you ensure that your subsystem can be integrated seamlessly and is ready for launch? As an operator, how does EMI affect your mission application and the quality of service you deliver to your customers?
EMI is unwanted electrical noise that interferes with the normal operation of spacecraft and satellite avionics. It is generated when fast-switching signals with rapid changes in voltage and current interact with unintended capacitances and inductances, producing high-frequency noise that radiates, conducts, or couples unintended energy into nearby circuits or systems. No conduction exists without some radiation, and vice versa!
Fast switching signals with rapidly changing currents and voltages energise parasitic inductances and capacitances, causing these to continuously store and release energy at high frequencies. These unintended interactions become stronger as the rate of change increases, generating transients, ringing, overshoot and undershoot, crosstalk, as well as power and signal-integrity problems that impact satellite applications.
Sources of EMI
Modern avionics use switching power supplies, e.g., isolated DC-DCs or point-of-load (POL) regulators, CPUs, FPGAs, clock oscillators, and speedy digital interfaces, all of which switch at high frequencies with increasingly faster edge rates that contain RF harmonics. These functions have become more tightly coupled as OEMs integrate more of these into physically smaller satellites, exacerbating the potential to form and spread EMI.
Furthermore, they typically share power or ground return rails, and a signal or noise in one circuit affects the others through common-impedance coupling via the shared impedance, contributing to power-integrity issues such as ground bounce.
Similarly, satellites use motors, relays, and mechanical switches to deploy and orient solar arrays, point antennae, control reaction wheels and gyroscopes, operate robotics, and enable/disable redundant sub-systems. Rapid changes in current and voltage during their operation generate conducted and radiated EMI that impacts nearby circuits, caused by arcing, brush noise within motors, inductive kickback from coils, and contact bounce from mechanical switches.
EMI can also enter spacecraft from the external space environment, i.e., high-energy radiation from solar flares and cosmic rays can induce noise resulting in discharges and transient spikes. Over time, charged particles from the Earth’s magnetosphere, solar wind, or from geomagnetic storms, such as electrons and ions, accumulate on satellite surfaces, forming large potential differences. When the amassed electric-field strength exceeds the breakdown voltage of materials, ESD-induced EMI generates a fast, high-energy transient pulse that can couple into signal lines, disrupting or damaging space electronics. Conductive coatings and grounding networks are used to equalise surface potentials, as well as plasma contactors to remove built-up charge.
EM impact of a high dI/dt and dV/dtEMI can be generated, coupled, and then conducted through physical wires, traces, connectors, and cables. Conductors separated by a dielectric form a capacitor, even unintentionally, and a fast signal on one trace switching at nanosecond speeds, i.e., a high dV/dt, energizes a changing electric field that can capacitively couple noise onto an adjacent track, e.g., a sensitive analogue signal.
Similarly, any loop of wire or a PCB trace intrinsically contains inductance and a high dI/dt and energizes a changing magnetic field that can inductively couple (induce) noise onto an adjacent trace or circuit.
In both cases, inherent parasitic capacitance or inductance provides a lower impedance to current than the intended path. Since current must flow in a loop to its source, loop impedance is the key!
The faster the rate of change, the stronger the electromagnetic coupling. A changing electric field generates a corresponding magnetic field, and the structure will radiate like an antenna if its loop area is large, if the signal contains high-frequency harmonics, or if the forward and return paths are not tightly coupled. The radiated EM wave then couples into nearby conductive structures such as cables, traces, metal enclosures, and sensors, which receive the unwanted RFI.
Any conductor with a time-varying current creates an EM field, and the signal wire and its return path form a loop which can become an antenna when carrying fast-switching currents. Similarly, a PCB trace can start radiating, even if the fundamental signal frequency is low, but contains fast edges, if its forward path is not referenced to an adjacent solid ground plane or if the track length approaches 1/10th or more of the signal wavelength, when the EM fields no longer cancel, forming standing waves that radiate from the track. As a simple example, a 10-cm trace resonates around 350 MHz, depending on the PCB dielectric, and an edge rate of 1 ns contains harmonics up to this frequency that will radiate.
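The rules of thumb in that example, an edge’s “knee” bandwidth of roughly 0.35/t_rise and a trace resonance set by the board’s propagation velocity, are easy to sanity-check. The dielectric constants below are assumed round numbers for FR4, not measured values:

```python
import math

C0 = 3.0e8  # free-space speed of light, m/s

def knee_frequency_hz(t_rise_s):
    """Approximate bandwidth of a digital edge: f_knee ~ 0.35 / t_rise."""
    return 0.35 / t_rise_s

def tenth_wave_frequency_hz(length_m, eps_eff=3.0):
    """Frequency at which a trace reaches lambda/10, for an assumed
    effective dielectric constant (microstrip over FR4 is roughly 3)."""
    v = C0 / math.sqrt(eps_eff)
    return v / (10.0 * length_m)

def quarter_wave_resonance_hz(length_m, eps_r=4.4):
    """Quarter-wave resonance of a trace, assuming bulk FR4 eps_r ~4.4."""
    v = C0 / math.sqrt(eps_r)
    return v / (4.0 * length_m)

print(f"1 ns edge knee:          {knee_frequency_hz(1e-9) / 1e6:.0f} MHz")
print(f"10 cm trace, lambda/10:  {tenth_wave_frequency_hz(0.10) / 1e6:.0f} MHz")
print(f"10 cm trace, quarter-wave: {quarter_wave_resonance_hz(0.10) / 1e6:.0f} MHz")
```

A 1-ns edge’s ~350-MHz knee and the 10-cm trace’s resonance in the mid-300-MHz region line up with the figures quoted above; the lambda/10 onset arrives even earlier, which is why short, ground-referenced routing matters.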
EMI issues in modern modulation techniques
For telecommunications applications, EMI can raise the noise floor, masking low-power uplink carriers (Figure 1), impacting receiver sensitivity and dynamic range, lowering SNR, and reducing channel capacity. Unintended in-band spurs can distort modulation constellations, leading to bit/symbol errors and degrading error vector magnitude (EVM). Energy from unwanted spurs can completely mask narrowband carriers or leak into adjacent channels, impacting performance and violating regulated RFI emission levels.
Figure 1 Q-PSK and 16-PSK constellations before (left) and after (right) EMI.
Telecommunication satellites provide a continuous service with tight regulatory limits, and even small EMI emissions can be problematic. Payloads typically process many channels and frequency bands, receiving low-level uplinks, so any unwanted noise impacts the overall link budget and operational integrity.
RFI coupling into the low noise amplifiers (LNAs), frequency converters, and filters can generate harmonic distortion, intermodulation products, and crosstalk between channels.
EMI issues in space applications
Earth-observation applications rely on high-precision optical, LiDAR, radar, or hyperspectral sensors, and unwanted EMI can introduce noise or distortion into the receive electronics, degrading resolution, accuracy, and calibration, and leading to misinterpretation of the collected data (Figure 2).
Figure 2 Earth-observation imagery before (left) and after (right) EMI. Source: Spacechips
Signals intelligence (SIGINT) satellites rely on the accurate detection, reception, and analysis of weak, distant, and often low-power carriers, and unwanted EMI can severely degrade receiver performance, limit intelligence value, or even render it ineffective (Figure 3). RFI can reduce sensitivity and dynamic range, or overload (jam) RF front-ends, causing non-linear distortion. Internally generated noise can mimic the characteristics of actual intercepted signals, resulting in false-positive classifications or geolocation, misleading analysts or automated processing systems.
EMI from the on-board electronics or switching power supplies can raise the receiver’s noise floor, making it harder or impossible to detect weak signals of interest.
Figure 3 SIGINT spectra before (left) and after (right) EMI. Source: Spacechips
For in-space servicing, assembly, and manufacturing (ISAM) applications, unwanted EMI from motors, actuators, and robotics can impact LiDAR, radar, cameras, and proximity sensors, resulting in loss of situational awareness, errors in docking and alignment, and reduced control accuracy.
For space exploration, EMI can affect sensitive instruments, corrupting measurements, resulting in the misinterpretation of scientific data. For example, magnetometers are used to detect weak, planetary magnetic fields and their variation, and artificial emissions from the avionics or spacecraft motors can mask or distort real science. As shown in Figure 4, magnetometers are often mounted on long booms away from the satellite to reduce the impact of EMI from the on-board electronics.
Figure 4 NASA’s MESSENGER Spacecraft with Magnetometer Boom. Source: NASA
For all applications, unintended and uncontrolled EMI on power, ground, and signal cables/traces affects on-board circuits and overall system performance. If not managed, RFI can pose a greater threat to avionics than the radiation environment of space, damaging sub-systems and impacting mission reliability and satellite lifetime.
Regulatory agencies
For decades, many OEMs have built avionics with little regard for EMI, only to discover that emissions are too high or their sub-systems are susceptible to external RFI. Considerable time is then spent identifying the source of the interference, retrofitting fixes to patch the problem, and passing the mission’s EMC requirements. Often, the root cause is never found or fully understood, and this ‘sticking-plaster’ approach increases product cost, both non-recurring and recurring, as well as delaying time-to-market.
What should you do if you discover EMI with your latest hardware? For all applications, unwanted noise could result in RFI emissions that violate spectral regulations and interfere with other satellites or terrestrial systems. The UN’s ITU defines how the radio spectrum is allocated between different services and sets maximum allowable levels for out-of-band emissions, spurs, effective isotropic radiated power (EIRP), and the received power-flux density on Earth.
National and regional regulators, such as the FCC (US), Ofcom (UK), and CEPT and ETSI (Europe), enforce these limits before granting operating licenses. Agencies provide EMC standards to guide OEMs developing avionics hardware, e.g., MIL-STD-461, AIAA S-121A, and ECSS-E-ST-20C.
Characterizing EMI
The first step in determining the origin of unwanted EMI is to understand whether it is being radiated, conducted, coupled, or a combination of these. Engineering-model (EM) hardware is often tested as a proof-of-concept PCB in a lab without a case, using unshielded cables and connectors, making system validation more susceptible to external pick-up and common-mode noise.
This interference needs to be characterized first (probe ground to establish the measurement noise floor) and managed, using ferrite-bead clamps for example, to avoid false positives. Figure 5 and Figure 6 show EM testing with significant common-mode noise picked up by the setup, appearing on all the power rails and the ground plane. Both the supply and return cables are around eighteen inches long, mostly untwisted and unprotected from EMI:
Figure 5 Typical EM testing in a lab using exposed hardware. Source: Spacechips
Figure 6 Common and differential-mode scope measurements of 1V8 power rail. Source: Spacechips
Testing in an anechoic chamber isolates the device under test (DUT) from external interference as well as internal reflections, simulating open-space conditions, allowing you to measure the actual emissions from your avionics to understand their origin and mitigate their impact.
Engineering qualification model (EQM) and flight model (FM) hardware are typically verified in a sealed metal box with gaskets, shielded cables, and connectors, providing a protective Faraday cage for the DUT. This makes the system less susceptible to external EMI and minimizes RFI emissions from the avionics.
Reducing EMI
To reduce EMI in existing avionics, filters, chokes, and ferrite beads (lossy, as opposed to energy-storing, inductors) are added to lower conducted noise on power, signal, and data cables. The most obvious way to decrease EM coupling is to increase the physical separation between conductors, but this may not always be possible. Twisting a pair equalizes the field coupling between the two wires, converting pickup into common-mode interference that can subsequently be removed. Similarly, differential signalling cancels EM fields.
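The common-mode/differential-mode split that makes twisted pairs and differential signalling work can be illustrated numerically. A minimal sketch, using one common convention (Vcm = (V1+V2)/2, Vdm = (V1−V2)/2); the function name and values are illustrative:

```python
def decompose(v1, v2):
    """Split the voltages on two conductors into common-mode and
    differential-mode components (Vcm carries noise shared by both
    wires; Vdm carries the intended signal)."""
    vcm = (v1 + v2) / 2.0
    vdm = (v1 - v2) / 2.0
    return vcm, vdm

# A 1.0 V differential signal (+/-0.5 V on each wire) with a 0.3 V
# noise spike coupled equally onto both conductors of a twisted pair:
vcm, vdm = decompose(0.5 + 0.3, -0.5 + 0.3)
# The pickup lands entirely in the common mode; the wanted signal
# survives untouched in the differential mode.
```

Because the coupled noise appears only in the common-mode term, a receiver that responds to the differential component alone, or a ferrite that chokes only common-mode current, removes it without disturbing the signal.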
Clamp-on ferrites choke high-frequency common-mode noise on conductors, allowing low-speed signals to pass while dissipating RF interference as heat. If the same EMI could have generated radiated emissions from long cables, then the ferrites would indirectly reduce this antenna effect. Chip-bead ferrites can suppress both differential and common-mode noise, depending on their placement.
Shielding reduces radiated EMI by creating a physical barrier that reflects or absorbs EM fields before they can escape, as well as preventing external noise from entering avionics. Gaskets maintain an electrically conductive seal, preventing external EMI radiation from entering through openings or internal RFI from escaping through gaps or seams in a metal enclosure. Gaskets ensure a continuous Faraday cage, maintaining a low-impedance electrical path to ground, reducing potential differences that could allow common-mode currents and radiation. The gasket redirects EM fields along the enclosure or to ground, instead of allowing them to radiate into or out of the avionics.
I’ve seen absorbing foam added to many avionics products to soak up unwanted radiated emissions, both internal reflections to prevent these bouncing around within enclosures, coupling and inducing further EMI, as well as reducing the strength of RF energy before it escapes through gaps or seams or conducts onto cables and traces. The foam contains carbon or ferrite particles that create resistive losses when RF fields interact with them. An electronic case can act as a cavity that resonates at a certain frequency, and the use of foam can reduce such standing waves.
Tips for proper EMC design
While the addition of EMI filters, RF-absorbing foam, and ferrites is very helpful, they should be the last line of defense, not the first solution. If you design it right, you won’t need to fix it later! Sometimes there are exceptions to the rule: I once used a high-speed semiconductor in a large ceramic package whose intrinsic parasitic inductance generated an EMI spur. Initially, this was an issue for both the OEM and the telecommunications operator, who cleverly positioned the problematic channel over a low-traffic region of the Indian Ocean.
Likewise, when observing and measuring signals, you must ensure your test equipment does not pick up unwanted interference, which can confuse decision-making and delay time-to-market by incorrectly diagnosing a working sub-system as a faulty, noisy one. A scope probe and its ground lead form a loop, a closed-circuit path that can pick up signals or interference through electromagnetic induction. Faraday’s Law states, “a changing magnetic field through a closed loop induces an EMF in the loop.” The larger the loop area or the faster the rate of change of the magnetic field, the greater the induced voltage.
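Faraday's law translates directly into numbers for the probe-loop problem. A rough sketch of the peak voltage induced in a ground-lead loop, assuming a uniform sinusoidal field through the loop; the field strength, frequency, and loop areas are illustrative values, not measured ones:

```python
import math

def induced_emf_peak(loop_area_m2, b_peak_tesla, f_hz):
    """Peak EMF induced in a closed loop by B(t) = B_peak*sin(2*pi*f*t):
    V_peak = 2*pi*f * B_peak * A (uniform field, loop normal to B)."""
    return 2.0 * math.pi * f_hz * b_peak_tesla * loop_area_m2

# Long ground lead: ~10 cm^2 loop in a 1 uT field at 10 MHz
v_long = induced_emf_peak(10e-4, 1e-6, 10e6)   # ~63 mV of phantom "signal"

# Ground spring instead of the lead: ~1 cm^2 loop, same field
v_short = induced_emf_peak(1e-4, 1e-6, 10e6)   # 10x lower, ~6.3 mV
```

Tens of millivolts of induced voltage can easily swamp the ripple you are trying to measure on a low-voltage rail, which is why the short ground spring matters so much.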
Proper EMC design and mitigation are essential to ensure data integrity, mission reliability, and satellite longevity. As avionics sub-systems become faster and more integrated, a more proactive approach is required to deliver right-first-time, EMC-compliant hardware and satellite applications:
- EMC compliance must be a key part of early product design.
- Understand the sources of emissions and how to control them: 90% of all EMI originates from unintentional signal flow, e.g., crosstalk or return currents flowing where they were never intended to, such as too close to the edge of a PCB. All unwanted EMI ultimately originates from intentional signals!
- Simulate before building hardware: current radiates, not voltage, so check its spectrum before building hardware. The radiated electric field, in V/m, from a current loop in free space can be simplified as
E ≈ k · I · A
where I is the current amplitude, A the loop area, and k a constant for a given frequency and observation point. The corresponding magnetic near field in A/m can be approximated as
H ≈ I · S / (2πD²)
where S is the loop separation and D the measurement distance.
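For far-field estimates, the constant k above can be made explicit. One widely quoted free-space approximation for a small, electrically short loop (found in Henry Ott's EMC texts) is sketched below; the loop dimensions, current, and distance are illustrative numbers:

```python
def loop_e_field(f_hz, area_m2, i_amps, r_m):
    """Far-field |E| in V/m from a small current loop in free space:
    |E| = 131.6e-16 * f^2 * A * I / r
    (f in Hz, A in m^2, I in A, r in m; valid well beyond lambda/2pi)."""
    return 131.6e-16 * f_hz**2 * area_m2 * i_amps / r_m

# A 10 cm^2 return-current loop carrying 10 mA of 100 MHz harmonic
# current, observed at the usual 3 m test distance:
e = loop_e_field(100e6, 10e-4, 10e-3, 3.0)   # ~440 uV/m
```

At roughly 440 µV/m, this single loop would already exceed typical commercial radiated-emission limits in the low hundreds of µV/m at 3 m, which is why minimizing loop area and harmonic current is the first lever to pull.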
- The most common cause of EMI from products is unintentional common-mode currents on external cables and shields as a result of voltage differences relative to the chassis.
- Manage the layout of your return currents: provide dedicated ground planes, control their spread on these reference planes (they follow the path of least impedance, dominated by inductance at high frequency) to avoid coupling, minimize loop area, and provide adjacent ground layers for signals. The Hyperlynx simulation in Figure 7 predicts current-flow density from a SIGINT SDR:
Figure 7 Siemens’ Hyperlynx Post-Layout Prediction of Return-Current Flow. Source: Spacechips
- Minimize loop area by keeping PCB trace lengths and cables < λ/10 of the highest harmonic frequency within a signal, and not just the fundamental component.
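The λ/10 guideline above is quick to compute; a minimal helper (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in free space, m/s

def max_conductor_length_m(f_highest_hz, fraction=10.0):
    """Longest trace/cable that stays electrically short (< lambda/10)
    at the highest harmonic of concern, not just the fundamental."""
    return C / f_highest_hz / fraction

# A 100 MHz clock with significant energy at its 5th harmonic (500 MHz):
limit = max_conductor_length_m(5 * 100e6)   # ~6 cm in free space
```

On FR-4 the guided wavelength is shorter by the square root of the effective dielectric constant, so the on-PCB limit is roughly half the free-space figure.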
- When probing signals with an oscilloscope, use the smallest ground lead possible to minimize loop area, reducing the amount of induced magnetic flux and hence EMI. A shorter ground connection also has less inductance, which means less distortion and a more accurate representation of the signal under test. Probing in differential mode cancels common-mode noise at the measurement point, and a ferrite-bead clamp around the cable entering the scope reduces the amount of external noise picked up (induced) by the lead. Null-probe ground first to baseline the noise floor of subsequent measurements!
- When testing EM hardware in the lab, exposed circuit boards and/or unshielded power and ground cables pick up interference, which can pollute measurements and confuse decisions when validating the system design.
- Test in an anechoic chamber to isolate the avionics from external interference as well as internal reflections to measure the actual emissions from your hardware to understand their origin and mitigate their impact.
- Design your PCB stack, floorplan, and layout to prevent the generation of EMI: assign routing layers between neighbouring ground planes to contain the spread of return currents and maintain good Z0. Never route across a power or ground-plane split!
There’s so much more to say and if you would like to learn more, Spacechips teaches courses on Right-First-Time PCB Design for Spacecraft Avionics as well as EMI Fundamentals for Spacecraft Avionics and Satellite Applications.
Spacechips’ Avionics-Testing Services help OEMs and satellite integrators solve EMI issues that are preventing them from meeting regulatory targets and delivering hardware on time.
Dr. Rajan Bedi is the CEO and founder of Spacechips, which designs and builds a range of advanced, AI-enabled, re-configurable, L to K-band, ultra high-throughput transponders, SDRs, Edge-based on-board processors and Mass-Memory Units for telecommunication, Earth-Observation, ISAM, SIGINT, navigation, 5G, internet and M2M/IoT satellites. The company also offers Space-Electronics Design-Consultancy, Avionics Testing, Technical-Marketing, Business-Intelligence and Training Services. (www.spacechips.co.uk).
Related Content
- Satellite avionics grounding and design for EMC, part 1
- Power electronics in space: A technical peek into the future
- Time-to-digital conversion for space applications
- The EMC space
The post EMI fundamentals for spacecraft avionics & satellite applications appeared first on EDN.
Tearing apart a multi-battery charger

As regular readers may recall, I’m fond of acquiring gear from the “Warehouse” (now renamed as “Resale”) area of Amazon’s website, particularly when it’s temporary-promotion marked down even lower than the normal discounted-vs-new prices. The acquisitions don’t always pan out, but the success rate is sufficient (as are the discounts) to keep me coming back for more.
Today’s product showcase was a mixed-results outcome, which I’ve decided to tear down to maximize my ROI (assuaging my curiosity in the process). Last October, I picked up EBL’s 8-bay charger with eight included NiMH batteries (four AA and four AAA), $24.99 new, for $17.22 (post-20%-off promo discount) in claimed “mint” condition:
The price tag was the primary temptation; that said, the added inclusion of two USB-A power ports was a nice feature set bonus that I hadn’t encountered with other multi-bay chargers. And Amazon also claimed that this Warehouse-sourced device was the second-generation EBL model that supported per-bay charging flexibility.
Not exactly (or even remotely) as-advertised
When it arrived, however, while the device itself was in solid cosmetic condition, its packaging, accompanied as usual by a 0.75″ (19.1 mm) diameter U.S. penny in the following photos for size comparison purposes, definitely wasn’t “mint”:
and the contents (including the quick start guide, which I’ve scanned for your educational convenience) were also quite jumbled:
(I belatedly realized, by the way, that I’d forgotten one piece of paper, the also-scanned user manual, in the previous box-contents overview photo)
Not to mention the fact that the charger ended up being the first-generation model, not the second-gen successor, thereby requiring that both bays of each two-bay pair be populated (also with the same battery technology—Ni-MH or Ni-Cd—and size/capacity) to successfully kick off the charging process. When I grumbled, Amazon offered $4.49 in partial-refund compensation, which I begrudgingly accepted, rationalizing that the eight included batteries were still fine and the charger seemed to work acceptably for what it truly was. Only later did I realize that the charger was actually extremely finicky, rejecting batteries that other chargers accepted complaint-free:
And like I said before, I’d always been curious to look inside one of these things. So, I decided to pull it out of active service and sacrifice it to the teardown knife instead. Here’s our patient:
Note how both sides’ contact arrangements support both AA and AAA battery sizes:
Onward. Top:
Bottom:
Left and right sides:
And back, also including a label closeup:
Before continuing, here are both ends of the AC cord that powers the charger:
And now it’s time to dive inside. No visible (or even initially invisible) screws to speak of:
So, I resorted to “elbow grease”. The device didn’t give up its internal secrets easily (an understandable reality, given that its target customers are largely-tech-unsavvy consumers, and it has high-voltage AC running around inside it), but it eventually succumbed to my colorful language-augmented efforts:
Mission (finally) accomplished:
Some side (left, then right, at least when the device is upright…remember that right now it’s upside-down) shots of newly exposed circuit glimpses before proceeding:
And now let’s get that PCB outta there. At first glance, I saw only three screws holding it in place:
Uhhhh…nope, not yet:
Oh wait, there’s another one, albeit when removed, still delivering no dissection luck:
A bit more blue-streak phrasing…one more peek at the PCB, this time with readers…and…
That’s five minutes of my life I’m never gonna get back:
Upside: the PCB topside’s now exposed to view, too. Note, first off, the four multicolor LEDs (one per pair of charging bays) running along the left edge:
I was admittedly surprised, albeit not so much in retrospect, at just how “analog” everything was. I’d expect a higher percentage of “digital” circuitry were I to take apart my much more expensive La Crosse Technology BC-9009 AlphaPower charger (I’m not going to, to be clear):
Specifically, among other things, I was initially expecting to see a dedicated USB controller IC, which I regularly find in other USB-inclusive devices…until I realized that these USB-A ports had no data-related functions, only power-associated ones, and not even PD-enhanced. Duh on me:
Flipping the PCB back over once again revealed the unsurprising presence of a hefty ground plane and other thick traces. The upper right quadrant (upper left when not upside-down):
handles AC to DC conversion (along with the transformer and other stuff already seen on the other side); the two dominant ICs there are labeled (left to right):
CRE6536
2126KD
(seemingly an AC-DC power management IC from China-based CRE Semiconductor)
and:
ABS210
(which appears to be a single-phase bridge rectifier diode)
while the upper left area, routing the generated DC to the USB ports on the PCB’s other side (among other things), is landscape-dominated by an even larger SS54 diode.
Further down is more circuitry, including a long, skinny IC PCB-marked as U2 but whose topside markings are illegible (if they even ever existed in the first place):
I’ll close out with some side-view shots. Top:
Right:
Bottom:
And left:
And I’ll wrap up with a teaser photo of another, smaller, but no less finicky battery charger that I’ve also taken apart, but, due to this piece as-is ending up longer-than-expected (what else is new?), I have decided to instead save for another dedicated teardown writeup for another day:
With that, I’ll turn it over to you, dear readers, for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Resurrecting a 6-amp battery charger
- Tricky 12V Battery Charger Circuit
- Simplifying multichemistry-battery chargers
- 12V Battery Charger Circuit using SCR
The post Tearing apart a multi-battery charger appeared first on EDN.
Contactless potentiometers: Unlocking precision with magnetic sensing

In the evolving landscape of precision sensing, contactless potentiometers are quietly redefining what reliability looks like. By replacing mechanical wear points with magnetic sensing, these devices offer a frictionless alternative that is both durable and remarkably accurate.
This post offers a quick look at how contactless potentiometers work, where they are used, and why they are gaining ground.
Detecting position, movement, rotation, or angular acceleration is essential in modern control and measurement systems. Traditionally, this was done using mechanical potentiometers—a resistive strip with a sliding contact known as a wiper. As the wiper moves, it alters the resistance values, allowing the system to determine position.
Although these devices are inexpensive, they suffer from wear and tear due to friction between the strip and the wiper. This limits their reliability and shortens their lifespan, especially in harsh environments.
To address these issues, non-contact alternatives have become increasingly popular. Most rely on magnetic sensors and offer a range of advantages: higher accuracy, greater resistance to shocks, vibrations, moisture and contaminants, wider operating temperature ranges, and minimal maintenance. Most importantly, they last significantly longer, making them ideal for demanding applications where durability and precision are critical.
Where are contactless potentiometers used?
Contactless potentiometers (non-contact position sensors) are found in all sorts of machines and devices where it’s important to know how something is moving—without touching it directly. Because they do not wear out like traditional potentiometers, they are perfect for jobs that need long-lasting, reliable performance.
In factories, they help robots and machines move precisely. In cars, they track things like pedal position and steering angle. You will even find them in wind turbines, helping monitor movement to keep everything running smoothly.
They are also used in airplanes, satellites, and other high-tech systems where accuracy and reliability are absolutely critical. When precision and reliability are non-negotiable, contactless potentiometers outperform their mechanical counterparts.
What makes contactless potentiometers work
At the heart of every contactless potentiometer lies a clever interplay of magnetic fields and sensor technology that enables precise, wear-free position sensing.
Figure 1 The STHE30 series single-turn single-output contactless potentiometer employs Hall-effect technology. Source: P3 America
The contactless potentiometer shown above—like most contemporary designs—employs Hall-effect technology to sense the rotational travel of the knob. This method is favored for its reliability, long lifespan, and immunity to mechanical wear.
However, Hall-effect sensing is just one of several technologies used in contactless potentiometers. Other approaches include magneto-resistive sensing, which offers robust precision and thermal stability. Inductive sensing is known for its ruggedness in harsh environments and suitability for high-speed applications. Capacitive sensing, often chosen for compact form factors, facilitates low-power designs. Finally, optical encoding provides high-resolution feedback by detecting changes in light patterns.
Ultimately, choosing the right sensing technology hinges on factors like required accuracy, environmental conditions, and mechanical limitations.
Displayed below is the SK22B model—a contactless potentiometer that operates using inductive sensing for precise, wear-free position detection.
Figure 2 The SK22B potentiometer integrates precision inductive elements to achieve contactless operation. Source: www.potentiometers.com
Contactless sensing for makers
So, contactless potentiometers—also known as non-contact rotary sensors, angle encoders, or electronic position knobs—offer precise, wear-free angular position sensing.
A quick pick for practical hobbyists is the AS5600, a compact, easy-to-program magnetic rotary position sensor that excels in such applications thanks to its 12-bit resolution, low power draw, and strong immunity to stray magnetic fields.
Also keep in mind that while the AS5600 is favored for its simplicity and reliability, other magnetic position sensors—like the AS5048 or MLX90316—offer robust contactless performance for more advanced or specialized applications.
Another notable option is the MagAlpha MAQ470 automotive angle sensor, engineered to detect the absolute angular position of a permanent magnet—typically a diametrically magnetized cylindrical magnet mounted on a rotating shaft.
Figure 3 Functional blocks of the AS5600 unveil the inner workings. Source: ams OSRAM
And a bit of advice for anyone designing angle measurement systems using contactless potentiometers: success hinges on tailoring the solution to the specific demands of the application. These devices are widely used in areas like industrial automation, robotics, electronic power steering, and motor position sensing, where they monitor the angular position of rotating shafts in either on-axis or off-axis setups.
Key design considerations include shaft arrangement, air gap tolerance, required accuracy, and operating temperature range. During practical implementation, it’s crucial to account for two major sources of error—those stemming from the sensor chip itself and those introduced by the magnetic input—to ensure reliable performance and precise measurements.
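As a concrete example of working with such sensors, the 12-bit count a part like the AS5600 reports maps to degrees with a one-liner. A small sketch (the helper names are mine), including a wrapped-difference helper that avoids the 359°→1° discontinuity when computing angular error:

```python
def counts_to_degrees(raw, bits=12):
    """Convert an n-bit absolute angle count to degrees.
    One LSB = 360/2^n degrees (~0.088 deg at 12 bits)."""
    return (raw % (1 << bits)) * 360.0 / (1 << bits)

def angle_error_deg(a, b):
    """Smallest signed difference a-b in degrees, wrapped to (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

half_turn = counts_to_degrees(2048)   # mid-scale count = 180.0 degrees
err = angle_error_deg(359.0, 1.0)     # -2.0, not 358.0
```

The wrapped-difference helper is exactly what you want when quantifying the chip- and magnet-related error sources mentioned above against a reference encoder.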
A while ago, I shared an outline for weather enthusiasts to build an expandable wind vane using a readily available angle sensor module. This time, I am diving into a complementary idea: crafting a poor man’s optical contactless potentiometer/angle sensor/encoder.
The device itself is quite simple: a perforated disc rotates between infrared LEDs and phototransistors. Whenever a phototransistor is illuminated by its corresponding LED, it becomes conductive. Naturally, you will need access to a 3D printer to fabricate the disc.
Be sure to position the phototransistors and align the holes strategically; this allows you to encode the maximum number of angular positions within minimal space. A quick reference drawing is shown below.
Figure 4 The schematic shows an optical alternative setup. Source: Author
It’s worth pointing out that this setup is particularly effective for implementing a Gray Coding system, as long as the disc is patterned with a single-track Gray Code. Developed by Frank Gray, Gray Code stands out for its elegant approach to binary representation. By ensuring that only a single bit changes between consecutive values, it streamlines logic operations and helps guard against transition errors.
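The familiar reflected binary Gray code is a two-line transform in software (note that the single-track Gray codes used on one-track discs are a related but more involved construction; this sketch shows the reflected code and verifies its single-bit-change property):

```python
def bin_to_gray(n: int) -> int:
    """Reflected binary Gray code: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    """Inverse transform: XOR-fold the shifted value back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Verify the defining property over a 4-bit wheel, including wraparound:
codes = [bin_to_gray(i) for i in range(16)]
diffs = [bin(codes[i] ^ codes[(i + 1) % 16]).count("1") for i in range(16)]
# every entry of diffs is 1: a misread during a transition is off by
# at most one position, never a wild jump
```

This single-bit property is what protects the optical encoder: even if the phototransistors are sampled mid-transition, the decoded angle is wrong by at most one step.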
That’s all for now, leaving plenty of intriguing ideas for you to ponder and inquire further. But the story does not end here—I have some deeper thoughts to share on absolute encoders, incremental encoders, rotary encoders, linear encoders, and more. Perhaps a topic for an upcoming post.
If any of these spark your curiosity, let me know—your questions and comments might just shape what comes next. Until then, stay curious, keep questioning, and do not hesitate to reach out with your thoughts.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- When potentiometers go to pot
- Contactless electric bell on a gradient relay
- Gear Potentiometers – A Quick Introduction
- The Contactless Passive Multifunctional Sensors
- Hall-effect sensors measure fields and detect position
The post Contactless potentiometers: Unlocking precision with magnetic sensing appeared first on EDN.
Simple diff-amp extension creates a square-law characteristic

Back on December 3, 2024, a Design Idea (DI) was published, “Single-supply single-ended inputs to pseudo class A/B differential output amp,” which created some discussion about using the circuit as a full wave rectifier.
DI editor Aalyia has kindly allowed a follow-up discussion about a circuit which could be utilized for this, but is better suited for square-law functions.
The circuit shown in Figure 1 is an LTspice implementation built around a bipolar differential amplifier with Q1 and Q3 serving as the + and – active differential input devices, respectively.
Figure 1 An LTspice implementation built around a bipolar differential amplifier with Q1 and Q3 serving as the + and – active differential input devices, respectively, allowing the circuit to be better suited for square-law functions.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Additional devices Q2 and Q4 are added at the “center point” between Q1 and Q3, and act such that the collector currents of all devices are equal when no differential voltage is present.
This occurs because resistors R7 and R8 create a virtual differential zero-volt “center point” between the + and – differential inputs, and all device Vbe’s are the same, neglecting the small voltage drop across R7 and R8 due to Q2 and Q4 base bias currents.
R7 and R8 set the differential input impedance for the circuit configuration, where R1 and R3 set the signal source differential impedances for the simulations.
The device emitter currents are controlled by the “tail current source” I1 at 4 mA; thus, each device has an emitter current of ~1 mA with zero differential input. Note the -Diff Input signal is created by using a voltage-controlled voltage source with an effective gain of -1 due to the inverted sensing of the +Diff Input voltage (VIN+). This arrangement allows the input signal to be fully differential when LTspice controls the VI+ voltage source during signal sweeps.
Voltage-controlled current source B1 is not part of the circuit; it is included for comparison, configured to produce an ideal square-law characteristic by squaring the differential voltage (Vin+ – Vin-) and scaling by factor “K”.
Figure 2 shows the simulation results of sweeping the differential input voltage sources from -200 mV to +200 mV while monitoring the various device currents. Note the differential output current, which is:
[Ic(Q1)+Ic(Q3)] – [Ic(Q2)+Ic(Q4)]
closely approximates the ideal square law with a scale factor of 0.3 (A/V²) for differential input voltages of ±60 mV.
Figure 2 Simulation results of sweeping the differential input voltage sources from -200 mV to +200 mV while monitoring the various device currents.
Please note this circuit is a transconductor type where the output is a differential current controlled by a differential input voltage.
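The reported ~0.3 scale factor can be sanity-checked with a back-of-envelope model. This is a hedged sketch, not the author's LTspice deck: it assumes ideal equal-geometry devices obeying Ic ∝ exp(Vbe/VT), Q1/Q3 bases at ±Vd/2 around the R7/R8 center point, Q2/Q4 bases at the center point itself, all emitters sharing the 4-mA tail, and base currents and R7/R8 drops neglected:

```python
import math

VT = 0.026    # thermal voltage at room temperature, V (assumption)
ITAIL = 4e-3  # tail current source I1 from the DI, A

def i_out(vd):
    """Differential output current [Ic(Q1)+Ic(Q3)] - [Ic(Q2)+Ic(Q4)]
    for the idealized four-device stage."""
    a = vd / (2.0 * VT)                 # Q1/Q3 base offsets in units of VT
    c = math.exp(a) + math.exp(-a)      # = 2*cosh(a)
    return ITAIL * (c - 2.0) / (c + 2.0)

# Small-signal expansion gives i_out ~ ITAIL/(16*VT^2) * vd^2, about
# 0.37 A/V^2 for this idealized model: the same order as the ~0.3 A/V^2
# observed in the LTspice sweep.
k = i_out(0.01) / 0.01**2
```

The model also shows the even symmetry that makes the stage attractive for square-law work: i_out is identical for +Vd and −Vd.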
Anyway, thanks to Aalyia for allowing us to follow up with this DI, and hopefully some folks will find this and the previous circuits interesting.
Michael A Wyatt is a life member with IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat and retiring (semi) with Wyatt Labs. During his career he accumulated 32 US Patents and in the past published a few EDN Articles including Best Idea of the Year in 1989.
Related Content
- Single-supply single-ended input to pseudo class A/B differential output amp
- Simple 5-component oscillator works below 0.8V
- Applying fully differential amplifier output-noise analysis to drive high-performance ADCs
- Understanding output filters for Class-D amplifiers
The post Simple diff-amp extension creates a square-law characteristic appeared first on EDN.
Event-based vision comes to Raspberry Pi 5

A starter kit from Prophesee enables low-power, high-speed event-based vision on the Raspberry Pi 5 single-board computer. Based on the GenX320 Metavision event-based vision sensor, the kit accelerates development of real-time neuromorphic vision applications for drones, robotics, industrial automation, security, and surveillance. The camera module connects directly to the Raspberry Pi 5 via a MIPI CSI-2 (D-PHY) interface.
Consuming less than 50 mW, the 1/5-in. GenX320 sensor provides 320×320-pixel resolution with an event rate equivalent to ~10,000 fps. It offers >140-dB dynamic range and sub-millisecond latency (<150 µs at 1,000 lux).
Software resources include OpenEB, the open-source core of Prophesee’s Metavision SDK, with Python and C++ API support. Drivers, data recording, replay, and visualization tools can be found on GitHub.
The GenX320 starter kit is available for pre-order through Prophesee and authorized distributors. The Raspberry Pi 5 board is sold separately.
GenX320 starter kit product page
The post Event-based vision comes to Raspberry Pi 5 appeared first on EDN.
MCUs drive LCD and capacitive touch

Renesas’ RL78/L23 16-bit MCUs provide segment LCD control and capacitive touch sensing for responsive HMIs in smart home appliances, consumer electronics, and metering systems. Running at 32 MHz, these low-power MCUs include 512 KB of dual-bank flash memory, enabling seamless over-the-air firmware updates.
The MCUs offer an active current of 109 µA/MHz and a standby current as low as 0.365 µA, with a fast 1‑µs wakeup time. With a wide voltage range of 1.6 V to 5.5 V, they can operate directly from 5‑V power supplies commonly used in home appliances and industrial systems.
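Those two currents bracket a duty-cycled design's battery life. A rough sketch using the figures from the brief; the duty cycle and battery capacity are illustrative assumptions, and wakeup energy is neglected:

```python
def average_current_ua(active_ua_per_mhz, f_mhz, standby_ua, active_duty):
    """Time-weighted average supply current for an MCU that runs at f_mhz
    for a fraction `active_duty` of the time and sleeps otherwise."""
    return active_duty * (active_ua_per_mhz * f_mhz) + (1.0 - active_duty) * standby_ua

# RL78/L23 figures: 109 uA/MHz at 32 MHz, 0.365 uA standby; assume the
# CPU is active 0.1% of the time (e.g., a metering application):
avg = average_current_ua(109.0, 32.0, 0.365, 0.001)   # ~3.85 uA average

# Against a hypothetical 220 mAh coin cell (220,000 uAh):
years = 220_000.0 / avg / 8766.0                      # roughly 6.5 years
```

At such low duty cycles the standby current, not the active current, dominates the budget, which is why sub-microamp standby figures matter.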
The reference mode of the integrated LCD controller reduces display power by approximately 30% compared to the RL78/L1X series. A snooze mode sequencer (SMS) enables dynamic segment updates without CPU intervention, further enhancing energy efficiency.
Development tools for the RL78/L23 include the Smart Configurator and QE for Capacitive Touch, which simplify system design and firmware setup. Renesas also provides the RL78/L23 Fast Prototyping Board, compatible with the Arduino IDE, and a capacitive touch evaluation system for hardware testing and validation.
RL78/L23 MCUs are available now from the Renesas website or distributors.
The post MCUs drive LCD and capacitive touch appeared first on EDN.
Wireless SoC raises AI efficiency at the edge

The Apollo510B wireless SoC from Ambiq combines a 48-MHz dedicated network coprocessor with a Bluetooth LE 5.4 radio for power-efficient edge AI. Its Arm Cortex-M55 CPU, enhanced with Helium vector processing and Ambiq’s turboSPOT dynamic scaling, delivers up to 30× greater AI efficiency and 16× faster performance than Cortex-M4 devices.
With 64 KB each of instruction and data cache, 3.75 MB of RAM, and 4 MB of embedded nonvolatile memory, the Apollo510B provides fast, real-time processing. Its 2D/2.5D GPU handles vector graphics, while SPI, I²C, UART, and high-speed USB 2.0 support flexible sensor and device connections. High-fidelity audio is enabled via a low-power ADC and stereo digital microphone PDM interfaces.
Apollo510B also integrates secureSPOT 3.0 and Arm TrustZone, enabling secure boot, firmware updates, and protection of data exchange across connected devices. These features make the device well-suited for always-on, intelligent applications such as wearables, smart glasses, remote patient monitoring, asset tracking, and industrial automation.
The Apollo510B SoC will be available in fall 2025.
The post Wireless SoC raises AI efficiency at the edge appeared first on EDN.
Instruments work together to ensure design integrity

Smart Bench Essentials Plus is an enhanced set of Keysight test instruments offering improved precision and reliability. The core instruments—a power supply, waveform generator, digital multimeter, and oscilloscope—meet industry and safety standards such as ISO/IEC 17025, IEC 61010, and CSA. All instruments are managed from a single PC via PathWave BenchVue software, simplifying test automation and workflows.
According to Keysight, Smart Bench Essentials Plus delivers 10× higher DMM resolution, 5× greater waveform generator bandwidth, 4× more power supply capacity, and 64× higher oscilloscope vertical resolution over the previous series. Development engineers can test, troubleshoot, and qualify electronic designs while leveraging these benefits:
- Reduce measurement errors with Truevolt technology in a 6.5-digit dual-display digital multimeter.
- Generate accurate waveforms with Trueform technology in a 100-MHz waveform/function generator.
- Deliver reliable, responsive power with a 400-W, four-channel DC power supply.
- Capture even the smallest signals with a portable four-channel oscilloscope featuring a custom ASIC and 14-bit ADC.
Instruments have intuitive, color-coded interfaces and standardized menus to improve productivity. Built-in graphical charting tools make it easy to visualize and analyze test results.
To learn more about the Smart Bench Essentials Plus portfolio and request a bundled quote, click here.
The post Instruments work together to ensure design integrity appeared first on EDN.
AEC-Q100 LED driver delivers dynamic effects

Diodes’ AL5958Q matrix LED driver integrates a 48-channel constant-current source and 16 N-channel MOSFET switches for automotive dynamic lighting. Two cascade-connected drivers support up to 32 scans, well-suited for narrow-pixel mini- and micro-LED displays that use multiple RGB LEDs to deliver animated lighting effects and information.
The AEC-Q100 qualified driver employs multiplex pulse density modulation (M-PDM) control to raise the refresh rate of dynamic scanning systems without increasing the grayscale clock frequency or introducing EMI. Built-in matrix display command functions reduce processing overhead on the local MCU. These functions include automatic black-frame insertion, ghost elimination, and suppression of shorted-pixel caterpillars.
Operating from a 3-V to 5-V input, the AL5958Q’s 48 constant-current outputs supply up to 20 mA per LED channel string. Current accuracy between channels and matching across devices is typically ±1.5%.
The AL5958Q LED driver costs $1.60 each in lots of 2500 units.
The post AEC-Q100 LED driver delivers dynamic effects appeared first on EDN.
Mixed signals, on a power budget: Intelligent low-power analog in MCUs

It goes without saying that battery-powered devices are sensitive to power draw, especially during periods of inactivity. One such use case is sensor nodes or portable sensors: these devices passively monitor a specific condition and, when a threshold is exceeded, trigger an alarm or log the event for further analysis. Since most devices incorporate some form of microcontroller (MCU), selecting an MCU with intelligent analog peripherals can reduce the bill of materials (BOM) by performing the same functions as a discrete device, while potentially saving power by disabling the analog functionality when not needed.
To demonstrate these features, we built two demos on the PIC16F17576 microcontroller family. One demo aims to use as little power as possible while detecting temperature changes; the other uses the embedded op-amps to dynamically adjust the gain based on the input signal.
Power consumption
Let's start at the top: power consumption. No matter how you slice it, all roads lead to the same basic tenets:
- Keep VDD as low as possible
- Minimize oscillator frequency
- Turn off unused peripherals and external circuits whenever, and as much as, possible
- Avoid floating nodes on digital I/O
Beyond this advice, it becomes a lot more application-specific. For instance, most op-amps and ADCs don’t have an OFF switch. This is where intelligent analog peripherals fit into designs.
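The first two tenets compound: CMOS dynamic power scales roughly as C·VDD²·f, so lowering the supply pays off quadratically and lowering the clock pays off linearly. A quick back-of-the-envelope sketch (illustrative baseline values, not device data):

```python
# Rough illustration (not datasheet figures): CMOS dynamic power scales
# as P ~ C * VDD^2 * f, so reducing VDD and clock frequency compounds.
def relative_dynamic_power(vdd, freq, vdd_ref=3.3, freq_ref=32e6):
    """Dynamic power relative to an assumed 3.3 V / 32 MHz baseline."""
    return (vdd / vdd_ref) ** 2 * (freq / freq_ref)

# Dropping from 3.3 V / 32 MHz to 1.8 V / 1 MHz leaves under 1% of the
# baseline dynamic power.
saving = relative_dynamic_power(1.8, 1e6)
```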
The “intelligent” part of the name comes from the fact that these peripherals can be controlled in software. While most analog peripherals would not be considered power hungry, every bit of current matters when optimizing battery life, and an integrated peripheral generally has a higher quiescent current draw than its discrete equivalent due to process limitations.
However, there are special low-power peripherals that allow for ultra-low power operation, even when enabled all the time. For instance, the Low Power Voltage Reference (VREFLP) and Low Power Analog Comparator (CMPLP) in the PIC16F17576 family of MCUs draw minimal power but can trigger interrupts to wake the CPU if action is needed.
For devices without these lower power peripherals, another peripheral available in PIC MCUs is the Analog Peripheral Manager (APM). The APM is a specialized counter that can toggle power ON/OFF to the analog peripherals while allowing the CPU to remain continuously in sleep.
If an event occurs that requires intervention from the CPU, the peripherals can generate an interrupt to wake the device. This avoids having to perform the following sequence: wake the CPU, power on the peripherals, check the results, perform an action, shut down the peripherals, and return to deep sleep.
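A toy duty-cycle calculation shows why skipping that sequence matters. All currents and durations below are hypothetical illustration values, not PIC16F17576 datasheet figures:

```python
# Illustrative comparison: average current when the CPU must wake to
# sequence the analog peripherals vs. an APM-style counter power-cycling
# them while the CPU stays in deep sleep. Numbers are made up.
def average_current(phases):
    """phases: list of (current_uA, duration_ms); returns average in uA."""
    total_charge = sum(i * t for i, t in phases)
    total_time = sum(t for _, t in phases)
    return total_charge / total_time

# CPU wakes every 100 ms to run a 2-ms measurement at 1.5 mA,
# then sleeps at 1 uA for the remaining 98 ms.
cpu_managed = average_current([(1500.0, 2.0), (1.0, 98.0)])
# APM powers only the ADC (assumed 300 uA) for the same 2 ms window.
apm_managed = average_current([(300.0, 2.0), (1.0, 98.0)])
```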
Low-power demo
The objective of the low-power demo is to demonstrate the new CMPLP and VREFLP as a temperature alarm. This application could be used for cold asset tracking to log when an event over the expected temperature occurs. For the demo implementation, we designed a circuit to detect when a person touches the thermistor(s), causing a rise in temperature.
Figure 1 A finished low-power demo prototype that detects the temperature rise that occurs when a person touches the thermistor(s).
This circuit is composed of two PIC16F17576 MCUs; one device acts as the device under test (DUT) while the other handles power measurement and display.
Power measurement and display
To measure the minuscule amount of current pulled by the MCU DUT, it was important to design a circuit that could perform high-side current sensing while also maintaining the power supply at 1.8 V, the lowest recommended operating voltage for this device family. For reference, the minimum operating voltage is 1.62 V, so the 1.8-V supply provides a 10% margin before the device is out of specified operating conditions.
To measure the quiescent current of the MCU and low-power analog peripherals, a precision 1:1 current mirror IC was used to supply current to the DUT (Figure 2). This IC has a settable compliance output limit, but the tolerancing and ranging of the internal reference were not acceptable for our purposes, so we overdrive the IC with an external 1.8-V reference (MCP1501-18E) to avoid having to calibrate each unit individually.
Figure 2 The high-side current circuit to measure the minuscule amount of current pulled by the MCU DUT, and 1.8-V DUT power supply.
This ensures the power rail for the DUT is as close as possible to 1.8 V. Guard rings and planes are placed on the PCB to minimize the leakage current of this rail as much as possible. The 1:1 current output goes through a sense resistor, and then a differential measurement of the voltage at the resistor is performed with a 24-bit delta-sigma ADC (MCP3564R) with an external 2.048-V voltage reference (MCP1501-20E). This is shown in Figure 3. The resulting measurement is then displayed on the OLED screen attached to the board.
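The conversion from ADC code back to DUT current is a two-step scaling. In this sketch the 10-kΩ sense-resistor value is an assumption for illustration, and the MCP3564R is modeled simply as a signed 24-bit converter against its 2.048-V reference:

```python
# Converting the delta-sigma ADC reading back to DUT current.
# R_SENSE is an assumed value for illustration, not the demo schematic's.
VREF = 2.048          # external reference (MCP1501-20E), volts
R_SENSE = 10_000.0    # assumed sense resistor, ohms

def adc_code_to_current_uA(code, gain=1):
    """Signed 24-bit code -> sense voltage -> DUT current in microamps."""
    v_sense = code * VREF / (gain * (1 << 23))
    return v_sense / R_SENSE * 1e6
```

With these assumed values, a reading in the low tens of millivolts across the sense resistor resolves currents in the low-microamp range the DUT actually draws.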
Figure 3 The ADC implementation where the differential measurement of the voltage at the resistor is performed with a 24-bit delta-sigma ADC with an external 2.048-V voltage reference.
A (good) problem we discovered late in the process was that the current measurement in this configuration is so stable, it looks hard-coded on the display. Thankfully, this can be easily disproved by gently touching the DUT’s decoupling capacitors with a finger or other slightly conductive object and observing the change in measured current.
DUT
The DUT plays a simple but crucial role: detecting temperature changes with as little power consumption as possible. For this, CMPLP and VREFLP are used together with the Peripheral Pin Select (PPS) system to output the state of the CMPLP without waking the CPU.
In an actual application, CMPLP’s output edge (LOW to HIGH) would be used to wake the CPU to perform some action, like logging a temperature event or sounding an alarm.
Using the high-side current measurement circuit, we found the current draw of the microcontroller in this state is ~2.2 to 2.4 μA, though there is room for a tiny bit of extra power savings.
VREFLP comprises two separate subsystems: a low-power 1-V reference and a low-power DAC. This application uses the slightly more power-hungry low-power DAC instead of the fixed 1-V reference because the temperature change from physical contact is very small, and the system must recalibrate the threshold on startup to account for environmental variance. In an application where a few degrees of tolerance are acceptable, using the 1-V reference would save a few fractions of a microamp.
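The startup recalibration described here can be sketched as: sample the thermistor divider at ambient, then park the comparator threshold a few DAC codes above that reading so a small touch-induced rise trips CMPLP. The DAC resolution and full-scale below are assumptions for illustration, not the actual VREFLP spec:

```python
# Startup threshold calibration sketch. DAC_BITS and the full-scale
# voltage are hypothetical, chosen only to illustrate the idea.
DAC_BITS = 5          # assumed low-power DAC resolution
VDD = 1.8             # DAC full-scale assumed to track VDD

def calibrate_threshold(v_ambient, margin_codes=2):
    """Return (dac_code, threshold_volts) just above the ambient reading."""
    lsb = VDD / (1 << DAC_BITS)
    ambient_code = round(v_ambient / lsb)
    code = min(ambient_code + margin_codes, (1 << DAC_BITS) - 1)
    return code, code * lsb
```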
Notably, this demo does not use the APM because the APM requires an oscillator to remain active, consuming a little bit more power (~2.8 μA) than simply leaving these ultra-low power modules on. In a situation where multiple analog peripherals are being used, such as the integrated op-amps, ADC, etc., the APM would provide significant savings in power.
Dynamic gain
Another feature of intelligent analog peripherals is the ability to adjust on the fly. In some cases, a signal may have a large dynamic range that is tricky to measure without clipping.
Clipping a signal is usually considered undesirable, as waveform information about the signal is lost. A simple example of this is a microphone: whispering requires a high gain while shouting requires a low gain. With a fixed gain, designers pick the worst (reasonable) conditions to avoid signal clipping, but this, in turn, reduces the signal resolution.
A way around this problem is to use embedded op-amps. These op-amps aren’t going to outmatch high-end discrete op-amps, but they are often comparable to general-purpose ones.
And, in many cases, the integrated op-amps contain built-in resistor networks that allow the op-amp(s) to adjust the circuit gain as needed. This requires no extra components or specialized circuitry as it’s already integrated into the die.
Dynamic gain demo
One of the main use cases for the integrated op-amps inside MCUs is to dynamically switch gains depending on how strong the signal is. This is often done to avoid clipping the signal when the signal strength is high.
This application creates a simple demonstration of this use case by amplifying the output of a pressure sensor and displaying it visually on an LED bar graph.
Figure 4 A dynamic gain demo that amplifies the output of a pressure sensor and displays it visually on an LED bar graph.
Theory of operation
Pressure sensor
The pressure sensor in this application changes resistance depending on the amount of pressure applied. This resistor is used as part of a resistor divider network to generate an output signal from 0 to 2 V. Since both the discrete op-amp and the integrated op-amp have high input impedances, the two circuits can share the same signal without loading down the network.
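As a sketch of that front end (component values are assumptions for illustration, not from the demo schematic), the divider output falls toward 0 V as the sensor resistance rises and stays within the 0-to-2-V window for sensor resistances of 15 kΩ and up:

```python
# Resistor-divider model of the pressure-sensor front end.
# V_SUPPLY and R_FIXED are assumed values, not the demo's actual parts.
V_SUPPLY = 5.0        # volts
R_FIXED = 10_000.0    # fixed lower leg of the divider, ohms

def divider_out(r_sensor):
    """Voltage tapped across the fixed resistor as the sensor R varies."""
    return V_SUPPLY * R_FIXED / (R_FIXED + r_sensor)
```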
Dynamic gain circuit
The PIC16F17576 MCU has four op-amps, two of them containing integrated resistor ladders. These ladders have eight steps, plus an additional option for unity gain (1x), for a total of nine options. Alternatively, resistors or other components can be connected to the I/O pins to assign an arbitrary gain or function, if desired.
In this demo, the MCU’s op-amp is switched between a gain of 2x (LOW) and 4x (HIGH) at runtime depending on the measured signal.
In most applications, when the signal strength is low, the gain would be HIGH. However, it is worth noting that in this demo, the inverse is true. This is purely for visual reasons; otherwise, the clipping condition would have more lights ON and thus appear “better” than the dynamic gain version at a glance. As the gain of the embedded op-amps is set up in software, it was easily reconfigured to match the desired behavior.
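In the conventional configuration (high gain for weak signals, the opposite mapping to this demo's display-driven choice), the runtime selection logic might look like the following sketch. The hysteresis thresholds are illustrative choices, not values from the demo firmware:

```python
# Runtime gain selection sketch: switch between 2x and 4x based on the
# measured op-amp output, with hysteresis so the gain does not chatter
# when the signal hovers near a single boundary.
FULL_SCALE = 4.096    # ADC reference, volts

def select_gain(v_out, current_gain):
    """Return the next gain (2 or 4) given the measured output voltage."""
    if current_gain == 4 and v_out > 0.9 * FULL_SCALE:
        return 2      # output near clipping: back off to 2x
    if current_gain == 2 and v_out < 0.4 * FULL_SCALE:
        return 4      # plenty of headroom: restore 4x resolution
    return current_gain
```

Because the gain is set in software, inverting this mapping, as the demo does for visual effect, is a one-line change.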
Measurement and display
The PIC16F17576 MCU also measures both op-amp outputs to display on the LED bar graph. The internal Fixed Voltage Reference (FVR) is used to generate a stable 4.096 V from the +5-V (USB) supply for conversions. MCP23017 I²C I/O expanders drive the LEDs of the display.
Putting it all together
Adjusting the circuit gain without any external circuitry greatly simplifies designs with large signal ranges. These peripherals, of course, will not replace high-performance op-amps, ADCs, DACs, or voltage references, but embedded analog peripherals are a good way to handle signals that require some conditioning but aren’t particularly sensitive. This, coupled with low-power functionality, makes them a useful tool to reduce circuit complexity, time to market, and ultimately the BOM in your design.
Robert Perkel is an application engineer for Microchip Technology. In this role, he develops technical content such as App Notes, contributed articles, and videos. He is also responsible for analyzing use cases of peripherals and the development of code examples and demonstrations. Perkel is a graduate of Virginia Tech, where he earned a Bachelor of Science degree in Computer Engineering.
Related Content
- Developing a spectrophotometer with integrated analog peripherals
- Deploying task-specific microcontrollers simplifies complex designs
- Developing window security nodes with level shifting I/O – Part 2
- Fundamentals of I3C interface communication
- Slideshow: The most-popular MCUs ever
The post Mixed signals, on a power budget: Intelligent low-power analog in MCUs appeared first on EDN.