Submit your Electronic Product of the Year

Submissions are now open for the 2025 Product of the Year. Winners will be announced in January 2026 and featured in the January/February 2026 digital issue of Electronic Products Magazine, now presented by EDN.com.
Did your company announce or start shipping a product between November 1, 2024, and October 31, 2025, that represents a significant advancement in technology or its application, an innovation in design, or a gain in price/performance? If yes, tell us about it below. You may submit separate entries for more than one new product, and there are no fees of any kind. The product description can be just a few lines of key information, plus you can upload datasheets and images. The Electronic Products editors will select 13 winners from these and other products introduced or announced during the year.
Entries must be received by 11:59 p.m. PDT on Monday, November 3, 2025. Contact us at editorial@aspencore.com or gina.roos@aspencore.com with any questions.
The post Submit your Electronic Product of the Year appeared first on EDN.
Applications processor targets in-cabin sensing

NXP Semiconductors unveils its i.MX 952 AI-enabled applications processor for automotive human-machine interfaces (HMIs), in-cabin sensing, and vision applications. The new processor leverages NXP’s sensor fusion, powered by the eIQ Neutron neural processing unit (NPU), for applications such as driver monitoring, child presence detection, and industrial HMI systems.
(Source: NXP Semiconductors)
The i.MX 952 applications processor uses AI to combine inputs from different sensors, delivering more accurate and usable data for improved safety in interior cabin sensing applications and helping meet requirements such as Euro NCAP. These in-cabin sensing systems are used to determine driver attention levels, ensure proper airbag calibration, and detect a child left alone in a car.
“By combining the data from cameras, UWB, ultrasonic and other sensors, the i.MX 952 SoC enhances the intelligence each system provides to deliver a more intuitive interaction between the driver and car,” said Dan Loop, vice president and general manager, edge microprocessor, NXP, in a statement. “This allows OEMs and Tier 1s to offer additional value beyond safety, such as health monitoring, personalization and more, while scalability with the i.MX 95 family reduces hardware and software total cost of ownership and improves times to market.”
The i.MX 952 can also be used in industrial applications, such as AI-powered surveillance and environment sensing, as well as HMI systems. The processor leverages AI to provide real-time analysis and anomaly detection across the factory floor, and it supports low-power scaling to multi-site monitoring and control from a central office.
The i.MX 952, part of NXP’s i.MX 9 series, is pin-to-pin compatible with the i.MX 95 family. This makes it easier for developers to scale their hardware and software design to meet different price points with a single platform design, NXP said.
The i.MX 952 features an integrated eIQ Neutron NPU for use with multiple camera sensors and an image signal processor and supports RGB-IR sensors. It delivers low-power, real-time, and high-performance processing through a multi-core application domain with up to four Arm Cortex-A55 cores, and an independent safety domain with Arm Cortex-M7 and Arm Cortex-M33 CPUs. It enables ISO 26262 ASIL B compliant platforms and SIL2/SIL3 compliant platforms in industrial safety-critical environments.
NXP claims the i.MX 952 SoC is the industry’s first automotive and industrial processor with integrated support for local dimming, delivering lower power consumption and improved visibility.
With the i.MX 952, in-cabin LCD panels and HUDs use less energy and deliver higher contrast, while outdoor HMI panels gain dynamically adjusted brightness for optimal visibility in harsh lighting conditions, NXP said, reducing power consumption and eliminating the need for additional components.
The new SoC also features advanced security. This includes EdgeLock Secure Enclave (Advanced Profile), a hardware root of trust that simplifies the implementation of security-critical functions such as secure boot, secure update, device attestation, and secure device access, based on both classic cryptography and post-quantum cryptography (PQC) to ensure security into the future. Together with NXP’s EdgeLock 2GO key management services, OEMs can securely provision i.MX 952 SoC-based products with credentials for secure remote management of devices deployed in the field, including secure over-the-air updates.
The i.MX 952 applications processor will start sampling in the first half of 2026.
Lattice sets new standard for secure control FPGAs
Lattice Semiconductor claims the industry’s first post-quantum cryptography (PQC)-ready FPGAs with the launch of its MachXO5-NX TDQ family. Touted as the industry’s first secure control FPGAs, the MachXO5-NX TDQ family features full CNSA 2.0-compliant PQC support.
Built on the Lattice Nexus platform, these FPGAs target applications such as computing, communications, industrial, and automotive applications, addressing the continued threat of quantum-enabled cyberattacks.
The MachXO5-NX TDQ FPGA family provides the only complete set of CNSA 2.0- and National Institute of Standards and Technology (NIST)-approved PQC algorithms (LMS, XMSS, ML-DSA, ML-KEM, AES256-GCM, SHA2, SHA3, and SHAKE), offering robust protection against quantum threats, according to Lattice. Its authenticated and/or encrypted bitstream ensures data integrity and protection against unauthorized access with ML-DSA, LMS, XMSS, and AES256. The family features crypto-agility via in-field algorithm updates and anti-rollback version protection for ongoing alignment with evolving standards, plus secure bitstream key management with revocable root keys and a sophisticated key hierarchy for PQC and classical keys.
Advanced cryptography features include advanced symmetric and classical asymmetric cryptographic algorithms (AES-CBC/GCM 256 bit, ECDSA-384/521, SHA-384/512, and RSA 3072/4096 bit) for bitstream and user data protection. A device identifier composition engine, security protocol and data model, and Lattice SupplyGuard support provide attestation and secure lifecycle/supply chain management for future-proof, end-to-end security.
The FPGAs also provide hardware root of trust (RoT), delivering a trusted single-chip boot with integrated flash, a unique device secret that ensures distinct device identity, and integrated non-volatile configuration memory and user flash memory with flexible partitioning and secure locking. They also feature comprehensive locking control of the programming interface (SPI, JTAG), side channel attack resiliency, and NIST Cryptographic Algorithm Validation Program (CAVP) compliant algorithms.
In addition, Lattice expanded its RoT-enabled Lattice MachXO5-NX device family with new MachXO5-NX TD devices, offering new density and package options. The new Lattice MachXO5-NX TDQ and MachXO5-NX TD FPGA devices are currently available and are supported by the latest release of Lattice Radiant design software.
Exponentially-controlled vactrols
Brief intro to vactrols
Vactrols, which pair an LED with a light-dependent resistor (LDR) in a light-tight housing, are found in analog music electronics such as audio compressors, voltage-controlled amplifiers (VCAs), voltage-controlled filters (VCFs), and other applications.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Nowadays, analog ICs are used for this purpose, so vactrols have become quite rare. One of their main advantages was, and remains, low large-signal distortion compared to transistor circuits.
On the other hand, they are slow and sluggish when driven by small control currents and have a nonlinear characteristic curve.
Fortunately, the characteristic curve of the conductance versus control current is more linear than that of the resistance. This is advantageous, for instance, for VCFs with a frequency response proportional to 1/RC. For music electronics applications, however, exponential control of the conductance is preferred since voltage-controlled circuits use the “volt/octave” characteristic, whereby with each volt of additional control voltage, the cutoff frequency of the VCF doubles.
Another advantage of exponential vactrol control is the fact that the LED current never becomes 0 [y= exp(x) > 0] and thus the LDR never reaches its full dark resistance, which has a positive effect on the response time of the LDR.
A vactrol circuit
Usually, a pair of transistors is used to convert a linear control voltage into an exponential current. In the case of a vactrol, however, the transistor pair can be replaced by the LED itself, which, like any diode, is a voltage-controlled exponential current source.
For temperature compensation, two matched LEDs are required, similar to the transistor circuit.
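The principle can be sketched numerically with the Shockley diode equation: LED current grows exponentially with forward voltage, and with two matched LEDs the current ratio depends only on the voltage difference, so the strongly temperature-dependent saturation current cancels out. A minimal sketch, where the saturation current Is, ideality factor n, and forward voltages are illustrative assumptions rather than measured LED values:

```python
import math

def diode_current(v, i_s=1e-25, n=2.0, t_kelvin=293.15):
    """Shockley diode equation: I = Is * (exp(V / (n*VT)) - 1)."""
    vt = 1.380649e-23 * t_kelvin / 1.602176634e-19  # thermal voltage kT/q
    return i_s * (math.exp(v / (n * vt)) - 1.0)

# A small increase in forward voltage multiplies the current:
i1 = diode_current(2.70)   # current at an assumed 2.70-V forward voltage
i2 = diode_current(2.60)   # current at 2.60 V

# With two matched LEDs, the current ratio depends only on the voltage
# difference -- the strongly temperature-dependent Is cancels:
ratio = i1 / i2            # ~= exp(0.1 / (n*VT)), independent of Is
```

Because only the voltage difference matters, drift in the matched LEDs’ forward characteristics affects both devices equally, which is exactly what the two-transistor exponential converter exploits.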
Figure 1 shows the simulated circuit of the exponential vactrol control.
Figure 1 An exponential vactrol drive where a reference LED is used to convert a linear control voltage into an exponential current, and two matched LEDs are used for temperature compensation.
The reference LED is operated with Iref = -V/R4. At CV = 0, the current in vactrol LED2 is identical, and the resistance of the LDR is set to the middle of the desired resistance range via Iref, here about 30 µA.
As CV increases, the voltage at the cathode of LED2 decreases, but the voltage between the anode and cathode increases so that the LED current increases exponentially.
With a negative CV, the voltage across LED2 decreases accordingly, so the LED current decreases exponentially. The range of the LDR resistance is determined by summing amplifier U1’s gain. In practical applications, a range of ~1 MΩ (CV = -5 V) to 1 kΩ (CV = +5 V) is used, so that a VCF can be tuned from 20 Hz to 20 kHz.
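These numbers can be checked with a short script: under the volt/octave law the cutoff is f = f0 · 2^CV, and the LDR resistance needed for a 1/(2πRC) cutoff falls from roughly 1 MΩ at CV = -5 V to about 1 kΩ at CV = +5 V. The mid frequency and capacitor value below are illustrative assumptions, not values from the actual circuit:

```python
import math

F0 = math.sqrt(20 * 20_000)   # geometric mid frequency, ~632 Hz
C = 8.2e-9                    # assumed filter capacitor (hypothetical value)

def cutoff(cv_volts):
    """Volt/octave law: each additional volt of CV doubles the cutoff."""
    return F0 * 2.0 ** cv_volts

def ldr_resistance(f_cutoff):
    """LDR resistance needed for a 1/(2*pi*R*C) cutoff frequency."""
    return 1.0 / (2.0 * math.pi * f_cutoff * C)

for cv in (-5.0, 0.0, +5.0):
    f = cutoff(cv)
    print(f"CV={cv:+.0f} V -> f = {f:8.1f} Hz, R = {ldr_resistance(f):10.0f} Ohm")
```

With these assumed values, the 10-V CV span covers ten octaves (a factor of 1024 in frequency), closely matching the 20 Hz to 20 kHz audio range and the ~1 MΩ-to-1 kΩ resistance swing described above.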
Thermistor R3 improves the temperature drift of the LED current. Still, the LDR’s temperature dependence remains at approximately 0.2%/K, which makes the vactrol circuit less suitable for high-end VCOs.
For other applications (VCF, VCA), the temperature drift is good enough, and in most cases, the thermistor can be omitted.
Figure 2 shows the simulated resistance curve and LED2 current at 20°C and 40°C.

Figure 2 The simulated resistance curve and LED2 current at 20°C and 40°C.
Practical notes
A small PCB was developed for the circuit. The SMD LEDs are standard white types in a 5730 case. Vactrol LED2 is on the PCB top side and illuminates two GL5537 LDRs, which are arranged at an angle of approximately 45 degrees above LED2.
By slightly bending the LDRs, they can be mechanically trimmed for matching resistance. A small black 3D-printed box and a PCB with black solder mask prevent external light from affecting the circuit. Circuits with two and four LDRs illuminated by one LED have been successfully tested to implement 2nd- and 4th-order VCFs.
Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.
Related Content
- Vactrol – A Lazy Walk
- Automatic Street Light Circuit
- LDR = Light Dependent Resistor = Photoresistor
- Electroschematics LDR circuits
RingConn: Smart, svelte, and econ(omical)

Life is rife with dichotomies. Good and evil. Black and white. Up and down. Left and right. And, apparently, Ultrahuman and RingConn ;-). My previous post detailed my experiences, observations, and conclusions from a week or so evaluating Ultrahuman’s Ring AIR smart ring, following up on last month’s smart ring introductory overview write-up. This one will cover its also-scheduled-for-shipment-cessation-on-October-21 competitor, RingConn’s Gen 2.
What do I mean by dichotomy in this regard? Well, several of the Ultrahuman weak points were, in contrast, RingConn’s strengths. What did I like the most about the Ultrahuman smart ring? It’s the same thing I liked least about RingConn’s alternative device.
Color shortcomings
Let’s dive into the details, starting with that last nitpick bit, since it matches the ordering cadence from last time. Here again are all three smart rings I initially tested, simultaneously located on my left index finger:

The RingConn Gen 2 is at the right, with the Ultrahuman Ring AIR in the middle and Oura’s Gen3 Horizon at left. Color options specifically selected for my evaluations are as follows:
- RingConn Gen 2: Future Silver
- Ultrahuman Ring AIR: Raw Titanium
- Oura Gen3 Horizon: Brushed Titanium
As mentioned last time, the Ultrahuman ring is the closest match to my wedding band on the left-hand ring finger. The Oura Gen3 Horizon is next in the similarity line, although, as you’ll see in near-future detailed coverage of it, the differentiation from my band is more obvious when it’s standalone on the index finger. And the sketchiest match, at least from the standpoint of the wedding band’s body color, is the RingConn Gen 2, although it does a decent job of accentuating the wedding band’s bright edges:

The irony here is that the original RingConn Gen 1 did come in a duller Moonlit Silver color option, which likely would have been a closer match, but for some unknown reason, the company decided not to continue it into the next-generation offering:

Other folks are apparently displeased with the shinier evolutionary trend, too, and have dulled their Gen 2s using abrasive-side kitchen sponges, Dremels, files, and the like. I’m impressed with the results, although I’m admittedly not sure I’ve got the moxie to follow in their footsteps:

From this point forward, pretty much everything else came up roses. I’d bought my ring, gently (and briefly) used, off Mercari (no, I never seemingly learn, but this time the outcome was positive) back in mid-June for ~$200 inclusive of tax, shipping, etc., representing a 33.3% (or more) discount off the normal sale price. Initially, the battery charge level only dropped ~5% per day, translating into a whopping nearly three weeks of estimated between-charges operating life (although I never let it completely drain to see if the discharge rate was truly linear or not). Even now, roughly three months later, the drain is still notably less than 10% per day. And it recharges very quickly.
To the best of my recollection, the ring (originally introduced in August 2024) has also received only one firmware update the entire time I’ve owned it, which installed successfully and drama-free. I really do like RingConn’s direct (vs inductive) charging scheme, which reliably mates the ring to the dock (courtesy of magnetic attraction between the two sets of contacts) and preserves existing dock investments if you change ring sizes:

And the high-end Gen 2 comes with an official (from-RingConn versus third-party) battery case, convenient for use when traveling (for long durations, mind you, given the ring’s inherent lengthy between-charges operating life):

Standard charging docks, factory-bundled with the lower-priced Gen 2 Air (which I’ll cover next), can also be purchased separately for both Gen 2 smart ring models.
The lower-priced, apnea-less alternative
The mainstream Gen 2 smart ring I tested normally sells for $299 or more (minus occasional promotional discounts) on Amazon and elsewhere, and comes in three color scheme options:
- (aforementioned) Future Silver
- Matte Black
- Royal Gold
For $100 more ($399 total), there’s also a (fourth) Rose Gold color option.
RingConn also sells a $199 “Air” version of the Gen 2 smart ring. There are, as far as I know, only two differences between it and the more expensive alternative:
- Only two color options this time: Galaxy Silver and Dune Gold, and
- No sleep apnea measurement and analysis capabilities (which may reflect a reduced sensor or other functional allotment, or may just be a software feature lock-out)
The latter point is one for which I have personal interest, so I’ve spent a fair bit of time assessing it. For one thing, the RingConn Gen 2 is the only smart ring I’m aware of on the market that offers this feature. I tested it a bit; here’s the report I got on September 5, for example:

which closely correlated with the data that came directly from my Resmed CPAP machine:

That said, the comparative results for the next night weren’t quite as consistent, although they were still “in the ballpark”:


What you’re looking for when comparing results, at least at first, is the AHI (Apnea-Hypopnea Index) number, which Resmed’s software alternately refers to as “Events/hr” in its summary screen. Here’s an overview description, from the Sleep Foundation website:
The Apnea-Hypopnea Index (AHI) quantifies the severity of sleep apnea by counting the number of apneas and hypopneas during sleep. Apneas are periods when a person stops breathing and hypopneas are instances where airflow is blocked, causing shallow breathing. Normal AHI is less than 5 events per hour, while severe AHI is more than 30 events per hour. The AHI guides healthcare professionals in their diagnosis and in determining effective treatment.
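As a concrete illustration of the definition just quoted, here is a hypothetical helper (not RingConn’s or ResMed’s actual calculation) that computes AHI and maps it to a severity band:

```python
def ahi(apneas: int, hypopneas: int, sleep_hours: float) -> float:
    """Apnea-Hypopnea Index: total breathing events per hour of sleep."""
    return (apneas + hypopneas) / sleep_hours

def severity(index: float) -> str:
    """Severity bands: <5 normal and >30 severe per the quote above,
    with the commonly cited 5/15/30 mild/moderate split in between."""
    if index < 5:
        return "normal"
    if index < 15:
        return "mild"
    if index < 30:
        return "moderate"
    return "severe"

# Example: 3 apneas + 12 hypopneas over 7.5 hours of sleep
print(ahi(3, 12, 7.5), severity(ahi(3, 12, 7.5)))  # 2.0 normal
```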
A key point to note here: I was using my CPAP machine both nights, which is why the AHI was so low in the first place. To that point, a sleep apnea-assessing smart ring is IMHO of limited-to-nonexistent value once you’ve been diagnosed and treatment is in process, since further apnea is suppressed (assuming your treatment regimen is effective, that is). Anyway, the treatment equipment is likely already reporting the data you need to assess effectiveness. Save the $100 in this case. Conversely, though, as an early-warning indication of potential apnea that you don’t yet realize you’re suffering from? Given the large number of people who, per study results I’ve seen, are sleep apnea-afflicted but don’t yet realize it, as well as how significantly apnea can compromise a person’s health, I’m gung-ho on RingConn’s smart ring for that scenario.
Oh, and before going on, here’s the report that RingConn’s app generates after it’s gotten at least three nights’ worth of sleep data point sets to comparatively assess:

Much of what follows echoes what I said about the Ultrahuman smart ring in my previous post and/or in last month’s initial overview piece. Nevertheless, for completeness’ sake:
- It (like others) misinterpreted keyboard presses and other finger-and-hand movements as steps, leading to over-measurement results, especially on my dominant right hand.
- While the Bluetooth LE connectivity extends battery life versus a “vanilla” Bluetooth alternative, it also notably reduces the ring-to-phone connection range. Practically speaking, this isn’t a huge deal since the data is viewed on the phone. Picking up the phone (assuming your ring is also on your body) will prompt a speedy close-proximity preparatory sync.
- Unlike Oura (and like Ultrahuman), RingConn provides membership-free full data capture and analysis capabilities. The company also sells optional extended warranties.
- And the app will also automatically sync with other health services, such as Google Fit and, more recently, its Android Health Connect successor. That said, I wonder (but haven’t yet tested to confirm or deny) what happens if, for example, I’m wearing both the ring and my Health Connect-cognizant (either directly or via the Health Sync intermediary) smartwatches from Garmin or Withings. Will the service endpoint be intelligent enough to recognize that it’s receiving concurrent data from two different sources and either discard one data set or reconcile them, rather than just adding them together?


And with that, a few hundred words shorter than its Ultrahuman predecessor (which in this case definitely isn’t a bad thing from a RingConn standpoint), I’m going to wrap up this write-up.
It turns out I’ve got two different Oura posts coming up; I ended up picking up a gently used Ring 4 to supplement its Gen3 Horizon precursor. Plus, two different smart ring teardowns, as well. So, stay tuned for those. And until then, please share your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- Can a smart ring make me an Ultrahuman being?
- The 2025 CES: Safety, Longevity and Interoperability Remain a Mess
- Smart ring allows wearer to “air-write” messages with a fingertip
An edge AI processor’s pivot to the open-source world

Edge AI, mired in fragmentation and a lack of broadly available toolchains, is inching toward open architectures and open-source hardware and software. This shift was apparent at Synaptics Tech Day on October 15, 2025, held at the company’s headquarters in San Jose, California.
In other words, some edge AI processors are moving away from proprietary, closed AI software and tooling toward open software and ecosystems to deliver AI applications at scale. Google’s collaboration with Synaptics embodies this open-source approach to edge processors, aiming to deliver AI intelligence at very low power levels.

Figure 1 Astra SL2610 processors provide multimodal AI compute for smart appliances, home and factory automation equipment, charging infrastructure, retail PoS terminals and scanners, and more. Source: Synaptics
Google, which built a mini-TPU ASIC for edge AI under the Coral brand back in 2017, subsequently built the Coral NPU as a four-way superscalar 32-bit RISC-V CPU. Google is hoping that edge AI silicon suppliers will start using this small, lightweight CPU as a consistent front-end to other execution units on an edge AI processor.
As part of this initiative, Google has open-sourced a compiler and software stack to port models from any ML framework onto the CPU. That allows silicon vendors like Synaptics to create an open-standards-based pipeline from the ML frameworks all the way down to the NPU front-end.
But the question is why RISC-V, especially when Synaptics’ SL2610 processor is built around Arm Cortex-A55, Cortex-M52 with Helium, and Mali GPU technologies. Synaptics managers say that the move to RISC-V is intended to reduce fragmentation in software stacks serving edge AI designs.
When asked about this, John Weil, head of processing at Synaptics, told EDN that many semiconductor suppliers are employing RISC-V cores, generally as assisting cores, and most people don’t know that they are even there. “In this case, it’s a much more performance-oriented RISC-V core to perform neural processing.”
Synaptics tie-up with Google
In January 2025, Synaptics announced it would integrate Google’s ML core with its Astra open-source software, combining AI-native hardware with open-source tooling to accelerate the development of context-aware devices.
Next, Synaptics introduced the Torq edge AI platform, which combines NPU architectures with open-source compilers to set a new standard in edge AI application development. Torq, leveraging an open-source IREE/MLIR compiler and runtime, has been critical in facilitating the deployment of Google’s RISC-V-based Coral open NPU in the edge AI processor Astra SL2610.

Figure 2 Torq, a combination of AI hardware and software, includes Google’s Coral NPU and Synaptics’ home-grown AI accelerator. Source: Synaptics
At Synaptics Tech Day, the company showcased the Astra SL2610 processor powering several edge AI applications. That included e-bikes, EV charging infrastructure, industrial-grade AI glasses, command-based speech recognition, and smart home automation.
Vikram Gupta, chief products officer at Synaptics, told EDN that when the company wanted to go broad, it decided that this processor would be AI native. “When we met with Google, it instantly resonated with us because they were working on Coral NPU, an open ML accelerator,” he said. “We also wanted to go open source as part of our AI-native processor story.”
Regarding Google’s interest in this collaboration, Gupta said that Google benefits because it has a silicon partner. “Google gets mindshare in the AI race while it’s prominent in the cloud as well as the edge AI.” Moreover, Google could bring multimodal capabilities to this tie-up to enable more context-aware user experiences, said Nina Turner, research director for enabling technologies and semiconductors at IDC.
Another critical goal of this silicon partnership is to confront fragmentation in the edge AI world. “Our take is that the only way to keep up with AI innovation at the edge is to be open,” said Weil of Synaptics. “While some edge AI suppliers want everything in their ecosystem, we are focused on how we knock down walled gardens.”
Regarding collaboration with Google, Weil added, “As an edge AI guy, I need to be working with guys working in the cloud, focused on the next big AI idea.” He summed up by saying that, for Synaptics, the challenge was making hardware that keeps up with the speed of AI while remaining open in architecture and source. “So, we took Google technology and matched it with ours.”
Open and collaborative
At a time when innovations in AI software and algorithms are far outpacing silicon advancements, an AI-native approach to edge IoT processing could be critical in adopting contextual LLMs for audio, voice, text, and video applications at the edge.
The launch of the Astra SL2610 processor, an AI-enabled system-on-chip (SoC) encompassing application processor-level as well as microcontroller-level parts, marks an important step in the availability of scalable, open systems for deploying real-world edge AI. These AI-native chips are expected to help create an ecosystem that will simplify development and unlock powerful new applications in the edge AI realm.
“We believe that the only way to keep up with AI innovation at the edge is to be open and collaborative,” Weil concluded.
Related Content
- It’s All About Edge AI, But Where’s the Revenue?
- How Edge AI Transforms IIoT and Enables Industry 5.0
- Hybrid system resolves edge AI’s on-chip memory conundrum
- Infineon Expands Edge AI Capabilities with Launch of DEEPCRAFT AI Suite
- The Future of the Edge: The Rising Tide for Better AI Performance, Scalability, and Security
Omnivision expands automotive image sensor portfolio

Omnivision expands its automotive portfolio with two new image sensors. The OX05C global shutter (GS) high dynamic range (HDR) sensor is a new addition to the company’s Nyxel near-infrared (NIR) family for in-cabin monitoring cameras, and the OX08D20 image sensor targets advanced driver-assistance systems (ADAS) and autonomous driving (AD) applications.
The OX05C represents the automotive industry’s first and only 5-megapixel (MP) back-side illuminated (BSI) GS HDR sensor for driver and occupant monitoring systems, according to Omnivision. It delivers extremely clear images of the entire cabin, enabling improved algorithm accuracy even in high-brightness conditions.
OX05C GS HDR image sensor (Source: Omnivision)
The 2.2-µm OX05C features Omnivision’s Nyxel NIR technology, claiming world-class quantum efficiency (QE) at the 940-nm NIR wavelength and improving driver and occupant monitoring capabilities in low-light conditions. The on-chip RGB-IR separation eliminates the need for a dedicated image signal processor and backend processing.
Compared with rolling-shutter HDR sensors, the GS HDR OX05C also avoids interference from other IR light sources in the cabin, Omnivision said, improving RGB image quality and enabling more capture schemes and functions in real applications.
Measuring 6.61 × 5.34 mm, the OX05C1S package is 30% smaller than its predecessor, the OX05B (7.94 × 6.34 mm), allowing greater design flexibility when placing cameras in the automotive cabin. OEMs can also use the same camera lens when upgrading from the OX05B to the newer OX05C, for a design and cost advantage.
In addition, integrated cybersecurity and support for simultaneous driver and occupant monitoring with a single camera reduce complexity, cost, and space, Omnivision said.
The sensor comes in Omnivision’s stacked a-CSP package, with a reconstructed wafer option for designers who need to customize their own package. The OX05C sensor is available in both color filter array RGB-IR and mono designs. Samples of the OX05C are currently available. Mass production starts in 2026.
In addition to the OX05C, Omnivision introduced the 8-MP OX08D20 automotive image sensor with TheiaCel technology for exterior automotive cameras. It delivers improvements in low-light ADAS and AD performance and is an upgrade to the OX08D10 sensor for exterior cameras.
OX08D20 automotive image sensor (Source: Omnivision)
The OX08D20 features the same benefits of the OX08D10, plus an innovative capture scheme developed in collaboration with Mobileye that reduces the motion blur of nearby objects while driving and improves low-light performance. It also upgrades to 60 frames per second to enable dual-use cameras, and includes updated cybersecurity to match the MIPI CSE 2.0 standard.
The image sensor features low power consumption and is housed in an a-CSP package that is 50% smaller than other exterior sensors in its class. The OX08D20 will be sampling in November 2025 and will enter mass production in the fourth quarter of 2026.
Illuminated tactile switches withstand reflow soldering

Littelfuse Inc. extends its K5V Series of illuminated tactile switches with the release of new K5V4 models including the gull-wing and 2.1-mm pin-in-paste (PIP) versions compatible with reflow soldering. These switches target a range of applications, such as data centers, network infrastructure, industrial equipment, and pro audio/video systems.
(Source: Littelfuse Inc.)
The K5V4 is the first long-travel, single-pole/double-throw (SPDT) illuminated tactile switch in a reflow-capable SMT package, Littelfuse said, filling a critical gap in the market. The switches enable direct SMT assembly for the first time, reduce production costs, support higher throughput, and improve end-product quality while maintaining durability and tactile performance, the company added.
The K5V4 switches are reflow soldering-compatible thanks to the use of a high-temperature polyarylate (PAR) material with a 250°C thermal deformation threshold, eliminating the need for silicone sleeves or special handling. They are suited for manufacturers transitioning from wave to reflow soldering processes.
Other features include SPDT contact configuration with normally-open and normally-closed options, a sharp tactile response with audible click and 4N operating force, and integrated high-brightness LEDs in a variety of colors and bi-color options.
For greater reliability, these switches provide a compact, dust-resistant design for reliable operation in dense boards, and gold-plated dome contacts for long-term contact performance. They are available in SMT (gull wing) and THT (PIP) versions for design flexibility.
The K5V tactile switches are currently available in tape and reel format, with quantities ranging from 1,000 to 2,000 units. Samples can be requested through authorized Littelfuse distributors.
The post Illuminated tactile switches withstand reflow soldering appeared first on EDN.
No more missed steps: Unlocking precision with closed-loop stepper control

Bipolar stepper motors provide precise position control while operating in an open loop. Industrial automation applications—such as robots and processing and packaging machinery—and consumer products—such as 3D printers and office equipment—take full advantage of the stepper’s inherent position retention. This eliminates the need for complex sensor technology, additional processing power, or elaborate control algorithms.
However, driving a stepper motor open-loop requires the motion profile to be errorless. Any glitch in which the stepper’s load abruptly changes results in step loss, which desynchronizes the stepper’s actual position from the application’s perceived position. In most cases, this loss of position tracking is problematic. In a label printer, for example, step loss could cause the print to be misaligned with the label, producing skewed labels.
This article describes a simple implementation that gives the stepper motor the ability to sense its position and actively correct any error that might accrue during actuation.
Design assumptions
For this article, we will assume that a bipolar stepper motor with 200 steps per revolution drives a mechanism responsible for opening and closing a flap or valve on a production line. To make the motion smooth, we will use a bipolar stepper driver with 1/8 microstepping (eight microsteps per full step), resulting in 1,600 step commands per full rotor revolution.
Fully opening or closing the mechanism requires multiple rotor turns; for simplicity, assume 10 full turns are needed. In this case, the controller must send 16,000 step commands in each direction to actuate the mechanism.
When the current is high enough to overcome any torque variation, the stepper moves accordingly and can fully open and close the control surface. In this scenario, the position is preserved. If steps are lost, however, the controller loses synchronization with the motor, and the actuation becomes compromised.
Newer technologies attempt to provide checks, such as stall detection, by measuring the motor winding’s back electromotive force (BEMF) when the applied revolving magnetic field crosses the zero-current magnitude. Stall detection only tells the application whether the motor is moving; it fails to report how many steps have been effectively lost. In cases like this, it’s worthwhile to explore closing the loop on the rotor position using sensing technology.
Sensor selection
In some cases, using simple limit switches—like magnetic, optical, or mechanical—might suffice to drive the stepper motor until the limits are met. However, there are plenty of cases where the available space does not allow the use of such switches. If a switch cannot be used, it might make sense to populate an optical shaft encoder (relative or absolute) at the motor’s back side shaft, but there is a high cost associated with these solutions.
An affordable solution to this dilemma is a contactless angular position sensor. This type of sensor pairs a readily available magnet with a precise and accurate semiconductor employing Hall sensors, which extracts the rotor’s position with as much as 15 bits of resolution. That means each rotor revolution can be encoded into as many as 2^15 = 32,768 units, or about 0.011 degrees per unit (360/32,768).
For this example, an 11.5-bit resolution was selected, which is sufficient to encode the 1,600 microsteps; 11.5 bits yields 2^11.5 ≈ 2,896.31 effective angle segments. A Hall-effect-based contactless sensor such as the MA732 provides absolute position encoding with 11.5 bits of effective resolution.
When coupled to a diametrically magnetized round magnet, the sensor is periodically sampled through its serial peripheral interface (SPI) port at 1-ms intervals (Figure 1). When a read command is issued, the sensor responds with a 16-bit word. The application uses the 16 bits worth of information, although the system’s accuracy is driven by the effective 11.5-bit resolution.

Figure 1 The Hall-effect sensor is connected to the MCU through the SPI ports. Source: Monolithic Power Systems
Power stage selection
Driving a bipolar stepper requires two full H-bridges. The two main implementations for driving bipolar stepper motors are a dual H-bridge power stage with a microcontroller unit (MCU) generating sine/cosine wave pairs, or a fully integrated step indexer engine with microstepping support. The MCU and dual H-bridge combination provides more flexibility in how the sine-wave currents are regulated, but it also increases complexity.
For this article, a fully integrated step indexer with as much as 1/16 microstepping was selected (Figure 2). The step indexer used here is the MP6602, which provides up to 4 A of drive current and can drive NEMA 17 and NEMA 23 bipolar stepper motors. Meanwhile, the MCU drives all control signals, communicates with the indexer through the SPI port, and samples the fault information.

Figure 2 The step indexer is connected to an MCU to drive the bipolar stepper motor. Source: Monolithic Power Systems
Final implementation
For a closed-loop stepper implementation, the sensor and power stage can be controlled by an off-the-shelf Arm Cortex-M4F MCU. The MCU communicates with both devices through a single SPI port with two chip selects, and an internal timer generates the steps. The board measures 1.35 in. × 1.35 in. and is small enough to fit behind a NEMA 17 stepper motor (Figure 3). This also allows the reference design to be used with larger motor frame sizes such as NEMA 23.

Figure 3 The PCB’s bottom side has the MA732 angle sensor. Source: Monolithic Power Systems
Figure 4 shows the motor assembly: Figure 4a (above) shows the assembly with a diametrically magnetized round magnet facing the MA732 sensor, and Figure 4b (below) shows the final solution.


Figure 4 Assemble the motor such that the housing is invisible. Source: Monolithic Power Systems
Absolute position and sensor overflow
Although the contactless magnetic sensor is an absolute position encoder, this is only true on a per-revolution basis. That is, throughout the rotor’s angular travel within each revolution, the sensor provides a 16-bit number that the MCU reads, which allows the firmware to know the rotor’s absolute position at any given time.
As the motor revolves, however, each new revolution is indistinguishable from the previous one. To track position across revolutions, the per-sample angular displacements are accumulated into a much larger variable, called Rotor_Angle_Absolute, that expresses the entire travel as an absolute position. This variable is a 32-bit signed integer.
If the motor moves forward, the variable is incremented, and vice versa. Assuming 16-bit sensor readings scaled to 1,600 microstep-sized units per revolution and a 1,000-rpm step rate, it would take about 22.37 hours (2^31/(1,600 × 1,000) ≈ 1,342 minutes) for the variable to overflow. The MCU must ensure that the sensor readings are added correctly, even as the rotor passes through the sensor’s overflow region. This absolute position correction must be executed whether the motor is rotating clockwise or counterclockwise; in other words, whether the sensor position is incrementing or decrementing.
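The overflow horizon can be reproduced directly; a quick Python check, assuming the 32-bit signed accumulator counts microstep-sized units (1,600 per revolution) at a 1,000-rpm step rate:

```python
INT32_MAX = 2 ** 31 - 1   # largest value of a 32-bit signed accumulator
counts_per_rev = 1600     # microstep-sized units per revolution
rpm = 1000                # assumed step rate

minutes = INT32_MAX / (counts_per_rev * rpm)
print(round(minutes / 60, 2))   # 22.37 hours
```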
Figure 5 shows how the angle position changes over time.

Figure 5 The angle position changes over time as the motor revolves. Source: Monolithic Power Systems
Figure 5 shows that the angular displacement (MA732_Angle_Delta, denoted as AD in the figure) is computed at periodic intervals (1 ms). During each sample, the previous read is stored in MA732_Angle_Prev (denoted as Prev Angle in the figure), and the new sample is stored in MA732_Angle_New (denoted as New Angle in the figure). MA732_Angle_Delta can be calculated with Equation 1:
MA732_Angle_Delta = MA732_Angle_New − MA732_Angle_Prev    (Equation 1)
The result of Equation 1 is added to MA732_Angle_Absolute. If the rotor moved clockwise (forward), the displacement is positive; if the motor moves counterclockwise (reverse), the displacement is negative.
A special consideration must be made during angle sensor overflows. If the sensor moves forward past the maximum of 0xFFFF (denoted as OvF+AD in Figure 5), or if the sensor decrements its position past 0x0000 (denoted as OvF-AD in Figure 5), Equation 1 can no longer be used. In both scenarios, the firmware logic chooses one of the following equations, depending on which case is being serviced.
If the angle displacement overflows when counting up and exceeds the maximum (OvF+AD), then MA732_Angle_Delta can be calculated with Equation 2:
MA732_Angle_Delta = (MA732_Angle_New + 0x10000) − MA732_Angle_Prev    (Equation 2)
If the angle displacement overflows when counting down and falls below the minimum (OvF-AD), then MA732_Angle_Delta can be calculated with Equation 3:
MA732_Angle_Delta = MA732_Angle_New − (MA732_Angle_Prev + 0x10000)    (Equation 3)
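The three cases collapse into a single signed-wraparound computation. A minimal Python sketch (variable names follow the article; it assumes the rotor moves less than half a revolution between 1-ms samples, which holds at the speeds discussed):

```python
def angle_delta(angle_new, angle_prev):
    """Signed displacement between two 16-bit sensor reads.

    Reducing the raw difference modulo 2^16 and mapping it into
    [-0x8000, 0x7FFF] selects Equation 1, 2, or 3 automatically.
    """
    delta = (angle_new - angle_prev) & 0xFFFF
    if delta >= 0x8000:        # reverse motion, or underflow past 0x0000
        delta -= 0x10000
    return delta

# Forward through the 0xFFFF -> 0x0000 overflow (Equation 2 case)
print(angle_delta(0x0005, 0xFFFB))   # 10
# Reverse through the 0x0000 -> 0xFFFF underflow (Equation 3 case)
print(angle_delta(0xFFFB, 0x0005))   # -10
```

Each result is then added to the running absolute-position accumulator, exactly as described above.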
Stepper motor: New frontiers
Using an off-the-shelf MCU, we can interface the stepper motor driver and the Hall-based angle sensor via an SPI port. The firmware can then continuously interrogate the position sensor and track the rotor position at all times. By comparing this position to a commanded position, the motor can be commutated to reach the commanded position in a timely fashion.
If an external force causes the motor to lose steps, the sensor information tracks how many steps were lost, which then allows the MCU to close the loop on position and successfully bring the stepper motor to the commanded position.
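To make the correction concrete, the lost-step count can be derived by scaling the accumulated sensor angle into microstep units and comparing it against the issued step count. A hedged Python sketch (names and scaling are illustrative, not taken from the reference design):

```python
COUNTS_PER_REV = 0x10000      # 16-bit sensor counts per revolution
MICROSTEPS_PER_REV = 1600     # 1/8 microstepping on a 200-step motor

def lost_steps(rotor_angle_absolute, commanded_microsteps):
    # Convert accumulated sensor counts into microstep units, then
    # compare against the number of step commands already issued.
    actual = rotor_angle_absolute * MICROSTEPS_PER_REV // COUNTS_PER_REV
    return commanded_microsteps - actual

# One commanded revolution, rotor only made it halfway: 800 steps lost
print(lost_steps(0x8000, 1600))   # 800
```

The controller would then issue that many extra step commands (or fewer, for a negative result) to re-converge on the commanded position.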
Although stepper motors are mostly used in open-loop applications, there are plenty of advantages to closing the loop on position. By employing cost-effective Hall-sensing technology and easy-to-use indexer-based stepper drivers, designers can add servo-like properties to their stepper-based applications.
Jose Quinones is senior application engineer at Monolithic Power Systems (MPS).
Related Content
- Stepper Motor Controller
- Stepper Motors: Care & Feeding
- Stepper Motor Controller Eliminates Need for Tuning
- Standard Step-Motor Driver Interface Limits Performance
- Why microstepping in stepper motors isn’t as good as you think
The post No more missed steps: Unlocking precision with closed-loop stepper control appeared first on EDN.
Program sequence monitoring using watchdog timers
WDT in safety standards
With the prevalence of microcontrollers (MCUs) as processing units in safety-related systems (SRS) comes the need for diagnostic measures that will ensure safe operation. IEC 61508-2 specifies self-test supported by hardware (one channel) as one of the recommended diagnostic techniques for processing units. This measure uses special hardware that increases speed and extends the scope of the failure detection, for instance, a watchdog timer (WDT) IC that cyclically monitors the output of a certain bit pattern from the MCU.
The basic functional safety (FS) standard IEC 61508-2 Annex A Table A.10 recommends several diagnostic techniques and measures to control hardware failures in the program sequences of digital devices. Such techniques include a watchdog with a separate time base with or without a time window, as well as a combination of temporal and logical monitoring of program sequences. While each of these has corresponding maximum claimable diagnostic coverage, all these techniques employ WDTs.
This article will show how to implement these diagnostic functions using WDTs. Furthermore, the article will provide insights into the differences of program sequence monitoring diagnostic measures in terms of operation and diagnostic coverage when implemented with ADI’s high-performance supervisory circuits with watchdog function.
Low diagnostic coverage
Part 2 of IEC 61508 describes simple watchdogs as external timing elements with a separate time base. Such devices allow the detection of program sequence failures in a computing device, such as an MCU, within a specified interval. This is done through a mechanism that allows either:
- The MCU to issue a signal that resets the watchdog before the timeout is reached, or
- The watchdog to issue a reset signal to the MCU once the timeout period is reached
The first case occurs when the program sequence is running smoothly, while the second happens when it is not.
Figure 1a shows an example of the watchdog implementation with a separate time base but without a time window through the MAX6814. Notably, MCUs usually have an internal WDT, but it cannot be solely relied on to detect a fault if it is part of the defective MCU, which will be an issue considering common cause failures (CCF).
To address such CCF concerns, a separate WDT is used to ensure the MCU is placed in reset [1, 2]. Through a flowchart, Figure 1b illustrates the behavior of the WDT as embedded in the MCU’s program execution. Before the flow starts, it’s important to set the watchdog timeout period or the WDT’s maximum reset interval. When such a period or interval is defined, the WDT will run upon execution of the program. The MCU must be able to send a signal to the MAX6814’s WDI pin before it reaches timeout, as the device will issue a reset signal to the MCU if the timeout period is reached. When the MCU resets, the system will be placed into a safe state.
Figure 1 Simple watchdog operation showing (a) an example of the watchdog implementation with a separate time base but without a time window and (b) the behavior of the WDT as embedded in the MCU’s program execution. Source: Analog Devices
Such a WDT’s timeout period will capture program sequence issues; for example, a program sequence gets stuck in a loop, or an interrupt service routine does not return in time. For instance, only 5 of the 10 subroutines meant to run on every loop of the software might be executed.
However, the WDT’s timeout period will not cover other program sequence issues—whether execution of the program took longer or shorter than expected, or whether the sequence of the program sections is correctly executed. This can be solved by the next type of WDT.
Medium diagnostic coverage
Because a separate time window allows the detection of both excessive delays and premature execution, windowed WDTs prohibit the MCU from responding later or earlier than the WDT’s open window. This is also referred to as a valid window specification. Compared to simple watchdogs, this guarantees that all subroutines are executed by the program in a timely manner; otherwise, the MCU is asserted into reset [3].
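The difference between a plain timeout and a valid window is easy to model. A short Python sketch (the interval values are illustrative assumptions, not device parameters):

```python
def first_bad_kick(intervals_ms, t_min_ms, t_max_ms):
    """Index of the first kick interval outside the valid window, or None.

    A windowed watchdog rejects kicks earlier than t_min (premature)
    or later than t_max (timeout); a simple watchdog enforces only t_max.
    """
    for i, dt in enumerate(intervals_ms):
        if not (t_min_ms <= dt <= t_max_ms):
            return i
    return None

# The 12-ms kick is premature: caught by the window, missed by a simple WDT
print(first_bad_kick([50, 52, 12, 49], 20, 80))   # 2
```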
Figure 2 shows an example implementation of program sequence monitoring using the MAX6753. It comes with a windowed watchdog with external-capacitor-configurable watchdog periods.

Figure 2 Sample implementation of a windowed watchdog operation with external-capacitor-configurable watchdog periods.
Figure 3, on the other hand, shows another implementation using the MAX42500, whose watchdog time settings can be configured through I2C—effectively reducing the number of external components. The I2C interface also allows fault coverage to be increased through a packet error checking (PEC) byte, as shown in Figure 4. The PEC byte adds diagnostic coverage against I2C communication-related failures such as bus errors, stuck-bus conditions, timing problems, and improper configuration.
Figure 3 Another implementation: windowed watchdog through I2C, reducing the number of external components compared to Figure 2. Source: Analog Devices
Figure 4 PEC byte coverage to I2C interface failures, such as bus errors, stuck-bus conditions, timing problems, and improper configuration. Source: Analog Devices
While watchdogs with a separate time base and time window offer higher diagnostic coverage compared to simple WDTs, they still cannot capture issues concerning whether the software’s subroutines have been executed in the correct sequence. This is what the next type of diagnostic technique addresses.
High diagnostic coverage
Diagnostic techniques involving the combination of temporal and logical monitoring provide high diagnostic coverage for program sequences according to IEC 61508-2. One implementation of this technique involves a windowed watchdog and a capability to check whether the program sequence has been executed in the correct order.
An example can be visualized when the circuit in Figure 2 is combined with the sequence in Figure 5, where each of the MCU’s program routines employs a unique combination of characters and digits. These unique combinations are placed in an array each time a routine is executed. After the last routine, the MCU kicks, or sends a reset signal to, the watchdog only if all words are correctly set in the array.

Figure 5 Checking the correct logic of the sequence through markers. Source: Analog Devices
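The marker scheme can be sketched in a few lines; the marker strings here are hypothetical placeholders, not values from the figure:

```python
EXPECTED_MARKERS = ["R1A", "R2B", "R3C", "R4D"]  # one unique word per routine

def should_kick_watchdog(markers_logged):
    # Kick (refresh) the watchdog only if every routine logged its
    # marker in the expected order; any wrong, missing, or duplicated
    # entry withholds the kick, so the windowed WDT resets the MCU.
    return markers_logged == EXPECTED_MARKERS

print(should_kick_watchdog(["R1A", "R2B", "R3C", "R4D"]))  # True
print(should_kick_watchdog(["R1A", "R3C", "R2B", "R4D"]))  # False
```

This adds logical sequence checking on top of the temporal check the windowed watchdog already performs.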
Highest diagnostic coverage
In some systems, more diagnostic coverage may be required to capture failures of the MCU, and simply sending back a pulse within a windowed time may not be enough. In such cases, it can be beneficial to require the MCU to perform a complex task, such as a calculation, to ensure that it is fully operational. This is where the MAX42500’s challenge/response watchdog comes into play.
In this watchdog mode, there’s a key-value register in the IC that must be read as the starting point of the challenge message. The MCU must use this message to calculate the appropriate response to send back to the watchdog IC, ensuring the watchdog is kicked within the valid window. This type of challenge/response watchdog operates similarly to a simple windowed one, except that the key register is updated rather than the watchdog being refreshed with a rising edge. This is shown in Figure 6. Notably, for the MAX42500’s WDT, the watchdog input is implemented using the I2C, while the watchdog output is the output reset pin.
Figure 6 A challenge/response windowed watchdog example where the MCU reads the challenge message in the IC and calculates an appropriate response to be sent back to the watchdog IC to allow it to be kicked within the valid window. Source: Analog Devices
The MAX42500 contains a linear-feedback shift register (LFSR) with the polynomial x⁸ + x⁶ + x⁵ + x⁴ + 1, which shifts all bits upward toward the most significant bit (MSB) and inserts the calculated feedback bit as the new least significant bit (LSB). The MCU must compute the response in the same manner and return it to the MAX42500’s register through I2C. Notably, this polynomial is primitive and, at the same time, a maximal-length feedback polynomial for 8 bits. This ensures that all nonzero bit values (1 to 255) are generated by the polynomial and that the order of the numbers is pseudo-random [4][5].
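An 8-bit maximal-length LFSR with taps at bits 8, 6, 5, and 4 (from the polynomial above) can be sketched as follows; this is an illustrative model of the shifting rule described in the text, not the MAX42500’s internal register implementation:

```python
def lfsr_step(state):
    # Taps from x^8 + x^6 + x^5 + x^4 + 1: XOR bits 8, 6, 5, 4
    # (1-indexed), shift toward the MSB, insert feedback as new LSB.
    fb = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
    return ((state << 1) | fb) & 0xFF

# A maximal-length polynomial visits all 255 nonzero states
state, seen = 0x01, set()
for _ in range(255):
    state = lfsr_step(state)
    seen.add(state)
print(len(seen), state == 0x01)   # 255 True
```

Because the sequence is maximal, the watchdog can predict the expected response for any challenge, and a healthy MCU must reproduce the same step to pass.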
Such a challenge/response can offer more coverage than the combination of temporal and logical program sequence monitoring, as it shows that the MCU can still do actual calculations. This is as opposed to an MCU just implementing decision-making routines, such as only checking whether the array of words is correct before issuing a signal to reset the watchdog.
Diagnostic coverage claims
The basic functional safety standard defines the maximum claimable diagnostic coverage for each diagnostic measure recommended per block in an SRS. Table 1 lists the measures for program sequence monitoring according to IEC 61508, all of which utilize WDTs.
| Diagnostic Technique/Measure | Maximum DC Considered Achievable |
| Watchdog with a separate time base without a time window | Low |
| Watchdog with a separate time base and time window | Medium |
| Combination of temporal and logical monitoring of program sequences | High |
Table 1 Watchdog program sequence according to IEC 61508-2 Annex A Table A.10.
Furthermore, with the existence of different implementations that may not be covered in the standard, a claimed diagnostic coverage can only be validated through fault insertion testing.
Diagnostic measures using WDTs
This article enumerated three types of diagnostic measures that use WDTs, as recommended by IEC 61508-2, to address failures in program sequence. The first type of watchdog, which has a separate time base but no time window, can be implemented using a simple watchdog. This diagnostic measure can only claim low diagnostic coverage.
On the other hand, the second type of watchdog, which has both a separate time base and a separate time window, can be implemented by a windowed watchdog. This measure can claim a medium diagnostic coverage.
To improve diagnostic coverage to high, one can employ logical monitoring aside from the usual temporal monitoring using watchdogs. A challenge/response windowed watchdog architecture can further increase diagnostic coverage against program sequence failures with its capability to check an MCU’s computational ability.
Bryan Angelo Borres is a TÜV-certified functional safety engineer who focuses on industrial functional safety. As a senior power applications engineer, he helps component designers and system integrators design functionally safe power products that comply with industrial functional safety standards such as IEC 61508. Bryan is a member of the IEC National Committee of the Philippines to IEC TC65/SC65A and the IEEE Functional Safety Standards Committee. He also has a postgraduate diploma in power electronics and more than seven years of extensive experience in designing efficient and robust power electronics systems.
Christopher Macatangay is a senior product applications engineer supporting the industrial power product line. Since joining Analog Devices in 2015, he has played a key role in enabling customer success through technical support, system validation, and application development for analog and mixed-signal products. Christopher spent six years prior to ADI as a test development engineer at a power supply company, where he focused on the design and implementation of automated test solutions for high-reliability products.
References
- “IEC 61508 (All Parts), Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems.” International Electrotechnical Commission, 2010.
- “Top Misunderstandings About Functional Safety.” TÜV SÜD.
- “Basics of Windowed Watchdog Operation.” Analog Devices, Inc., December.
- “Pseudo Random Number Generation Using Linear Feedback Shift Registers.” Maxim, June 2010.
- Mohammed Abdul Samad AL-khatib and Auqib Hamid Lone, “Acoustic Lightweight Pseudo Random Number Generator based on Cryptographically Secure LFSR.” International Journal of Computer Network and Information Security, Vol. 2, February.
Related Content
- Watchdog versus the truck
- Need a watchdog for improved system fault tolerance?
- WDT assumes varied roles
The post Program sequence monitoring using watchdog timers appeared first on EDN.
Fast, compact scopes reveal subtle signal shifts

Covering bandwidths from 100 MHz to 1 GHz, R&S MXO 3 oscilloscopes capture up to 4.5 million waveforms/s with 99% real-time visibility. According to R&S, the 4- and 8-channel models deliver responsive, precise performance in a space-saving form factor at a more accessible price point.

The MXO 3 offers hardware-accelerated zone triggering at up to 600,000 events/s, 50,000 FFTs/s, and 600,000 math operations/s, with a minimum trigger re-arm time of just 21 ns. It resolves small signal changes alongside larger ones with 12-bit vertical resolution at all sample rates, enhanced 18-bit HD mode, 125 Mpoints of standard memory, and a maximum sample rate of 5 Gsamples/s.
Both the 4- and 8-channel scopes come in a portable 5U design, weighing only about 4 kg, and fit easily on benches, even crowded ones. Each includes an 11.6-in. full-HD display with a capacitive touchscreen and intuitive user interface. VESA mounting compatibility allows additional flexibility in engineering environments.
Prices for the MXO 3 oscilloscopes start at just over $6,000.
The post Fast, compact scopes reveal subtle signal shifts appeared first on EDN.
Inductive sensors broaden motion-control options

Three magnet-free inductive position sensors from Renesas provide a cost-effective alternative to magnetic and optical encoders. With different coil architectures, the ICs address a wide range of applications in robotics, medical devices, smart buildings, home appliances, and motor control.

The dual-coil RAA2P3226 uses a Vernier architecture to deliver up to 19-bit resolution and 0.01° absolute accuracy, providing true power-on position feedback for precision robotic joints. The single-coil RAA2P3200 prioritizes high-speed, low-latency operation for motor commutation in e-bikes and cobots, with built-in protection for robust industrial use. Also using single-coil sensing, the RAA2P4200 offers a compact, cost-efficient option for low-speed applications such as service robots, power tools, and medical devices.
All three sensors share a common inductive sensing core that enables accurate, contactless position measurement in harsh industrial environments. Each device supports rotary on-axis, off-axis, arc, and linear configurations, and includes automatic gain control to compensate for air-gap variations. A 16-point linearization feature enhances accuracy.
The sensors are now in volume production, supported by a web-based design tool that automates coil layout, simulation, and tuning.
The post Inductive sensors broaden motion-control options appeared first on EDN.
AOS devices power 800-VDC AI racks

GaN and SiC power semiconductors from AOS support NVIDIA’s 800-VDC power architecture for next-gen AI infrastructure, enabling data centers to deploy megawatt-scale racks for rapidly growing workloads. Moving from conventional 54-V distribution to 800 VDC reduces conversion steps, boosting efficiency, cutting copper use, and improving reliability.

The company’s wide-bandgap semiconductors are well-suited for the power conversion stages in AI factory 800‑VDC architectures. Key device roles include:
- High-Voltage Conversion: SiC devices (Gen3 AOM020V120X3, topside-cooled AOGT020V120X2Q) handle high voltages with low losses, supporting power sidecars or single-step conversion from 13.8 kV AC to 800 VDC. This simplifies the power chain and improves efficiency.
- High-Density DC/DC Conversion: 650-V GaN FETs (AOGT035V65GA1) and 100-V GaN FETs (AOFG018V10GA1) convert 800 VDC to GPU voltages at high frequency. Smaller, lighter converters free rack space for compute resources and enhance cooling.
- Packaging Flexibility: 80-V and 100-V stacked-die MOSFETs (AOPL68801) and 100-V GaN FETs share a common footprint, letting designers balance cost and efficiency in secondary LLC stages and 54-V to 12-V bus converters. Stacked-die packages boost secondary-side power density.
AOS power technologies help realize the advantages of 800‑VDC architectures, with up to 5% higher efficiency and 45% less copper. They also reduce maintenance and cooling costs.
The post AOS devices power 800-VDC AI racks appeared first on EDN.
Optical Tx tests ensure robust in-vehicle networks

Keysight’s AE6980T Optical Automotive Ethernet Transmitter Test Software qualifies optical transmitters in next-gen nGBASE-AU PHYs for IEEE 802.3cz compliance. The standard defines optical automotive Ethernet (2.5–50 Gbps) over multimode fiber, providing low-latency, EMI-resistant links with high bandwidth and lighter cabling. Keysight’s platform helps enable faster, more reliable in-vehicle networks for software-defined and autonomous vehicles.

Paired with Keysight’s DCA-M sampling oscilloscope and FlexDCA software, the AE6980T offers Transmitter Distortion Figure of Merit (TDFOM) and TDFOM-assisted measurements, essential for evaluating optical signal quality. Device debugging is simplified through detailed margin and eye-quality evaluations. The compliance application also automates complex test setups and generates HTML reports showing how devices pass or fail against defined limits.
AE6980T software provides full compliance with IEEE 802.3cz-2023, Amendment 7, and Open Alliance TC7 test house specifications. It currently supports 10-Gbps data rates, with 25 Gbps planned for the future.
For more information about Keysight in-vehicle network test solutions and their automotive use cases, visit Streamline In-Vehicle Networking.
The post Optical Tx tests ensure robust in-vehicle networks appeared first on EDN.
Gate drivers tackle 220-V GaN designs

Two half-bridge GaN gate drivers from ST integrate a bootstrap diode and linear regulators to generate high- and low-side 6-V gate signals. The STDRIVEG210 and STDRIVEG211 target systems powered from industrial or telecom bus voltages, 72-V battery systems, and 110-V AC line-powered equipment.

The high-side driver of each device withstands rail voltages up to 220 V and is easily supplied through the embedded bootstrap diode. Separate gate-drive paths can sink 2.4 A and source 1.0 A, ensuring fast switching transitions and straightforward dV/dt tuning. Both devices provide short propagation delays with 10-ns matching for low dead-time operation.
ST’s gate drivers support a broad range of power-conversion applications, including power supplies, chargers, solar systems, lighting, and USB-C sources. The STDRIVEG210 works with both resonant and hard-switching topologies, offering a 300-ns startup time that minimizes wake-up delays in burst-mode operation. The STDRIVEG211 adds overcurrent detection and smart shutdown functions for motor drives in tools, e-bikes, pumps, servos, and class-D audio systems.
Now in production, the STDRIVEG210 and STDRIVEG211 come in 5×4-mm, 18-pin QFN packages. Prices start at $1.22 each in quantities of 1000 units. Evaluation boards are also available.
The post Gate drivers tackle 220-V GaN designs appeared first on EDN.
Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive

A month and a few days ago, Apple dedicated an in-person event (albeit with the usual pre-recorded presentations) to launching its latest mainstream and Pro A19 SoCs and the various iPhone 17s containing them, along with associated smart watch and earbuds upgrades. And at the end of my subsequent coverage of Amazon and Google’s in-person events, I alluded to additional Apple announcements that, judging from both leaks (some even straight from the FCC) and historical precedents, might still be on the way.
Well, earlier today (as I write these words on October 15), at least some of those additional announcements just arrived, in the form of the new baseline M5 SoC and the various upgraded systems containing it. But this time, again following historical precedent, they were delivered only in press release form. Any conclusions you might draw as to the relative importance within Apple of smartphones versus other aspects of the overall product line are…well…

Looking at the historical trends of M-series SoC announcements, you’ll see that the initial >1.5-year latency between the baseline M1 (November 2020) and M2 (June 2022) chips subsequently shrank to a yearly (plus or minus a few months) cadence. The M4 came out last May, but the M5 hadn’t yet arrived this year, so I was assuming we’d see it soon; otherwise, its lingering absence would likely be reflective of troubles within Apple’s chip design team and/or longstanding foundry partner TSMC. And indeed, the M5 has finally shown up. But my concerns about development and/or production troubles still aren’t completely alleviated.

Let’s parse through the press release.
Built using third-generation 3-nanometer technology…
This marks the third consecutive generation of M-series CPUs manufactured on a 3-nm litho process (at least for the baseline M5…I’ll delve into higher-end variants next). Consider this in light of Wikipedia’s note that TSMC began risk production on its first 2 nm process mid-last year and was originally scheduled to be in mass production on 2 nm in “2H 2025”. Admittedly, there are 2.5 more months to go until 2025 is over, but Apple would have had to make its process-choice decision for the M5 many months (if not several years) in the past.
Consider, too, that the larger die size Pro and Max (and potentially also Ultra) variants of the M5 haven’t yet arrived. This delay isn’t without precedent; there was a nearly six-month latency between the baseline M4 and its Pro and Max variants, for example. That said, the M4 had shown up in early May, with the Pro and Max following in late October, so they all still arrived in 2024. And here’s an even more notable contrast: all three variants of the M3 were launched concurrently in late October 2023. Consider all of this in the light of persistent rumors that M5 Pro- and Max-based systems may not show up until spring-or-later 2026.
M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4.
Note that these Neural Accelerators are presumably different than those in the dedicated 16-core Neural Engine. The latter historically garnered the bulk of the AI-related press release “ink”, but this time it’s limited to terse “improved” and “faster” descriptions. What does this tell me?
- “Neural Accelerator” is likely a generic term reflective of AI-tailored shader and other functional block enhancements, analogous to the increasingly AI-optimized capabilities of NVIDIA’s various GPU generations.
- The Neural Engine, conversely, is (again, I’m guessing) largely unchanged here from the one in the M4 series, instead indirectly benefiting from a performance standpoint due to the boosted overall SoC-to-external memory bandwidth.
M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.
Core count proportions and totals both match those of the M4. Aside from potential “Neural Accelerator” tweaks such as hardware-accelerated instruction set additions (à la Intel’s MMX and SSE), I suspect they’re largely the same as the prior generation, with any performance uplift resulting from overall external memory bandwidth improvements. Speaking of which…
M5 also features…a nearly 30 percent increase in unified memory bandwidth to 153GB/s.
And later…
M5 offers unified memory bandwidth of 153GB/s, providing a nearly 30 percent increase over M4 and more than 2x over M1. The unified memory architecture enables the entire chip to access a large single pool of memory, which allows MacBook Pro, iPad Pro, and Apple Vision Pro to run larger AI models completely on device. It fuels the faster CPU, GPU, and Neural Engine as well, offering higher multithreaded performance in apps, faster graphics performance in creative apps and games, and faster AI performance running models on the Neural Accelerators in the GPU or the Neural Engine.
The enhanced memory controller is, I suspect, the nexus of overall M4-to-M5 advancements, as well as explaining why Apple’s still able to cost-effectively (i.e., without exploding the total transistor count budget) fabricate the new chip on a legacy 3-nm lithography. How did the company achieve this bandwidth boost? While an even wider bus width than that used with the M4 might conceptually provide at least part of the answer, it’d also both balloon the required SoC pin count and complicate the possible total memory capacity increments. I therefore suspect a simpler approach is at play. The M4 used 7500 Mbps LPDDR5X SDRAM, while the M4 Pro and Max leveraged the faster 8533 Mbps LPDDR5X speed bin. But if you look at Samsung’s website (for example), you’ll see an even faster 9600 Mbps speed bin listed. 9600 Mbps is 28% more than 7500 Mbps…voila, there’s your “nearly 30 percent increase”.
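The arithmetic behind that guess can be checked in a couple of lines; the speed bins are the published figures quoted above, while the bus width is inferred from the quoted bandwidth, not an Apple specification:

```python
# Speed-bin increase: 7500 -> 9600 Mbps per pin
old_rate, new_rate = 7500, 9600
increase = (new_rate - old_rate) / old_rate   # the "nearly 30 percent"
print(f"speed-bin increase: {increase:.0%}")

# Bus width implied by 153 GB/s at 9600 Mbps per pin (inference, not a spec)
bus_bits = 153e9 * 8 / 9600e6
print(f"implied bus width: {bus_bits:.1f} bits")
```

The implied width works out to roughly 128 bits, consistent with a same-width bus simply running a faster speed bin.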
There’s one other specification, this time not found in the SoC press release but instead in the announcement for one of the M5-based systems, that I’d like to highlight:
…up to 2x faster storage read and write speeds…
My guess here is that Apple has done a proprietary (or not)-interface equivalent to the industry-standard PCI Express 4.x-to-5.x and UFS 4.x-to-5.x evolutions, which also tout doubled peak transfer rate speeds.
Speaking of speeds…keep in mind when reading about SoC performance claims that they’re based on the chip running at its peak possible clock cadence, not to mention when outfitted with maximum available core counts. An especially power consumption-sensitive tablet computer, for example, might clock-throttle the processor compared to the SoC equivalent in a mobile or (especially) desktop computer. Yield-maximization (translating into cost-minimization) “binning” aspirations are another reason why the SoC in a particular system configuration may not perform to the same level as a processor-focused press release might otherwise suggest. Such schemes are particularly easy for someone like Apple—who doesn’t publish clock speeds anyway—to accomplish.
And speaking of cost minimization, reducing the guaranteed-functional core counts on a chip can significantly boost usable silicon yield, too. To wit, about those M5-based systems…
11” and 13” iPad Pros
Last May’s M4 unveil marked the first time that an iPad, versus a computer, was the initial system to receive a new M-series processor generation. More generally, the fifth-gen iPad Pro introduced in April 2021 was the first iPad to transition from Apple’s A-series SoCs to the M-series (the M1, to be precise). This was significant because, up to that point, M-series chips had been positioned exclusively for computers, with A-series processors for iPhones and iPads.
This time, both the 11” and 13” iPad Pro get the M5, albeit with inconsistent core counts (and RAM allocations, for that matter) depending on the flash memory storage capacity and resultant price tag. From 9to5Mac’s coverage:
- 256GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
- 512GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
- 1TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
- 2TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
It bears noting that the 12 GByte baseline capacity is 4 GBytes above what baseline M4 iPad Pros came with a year-plus ago. Also, the disabled CPU core in the lower-end variants is one of the four performance cores; CPU efficiency core counts are the same across all models, as are—a pleasant surprise given historical precedents and a likely reflection of TSMC’s process maturity—the graphics core counts. And for the first time, a cellular-equipped iPad has switched from a Qualcomm modem to Apple’s own: the newest C1X, to be precise, along with the N1 for wireless communications, both of which we heard about for the first time just a month ago.
A brief aside: speaking of A-series to M-series iPad Pro transitions, mine is a second-generation 11” model (one of the fourth-generation iPad Pros) dating from March 2020 and based on the A12Z Bionic processor. It’s still running great, but I’ll bet Apple will drop software support for it soon (I’m frankly surprised that it survived this year’s iPadOS 26 cut). My wife and I have a wedding anniversary next month. Then there’s Christmas. And my 60th birthday next May. So, if you’re reading this, honey…

The 14” MacBook Pro
This one was not-so-subtly foreshadowed by Apple’s marketing VP just yesterday. The big claim here, aside from the inevitable memory bandwidth-induced performance-boost predictions, is “phenomenal battery life of up to 24 hours” (your mileage may vary, of course). And it bears noting that, in today’s tariff-rife era, the $1599 entry-level pricing is unchanged from last year.
The Vision Pro
The underlying rationale for the performance boost is more obvious here; the first-generation model teased in June 2023 with sales commencing the following February was based on the three-generations-older M2 SoC. That said, given the rampant rumors that Apple has redirected its ongoing development efforts to smart glasses, I wonder how long we’ll be stuck with this second-generation evolutionary tweak of the VR platform. A redesigned headband promises a more comfortable wearing experience. Apple will also start selling accessories from Logitech (the Muse pencil, available now) and Sony (the PlayStation VR2 Sense controller, next month).
Anything else?
I should note, by the way, that the Beats Powerbeats Fit earbuds that I mentioned a month back, which had been teased over YouTube and elsewhere but were MIA at Apple’s event, were finally released at the end of September. And on that note, other products (some currently with evaporating inventories at retail, another common tipoff that a next-generation device is en route) are rumored candidates for near-future launch:
- Next-gen Apple TV 4K
- HomePod mini 2
- AirTag 2
- One (or multiple) new Apple Studio Display(s)
- (???)
We shall see. Until next time, I welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Amazon and Google: Can you AI-upgrade the smart home while being frugal?
- The transition to Apple silicon Arm-based computers
- Apple’s 2022 WWDC: Generational evolution and dramatic obsolescence
- Apple’s Spring 2024: In-person announcements no more?
- Apple’s “October Surprise”: the M3 SoC Family and the A17 Bionic Reprise
- Apple’s 2H 2025 announcements: Tariff-touched but not bound, at least for this round
The post Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive appeared first on EDN.
TI launches power management devices for AI computing

Texas Instruments Inc. (TI) announced several power management devices and a reference design to help companies meet AI computing demands and scale power management architectures from 12 V to 48 V to 800 VDC. These products include a dual-phase smart power stage, a dual-phase smart power module for lateral power delivery, a gallium nitride (GaN) intermediate bus converter (IBC), and a 30-kW AI server power supply unit reference design.
“Data centers are very complex systems and they’re running very power-intensive workloads that demand a perfect balance of multiple critical factors,” said Chris Suchoski, general manager of TI’s data center systems engineering and marketing team. “Most important are power density, performance, safety, grid-to-gate efficiency, reliability, and robustness. These factors are particularly essential in developing next-generation, AI purpose-driven data centers, which are more power-hungry and critical today than ever before.”
(Source: Texas Instruments Inc.)
Suchoski describes grid-to-gate as the complete power path from the AC utility grid to the processor gates in the AI compute servers. “Throughout this path, it’s critical to maximize your efficiency and power density. We can help improve overall energy efficiency from the original power source to the computational workload,” he said.
TI is focused on helping customers improve efficiency, density, and security at every stage of the data center power chain by combining semiconductor innovation with system-level power infrastructure, allowing them to achieve high efficiency and high density, Suchoski said.
Power density and efficiency improvements
TI’s power conversion products for data centers address the need for increased power density and efficiency across the full 48-V power architecture for AI data centers. These include input power protection, 48-V DC/DC conversion, and high-current DC/DC conversion for the AI processor core and side rails. TI’s newest power management devices target these next-generation AI infrastructures.
One of the trends in the market is a move from single-phase to dual-phase power stages that enable higher current density for the multi-phase buck voltage regulators that power these AI processors, said Pradeep Shenoy, technologist for TI’s data center systems engineering and marketing team.
The dual-phase power stage has very high current capability, 200-A peak, Shenoy said, and comes in a very small, thermally enhanced 5 × 5-mm package with top-side cooling, enabling a very efficient and reliable supply in a small area.
TI claims the CSD965203B dual-phase power stage is the highest-peak-power-density power stage on the market, with 100 A of peak current per phase, combining two power phases in a 5 × 5-mm quad-flat no-lead package. With this device, designers can increase phase count and power delivery across a small printed-circuit-board area, improving efficiency and performance.
Another related trend is the move to dual-phase power modules, Shenoy said. “These power modules combine the power stages with the inductors, all in a compact form factor.”
The dual-phase power module co-packages the power stages with other components on the bottom and the inductor on the top, and it offers both trans-inductor voltage regulator (TLVR) and non-TLVR options, he added. “They help improve the overall power density and current density of the solution with over a 2× reduction in size compared with discrete solutions.”
The CSDM65295 dual-phase power module delivers up to 180 A of peak output current in a 9 × 10 × 5-mm package. The module integrates two power stages and two inductors with TLVR options while maintaining high efficiency and reliable operation.
The GaN-based IBC achieves over 1.5 kW of output power with over 97.5% peak efficiency, and it also enables regulated output and active current sharing, Shenoy said. “This is important because as we see the power consumption and power loads are increasing in these data centers, we need to be able to parallel more of these IBCs, and so the current sharing helps make that very scalable and easy to use.”
The LMM104RM0 GaN converter module offers over 97.5% input-to-output power conversion efficiency and high light-load efficiency to enable active current sharing between multiple modules. It can deliver up to 1.6 kW of output power in a quarter-brick (58.4 × 36.8-mm) form factor.
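Those efficiency and form-factor figures imply a nontrivial heat load. A quick calculation from the quoted numbers (the footprint arithmetic simply uses the quarter-brick dimensions given above):

```python
# Dissipation implied by 1.6 kW out at 97.5% peak efficiency
P_OUT = 1600.0   # W, from the TI brief
EFF = 0.975      # peak input-to-output efficiency, from the TI brief

p_in = P_OUT / EFF
p_loss = p_in - P_OUT    # power that must be removed as heat
area_cm2 = 5.84 * 3.68   # quarter-brick footprint, cm^2
print(f"loss ≈ {p_loss:.0f} W over ≈ {area_cm2:.0f} cm² of board area")
```

That is roughly 40 W of heat in about 21 cm², which is why thermal design and current sharing across paralleled modules matter at these densities.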
TI also introduced a 30-kW dual-stage power supply reference design for AI servers that features a three-phase, three-level flying-capacitor power-factor-correction converter paired with dual delta-delta three-phase inductor-inductor-capacitor converters. The power supply is configurable as a single 800-V output or separate output supplies.
30-kW HVDC AI data center reference design (Source: Texas Instruments Inc.)
TI also announced a white paper, “Power delivery trade-offs when preparing for the next wave of AI computing growth,” and its collaboration with Nvidia to develop power management devices to support 800-VDC power architectures.
The solutions will be on display at Open Compute Summit (OCP), Oct. 13–16, in San Jose, California. TI is exhibiting at Booth #C17. The company will also participate in technology sessions, including the OCP Global Summit Breakout Session and OCP Future Technologies Symposium.
The post TI launches power management devices for AI computing appeared first on EDN.
100-V GaN transistors meet automotive standard

Infineon Technologies AG unveils its first gallium nitride (GaN) transistor family qualified to the Automotive Electronics Council (AEC) standard for automotive applications. The new CoolGaN automotive transistor 100-V G1 family, including high-voltage (HV) CoolGaN automotive transistors and bidirectional switches, meets AEC-Q101.
(Source: Infineon Technologies AG)
This supports Infineon’s commitment to provide automotive solutions ranging from low-voltage infotainment systems, addressed by the new 100-V GaN transistors, to future HV products for onboard chargers and traction inverters. “Our 100-V GaN auto transistor solutions and the upcoming portfolio extension into the high-voltage range are an important milestone in the development of energy-efficient and reliable power transistors for automotive applications,” said Johannes Schoiswohl, Infineon’s head of the GaN business line, in a statement.
The new devices include the IGC033S10S1Q CoolGaN automotive transistor 100 V G1 in a 3 × 5-mm PQFN package, and the IGB110S10S1Q CoolGaN transistor 100 V G1 in a 3 × 3-mm PQFN. The IGC033S10S1Q features an Rds(on) of 3.3 mΩ and the IGB110S10S1Q has an Rds(on) of 11 mΩ. Other features include dual-side cooling, no reverse recovery charge, and ultra-low figures of merit.
These GaN e-mode power transistors target automotive applications such as advanced driver assistance systems and new climate control and infotainment systems that require higher power and more efficient power conversion solutions. GaN power devices offer higher energy efficiency in a smaller form factor and lower system cost compared to silicon-based components, Infineon said.
The new family of 100-V CoolGaN transistors targets applications such as zone control and main DC/DC converters, high-performance auxiliary systems, and Class-D audio amplifiers. Samples of the pre-production automotive-qualified product range are now available. Infineon will showcase its automotive GaN solutions at OktoberTech Silicon Valley, October 16, 2025.
The post 100-V GaN transistors meet automotive standard appeared first on EDN.
Voltage-to-period converter offers high linearity and fast operation

The circuit in Figure 1 converts the input DC voltage into a pulse train. The period of the pulses is proportional to the input voltage with a 50% duty cycle and a nonlinearity error of 0.01%. The maximum conversion time is less than 5 ms.
Figure 1 The circuit uses an integrator and a Schmitt trigger with variable hysteresis to convert a DC voltage into a pulse train where the period of the pulses is proportional to the input voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The circuit is made of four sections. The op-amp IC1 and resistors R1 to R5 create two reference voltages for the integrator.
The integrator, built with IC2, RINT, and CINT, generates two linear ramps. Switch S1 changes the direction of the current going to the integrating capacitor; in turn, this changes the direction of the linear ramps. The rest of the circuit is a Schmitt trigger with variable hysteresis. The low trip point VLO is fixed, and the high trip point VHI is variable (the input voltage VIN comes in there).
The signal coming from the integrator sweeps between the two trip points of the trigger at an equal rate and in opposite directions. Since R4 = R5, the duty cycle is 50% and the transfer function is as follows:
[Equation image: transfer function relating period T to VIN]
To start oscillations, the following relation must be satisfied when the circuit gets power:
[Equation image: start-up condition]
Figure 2 shows that the transfer function of the circuit is perfectly linear (the R² factor equals unity). In reality, there are slight deviations around the straight line; with respect to the span of the output period, these deviations do not exceed ± 0.01%. The slope of the line can be adjusted to 1000 µs/V by R2, and the offset can be easily cancelled by the microcontroller (µC).

Figure 2 The transfer function of the circuit in Figure 1. It is very linear and can be easily adjusted via R2.
Figure 1 shows that the µC converts period T into a number by filling the period with clock pulses of frequency fCLK = 1 MHz. It also adds 50 to the result to cancel the offset. The range of the obtained numbers is from 200 to 4800, i.e., the resolution is 1 count per mV.
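A minimal sketch of that counting scheme follows. The sign of the converter’s offset (negative, so that adding 50 counts cancels it) is my inference from the text, not stated explicitly:

```python
F_CLK = 1e6   # counting-clock frequency, Hz; slope adjusted to 1000 µs/V via R2

def volts_from_period(T):
    """Convert a measured period T (seconds) back to input volts."""
    counts = round(T * F_CLK) + 50   # fill the period with clocks, cancel offset
    return counts / 1000.0           # 1 count per mV at this slope and clock

# Assuming a -50-count (-50-µs) offset, a 2.000-V input gives T = 1950 µs:
print(volts_from_period(1950e-6))
```

With this mapping, the quoted 200-to-4800 count range corresponds to inputs of 0.2 V to 4.8 V at 1-mV resolution.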
Resolution can be easily increased by a factor of 10 by setting the clock frequency to 10 MHz. The great thing is that the nonlinearity error and conversion time remain the same, which is not possible for the voltage-to-frequency converters (VFCs). Here is an example.
Assume that a voltage-to-period converter (VPC) generates pulse periods T = 5 ms at a full-scale input of 5 V. Filling the period with 1 MHz clock pulses produces a number of 5000 (N = T * fCLK). The conversion time is 5 ms, which is the longest for this converter. As we already know, the nonlinearity is 0.01%.
Now consider a VFC which produces a frequency f = 5 kHz at a 5-V input. To get the number of 5000, this signal must be gated by a signal that is 1 second long (N = tG * f). Gate time is the conversion time.
The nonlinearity in this case is 0.002 % (see References), which is five times better than VPC’s nonlinearity. However, conversion time is 200 times longer (1 s vs. 5 ms). To get the same number of pulses N for the same conversion time as the VPC, the full-scale frequency of the VFC must go up to 1 MHz. However, nonlinearity at 1 MHz is 0.1%, ten times worse than VPC’s nonlinearity.
The contrast becomes more pronounced when the desired number is moved up to 50,000. Using the same analysis, it becomes clear that the VPC can do the job 10 times faster with 10 times better linearity than the VFCs. An additional advantage of the VPC is the lower cost.
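The arithmetic of the VPC-versus-VFC example above, laid out explicitly:

```python
F_CLK = 1e6               # counting clock, Hz

# VPC: 5-ms full-scale period filled with clock pulses
T_vpc = 5e-3              # s, period at 5-V full-scale input
N_vpc = T_vpc * F_CLK     # counts obtained
t_conv_vpc = T_vpc        # conversion time = one period

# VFC: 5-kHz full-scale output gated for 1 s to reach the same count
f_vfc = 5e3               # Hz at 5-V input
t_gate = 1.0              # s; gate time = conversion time
N_vfc = t_gate * f_vfc

print(f"VPC: N={N_vpc:.0f} in {t_conv_vpc * 1e3:.0f} ms; "
      f"VFC: N={N_vfc:.0f} in {t_gate:.0f} s "
      f"({t_gate / t_conv_vpc:.0f}x slower)")
```

Both converters deliver the same 5000-count result, but the VFC needs a 200-times-longer gate to get there.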
If you plan to use the circuit, pay attention to the integrating capacitor. As CINT participates in the transfer function, it should be carefully selected in terms of tolerance, temperature stability, and dielectric material.
Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.
Related Content
- Voltage-to-period converter improves speed, cost, and linearity of A-D conversion
- Circuits help get or verify matched resistors
- RMS stands for: Remember, RMS measurements are slippery
References:
- AD650 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Analog Devices; www.analog.com
- VFC320 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Burr-Brown; www.ti.com
The post Voltage-to-period converter offers high linearity and fast operation appeared first on EDN.
“Flip ON Flop OFF” for 48-VDC systems with high-side switching

My Design Idea (DI), “Flip ON Flop OFF for 48-VDC systems,” was published and referenced Stephen Woodward’s earlier “Flip ON Flop OFF” circuit. Other DIs published on this subject were for voltages less than 15 V, the voltage limit for CMOS ICs, while my DI was intended for higher DC voltages, typically 48 VDC. In that earlier DI, the ground line is switched, which means the input and output grounds are different. This is acceptable in many applications, since the voltage is small and does not require earthing.
However, some readers in the comments section wanted a scheme to switch the high side, keeping the ground the same. To satisfy such a requirement, I modified the circuit as shown in Figure 1, where input and output grounds are kept the same and switching is done on the positive line side.

Figure 1 VCC is around 5 V and should be connected to the VCC of the ICs U1 and U2. The grounds of ICs U1 and U2 should also be connected to ground (connection not shown in the circuit). Switching is done in the high side, and the ground is the same for the input and output. Note, it is necessary for U1 to have a heat sink.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In this circuit, the voltage divider formed by R5 and R7 sets the voltage at the emitter of Q2 (VCC) to around 5 V. This voltage is applied to ICs U1 and U2. A precise setting is not important, as these ICs can operate from 3 to 15 V. R2 and C2 provide the power-ON reset of U1. R1 and C1 debounce the push button (PB) switch.
When you momentarily push PB once, the Q1 output of the U1 counter (not the Q1 FET) goes HIGH, saturating transistor Q3. Hence, the gate of Q1 (PMOSFET IRF9530N: VDSS = −100 V, ID = −14 A, RDS(on) = 0.2 Ω) is pulled to ground. Q1 then conducts, and its output goes to near 48 VDC.
Due to the 0.2-Ω RDS(on) of Q1, there will be a small voltage drop depending on the load current. When you push PB again, transistor Q3 turns OFF and Q1 stops conducting, and the voltage at the output becomes zero. Here, switching is done on the high side, and the ground is kept the same for the input and output sides.
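The size of that drop is easy to estimate; the 2-A load below is an assumed example value, not a figure from the DI:

```python
R_DS_ON = 0.2    # Q1 on-resistance, ohms (from the DI)
I_LOAD = 2.0     # load current, A (assumed example value)

v_drop = I_LOAD * R_DS_ON        # voltage lost across the switch
p_diss = I_LOAD ** 2 * R_DS_ON   # power dissipated in Q1 at this load
print(f"drop = {v_drop:.1f} V, dissipation = {p_diss:.1f} W")
```

At a few amps, the drop stays well under a volt on a 48-V rail, but the I²R dissipation grows quadratically with load current, so check Q1’s thermal limits for heavier loads.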
If galvanic isolation is required (this may not always be the case), you may connect an ON/OFF mechanical switch ahead of the input. In this topology, on-load switching is handled by the PB-operated circuit, and the ON/OFF switch switches zero current only, so it does not need to be bulky. You can select any switch that passes the required load current. When switching ON, first close the ON/OFF switch and then operate PB to connect. When switching OFF, first push PB to disconnect and then operate the ON/OFF switch.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- Flip ON Flop OFF
- To press ON or hold OFF? This does both for AC voltages
- Flip ON Flop OFF without a Flip/Flop
- Elaborations of yet another Flip-On Flop-Off circuit
- Another simple flip ON flop OFF circuit
- Flip ON Flop OFF for 48-VDC systems
The post “Flip ON Flop OFF” for 48-VDC systems with high-side switching appeared first on EDN.






