EDN Network
Firmware development: Redefining root cause analysis with AI

As semiconductor devices become smaller and more complex, the product development lifecycle grows increasingly intricate. From early builds to pre-qualification testing, firmware development and validation teams face escalating challenges in ensuring quality and performance. As a result, traditional root cause analysis (RCA) methods—manual checks, static rules, or post-mortem analysis—struggle to keep up with the complexity and velocity of modern firmware releases.
However, artificial intelligence (AI) and machine learning (ML) are changing the game. These technologies empower firmware teams to detect, diagnose, and prevent failures at scale—across performance testing, qualification cycles, and system integration—ushering in a new era of intelligent RCA.
But first let’s take a closer look at RCA challenges in firmware development.
RCA challenges in firmware development
RCA in firmware development, particularly for SSDs, is like finding a needle in a moving haystack. Engineers face several key challenges:
- Vast amounts of telemetry and debug logs: Firmware systems generate massive telemetry and debug logs. Manually sifting through this data to identify the root cause can be time-consuming, delaying development cycles.
- Elusive, intermittent failures: Firmware failures can be sporadic and difficult to reproduce, especially under high-stress conditions like heavy I/O workloads, making diagnosis even harder.
- Invisible code behavior changes: Minor firmware updates can introduce subtle issues that conventional diagnostics miss, complicating the identification of new bugs.
- Noisy, inconsistent defect signals: Defects often produce erratic and inconsistent signals, making it difficult to pinpoint the true source of failure without extensive testing.
These issues impact product timelines and customer qualifications. Rather than replacing engineers, AI enhances their ability to detect anomalies and uncover hidden issues, reducing troubleshooting time and improving the overall RCA process.
AI-driven approaches in RCA
Below are the AI techniques that streamline the RCA process, speeding up identification of root causes and improving firmware reliability.
- Anomaly detection: Unsupervised models like autoencoders and isolation forests detect abnormal patterns in real-time without requiring labeled failure data. These models learn normal behavior and flag deviations, helping to identify potential issues—like performance degradation—early in the process before they escalate.
- Predictive modeling: Machine learning algorithms such as XGBoost and neural networks analyze trends in historical test and telemetry data to predict future issues, like bugs or regressions. These models allow engineers to act proactively, preventing failures by predicting them before they occur (a minimal sketch follows this list).
- Correlation and pattern discovery: AI connects data across sources like test logs, code commits, and environmental factors to identify hidden relationships. It can pinpoint the root cause of issues faster by correlating failures with specific code changes, configurations, or conditions that traditional methods might overlook.
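To make the predictive-modeling idea concrete, here is a hedged sketch that trains a gradient-boosted classifier (standing in for XGBoost) on hypothetical per-build features to estimate the probability that a new firmware build will fail regression testing. The feature names and data are invented for illustration; a production pipeline would draw on real build metadata and telemetry summaries.

```python
# Hedged sketch: predicting firmware-build failures from historical records.
# All features, data, and thresholds here are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Invented per-build features: lines changed, mean latency delta (us),
# peak temperature (C), and error-log count from nightly regression runs.
X = np.column_stack([
    rng.integers(1, 5000, n),     # lines_changed
    rng.normal(0.0, 15.0, n),     # latency_delta_us
    rng.normal(60.0, 8.0, n),     # peak_temp_c
    rng.poisson(2.0, n),          # error_log_count
])
# Synthetic label: builds with large latency regressions or many errors fail.
y = ((X[:, 1] > 20) | (X[:, 3] > 6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)

# Flag builds whose predicted failure probability warrants pre-release review.
p_fail = model.predict_proba(X_te)[:, 1]
print("builds flagged for review:", int((p_fail > 0.5).sum()))
```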
AI’s role in firmware validation
In firmware development—especially in NVMe devices and embedded systems—code changes can directly impact product stability and customer satisfaction. So, AI is now playing a critical role in this space.
- Monitoring I/O behavior: ML tracks latency, power, and throughput to flag regressions across firmware builds.
- Failure attribution: Historical test and return data are mined to correlate firmware changes with observed anomalies.
- Simulation: Generative models stress-test edge cases—such as power loss scenarios—to uncover potential flaws earlier in the cycle.
In an SSD development project, a firmware update intended to optimize memory management can cause subtle write workload failures during system integration. Traditional quality assurance (QA) can miss these failures, as they are intermittent and appear only under specific conditions.
However, an unsupervised machine learning model such as Isolation Forest can be used to monitor real-time system behavior. By analyzing telemetry data, including latency and throughput, the model detects timing anomalies tied to the firmware’s background garbage collection process. Isolation Forest identifies deviations from normal patterns, pinpointing issues such as the delays introduced by changes in the garbage collection algorithm.
With these insights, engineers can root-cause and fix the issue within days, avoiding qualification delays. Without AI-based detection, such an issue could go unnoticed, causing significant delays and customer qualification risks.
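A minimal sketch of that kind of monitoring is shown below, using scikit-learn’s IsolationForest on synthetic latency/throughput telemetry. The numbers are invented stand-ins for a drive’s real telemetry stream; only the technique mirrors the scenario above.

```python
# Hedged sketch: unsupervised anomaly detection on SSD-style telemetry.
# The telemetry values are synthetic; only the technique mirrors the text.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Baseline from a known-good build: ~100 us latency, ~3.2 GB/s throughput.
baseline = np.column_stack([
    rng.normal(100.0, 5.0, 5000),     # latency_us
    rng.normal(3200.0, 80.0, 5000),   # throughput_mb_s
])
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)   # learn what "normal" looks like

# New build: occasional garbage-collection stalls inflate latency, dent throughput.
stalls = np.column_stack([
    rng.normal(450.0, 40.0, 20),
    rng.normal(1500.0, 100.0, 20),
])
new_build = np.vstack([baseline[:500], stalls])

flags = detector.predict(new_build)   # -1 = anomaly, +1 = normal
print("anomalous telemetry samples flagged:", int((flags == -1).sum()))
```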
Benefits of AI-powered RCA
First and foremost, AI-powered RCA speeds up the process by cutting debug time from weeks to hours. It also improves accuracy on multi-variable issues, scales to monitoring thousands of signals and logs continuously, and enables predictive action before issues reach customers.
Below is an outline of future directions for AI in RCA methods:
- Explainable AI for building trust in ML decisions.
- Multi-modal models for unifying logs, telemetry, images, and notes.
- Digital twins to simulate firmware behavior under varied scenarios.
AI is no longer optional; it’s becoming central to firmware development. In turn, root cause analysis is evolving into a fast, intelligent, and predictive practice. As firmware complexity grows, those who harness AI will lead in reliability and time-to-market.
For engineers, adopting AI isn’t about surrendering control—it’s about unlocking superhuman diagnostic capability.
Karan Puniani is a staff test engineer at Micron Technology.
Related Content
- 5 Tips for speeding firmware development
- Development tool evolution – hardware/firmware
- Use virtual machines to ease firmware development
- Will Generative AI Help or Harm Embedded Software Developers?
- No code: Passing Fad or Gaining Adoption for Embedded Development?
The post Firmware development: Redefining root cause analysis with AI appeared first on EDN.
LabVIEW gets an AI makeover with Nigel’s launch

Artificial intelligence (AI) makes a foray into the test and measurement world. An AI assistant trained across the NI software suite and built on Emerson’s secure cloud network can analyze code, suggest changes, and answer users’ questions, helping them employ the correct tools across nearly 700 functions more quickly.
The Nigel AI Advisor will be integrated into LabVIEW and TestStand by July 2025 and will be available in most existing licenses at no extra cost. LabVIEW, a graphical programming environment, is primarily used by engineers for data acquisition, instrument control, and industrial automation. On the other hand, TestStand is management software that automates, accelerates, and standardizes the test process.
Figure 1 TestStand users can ask questions and get answers inside the window on the right. Source: Emerson
Austin Hill, section manager of test software at Emerson, acknowledges that NI engineers have been working on integrating AI and machine learning into the company’s software for many years. “We are the software company, so we have been making critical investments in the capabilities of our software,” he said during an interview with EDN. “Our big focus this year is integration with AI, specifically generative AI.”
Nigel AI Advisor—unveiled during the NI Connect 2025 conference—promises users a step change in productivity. “This is just the beginning,” Hill added. “Nigel will keep getting smarter and better in years to come.” He told EDN that NI engineers are trying to thread the needle on how and where AI will change our industry.
“Besides large language models (LLMs), there are other pieces that we are trying to work around,” Hill said. “We have built hooks in LabVIEW and TestStand that allow us to update frequently, which enables us to work with the next generation of GPUs and compute.”
Figure 2 Here is what temperature monitoring looks like in LabVIEW with Nigel’s aid. Source: Emerson
Regarding Nigel’s capabilities, Hill calls it an AI experience that spans the software platform. “It’s going to help users onboard much faster, especially the new ones,” he said. “It’ll help users find out the functions they need, the suitable examples, and where they are getting errors.”
As users edit sequences or look at test reports, they can have Nigel in the window to ask questions. “That can turn a novice LabVIEW user into an expert LabVIEW user much faster,” Hill added.
Emerson will demonstrate Nigel AI capabilities at the NI Connect conference, which will be held from 28 to 30 April 2025 in Fort Worth, Texas.
Related Content
- The evolution of LabView
- LabVIEW Creator Talks its Past and Future
- National Instruments Releases Free Editions of LabVIEW
- NI’s Kevin Schultz: Embracing AI for Both Test and Design
- LabVIEW 7.1: Graphical Real-Time Development Leaps Ahead
The post LabVIEW gets an AI makeover with Nigel’s launch appeared first on EDN.
Precision programmable current sink

The TL431 has been around for nearly 50 years. During those decades, while primarily marketed as a precision adjustable shunt regulator, this legacy device also found its way into alternative applications. These include voltage comparators, audio amplifiers, current sources, overvoltage protectors, etc. Sadly, in almost every example from this mighty menagerie of circuits, the 431’s “anode” pin sinks to the same lowly fate. It gets grounded.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Current sink regulation
The design idea presented here offers that poor persecuted pin a more buoyant role to play, Figure 1.
Figure 1 The floated anode serves as a sense pin for active current sink regulation.
The Figure 1 block diagram shows how the 431 works at a conceptual level where:
Sink current = Is = (Vc – 2.5 V)/R1 = 0 to 1/R1 as Vc = 2.5 V to 3.5 V
Vs < 37 V, Is < 100 mA, Is(Vs – R1·Is) < 500 mW @ 50°C ambient
A series connection adds the internal 2.5-V precision reference to the external voltage applied to the ANODE pin. The op-amp subtracts this sum from the voltage on the REF pin, then amplifies and applies the difference to the pass transistor. If the difference is positive (sum < REF), the transistor turns on and shunts current from CATHODE to ANODE. Otherwise (sum > REF), it turns off.
If the 431 is connected in the traditional fashion (REF connected to CATHODE and ANODE grounded), the scheme works like a shunt voltage regulator should, forcing CATHODE to a resistor-string-programmed multiple of the internal 2.5-V reference voltage. But what happens if the REF pin is connected to a constant control voltage (Vc > 2.5 V) and the ANODE, instead of being grounded, floats freely on current-sensing resistor R1?
What happens is the current gets regulated instead of the voltage. Because Vc is fixed and can’t be pulled down to make REF = ANODE + 2.5, ANODE must be pulled up until equality is achieved. For this to happen:
Is = (Vc – 2.5 V)/R1
Constant current sink regulation of 1/R1
Figure 2 illustrates how a fixed voltage divider might be used (assuming a 5-V rail that’s accurate enough) with a floated-anode Z1 to regulate a constant sink current of:
Is = (3.5 V – 2.5 V)/R1 = 1/R1
It also illustrates adding a booster transistor Q1 to accommodate applications needing current or power beyond Z1’s modest TO92ish limits. Notice that Z1’s accuracy will be unimpaired because whatever fraction of Is that Q1 causes to bypass Z1 is summed back in before passing through R1.
Figure 2 Booster transistor Q1 can handle current and voltage beyond 431 max Ic and dissipation limits, while the 3.5-V voltage divider programs a constant Is.
Programming sink current with DAC
Figure 3 shows how Is might be digitally programmed with a 2.5-V DAC signal. Note the DAC signal is inverted (Is = max when Vx = 0) while Z2 provides the necessary level shift:
Is = (2.5 V – Vx)/(2.5·R1) = 0 to 1/R1 as Vx = 2.5 V to 0
Figure 3 DAC control of Is; the DAC signal is inverted, while Z2 provides the necessary level shift.
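As a quick numeric check of the two programming relationships above (Figure 1’s control-voltage equation and Figure 3’s inverted DAC equation), here is a hedged sketch with R1 = 1 Ω assumed purely for illustration; any other value simply scales the full-scale current to 1/R1.

```python
# Hedged check of the Figure 1 and Figure 3 control equations.
# R1 = 1 ohm is an assumed example value, giving a 0-to-1 A full scale.
R1 = 1.0

def is_from_vc(vc, r1=R1):
    """Figure 1: Is = (Vc - 2.5 V) / R1, for Vc between 2.5 V and 3.5 V."""
    return (vc - 2.5) / r1

def is_from_dac(vx, r1=R1):
    """Figure 3: Is = (2.5 V - Vx) / (2.5 * R1); note the inverted sense."""
    return (2.5 - vx) / (2.5 * r1)

for vc in (2.5, 3.0, 3.5):
    print(f"Vc = {vc:.2f} V -> Is = {is_from_vc(vc) * 1000:6.1f} mA")
for vx in (2.5, 1.25, 0.0):
    print(f"Vx = {vx:.2f} V -> Is = {is_from_dac(vx) * 1000:6.1f} mA")
```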
Programming sink current to Df/R1 with PWM
Figure 4 shows an alternate programming method using PWM with Is = Df /R1 where Df equals the 0 to 1 (0% to 100%) PWM duty factor:
Is = (2.5·Df·R2/R3)/R1 as Df = 0 to 1
Df = Is·R1·R3/(2.5·R2)
which, with R3 = 2.5·R2 (making 2.5 V·R2/R3 = 1 V), reduces to Df = Is·R1
Figure 4 PWM control of Is, where Is is the ratio of the PWM duty factor and R1.
An 8-bit PWM resolution and a 10-kHz PWM frequency are assumed. The R2C1 single-pole ripple filter has a time constant of approximately 64× the PWM period (100 µs at 10 kHz), giving 1-LSB peak-to-peak max ripple and a 38-ms max settling time.
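Those figures can be sanity-checked with the usual single-pole approximations: worst-case ripple of roughly V·D(1−D)·T/τ at 50% duty factor, and settling to within 1 LSB of an N-bit step in about τ·ln(2^N). The sketch below assumes the 2.5-V reference full scale, 10-kHz PWM, and 64× time constant quoted above and lands close to the stated numbers.

```python
# Hedged sanity check of the R2C1 single-pole ripple-filter figures.
import math

v_full = 2.5           # V, reference full scale
f_pwm  = 10e3          # Hz, assumed PWM frequency
t_pwm  = 1 / f_pwm     # 100 us period
tau    = 64 * t_pwm    # filter time constant, ~6.4 ms per the text
n_bits = 8

# Worst-case peak-to-peak ripple occurs near 50% duty factor.
ripple_pp = v_full * 0.5 * 0.5 * t_pwm / tau
lsb = v_full / 2**n_bits
print(f"ripple ~{ripple_pp * 1e3:.1f} mV p-p vs 1 LSB = {lsb * 1e3:.1f} mV")

# Time to settle within 1 LSB of an 8-bit step: tau * ln(2^N) ~ 35 ms,
# in the same ballpark as the ~38 ms maximum quoted in the text.
t_settle = tau * math.log(2**n_bits)
print(f"settling time ~{t_settle * 1e3:.0f} ms")
```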
Speeding up settling time
One shortcoming of Figure 4 is the long settling time (~40 ms to 8 bits) imposed by the single-pole R2C1 ripple filter. If an extra resistor and capacitor won’t break the bank, that can be sped up by a factor of about 5 (~8 ms) with Figure 5’s R5C2 providing 2nd-order analog-subtraction filtration.
Figure 5 The addition of R5 and C2 provides faster settling times with a 2nd-order ripple filter.
Programmable current sink application circuit
Finally, Figure 6 shows the Figure 4 circuit combined with an inexpensive 24-W AC adapter and a 5-V regulator to power a small digital testing system. Be sure to adequately heatsink Q1.
Figure 6 The combined current sink and small system power supply where the max Is is 1 A, Max Vs is 20 V, and Is = Df.
Thanks for the implicit suggestion, Ashutosh!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- TL431 Model
- PWM-programmed LM317 constant current source
- A negative current source with PWM input and LM337 output
- A high-performance current source
- VCO using the TL431 reference
The post Precision programmable current sink appeared first on EDN.
Tell us your Tale!

Dear EDN Readers,
We’re thrilled to announce the successful expansion of our Design Ideas section. Thanks to your support, we now publish two new DIs every week!
We’re also excited to revitalize our Tales from the Cube column. This platform allows engineers to share their unique experiences in solving challenging design issues—whether they encountered a product failure, dealt with troublesome equipment, or tackled a persistent problem on a personal project.
We aim to regularly update this column with fresh content. With your contributions, we hope to gradually breathe new life into Tales from the Cube with new articles to feature in our Fun Friday newsletter.
Here are some FAQs to get you started:
What are Tales from the Cube articles?
Tales from the Cube are generally brief, focused narratives where engineers outline how they arrive at a solution to a specific design challenge or an innovative approach to a design task. This can relate to a personal project, a contract, or a corporate design dilemma. Here are some basic guidelines that might help you as you write out your article:
- 600-1000 words
- 1-2 images
- One-sentence summary of your story that goes along with the title
- A short author bio
What technology areas are allowed?
We’re open to a wide range of technology areas, including (but not limited to) analog and digital circuits, RF and microwave, programmable logic, hardware-design languages, systems, programming tips, utilities, test equipment and techniques, power, and more. If you’re not sure about your topic, just email us at editors@aspencore.com.
Do I get paid for a Tales from the Cube article?
Yes! Monetary compensation for each Tales from the Cube article is $200 USD, not enough to keep the lights on, but it does offer you an avenue to tell your unique engineering story to tens of thousands of engineers globally and engage in some interesting conversations about your engineering remedy.
How can I submit a Tales from the Cube article?
Feel free to email us at editors@aspencore.com with your questions, thoughts, or a completed article. So, Tell us your Tale!
The post Tell us your Tale! appeared first on EDN.
Revealing the infrasonic underworld cheaply, Part 1

Editor’s Note:
Part 1 of this DI uses an electret mic to detect infrasound. It starts with a basic equalization circuit validated with a DIY test fixture and simulations, and ends with a deeper analysis of the circuit’s real response.
Part 2 includes refinements to make the circuit more usable while extending its detectable spectrum with an additional technique that allows us to hear the infrasonic signals.
Although electret microphones are ubiquitous, they are more versatile than might be expected. With some extra equalization, their frequency responses can be made to range from earthquakes to bats. While this Design Idea (DI) ignores those furry mammals, it does show how to get a reasonably flat response down to way below 1 Hz.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Electrets aren’t only used as audio pickups. For decades, they have been employed in security systems to detect unexpected changes of air pressure within rooms, while more recently they can be found in vapes as suck-sensors (or, more technically, “draw sensors”, according to Brian Dipert’s recent teardown).
An excellent description of their construction and use, complete with tear-down pictures, can be found here. The capsules I had to hand were very similar to those shown, being 10 mm in diameter by 6 mm high. Some experiments to check their frequency response—practical details later—showed a steady 6 dB/octave roll-off below about 15 Hz, implying that a filter with an inverse characteristic could flatten the response down to a fraction of a Hertz. And so it proved!
Building an equalization circuit
A basic but usable circuit capable of doing this is given in Figure 1.
Figure 1 Simple equalization can extend the low-frequency response of an electret microphone down to well under 1 Hz.
While this exposes some problems, which we’ll address later, it works and serves to show what’s going on. R1 is chosen to give about half the rail voltage across the mic, and A1 boosts the signal by ~21 dB. At very low frequencies, A2’s stage has a maximum gain of ~30 dB. This falls by 6 dB/octave from ~160 mHz upwards, reaching unity gain at ~4.8 Hz. C3/4 and R7/8 top and tail the response, and A3 boosts the level appropriately. (Not shown is a rail-splitter, defining the central, common rail.) The op-amps used were MCP6022s because of their low input offset voltage.
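As a numeric cross-check of those figures, the hedged sketch below models A2’s stage as a single pole-zero shelf with roughly 30 dB of low-frequency gain, a pole near 160 mHz, and a zero near 4.8 Hz (all taken from the description above), and prints its gain at a few spot frequencies. It deliberately ignores A1, A3, and the C3/C4 and R7/R8 band edges, so it only illustrates the shelf itself, not the full Figure 2 response.

```python
# Hedged first-order model of A2's equalization stage:
# ~30 dB at DC, falling 6 dB/octave from ~160 mHz, levelling toward unity gain.
import math

F_POLE = 0.16               # Hz, approximate corner where the roll-off begins
F_ZERO = 4.8                # Hz, approximate frequency where gain nears unity
G_DC   = F_ZERO / F_POLE    # ~30x, i.e. ~29.5 dB

def shelf_gain_db(f):
    """Magnitude of G_DC * (1 + jf/F_ZERO) / (1 + jf/F_POLE), in dB."""
    num = math.hypot(1.0, f / F_ZERO)
    den = math.hypot(1.0, f / F_POLE)
    return 20 * math.log10(G_DC * num / den)

for f in (0.05, 0.16, 0.5, 1.6, 4.8, 15.0):
    print(f"{f:5.2f} Hz : {shelf_gain_db(f):5.1f} dB")
```

The 6-dB/octave fall between the two corners is the inverse of the mic capsule’s own roll-off, which is what flattens the combined response.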
The low 3-dB point is largely determined by C1/R2. (Adjusting the values of R5, R6, and C2 and adding an extra resistor in series with C2 would, in principle, let us equalize a specific mic to give a flat response from a few hundred millihertz up to its upper limit.)
Figure 2 shows the overall response to changes in air pressure, with 3 dB points at about 500 mHz and 12 Hz. While this is an LTspice-derived trace, it closely matches real-world measurements.
Figure 2 The response of Figure 1’s circuit to air-pressure changes at different frequencies.
Validating the frequency response
That confidence about the actual response may raise some eyebrows, given the difficulty in getting decent bass performance in even the best of hi-fi systems. A custom test rig was called for, using a small speaker to produce pressure changes in a sealed chamber containing a mic-under-test. It’s shown in Figure 3.
Figure 3 Two views of a test rig allowing sub-Hz measurements of a microphone’s frequency response.
The rig comprises an IP68 die-cast box fitted with a 50 mm plastic-coned speaker (42 ohms) and a jam-jar lid, the jar itself being the test chamber for the mic, which, when fitted with pins, could be swapped. Everything was sealed with lots of epoxy, plus some varnish in case of pinholes. A generous smear of silicone grease guaranteed that the jar seated almost hermetically. The speaker was driven by a custom sine-wave oscillator based on a simple squashed-triwave design and covering from 90 mHz to 11 Hz in two ranges.
This is actually the Mark 3 version. Mark 1 was based on a cut-down, wide-mouthed tablet bottle with a speaker fixed to it, which was adequate for initial tests but let in too much ambient noise for serious work. Mark 2 added a jam jar as a baffle behind the speaker, but the bottle’s walls were still too flexible. The more rigidly-constructed Mark 3 worked well, with an unequalized frequency response that was flat within a decibel from about 20 to 200 Hz. (It had a major cavity resonance at about 550 Hz, too high to affect our results.)
Simulations, mostly in hardware
To verify the performance of the rig itself at the lowest frequencies, some simulation was needed—but in hardware, not just with SPICE. Stripping a spare mic down to its bare JFET (a Sanyo 2SK156) and adding some components to that meant that it could be driven electrically rather than acoustically while still looking like the real thing to the circuit—or almost. The main divergence did not affect the frequency response, but did throw light on some unexpected behavior. The simple schematic is in Figure 4; the concept also worked well in LTspice, using their default “NJF” JFET, and formed part of Figure 2’s simulation.
Figure 4 A circuit that simulates an electret microphone in real life.
Once the circuit had settled down, the measured frequency responses using the test rig and the simulated mic matched closely, as did the LTspice sim. With the simulated mic, settling took a few seconds, as expected given the circuit’s long time constants, but with a real mic, it took many times as long. Perhaps the diaphragm was relaxing, or something? Another mic, torn down until only the JFET remained, behaved similarly (and, with its floating gate lead, made a near-electrometer-quality mains-hum probe!).
Curious behavior in a JFET, and how to fix it
It seemed that the FET’s gate was misbehaving—why? Perhaps charge was being injected when power was applied, then leaking slowly away? Ramping the voltage up gently made some difference, but not enough to explain things fully. It appears that leakage is dominant, and that charge on the gate slowly equalizes, producing a long, slow “tail” which is still just fast enough to produce an offset at the circuit’s output, even with two C-R networks attempting to block it. With the low impedance on the simulated mic’s gate, such effects are negligible. It’s stuff that would never show up in audio work.
From this, we can deduce that the mic’s low 3-dB point is determined not by the FET’s time-constant but by the “acoustics” within the mic. But that extra, inherent time constant still needs addressing if the circuit is to settle in a reasonable time. If the gate must slowly drift towards equilibrium owing to leakage, could we inject a packet of charge at start-up to compensate? Experiments using the circuit of Figure 5 were successful, albeit empirically; the values given are cut-and-try ones. Shorting R1 for about 3 ms gave a pulse of double the final voltage across the mic, and that proved to be optimum for the available capsules in the circuit as built. The settling time is still around 10–15 seconds, but that’s a lot better than over a minute.
Figure 5 A few milliseconds of over-voltage applied across the mic at start-up injects enough charge to counterbalance much of the FET’s longer-term start-up drift.
This is also useful in the case of an overload, which sends the output off-scale. If that happens, you can now use the time-honored method of switching off, waiting a few seconds, and switching back on again!
Real-life response
Figure 6 shows the actual response as measured using the test rig. It’s a composite of two scans, one for each range. (Because tuning was done manually, the frequency scale is only roughly logarithmic.) R9 was set to about 50k, so the output stage had a gain of around 6.
Figure 6 The response of the circuit in Figure 1, measured using Figure 3’s test rig.
The upper trace is the driving waveform for the speaker, showing that a positive-going output from the circuit corresponds to increased pressure within the rig’s chamber. (From this, we can infer that the negatively-poled side of the electret film itself faces the JFET’s gate. That makes sense, because a serious acoustic insult like a handclap right in front of the mic will then charge the gate negatively, and excess negative charge drains away more easily through the JFET’s gate-source diode than positive charge can, speeding recovery from any such overload.)
Note how the baseline wanders. That is mostly due to 1/f or flicker noise in the mic capsule’s JFET; both the bare JFET and the simulated mic show a similar effect, while a resistor is much quieter. We can extend the LF response further, but only at the expense of a worse S/N ratio. And below a Hertz or two, the effects of wind and weather seem to be dominant, anyway.
Viewing the results
There are several further desirable refinements and additions, but they must wait for Part 2. We’ll close this part with some ways of seeing what’s lurking below our ears’ cutoff point. (And Part 2 will also show how to listen to it.)
An oscilloscope (usually bulky, static, and power-hungry) is too obvious to mention, so we won’t. A cheap 50–0–50 µA meter connected between the output and common via a suitable resistor worked, but its response was 50% down at ~2 Hz.
A pair of LEDs, perhaps red and green for positive- and negative-going swings, looked good, though the limited swings available with the 5 V rail meant that the drive circuit needed to be somewhat elaborate, as shown in Figure 7. Caution! Its power must come directly from the power input to avoid the LEDs’ currents disturbing the mic’s supply, which would (and did) cause distortion and even (very) low-frequency oscillation. A good, stable power source is needed anyway.
Figure 7 One LED lights up on positive swings and the other on negative ones, the intensities being proportional to the signal levels.
Part 2 will extend the detectable spectrum a little while mostly concentrating on making the basic circuit more usable. An audible output will mean that we will no longer have to worry about the Zen-like problem of, “if we can’t hear it, should we call it a sound?”
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Squashed triangles: sines, but with teeth?
- A pitch-linear VCO, part 1: Getting it going
- Earplugs ready? Let’s make some noise!
- Supersized log-scale audio meter
The post Revealing the infrasonic underworld cheaply, Part 1 appeared first on EDN.
The advent of recyclable materials for PCBs

Conventional PCB manufacturing is wasteful, energy intensive, and harmful to the environment, which increasingly calls for electronics recycling to cut material waste and the energy spent on producing new material.
Figure 1 The conventional PCB world is ripe with recycling opportunities. Source: IDTechEx
IDTechEx’s new report, “Sustainable Electronics and Semiconductor Manufacturing 2025-2035: Players, Markets, Forecasts,” outlines new recyclable materials for PCBs and provides updates on their full-scale commercial readiness. Below is a sneak peek at these recyclable and biodegradable materials and how they facilitate sustainability in electronics manufacturing.
- New PCB substrates
While FR4, a glass-reinforced epoxy resin laminate, is a substrate of choice for PCBs due to being lightweight, strong, and cheap, it’s non-recyclable and can contain toxic halogenated flame retardants. That calls for alternative substrates that are biodegradable or recyclable.
Jiva’s Soluboard, a biodegradable substrate made from the natural fibers flax and jute, is emerging as a promising new material because it dissolves in 90°C water. That facilitates component recycling and precious metal recovery at the product’s end of life. Companies like Infineon, Jaguar, and Microsoft are currently testing whether this new material can combat rising electronics waste levels.
Figure 2 Soluboard is a fully recyclable and biodegradable PCB substrate. Source: Jiva Materials
- Polylactic acid in flexible PCBs
Conventional flexible PCBs, built around plastic polyimide, are also ripe for alternative materials. Polylactic acid, currently in the prototype-scale validation phase, emerges as a sustainable material that can be sourced from organic industrial waste and is also biodegradable.
Polylactic acid can withstand temperatures of up to 140°C, which is lower than the limits of polyimide and FR4. However, it’s compatible with manufacturing processes such as silver ink sintering. Companies and research institutes like VTT are now demonstrating the potential of polylactic acid in flexible PCBs.
- Recycled tin
Around 180,000 tonnes of primary tin are used in electronics globally. It’s primarily sourced from mines in China, Indonesia, and Myanmar, where extraction causes significant environmental damage. Enter recycled tin, which is produced by smelting waste metal and metal oxide and boasts the same quality as primary tin, as confirmed by X-ray diffraction.
However, merely 30% of tin is currently recycled worldwide, so there is a greater need for regulatory drivers to encourage increased metal recycling. One example is Germany’s National Circular Economy Strategy (NKWS), unveiled in 2024, which aims to halve per capita raw material consumption by 2045.
Figure 3 A boost in recycled tin relies on a strong regulatory push. Source: Mayerhofer Electronik
Mayerhofer Electronik was the first to demonstrate the use of recycled tin for soldering in its electronics manufacturing processes. Now, Apple has committed to using secondary tin in all products by 2035.
- Regeneration systems to minimize copper waste
It’s a widely known fact that copper is used wastefully in PCBs. This is how it happens: a flat sheet of copper is applied to the substrate before holes are drilled, and the circuit pattern is then produced by etching away the excess copper, which requires large volumes of chemical etchants like ferric chloride and cupric chloride. As a result, around 70% of the copper initially applied to the board is often removed.
Additive manufacturing, in which copper is applied only where required, offers one solution. For manufacturers that don’t want to switch processes, an etchant regeneration system recovers both the copper etched from the laminate and the etchant chemicals. The recycled copper can serve as an additional revenue stream for the electronics manufacturer.
Related Content
- The problem with recycling
- PCB materials: Recycle, reuse, dispose?
- Trends and Challenges in PCB Manufacturing
- Process for recycling turns up components ready for reuse
The post The advent of recyclable materials for PCBs appeared first on EDN.
Another simple flip ON flop OFF circuit

Editor’s Note: This Design Idea (DI) offers another alternative to the “To press ON or hold OFF? This does both for AC voltages” that was originally inspired by Nick Cornford’s DI: “To press ON or hold OFF? This does both.”
Figure 1 gives a simple circuit for the PUSH ON, PUSH OFF function with only a few inexpensive components. In this design, pressing the push button (PB) once connects the input supply to the gadget at the output; the next push disconnects it.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This is an attractive alternative to bulkier ON/OFF switches for DC circuits. The circuit has a fairly simple explanation. U1 is a counter.
Figure 1 A Flip ON Flop OFF circuit for DC voltages. The gadget is connected to the output terminals of the PB. With an adequate heat sink for MOSFET Q1, the output current can go up to 50 A.
During power on, R2/C2 resets the counter to zero. When you push PB momentarily once, a pulse is generated and shaped by Schmitt-trigger inverters U2 (A & C), and counter U1 counts it. Hence, the LSB (Q1) output of U1 goes HIGH, making MOSFET Q1 conduct. At this point, the output gets the input DC voltage.
When you push PB momentarily again, another pulse is generated and counted by U1. Hence, its LSB (Q1) output goes LOW, MOSFET Q1 stops conducting, and the output is disconnected from the input. This action repeats, toggling the output ON and OFF with each push of PB.
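The toggle action boils down to the counter’s least-significant bit gating the MOSFET; the hedged behavioral sketch below models just that logic (component references follow the figure, but the model is purely illustrative and ignores debounce timing).

```python
# Behavioral sketch of the flip ON / flop OFF action: each debounced push
# increments counter U1, and its LSB (Q1) gates MOSFET Q1.
count = 0           # U1 is reset to zero by R2/C2 at power-on
output_on = False   # MOSFET off, load disconnected

def push_button():
    """One debounced press, as shaped by the Schmitt-trigger inverters."""
    global count, output_on
    count += 1
    output_on = bool(count & 1)   # LSB high -> MOSFET conducts

for press in range(1, 5):
    push_button()
    print(f"press {press}: output {'ON' if output_on else 'OFF'}")
# press 1: ON, press 2: OFF, press 3: ON, press 4: OFF
```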
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- To press ON or hold OFF? This does both for AC voltages
- To press on or hold off? This does both.
- Smart TV power-ON aid
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
The post Another simple flip ON flop OFF circuit appeared first on EDN.
FPGA prototyping harnessed for RISC-V processor cores

FPGA-based prototyping solutions provider S2C has teamed up with RISC-V processor IP supplier Andes Technology to enhance prototyping capabilities for system-on-chip (SoC) designs. This collaboration aims to bolster capacity and flexibility for modeling, prototyping, and software development work carried out around Andes’ RISC-V cores.
The partnership is built around S2C’s recently launched Prodigy S8-100 FPGA prototyping platform, which is based on AMD’s Versal Premium VP1902 adaptive SoC. “Versal Premium VP1902 adaptive SoC is the industry’s largest FPGA-based adaptive SoC,” said Mike Rather, senior product line manager at AMD. “That empowers engineers to push the boundaries of technology.”
Figure 1 The VP1902 adaptive SoC bolsters prototyping platform’s capacity with a larger FPGA. Source: AMD
Capacity limitations are a common challenge in FPGA prototyping, restricting SoC developers’ ability to integrate multiple RISC-V cores along with subsystems like network-on-chip (NoC), DDR, PCIe controllers, and more. The Prodigy S8-100 addresses these challenges by offering a single-FPGA version with up to 100 million logic gates.
“The large-capacity FPGA-based prototyping allows early customizations, ultimately accelerating their time-to-market with Andes-based RISC-V SoCs,” said Emerson Hsiao, president of Andes Technology USA.
Figure 2 The new FPGA prototyping platform further boosts capacity with larger configurations. Source: S2C
As shown in the above figure, Prodigy S8-100 also includes larger configurations with two or even four VP1902 adaptive SoCs, which scales capacity to up to 400 million logic gates per system. That enables full SoC validation in hardware, significantly reducing development cycles, optimizing performance modeling, and accelerating software development before production silicon becomes available.
Next, the new prototyping platform encompasses S2C’s extensive library of nearly 100 daughter cards, which support applications ranging from networking, storage, and multimedia to generic IOs. This facilitates efficient interface modeling and simulation without sacrificing FPGA logic resources.
S2C’s new FPGA prototyping platform will be demonstrated live at the Andes RISC-V Con, which will be held on April 29, 2025, at the DoubleTree by Hilton Hotel in San Jose, California.
Related Content
- FPGA board offers low-cost prototyping
- 10 Favorite FPGA-Based Prototyping Boards
- FPGA Prototyping of System-on-Chip (SoC) Designs
- Embedded design with FPGAs: Development process
- Why, When and How: The basics of embedded systems prototyping
The post FPGA prototyping harnessed for RISC-V processor cores appeared first on EDN.
SiC modules boost thermal stability

SiCPAK power modules from Navitas use advanced epoxy-resin potting to achieve 5× lower thermal resistance shift for extended system lifetime. Based on trench-assisted planar SiC MOSFETs, the modules deliver efficient high-temperature performance for EV DC fast chargers, solar inverters, industrial motor drives, energy storage systems, and uninterruptible power supplies.
The 1200-V SiCPAKs are designed to resist high humidity and prevent moisture ingress, maintaining stable thermal performance under power and temperature cycling. In thermal shock testing (–40°C to +125°C, 1000 cycles), the modules showed a 5× smaller increase in thermal resistance compared to those with silicone-gel potting. Additionally, while silicone-gel modules failed isolation tests, the epoxy-resin potted SiCPAKs retained acceptable isolation levels.
Featuring built-in NTC thermistors, the 1200-V power modules are offered with on-resistance ratings from 4.6 mΩ to 18.5 mΩ in half-bridge, full-bridge, and 3L-T-NPC circuit configurations. They also maintain pin compatibility with industry-standard press-fit modules.
The 1200-V SiCPAK modules are now available for mass production. Datasheets can be found here.
The post SiC modules boost thermal stability appeared first on EDN.
Infineon advances EV drivetrain efficiency

To support the rapid growth of electric vehicles, Infineon has introduced a new generation of energy-efficient silicon IGBTs and reverse-conducting IGBTs (RC-IGBTs). The 3rd generation Electric Drive Train (EDT3) IGBTs target both 400-V and 800-V systems, while the RC-IGBTs are optimized for 800-V architectures. These devices boost drivetrain efficiency and are well-suited for automotive applications.
EDT3 chipsets support collector-emitter voltages of up to 750 V and 1200 V, with a maximum virtual junction temperature of +185°C. They offer high output current, making them useful for main inverters in battery-electric, plug-in hybrid, and range-extended EVs. Their compact, optimized design enables smaller modules, helping automakers build more efficient and reliable powertrains that can extend driving range and lower emissions.
The 1200-V RC-IGBT enhances performance by integrating IGBT and diode functions on a single die, achieving higher current density than discrete chipset solutions. This integration reduces assembly effort and chip size, offering a scalable, cost-effective option for powertrain systems.
All of the devices are offered with customized chip layouts, including on-chip temperature and current sensors. Additionally, metallization options for sintering, soldering, and bonding are available on request.
The EDT3 and RC-IGBT devices are now sampling. For more information, click here.
The post Infineon advances EV drivetrain efficiency appeared first on EDN.
MCUs cut power to 0.25 µA in standby

Powered by a 32-MHz Arm Cortex-M23 processor, the Renesas RA0E2 group of entry-level MCUs offers low power consumption and an extended temperature range. The devices have a feature set that is optimized for cost-sensitive applications such as battery-operated consumer electronics, small appliances, industrial control systems, and building automation.
RA0E2 MCUs consume 2.8 mA in active mode and 0.89 mA in sleep mode. An integrated high-speed on-chip oscillator supports fast wakeup, allowing the device to remain in software standby mode longer—where power consumption drops to just 0.25 µA. With ±1.0% precision, the oscillator also improves baud rate accuracy and maintains stability across a temperature range of -40°C to +125°C.
The MCUs operate from 1.6 V to 5.5 V, eliminating the need for a level shifter or regulator in 5-V systems. They offer up to 128 KB of code flash and 16 KB of SRAM, along with integrated timers, serial communication interfaces, analog functions, and safety features. Security functions include a unique ID, true random number generator (TRNG), AES libraries, and flash read protection.
RA0E2 MCUs are available now in a variety of packages, including a 5×5-mm, 32-lead QFN.
The post MCUs cut power to 0.25 µA in standby appeared first on EDN.
PMICs optimize energy harvesting designs

Low-current PMICs in AKM’s AP4413 series enable efficient battery charging in devices that typically use disposable batteries, including remote controls, IoT sensors, and Bluetooth trackers. With current consumption as low as 52 nA, they have minimal impact on a system’s power budget—critical for energy harvesting applications.
The series comprises four variants with voltage thresholds tailored to common rechargeable battery types. Each device integrates voltage monitoring to prevent deep discharge, enabling quick startup or recovery. An inline capacitor allows the AP4413 to maintain operation even when the battery is fully discharged, while recharging it simultaneously.
System configuration example.
The AP4413 PMICs are in mass production and come in 3.0×3.0×0.37-mm HXQFN packages.
The post PMICs optimize energy harvesting designs appeared first on EDN.
Cadence debuts DDR5 MRDIMM IP at 12.8 Gbps

Cadence has announced the first DDR5 12.8-Gbps MRDIMM Gen2 memory IP subsystem, featuring a PHY and controller fabricated on TSMC’s N3 (3-nm) process. The design was hardware-validated with Gen2 MRDIMMs populated with DDR5 6400-Mbps DRAM chips, achieving a 12.8-Gbps data rate—doubling the bandwidth of the DRAM devices. The solution addresses growing memory bandwidth demands driven by AI workloads in enterprise and cloud data center applications.
Based on a silicon-proven architecture, the DDR5 IP subsystem provides ultra-low latency encryption and advanced RAS features. It is designed to enable the next generation of SoCs and chiplets, offering flexible integration options, as well as precise tuning of power and performance.
Combined with Micron’s 1γ-based DRAM and Montage Technology’s memory buffers, Cadence’s DDR5 MRDIMM IP delivers a high-performance memory subsystem with doubled bandwidth. The PHY and controller have been validated using Cadence’s DDR Verification IP (VIP), enabling rapid IP and SoC verification closure. Cadence reports multiple ongoing engagements with leading customers in AI, HPC, and data center markets.
For more information, visit the DDR5 MRDIMM PHY and controller page.
The post Cadence debuts DDR5 MRDIMM IP at 12.8 Gbps appeared first on EDN.
Quantum-safe root-of-trust solution to secure ASICs, FPGAs

A new quantum-safe root-of-trust solution enables ASICs and FPGAs to comply with post-quantum cryptography (PQC) standards set out in regulations like the NSA’s CNSA 2.0. PQPlatform-TrustSys, built around the PQC-first design philosophy, aims to help manufacturers comply with cybersecurity regulations with minimal integration time and effort.
It facilitates robust key management by tracking each key’s origin and permissions, and it supports key revocation, an essential and often overlooked part of securing any large-scale cryptographic deployment. Moreover, the root of trust enforces restrictions on critical operations and maintains security even if the host system is compromised.
Next, key origin and permission attributes are extended to cryptographic accelerators connected to a private peripheral bus. PQPlatform-TrustSys, launched by London, UK-based PQShield, has been unveiled after the company achieved FIPS 140-3 certification through the Cryptographic Module Validation Program (CMVP), which evaluates cryptographic modules and provides agencies and organizations with a metric for security products.
PQShield, a supplier of PQC solutions, has also built its own silicon test chip to prove this can all be delivered ‘first time right’. Its PQC solutions are developed around three pillars: ultra-fast, ultra-secure, and ultra-small.
The PKfail vulnerability has thrust into the spotlight multiple security issues within the secure boot and secure update domains, which play a fundamental role in protection against malware. Inevitably, ASICs and FPGAs will need to ensure secure boot and secure update while meeting both existing and new regulatory requirements with clear timelines set out by NIST.
Industry watchers believe that we have a five-to-10-year window to migrate to the PQC world. So, the availability of a quantum-safe root-of-trust solution bodes well for preparing ASICs and FPGAs to function securely in the quantum era.
Related Content
- Post-Quantum Cryptography: Moving Forward
- An Introduction to Post-Quantum Cryptography Algorithms
- Perspectives on Migration Toward Post-Quantum Cryptography
- Release of Post-Quantum Cryptographic Standards Is Imminent
- The need for post-quantum cryptography in the quantum decade
The post Quantum-safe root-of-trust solution to secure ASICs, FPGAs appeared first on EDN.
Current monitor

Almost every wall power supply has no indicator showing whether current is consumed by the load or not.
Wow the engineering world with your unique design: Design Ideas Submission Guide
It seems I’m not the only one who noticed this shortcoming: I once saw the solution given in Figure 1.
Figure 1 Wall power supply indicator solution showing whether or not current is being consumed by the load.
The thing is, the circuit was not functional—the board had only places for the transistor, LED, and resistors, not the parts themselves. It’s easy to see why: the base-emitter voltage drop (Vbe) is about 0.7 V, or roughly 15% of the output voltage of this 5-V device. A monitor like this (Figure 1) would only be tolerable with a 12-V device or higher (24 V).
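A quick calculation makes the point; the 0.7-V nominal Vbe is the only assumption, and the supply voltages are just the examples mentioned above.

```python
# Why the Figure 1 indicator is only tolerable at higher output voltages:
# the ~0.7 V base-emitter sense drop is a large fraction of a 5 V output.
V_BE = 0.7   # V, nominal silicon base-emitter drop

for v_out in (5.0, 12.0, 24.0):
    loss_pct = 100 * V_BE / v_out
    print(f"{v_out:4.1f} V supply: sense drop is {loss_pct:4.1f} % of the output")
# ~14 % at 5 V, ~5.8 % at 12 V, ~2.9 % at 24 V
```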
The circuit in Figure 2 is exceptionally good for low voltages, around 3 to 9 V, and for currents exceeding ~50 mA.
Figure 2 Current monitor circuit for a wall power supply that is good for voltages from 3 to 9 V and currents exceeding 50 mA.
Not only does it monitor the output current in a more efficient (~30×) way, but the bi-color LED also lets you estimate the value of the current and indicates the on-state of the device. Of course, the LEDs might be separate as well.
As for Q1 and Q2: any low-power PNP with a reasonably high current gain (β) will do, e.g., BC560.
—Peter Demchenko studied math at the University of Vilnius and has worked in software development.
Related Content
- Multi-decade current monitor the epitome of simplicity
- Current Sensor LED Indicator
- High-side current monitor operates at high voltage
- Current monitor compensates for errors
- Current monitor uses Hall sensor
The post Current monitor appeared first on EDN.
Selective averaging in an oscilloscope

Sometimes, you only want to analyze those signal components that meet certain criteria or occur at certain times within an acquisition. This is not too difficult for a single acquisition, but what if you want to obtain the average of those selected measurement events? Here is where seemingly unrelated features of the oscilloscope can work together to get the desired data.
Consider an application where a device produces periodic RF pulse bursts, as shown in Figure 1.
Figure 1 The device under test produces periodic RF pulse bursts; the test goal is to acquire and average bursts with specific amplitudes. Source: Arthur Pini
The goal of the test is to acquire and average only those bursts with a specific amplitude: in this case, those with a nominal value of 300 millivolts (mV) peak-to-peak. This desired measurement can be accomplished using the oscilloscope’s Pass/Fail testing capability to qualify the signal. Pass/Fail testing allows the user to test the waveform based on parametric measurements, like amplitude, and to pass or fail the measured waveform according to whether it meets preset limits. Alternatively, the waveform can be compared to a mask template to determine whether it is within or outside of the mask. Based on the test results, many actions can be taken, from stopping the acquisition to storing the acquired waveform to memory or a file, sounding an audible alarm, or emitting a pulse.
Selective averaging uses Pass/Fail testing to isolate the desired pulse bursts based on their amplitude or conformance to a mask template. Signals meeting the Pass/Fail criteria are stored in internal memory. The averager is set to use that storage memory as its source so that qualified signals transferred to the memory are added to the average.
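The same qualify-then-average chain is easy to prototype offline. The hedged sketch below generates synthetic RF bursts, accepts only those whose peak-to-peak amplitude falls within the test limits, and averages just the accepted ones; the waveform model and numbers are invented, and only the selection logic mirrors the Pass/Fail-to-memory-to-averager flow described above.

```python
# Hedged offline model of selective averaging: only acquisitions whose
# peak-to-peak amplitude falls within the test limits enter the average.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 500e-9, 1000)   # 500 ns record, roughly one burst

def rf_burst(v_pp):
    """Synthetic RF pulse burst with a given peak-to-peak amplitude."""
    envelope = np.exp(-((t - 250e-9) / 80e-9) ** 2)
    return 0.5 * v_pp * envelope * np.sin(2 * np.pi * 50e6 * t)

nominal, tol = 0.300, 0.050        # 300 mV +/- 50 mV acceptance limits
acquisitions = [rf_burst(v) for v in rng.normal(0.300, 0.060, 200)]

passed = [w for w in acquisitions
          if abs((w.max() - w.min()) - nominal) <= tol]
average = np.mean(passed, axis=0)

print(f"{len(passed)} of {len(acquisitions)} acquisitions entered the average")
print(f"averaged burst: {(average.max() - average.min()) * 1e3:.0f} mV p-p")
```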
Setting up Pass/Fail testing
Testing is based on the peak-to-peak amplitude, which uses measurement parameter P1. The measurement setup accepts or passes a pulse burst having a nominal peak-to-peak amplitude of 300 mV within a range of ±50 mV of nominal. The test limits are set up in test condition Q1 (Figure 2).
Figure 2 The initial setup to capture and average only pulses with amplitudes of 300 ± 50 mV. Source: Arthur Pini
The oscilloscope’s timebase is set to capture individual pulse bursts, in this case, 100 ns per division. This is important as only individual bursts should be added to the average. A single burst has been acquired, and its peak-to-peak amplitude is 334 mV, as read in parameter P1. The Pass/Fail test setup Q1 tests for the signal amplitude within ±50 mV of the nominal 300 mV amplitude. These limits are user-adjustable to acquire pulse bursts of any amplitude.
A single acquisition is made, acquiring a 338 mV pulse, which appears in the top display grid. This meets the Pass/Fail test criteria, and the signal is stored in memory M1 (Figure 3).
Figure 3 Acquiring a signal that meets the acceptance criteria adds a copy of the signal in memory M1 (center grid) and adds it to the averager contents (lower grid). Source: Arthur Pini
The memory contents are added to the average, showing a waveform count of 1. The Actions tab of the Pass/Fail setup shows that if the acquired signal passes the acceptance criteria, it is transferred into memory. The waveform store operation (i.e., what trace is stored in what memory) is set up separately in the Save Waveform operation under the File pulldown menu.
What happens if the acquired pulse doesn’t meet the test criteria? This is shown in Figure 4.
Figure 4 Acquiring a 247 mV burst results in a failed Q1 condition. In this case, the signal is not stored to M1 and is not added to the average. Source: Arthur Pini
The acquired waveform has a peak-to-peak amplitude of 247 mV, outside the test limit. This results in a failure of the Q1 test (shown in red). The test action does not occur, and the low amplitude signal is not added to the average.
Using mask templates
Selective averaging can also be based on mask testing. Masks can be created based on an acquired waveform, or custom masks can be created using software utilities from the oscilloscope manufacturer and downloaded to the oscilloscope. This example uses a mask based on the acquired signal (Figure 5).
Figure 5 A mask, based on the nominal amplitude signal, is created in the oscilloscope. The acquired signal passes if all waveform samples are within the mask. Source: Arthur Pini
The mask is created by adding incremental differences both horizontally and vertically about the source waveform. All points must be inside the mask for the acquired signal to pass. As in the previous case, if the signal passes, it is stored in memory and added to the average (Figure 6).
Figure 6 If the acquired signal is fully inside the mask, it is transferred to memory M1 and added to the average. Source: Arthur Pini
If the acquired signal has points outside the mask, the test fails, and the signal is not transferred to memory or the average (Figure 7).
Figure 7 An example of a mask test failure with the circled points outside the mask. This waveform is not added to the average. Source: Arthur Pini
Selective averaging with a gating signal
This technique can also be applied to signals on a multiplexed bus with a gating signal, such as a chip select, available (Figure 8).
Figure 8 Pass/Fail testing can be employed to select only those signals that are time-coincident with a gating signal, such as a chip select signal. Source: Arthur Pini
The gating signal or chip select is acquired on a separate acquisition channel. In the example, channel 3 (C3) was used. The gating signal is positive when the desired signal is available. To add only those signals that coincide with the gating signal, pass/fail testing verifies the presence of a positive gating signal. Testing that the maximum value of C3 is greater than 100 mV verifies that the gate signal is in a high state, and the test is passed. The oscilloscope is set to store C1 in memory M1 under a passed condition, which is added to the average (Figure 9).
Figure 9 The average based on waveforms coincident with the positive gate signal state. Source: Arthur Pini
Isolating test signals
If the segments of the analyzed signal are close together and cannot be separated using the standard timebase (1,2,5 step) scales, a horizontal (zoom) expansion of the acquired signal can be used to select the desired signal segment. The variable zoom scale provides very fine horizontal steps. The zoom trace can be used instead of the acquired channel, and the average source is the zoom trace.
Selective averaging
Selective averaging, based on Pass/Fail testing, is an example of linked features in an oscilloscope that complement each other and offer the user a broader range of measurements. Averaging was the selected analysis tool, but it could have been replaced with the fast Fourier transform (FFT) or a histogram. The oscilloscope used in this example was a Teledyne LeCroy HDO 6034B.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Reducing noise in oscilloscope and digitizer measurements
- 10 tricks that extend oscilloscope usefulness
- FFTs and oscilloscopes: A practical guide
- Oscilloscope special acquisition modes
- Understanding and applying oscilloscope measurements
- Combating noise and interference in oscilloscopes and digitizers
The post Selective averaging in an oscilloscope appeared first on EDN.
Portable power station battery capacity extension: Curious coordination

I’m still awaiting an opportunity, when I have spare time, the snow’s absent from the deck and winds are calm, to test out those two 220W solar panels I already mentioned I bought last year:
for parallel-combining and mating with my EcoFlow DELTA 2 portable power station:
While I remain on more-favorable-conditions standby, I’ve got two other pieces of EcoFlow gear also in the queue to tell you about. One, the 800W Alternator Charger that I mentioned in a more recent piece, isn’t a high installation priority right now, so hands-on results prose will also need to wait.
But the other (and eventually also its replacement; hold that thought), which I pressed into service as soon as it arrived, is the topic of today’s post. It’s the DELTA 2 Smart Extra Battery, which mates to the DELTA 2 base unit over a thick dual-XT150-connectors-inclusive cable and combo-doubles the effective subsequently delivered storage capacity:
Here’s what my two identical-sized (15.7 x 8.3 x 11 in/400 x 211 x 281 mm) albeit different-weight (DELTA 2 base unit: 27 lbs/12 kg, DELTA 2 Smart Extra Battery: 21 lbs/9.5 kg) devices look like in their normal intended stacked configuration:
And here’s my more haphazard, enthusiastic initial out-of-box hookup of them:
In the latter photo, if you look closely, you can already discern why I returned the original Smart Extra Battery, which (like both its companion and its replacement) was a factory-refurbished unit from EcoFlow’s eBay storefront. Notice the brightness difference between its display and the DELTA 2’s more intense one. I should note upfront that at the time I took that photo, both devices’ screens still had the factory-installed clear plastic protectors on them, so there might have been some resultant muting. But presumably it would have dimmed both units’ displays equally.
The displays are odd in and of themselves. When I’d take a screen protector off, I’d see freakish “static” (for lack of a better word) scattered all over it for a few (dozen) seconds, and I could also subsequently simulate a semblance of the same effect by rubbing my thumb over the display. This photo shows the artifacts to a limited degree (note, in particular, the lower left quadrant):
My root-cause research has been to-date fruitless; I’d welcome reader suggestions on what core display technology EcoFlow is using and what specific effect is at play when these artifacts appear. Fortunately, if I wait long enough, they eventually disappear!
As for the defective display in particular, its behavior was interesting, too. LCDs, for example, typically document a viewing angle specification, which is the maximum off-axis angle at which the display still delivers optimum brightness, contrast and other attributes. Beyond that point, typically to either side but also vertically, image quality drops off. With the DELTA 2 display, it was optimum when viewed straight on, with drop-off both from above and below. With the original Smart Extra Battery display, conversely, quality was optimum when viewed from below, almost (or maybe exactly) as if the root cause was a misaligned LCD polarizer. Here are closeups of both devices’ displays, captured straight on in both cases, post-charging:
After checking with Reddit to confirm that what I was experiencing was atypical, I reached out to EcoFlow’s eBay support team, who promptly and thoroughly took care of me (and no, they didn’t know I was a “press guy”, either), with FedEx picking up the defective unit, return shipping pre-paid, at my front door:
and a replacement, quick-shipped to me as soon as the original arrived back at EcoFlow.
That’s better!
The Smart Extra Battery appears within the app screens for the DELTA 2, versus showing up as a distinct device:
Here’s the thick interconnect cable:
I’d initially thought EcoFlow forgot to include it, but eventually found it (plus some documentation) in a storage compartment on top of the device:
Here are close-ups of the XT150 connectors, both at-device (the ones on the sides of the DELTA 2 and Smart Extra Battery are identical) and on-cable (they’re the same on both ends):
I checked for available firmware updates after connecting them for the first time; one was available.
I don’t know whether it was related to the capacity expansion specifically or just a timing coincidence, nor whether it was for the DELTA 2 (with in-progress status shown in the next photo), the Smart Extra Battery, or both…but it completed uneventfully and successfully.
Returning to the original unit, as that’s what I’d predominantly photo-documented, it initially arrived only 30% “full”:
With the DELTA 2 running the show, first-time charging of the Smart Extra Battery was initially rapid and high power-drawing; note the incoming power measured at it:
and flowing both into and out of the already-fully-charged DELTA 2:
As the charging process progressed, the current flow into the Smart Extra Battery slowed, eventually to a (comparative) trickle:
until it finished. Note the high reported Smart Extra Battery temperature immediately after charge completion, both in an absolute sense and relative to the normal-temperature screenshot shown earlier!
In closing, allow me to explain the “Curious Coordination” bit in the title of this writeup. I’d assumed upfront that if I lost premises power and needed to harness the electrons previously collected within the DELTA 2/Smart Extra Battery combo instead, the Smart Extra Battery would be drained first. Such a sequence would theoretically allow me to, for example, then disconnect the Smart Extra Battery and replace it with another already-fully-charged one I might have sitting around, to further extend the setup’s total usable timespan prior to complete depletion.
In saying this, I realize that such a scenario isn’t particularly likely, since the Smart Extra Battery can’t be charged directly from AC (or solar, for that matter) but instead requires an XT150-equipped “smart” source such as a (second, in this scenario) DELTA 2. That said, what I discovered when I finally got the gear in my hands was the exact opposite: the DELTA 2’s battery drained first, down to a nearly (but not completely) empty point, and then the discharge source switched to the extra battery. Further research has since educated me that the actual behavior varies depending on how much current whatever the combo is powering demands; in heavy-load scenarios, the two devices’ battery packs drain in parallel.
What are your thoughts on this behavior, and/or anything else I’ve mentioned here? Share them with your fellow readers (and me!) in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- A holiday shopping guide for engineers: 2024 edition
- The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
- EcoFlow’s Delta 2: Abundant Stored Energy (and Charging Options) for You
The post Portable power station battery capacity extension: Curious coordination appeared first on EDN.
LM4041 voltage regulator impersonates precision current source

The LM4041 has been around for over 20 years. During those decades, while primarily marketed as a precision adjustable shunt regulator, this classic device also found its way into alternative applications. These include voltage comparators, overvoltage protectors, voltage limiters, etc. Voltage, voltage, voltage, must it always be voltage? It gets tedious. Surely this popular precision chip, while admittedly rather—um—“mature”, must have untapped potential for doing something that doesn’t start with voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The Design Idea (DI) presented in Figure 1 offers the 4041 an unusual, possibly weird, maybe even new role to play. It’s a precision current source.
Figure 1 Weirdly, the “CATHODE” serves as the sense pin for active current source regulation.
The above block diagram shows how the 4041 works at a conceptual level:
Sourced current = Is = (V+ – (Vc + 1.24 V))/R1
where Is > 0, V+ < 15 V, and Is < 20 mA
The series connection subtracts an internal 1.24-V precision reference from the external voltage input on the CATHODE pin. The internal op-amp subtracts the voltage input on the FB pin from that difference, then amplifies and applies the result to the pass transistor. If it’s positive [(V+ – 1.24) > Vc], the transistor turns on and shunts current from CATHODE to ANODE. Otherwise, it turns off.
When a 4041 is connected in the traditional fashion (FB connected to CATHODE and ANODE grounded), the scheme works like a shunt voltage regulator should, forcing CATHODE to the internal 1.24-V reference voltage. But what will happen if the FB pin is connected to a constant control voltage [Vc < (V+ – 1.24 V)] and CATHODE, instead of being connected to FB, floats freely on current-sensing resistor R1?
What happens is the current gets regulated instead of the voltage. Because Vc is fixed and can’t be pulled up to make FB = CATHODE – 1.24, CATHODE must be pulled down until equality is achieved. For this to happen, a programmed current, Is, must be passed that is given by:
Is = (V+ – (Vc + 1.24 V))/R1.
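If it helps to play with the numbers before reaching for a breadboard, here’s a minimal Python sketch of the relationship above. The function name, variable names, and the example R1 value are illustrative choices of mine rather than anything from the schematic; only the equation and its operating limits come from the text.

```python
# Minimal sketch of the floated-cathode LM4041 relationship derived above.
# The names and the example R1 value are illustrative; only the equation
# Is = (V+ - (Vc + Vref))/R1 and its limits come from the Design Idea.

def source_current(v_plus, v_c, r1, v_ref=1.24):
    """Programmed source current Is, in amps.

    v_plus -- supply rail feeding sense resistor R1 (volts), < 15 V
    v_c    -- fixed control voltage on the FB pin (volts)
    r1     -- current-sense resistor (ohms)
    v_ref  -- LM4041 internal reference (volts), ~1.23 to 1.24 V nominal
    """
    i_s = (v_plus - (v_c + v_ref)) / r1
    if i_s <= 0:
        raise ValueError("Vc must be less than V+ - Vref for regulation")
    return i_s

# Figure 2's case: a 5-V rail and Vc = 2.5 V give Is = 1.27 V / R1.
# With a (hypothetical) R1 of 127 ohms, that's roughly 10 mA, safely
# inside the LM4041's ~20-mA limit.
print(source_current(5.0, 2.5, 127.0, v_ref=1.23))  # ~0.010 A
```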
Figure 2 illustrates how this relationship can be used (assuming a 5-V rail that’s accurate enough) to make a floated-cathode 4041 regulate a constant current source of:
Is = (5 V – 2.5 V – 1.23 V)/R1 = 1.27 V/R1
It also illustrates how adding a booster transistor Q1 can accommodate applications needing current or power beyond Z1’s modest limits. Notice that Z1’s accuracy remains unimpaired because whatever fraction of Is Q1 diverts around Z1 is summed back in before passing through R1.
Figure 2 The booster transistor Q1 can handle current beyond 4041 max Is and dissipation limits.
Figure 3 shows how Is can be linearly programmed with digital PWM.
Figure 3 Schematic showing the DAC control of Is. Is = Df amps, where Df = PWM duty factor. The asterisked resistors should be 1% or better.
Incoming 5-Vpp, 10-kHz PWM causes Q2 to switch R5, creating a variable average resistance = R5/Df. Thanks to the 2.5-V Z1 reference, the result is a 0 to 1.22 mA current into Q1’s source. This is summed with a constant 1.22 mA bias from R4 and level shifted by Q1 to make a 1.22 to 2.44 V control voltage, Vc, for current source Z2.
The result is a linear 0- to 1-A output current, Is, into a grounded load, where Is = Df amps. Voltage compliance is 0 to 12 V. The PWM ripple filtering is 8-bit compatible and second-order, using the technique from “Cancel PWM DAC ripple with analog subtraction.”
R3C1 provides the first-stage ripple filter and R7C2 the second. The C1 and C2 values shown are scaled for Fpwm = 10 kHz to provide an 8-bit settling time of 6 ms. If a different PWM frequency is used, scale both capacitors by 10 kHz/Fpwm.
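To make the PWM programming and filter-scaling rules concrete, here’s a small Python sketch encoding only the relationships stated above: Vc spanning 1.22 V to 2.44 V with duty factor, Is = Df amps, and both capacitors scaled by 10 kHz/Fpwm. The baseline C1/C2 values in the example are placeholders of mine, not Figure 3’s published values.

```python
# Sketch of the PWM-programming relationships described above. Only the
# stated mappings are encoded; the baseline capacitor values below are
# illustrative placeholders, not the ones shown in Figure 3.

def control_voltage(df):
    """Vc from the PWM DAC: 1.22 V at Df = 0, rising linearly to 2.44 V at Df = 1."""
    return 1.22 + 1.22 * df

def output_current(df):
    """Load current Is in amps, per the stated Is = Df relationship (0 to 1 A)."""
    return 1.0 * df

def scaled_ripple_caps(f_pwm_hz, c1_baseline, c2_baseline, f_baseline_hz=10e3):
    """Scale both ripple-filter caps by 10 kHz / Fpwm to preserve the ~6-ms,
    8-bit settling time at a different PWM frequency."""
    k = f_baseline_hz / f_pwm_hz
    return c1_baseline * k, c2_baseline * k

# Example: a 50% duty factor yields Vc = 1.83 V and Is = 0.5 A; doubling the
# PWM frequency to 20 kHz halves both capacitors (placeholder baselines shown).
print(control_voltage(0.5), output_current(0.5))
print(scaled_ripple_caps(20e3, 1e-6, 0.1e-6))
```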
A hot topic is that Q4 can be called on to dissipate more than 10 W, so don’t skimp on heatsink capacity.
Q3 is a safety shutdown feature. It removes Q1 gate drive when +5 falls below about 3 V, shutting off the current source and protecting the load when controller logic is powered down.
Figure 4 adds zero and span pots to implement a single-pass calibration for best accuracy:
- Set Df = 0% and adjust single turn ZERO trim for zero output current
- Set Df = 100% and adjust single turn CAL trim for 1.0 A output
- Done.
Figure 4 Additional zero and span pots to implement a single-pass calibration for best accuracy.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- PWM-programmed LM317 constant current source
- Low-cost precision adjustable current reference and application
- A negative current source with PWM input and LM337 output
- A high-performance current source
- Simple, precise, bi-directional current source
The post LM4041 voltage regulator impersonates precision current source appeared first on EDN.
Did you put X- and Y-capacitors on your AC input?

X- and Y-capacitors are commonly used to filter AC power-source electromagnetic interference (EMI) noise and are often referred to as safety capacitors. Here is a detailed view of these capacitors, related design practices and regulatory standards, and a profile of supporting power ICs. Bill Schweber also provides a sneak peek into how they operate in AC power line circuits.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- When the AC line meets the CFL/LED lamp
- How digital capacitor ICs ease antenna tuning
- What would you ask an entry-level analog hire?
- Active filtering: Attenuating switching-supply EMI
The post Did you put X- and Y-capacitors on your AC input? appeared first on EDN.
Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost

Within my teardown published last summer of Walmart’s “onn.”-branded original Android TV-based streaming receiver, the UHD Streaming Device:
I mentioned that I already had Google TV operating system-based successors for both the “box” and “stick” Android TV form factors (subsequently dissected by me and published last December) sitting on my shelves awaiting my teardown attention. That time is now, specifically for the onn. Google TV 4K Streaming Box I’d bought at intro in April 2023 for $19.88 (the exact same price as its Android TV-based forebear):
The sizes of the two device generations are near-identical, although it’s near-impossible to find published dimension specs for either device online, only for the retail packaging containing them. Speaking of which, a correction is in order: I’d said in my earlier teardown that the Android TV version of the device was 4.9” long and wide and 0.8” tall. It’s actually 2.8” (70 mm, to be precise) in both length and width, with a height of ~0.5” (13 mm). And the newer Google TV-based variant is ~3.1” (78 mm) long and wide and ~0.7” (18 mm) tall.
Here are more “stock” shots of the newer device that we’ll be dissecting today, along with its bundled remote control and other accessories:
Eagle-eyed readers may have already noticed the sole layout difference between the two generations’ devices. The reset switch and status LED are standalone along one side in the original Android TV version, whereas they’re at either side of, and on the same side as, the HDMI connector in the new Google TV variant. The two generations’ remote controls also vary slightly, although I bet the foundation hardware design is identical. The lower right button in the original gave user-access favoritism to HBO Max (previously HBO Go, now known as just “Max”):
whereas now it’s Paramount+ getting the special treatment (a transition which I’m guessing was motivated by the more recent membership partnership between the two companies and implemented via a relabel of that button along with an integrated-software tweak).
Next, let’s look at some “real-life” shots, beginning with the outside packaging:
Note that, in contrast to the front-of-box picture of its precursor that follows, Walmart’s now referring to it as capable of up to “4K” output resolution, versus the previous, less trendy “UHD”:
Also, it’s now called a “box”, versus a “device”. Hold that latter thought until next month…now, back to today’s patient…
The two sides are comparatively info-deficient:
The bottom marks a return to info-rich form:
While the top as usual never fails to elicit a chuckle from yours truly:
Let’s see what’s inside:
That’s quite a complex cardboard assemblage!
The first things you’ll see when you flip up the top flap:
are our patient, currently swathed in protective opaque plastic, and a quick start guide that you can find in PDF form here, both, as usual, accompanied in the photo by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes.
Below them, in the lower level of the cardboard assemblage, are the aforementioned remote control and a 1-meter (3.28 ft) HDMI cable:
Here’s the backside of the remote control; note the added sticker (versus its predecessor) above the battery compartment with re-pairing instructions, along with the differing information on the smaller sticker in the upper right corner within the battery compartment:
I realized after typing the previous words that since I hadn’t done a teardown of the remote control last time, I hadn’t taken a picture of its opened backside, either. Fortunately, it was still inhabiting my office, so…here you go!
Also originally located in the lower level of the cardboard assemblage are the AC adapter, an oval-shaped piece of double-sided adhesive for attaching the device to a flat surface, and a set of AAA batteries for the remote control:
Here’s the micro-USB plug that mates with the on-device power connector:
And here are the power adapter’s specs:
which are comparable, “wall wart” form factor variances aside, to those of its predecessor:
Finally, here are some overview images of our patient, first from above:
Here’s the micro-USB side:
This side’s bare on this generation of the device:
but, as previously mentioned, contained the status LED and reset switch in the prior generation:
They’ve moved one side over this time, straddling the HDMI connector (I realized after taking this shot and subsequently moving on to the disassembly that the status LED was hidden behind the penny; stand by for another look at it shortly!):
The last (left) side, in contrast, is bare in both generations:
Finally, here’s the device from below:
And here’s a closeup of the label, listing (among other things) the FCC ID, 2AYYS-8822K4VTG (no, I don’t know why there are 28 different FCC documents posted for this ID, either!):
Now to get inside. Ordinarily, I’d start out by peeling off that label and seeing if there are any screw heads visible underneath. But since last time’s initial focus on the gap between the two case pieces panned out, I decided to try going down that same path again:
with the same successful outcome (a reminder at the start that we’re now looking at the underside of the inside of the device):
Check out the hefty piece of metal covering roughly half of the interior and linked to the Faraday cage on the PCB, presumably for both thermal-transfer and cushioning purposes, via two spongy pieces still attached to the latter:
I’m also presuming that the metal piece adds rigidity to the overall assembly. So why doesn’t it cover the entirety of the inside? They’re not visible yet, but I’m guessing there are Bluetooth and Wi-Fi antennae somewhere whose transmit and receive potential would have been notably attenuated had there been intermediary metal shielding between them and the outside world:
See those three screws? I’m betting we can get that PCB out of the remaining top portion of the case if we remove them first:
Yep!
Before we get any further, let me show you that status LED that was previously penny-obscured:
It’s not the actual LED, of course; that’s on the PCB. It’s the emissive end of the light guide (aka, light pipe, light tube) visible in the upper left corner of the inside of the upper chassis, with its companion switch “plunger” at upper right. Note, too, that this time one (gray) of the “spongy pieces” ended up stuck to this side’s metal shielding, which once again covers only ~half of the inside area:
The other (pink) “spongy piece” is still stuck to one of the two Faraday cages on the top side of the PCB, now visible for the first time:
In the upper right corner is the aforementioned LED (cluster, actually). At bottom, as previously forecast and unencumbered by intermediary shielding thanks to their locations, are the 2.4 GHz and 5 GHz Wi-Fi antennae. Along the right edge is what I believe to be the PCB-embedded Bluetooth antenna. And as for those Faraday cages, you know what comes next:
They actually came off quite easily, leaving me cautiously optimistic that I might eventually be able to pop them back on and restore this device to full functionality (which I’ll wait to try until after this teardown is published; stay tuned for a debrief on the outcome in the comments):
Let’s zoom in and see what’s under those cage lids:
Within the upper one’s boundary are two notable ICs: a Samsung K4A8G165WC-BCTD DDR4-2666 8 Gbit SDRAM and, to its right, the system’s “brains”, an Amlogic S905Y4 app processor.
And what about the lower cage region?
This one’s an enigma. That it contains the Wi-Fi and Bluetooth transceivers and other circuitry is pretty much a given, considering its proximity to the antennae (among other factors). And it very well could be one and the same as the Askey Computer 8822CS, seemingly with Realtek wireless transceiver silicon inside, that was in the earlier Android TV version of the device. Both devices support the exact same Bluetooth (5.0) and Wi-Fi (2.4/5 GHz 802.11 a/b/g/n/ac MIMO) protocol generations, and the module packaging looks quite similar in both, albeit rotated 90° in one PCB layout versus the other:
That said, unfortunately, there’s no definitively identifying sticker atop the module this time, as existed previously. If it is the same, I hope the manufacturer did a better job with its soldering this time around!
Now let’s flip the PCB back over to the bottom side we’ve already seen before, albeit now freed from its prior case captivity:
I’ll direct your attention first to the now clearly visible reset switch at upper right, along with the now-obscured light guide at upper left. I’m guessing that the black spongy material ensures that as much as possible of the light originating at the PCB on the other side makes it outside, versus inefficiently illuminating the device interior instead.
Once again, the Faraday cage lifts off cleanly and easily:
The Faraday cage was previously located atop the PCB’s upper outlined region:
Unsurprisingly, another Samsung K4A8G165WC-BCTD DDR4-2666 8 Gbit SDRAM is there, for 2 GBytes of total system memory.
The region below it, conversely, is another enigma of this design:
Its similar outline to the others suggests that a Faraday cage should have originally been there, too. But it wasn’t; you’ve seen the pictorial proof. Did the assembler forget to include it when building this particular device? Or did the manufacturer end up deciding it wasn’t necessary at all? Dunno. What I do know is that within it is nonvolatile storage, specifically the exact same Samsung KLM8G1GETF-B041 8 GByte eMMC flash memory module that we saw last time!
More generally, what surprises me the most about this design is its high degree of commonality with its predecessor despite its evolved operating system foundation:
- Same Bluetooth and Wi-Fi generations
- Same amount and speed bin of DRAM, albeit from different suppliers, and
- Same amount of flash memory, in the same form factor, from the same supplier
The SoCs are also similar, albeit not identical. The Amlogic S905Y2 seen last time dates from 2018, runs at 1.8 GHz and is a second-generation offering (therefore the “2” at the end). This time it’s the 2022-era Amlogic S905Y4, with essentially the same CPU (quad-core Arm Cortex-A53) and GPU (Mali-G31 MP2) subsystems, and fabricated on the same 12-nm lithography process, albeit running 200 MHz faster (2 GHz). The other notable difference is the 4th-gen (therefore “4” at the end) SoC’s added decoding support for the AV1 video codec, along with both HDR10 and HDR10+ high dynamic range (HDR) support.
Amlogic also offers the Amlogic S905X4; the fundamental difference between “Y” and “X” variants of a particular SoC involves the latter’s integration of wired Ethernet support. This latter chip is found in the high-end onn. Google TV 4K Pro Streaming Device, introduced last year, more sizeable (7.71 x 4.92 x 2.71 in.) than its predecessors, and now normally selling for $49.88, although I occasionally see it on sale for ~$10 less:
The 4K Pro software-exposes two additional capabilities of the 4th-generation Amlogic S905 not enabled in the less expensive non-Pro version of the device: Dolby Vision HDR and Dolby Atmos audio. It also integrates 50% more RAM (to 3 GBytes) and 4x the nonvolatile flash storage (to 32 GBytes), along with making a generational wireless connectivity advancement (Wi-Fi 6: 2.4/5 GHz 802.11ax), embedding a microphone array, and swapping out geriatric micro-USB for USB-C. And although it’s 2.5x the price of its non-Pro sibling, everything’s relative; since Google has now obsoleted the entire Chromecast line, including the HD and 4K versions of the Chromecast with Google TV, the only Google-branded option left is the $99.99 Google TV Streamer successor.
I’ve also got an onn. Google TV 4K Pro Streaming Device sitting here which, near-term, I’ll be swapping into service in place of its Google Chromecast with Google TV (4K) predecessor; stand by for an in-use review, and eventually, I’m sure I’ll be tearing it down, too. Even nearer-term, keep an eye out for my teardown of the “stick” form factor onn. Google TV Full HD Streaming Device, currently scheduled to appear at EDN online sometime next month:
For now, I’ll close with some HDMI and micro-USB end shots, both with the front:
and backsides of the PCB pointed “up”:
Along with an invitation for you to share thoughts on anything I’ve revealed and discussed here in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Google’s Chromecast Ultra: More than just a Stadia consort
The post Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost appeared first on EDN.