EDN Network

Voice of the Engineer
Address: https://www.edn.com/

Transitioning from Industry 4.0 to 5.0: It’s not simple

Tue, 12/02/2025 - 18:35

The shift from Industry 4.0 to 5.0 is not an easy task. Industry 5.0 implementation will be complex, with connected devices and systems sharing data in real time at the edge. It encompasses a host of technologies and systems, including a high-speed network infrastructure, edge computing, control systems, IoT devices, smart sensors, AI-enabled robotics, and digital twins, all designed to work together seamlessly to improve productivity, lower energy consumption, improve worker safety, and meet sustainability goals.

Industry 4.0 to Industry 5.0. (Source: Adobe Stock)

In the November/December issue, we take a look at evolving Industry 4.0 trends and the shift to the next industrial evolution: 5.0, building on existing AI, automation, and IoT technologies with a collaboration between humans and cobots.

Technology innovations are central to future industrial automation, and the next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making, according to Jack Howley, senior technology analyst at IDTechEx. He believes the global industry will be defined by the integration of AI with robotics and IoT technologies, transforming manufacturing and logistics across industries.

As factories become smarter, more connected, and increasingly autonomous, MES, digital twins, and AI-enabled robotics are redefining smart manufacturing, according to Leonor Marques, architecture and advocacy director of Critical Manufacturing. These innovations can be better-interconnected, contributing to smarter factories and delivering meaningful, contextualized, and structured information, she said.

One of those key enabling technologies for Industry 4.0 is sensors. TDK SensEI defines Industry 4.0 by convergence, the merging of physical assets with digital intelligence. AI-enabled predictive maintenance systems will be critical for achieving the speed, autonomy, and adaptability that smart factories require, the company said.

Edge AI addresses the volume of industrial data by embedding trained ML models directly into sensors and devices, said Vincent Broyles, senior director of global sales engineering at TDK SensEI. Instead of sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated, reducing latency and bandwidth use, he said.

Robert Otręba, CEO of Grinn Global, agrees that industrial AI belongs at the edge. It delivers three key advantages: low latency and real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs, he said.

Otręba thinks edge AI will power the next wave of industrial intelligence. “Instead of sending vast streams of data off-site, intelligence is brought closer to where data is created, within or around the machine, gateway, or local controller itself.”

AI is no longer an optional enhancement, and this shift is driven by the need for real-time, contextually aware intelligence with systems that can analyze sensor data instantly, he said.

Lisa Trollo, MEMS marketing manager at STMicroelectronics, calls sensors the silent leaders driving the industrial market’s transformation, serving as the “eyes and ears” of smart factories by continuously sensing pressure, temperature, position, vibration, and more. “In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries,” she said.

Energy efficiency also plays a big role in industrial systems. Power management ICs (PMICs) are leading the way by enabling higher efficiency. In industrial and industrial IoT applications, PMICs address key power challenges, according to contributing writer Stefano Lovati. He said the use of AI techniques is being investigated to further improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.

Don’t miss the top 10 AC/DC power supplies introduced over the past year. These power supplies focus on improving efficiency and power density for industrial and medical applications. Motor drivers are also a critical component in industrial design applications as well as automotive systems. The latest motor drivers and development tools add advanced features to improve performance and reduce design complexity.

The post Transitioning from Industry 4.0 to 5.0: It’s not simple appeared first on EDN.

Expanding power delivery in systems with USB PD 3.1

Tue, 12/02/2025 - 18:00

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now, it can deliver up to 240 W over USB Type-C cables and connectors that carry power, data, and video. This capability comes from the Extended Power Range (EPR) mode introduced in the USB Implementers Forum’s USB Power Delivery Specification 3.1 (USB PD 3.1). EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.
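Those power levels fall directly out of P = V × I at the 5-A cable limit; a trivial check (illustrative only):

```python
# Extended Power Range voltage levels at the 5-A cable limit: P = V * I
for volts in (28, 36, 48):
    print(f"{volts} V x 5 A = {volts * 5} W")  # 140 W, 180 W, 240 W
```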

USB PD 3.1 also has an adjustable voltage supply mode, allowing intermediate voltages between 9 V and the charger’s highest fixed voltage, which adds flexibility in meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions, including legacy operation at 15 W (5 V/3 A) and the standard power range mode of up to 100 W (20 V/5 A).

The ability to negotiate power for each device is an important strength of this specification. A device consumes only the power it needs, which varies depending on the application; the same power management process lets each connected peripheral take only the power it requires.

The USB PD 3.1 specification has found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A USB breakout board) paired with the GUI allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC running Microsoft Windows 7 through 11 with a USB 2.0 port. The GUI displays parameters, board status, and faults, and enables user configuration.

DCP board components

With its two ports, the board contains two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE is the Microchip MCP19061, a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)

Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip, or USB hub. The MCP22301 is an integrated PD device that combines the functionality of the SAMD20 microcontroller, a low-power 32-bit Arm Cortex-M0+, with the MCP22350’s PD media access control and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding from faults but also protecting the integrity of the charging process and the data transfer (Figure 4).

Traditionally, a USB Type-C connector embeds the D+/D– data lines (USB 2.0), Rx/Tx lines for USB 3.x or USB4, configuration channel (CC) lines for charge mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery faults. It also offers battery short-to-GND (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator, which provides a 5-V rail, and an MCP1825 LDO linear regulator, which provides a 3.3-V auxiliary rail.

Board operation

The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology to supply USB loads and charge them at their required voltage within the permitted range, regardless of the input voltage. It is configured to independently regulate the output voltage and current for each USB channel (its individual charging profile) while simultaneously communicating with the USB-C-connected loads through the USB PD protocol stack.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to USB-C Port 1 (on the right side of the PCB, the upper of the two connectors), USB communication starts and the GUI displays the charging profiles under the Port 1 window.

If another USB PD load is connected to the USB-C Port 2, the Port 2 window gets populated the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The MCP19061 enables efficient power conversion, operating in buck (step-down), boost (step-up), and buck-boost modes to produce output voltages lower than, higher than, or equal to the input voltage. It provides excellent precision and efficiency in power conversion for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in nonvolatile memory, which also stores the running parameters.
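As a rough illustration of why a four-switch buck-boost stage can produce outputs below, equal to, or above its input, here is a minimal sketch of the ideal (lossless) duty-cycle relationships. The function, mode names, and example values are illustrative assumptions, not behavior taken from the MCP19061 datasheet.

```python
def ideal_duty_cycle(v_in: float, v_out: float) -> tuple[str, float]:
    """Ideal (lossless) duty cycle for a four-switch buck-boost stage.

    Illustrative only: a real controller such as the MCP19061 adds dead
    times, current limits, and a transition region between modes.
    """
    if v_out < v_in:                      # step-down: Vout = D * Vin
        return "buck", v_out / v_in
    if v_out > v_in:                      # step-up: Vout = Vin / (1 - D)
        return "boost", 1 - v_in / v_out
    return "pass-through", 1.0            # Vout = Vin

# Example: a 12-V input serving USB PD loads at 5 V, 12 V, and 20 V
for v_out in (5.0, 12.0, 20.0):
    mode, d = ideal_duty_cycle(12.0, v_out)
    print(f"Vout = {v_out:4.1f} V -> {mode:12s} D = {d:.2f}")
```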

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate or additional components. A 1-MHz I2C serial bus enables communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.

The post Expanding power delivery in systems with USB PD 3.1 appeared first on EDN.

Simple state variable active filter

Tue, 12/02/2025 - 15:00

The state variable active filter (SVAF) is an active filter you don’t see mentioned much today; however, it’s been a valuable asset for us old analog types in the past. This became especially true when cheap dual and quad op-amps became commonplace, as one can “roll their own” SVAF with just one IC package and still have an op-amp left over for other tasks!

Wow the engineering world with your unique design: Design Ideas Submission Guide

This filter’s unique features include simultaneously available low-pass (LP), high-pass (HP), and band-pass (BP) outputs, low component sensitivity, and an independently adjustable filter “Q,” all while implementing a quadratic second-order filter function with 40-dB/decade slopes. The main drawback is requiring three op-amps and a few more resistors than other active filter types.

The SVAF employs dual series-connected and scaled op-amp integrators with dual independent feedback paths, which creates a highly flexible filter architecture with the mentioned “extra” components as the downside.

With the three available LP, HP, and BP outputs, this filter seemed like a nice candidate for investigating with the Bode function available in modern DSOs. This is especially so for the newer Siglent DSO implementations that can plot three independent channels, which allows a single Bode plot with three independent plot variables: LP, HP, and BP.

Creating an SVAF with a couple of LM358 duals (I didn’t have any DIP-type quad op-amps like the LM324 directly available, which reminds me, I need to order some soon!!), a couple of 0.01-µF Mylar capacitors, and a few 10-kΩ and 1-kΩ resistors seemed like a fun project.

The SVAF natural frequency corner is simply ω0 = 1/RC (f0 = 1/2πRC), which works out to ~1.59 kHz with the mentioned component values, as shown in the notebook image in Figure 1. The filter’s “Q” was set by changing R4 and R5.

Figure 1 The author’s hand-drawn schematic, with R1 = R2, R3 = R6, and C1 = C2; resistor values are 1 kΩ and 10 kΩ, and the capacitors are 0.01 µF.
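A quick numeric check of that corner frequency, assuming the integrator resistors are the 10-kΩ parts and the capacitors are the 0.01-µF values from Figure 1 (a sketch of the standard SVAF relation, not a transcription of the author’s notebook math):

```python
import math

R = 10e3     # integrator resistor, ohms (assumed to be the 10-kOhm parts)
C = 0.01e-6  # integrator capacitor, farads

w0 = 1 / (R * C)            # natural frequency, rad/s
f0 = w0 / (2 * math.pi)     # corner frequency, Hz
print(f"f0 = {f0:.0f} Hz")  # ~1592 Hz, matching the ~1.59 kHz quoted above
```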

This produced plots for a Q of 1, 2, and 4, shown in Figure 2, Figure 3, and Figure 4, respectively, along with supporting LTspice simulations.

The DSO Bode function was set up with DSO CH1 as the input, CH2 (red) as the HP, CH3 (cyan) as the LP, and CH4 (green) as the BP. The phase responses can also be seen as the dashed color lines that correspond to the colors of the HP, LP, and BP amplitude responses.

While it is possible to include all the DSO channel phase responses, this clutters up the display too much, so on the right-hand side of each image, the only phase response I show is the BP phase (magenta) in the DSO plots.

Figure 2 The left side shows the Q =1 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =1 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 3 The left side shows the Q =2 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =2 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 4 The left side shows the Q =4 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =4 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

The Bode frequency was swept with 33 pts/dec from 10 Hz to 100 kHz using a 1-Vpp input stimulus from a LAN-enabled arbitrary waveform generator (AWG). Note how the three responses all cross at ~1.59 kHz, and the BP phase, or the magenta line for the images on the right side, crosses zero degrees here.

Extending the Bode sweep out to 1 MHz, as shown in Figure 5, takes us well beyond where you would consider utilizing an LM358. Even so, the simulation and DSO Bode measurements agree well at this range. Note how the simulation depicts the LP LM358 op-amp output resonance at ~100 kHz (cyan) and the BP phase (magenta) response.

Figure 5 The left side shows the Q =7 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =7 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

I’m honestly surprised the simulation agrees this well, considering the filter was crudely assembled on a plug-in protoboard and using the LM358 op-amps. This is likely due to the inverting configuration of the SVAF structure, as our experience has shown that inverting structures tend to behave better with regard to components, breadboard, and prototyping, with all the unknown parasitics at play!

Anyway, the SVAF is an interesting active filter capable of producing simultaneous LP, HP, and BP results. It is even capable of producing an active notch filter with an additional op-amp and a couple of resistors (requires 4 total, but with the LM324, a single package), which the interested reader can discover.

Michael A Wyatt is a Life Member of IEEE and has enjoyed electronics ever since childhood. Mike has had a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat before (semi) retiring with Wyatt Labs. During his career he accumulated 32 US patents and has published a few EDN articles, including the Best Idea of the Year in 1989.


The post Simple state variable active filter appeared first on EDN.

A budget battery charger that also elevates blood pressure

Mon, 12/01/2025 - 16:55

At the tail end of my September 1 teardown of EBL’s first-generation 8-bay battery charger:

I tacked on a one-paragraph confession, with an accompanying photo that, as usual, included a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

I’ll wrap up with a teaser photo of another, smaller, but no less finicky battery charger that I’ve also taken apart, but, due to this piece as-is ending up longer-than-expected (what else is new?), I have decided to instead save for another dedicated teardown writeup for another day:

An uncertain lineage

That day is today. And by “finicky”, as was the case with its predecessor, I was referring to its penchant for “rejecting batteries that other chargers accepted complaint-free.”

Truth be told, I can’t recall how it came into my possession in the first place, nor how long I’ve owned it (aside from a nebulous “really long time”). Whatever semblance of an owner’s manual originally came with the charger is also long gone; tedious searches of both my file cabinet and online resources were fruitless. There’s not even a company name or product code to be found anywhere on the outer device labeling, just a vague “Smart Timer Charger” moniker:

The best I’ve been able to do, thanks to Google Image Search, is come across similar-looking device matches from a company called “Vidpro Power2000” (with the second word alternatively rendered as “Power 2000”) listed on Amazon under multiple product names, such as the XP-333 when bundled with four 2,900-mAh AA NiMH batteries:

and the XP-350 with four accompanying 1,000-mAh AAA batteries, again NiMH-based:

My guess is that neither “Vidpro Power2000” nor whatever retail brand name was associated with this particular charger was actually the original manufacturer. And by the way, those three plastic “bumps” toward the top of the front panel, above the battery compartment and below the “Power2000” mark, aren’t functional, only cosmetic. The only two active LEDs are the rectangular ones at the front panel’s bottom edge, seen in action in an earlier photo.

Anyhoo, after some preparatory top, bottom, and side chassis views as supplements to the already shared front and back perspectives:

A few screws loose

Let’s work our way inside, beginning (and ending?) with the visible screw head in between the two foldable AC plug prongs:

Nope, that wasn’t enough:

Wonder what, if anything, is under the back panel sticker? A-ha:

There we are:

“Nice” unsightly blob of dried glue in the upper left corner there, eh?

No more screws, clips, or other retainers left; the PCB lifts away from the remainder of the plastic chassis straightaway:

As I noted earlier, those “three bumps” are completely cosmetic, with no functional purpose:

Dual-tone and contract manufacturer-grown

And speaking of cosmetics, the two-tone two-sided PCB is an unexpected aesthetic bonus:

As you may have already noticed from the earlier glimpse of the PCB’s backside, the trace regions are sizeable, befitting their hefty AC and DC power routing purposes and akin to those seen last time (where, come to think of it, the PCB was also two-tone for the two sides). But the PCB itself is elementary, seemingly with no embedded trace layers, therein explaining the between-regions routing jumpers that through-hole feed to the other side:

We’ve also finally found a product name: the “TL2000S” from “Samyatech”. My Google search results on the product code were fruitless; let me know in the comments if you had any better luck (I’m particularly interested in finding a PDF’d user manual). My research on the company was more fruitful, but only barely so. There are (or perhaps more accurately in this case, were) two companies that use(d) the “Samyatech” abbreviation, both named “Samya Technology” in full. One is based in Taiwan, the other is in South Korea. The former, I’m guessing, is our candidate:

Samya Technology is a manufacturer of charging solutions for consumer products. The company manufactures power banks, emergency chargers, mobile phone battery chargers, USB charging products, Solar based chargers, Secondary NiMH Batteries, Multifunction chargers, etc. The company has two production bases, one in Taiwan and the other in China.

The website associated with the main company URL, www.samyatech.com, is currently timing out for me. Internet Archive Wayback Machine snapshots suggest two more information bits:

  • The main URL used to redirect to samyatech.com.tw, which is also timing out, and
  • More generally, although I can’t read Chinese, so don’t take what I’m saying as “gospel”, it seems the company shut down at the start of the COVID-19 lockdown and didn’t reopen.

Up top is the AC-to-DC conversion circuitry, along with other passives:

And at the bottom are the aforementioned LEDs and their attached light pipes:

Back to the PCB backside, this time freed of its previous surrounding-chassis encumbrance:

That blotch of dried glue sure is ugly (not to mention, unlike its same-color counterparts on the other side that keep various components in place, of no obvious functional value), isn’t it?

Algorithmic (over)simplicity

The IC nexus of the design was a surprise (at least to me, perhaps less so to others who are already more immersed in the details of such designs):

At left is the AZ324M, a quad low-power op amp device from (judging by the company logo mark) Advanced Analog Circuits, part of BCD Semiconductor Manufacturing Limited, and subsequently acquired by Diodes Incorporated.

And at right? When I first saw the distinctive STMicroelectronics mark on one end of the package topside, I assumed I was dealing with a low-end firmware-fueled microcontroller. But I was wrong. It’s the HCF4060, a 14-stage ripple carry binary counter/divider and oscillator. As the Build Electronics Circuits website notes, “It can be used to produce selectable time delays or to create signals of different frequencies.”

This all ties to, as I’ve been able to gather from my admittedly limited knowledge and research, how basic battery chargers like this one work in the first place (along with why they tend to be so fickle). Perhaps obviously, it’s important upfront for such a charger to be able to discern whether the batteries installed in it are actually the intended rechargeable NiMH formulation.

So, it first subjects the cells to a short-duration, relatively high current pulse (referencing the HCF4060’s time delay function), then reads back their voltages. If it discerns that a cell has a higher-than-expected resistance, it assumes that this battery’s not rechargeable or is instead based on an alternative chemistry such as alkaline or NiCd…and terminates the charge cycle.
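As a rough sketch of that go/no-go heuristic (assumed behavior pieced together from the description above, not recovered firmware, and with made-up threshold and current values), the decision might look something like this:

```python
def accept_cell(v_rest: float, v_pulse: float, i_pulse: float,
                r_max_ohms: float = 0.5) -> bool:
    """Crude NiMH-detection heuristic: pulse the cell, estimate its
    internal resistance from the voltage change, and reject it if the
    resistance looks too high. Threshold and pulse current are
    illustrative assumptions, not measurements from this charger."""
    r_internal = abs(v_rest - v_pulse) / i_pulse  # Ohm's law on the sag/rise
    return r_internal <= r_max_ohms

# A healthy NiMH cell barely moves under the test pulse...
print(accept_cell(v_rest=1.32, v_pulse=1.25, i_pulse=0.5))  # True (~0.14 ohm)
# ...while a tired or non-rechargeable cell shifts much further
print(accept_cell(v_rest=1.30, v_pulse=0.90, i_pulse=0.5))  # False (~0.8 ohm)
```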

That said, rechargeable NiMH cells’ internal resistance also tends to increase with use and incremental recharge cycles. And batteries that are in an over-discharge state, whether from sitting around unused (a particular problem with early cells that weren’t based on low self-discharge architectures) or from being excessively drained by whatever device they were installed in, tend to be intolerant of elementary recharging algorithms, too.

That said, I’ve conversely in the past sometimes been able to convince this charger to accept a cell that it initially rejected, even if the battery was already “full” (if I’ve lost premises power and the charger acts flaky when the electricity subsequently starts flowing again later, for example) by popping it into an illuminated flashlight for a few minutes to drain off some of the stored electrons.

So…🤷‍♂️ And again, as I mentioned back in September, a more “intelligent” (albeit also more expensive) charger such as my La Crosse Technology BC-9009 AlphaPower is commonly much more copacetic with (including being capable of resurrecting) cells that simplistic chargers comparatively reject:

Some side-view shots in closing, including closeups:

And with that, I’ll turn it over to you for your thoughts in the comments. A reminder that I’m only nominally cognizant of analog and power topics (and truth be told, I’m probably being overly generous-of-self in even claiming that), dear readers—I’m much more of a “digital guy”—so tact in your responses is as-always appreciated! I’m also curious to poll your opinions as to whether I should bother putting the charger back together and donating it to another, as I normally do with devices I non-destructively tear down, or if it’d be better in this case to save potential recipients the hassle and instead destine it for the landfill. Let me know!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post A budget battery charger that also elevates blood pressure appeared first on EDN.

Delta-sigma demystified: Basics behind high-precision conversion

Mon, 12/01/2025 - 07:57

Delta-sigma (ΔΣ) converters may sound complex, but at their core, they are all about precision. In this post, we will peel back the layers and uncover the fundamentals behind their elegant design.

At the heart of many precision measurement systems lies the delta-sigma converter, an architecture engineered for accuracy. By trading speed for resolution, it excels in low-frequency applications where precision matters most, including instrumentation, audio, and industrial sensing. And it’s worth noting that delta-sigma and sigma-delta are interchangeable terms for the same signal conversion architecture.

Sigma-delta classic: The enduring AD7701

Let us begin with a nod to the venerable AD7701, a 16-bit sigma-delta ADC that sets a high bar for precision conversion. At its core, the device employs a continuous-time analog modulator whose average output duty cycle tracks the input signal. This modulated stream feeds a six-pole Gaussian digital filter, delivering 16-bit updates to the output register at rates up to 4 kHz.

Timing parameters—including sampling rate, filter corner, and output word rate—are governed by a master clock, sourced either externally or via an on-chip crystal oscillator. The converter’s linearity is inherently robust, and its self-calibration engine ensures endpoint accuracy by adjusting zero and full-scale references on demand. This calibration can also be extended to compensate for system-level offset and gain errors.

Data access is handled through a flexible serial interface supporting asynchronous UART-compatible mode and two synchronous modes for seamless integration with shift registers or standard microcontroller serial ports.

Introduced in the early 1990s, Analog Devices’ AD7701 helped pioneer low-power, high-resolution sigma-delta conversion for instrumentation and industrial sensing. While newer ADCs have since expanded on its capabilities, the AD7701 remains in production and continues to serve in legacy systems and precision applications where its simplicity and reliability still resonate.

The following figure illustrates the functional block diagram of this enduring 16-bit sigma-delta ADC.

Figure 1 Functional block diagram of AD7701 showcases its key architectural elements. Source: Analog Devices Inc.

Delta-sigma ADCs and DACs

Delta-sigma converters—both analog-to-digital converters (ADCs) and digital-to-analog converters (DACs)—leverage oversampling and noise shaping to achieve high-resolution signal conversion with relatively simple analog circuitry.

In a delta-sigma ADC, the input signal is sampled at a much higher rate than the Nyquist frequency and passed through a modulator that emphasizes quantization noise at higher frequencies. A digital filter then removes this noise and decimates the signal to the desired resolution.

Conversely, delta-sigma DACs take high-resolution digital data, shape the noise spectrum, and output a high-rate bitstream that is smoothed by an analog low-pass filter. This architecture excels in audio and precision measurement applications due to its ability to deliver robust linearity and dynamic range with minimal analog complexity.

Note that from here onward, the focus is exclusively on delta-sigma ADCs. While DACs share similar architectural elements, their operational context and signal flow differ significantly. To maintain clarity and relevance, DACs are omitted from this discussion—perhaps a topic for a future segment.

Inside the delta-sigma ADC

A delta-sigma ADC typically consists of two core elements: a delta-sigma modulator, which generates a high-speed bitstream, and a low-pass filter that extracts the usable signal. The modulator outputs a one-bit serial stream at a rate far exceeding the converter’s data rate.

To recover the average signal level encoded in this stream, a low-pass filter is essential; it suppresses high-frequency quantization noise and reveals the underlying low-frequency content. At the heart of every delta-sigma ADC lies the modulator itself; its output bitstream represents the input signal’s amplitude through its average value.

A block diagram of a simple analog first-order delta-sigma modulator is shown below.

Figure 2 The block diagram of a simple analog first-order delta-sigma modulator illustrates its core components. Source: Author

This modulator operates through a negative feedback loop composed of an integrator, a comparator, and a 1-bit DAC. The integrator accumulates the difference between the input signal and the DAC’s output. The comparator then evaluates this integrated signal against a reference voltage, producing a 1-bit data stream. This stream is fed back through the DAC, closing the loop and enabling continuous refinement of the output.
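For readers who want to see that loop in action, here is a minimal behavioral sketch of a first-order modulator, idealized and normalized to ±1 references; it is not a model of any particular converter:

```python
def first_order_dsm(samples):
    """Idealized first-order delta-sigma modulator.

    Accumulates the difference between the input and the fed-back 1-bit
    DAC value (the integrator), then quantizes with a comparator.
    Inputs are assumed to lie within the +/-1 reference range.
    """
    integrator = 0.0
    feedback = 0.0              # 1-bit DAC output: +1 or -1
    bits = []
    for x in samples:
        integrator += x - feedback                    # "delta", then "sigma"
        feedback = 1.0 if integrator >= 0 else -1.0   # comparator decision
        bits.append(feedback)
    return bits

# A DC input of 0.25 yields a bitstream whose average is ~0.25
stream = first_order_dsm([0.25] * 4096)
print(sum(stream) / len(stream))
```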

Following the delta-sigma modulator, the 1-bit data stream undergoes decimation via a digital filter (decimation filter). This process involves data averaging and sample rate reduction, yielding a multi-bit digital output. Decimation concentrates the signal’s relevant information into a narrower bandwidth, enhancing resolution while suppressing quantization noise within the band of interest.
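Continuing the sketch above, a crude decimation stage is just a block average followed by sample-rate reduction. Real converters use multi-stage sinc or FIR filters, so treat this purely as an illustration:

```python
def decimate_by_average(bits, osr=64):
    """Average non-overlapping blocks of 'osr' modulator bits and keep one
    output word per block: a crude stand-in for a sinc/FIR decimation filter."""
    return [sum(bits[i:i + osr]) / osr
            for i in range(0, len(bits) - osr + 1, osr)]

words = decimate_by_average(stream, osr=64)  # 'stream' from the sketch above
print(words[:4])                             # each word lands close to the 0.25 DC input
```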

It’s no secret to most engineers that second-order delta-sigma ADCs push noise shaping further by using two integrators in the modulator loop. This deeper shaping shifts quantization noise farther into high frequencies, improving in-band resolution at a given oversampling ratio.

While the design adds complexity, it enhances signal fidelity and eases post-filtering demands. Second-order modulators are common in precision applications like audio and instrumentation, though stability and loop tuning become more critical as order increases.
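To put rough numbers on that benefit, the textbook estimate of the ideal in-band signal-to-quantization-noise ratio for an L-th-order modulator with a B-bit quantizer and oversampling ratio OSR can be computed as below. This is the idealized figure only; it ignores thermal noise, finite op-amp gain, and loop stability limits.

```python
import math

def ideal_sqnr_db(order: int, osr: int, quantizer_bits: int = 1) -> float:
    """Textbook ideal SQNR for an L-th-order delta-sigma modulator:
    6.02*B + 1.76 - 10*log10(pi^(2L) / (2L + 1)) + (20L + 10)*log10(OSR)."""
    L, B = order, quantizer_bits
    return (6.02 * B + 1.76
            - 10 * math.log10(math.pi ** (2 * L) / (2 * L + 1))
            + (20 * L + 10) * math.log10(osr))

for L in (1, 2):
    print(f"order {L}, OSR 256: {ideal_sqnr_db(L, 256):.1f} dB")
# First order gains ~9 dB per octave of OSR; second order gains ~15 dB.
```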

Well, at its core, the delta-sigma ADC represents a seamless integration of analog and digital processing. Its ability to achieve high-resolution conversion stems from the coordinated use of oversampling, noise shaping, and decimation—striking a delicate balance between speed and precision.

Delta-sigma ADCs made approachable

Although delta-sigma conversion is a complex process, several prewired ADC modules—built around popular, low-cost ICs like the HX711, ADS1232/34, and CS1237/38—make experimentation remarkably accessible. These chips offer high-resolution conversion with minimal external components, ideal for precision sensing and weighing applications.

Figure 3 A few widely used modules simplify delta-sigma ADC practice, even for those just starting out. Source: Author

Delta-sigma vs. flash ADCs vs. SAR

Most of you already know this, but flash ADCs are the speed demons of the converter world—using parallel comparators to achieve ultra-fast conversion, typically at the expense of resolution.

Flash ADCs and delta-sigma architectures serve distinct roles, with conversion rates differing by up to two orders of magnitude. Delta-sigma ADCs are ideal for low-bandwidth applications—typically below 1 MHz—where high resolution (12 to 24 bits) is required. Their oversampling approach trades speed for precision, followed by filtering to suppress quantization noise. This also simplifies anti-aliasing requirements.

While delta-sigma ADCs excel in resolution, they are less efficient for multichannel systems. The architecture may use sampled-data modulators or continuous-time filters; the latter shows promise for higher conversion rates—potentially reaching hundreds of Msps—but with lower resolution (6 to 8 bits). Still in early R&D, continuous-time delta-sigma designs may challenge flash ADCs in mid-speed applications.

Interestingly, flash ADCs can also serve as internal building blocks within delta-sigma circuits to boost conversion rates.

Also, successive approximation register (SAR) ADCs sit comfortably between flash and delta-sigma designs, offering a practical blend of speed, resolution, and efficiency. Unlike flash ADCs, which prioritize raw speed using parallel comparators, SAR converters use a binary search approach that is slower but far more power-efficient.

Compared to delta-sigma ADCs, SAR designs avoid oversampling and complex filtering, making them ideal for moderate-resolution, real-time applications. Each architecture has its sweet spot: flash for ultra-fast, low-resolution tasks; delta-sigma for high-precision, low-bandwidth needs; and SAR for balanced performance across a wide range of embedded systems.

Delta-sigma converters elegantly bridge the analog and digital worlds, offering high-resolution performance through clever noise shaping and oversampling. Whether you are designing precision instrumentation or exploring audio fidelity, understanding their principles unlocks a deeper appreciation for modern signal processing.

Curious how these concepts translate into real-world design choices? Join the conversation—share your favorite delta-sigma use case or challenge in the comments. Let us map the noise floor together and surface the insights that matter.

T.K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.


The post Delta-sigma demystified: Basics behind high-precision conversion appeared first on EDN.

Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback

Fri, 11/28/2025 - 15:00

Efficient battery management becomes increasingly important as demand for portable power continues to rise, especially since balanced cells help ensure safety, high performance, and a longer battery life. When cells are mismatched, the battery pack’s total capacity decreases, leading to the overcharging of some cells and undercharging of others—conditions that accelerate degradation and reduce overall efficiency. The challenge is how to maintain an equal voltage and charge among the individual cells.

Typically, it’s possible to achieve cell balancing through either passive or active methods. Passive balancing, the more common approach because of its simplicity and low cost, equalizes cell voltages by dissipating excess energy from higher-voltage cells through resistor or FET networks. While effective, this process wastes energy as heat.

In contrast, active cell balancing redistributes excess energy from higher-voltage cells to lower-voltage ones, improving efficiency and extending battery life. Implementing active cell balancing involves an isolated, bidirectional power converter capable of both charging and discharging individual cells.

This Power Tip presents an active cell-balancing design based on a bidirectional flyback topology and outlines the control circuitry required to achieve a reliable, high-performance solution.

System architecture

In a modular battery system, each module contains multiple cells and a corresponding bidirectional converter (the left side of Figure 1). This arrangement enables any cell within Module 1 to charge or discharge any cell in another module, and vice versa. Each cell connects to an array of switches and control circuits that regulate individual charge and discharge cycles.

Figure 1 A modular battery system block diagram with multiple cells and a bidirectional converter per module, where any cell within Module 1 can charge/discharge any cell in another module. Each cell connects to an array of switches and control circuits that regulate individual charge/discharge cycles. Source: Texas Instruments

Bidirectional flyback reference design

The block diagram in Figure 2 illustrates the design of a bidirectional flyback converter for active cell balancing. One side of the converter connects to the bus voltage (18 V to 36 V), which could be the top of the battery cell stack, while the other side connects to a single battery cell (3.0 V to 4.2 V). Both the primary and secondary sides employ flyback controllers, allowing the circuit to operate bidirectionally, charging or discharging the cell as required.

Figure 2 A bidirectional flyback for active cell balancing reference design. Source: Texas Instruments

A single control signal defines the power-flow direction, ensuring that both flyback integrated circuits (ICs) never operate simultaneously. The design delivers up to 5 A of charge or discharge current, protecting the cell while maintaining efficiency above 80% in both directions (Figure 3).

Figure 3 Efficiency data for charging (left) and discharging (right). Source: Texas Instruments

Charge mode (power from Vbus to Vcell)

In charge mode, the control signal enables the charge controller, allowing Q1 to act as the primary FET. D1 is unused. On the secondary side, the discharge controller is disabled, and Q2 is unused. D2 serves as the output diode providing power to the cell. The secondary side implements constant-current and constant-voltage loops to charge the cell at 5 A until reaching the programmed voltage (3.0 V to 4.2 V) while keeping the discharge controller disabled.
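Conceptually, the constant-current/constant-voltage handoff in charge mode can be sketched as follows. This is a behavioral illustration using the 5-A and 4.2-V figures quoted above as assumed setpoints, not the reference design’s actual control loops:

```python
def charge_setpoints(v_cell: float, v_target: float = 4.2, i_max: float = 5.0):
    """Behavioral CC/CV sketch: command full current until the cell reaches
    the programmed voltage, then regulate voltage and let the current taper.
    Setpoints are assumptions for illustration."""
    if v_cell < v_target:
        return {"mode": "constant-current", "i_set_A": i_max}
    return {"mode": "constant-voltage", "v_set_V": v_target}

print(charge_setpoints(3.6))  # constant-current at 5 A
print(charge_setpoints(4.2))  # constant-voltage at 4.2 V, current tapers off
```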

Discharge mode (power from Vcell to Vbus)

Just the opposite happens in discharge mode; the control signal enables the discharge controller and disables the charge controller. Q2 is now the primary FET, and D2 is inactive. D1 serves as the output diode while Q1 is unused. The cell side enforces an input current limit to prevent discharge of the cell above 5 A. The Vbus side features a constant-voltage loop to ensure that the Vbus remains within its setpoint.

Auxiliary power and bias circuits

The design also integrates two auxiliary DC/DC converters to maintain control functionality under all operating conditions. On the bus side, a buck regulator generates 10 V to bias the flyback IC and the discrete control logic that determines the charge and discharge direction. On the cell side, a boost regulator steps the cell voltage up to 10 V to power its controller and ensure that the control circuit is operational even at low cell voltages.

Multimodule operation

Figure 4 illustrates how multiple battery modules interconnect through the reference design’s units. The architecture allows an overcharged cell from a higher-voltage module, shown at the top of the figure, to transfer energy to an undercharged cell in any other module. The modules do not need to be connected adjacently. Energy can flow between any combination of cells across the pack.

Figure 4 Interconnection of battery modules using TI’s reference design for bidirectional balancing. Source: Texas Instruments

Future improvements

For higher-power systems (20 W to 100 W), adopting synchronous rectification on the secondary and an active-clamp circuit on the primary will reduce losses and improve efficiency, thus enhancing performance.

For systems exceeding 100 W, consider alternative topologies such as forward or inductor-inductor-capacitor (LLC) converters. Regardless of topology, you must ensure stability across the wide-input and cell-voltage ranges characteristic of large battery systems.

Modern multicell battery systems

The bidirectional flyback-based active cell balancing approach offers a compact, efficient, and scalable solution for modern multicell battery systems. By recycling energy between cells rather than dissipating this energy as heat, the design improves both energy efficiency and battery longevity. Through careful control-loop optimization and modular scalability, this architecture enables high-performance balancing in portable, automotive, and renewable energy applications.

Sarmad Abedin is currently a systems engineer with Texas Instruments on the power design services (PDS) team, working on both automotive and industrial power supplies. He has been designing power supplies for the past 14 years and has experience in both isolated and non-isolated power supply topologies. He graduated from Rochester Institute of Technology in 2011 with his bachelor’s degree.

 


The post Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback appeared first on EDN.

Does (wearing) an Oura (smart ring) a day keep the doctor away?

Thu, 11/27/2025 - 15:00

Before diving into my on-finger impressions of Oura’s Gen3 smart ring, as I’d promised I’d do back in early September, I thought I’d start off by revisiting some of the business-related topics I mentioned in that initial post in the series. First off, I mentioned at the end of that post that Oura had just obtained a favorable final judgment from the United States International Trade Commission (ITC) that both China-based RingConn and India-based Ultrahuman had infringed on its patent portfolio. In the absence of licensing agreements or other compromises, both Oura competitors would be banned from further product shipments to and sales of their products in the US after a final 60-day review period ended on October 21, although retailer partners could continue to sell their existing inventory until it was depleted.

Product evolutions and competition developments

I’m writing these words 10 days later, on Halloween, and there’ve been some interesting developments. I’d intentionally waited until after October 21 in order to see how both RingConn and Ultrahuman would react, as well as to assess whether patent challenges would pan out. As for Ultrahuman, a blog post posted shortly before the deadline (and updated the day after) made it clear that the company wasn’t planning on caving:

  • A new ring design is already in development and will launch in the U.S. as soon as possible.
  • We’re actively seeking clarity on U.S. manufacturing from our Texas facility, which could enable a “Made in USA” Ring AIR in the near future.
  • We also eagerly await the U.S. Patent and Trademark Office’s review of the validity of Oura’s ‘178 patent, which it acquired in 2023, and is central to the ITC ruling. A decision is expected in December.

To wit, per a screenshot I captured the day after the deadline, Wednesday, October 22, sales through the manufacturer’s website to US customers had ceased.

And surprisingly, inventory wasn’t listed as available for sale on Amazon’s website, either.

RingConn conversely took a different tack. On October 22, again, when I checked, the company was still selling its products to US customers both from its own website and Amazon’s:

This situation baffled me until I hit up the company subreddit and saw the following:

Dear RingConn Family,

We’d like to share some positive news with you: RingConn, a leading smart ring innovator, has reached a settlement with ŌURA regarding a patent dispute. Under the terms of the agreement, RingConn’s software and hardware products will remain available in the U.S. market, without affecting its market presence.

See the company’s Reddit post for the rest of the message. And here’s the official press release.

Secondly, as I’d noted in my initial coverage:

One final factor to consider, which I continue to find both surprising and baffling, is the fact that none of the three manufacturers I’ve mentioned here seems to support having more than one ring actively associated with an account (and therefore cloud-logging and archiving data) at the same time. To press a second ring into service, you need to manually delete the first one from your account. The lack of multi-ring support is a frequent cause of complaints on Reddit and elsewhere, from folks who want to accessorize multiple smart rings just as they do with normal rings, varying color and style to match outfits and occasions. And the fiscal benefit to the manufacturers of such support is intuitively obvious, yes?

It turns out I just needed to wait a few weeks. On October 1, Oura announced that multiple Oura Ring 4 styles would soon be supported under a single account. Quoting the press release, “Pairing and switching among multiple Oura Ring 4 devices on a single account will be available on iOS starting Oct. 1, 2025, and on Android starting Oct. 20, 2025.” That said, a crescendo of complaints on Reddit and elsewhere suggests an implementation delay; I’m 11 days past October 20 at this point and haven’t seen the promised Android app update yet, and at least some iOS users have waited a month at this point. Oura PR told me that I should be up and running by November 5; I’ll follow up in the comments as to whether this actually happened.

Charging options

That same day, by the way, Oura also announced its own branded battery-inclusive charger case, an omission that I’d earlier noted versus competitor RingConn:

 

That said, again quoting from the October 1 press release (with bolded emphasis mine), the “Oura Ring 4 Charging Case is $99 USD and will be available to order in the coming months.” For what it’s worth, the $28.99 (as I write these words) Doohoeek charging case for my Gen3 Horizon:

is working like a charm:

Behind it, by the way, is the upgraded Doohoeek $33.29 charging case for my Oura Ring 4, whose development story (which I got straight from the manufacturer) was not only fascinating in its own right but also gave me insider insight into how Oura has evolved its smart ring charging scheme for the smart ring over time. More about that soon, likely next month.

 

And here’s my Gen3 on the factory-supplied, USB-C-fed standard charger, again with its Ring 4 sibling behind it:

General impressions

As for the ring itself, here’s what it looks like on my left index finger, with my wedding band two digits over from it on the same hand:

And here again are all three rings I’ve covered in in-depth writeups to date: the Oura Gen3 Horizon at left, Ultrahuman Ring AIR in the middle and RingConn Gen 2 at right:

Like RingConn’s product:

both the Heritage:

and my Horizon variant of the Oura Gen3:

 

include physical prompting to achieve and maintain proper placement: sensor-inclusive “bump” guides on both sides of the inside underside, which the Oura Ring 4 notably dispenses with:

 

I’ve already shown you what the red glow of the Gen3 intermediary SpO2 (oxygen saturation) sensor looks like when in operation, specifically when I’m able to snap a photo of it soon enough after waking to catch it still in action before it discerns that I’ve stirred and turns off:

And here’s what the two green-color pulse rate sensors, one on either side of their SpO2 sibling:

look like in action:

Generally speaking, the Oura Gen3 feels a lot like the Ultrahuman Ring AIR; they both drop between 15% and 20% of their battery charge every 24 hours, leading to a sub-week operating life between recharges. That said, I will give Oura well-deserved kudos for its software user interface, which is notably more informative, more intuitive, and broadly easier to use than its RingConn and Ultrahuman counterparts. Then again, Oura’s been around the longest and has the largest user base, so it’s had more time (and more feedback) to fine-tune things. And cynically speaking, given Oura’s $5.99/month or $69.99/year subscription fee, versus competitors’ free apps, it’d better be better!

Software insights

In closing, and in fairness, regarding that subscription, it’s not strictly required to use an Oura smart ring. That said, the information supplied without it:

is a pale subset of the norm:

What I’m showing in the overview screen images is a fraction of the total information captured and reported, but it’s all well-organized and intuitive. And as you can see on that last one, the Oura smart ring is adept at sensing even brief catnaps 😀

With that, and as I’ve already alluded, I now have an Oura Ring 4 on-finger—two of them, in fact, one of which I’ll eventually be tearing down—which I aspire to write up shortly, sharing my impressions both versus its Gen3 predecessor and its competitors. Until then, I as-always welcome your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post Does (wearing) an Oura (smart ring) a day keep the doctor away? appeared first on EDN.

Inside the battery: A quick look at internal resistance

Thu, 11/27/2025 - 11:14

Ever wondered why a battery that reads full voltage still struggles to power your device? The answer often lies in its internal resistance. This hidden factor affects how efficiently a battery delivers current, especially under load.

In this post, we will briefly examine the basics of internal resistance—and why it’s a critical factor in real-world performance, from handheld flashlights to high-power EV drivetrains.

What’s internal resistance and why it matters

Every battery has some resistance to the flow of current within itself—this is called internal resistance. It’s not a design flaw, but a natural consequence of the materials and construction. The electrolyte, electrodes, and even the connectors all contribute to it.

Internal resistance causes voltage to drop when the battery delivers current. The higher the current draw, the more noticeable the drop. That is why a battery might read 1.5 V at rest but dip below 1.2 V under load—and why devices sometimes shut off even when the battery seems “full.”

Here is what affects it:

  • Battery type: Alkaline, lithium-ion, and NiMH cells all have different internal resistances.
  • Age and usage: Resistance increases as the battery wears out.
  • Temperature: Cold conditions raise resistance, reducing performance.
  • State of charge: A nearly empty battery often shows higher resistance.

Building on that, internal resistance gradually increases as batteries age. This rise is driven by chemical wear, electrode degradation, and the buildup of reaction byproducts. As resistance climbs, the battery becomes less efficient, delivers less current, and shows more voltage drop under load—even when the resting voltage still looks healthy.

Digging a little deeper—focusing on functional behavior under load—internal resistance is not just a single value; it’s often split into two components. Ohmic resistance comes from the physical parts of the battery, like the electrodes and electrolyte, and tends to stay relatively stable.

Polarization resistance, on the other hand, reflects how the battery’s chemical reactions respond to current flow. It’s more dynamic, shifting with temperature, charge level, and discharge rate. Together, these resistances shape how a battery performs under load, which is why two batteries with identical voltage readings might behave very differently in real-world use.

Internal resistance in practice

Internal resistance is a key factor in determining how much current a battery can deliver. When internal resistance is low, the battery can supply a large current. But if the resistance is high, the current it can provide drops significantly. Also, the higher the internal resistance, the greater the energy loss—this loss manifests as heat. That heat not only wastes energy but also accelerates the battery’s degradation over time.
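For a sense of scale, the power wasted inside the cell is simply I² × R; a tiny sketch with assumed numbers:

```python
def heat_loss_watts(i_load_amps: float, r_int_ohms: float) -> float:
    """Power dissipated inside the battery as heat: P = I^2 * R_int."""
    return i_load_amps ** 2 * r_int_ohms

# Hypothetical example: 20 A drawn through 50 milliohms of internal resistance
print(heat_loss_watts(20.0, 0.050))  # 20.0 W lost as heat inside the pack
```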

The figure below illustrates a simplified electrical model of a battery. Ideally, internal resistance would be zero, enabling maximum current flow without energy loss. In practice, however, internal resistance is always present and affects performance.

Figure 1 Illustration of a battery’s internal configuration highlights the presence of internal resistance. Source: Author

Here is a quick side note regarding resistance breakdown. Focusing on material-level transport mechanisms, battery internal resistance comprises two primary contributors: electronic resistance, driven by electron flow through conductive paths, and ionic resistance, governed by ion transport within the electrolyte.

The total effective resistance reflects their combined influence, along with interfacial and contact resistances. Understanding this layered structure is key to diagnosing performance losses and carrying out design improvements.

In today’s EVs, elevated internal resistance hampers performance by increasing heat generation during acceleration and fast charging, ultimately reducing driving range and accelerating cell degradation.

Fortunately, several techniques are available for measuring a battery’s internal resistance, each suited to different use cases and levels of diagnostic depth. Common methods include direct current internal resistance (DCIR), alternating current internal resistance (ACIR), and electrochemical impedance spectroscopy (EIS).

And there is a two-tier variation of the standard DCIR technique, which applies two sequential discharge loads with distinct current levels and durations. The battery is first discharged at a low current for several seconds, followed by a higher current for a shorter interval. Resistance values are calculated using Ohm’s law, based on the voltage drops observed during each load phase.

Analyzing the voltage response under these conditions can reveal more nuanced resistive behavior, particularly under dynamic loads. However, the results remain strictly ohmic and do not provide direct information about the battery’s state of charge (SoC) or capacity.
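
For readers who like to script their bench math, here is a minimal sketch of that two-step calculation in Python. The function name and the example cell values are hypothetical, and the step-difference form of Ohm’s law shown here is one common convention rather than the only valid one.

```python
# A minimal sketch of the two-step DC-load (DCIR-style) estimate described above.
# The function name and the example cell values are hypothetical.
def dcir_two_step(v_low_load, i_low, v_high_load, i_high):
    """Estimate DC internal resistance (ohms) from two sequential discharge
    steps: a longer low-current step followed by a short high-current step."""
    # Ohm's law applied to the change between the two load phases
    return (v_low_load - v_high_load) / (i_high - i_low)

# Hypothetical Li-ion cell: 3.95 V at 0.5 A, sagging to 3.87 V at 4.5 A
r_dc = dcir_two_step(3.95, 0.5, 3.87, 4.5)
print(f"R_dc ≈ {r_dc * 1000:.0f} mΩ")  # ≈ 20 mΩ
```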

Many branded battery testers, such as some product series from Hioki, apply a constant AC current at a measurement frequency of 1 kHz and determine the battery’s internal resistance by measuring the resulting voltage with an AC voltmeter (AC four-terminal method).

Figure 2 The Hioki BT3554-50 employs the AC-IR method to achieve high-precision internal resistance measurement. Source: Hioki

The 1,000-hertz (1 kHz) ohm test is a widely used method for measuring internal resistance. In this approach, a small 1-kHz AC signal is applied to the battery, and resistance is calculated using Ohm’s law based on the resulting voltage-to-current ratio.

It’s important to note that AC and DC methods often yield different resistance values due to the battery’s reactive components. Both readings are valid—AC impedance primarily reflects the instantaneous ohmic resistance, while DC measurements capture additional effects such as charge transfer and diffusion.

Notably, the DC load method remains one of the most enduring—and nostalgically favored—approaches for measuring a battery’s internal resistance. Despite the rise of impedance spectroscopy and other advanced techniques, its simplicity and hands-on familiarity continue to resonate with seasoned engineers.

It involves briefly applying a load—typically for a second or longer—while measuring the voltage drop between the open-circuit voltage and the loaded voltage. The internal resistance is then calculated using Ohm’s law by dividing the voltage drop by the applied current.

A quick calculation: To estimate a battery’s internal resistance, you can use a simple voltage-drop method when the open-circuit voltage, loaded voltage, and current draw are known. For example, if a battery reads 9.6 V with no load and drops to 9.4 V under a 100-mA load:

Internal resistance = (9.6 V − 9.4 V) / 0.1 A = 2 Ω

This method is especially useful in field diagnostics, where direct resistance measurements may not be practical, but voltage readings are easily obtained.
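
The same single-step estimate is trivial to capture as a small helper; this sketch simply reuses the worked numbers above, and the function name is arbitrary.

```python
# The single-step voltage-drop estimate described above, as a tiny helper.
def internal_resistance(v_open, v_loaded, i_load):
    """Return internal resistance (ohms) from the open-circuit voltage,
    the loaded voltage, and the load current."""
    return (v_open - v_loaded) / i_load

print(f"{internal_resistance(9.6, 9.4, 0.1):.1f} Ω")  # 2.0 Ω
```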

In simplified terms, internal resistance can be estimated using several proven techniques. However, the results are influenced by the test method, measurement parameters, and environmental conditions. Therefore, internal resistance should be viewed as a general diagnostic indicator—not a precise predictor of voltage drop in any specific application.

Bonus blueprint: A closing hardware pointer

For internal resistance testing, consider the adaptable e-load concept shown below. It forms a simple, reliable current sink for controlled battery discharge, offering a practical starting point for further refinement. As you know, the DC load test method allows an electronic load to estimate a battery’s internal resistance by observing the voltage drop during a controlled current draw.

Figure 3 The blueprint presents an electronic load concept tailored for internal resistance measurement, pairing a low-RDS(on) MOSFET with a precision load resistor to form a controlled current sink. Source: Author

Now it’s your turn to build, tweak, and test. If you’ve got refinements, field results, or alternate load strategies, share them in the comments. Let’s keep the circuit conversation flowing.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Inside the battery: A quick look at internal resistance appeared first on EDN.

NB-IoT module adds built-in geolocation capabilities

Wed, 11/26/2025 - 16:21

The ST87M01-1301 NB-IoT wireless module from ST provides narrowband cellular connectivity along with both GNSS and Wi-Fi–based positioning for outdoor and indoor geolocation. Its integrated GNSS receiver enables precise location tracking using GPS constellations, while the Wi-Fi positioning engine delivers fast, low-power indoor location services by scanning nearby 802.11b access points and leveraging third-party geocoding providers.

As the latest member of the ST87M01 series of NB-IoT (LTE Cat NB2) industrial modules, this variant supports multi-frequency bands with extended multi-regional coverage. Its compact, low-power design makes it well suited for smart IoT applications such as asset tracking, environmental monitoring, smart metering, and remote healthcare. A 10.6×12.8-mm, 51-pin LGA package further enables miniaturization in space-constrained designs.

ST provides an evaluation kit that includes a ready-to-use Conexa IoT SIM card and two SMA antennas, helping developers quickly prototype and validate NB-IoT connectivity in real-world conditions. This is supported by an expanding ecosystem featuring the Easy-Connect software library and design examples.

ST87M01 series product page

STMicroelectronics

The post NB-IoT module adds built-in geolocation capabilities appeared first on EDN.

Boost controller powers brighter automotive displays

Wed, 11/26/2025 - 16:21

A 60-V boost controller from Diodes, the AL3069Q packs four 80-V current-sink channels for driving LED backlights in automotive displays. Its adaptive boost-voltage control allows operation from a 4.5-V to 60-V input range—covering common automotive power rails at 12 V, 24 V, and 48 V—and its switching frequency is adjustable from 100 kHz to 1 MHz.

The AL3069Q’s four current-sink channels are set using an external resistor, providing typical ±0.5% current matching between channels and devices to ensure uniform brightness across the display. Each channel delivers 250 mA continuous or up to 400 mA pulsed, enabling support for a range of display sizes and LED panels up to 32-inch diagonal, such as those used in infotainment systems, instrument clusters, and head-up displays. PWM-to-analog dimming, with a minimum duty cycle of 1/5000 at 100 Hz, improves brightness control while minimizing LED color shift.

Diodes’ AL3069Q offers robust protection and fault diagnostics, including cycle-by-cycle current limit, soft-start, UVLO, programmable OVP, OTP, and LED-open/-short detection. Additional safeguards cover sense resistor, Schottky diode, inductor, and VOUT faults, with a dedicated pin to signal any fault condition.

The automotive-compliant controller costs $0.54 each in 1000-unit quantities.

AL3069Q product page 

Diodes

The post Boost controller powers brighter automotive displays appeared first on EDN.

Hybrid device elevates high-energy surge protection

Wed, 11/26/2025 - 16:21

TDK’s G series integrates a metal oxide varistor and a gas discharge tube into a single device to provide enhanced surge protection. The two elements are connected in series, combining the strengths of both technologies to deliver greater protection than either component can offer on its own. This hybrid configuration also reduces leakage current to virtually zero, helping extend the overall lifetime of the device.

The G series comprises two leaded variants—the G14 and G20—with disk diameters of 14 mm and 20 mm, respectively. G14 models support AC operating voltages from 50 V to 680 V, while G20 versions extend this range to 750 V. They can handle maximum surge currents of 6,000 A (G14) and 10,000 A (G20) for a single 8/20-µs pulse, and absorb up to 200 J (G14) or 490 J (G20) of energy.

Operating over a temperature range of –40 °C to +105 °C, the G series is suitable for use in power supplies, chargers, appliances, smart metering, communication systems, and surge protection devices. Integrating both protection elements into a single, epoxy-coated 2-pin package simplifies design and reduces board space compared to using discrete components.

To access the datasheets for the G14 series (ordering code B72214G) and the G20 series (B72220G), click here.

TDK Electronics 

The post Hybrid device elevates high-energy surge protection appeared first on EDN.

Power supplies enable precise DC testing

Wed, 11/26/2025 - 16:20

R&S has launched the NGT3600 series of DC power supplies, delivering up to 3.6 kW for a wide range of test and measurement applications. This versatile line provides clean, stable power with low voltage and current ripple and noise. With a resolution of 100 µA for current and 1 mV for voltage, as well as adjustable output voltages up to 80 V, the supplies offer both precision and flexibility.

The dual-channel NGT3622 combines two fully independent 1800-W outputs in a single compact instrument. Its channels can be connected in series or parallel, allowing either the voltage or the current to be doubled. For applications requiring even more power, up to three units can be linked to provide as much as 480 V or 300 A across six channels. The NGT3622 supports current and voltage testing under load, efficiency measurements, and thermal characterization of components such as DC/DC converters, power supplies, motors, and semiconductors.

Engineers can use the NGT3600 series to test high-current prototypes such as base stations, validate MPPT algorithms for solar inverters, and evaluate charging-station designs. In the automotive sector, the series supports the transition to 48-V on-board networks by simulating these networks and powering communication systems, sensors, and control units during testing.

All models in the NGT3600 series are directly rack-mountable with no adapter required. They will be available beginning January 13, 2026, from R&S and selected distribution partners. For more information, click here.

Rohde & Schwarz 

The post Power supplies enable precise DC testing appeared first on EDN.

Space-ready Ethernet PHYs achieve QML Class P

Wed, 11/26/2025 - 16:20

Microchip’s two radiation-tolerant Ethernet PHY transceivers are the company’s first devices to earn QML Class P/ESCC 9000P qualification. The single-port VSC8541RT and quad-port VSC8574RT support data rates up to 1 Gbps, enabling dependable data links in mission-critical space applications.

Achieving QML Class P/ESCC 9000P certification involves rigorous testing—such as Total Ionizing Dose (TID) and Single Event Effects (SEE) assessments—to verify that devices tolerate the harsh radiation conditions of space. The certification also ensures long-term availability, traceability, and consistent performance.

The VSC8541RT and VSC8574RT withstand 100 krad(Si) TID and show no single-event latch-up at LET levels below 78 MeV·cm²/mg at 125 °C. The VSC8541RT integrates a single Ethernet copper port supporting MII, RMII, RGMII, and GMII MAC interfaces, while the VSC8574RT includes four dual-media copper/fiber ports with SGMII and QSGMII MAC interfaces. Their low power consumption and wide operating temperature ranges make them well-suited for missions where thermal constraints and power efficiency are key design considerations.

VSC8541RT product page  

VSC8574RT product page 

Microchip Technology 

The post Space-ready Ethernet PHYs achieve QML Class P appeared first on EDN.

Active current mirror

Wed, 11/26/2025 - 15:00

Current mirrors are a commonly useful circuit function, and sometimes high precision is essential. The challenge of getting current mirrors to be precise has created a long list of tricks and techniques. The list includes matched transistors, monolithic transistor multiples, emitter degeneration, fancy topologies with extra transistors, e.g., Wilson, cascode, etc.

But when all else fails and precision just can’t suffer any compromise, Figure 1 shows the nuclear option. Just add a rail-to-rail I/O (RRIO) op-amp!

Figure 1 An active current sink mirror. Assuming resistor equality and negligible A1 offset error, A1 feedback forces Q1 to maintain accurate current sink I/O equality I2 = I1.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The theory of operation of the active current mirror (ACM) couldn’t be more straightforward. Vr, which is equal to I1·R, is wired to A1’s noninverting input, forcing A1 to drive Q1 to conduct I2 such that I2·R = I1·R.

Therefore, if the resistors are equal, A1’s accuracy-limiting parameters (offset voltage, gain-bandwidth, bias and offset currents, etc.) are adequate, and Q1 doesn’t saturate, then I2 can equal I1 just as precisely as you like.

Obviously, Vr must be >> Voffset, and A1’s output span must be >> Q1’s threshold even after subtracting Vr.
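
To get a feel for the error budget, here is a rough sketch that assumes the simplified feedback model I2 ≈ (I1·R1 + Vos)/R2 implied by the description above; the component values are hypothetical examples, not taken from the design.

```python
# Rough ACM error estimate, assuming the simple model I2 = (I1*R1 + Vos)/R2.
# All component values are hypothetical examples.
def acm_output_current(i1, r1, r2, vos):
    """Sink current I2 set by input current I1, sense resistors R1/R2,
    and op-amp input offset voltage Vos."""
    return (i1 * r1 + vos) / r2

i1 = 10e-3    # 10-mA programmed current
r = 100.0     # nominal 100-Ω sense resistors
vos = 25e-6   # 25-µV worst-case offset for a precision RRIO op-amp

# 0.1% resistor mismatch in each resistor; offset contributes only ~0.0025%
i2 = acm_output_current(i1, r * 1.001, r * 0.999, vos)
print(f"I2 = {i2 * 1e3:.4f} mA, error = {(i2 / i1 - 1) * 100:.3f} %")
```

With 0.1% resistors, the ratio mismatch dominates the offset term by roughly two orders of magnitude, which is why the post-assembly trim discussed below can be worthwhile.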

Substitute a PFET for Figure 1’s NFET, and a current-sourcing mirror results, as shown in Figure 2.

Figure 2 An active current source mirror. This is identical to Figure 1, except this Q1 is a PFET and the polarities are swapped.

ACM precision can be better than that of easily available sense resistors. So, a bit of post-assembly trimming, as illustrated in Figure 3, might be useful.

Figure 3 If adequately accurate resistors aren’t handy, a trimmer pot might be useful for post-assembly trimming.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Active current mirror appeared first on EDN.

Charting the course for a truly multi-modal device edge

Wed, 11/26/2025 - 13:55

The world is witnessing an artificial intelligence (AI) tsunami. While the initial waves of this technological shift focused heavily on the cloud, a powerful new surge is now building at the edge. This rapid infusion of AI is set to redefine Internet of Things (IoT) devices and applications, from sophisticated smart homes to highly efficient industrial environments.

This evolution, however, has created significant fragmentation in the market. Many existing silicon providers have adopted a strategy of bolting on AI capabilities to legacy hardware originally designed for their primary end markets. This piecemeal approach has resulted in inconsistent performance, incompatible toolchains, and a confusing landscape for developers trying to deploy edge AI solutions.

To unlock the transformative potential of edge AI, the industry must pivot. We must move beyond retrofitted solutions and embrace a purpose-built, AI-native approach that integrates hardware and software right from the foundational design.

 

The AI-native mandate

“AI-native” is more than a marketing term; it’s a fundamental architectural commitment where AI is the central consideration, not an afterthought. Here’s what it looks like.

  • The hardware foundation: Purpose-built silicon

As IoT workloads evolve to handle data across multiple modalities, from vision and voice to audio and time series, the underlying silicon must present itself as a flexible, secure platform capable of efficient processing. Core to such design considerations are NPU architectures that can scale, supported by highly integrated vision, voice, video, and display pipelines.

  • The software ecosystem: Openness and portability

To accelerate innovation and combat fragmentation for IoT AI, the industry needs to embrace open standards. While the ‘language’ of model formats and frameworks is becoming more industry-standard, the ecosystem of edge AI compilers is largely being built from vendor-specific and proprietary offerings. Efficient execution of AI workloads is heavily dependent on optimized data movement and processing across scalar, vector, and matrix accelerator domains.

By open-sourcing compilers, companies encourage faster innovation through broader community adoption, providing flexibility to developers and ultimately facilitating more robust device-to-cloud developer journeys. Synaptics is encouraging broader adoption from the community by open-sourcing edge AI tooling and software, including Synaptics’ Torq edge AI platform, developed in partnership with Google Research.

  • The dawn of a new device landscape

AI-native silicon will fuel the creation of entirely new device categories. We are currently seeing the emergence of a new class of devices truly geared around AI, such as wearables—smart glasses, smartwatches, and wristbands. Crucially, many of these devices are designed to operate without being constantly tethered to a smartphone.

Instead, they soon might connect to a small, dedicated computing element, perhaps carried in a pocket like a puck, providing intelligence and outcomes without requiring the user to look at a traditional phone display. This marks the beginning of a more distributed intelligence ecosystem.

The need for integrated solutions

This evolving landscape is complex, demanding a holistic approach. Intelligent processing capabilities must be tightly coupled with secure, reliable connectivity to deliver a seamless end-user experience. Connected IoT devices need to leverage a broad range of technologies, from the latest Wi-Fi and Bluetooth standards to Thread and ZigBee.

Chip, device and system-level security are also vital, especially considering multi-tenant deployments of sensitive AI models. For intelligent IoT devices, particularly those that are battery-powered or wearable, security must be maintained consistently as the device transitions in and out of different power states. The combination of processing, security, and power must all work together effectively.

Navigating this new era of the AI edge requires a fundamental shift in mindset, a change from retrofitting existing technology to building products with a clear, AI-first mission. Take the case of the Synaptics SL2610 processor, one of the industry’s first AI-native, transformer-capable processors designed specifically for the edge. It embodies the core hardware and software principles needed for the future of intelligent devices, running on a Linux platform.

By embracing purpose-built hardware, rallying around open software frameworks, and maintaining a strategy of self-reliance and strategic partnerships, the industry can move past the current market noise and begin building the next generation of truly intelligent, powerful, and secure devices.

Mehul Mehta is a Senior Director of Product Marketing at Synaptics Inc., where he is responsible for defining the Edge AI IoT SoC roadmap and collaborating with lead customers. Before joining Synaptics, Mehul held leadership roles at DSP Group spanning product marketing, software development, and worldwide customer support.

Related Content

The post Charting the course for a truly multi-modal device edge appeared first on EDN.

Infused concrete yields greatly improved structural supercapacitor

Tue, 11/25/2025 - 22:01

A few years ago, a team at MIT researched and published a paper on using concrete as an energy-storage supercapacitor, also called an ultracapacitor (MIT engineers create an energy-storing supercapacitor from ancient materials). A supercapacitor is a battery-like device that stores energy in electric fields rather than through electrochemical reactions. Now, the same group has developed a version with ten times the storage per volume of that earlier one, by using concrete infused with electrolytes and various conductive materials such as (but not limited to) nano-carbon black.

Concrete is the world’s most common building material and has many virtues, including basic strength, ruggedness, and longevity, and few restrictions on final shape and form. The idea of also being able to use it as an almost-free energy storage system is very attractive.

By combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, their electron-conducting carbon concrete (EC3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy, Figure 1.

Figure 1 As with most batteries, the schematic diagram and physical appearance are simple; it’s the details that are the challenge. Source: Massachusetts Institute of Technology

This greatly improved energy density was made possible by their deeper understanding of how the nanocarbon black network inside EC3 functions and interacts with electrolytes, as determined using some sophisticated instrumentation. By using focused ion beams for the sequential removal of thin layers of the EC3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the joint EC³ Hub and MIT Concrete Sustainability Hub team was able to reconstruct the conductive nanonetwork at the highest resolution yet. The analysis showed that the network is essentially a fractal-like “web” that surrounds EC3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system. 

A cubic meter of this version of EC3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy, which is enough to power an actual modest-sized refrigerator for a day. Via extrapolation (always the tricky aspect of these investigations), they say that 45 cubic meters of EC3 with an energy density of 0.22 kWh/m³—a typical house-sized foundation—would have enough capacity to store about 10 kilowatt-hours of energy, the average daily electricity usage for a household, Figure 2.

Figure 2 These are just a few of the many performance graphs that the team developed. Source: Massachusetts Institute of Technology

They achieved the highest performance with organic electrolytes, especially those that combined quaternary ammonium salts—found in everyday products like disinfectants—with acetonitrile, a clear, conductive liquid often used in industry, Figure 3.

Figure 3 They also identified needed properties for the electrolyte and investigated many possibilities for this critical component. Source: Massachusetts Institute of Technology

If this all sounds only like speculation from a small-scale benchtop lab project, it is, and it isn’t. Much of the work was done in cooperation with the American Concrete Institute, a research and promotional organization that studies all aspects of concrete, including formulation, application, standardized tests, long-term performance, and more.

While the MIT team, perhaps not surprisingly, is positioning this development as the next great thing—and it certainly gets a lot of attention in the mainstream media due to its tantalizing keywords of “concrete” and “battery”—there are genuine long-term factors to evaluate related to scaling up to a foundation-sized mass:

  • Does the final form of the concrete matter, such as a large cube versus flat walls?
  • What are the partial and large-scale failure modes?
  • What are the long-term effects of weather exposure, as this material is concrete (which is well understood) but with an additive?
  • What happens when an EC3 foundation degrades or fails—do you have to lift the house and replace the foundation?
  • What are the short and long-term influences on performance, and how does the formulation affect that performance?

The performance and properties of the many existing concrete formulations have been tested in the lab and in the field over decades, and “improvements” are not done casually, especially in consideration of the end application.

Since demonstrating this concrete battery in structural mode lacks visual impact, the MIT team built a more attention-grabbing demonstration battery of stacked cells to provide 12 V of power. They used this to operate a 12-V computer fan and a 5-V USB output (via a buck regulator) for a handheld gaming console, Figure 4.

Figure 4 A 12-V concrete battery powering a small fan and game console provides a visual image which is more dramatic and attention-grabbing. Source: Massachusetts Institute of Technology

The work is detailed in their paper “High energy density carbon–cement supercapacitors for architectural energy storage,” published in Proceedings of the National Academy of Sciences (PNAS). It’s behind a paywall, but there is a posted student thesis, “Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases.” Finally, there’s also a very informative 18-slide, 21-minute PowerPoint presentation on YouTube (with audio), “Carbon-cement supercapacitors: A disruptive technology for renewable energy storage,” that was developed by the MIT team for the ACI.

What’s your view? Is this a truly disruptive energy-storage development? Or will the realities of scaling up in physical volume and long-term performance, as well as “replacement issues,” make this yet another interesting advance that falls short in the real world?

Check back in five to ten years to find out. If nothing else, this research reminds us that there is potential for progress in power and energy beyond the other approaches we hear so much about.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

The post Infused concrete yields greatly improved structural supercapacitor appeared first on EDN.

A simpler circuit for characterizing JFETs

Tue, 11/25/2025 - 15:00

The circuit presented by Cor Van Rij for characterizing JFETs is a clever solution. Noteworthy is the use of a five-pin test socket wired to accommodate all of the possible JFET pinout arrangements.

This idea uses that socket arrangement in a simpler circuit. The only requirement is the availability of two digital multimeters (DMMs), which add the benefit of having a hold function to the measurements. In addition to accuracy, the other goals in developing this tester were:

  • It must be simple enough to allow construction without a custom printed circuit board, as only one tester was required.
  • Use components on hand as much as possible.
  • Accommodate both N- and P-channel devices while using a single voltage supply.
  • Use a wide range of supply voltages.
  • Incorporate a current limit with LED indication when the limit is reached.
The circuit

The resulting circuit is shown in Figure 1.

Figure 1 Characterizing JFETs using a socket arrangement. The fixture requires the use of two DMMs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Q1, Q2, R1, R3, R5, D2, and TEST pushbutton S3 comprise the simple current limit circuit (R4 is a parasitic Q-killer).

S3 supplies power to S1, the polarity reversal switch, and S2 selects the measurement. J1 and J2 are banana jacks for the DMM set to read the drain current. J3 and J4 are banana jacks for the DMM set to read Vgs(off). 

Note the polarities of the DMM jacks. They are arranged so that the drain current and Vgs(off) read correctly for the type of JFET being tested—positive IDSS and negative Vgs(off) for N-channel devices and negative IDSS and positive Vgs(off) for P-channel devices.

R2 and D1 indicate the incoming power, while R6 provides a minimum load for the current limiter. Resistor R8 isolates the DUT from the effects of DMM-lead parasitics, and R9 provides a path to earth ground for static dissipation.

Testing JFETs

Figure 2 shows the tester setup measuring Vgs(off) and IDSS for an MPF102, an N-channel device. The specified values for this device are a Vgs(off) of -8 V maximum and an IDSS of 2 to 20 mA. Note that the hold function of the meters was used to maintain the measurements for the photograph. The supply for this implementation is a nominal 12-volt “wall wart” salvaged from a defunct router.

Figure 2 The test of an MPF102 N-channel JFET using the JFET characterization circuit.

Figure 3 shows the current limit in action by setting the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102. The limit is 52.2 mA, and the I-LIMIT LED is brightly lit. 

Figure 3 The current limit test that sets the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102.

John L. Waugaman’s love of electronics began when he built a crystal set at age 10 with his father’s help. Earning a BSEE from Carnegie-Mellon University led to a 30-year career in industry designing product inspection equipment and to four patents. After being RIF’d, he spent the next 20 years as a consultant specializing in analog design for industrial and military projects. Now he’s retired, sort of, but still designing. It’s in his blood, he guesses.

Related Content

The post A simpler circuit for characterizing JFETs appeared first on EDN.

Gold-plated PWM-control of linear and switching regulators

Tue, 11/25/2025 - 15:00
“Gold-plated” without the gold plating

Alright, I admit that the title is a bit over the top. So, what do I mean by it? I mean that:

(1) The application of PWM control to a regulator does not significantly degrade the inherent DC accuracy of its output voltage,

(2) Any ability of the regulator’s output voltage to reach below that of its internal reference is supported, and

(3) This is accomplished without the addition of a new reference voltage.

Refer to Figure 1.

Figure 1 This circuit meets the requirements of “Gold-Plated PWM control” as stated above.

Wow the engineering world with your unique design: Design Ideas Submission Guide

How it works

The values of components Cin, Cout, Cf, and L1 are obtained from the regulator’s datasheet. (Note that if the regulator is linear, L1 is replaced with a short.)

The datasheet typically specifies a preferred value of Rg, a single resistor between ground and the feedback pin FB. 

Taking the DC voltage VFB of the regulator’s FB pin into account, R3 is selected so that U2a supplies a Vsup voltage greater than or equal to 3.0 V. C7 and R3 ensure that the composite is non-oscillatory, even with decoupling capacitor C6 in place. C6 is required for the proper operation of the SN74AC04 IC U1.

The following equations govern the circuit’s performance, where Vmax is the desired maximum regulator output voltage:

R3   = ( Vsup / VFB – 1 ) · 10k
Rg1 = Rg / ( 1 – ( VFB / Vsup ) / ( 1 – VFB/Vmax ))
Rg2 = Rg · Rg1 / ( Rg1 – Rg )
Rf = Rg · ( Vmax / VFB – 1 )

They enable the regulator output to reach zero volts (if it is capable of such) when the PWM inputs are at their highest possible duty cycle. 
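
As a quick numeric sanity check, here is a minimal sketch of these design equations in Python. The example values (VFB = 0.6 V, Vsup = 3.0 V, Vmax = 5.0 V, Rg = 10 kΩ) are assumed, and the fixed 10k term in the R3 equation is treated here as the companion resistor R4; only the equations themselves come from the text.

```python
# Minimal numeric sketch of the four design equations above.
# Example values are assumptions, not values from the article.
VFB, Vsup, Vmax = 0.6, 3.0, 5.0   # volts
Rg = 10e3                         # ohms, from the regulator datasheet
R4 = 10e3                         # ohms, assumed fixed 10-kΩ resistor in the R3 equation

R3  = (Vsup / VFB - 1) * R4
Rg1 = Rg / (1 - (VFB / Vsup) / (1 - VFB / Vmax))
Rg2 = Rg * Rg1 / (Rg1 - Rg)
Rf  = Rg * (Vmax / VFB - 1)

for name, value in (("R3", R3), ("Rg1", Rg1), ("Rg2", Rg2), ("Rf", Rf)):
    print(f"{name} = {value / 1e3:.2f} kΩ")
```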

U1 is part of two separate PWMs whose composite output can provide up to 16 bits of resolution. Ra and Rb + Rc establish a factor of 256 for the relative significance of the PWMs.
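
Conceptually, the two duty cycles combine much like the coarse and fine bytes of a 16-bit word. The sketch below assumes the two PWM outputs are resistively summed at a common node with the 256:1 weighting implied by Ra and Rb + Rc, and it ignores loading by the downstream filter, so treat it as an illustration rather than an exact model of the circuit.

```python
# Conceptual sketch: two 8-bit PWM duty cycles combined into one setpoint,
# assuming resistive summing with a 256:1 conductance ratio (Ra vs. Rb + Rc)
# and ignoring loading by the downstream filter.
def composite_level(d_msb, d_lsb, vsup=3.0):
    """d_msb, d_lsb: duty cycles (0.0 to 1.0) of the coarse and fine PWMs.
    Returns the filtered (average) node voltage for supply vsup."""
    return vsup * (256 * d_msb + d_lsb) / 257

v0 = composite_level(0.5, 0.0)
v1 = composite_level(0.5, 1 / 256)   # one fine-PWM step higher
print(f"half scale ≈ {v0:.4f} V")
print(f"one fine step ≈ {(v1 - v0) * 1e6:.0f} µV of a 3-V span")  # ~46 µV, roughly 16-bit
```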

If eight bits or less of resolution is required, Rb and Rc, and the least significant PWM, can be eliminated, and all six inverters can be paralleled.

The PWMs’ minimum frequency requirements shown are important because when those are met, the subsequent filter passes a peak-to-peak ripple less than 2⁻¹⁶ of the composite PWM’s full-scale range. This filter consists of Ra, Rb + Rc, R5 to R7, C3 to C5, and U2b.

Errors

The most stringent need to minimize errors comes from regulators with low and highly accurate reference voltages. Let’s consider 600 mV and 0.5% from which we arrive at a 3-mV output error maximum inherent to the regulator. (This is overly restrictive, of course, because it assumes zero-tolerance resistors to set the output voltage. If 0.1% resistors were considered, we’d add 0.2% to arrive at 0.7% and more than 4 mV.)

Broadly, errors come from imperfect resistor ratios and component tolerances, op-amp input offset voltages and bias currents, and non-linear SN74AC04 output resistances. The 0.1% resistors are reasonably cheap.

Resistor ratios

If nominally equal in value, such resistors, forming a ratio, contribute a worst-case error of ± 0.1%. For those of different values, the worst is ± 0.2%. Important ratios involve:

  • Rg1, Rg2, and Rf
  • R3 and R4
  • Ra and Rb + Rc

Various Rf, Rg ratios are inherent to regulator operation.

The Rg1, Rg2; R3, R4; and Ra, Rb + Rc pairs have been introduced as requirements for PWM control.

The Ra / (Rb + Rc) error is ± 0.2%, but since this involves a ratio of 8-bit PWMs at most, it incurs less than 1 least significant bit (LSbit) of error.

The Rg1, Rg2 pair introduces an error of ±0.2 % at most.

The R3, R4 pair is responsible for a worst-case ±0.2 %. All are less than the 0.5% mentioned earlier.

Temperature drift

The OPA2376 has a worst-case input offset voltage of 25 µV over temperature. Even if U2a has a gain of 5 to convert FB’s 600 mV to 3 V, this becomes only 125 µV.

Bias current is 10-pA maximum at 25°C, but we are given a typical value only at 125°C of 250 pA.

Of the two op-amps, U2b sees the higher input resistance. But its current would have to exceed 6 nA to produce even 1-mV of offset, so these op-amps are blameless.

To determine U1’s output resistance, its spec shows that its minimum logic high voltage for a 3-V supply is 2.46 V under a 12-mA load. This means that the maximum for each inverter is 45 Ω, which gives us 9 Ω for five in parallel. (The maximum voltage drop is lower for a logic low 12 mA, resulting in a lower resistance, but we don’t know how much lower, so we are forced to worst-case it at a ridiculous 0 V!)

Counting C3 as a short under dynamic conditions, the five inverters see a 35-kΩ load, leading to a less than 0.03% error.

Wrapping up

The regulator and its output range might need an even higher voltage, but the input voltage IN has been required to exceed 3.2 V. This is because U1 is spec’d to swing to no further than 80 mV from its supply rails under loads of 2 kΩ or more. (I’ve added some margin, but it’s needed only for the case of maximum output voltage.)

You should specify Vmax to be slightly higher than needed so that U2b needn’t swing all the way to ground. This means that a small negative supply for U2 is unnecessary. IN must also be less than 5.5 V to avoid exceeding U2’s spec. If a larger value of IN is required by the regulator, an inexpensive LDO can provide an appropriate U2 supply.

I grant that this design might be overkill, but I wanted to see what might be required to meet the goals I set. But who knows, someone might find it or some aspect of it useful.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Gold-plated PWM-control of linear and switching regulators appeared first on EDN.

The role of AI processor architecture in power consumption efficiency

Tue, 11/25/2025 - 10:27

From 2005 to 2017—the pre-AI era—the electricity flowing into U.S. data centers remained remarkably stable. This was true despite the explosive demand for cloud-based services. Social networks such as Facebook, streaming services such as Netflix, real-time collaboration tools, online commerce, and the mobile-app ecosystem all grew at unprecedented rates. Yet continual improvements in server efficiency kept total energy consumption essentially flat.

In 2017, AI deeply altered this course. The escalating adoption of deep learning triggered a shift in data-center design. Facilities began filling with power-hungry accelerators, mainly GPUs, for their ability to crank through massive tensor operations at extraordinary speed. As AI training and inference workloads proliferated across industries, energy demand surged.

By 2023, U.S. data centers had doubled their electricity consumption relative to a decade earlier with an estimated 4.4% of all U.S. electricity now feeding data-center racks, cooling systems, and power-delivery infrastructure.

According to the Berkeley Lab report, data-center load growth has tripled over the past decade and is projected to double or triple again by 2028. The report estimates that AI workloads alone could by that time consume as much electricity annually as 22% of all U.S. households—a scale comparable to powering tens of millions of homes.

Total U.S. data-center electricity consumption is projected to increase roughly ten-fold from 2014 through 2028. Source: 2024 U.S. Data Center Energy Usage Report, Berkeley Lab

This trajectory raises a question: What makes modern AI processors so energy-intensive? Whether rooted in semiconductor physics, parallel-compute structures, memory-bandwidth bottlenecks, or data-movement inefficiencies, understanding the causes becomes a priority. Analyzing the architectural foundations of today’s AI hardware may lead to corrective strategies to ensure that computational progress does not come at the expense of unsustainable energy demand.

What’s driving energy consumption in AI processors

Unlike traditional software systems—where instructions execute in a largely sequential fashion, one clock cycle and one control-flow branch at a time—large language models (LLMs) demand massively parallel processing of multi-dimensional tensors. Matrices many gigabytes in size must be fetched from memory, multiplied, accumulated, and written back at staggering rates. In state-of-the-art models, this process encompasses hundreds of billions to trillions of parameters, each of which must be evaluated repeatedly during training.

Training models at this scale requires feeding enormous datasets through racks of GPU servers running continuously for weeks or even months. The computational intensity is extreme, and so is the energy footprint. For example, the training run for OpenAI’s GPT-4 is estimated to have consumed around 50 gigawatt-hours of electricity. That’s roughly equivalent to powering the entire city of San Francisco for three days.

This immense front-loaded investment in energy and capital defines the economic model of leading-edge AI. Model developers must absorb stunning training costs upfront, hoping to recover them later through widespread use of the trained model for inference.

Profitability hinges on the efficiency of inference, the phase during which users interact with the model to generate answers, summaries, images, or decisions. “For any company to make money out of a model—that only happens on inference,” notes Esha Choukse, a Microsoft Azure researcher who investigates methods for improving the efficiency of large-scale AI inference systems. Her quote appeared in the May 20, 2025, MIT Technology Review article “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.”

Indeed, experts across the industry consistently emphasize that inference, not training, is becoming the dominant driver of AI’s total energy consumption. This shift is driven by the proliferation of real-time AI services—millions of daily chat sessions, continuous content generation pipelines, AI copilots embedded into productivity tools, and ever-expanding recommender and ranking systems. Together, these workloads operate around the clock, in every region, across thousands of data centers.

As a result, it’s now estimated that 80–90% of all compute cycles serve AI inference. As models continue to grow, user demand accelerates and applications diversify, this imbalance will only widen. The challenge is no longer merely reducing the cost of training but fundamentally rethinking the processor architectures and memory systems that underpin inference at scale.

Deep dive into semiconductor engineering

Understanding energy consumption in modern AI processors requires examining two fundamental factors: data processing and data movement. In simple terms, this is the difference between computing data and transporting data across a chip and its surrounding memory hierarchy.

At first glance, the computational side seems conceptually straightforward. In any AI accelerator, sizeable arrays of digital logic—multipliers, adders, accumulators, activation units—are orchestrated to execute quadrillions of operations per second. Peak theoretical performance is now measured in petaFLOPS with major vendors pushing toward exaFLOP-class systems for AI training.

However, the true engineering challenge lies elsewhere. The overwhelming contributor to energy consumption is not arithmetic—it is the movement of data. Every time a processor must fetch a tensor from cache or DRAM, shuffle activations between compute clusters, or synchronize gradients across devices, it expends orders of magnitude more energy than performing the underlying math.

A foundational 2014 analysis by Professor Mark Horowitz at Stanford University quantified this imbalance with remarkable clarity. Basic Boolean operations require only tiny amounts of energy—on the order of picojoules (pJ). A 32-bit integer addition consumes roughly 0.1 pJ, while a 32-bit multiplication uses approximately 3 pJ.

By contrast, memory operations are dramatically more energy hungry. Reading or writing a single bit in a register costs around 6 pJ, and accessing 64 bits from DRAM can require roughly 2 nJ. This represents nearly a 10,000× energy differential between simple computation and off-chip memory access.

This discrepancy grows even more pronounced at scale. The deeper a memory request must travel—from L1 cache to L2, from L2 to L3, from L3 to high-bandwidth memory (HBM), and finally out to DRAM—the higher the energy cost per bit. For AI workloads, which depend on massive, bandwidth-intensive layers of tensor multiplications, the cumulative energy consumed by memory traffic considerably outstrips the energy spent on arithmetic.
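
To make the imbalance concrete, here is a back-of-the-envelope sketch using the per-operation figures quoted above, treated as rough assumptions rather than exact silicon data.

```python
# Back-of-the-envelope comparison using the per-operation energy figures
# quoted above; all numbers are rough approximations, not vendor data.
PJ = 1e-12   # picojoule, in joules
NJ = 1e-9    # nanojoule, in joules

E_ADD32  = 0.1 * PJ   # 32-bit integer add
E_MUL32  = 3.0 * PJ   # 32-bit multiply
E_DRAM64 = 2.0 * NJ   # fetching 64 bits from off-chip DRAM

# One multiply-accumulate whose two 32-bit operands must come from DRAM:
e_compute  = E_MUL32 + E_ADD32   # arithmetic
e_movement = E_DRAM64            # data movement (one 64-bit access)

print(f"arithmetic per MAC:   {e_compute / PJ:.1f} pJ")
print(f"DRAM traffic per MAC: {e_movement / PJ:.0f} pJ")
print(f"movement-to-compute ratio: {e_movement / e_compute:.0f}x")  # ~645x
```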

In the transition from traditional, sequential instruction processing to today’s highly parallel, memory-dominated tensor operations, data movement—not computation—has emerged as the principal driver of power consumption in AI processors. This single fact shapes nearly every architectural decision in modern AI hardware, from enormous on-package HBM stacks to complex interconnect fabrics like NVLink, Infinity Fabric, and PCIe Gen5/Gen6.

Today’s computing horsepower: CPUs vs. GPUs

To gauge how these engineering principles affect real hardware, consider the two dominant processor classes in modern computing:

  • CPUs, the long-standing general-purpose engines of software execution
  • GPUs, the massively parallel accelerators that dominate AI training and inference today

A flagship CPU such as AMD’s Ryzen Threadripper PRO 9995WX (96 cores, 192 threads) consumes roughly 350 W under full load. These chips are engineered for versatility—branching logic, cache coherence, system-level control—not raw tensor throughput.

AI processors, in contrast, are in a different league. Nvidia’s latest B300 accelerator draws around 1.4 kW on its own. A full Nvidia DGX B300 rack unit, housing eight accelerators plus supporting infrastructure, can reach 14 kW. Even in the most favorable comparison, this represents a 4× increase in power consumption per chip—and when comparing full server configurations, the gap can expand to 40× or more.

Crucially, these raw power numbers are only part of the story. The dramatic increases in energy usage are multiplied by AI deployments in data centers where tens of thousands of such GPUs are running around the clock.
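
A rough annualized view makes the point; the sketch below uses the nameplate-style power figures quoted above and ignores real duty cycles, utilization, and facility (PUE) overheads.

```python
# Rough annual-energy sketch from the power figures quoted above.
# Nameplate-style numbers; real duty cycles and PUE overheads will differ.
HOURS_PER_YEAR = 24 * 365

cpu_w  = 350      # flagship workstation CPU at full load
gpu_w  = 1_400    # single B300-class accelerator
node_w = 14_000   # eight accelerators plus supporting infrastructure

print(f"per-chip ratio: {gpu_w / cpu_w:.0f}x")     # ~4x
print(f"per-node ratio: {node_w / cpu_w:.0f}x")    # ~40x
print(f"one node, 24/7: {node_w * HOURS_PER_YEAR / 1e6:.0f} MWh per year")
# Multiply by tens of thousands of such nodes to see the data-center-scale impact.
```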

Yet hidden beneath these amazing numbers lies an even more consequential industry truth, rarely discussed in public and almost never disclosed by vendors.

The well-kept industry secret

To the best of my knowledge, no major GPU or AI accelerator vendor publishes the delivered compute efficiency of its processors, defined as the ratio of actual throughput achieved during AI workloads to the chip’s theoretical peak FLOPS.

Vendors justify this absence by noting that efficiency depends heavily on the software workload; memory access patterns, model architecture, batch size, parallelization strategy, and kernel implementation can all impact utilization. This is true, and LLMs place extreme demands on memory bandwidth, causing utilization to drop substantially.

Even acknowledging these complexities, vendors still refrain from providing any range, estimate, or context for typical real-world efficiency. The result is a landscape where theoretical performance is touted loudly, while effective performance remains opaque.

The reality, widely understood among system architects but seldom stated plainly, is simple: “Modern GPUs deliver surprisingly low real-world utilization for AI workloads—often well below 10%.”

A processor advertised at 1 petaFLOP of peak AI compute may deliver only ~100 teraFLOPS of effective throughput when running a frontier-scale model such as GPT-4. The remaining 900 teraFLOPS are not simply unused—they are dissipated as heat, requiring extensive cooling systems that further compound total energy consumption.

In effect, much of the silicon in today’s AI processors is idle most of the time, stalled on memory dependencies, synchronization barriers, or bandwidth bottlenecks rather than constrained by arithmetic capability.

This structural inefficiency is the direct consequence of the imbalance described earlier: arithmetic is cheap, but data movement is extraordinarily expensive. As models grow and memory footprints balloon, this imbalance worsens.

Without a fundamental rethinking of processor architecture—and especially of the memory hierarchy—the energy profile of AI systems will continue to scale unsustainably.

Rethinking AI processors

The implications of this analysis point to a clear conclusion: the architecture of AI processors must be fundamentally rethought. CPUs and GPUs each excel in their respective domains—CPUs in general-purpose control-heavy computation, GPUs in massively parallel numeric workloads. Neither was designed for the unprecedented data-movement demands imposed by modern large-scale AI.

Hierarchical memory caches, the cornerstone of traditional CPU design, were originally engineered as layers to mask the latency gap between fast compute units and slow external memory. They were never intended to support the terabyte-scale tensor operations that dominate today’s AI workloads.

GPUs inherited versions of these cache hierarchies and paired them with extremely wide compute arrays, but the underlying architectural mismatch remains. The compute units can generate far more demand for data than any cache hierarchy can realistically supply.

As a result, even the most advanced AI accelerators operate at embarrassingly low utilization. Their theoretical petaFLOP capabilities remain mostly unrealized—not because the math is difficult, but because the data simply cannot be delivered fast enough or close enough to the compute units.

What is required is not another incremental patch layered atop conventional designs. Instead, a new class of AI-oriented processor architecture must emerge, one that treats data movement as the primary design constraint rather than an afterthought. Such architecture must be built around the recognition that computation is cheap, but data movement is expensive by orders of magnitude.

Processors of the future will not be defined by the size of their multiplier arrays or peak FLOPS ratings, but by the efficiency of their data pathways.

Lauro Rizzatti is a business advisor at VSORA, a company offering silicon solutions for AI inference. He is a verification consultant and industry expert on hardware emulation.

Related Content

The post The role of AI processor architecture in power consumption efficiency appeared first on EDN.

Resonant inductors offer a wide inductance range

Mon, 11/24/2025 - 23:30
ITG Electronics' RL858583 Series of resonant inductors.

ITG Electronics launches the RL858583 Series of resonant inductors, delivering a wide inductance range, high current, and high efficiency in a compact DIP package. The family of ferrite-based, high-current inductors target demanding power electronics applications.

ITG Electronics' RL858583 Series of resonant inductors.(Source: ITG Electronics)

The RL858583 Series features an inductance range of 6.8 μH to 22.0 μH with a tight 5% tolerance. Custom inductance values are available.

The series supports currents up to 39 A, with approximately 30% roll-off, in a compact 21.5 × 21.0 × 21.5-mm footprint. This provides exceptional current handling in a compact DIP package, ITG said.

Designed for reliability in high-stress operating conditions, the inductors offer a rated voltage of 600 VAC/1,000 VDC and dielectric strength up to 4,500 VDC. The devices feature low DC resistance (DCR) from 3.94 mΩ to 17.40 mΩ and AC resistance (ACR) values from 70 mΩ to 200 mΩ, which helps to minimize power losses and to ensure high efficiency across a range of frequencies. The operating temperature ranges from -55°C to +130°C.

The combination of high current capability, compact design, and customizable inductance options makes them suited for resonant converters, inverters, and other high-performance power applications, according to ITG Electronics. The RL858583 Series resonant inductors are RoHS-compliant and halogen-free.

The post Resonant inductors offer a wide inductance range appeared first on EDN.
