EDN Network

Voice of the Engineer

Can microchannels, manifolds, and two-phase cooling keep chips happy?

Fri, 07/04/2025 - 14:03

Thermal management is an ongoing concern for many designs. The process usually begins with a tactic for dissipating or removing heat from the primary sources (mostly but not exclusively “chips”), then progresses to keeping the circuit-board assembly cool, and finally getting the heat out of the box and “away” to where it becomes someone else’s problem. Both passive and active approaches are employed, combining convection (natural or forced, in air or liquid), conduction, and radiation.

The search for an effective cooling and thermal transfer solution has inspired considerable research. One direct approach uses microchannels embedded within the chip itself. This allows coolant, usually water, to flow through, efficiently absorbing and transferring heat away.

The efficiency of this technique is constrained, however, by the sensible heat of water. (“Sensible heat” refers to the amount of heat needed to increase the temperature of a substance without inducing a phase change, such as from liquid to vapor.) In contrast, the latent heat of phase change of water—the thermal energy absorbed during boiling or evaporation—is around seven times greater than its sensible heat.
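
A back-of-the-envelope check (assuming the coolant enters at roughly 20°C) makes that roughly seven-fold figure concrete:

Sensible heat, 20°C to 100°C: q_sens ≈ 4.18 kJ/(kg·K) × 80 K ≈ 334 kJ/kg
Latent heat of vaporization at 100°C: q_lat ≈ 2,257 kJ/kg
Ratio: 2,257/334 ≈ 6.8

In other words, each kilogram of water can absorb nearly seven times more heat by boiling than by merely warming from room temperature to its boiling point.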

Two-phase cooling with water can be achieved by using the latent heat transition, resulting in a significant efficiency enhancement in terms of heat dissipation. Maximizing the efficiency of heat transfer depends on a variety of factors. These include the geometry of the microchannels, the two-phase flow regulation, and the flow resistance; adding to the task, there are challenges in managing the flow of vapor bubbles after heating.

Novel water-cooling system

Now, a team at the Institute of Industrial Science at the University of Tokyo has devised a novel water-cooling system comprising three-dimensional microfluidic channel structures, using a capillary structure and a manifold distribution layer. The researchers designed and fabricated various capillary geometries and studied their properties across a range of conditions to enhance thin-film evaporation.

Although this is not the first project to use microchannels, it presents an alternative physical arrangement that appears to offer superior results.

Not surprisingly, they found that both the geometry of the microchannels through which the coolant flows and the manifold channels that control the distribution of coolant influence the thermal and hydraulic performance of the system. Their design centered on using a microchannel heat sink with micropillars as the capillary structure to enhance thin-film evaporation, thus controlling the chaotic two-phase flow to some extent and mitigating local dry-out issues.

This was done in conjunction with three-dimensional manifold fluidic passages for efficient distribution of coolant into the microchannels, Figure 1.

Figure 1 Microfluidic device combining a microchannel layer and a manifold layer. (A) Schematic diagrams of a microfluidic device. Scale bar: 5 mm. (B) Exploded view of microchannel layer and manifold layer. The heater is located on the backside of the substrate with parallel microchannels. Both the microchannel layer and manifold layer are bonded with each other to constitute the flow path. (C) The coolant flows between the manifolds and microchannels to form an N-shaped flow path. The capillary structures separate the vapor flow from the liquid thin film along the sidewall. The inset schematic shows the ordered two-phase flow under ideal conditions. Scale bar: 50 mm. (D) Cross-sectional schematic view of bonded device showing the heat and fluid flow directions. (E) Clamped device is mechanically tightened using bolts and nuts. (F) Images of clamped device showing the isometric, top, and side views. Scale bar, 1 cm. Source: Institute of Industrial Science at the University of Tokyo

Testing this arrangement requires a complicated electrical, thermal, and fluid arrangement, with clamps to put just the right calibrated pressure on the assembly for a consistent thermal impedance, Figure 2. They also had to allow time for start-up thermal transients to reach steady-state and take other test subtleties into account.

Figure 2 The test setup involved a complicated arrangement of electrical, thermal, mechanical, and fluid inputs and sensors, all linked by a LabVIEW application; top: system diagram; bottom: the actual test bench. Source: Institute of Industrial Science at the University of Tokyo

Their test process included varying key physical dimensions of the micropillars, capillary microchannels, and manifolds to determine optimum performance points.

It’s difficult to characterize performance with a single metric, Figure 3.

Figure 3 Benchmark of experimentally demonstrated critical heat flux and COP of two-phase cooling in microchannels using water. Zone 1 indicates the results of this work, achieving efficient cooling with a mass flow rate of 2.0 g/min and an exit vapor quality of 0.54. The other manifold-based designs, marked by solid symbols in zone 2, consume hundreds of times more water with an exit vapor quality of around 0.1. The results of microstructure-enhanced designs are marked by open symbols in zone 3. Zone 4 shows the performance of typical single-phase cooling techniques. Source: Institute of Industrial Science at the University of Tokyo

One such number, the measured ratio of useful cooling output to the required energy input (the dimensionless coefficient of performance, or COP) reached up to 10⁵, representing a meaningful advance over other water-channel cooling techniques that are cited in the references.
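
For a sense of scale (using purely illustrative numbers, not figures from the paper): with COP defined as the heat removed divided by the pumping power required,

P_pump = Q_removed/COP = 100 W/10⁵ = 1 mW

so removing 100 W of heat would cost only about a milliwatt of pumping power at that COP.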

Details including thermal modeling, physics analysis, device fabrication, test arrangement, full data, results, and data discussion are in their paper “Chip cooling with manifold-capillary structures enables 10⁵ COP in two-phase systems” published in Cell Reports Physical Science.

As noted earlier, this is not the first attempt to use microchannels to cool chips; it represents another approach to implementing this tactic. Do you think this will be viable outside of a lab environment in the real world of mass-volume production and liquid interconnections? Or will it be limited to a very small subset, if any, of enhanced chip-cooling solutions?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


AI-focused MCUs embed neural processor

Thu, 07/03/2025 - 23:27

Aimed at AI/ML applications, Renesas’ RA8P1 MCUs leverage an Arm Ethos-U55 neural processing unit (NPU) delivering 256 GOPS at 500 MHz. The 32-bit devices also integrate dual CPU cores—a 1-GHz Arm Cortex-M85 and a 250-MHz Cortex-M33—that together achieve over 7300 CoreMark points.

The NPU supports most commonly used neural networks, including DS-CNN, ResNet, Mobilenet, and TinyYolo. Depending on the neural network used, the Ethos-U55 provides up to 35× more inferences per second than the Cortex-M85 processor on its own.

RA8P1 microcontrollers provide up to 2 MB of SRAM and 1 MB of MRAM, which offers faster write speeds and higher endurance than flash memory. System-in-package options include 4 MB or 8 MB of external flash memory for more demanding AI tasks.

Dedicated peripherals and advanced security features support voice and vision AI, as well as real-time analytics. For vision AI, a 16-bit camera engine (CEU) handles image sensors up to 5 megapixels, while a separate two-lane MIPI CSI-2 interface provides a low pin-count connection at up to 720 Mbps per lane. Audio interfaces including I²S and PDM enable microphone input for voice AI. To protect edge AI and IoT systems, the devices integrate cryptographic IP, enforce immutable storage, and monitor for physical tampering.

The RA8P1 MCUs are available now in 224-pin and 289-pin BGA packages.

RA8P1 product page

Renesas Electronics 


Off-line converter trims component count

Thu, 07/03/2025 - 23:27

ST’s VIPER11B voltage converter integrates an 800-V avalanche-rugged MOSFET with PWM current-mode control to power smart home and lighting applications up to 8 W. On-chip high-voltage startup circuitry, a senseFET, error amplifier, and frequency-jittered oscillator help minimize external components. The MOSFET requires only minimal snubbing, while the senseFET enables nearly lossless current sensing without external resistors.

As an off-line converter operating from 230 VAC, the VIPER11B consumes less than 10 mW at no load and under 400 mW with a 250-mW load. Under light-load conditions, it operates in pulse frequency modulation (PFM) mode with pulse skipping to enhance efficiency and support energy savings. The controller runs from an internal VDD supply ranging from 4.5 V to 30 V.

Housed in a compact 10-pin SSOP package, the converter conserves space—especially in designs with strict form factors like LED lighting drivers and smart bulbs. It’s also well suited for home appliances, low-power adapters, and smart meters. The device includes output overload and overvoltage protection with automatic restart, along with VCC clamping, thermal shutdown, and soft-start features.

In production now, VIPER11B voltage converters are priced from $0.56 each in lots of 1000 units.

VIPER11B product page

STMicroelectronics


MLCC saves board space in vehicle designs

Thu, 07/03/2025 - 23:27

Designed for automotive use, Murata’s 50-V multilayer ceramic capacitor (MLCC) delivers higher capacitance in a compact 0805-size (2.0×1.25-mm) SMD package. With a rated capacitance value of 10 µF, the GCM21BE71H106KE02 ranks among the smallest in its class.

The capacitor operates on 12-V automotive power lines while conserving PCB space and reducing the overall capacitor count. It delivers approximately 2.1 times the capacitance of Murata’s earlier 4.7-µF/50-V model in the same 0805 footprint. Compared to the previous 10-µF/50-V MLCC in the larger 1206 size (3.2×1.6 mm), it occupies about 53% less board area, offering significant space savings for automotive designs.

The GCM21BE71H106KE02 10-µF/50-V capacitor in the 0805 package is now in production. Use the product page link below to request samples, get a quote, or check availability.

GCM21BE71H106KE02 product page

Murata Manufacturing 


Isolated driver enables fast, stable GaN control

Thu, 07/03/2025 - 23:26

Rohm has introduced the BM6GD11BFJ-LB, an isolated gate driver optimized for 600-V-class GaN HEMTs in industrial equipment such as motors and server power supplies. When paired with GaN transistors, the single-channel driver maintains stable operation under high-frequency, high-speed switching conditions.

The device ensures safe signal transmission by galvanically isolating the control circuitry during switching events with fast voltage rise and fall times. Its 4.5-V to 6.0-V gate drive range and 2500-Vrms isolation rating support a broad selection of high-voltage GaN devices, including Rohm’s 650-V EcoGaN HEMT. Low output-side current consumption—0.5 mA maximum—helps reduce standby power and improve overall system efficiency.

The BM6GD11BFJ-LB uses proprietary on-chip isolation to reduce parasitic capacitance, enabling high-frequency operation up to 2 MHz and reducing external component count. Enhanced CMTI of 150 V/ns—reportedly 1.5× higher than conventional products—prevents malfunctions during fast GaN switching. A reduced minimum pulse width of 65 ns improves duty cycle control, allowing stable, efficient operation at higher frequencies.

The BM6GD11BFJ-LB isolated gate driver is now available through online distributors including DigiKey and Mouser. Samples are priced at $4 each.

BM6GD11BFJ-LB product page

Rohm Semiconductor 


Primemas unveils CXL 3.0 SoC controller

Thu, 07/03/2025 - 23:26

Primemas, a fabless company specializing in SoC Hublets (hub chiplets), is now sampling its Compute Express Link (CXL) 3.0 memory controller. The company is collaborating with Micron through its CXL ASIC Validation Lab (AVL) program to accelerate the commercialization of next-generation CXL controllers compatible with Micron’s advanced DRAM modules.

Hublets are SoC modules in a pluggable chiplet format, offering a range of IP infrastructure, including CPU, network-on-chip bus, memory controllers, and resource schedulers, along with high-bandwidth, low-latency die-to-die interfaces.

Unlike conventional CXL memory expansion controllers constrained by fixed form factors and limited DRAM capacity, Primemas says its chiplet technology offers greater scalability and modularity. Working with Micron, the company aims to deliver a reliable CXL 3.0 controller paired with Micron’s high-capacity 128-GB RDIMM modules.

The semiconductor startup has delivered engineering samples and development boards to strategic customers and partners, who have helped validate the performance and capabilities of its Hublet versus alternative CXL controllers. Building on this early success, Primemas is now ready to ship Hublet product samples to memory vendors, customers, and ecosystem partners.

Learn more about Primemas Hublets here.

Primemas


Push ON, Push OFF for AC voltages

Thu, 07/03/2025 - 17:22

Stephen Woodward’s DI, “Flip ON Flop OFF,” does a wonderful job for DC voltages. I thought of extending this idea to AC voltages, since so many of our gadgets run from AC power.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the compact circuitry using a simple counter IC. This circuit utilizes a single push-button (PB) to switch between ON and OFF states for AC voltages. When you push PB once, the output terminal J2 gets 230V/110V AC. For the next push, output at J2 becomes zero. This action continues for subsequent pushes. Accordingly, the gadget connected to J2 will be ON or OFF.

Figure 1 Pushbutton circuit that switches AC voltages ON and OFF using an electromechanical relay (RL1).

In Figure 1’s circuit, when PB is momentarily pushed once, U1 (a 4024 counter) counts one input pulse and its Q1 output goes HIGH, which makes the Darlington pair Q1 and Q2 conduct. Relay RL1 gets energized; its NO contact closes and passes the 230V/110V AC connected at J1 through to J2. The gadget connected to J2 turns ON.

When you push PB again, the second pulse is generated and counted by U1. Its Q1 output (the counter’s LSB) goes LOW, turning Q1 and Q2 off. The relay de-energizes, the AC voltage to J2 is disconnected, and the gadget turns off. R2 and C2 provide the power-on reset for U1.

If you prefer not to use an electromechanical relay, a solid-state relay can be used, as shown in Figure 2. In this circuit, when you push PB once, the Q1, Q2 pair starts conducting, current flows through the LED of U3, an optically coupled TRIAC, causing it to conduct. Due to this, U4 TRIAC conducts, passing 230V/110V to J2. When you push PB again, the Q1, Q2 pair opens, stopping current flow through the LED of U3. The TRIACs of U3 and U4 stop conducting, disconnecting power to J2.

Figure 2 Circuit switches AC power on and off for output-connected gadgets using a solid-state relay formed by U3 and U4.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.


Tenstorrent’s Blue Cheetah deal a harbinger of chiplet acquisition spree

Thu, 07/03/2025 - 10:58

Less than a month after Qualcomm announced its acquisition of Alphawave Semi, another chiplet deal is in play. Artificial intelligence (AI) chip developer Tenstorrent has snapped up Blue Cheetah Analog Design after licensing its die-to-die (D2D) interconnect IP for AI and RISC-V chiplet solutions.

Blue Cheetah was founded in 2018 with an initial investment from Marvell co-founders Sehat Sutardja and Weili Dai and their pioneering vision for chiplets. Its BlueLynx D2D interconnect subsystem IP provides physical (PHY) and link layer chiplet interfaces compatible with both Open Compute Project (OCP) Bunch of Wires (BoW) and Universal Chiplet Interconnect Express (UCIe) standards.

Blue Cheetah also brings a wealth of analog mixed-signal expertise in developing D2D, DDR, SerDes, and other technologies critical in chiplet design. Its co-founder and CEO, Elad Alon, is an expert in analog and mixed-signal design. He is also the technical lead of the Bunch of Wires PHY standard.

In addition to serving chiplet designers, Blue Cheetah offers chiplet interconnect IP solutions across various foundries and process nodes. Earlier this year, it announced the successful tape-out of its BlueLynx D2D PHY on Samsung Foundry’s 4-nm SF4X process node.

The latest version of BlueLynx PHY supports both advanced and standard chiplet packaging with an aggregate throughput exceeding 100 Tbps. As a result, the BlueLynx subsystem IP enables chip architects to meet the bandwidth density and environmental robustness necessary to ensure successful production deployment.

Qualcomm’s acquisition of Alphawave Semi and Tenstorrent buying Blue Cheetah mark an important step in the consolidation of the chiplet ecosystem. With the acquisition of Blue Cheetah, Tenstorrent will gain in-house capabilities for advanced interconnects and other analog and mixed-signal components.

Will 2025 be the year of chiplets? Are there more chiplet acquisitions in the works? There are several chiplet upstarts, such as Baya Systems and Chipuller, and larger semiconductor outfits are likely eyeing them to acquire chiplet design capabilities.


A hands-on guide for RC snubbers and inductive load suppression

Wed, 07/02/2025 - 18:01

The other day, I was casually scrolling through Google when I stumbled upon a flood of dirt-cheap RC snubber circuit modules on various online stores. That got me thinking—it’s high time we talk about these little circuits and their real-world applications.

This post will offer some insights on RC snubber circuits along with a few handy tips for inductive load suppression. Whether you are a newbie looking to learn the ropes or an expert in need of a quick refresher, there is something in here for you. Let us dive in…

On paper, RC snubber circuits function as protective measures in switching applications, utilizing a resistor and capacitor together to mitigate voltage spikes and transient noise. But the commonly available RC snubber circuit module, sometimes referred to as an RC absorption circuit module by certain vendors, only contains a resistor, a capacitor and a varistor—just three basic components.

According to most vendors, the prewired module is suitable for AC/DC 5-400 V inductive loads (<1,000 W) to protect relay contacts and triacs. I could not find an actual schematic of it anywhere on the web, but since it’s pretty easy to reconstruct through physical inspection, I drew it myself. Here is that diagram.

Figure 1 The block diagram represents the RC snubber module circuit. Source: Author

The components in the module are:

  • R = 220 Ω/2 W Resistor (MFR 1%)
  • C = 104 J/630 V Capacitor (CBB22); the “104 J” marking denotes 0.1 µF, ±5%
  • MOV = 10 D/471 K Metal Oxide Varistor (10 mm/470 V ±10%)

The R-C values used in the snubber are by necessity compromises. In practice, the resistor value (R) must be large enough to limit the capacitive discharge current when the switch contacts close, but small enough to adequately limit the voltage when the switch contacts open. A larger capacitor value (C) decreases the voltage when the switch contacts open, but it increases the capacitive discharge energy when the contacts close.

Furthermore, when the switch contacts are open, a current will be flowing through the snubber network. It should be verified that this leakage current does not cause issues in the application and that the power dissipation in the snubber resistor does not exceed its power rating.

A quick design insight

The optimal approach to determining the R-C values involves using an oscilloscope to trial various R-C combinations while monitoring spike reduction (or turn-off transient reduction). Then adjust the R and C values as needed until the desired reduction is achieved. Based on my practical experience, for most relays and triacs, 100 nF + 100 Ω values provide an acceptable suppression.
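
As a rough sanity check of those values on a 230-V/50-Hz line (an assumption here; rescale for your own mains), the off-state leakage and resistor dissipation work out to:

X_C = 1/(2π × 50 Hz × 100 nF) ≈ 31.8 kΩ (much larger than 100 Ω, so it dominates)
I_leak ≈ 230 V/31.8 kΩ ≈ 7.2 mA RMS
P_R ≈ (7.2 mA)² × 100 Ω ≈ 5 mW

Worst case, when the contacts close at the 325-V line peak, the discharge current into the contacts is limited to roughly 325 V/100 Ω ≈ 3.3 A and decays with a time constant of R × C = 10 µs.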

The above-mentioned RC snubber module, intended to be wired across a switching point as shown below, is a simplified resistor-capacitor snubber circuit made up of a resistor and a capacitor connected in series. Here, the resistor helps to absorb the energy from the voltage spikes, while the capacitor provides short-term storage for this energy. This way, the risk of harm due to a sudden change in current is minimized.

Figure 2 The RC snubber module is wired across a switching point. Source: Author

Most snubber circuits also include a metal oxide varistor (MOV) along with the RC circuit by placing the metal oxide varistor across the input line. An MOV is a specialized type of voltage dependent resistor (VDR) that uses a metal oxide, most commonly zinc oxide, as its non-linear resistor material.

The MOV will then protect the parallel circuit and the load. The MOV will set the maximum input voltage and di/dt through the load while the RC snubber sets the maximum dv/dt and peak voltage across the switching element like a triac; di/dt and dv/dt values should be considered when handling non-resistive loads.

At this point, it’s worth noting that when a triac drives an inductive load, the mains voltage and the load current are not in phase. To limit the slope of the reapplied voltage and ensure the right triac turn-off, a snubber circuit is usually connected in parallel with the triac. The snubber circuit can also be used to improve triac immunity to fast transient voltages.

Summed up briefly, the generic RC snubber circuit module covered in this post is suitable for certain circuits with inductive loads and switching devices such as triacs, thyristors, and power relays. When used, the two input screw terminals of the module are connected to the two contacts of the relay (such as common and normally open contacts), or it’s connected in parallel with the triac/thyristor (Figure 3).

Figure 3 The above image offers application hints for RC snubber modules. Source: Author

Inductive load suppression

Inductive load suppression encompasses techniques designed to mitigate the adverse effects of inductive kickback (back-EMF), which manifests when an inductive load—such as a solenoid or motor—is abruptly de-energized.

Moving on to additional guideposts for inductive load suppression, suppressor circuits are commonly used with inductive loads to control voltage spikes when a control output switches off. These circuits help prevent premature failure of outputs by mitigating the high-voltage transients that occur when current flow through an inductive load is interrupted.

The randomly selected sample voltage waveforms shown below illustrate this more clearly.

Figure 4 Here is a comparison between unsuppressed and snubber-suppressed voltage waveforms. Source: Paktron

In addition, suppressor circuits play a crucial role in reducing electrical noise/arcing generated during the switching of inductive loads. Poorly suppressed inductive loads can generate noise that may interfere with the operation of sensitive electronic components and circuits. The most effective way to reduce interference is to install an external suppressor circuit electrically across the load or switch element, as required by the setup, and position it in close physical proximity.

Listed below are some fine-tuned inductive load suppression application hints. The corresponding figures help to visualize them.

  1. In most applications, placing a standard diode across a DC inductive load provides sufficient protection for DC or relay outputs that control DC inductive loads. However, if your application demands faster turn-off times, incorporating a properly sized Zener diode is a recommended approach.
  2. For relay outputs controlling AC inductive loads, an MOV can be paired with a parallel RC circuit. At this stage, ensure that the MOV’s working voltage is at least 20% higher than the nominal line voltage (a worked example follows this list).
  3. In DC voltage applications, the RC snubber network is typically wired across the relay contacts, whereas in standard AC voltage applications, it’s placed across the load. To reinforce the point, the RC snubber mechanism must be wired across the triac in phase control circuits.
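
Working the 20% rule for common mains voltages (an illustrative calculation, not a datasheet value): for a 230 VAC line, 1.2 × 230 V ≈ 276 VAC is the minimum continuous rating; for a 120 VAC line, 1.2 × 120 V = 144 VAC. For reference, the 10D471K varistor in the module described earlier has a 470-V varistor voltage, which typically corresponds to a maximum continuous operating voltage of roughly 300 VAC, comfortably above the 276 VAC requirement.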

Figure 5 The above image offers AC/DC application hints for inductive load suppression. Source: Author

Well, to wrap things up, RC snubbers help control voltage spikes and scale down noise in circuits, making them essential in power electronics. This quick guide provides only a glimpse into the complex topic, leaving plenty more to uncover—from diverse design configurations to their wide-ranging applications.

When dealing with power electronics systems, a thorough understanding of snubber behavior is essential for engineers and enthusiasts alike.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.


The CyberPower DBH36D12V2: A UPS That Goes Old-School

Tue, 07/01/2025 - 16:22

Normally, when I cover the topic of uninterruptible power supplies (UPSs), I’m talking about devices containing rechargeable battery packs based on either sealed lead-acid (SLA) or one of several newer lithium-based charge storage technology alternatives. But what if the backup-power unit’s batteries aren’t rechargeable…and are lowly alkaline D cells (aka IEC LR20s)?

Normally, I’d probably take a pass on the editorial opportunity. But, given that this particular proposal came from my long-time colleague, mentor, and former boss Bill Schweber—a name with which many of you are already familiar from his ongoing coverage in EDN, EE Times, Planet Analog, and other AspenCore properties—I couldn’t resist. Here are some (lightly edited) excerpts from his original email to me, titled “Teardown product?”:

Would you like to do a teardown on a Verizon-supplied battery holder/power pack? It holds 12 standard D cells; when AC power fails, you manually switch it on and it powers the Fios box (you also have to remember to switch it back off after AC power comes back on – yeah, right, as if that’s going to happen).

It’s a fairly simple device; has some LEDs to indicate battery condition, not much more. Supposedly powers the Fios box for 24-36 hours.

The unit is model DBH36D12V2, made by CyberPower Systems; it is NOT listed on their site (I assume it’s custom for Verizon) and looks like this:

but replacements are available from Verizon as a spare part for end users:

 https://www.verizon.com/home/accessories/powerreserve/?&skuParam=sku190001

 It comes with a skimpy manual showing the line crew how to install it, not much else.

So why do I have this? Verizon was here a month ago, replaced our copper drop from the street pole with fiber but left the copper landline in the house. They installed an AC (normally)-powered fiber-copper converter box, which they brought on their truck.

They mailed me its associated battery box, which they also installed, a few days before they came—except they mailed me two. No idea why they didn’t just bring it on the truck, too.

I called and emailed and wasted time trying to return it, but there is seemingly no way to do that. The local Verizon store said, “go away”. I even went over to a Verizon truck that was in the area, but the guys on the truck wouldn’t take it, either.

I enthusiastically accepted Bill’s offer. The note he included inside the shipment box was priceless and resonated with my own longstanding repair-and-reuse-or-donate aspirations:

Thanks for agreeing to take this off my hands. Whether or not you are able to do something with it, at least I won’t feel guilty leaving it in my basement for the next few years, or throwing it out to add to the electronic waste mountain.

Keep doing those great teardowns…

Aww 🥹 Let’s start with those stock photos from the Verizon product page (the device, with battery compartment door closed, has Dipert-tape-measured approx. dimensions of 10”x6”x2”):

Now, for our specific patient. I won’t bore you with photos of the light brown (save for Verizon logos on two of the sides) cardboard box that it came in, save for sharing a closeup of the product label attached to one of the other sides:

Nokia? Really?

Onward. Flip open the top flaps, remove a piece of retaining cardboard inside:

followed by several pieces of literature as usual, and as with other photos in this piece, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

And our victim comes into initial view:

Let’s tackle the literature first. I’m also not going to bore you with the original from-factory shipping slip included in the box. But there was also a wall-mounting template in there: 

along with two mounting screws:

Plus the “skimpy manual” that Bill’s initial email to me had mentioned, and which I’ve scanned for your convenience as a PDF: Skimpy UPS Manual

Now let’s get the device out of the box and out of its clear plastic protective baggie. Front view (orientation references that follow assume it’s wall-mounted per the template):

Here’s a close-up of the connector on the end of the cable coming out of the battery box, which ends up plugged into (and powering) the fiber-copper converter box:

Back:

Plus a close-up of that backside label:

Top:

Right side (note the latch, which I’ll be springing shortly):

Bottom, revealing Bill’s aforementioned power switch, plus a battery-test button and remaining-charge indicator LEDs that appropriately illuminate when the button is pressed (or not, if the D cells are drained; see the user manual for specifics):

And left side (note the hinges; I bet you can already tell which way the battery compartment door swings when opened!):

Another label closeup (again…Nokia?):

And finally, open sesame! Were you correct with your earlier door-swing-direction forecast?

Note that the stamped instructions explicitly warn against using rechargeable batteries:

And yep, a dozen will get one-time drained, not to mention irresponsibly discarded (likely, vs responsibly recycled) and added to the electronic waste mountain, each time the device is used:

On that note, by the way, Bill was spot-on (no surprise) that a web search on “CyberPower DBH36D12V2” was fruitless from a results standpoint. The outcome from dropping the “2” on the end wasn’t much better. That said, it did indirectly lead me to the scanned PDF of a user manual for a conceptually similar CyberPower product, the DTC36U12V, which dispenses with the D cells and instead embeds a conventional UPS-reminiscent SLA battery inside it.

Again, onward. At the bottom of the earlier back-view photo, you might have noticed two holes, one in each corner. Embedded within each is, unsurprisingly, a screw head. Removing them:

enables pull-out of the panel at the bottom of the device’s front side:

Underneath it, again unsurprisingly, is the humble-function PCB, intended fundamentally to regulate-then-output the electrons coming from the dozen-battery array that powers it:

All those caps you see are, I suspect, intended (among other things) to augment the batteries’ innate output power to address the fiber-copper converter box’s startup-surge current needs:

The PCB pulls right out of the enclosure without much fuss:

Once removed, and since we’re already at the side of the PCB, let’s do all four perspectives:

Shall I flip it over next? Yes, I shall. My, little PCB, what thick traces have thee!

One more PCB topside view, this time, the enclosure unencumbered. Note the three battery pack charge strength indicator LEDs and, to their right, the test switch:

More views of the front panel underside, this time with the battery spring contacts temporarily detached:

Speaking of which, here’s a close-up of the other (permanently mounted) spring contacts at the top of the battery compartment:

Here are the light pipe structures and the mechanical button that correspond to the LEDs and switch on the PCB:

And now, unlike Humpty Dumpty and all the king’s horses and men, I’ll put the DBH36D12V2 back together again:

That’s all I’ve got for you today! Bill, I hope I once again met (and even, stretch goal, exceeded?) your expectations. Reader thoughts are as-always welcomed in the comments!

p.s…anyone have a need for a disassembled-then-reassembled but functionally unused CyberPower (or is that Verizon? Or Nokia?) DBH36D12V2?

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Inherently DC accurate 16-bit PWM TBH DAC

Tue, 07/01/2025 - 16:19

16-bit DACs are a de facto standard for high DC accuracy and precision domain conversion, but surprisingly few are fully 16-bit (0.0015%) precise. Even when described as “high precision,” some have inaccuracy and integral nonlinearity (INL) that significantly exceed 1 LSB. The TBH PWM-based design detailed here, by contrast, has inherent 16-bit DC accuracy and integral linearity limited only by the quality of the voltage reference. And it gets them without fancy, pricey, high-accuracy components (e.g., no 0.0015% resistors need apply).

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows its underlying nonlinearity-correcting Take-Back-Half (TBH) topology, as explained in: “Take back half improves PWM integral linearity and settling time.”

Figure 1 The INL is canceled by the TBH topology.

Figure 1 relies on two differential relationships that effectively subtract out (take back) integral nonlinearity and attenuate ripple.

  1. For signal frequencies less than or equal to the reciprocal of settling time = 1/Ts (including DC) Xc >> R and Z = 2(Xavg – Yavg/2).
  2. For frequencies greater than or equal to Fpwm, Xc << R and Z = Xripple – Yripple.

Because only one switch drives node Y while two in parallel drive X, INL due to switch loading at Y is twice that at X. Therefore, since Z = 2(Xavg – Yavg/2), A1’s differential RC network actively subtracts (takes back) the INL error component, resulting in (theoretically) zero net INL.
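
A short algebraic sketch (using the article’s variables, with e denoting the switch-loading error referred to node X) shows why: if Xavg = Xideal + e and, per the 2:1 loading argument, Yavg = Yideal + 2e, then

Z = 2(Xavg − Yavg/2) = 2Xideal + 2e − Yideal − 2e = 2Xideal − Yideal

so the loading-induced error term drops out of Z, leaving only the ideal averages.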

 Figure 2 illustrates how these elements can fit together in a robust 16-bit DAC circuit design. Here’s how it works.

Figure 2 The TBH principle sums two 8-bit PWM signals into one 16-bit DAC with Vout = Vref(MSBY + LSBY/256)/256. The asterisked resistors are 0.25% precision types. It is assumed that the PWM frequency (Fpwm) is ~10 kHz.

Two 8-bit resolution PWM signals with a rep rate of ~10 kHz serve as inputs, one for the most significant byte (MSBY) of the setting and the other for the least significant byte (LSBY). The MSBY signal drives R2 and R3, while the LSBY drives the R4, R5, and R7 network. The (R4+R5+4R7)/(R2+R3) = 256:1 ratio of the summing network accommodates the relative significance of the PWM signals. It also enables true 16-bit (15 ppm) conversion precision and differential nonlinearity (DNL) from only 8-bit (2500 ppm) resistor matching.
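
As a numeric illustration of the transfer function in the Figure 2 caption (using the design’s 5-V reference), take the code MSBY = 128, LSBY = 128:

Vout = 5 V × (128 + 128/256)/256 = 5 V × 128.5/256 ≈ 2.510 V

and one 16-bit LSB corresponds to 5 V/65,536 ≈ 76 µV.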

R6C3 suppresses small nanosecond duration ripple spikes on A1’s output caused by the super-fast output transitions of the U1 switches leaking past A1’s 10 MHz gain-bandwidth product.

The ultimate conversion accuracy is limited almost solely by the 5-V voltage reference quality, so this should be a premium component. Its job is made a little bit (pun) easier by the fact that the maximum current drawn by U1 is a modest 640 µA, which allows for true 16-bit INL with reference impedances up to 0.11 Ω. A maximum reference loading occurs at MSBY duty factor = 50%. The loading falls to near zero at Df = 0 and 100%.
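
That impedance figure follows from the numbers given: keeping the worst-case reference droop below one 16-bit LSB requires

Z_ref ≤ (5 V/65,536)/640 µA ≈ 76 µV/640 µA ≈ 0.12 Ω

which is in the same range as the stated 0.11-Ω limit (the exact value depends on how much of an LSB is budgeted to reference loading).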

The maximum ripple amplitude also occurs at 50%. The output ripple and DAC settling time are illustrated as the red curve in Figure 3.

Figure 3 Settling time to full precision requires ~100 PWM cycles = 10 ms for Fpwm = 10 kHz.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Reducing manual effort in coverage closure using CCF commands

Tue, 07/01/2025 - 11:00

Ensuring the reliability and performance of complex digital systems has two fundamental aspects: functional verification and digital design. Digital design predominantly focuses on the architecture of the system, which involves logic blocks, control-flow units, and data-flow units. However, design alone is not enough.

Functional verification plays a critical role in confirming if the design (digital system) behaves as intended in all expected conditions. It involves writing testbenches and running simulations that test the functionality of the design and catch bugs as early as possible. Without proper verification, even the most well-designed system can fail in real world use.

Coverage is a set of metrics/criteria that determines how thoroughly a design has been exercised during a simulation. It identifies and checks if all required input combinations have been exercised in the design.

There are several types of coverage used in modern verification flows, the first one being code coverage, which analyzes the actual executed code and its branches in the design. Functional coverage, on the other hand, is user-defined and tests the functionality of the design based on the specification and the test plan.

Coverage closure is a crucial step in the verification cycle. This step ensures that the design is robust and has been tested thoroughly. With an increase in scale and complexity of modern SoC/IP architectures, the processes required to achieve coverage closure become significantly difficult, time-consuming, and resource intensive.

Traditional verification involves a high degree of manual intervention, especially if the design is constantly evolving. This makes the verification cycle recursive, inefficient, and prone to human errors. Manual intervention in coverage closure remains a persistent challenge when dealing with complex subsystems and large SoCs.

Automation is not just a way to speed up the verification cycle; it gives us the bandwidth to focus on solving strategic design problems rather than repeating the same tasks over and over. This research is based on the same idea; it turns coverage closure from a tedious task to a focused, strategic part of the verification cycle.

This paper focuses on leveraging automation provided by the Cadence Incisive Metrics Center (IMC) tool to minimize the need for manual effort in the coverage closure process. With the help of configurable commands in the Coverage Configuration File (CCF), we can exercise fine control in coverage analysis, reducing the need for manual adjustments and making the flow dynamic.

Overview of Cadence IMC tool

IMC stands for Incisive Metrics Center, which is a coverage analysis tool designed by Cadence to help design and verification engineers evaluate the completeness of verification efforts. It works across the design and testbench during simulation to collect coverage data stored in a database. This database is later analyzed to identify the areas of design that have been tested and those which have not met the desired coverage goals.

IMC uses well defined metrics or commands for both code and functional coverage, which provide a detailed view of coverage results and identify any gaps to improve testing. The application includes the creation of a user-defined file called CCF, which includes these commands to control the type of coverage data that should be collected, excluded, or refined.

This paper covers several commands—“select_coverage”, “deselect_coverage”, “set_com”, “set_fsm_arc_scoring”, and “set_fsm_reset_scoring”—which handle different aspects of coverage. The “select_coverage” and “deselect_coverage” commands automate the inclusion and exclusion activity by selecting specific sections of code as required, thus eliminating the manual exclusion process.

The “set_com” command provides a simple approach to avoid the manual efforts by automatically excluding coverage for constant variables. Meanwhile, the “set_fsm_arc_scoring” and “set_fsm_reset_scoring” commands focus more on enhancement of finite state machine (FSM) coverage by identifying state and reset transitions for the FSMs present in the design.

By using this precise and command-driven approach, the techniques discussed in this paper improve productivity and coverage accuracy. That plays a crucial role in today’s fast-paced complex chip development cycles.

Selecting/deselecting modules and covergroups for coverage analysis

The RTL design is a hierarchical structure which consists of various design units like modules, packages, instances, interfaces, and program blocks. It can be a mystifying exercise to exclude a specific code coverage section (block, expr, toggle, fsm) for the various design units in the IMC tool.

The exercise to select/deselect any design units for code coverage can be implemented in a clean manner by using the commands mentioned below. These commands also provide support to select/deselect any specific covergroups (inside classes).

  • select_coverage

The command can enable the code coverage type (block, expr, toggle, fsm) for the given design unit and can also enable covergroups which are present in the given class.

Syntax:

select_coverage <-metrics> [-module | -instance | -class] <list_of_module/instance/class>

Figure 1 The above snapshot shows an example of select_coverage command. Source: eInfochips

This command is passed in the CCF with the appropriate set of switches: <-metrics> defines the type of coverage metric (block, expr, toggle, fsm, or covergroup). Depending on the coverage metric, -module, -instance, or -class is passed, followed by the list of modules/instances/classes.
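
For illustration, a CCF entry following this syntax might look like the lines below (the module and class names are borrowed from the examples later in this article; treat these as a sketch rather than tool-verified input):

select_coverage -block -expr -toggle -module design_top
select_coverage -covergroup -class tb_func_class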

  • deselect_coverage

The command can disable the code coverage type (block, expr, toggle, fsm) for the given design unit or can disable covergroups which are present in the given class.

Syntax:

deselect_coverage <-metrics> [-module | -instance | -class] <list_of_module/instance/class>

Figure 2 This snapshot highlights how deselect_coverage command works. Source: eInfochips

The combination of these two commands can be used to control/manage several types of code coverage metrics scoring throughout the design hierarchy, as shown in Figure 4, and functional coverage (covergroup) scoring throughout the testbench environment, as shown in Figure 7.

The design has a hierarchical structure of modules, sub-modules, and instances (Figure 3). Here, no commands are provided in the CCF, and code coverage scoring for all the design units is enabled, as shown in the figure below.

Figure 3 Code coverage scoring is shown without CCF Commands. Source: eInfochips

For example, let us assume code coverage (block, expr, toggle) scoring in ‘ctrl_handler’ module is not required and block coverage scoring in ‘memory_2’ instance is also not required; then in CCF, the deselect_coverage commands mentioned in Figure 4 will be used. To deselect all the code coverage metrics (block, expr, fsm, toggle), ‘-all’ option is used. Figure 4 also depicts the outcome of the commands used for disabling the assumed coverage.

Figure 4 Code coverage scoring is shown with deselect_coverage CCF commands. Source: eInfochips

In another scenario, the code coverage scoring is required for the ‘design_top’ module, and the toggle coverage scoring is required for the ‘memory_3’ instance. Code coverage for the rest of the design units is not required. So, the whole design hierarchy will be de-selected and only the two design units in which the code coverage scoring is required are selected, as shown in Figure 5. The code coverage scoring generated as per the CCF commands is also shown in Figure 5.
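
A CCF fragment matching this scenario, written per the syntax described above (a sketch only; the placeholder stands for the actual module list), could be:

deselect_coverage -all -module <list of all design modules>
select_coverage -all -module design_top
select_coverage -toggle -instance memory_3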

Figure 5 Code coverage scoring is shown with deselect_coverage/select_coverage CCF commands. Source: eInfochips

The two covergroups (cg1, cg2) in the class ‘tb_func_class’ are scored when no commands are mentioned in the CCF, as shown in Figure 6. If functional coverage scoring of the ‘cg2’ covergroup is not required, the CCF command shown in Figure 7 is used. To de-select a specific covergroup in a class, the ‘-cg_name <covergroup name>’ option is used.
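
Based on the syntax described earlier, the corresponding CCF line would presumably be along these lines:

deselect_coverage -covergroup -class tb_func_class -cg_name cg2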

Figure 6 Functional coverage scoring is shown without CCF commands. Source: eInfochips

Figure 7 Functional coverage scoring is shown with the CCF command. Source: eInfochips

It’s important to note that both the ‘select_coverage’ and ‘deselect_coverage’ commands have a cumulative effect on the coverage analysis. In the <metrics> sub-option, ‘-all’ includes all the code coverage metrics (block, expr, toggle, fsm) but does not include the -covergroup metric.

In the final analysis, by using the ‘select_coverage/deselect_coverage’ commands, code/functional coverage in the design hierarchy and from the testbench environment can be enabled and disabled from the CCF directly, which makes the coverage flow neat. If these commands are not used, to obtain a similar effect, manual exclusions from design hierarchy and testbench environment need to be performed in the IMC tool.

Smart exclusions of constants in a design

In many projects, some signals or sections of code in a design are not exercised throughout the simulation. Such signals or code sections create unnecessary gaps in the coverage database. Manually adding exclusions for such constant objects in all the modules/instances of the design is an exhausting job.

Cadence IMC provides a command which smartly identifies the constant objects in the design and ignores them from the coverage database. It’s described below.

set_com

When the set_com command is enabled in the CCF, it identifies coverage items such as inactive blocks, constant signals, and constant expressions that remain unexercised throughout the simulation; it omits them from coverage analysis and marks them as IGN in the generated output file.

Syntax:

set_com [-on|-off] [<coverages>] [-log | -logreuse] [-nounconnect] [-module | -instance]

To enable Constant Object Marking (COM) analysis, provide the [-on] option with the set_com command. When the COM analysis is done, IMC generates an output file named “icc.com” which captures all the objects that are marked as constant.

Providing the [-log] option creates the icc.com file and ensures that it is updated for every simulation. This icc.com file is created at the path “cov_work/scope/test/icc.com.” COM analysis for a specific module/instance is enabled by providing the [-module | -instance] option with the set_com command.
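
Putting those options together for the instance discussed below, an illustrative CCF entry (confirm the exact switch spelling against the tool documentation) would be:

set_com -on -log -instance addr_handler_instance1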

Figure 8 The above image depicts the design hierarchy. Source: eInfochips

Figure 9 The COM analysis command is shown as mentioned in CCF. Source: eInfochips

Consider that the “chip_da” variable of the design remains constant throughout the simulation. By enabling the set_com command as shown in Figure 9, the variable chip_da will be ignored from the coverage database, which is shown in Figure 10 and Figure 11.

Figure 10 The icc.com output file is shown in the coverage database. Source: eInfochips

Figure 11 Constant variable chip_da is ignored with set_com command enabled. Source: eInfochips

COM analysis

In the CCF, the set_com command is enabled for the addr_handler_instance1 instance.

  • Here, as the set_com command is enabled, the “chip_da” signal, which remains constant throughout the simulation, will be ignored from coverage analysis for the defined instances. As shown in Figure 10, in every submodule where the chip_da signal is passed, it gets ignored as the chip_da signal is the port signal, and the COM analysis is done based on the connectivity (top-down/bottom-up).
  • Along with the port signals, the internal signals which remain constant, are also ignored from the coverage database. In Figure 10, the “wr” signal is an internal signal and it’s ignored from the coverage database (also reflected in Figure 11).
  • The signal chip_da is constant (marked IGN) for this simulation. If chip_da is variable (covered/uncovered) in some other simulation and the two simulations are merged, then chip_da is treated as a variable (covered/uncovered) rather than an ignored constant.

It’s worth noting that when the set_com command is enabled for a module/instance and a port signal is marked as IGN, the port signals of other sub-modules directly connected to this signal are also marked IGN, irrespective of whether the command is enabled for those modules/instances.

Finally, the set_com command is extremely useful for avoiding the unnecessary coverage captured for constant objects and for saving the time otherwise spent adding exclusions for them.

Detailed analysis of FSM coverage

A coverage-driven verification approach gives assurance that the design is exercised thoroughly. For FSM-based designs, there are several types of coverage analysis available. FSM state and transition coverage analysis are two ways to perform coverage-driven verification of FSM designs, but they do not constitute a complete verification of FSM designs.

FSM arc coverage provides a comprehensive analysis to ensure that the design is exercised thoroughly. To do that, Cadence IMC provides some CCF commands, which are described below.

set_fsm_arc_scoring

FSM arc coverage is disabled by default in ICC. It can be enabled by using the set_fsm_arc_scoring command in the CCF. The set_fsm_arc_scoring command enables scoring of FSM arcs, which are all the possible input conditions for which transitions take place between two FSM states.

Syntax:

set_fsm_arc_scoring [-on|-off] [ -module <modules> | -tag <tags>] [-no_delay_check]

To enable FSM arc coverage, provide the [-on] option with set_fsm_arc_scoring. FSM arc coverage can be enabled for all the FSMs defined in a module by providing the [-module <module_name>] option.

If FSM arc coverage needs to be captured for a specific FSM in the module, this can be achieved by assigning a tag name to the FSM using the set_fsm_attribute command in the CCF. By providing the [-tag] option with set_fsm_arc_scoring, FSM arc coverage can then be captured for that FSM in the design.

set_fsm_reset_scoring

A state is considered a reset state if the transition to that state is not dependent on the current state of the FSM; for example, in the code shown below.

Figure 12 Here is an example of a reset state. Source: eInfochips

State “Zero” is a reset state because the transition to this state is independent of the current state (ongoing_state). By default, the FSM reset state and transition coverage are disabled in ICC, as shown in Figure 13. They can be enabled using the set_fsm_reset_scoring command in the CCF. This command enables scoring for all the FSM reset states and transitions leading to reset states that are defined within the design module.

Figure 13 FSM coverage is shown without set_fsm_arc_scoring command. Source: eInfochips

Syntax:

set_fsm_reset_scoring

In the design, there are two FSMs defined—fsm_design_one and fsm_design_two—and we are enabling the FSM arc and reset state and transition coverage for fsm_design_two only. If the set_fsm_arc_scoring and set_fsm_reset_scoring commands are not provided in the CCF, the FSM arc, FSM reset state and transition coverage are not enabled, as shown in Figure 13.
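
Given the syntax described above, the CCF entries for enabling arc and reset scoring on fsm_design_two only would presumably look something like the lines below (this assumes the tag has already been assigned with set_fsm_attribute, whose options are not detailed here):

set_fsm_arc_scoring -on -tag fsm_design_two
set_fsm_reset_scoring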

If the set_fsm_arc_scoring and set_fsm_reset_scoring commands are provided in the CCF, as shown in Figure 14, then the FSM arc, the FSM reset state, and the transition coverage are enabled as shown in Figure 15.

Figure 14 The set_fsm_arc_scoring and set_fsm_reset_scoring commands are provided in CCF. Source: eInfochips

Figure 15 FSM coverage is shown with set_fsm_arc_scoring and set_fsm_reset_scoring commands. Source: eInfochips

If the design contains FSMs, one should enable the set_fsm_arc_scoring and set_fsm_reset_scoring commands in the CCF to ensure that the FSM logic is exercised thoroughly and verified using a coverage-driven approach.

Efficient coverage closure

Efficient coverage closure is essential for ensuring thorough verification of complex SoC/IP designs. This paper builds on prior work by introducing Cadence IMC commands that automate key aspects of coverage management, significantly reducing manual effort.

The use of select_coverage and deselect_coverage enables precise control over module and covergroup coverage, while set_com intelligently excludes constant objects, improving the coverage accuracy. Furthermore, set_fsm_arc_scoring and set_fsm_reset_scoring enhance the FSM verification, ensuring that all state transitions and reset conditions are thoroughly exercised.

By adopting these automation-driven techniques, verification teams can streamline the coverage closure process, enhance efficiency, and maintain high verification quality, improving productivity in modern SoC/IP development.

Rohan Zala, a senior verification engineer at eInfochips, has expertise in in IP/SoC verification for sensor-based chips, sub-system verification for fabric-based design, and NoC systems.

Khushbu Nakum, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips and sub-system verification for NoC design.

Jaini Patel, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips and SoC verification for signal processing design.

Dhruvesh Bhingradia, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips, sub-system verification for fabric-based design, and NoC systems.


Power Tips #142: A comparison study on a floating voltage tracking power supply for ATE

Mon, 06/30/2025 - 15:05

In order to test multiple ICs simultaneously with different test voltages and currents, semiconductor automatic test equipment (ATE) uses multiple source measurement units (SMUs). Each SMU requires its own independent floating voltage tracking power supply to ensure clean measurements.

Figure 1 shows the basic structure of the SMU power supply. The voltage tracking power supplies need to supply the power amplifiers with a wide voltage range (±15 V to ±50 V) and a constant power capability.

Figure 1 A simplified power-supply block diagram in an ATE. Source: Texas Instruments

Figure 2 illustrates the maximum steady-state voltage and current that the SMU requires in red and the pulsed maximums in blue.

Figure 2 An example voltage-current profile for a voltage tracking power supply. Source: Texas Instruments

The ICs under test require a low-noise power supply with minimal power loss. In order to manage the power dissipation in a linear power device and deliver constant power under the conditions shown in Figure 2, it is required that the power supply be able to generate a pulsating output with high instantaneous power.

In addition to power dissipation considerations, it is essential that the power supply has a sufficient efficiency and thermal management to accommodate as many test channels as possible.

Four topologies are studied and compared to see which one best meets the voltage tracking power supply requirements. Table 1 lists the electrical and mechanical specifications for the power supply. The four topologies under consideration are: hard-switching full bridge (HSFB), full-bridge inductor-inductor-capacitor (FB-LLC) resonant converter, dual active half bridge (DAHB), and a two-stage approach composed of a four-switch buck-boost (4sw-BB) plus half-bridge LLC resonant converter (HB-LLC).

Parameter      Minimum    Maximum
Vin            15 V       45 V
Vout           ±15 V      ±45 V
Iout           0 A        ±2.0 A
Pout,pulse     N/A        150 W
Height         N/A        4 mm
Width          N/A        14 mm
Length         N/A        45 mm
PCB layers     N/A        18

Table 1 Electrical and mechanical SMU requirements. Source: Texas Instruments

Topology comparison

Figure 3 shows the schematic for each of the four power supplies.

Figure 3 The four topologies evaluated to see which one best meets the voltage tracking power supply requirements listed in Table 1. Source: Texas Instruments

Each topology was evaluated against two essential requirements: small size and a minimal thermal footprint. Efficiency matters only insofar as it affects heat management.

Table 2 summarizes the potential benefits and challenges of each topology. In addition to size, the maximum height constraint necessitates a printed circuit board (PCB)-based transformer design.

Topology: HSFB
Benefits:
  • Single power conversion stage.
  • Simple well-known control.
Challenges:
  • Hard switching will limit the operating frequency.
  • A wide input and output range will be difficult for a single stage.

Topology: FB-LLC
Benefits:
  • Single power conversion stage.
  • Capable of high switching frequency because of zero voltage switching (ZVS).
Challenges:
  • A wide input and output range will be difficult for a single stage.
  • Low Lm may result in low efficiency because of high root-mean-square (RMS) currents.

Topology: DAHB
Benefits:
  • Single power conversion stage.
  • Capable of high switching frequency because of ZVS.
Challenges:
  • A wide input and output range will be difficult for a single stage.
  • Complex control is required to deliver the required power and maintain ZVS.

Topology: Two-stage
Benefits:
  • Optimized preregulator for power delivery over a wide range.
  • Optimizing the LLC for a single operating frequency makes the resonant tank design straightforward.
  • Heat is spread out over a larger area.
Challenges:
  • Two stages will increase the required space.

Table 2 The benefits and challenges of the four different SMU power supply topologies. Source: Texas Instruments

To understand the size implications of the HSFB, start by examining the structure of the transformer. Equation 1 calculates the HSFB turns ratio as:

Applying the requirements listed in Table 1 yields the required turns ratio. Because a practical design is limited to a PCB with no more than 18 layers, the turns on a center-tapped design are capped at 2:8:8. With this information, you can use Equation 2 to estimate the center-leg core diameter:

Hard-switching losses in the FETs keep the frequency no higher than 500 kHz, which leads to a 12-mm center-leg diameter. The complete core will be at least twice that size, making the HSFB solution too large for serious practical consideration.
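As a rough sketch of the sizing logic (not necessarily the exact form of Equations 1 and 2), the turns ratio and core cross-section for a full bridge driving a center-tapped secondary typically follow relations of this form; the maximum duty cycle Dmax and the allowed flux swing ΔB are assumed parameters, not values given in the article:

```latex
% Hedged sketch of typical HSFB transformer-sizing relations; D_max and
% \Delta B are assumptions, and the article's Equations 1 and 2 may differ.
\begin{align}
  \frac{N_s}{N_p} &\approx \frac{V_{out,max}}{D_{max}\,V_{in,min}}
    &&\text{turns ratio, per secondary half} \\
  A_e &\gtrsim \frac{V_{in,min}\,D_{max}}{2\,N_p\,f_{sw}\,\Delta B}
    &&\text{minimum core cross-section} \\
  d_{center} &= 2\sqrt{A_e/\pi}
    &&\text{center-leg diameter for a round leg}
\end{align}
```

With the Table 1 extremes (45 V out from a 15-V input) and an assumed Dmax of roughly 0.75, the first relation works out to about 1:4 per half-secondary, which is consistent with the 2:8:8 winding cited above.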

The single-stage FB-LLC enables a higher switching frequency by solving the hard-switching concerns found in the HSFB. However, the broad input and output voltage range will require a small magnetizing inductance. The best design identified used a turns ratio of 4:5, Lm = 2 µH, Lr = 1 µH, and fr = 800 kHz. This design addresses the issues with the HSFB by incorporating more primary turns, achieving a high operating frequency for minimal size, and requiring only 14 layers. However, the design suffers from several operating points that result in ZVS loss and an inability to generate the necessary output voltage under pulsed load conditions.

Figure 4 shows the equation and plots of the maximum gain of the system. Supporting the requirements outlined in Table 1 requires a gain of at least 3. Figure 4 shows that this is only possible by drastically decreasing one or more of Lr, Lm, or fr. Decreasing Lr will result in a loss of ZVS from the rapid change in the inductor current. Reducing fr will drive up the size of the transformer and the required primary turns. Decreasing Lm will significantly increase losses from additional circulating current. Given these factors, the single-stage FB-LLC is also not an option.
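For reference, the first-harmonic-approximation gain of an LLC tank is commonly written as shown below; this is the standard textbook form, and the exact expression plotted in Figure 4 may be normalized differently. Q, Ln, and fn are defined here for the sketch rather than taken from the article:

```latex
% Standard FHA voltage-gain expression for an LLC resonant tank (sketch).
\begin{align}
  M(f_n) &= \frac{1}{\sqrt{\left[1 + \frac{1}{L_n}\left(1 - \frac{1}{f_n^{2}}\right)\right]^{2}
            + Q^{2}\left(f_n - \frac{1}{f_n}\right)^{2}}}, \\
  L_n &= \frac{L_m}{L_r}, \quad f_n = \frac{f_{sw}}{f_r}, \quad
  f_r = \frac{1}{2\pi\sqrt{L_r C_r}}, \quad Q = \frac{\sqrt{L_r/C_r}}{R_{ac}}
\end{align}
```

Plugging in the cited values, Ln = Lm/Lr = 2, and the resonant capacitor implied by fr = 800 kHz with Lr = 1 µH is roughly Cr = 1/((2π·800 kHz)²·1 µH) ≈ 40 nF. A small Ln is what makes a peak gain near 3 reachable at all, but, as noted above, it comes at the cost of large circulating magnetizing current.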

Figure 4 Maximum fundamental harmonic approximation (FHA) gain plots. Source: Texas Instruments

DAHB

The DAHB [1] is an interesting option that also addresses the hard-switching concerns. One drawback is the need for active control of the secondary FETs, which requires additional circuitry to translate the control signals across the isolation boundary. Equation 3 predicts the resulting power delivery capability of the DAHB:
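The article's Equation 3 follows reference [1]; as a rough stand-in, the classic phase-shift power-transfer relation for dual-active-bridge-style converters captures the same scaling (the half-bridge variant differs in its constants). The turns ratio n, series inductance Lk, and phase shift φ are assumed symbols, not taken from the article:

```latex
% Classic dual-active-bridge phase-shift power relation (sketch only; the
% DAHB expression in [1] differs in its constants). n, L_k, \phi assumed.
\begin{equation}
  P \approx \frac{n\,V_{in}\,V_{out}}{2\pi\,f_{sw}\,L_k}\,
            \phi\!\left(1 - \frac{|\phi|}{\pi}\right)
\end{equation}
```

Because deliverable power scales inversely with fsw·Lk, covering the 150-W pulsed requirement across the full voltage range pushes toward a small Lk or a large phase shift, which helps explain the 80-A peak-current condition flagged in Table 3.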

Table 3 lists the results for the full set of requirements outlined in Table 1. Notice that there are several problematic conditions, most notably one where the required peak current is 80 A, which the FETs used in the design cannot accommodate.

Table 3 DAHB operating points, showing several problematic conditions that cannot be designed for. Source: Texas Instruments

The two-stage approach pushes the voltage regulation problem to the 4sw-BB and operates the HB-LLC at a fixed frequency at resonance, which allows the HB-LLC to run at high frequency and more easily achieve ZVS under all conditions. The obvious downside of this approach is that it uses two power stages instead of one. However, the reduced currents in the HB-LLC and its ability to run at higher frequencies enable you to minimize the size of the transformer.

Table 4 summarizes the comparison between the four topologies, highlighting the reasons for selecting the two-stage approach. References [2] and [3] describe some essential control parameters used for the buck-boost and LLC.

Topology: HSFB
Comparison results:
  • Hard switching losses keep the switching frequency low.
  • Large secondary turns and low operating frequency result in a large magnetic core.

Topology: FB-LLC
Comparison results:
  • Parasitic capacitance requires a larger resonant inductor.
  • Gain requires a smaller resonant inductor.
  • The design cannot provide the required voltages.

Topology: DAHB
Comparison results:
  • Complex multimode control.
  • Active secondary-side FET control.
  • Large RMS currents.

Topology: Two-stage
Comparison results:
  • Including a preregulator optimizes power delivery over the wide range.
  • The LLC can be optimized for a single operating frequency.
  • Heat is spread out over a larger thermal footprint.

Table 4 Comparison between the four different topologies, highlighting the reasons for selecting the two-stage approach. Source: Texas Instruments

Test results

Based on the comparison results, I built a high-power-density (14 mm by 45 mm) 4sw-BB plus HB-LLC prototype. Figure 5 shows the final hardware, which fits within the space outlined in Table 1.

Figure 5 The top-side layout of the high-power density 4sw-BB + HB-LLC test board. Source: Texas Instruments

Figure 6 shows both efficiency and thermal performance of the LLC converter.

Figure 6 The LLC efficiency curve and a thermal scan of the LLC converter. Source: Texas Instruments

Two-stage approach

Of the four topologies considered for the ATE SMU, the two-stage approach, combining a four-switch buck-boost with a fixed-frequency LLC, proved the smallest overall solution capable of meeting the system requirements.

Brent McDonald works as a system engineer for the Texas Instruments Power Supply Design Services team, where he creates reference designs for a variety of high-power applications. Brent received a bachelor’s degree in electrical engineering from the University of Wisconsin-Milwaukee, and a master’s degree, also in electrical engineering, from the University of Colorado Boulder.

Related Content

References

  1. Laturkar, N. Deshmukh and S. Anand. “Dual Active Half Bridge Converter with Integrated Active Power Decoupling for On-Board EV Charger.” 2022 IEEE International Conference on Power Electronics, Smart Grid, and Renewable Energy (PESGRE), Trivandrum, India, 2022, pp. 1-6, doi: 10.1109/PESGRE52268.2022.9715900.
  2. McDonald and F. Wang. “LLC performance enhancements with frequency and phase shift modulation control.” 2014 IEEE Applied Power Electronics Conference and Exposition – APEC 2014, Fort Worth, TX, USA, 2014, pp. 2036-2040, doi: 10.1109/APEC.2014.6803586.
  3. Sun, B. “Multimode control for a four-switch buck-boost converter.” Texas Instruments Analog Design Journal, literature No. SLYT765, 1Q 2019.

The post Power Tips #142: A comparison study on a floating voltage tracking power supply for ATE appeared first on EDN.

Accelerating time-to-market as cars become software defined

Mon, 06/30/2025 - 08:41

Automakers have always raced to get the latest models to market. The shift to software-defined vehicles (SDVs) has turned that race into a sprint. It’s not a simple shift, however.

Building cars that can evolve constantly demands an overhaul of development practices, tools, and even team culture. From globally distributed engineering teams and cloud-based workflows to virtual testing and continuous integration pipelines, automakers are adopting new approaches to shrink development timelines without compromising safety or quality. These shifts are enabling the industry to move faster.

In older vehicles, code is rarely changed after the car leaves the factory. In contrast, SDVs are designed for continuous improvement. Manufacturers can push over-the-air (OTA) updates to add features, fix bugs, or enhance performance throughout a car’s life.

However, delivering continuous upgrades requires development cycles to speed up dramatically. Instead of a process measured in years for the next model refresh, software updates often need to be developed, tested, and rolled out in a matter of months—sometimes less. The cadence of innovation in automotive is shifting, and time-to-market for each new enhancement has become paramount.

Figure 1 Software-defined vehicles (SDVs) are designed for continuous improvement. Source: NXP

This new pace is a profound change for automakers, and calls for a far more agile, software-centric mindset. Companies that successfully shrink their cycle times can deliver constant improvements; those that cannot risk their vehicles quickly becoming outdated.

Distributed teams, unified development

Managing the massive, distributed development teams behind SDVs is another challenge when it comes to speeding up software delivery. Where a car’s software was previously handled by small in-house teams, today it takes hundreds or thousands of engineers spread around the globe.

This international talent pool enables 24-hour development, but it also introduces fragmentation. Different groups may use different tools or processes, and not everyone can access the same test hardware. Without a coordinated approach, a large, distributed team can prove a bottleneck rather than a benefit.

Automotive manufacturers are tackling the issue by uniting teams in cloud-based development environments. Instead of each engineer working in isolation, everyone accesses a standardized virtual workspace in the cloud pre-configured with every necessary tool. This ensures code runs the same for each developer, eliminating the “works on my machine” syndrome.

It also means updates to the toolchain or libraries can be rolled out to all engineers at once. Onboarding new team members becomes much faster as well—rather than spending days installing software, a new hire can start coding within hours by logging into the cloud environment. With a shared codebase and common infrastructure, a dispersed team can collaborate as one, keeping productivity high and projects on schedule.

Virtual testing: From months to minutes

Rethinking how and when software testing happens is critical to the acceleration of SDV development. In the past, software testing depended heavily on physical prototypes—electronic control units (ECUs) or test vehicles that developers needed to use in person, often creating idle time and long delays that are unacceptable in a fast-moving SDV project. The solution is to virtualize as much of the testing as possible.

Virtual prototypes of automotive hardware enable software development to begin long before physical parts are available. If new hardware won’t arrive until next year, engineers can work with a digital twin today. By the time actual prototypes come in, much of the software will already be validated in simulation, potentially accelerating time to market by months.

Figure 2 Virtual prototypes can be developed in parallel to hardware development. Source: NXP

Even when real hardware testing is required, remote access is speeding things up. Many companies now host “hardware-in-the-cloud” labs—racks of ECUs and other devices accessible online. Instead of waiting their turn or traveling to a test site, developers anywhere can deploy code to these remote rigs and see the results in real time. This approach compresses the validation cycle, catching issues earlier and proving out new features in weeks rather than months.

Embracing CI/CD for rapid releases

Accelerating time-to-market also requires the software release process itself to be reengineered. Modern development teams are increasingly adopting continuous integration and continuous delivery (CI/CD) pipelines to keep code flowing smoothly from development to deployment. In a CI/CD approach, contributions from all developers are merged and tested continuously rather than in big infrequent batches.

Automated build and test systems catch integration bugs or regressions a lot sooner in the development process, making fixes a lot easier to handle. This reduces last-minute scrambles that often plagued traditional, slower development cycles. With a robust CI/CD pipeline, software is always in a deployable state.

Of course, moving at such speed in a safety-critical industry requires care. CI/CD’s built-in rigor ensures each change passes all quality and safety checks before it ever reaches a car.

Driving into the future, faster

The push to accelerate vehicle software development is reshaping automotive engineering. Building cars that are defined by software forces automakers to adopt the tools, practices, and culture of software companies. Investments in cloud-based development environments, virtual testing frameworks, and CI/CD pipelines are quickly becoming the norm for any automaker that wants to stay competitive.

Ultimately, as cars increasingly resemble computers on wheels, time-to-market for software-driven features has become a make-or-break factor. The race is on for automakers to deliver new capabilities faster than ever, without hitting the brakes on safety or quality.

Those who successfully integrate distributed teams with cloud-first workflows, leverage virtual testing, and adopt continuous delivery practices will be perfectly placed to win over customers with vehicles that keep improving over time.

Curt Hillier is technical director for automotive solutions at NXP Semiconductors.

Razvan Ionescu is automotive software and tools architect at NXP Semiconductors.

Related Content

The post Accelerating time-to-market as cars become software defined appeared first on EDN.

Computer and network-attached storage: Capacity optimization and backup expansion

Fri, 06/27/2025 - 16:25
Motivations for bolstering backup

Last March, I documented my travails striving to turn a 2018-era x86-based Apple Mac mini into a workable system in spite of the diminutive capacity of its non-upgradeable 128 GByte internal SSD:

Eight months later (last November), I discussed how, motivated by the lightning-induced last-summer demise of one of my network-attached storage (NAS) devices:

I actualized longstanding aspirations to bolster my backup stratagem in order to better protect my precious data from failed hardware and other catastrophes (a virus or hack, for example).

Updating the initial approach

This writeup acts as an update to the initial approaches (and results) that I documented in the two prior pieces. Available-capacity optimization first. For pretty much the entirety of the time I’d owned the Mac mini, each time I needed to install an update to the currently installed version of MacOS and/or the Safari browser, I’d first temporarily need to uninstall a few particularly large apps to free up sufficient SSD space, then install the update, and then reinstall the apps afterwards. This got, as you can probably imagine, really tedious really quickly.

Salvation came by accident. Microsoft is pushing users to convert from the “Legacy (also referred to as “Classic”) Outlook” PIM (personal information management) app, along with the “Mail”, “Calendar”, and “People” apps originally bundled with Windows 10, to “New Outlook”. The latter successor is a PWA (progressive web app) that, simplistically speaking, acts as a locally installed “wrapper” for the cloud-based version of Outlook. Since the initial public release of “New Outlook” a couple of years ago, Microsoft has evolved the Windows variant of the app to optionally support local storage of some-to-all user data, thereby enabling access when offline. The MacOS version, conversely, still does only limited, temporary local caching.

Why is this important? As Microsoft has increasingly focused its development attention on “New Outlook”, I’d been noticing an increasing prevalence of bugs in “Legacy Outlook”, along with lengthening delays until they were eventually fixed. A month or so ago, after upgrading my MacBook Pro to MacOS 15 “Sequoia”, I also noticed that the “Legacy Outlook” search facility (which leverages Apple’s Spotlight system service) was no longer giving me results. Frustrated, I decided to “throw in the towel” and switch to “New Outlook” instead. I eventually ended up switching back to “Legacy Outlook”, after realizing that the lack of a locally stored full sync of my database would be unpalatable for offline use while traveling, for example (after doing so, by the way, Spotlight-based Outlook search magically started working again). But in the process, I learned something that in retrospect should have been obvious (but then again, what isn’t?).

After converting to “New Outlook”, I came across a settings option to delete “Legacy Outlook” data. Doing so freed up nearly 30 GBytes of storage capacity. This improvement was a nice-to-have on my laptop, which has a 512 GByte SSD (with ~25% still free even after converting back to “Legacy Outlook”). On the Mac mini, which I subsequently also converted to “New Outlook” and where the “Legacy Outlook” database represented ~25% of the SSD’s total capacity, it was a fundamental breakthrough. Regarding the database’s formidable size and in my (slight) defense:

  • I use Outlook for my “day job’s” multiple accounts’ emails, contacts and calendars
  • I’ve been employed there for three months shy of 14 years as I write this
  • Note that this payload includes not only emails (and calendar entries and contacts) but also email file attachments, which in my organization are frequent and can be sizeable
  • I’m an admitted digital packrat 😉 (which has saved me numerous times in the past, as I’ve been able to pull up archived content to substantiate or refute my own memory)

Due in part to the storage capacity savings (alas, unlike with my Mozilla Firefox and Thunderbird profiles, for example, there doesn’t seem to be a straightforward way to relocate the “Classic Outlook” profile to an external drive), coupled with the fact that the Mac mini is perpetually sitting on my desk and connected to broadband (and that if broadband goes down, my email-related productivity won’t be the only thing that suffers), I’ve kept “New Outlook” on it. The freed-up room on the SSD enabled me to also full-version upgrade the Mac mini from MacOS 10.14 “Mojave” to MacOS 15 “Sequoia”, delivering another (modest, in my case) benefit.

As with Adobe Creative Suite, which lets you optionally install apps to a different (attached, obviously) storage device (as long as you install them one at a time, that is), with MacOS 15 you can optionally install App Store-sourced programs elsewhere as long as they’re 1 GByte or larger in size…because, after all, Apple doesn’t want to discourage you from buying more profitable-to-them computers with larger internal storage capacities, right? In my particular case, that relocation was only relevant for Luminar AI, 2.64 GBytes in size. But I’ll take it.

Backup Expansion

Last November’s write-up on storage backup showcased the “3-2-1 rule”. Here again is Wikipedia’s concise summary:

The 3-2-1 rule…states that there should be at least 3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite, in a remote location (this can include cloud storage). 2 or more different media should be used to eliminate data loss due to similar reasons (for example, optical discs may tolerate being underwater while LTO tapes may not, and SSDs cannot fail due to head crashes or damaged spindle motors since they do not have any moving parts, unlike hard drives). An offsite copy protects against fire, theft of physical media (such as tapes or discs) and natural disasters like floods and earthquakes.

That said, as I noted then in the introduction to my stratagem explanation:

As you’ll see in the paragraphs to follow, I’m not following the 3-2-1 rule to the most scrupulous degree—all of my storage devices are HDD-based, for example, and true offsite storage would be bandwidth-usage prohibitive with conventional home broadband service.

While I can’t vouch for what storage technologies my more recently added “cloud” backup providers are using, they are off-site, so I feel pretty good about that. First off there’s my four-drive QNAP TS-453Be NAS, which, as I mentioned last time, holds “my music and photo libraries, along with decades’ worth of other accumulated personal files”:

The other two NASs on my LAN are purely used for backup purposes (both from computer sources and of each other’s contents), so losing them isn’t the end of the world. But the TS-453Be’s stored information is pretty darn important. What I learned is that QNAP’s HBS 3 Hybrid Backup Sync utility supports backups not only to USB- and Ethernet-connected local external storage but also to a variety of “cloud” storage services, among them Microsoft OneDrive. Also, since mine’s an Office 365 Family plan, I can associate it with up to five additional Microsoft accounts (beyond my own), each of which gets up to 1 TByte of OneDrive storage.

So, I created a second Microsoft account for myself, associated with a different email address of my many, and with its OneDrive storage capacity devoted exclusively to TS-453Be backups. HBS 3 backups are incremental—files that haven’t changed since the previous backup aren’t unnecessarily backed up again—so the bandwidth (and destination storage) “hit” wasn’t excessive after the initial backup session. I do one “cloud” backup a month (LAN backups are weekly), further limiting the incremental impact on my monthly 1.2 TByte usage allocation from Comcast/Xfinity (with added-cost unlimited usage always an option if ever needed). And I run them at midnight, when slumber means I don’t notice their LAN-bandwidth packet presence.

The other particularly important storage device here at the home office is the SSD inside my current “daily driver” computer, a 2020-era x86-based 13” Apple MacBook Pro.

Here, my cloud storage solution involves a highly regarded service called Backblaze, whose periodic Drive Stats storage reliability reports I’ve mentioned before. At the end of last year, Backblaze ran a promotion on its Personal Backup service tier: two years for $151.20 (normally $189), with Extended Version History (enabling access to older backed-up versions of files, too) also included. The incremental-backup Backblaze client app constantly runs silently in the background and by default focuses its attention only on data files, to optimize both its use of cloud storage and upload bandwidth, and under the assumption that you can reinstall programs if necessary. Except for the initial backup, along with the ~30 GByte data file outcome of my previously mentioned more recent reinstall of “Classic Outlook”, bandwidth usage has been scant. And functionality has to date been flawless. Highly recommended!

Thoughts on the concepts I introduced in my first two posts and augmented with this one? Sound off in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Computer and network-attached storage: Capacity optimization and backup expansion appeared first on EDN.

Authentication IC strengthens IoT security

Fri, 06/27/2025 - 16:25

Microchip has added secure code signing, firmware over-the-air (FOTA) updates, and CRA compliance to its ECC608 TrustMANAGER authentication IC. Additional enhancements include remote management of firmware images, cryptographic keys, and digital certificates. These capabilities help developers meet the cybersecurity requirements of the European Union’s Cyber Resilience Act (CRA) and prepare for similar regulations expected to emerge in other regions.

The ECC608 TrustMANAGER pairs with Kudelski IoT’s keySTREAM Software as a Service (SaaS) to securely store and manage cryptographic keys and certificates across IoT ecosystems. FOTA services support real-time firmware updates, enabling remote vulnerability patching and helping devices comply with evolving cybersecurity regulations.

Further enhancing cybersecurity compliance, Microchip’s WINC1500 Wi-Fi network controller in the TrustMANAGER development kit is now RED-certified for secure, reliable cloud connectivity. The EU’s Radio Equipment Directive (RED) sets strict requirements for network security, data protection, and fraud prevention. Starting August 1, 2025, all wireless devices sold in the EU must meet RED cybersecurity rules.

The ECC608 TrustMANAGER is available for purchase from Microchip and its authorized distribution partners.

TrustManager product page

Microchip Technology 

The post Authentication IC strengthens IoT security appeared first on EDN.

Clock ICs drive low-jitter Ethernet and PCIe

Fri, 06/27/2025 - 16:25

The SKY63104/5/6 family of jitter-attenuating clocks and the SKY62101 clock generator simultaneously generate ultra-low jitter clocks for synchronous Ethernet and spread-spectrum PCIe. Built on Skyworks’ fifth-generation DSPLL and MultiSynth technologies, these devices support flexible input-to-output frequency mapping and enable single-IC clock tree designs for demanding networking, data center, and industrial applications.

Typical DSPLL RMS jitter is as low as 18 fs (12 kHz–20 MHz at 625 MHz), with total output jitter around 55 fs RMS—well suited for 224G PAM4 Ethernet SerDes. The devices also meet spread spectrum clocking requirements for PCIe Gen 1 through Gen 6.

The jitter attenuators and clock generator provide 12 outputs in a compact 8×8-mm QFN package with wettable flanks. The SKY63104 includes one DSPLL with two MultiSynths; the SKY63105 has two DSPLLs and one MultiSynth; and the SKY63106 features three DSPLLs with no MultiSynth. Output frequencies span 8 kHz to 3.2 GHz, with configurable formats including LVDS, HCSL, LVPECL, LVCMOS, S-LVDS, and CML. Output-to-output skew is tightly controlled at ±50 ps, with per-output delay adjustable in 50-ps steps.

SKY63104/5/6 product page 

SKY62101 product page 

Skyworks Solutions 

The post Clock ICs drive low-jitter Ethernet and PCIe appeared first on EDN.

Space-ready GaN FET endures harsh radiation

Fri, 06/27/2025 - 16:25

EPC Space has announced the EPC7030MSH, a radiation-hardened 300-V GaN FET for high-voltage, high-power space applications. It supports front-end DC/DC converters in satellite power systems, power conversion for high-voltage distribution buses, and electric propulsion platforms requiring high-performance switching.

With a maximum RDS(on) of 35 mΩ and a typical gate charge of 25 nC (30 nC max), the EPC7030MSH delivers a continuous drain current of 50 A at 5 V and a single-pulse drain current of 150 A for 300 µs. According to EPC Space, these specifications place it among the highest-performing rad-hard FETs in its class.

The EPC7030MSH is rated for 300-V operation at a linear energy transfer (LET) of 63 MeV·cm²/mg and maintains single-event effect (SEE) immunity up to 250 V at an LET of 84.6 MeV·cm²/mg. It is also immune to total ionizing dose (TID) effects under both low and high dose rate conditions.

Engineering models of the EPC7030MSH cost $236 each in quantities of 500 units, while rad-hard space-qualified devices are priced at $349 each.

EPC7030MSH product page

EPC Space 

The post Space-ready GaN FET endures harsh radiation appeared first on EDN.

8-channel driver manages diverse automotive loads

Fri, 06/27/2025 - 16:25

ST’s L9800 combines eight low-side drivers with diagnostics and protection in a compact leadless package for tight automotive spaces. The ISO 26262 ASIL-B-compliant device drives resistive, capacitive, or inductive loads—such as relays and LEDs—in body-control modules, HVAC systems, and power-domain controls.

Output channels can be controlled via the SPI port or two dedicated parallel inputs that map to selected outputs. These inputs enable emergency hardware control of two default channels even if the digital supply voltage is not available. This allows the L9800 to enter limp-home mode, maintaining essential safety and convenience functions during system failures, such as microcontroller faults or supply undervoltage.

The L9800 enhances vehicle reliability with real-time diagnostics and per-channel protection against open-circuit, short-circuit, overcurrent, and overtemperature faults. Diagnostic signals are accessible over the SPI bus, which also allows access to internal configuration registers for device setup. Additionally, the driver ensures safe operation during engine cranking, supporting battery voltages as low as 3 V.

Housed in a 4×4-mm TFQFN24 package, the L9800 low-side driver costs $0.52 each in lots of 1000 units.

L9800 product page 

STMicroelectronics

The post 8-channel driver manages diverse automotive loads appeared first on EDN.

Solid-state fan chip reduces heat in XR glasses

Fri, 06/27/2025 - 16:24

xMEMS is bringing its µCooling fan-on-chip platform to AI-driven extended reality (XR) smart glasses. The silicon-based solid-state micro cooling chip provides localized, precision-controlled active cooling from within the glasses frame—without compromising form factor or aesthetics.

As smart glasses integrate more advanced AI processors, cameras, sensors, and high-resolution displays, total device power (TDP) is expected to rise from today’s 0.5–1 W to 2 W and beyond. This increase pushes more heat into the frame materials that rest directly against the skin, exceeding what passive heat sinking can effectively dissipate.

According to xMEMS, thermal modeling and physical verification of µCooling in smart glasses operating at 1.5 W TDP has demonstrated a 60–70% improvement in power overhead, allowing up to 0.6 W additional thermal margin. It also showed up to a 40% reduction in system temperatures and up to a 75% reduction in thermal resistance.

Built on a piezoMEMS architecture with no motors or bearings, the fan chip delivers silent, vibration-free operation in a package as small as 9.3×7.6×1.13 mm. 

µCooling samples for XR smart glasses are available now, with volume production planned for Q1 2026. To learn more about xMEMS active µCooling, click here.

xMEMS Labs

The post Solid-state fan chip reduces heat in XR glasses appeared first on EDN.
