Crowbar circuits: Revisiting the classic protector

Crowbar circuits have long been the go-to safeguard against overvoltage conditions, prized for their simplicity and reliability. Though often overshadowed by newer protection schemes, the crowbar remains a classic protector worth revisiting.
In this quick look, we will refresh the fundamentals, highlight where they still shine, and consider how their enduring design continues to influence modern power systems.
Why “crowbar”?
The name comes from the vivid image of dropping a metal crowbar across live terminals to force an immediate short. That is exactly what the circuit does—when an overvoltage is detected, it slams the supply into a low-resistance state, tripping a fuse or breaker and protecting downstream electronics. The metaphor stuck because it captures the brute-force simplicity and fail-safe nature of this classic protection scheme.

Figure 1 A crowbar protection circuit responds to overvoltage by actively shorting the power supply and disconnecting it to protect the load from damage. Source: Author
Crowbars in the CRT era: When fuses took the fall
In the era of bulky cathode-ray tube (CRT) televisions, power supply reliability was everything. Designers knew that a single regulator fault could unleash destructive voltages into the horizontal output stage or even the CRT itself. The solution was elegantly brutal: the crowbar circuit. Built around a thyristor or silicon-controlled rectifier (SCR), it sat quietly until the supply exceeded the preset threshold.
Then, like dropping a literal crowbar across the rails, it slammed the output into a dead short, blowing the fuse and halting operation in an instant. Unlike softer clamps such as Zener diodes or metal oxide varistors, the crowbar’s philosophy was binary—either safe operation or total shutdown.
For service engineers, this protection often meant the difference between replacing a fuse and replacing an entire deflection board. It was a design choice that reflected the pragmatic toughness of the CRT era: it’s better to sacrifice a fuse than a television.
Beyond CRT televisions, crowbar protection circuits find application in vintage computers, test and measurement instruments, and select consumer products.
Crowbar overvoltage protection
A crowbar circuit is essentially an overvoltage protection mechanism. It remains widely used today to safeguard sensitive electronic systems against transients or regulator failures. By sensing an overvoltage condition, the circuit rapidly “crowbars” the supply—shorting it to ground—thereby driving the source into current limiting or triggering a fuse or circuit breaker to open.
Unlike clamp-type protectors that merely limit voltage to a safe threshold, the crowbar approach provides a decisive shutdown. This makes it particularly effective in systems where even brief exposure to excessive voltage can damage semiconductors, memory devices, or precision analog circuitry. The simplicity of the design, often relying on a silicon-controlled rectifier or triac, ensures fast response and reliable action without adding significant cost or complexity.
For these reasons, crowbar protection continues to be a trusted safeguard in both legacy and modern designs—from consumer electronics to laboratory instruments—where resilience against unpredictable supply faults is critical.

Figure 2 Basic low-power DC crowbar illustrates circuit simplicity. Source: Author
As shown in Figure 2, an overvoltage across the buffer capacitor drives the Zener diode into conduction, triggering the thyristor. The capacitor is then shorted, producing a surge current that blows the local fuse. Once latched, the thyristor reduces the rail voltage to its on-state level, and the sustained current ensures safe disconnection.
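For a rough feel of where a circuit like Figure 2 fires, here is a minimal back-of-envelope sketch. It assumes the common arrangement just described, in which the Zener conducts into the thyristor gate once the rail exceeds its breakdown voltage; the component values are hypothetical and not taken from the figure.

```python
# Back-of-envelope crowbar trip point for a Zener-triggered SCR.
# Values are hypothetical, not taken from Figure 2.

V_ZENER = 6.2   # Zener breakdown voltage, volts (assumed)
V_GT = 0.7      # SCR gate trigger voltage, volts (typical, assumed)

# Gate current starts to flow once the rail exceeds the Zener breakdown
# plus the gate trigger voltage, so the crowbar fires near:
v_trip = V_ZENER + V_GT
print(f"Approximate crowbar trip voltage: {v_trip:.1f} V")
```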
Next is a simple practical example of a crowbar circuit designed for automotive use. It protects sensitive electronics if the vehicle’s power supply voltage rises above the safe setpoint, for example because of a load dump or an alternator regulator failure. The circuit monitors the supply rail, and when the voltage exceeds the preset threshold, it drives a dead short across the rails. The resulting surge current blows the local fuse, shutting down the supply before connected circuitry can be damaged.

Figure 3 Practical automotive crowbar circuit protects connected device via local fuse action. Source: Author
Crowbar protection: SCR or MOSFET?
Crowbar protection can be implemented with either an SCR or a MOSFET, each with distinct tradeoffs.
An SCR remains the classic choice: once triggered by a Zener reference, it latches into conduction and forces a hard short across the supply rail until the local fuse opens. This rugged simplicity is ideal for high-energy faults, though it lacks automatic reset capability.
A MOSFET-based crowbar, by contrast, can be actively controlled to clamp or disconnect the rail when overvoltage is detected. It offers faster response and lower on-state voltage, which is valuable for modern low-voltage digital rails, but requires more complex drive circuitry and may be less tolerant of large surge currents.
Now I remember working with the LTM4641 μModule regulator, notable for its built-in N-channel overvoltage crowbar MOSFET driver that safeguards the load.
GTO thyristors and active crowbar protection
On a related note, gate turn-off (GTO) thyristors have also been applied in crowbar protection, particularly in high-power systems. Unlike a conventional SCR that latches until the fuse opens or power is removed, a GTO can be actively turned off through its gate, allowing controlled reset after an overvoltage event. This capability makes GTO-based crowbars attractive in industrial and traction applications where sustained shorts are undesirable.
Importantly, GTO thyristors enable “active” crowbars, in contrast to conventional SCRs that latch until power is removed. That is, an active crowbar momentarily shorts the supply during a transient, and gate-controlled turn-off then restores normal operation without intervention. In practice, asymmetric GTO (A-GTO) thyristors are preferred in crowbar protection, while symmetric (S-GTO) types see limited use due to higher losses.
However, their demanding gate-drive requirements and limited surge tolerance have restricted their use in low-voltage supplies, where SCRs remain dominant and MOSFETs or IGBTs now provide more practical and controllable alternatives.

Figure 4 A fast asymmetric GTO thyristor exemplifies speed and strength for demanding power applications. Source: ABB
A wrap-up note
Crowbar circuits may be rooted in classic design, but their relevance has not dimmed. From safeguarding power supplies in the early days of solid-state electronics to standing guard in today’s high-density systems, they remain a simple yet decisive protector. Revisiting them reminds us that not every solution needs to be complex—sometimes, the most enduring designs are those that do one job exceptionally well.
As engineers, we often chase innovation, but it’s worth pausing to appreciate these timeless building blocks. Crowbars embody the principle that reliability and clarity of purpose can outlast trends. Whether you are designing legacy equipment or modern platforms, the lesson is the same: protection is not an afterthought, it’s a foundation.
I will close for now, but there is more to explore in the enduring story of circuit protection. Stay tuned for future posts where we will continue connecting classic designs with modern challenges.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- SCR Crowbar
- Where is my crowbar?
- Crowbar Speaker Protection
- Overvoltage-protection circuit saves the day
- How to prevent overvoltage conditions during prototyping
Does the cold of deep space offer a viable energy-harvesting solution?

I’ve always been intrigued by “small-scale” energy harvesting, where the mechanism is relatively simple while the useful output is modest. These designs, which may be low-cost but may also use sophisticated materials and implementations, often make creative use of what’s available, generating power on the order of 50 milliwatts.
These harvesting schemes often have the first-level story of getting “a little something for almost nothing” until you look more deeply into the details. Among the harvestable sources are incidental wind, heat, vibration, incremental motion, and even sparks.
The most recent such harvesting arrangement I saw is another scheme to exploit the thermal differential between the cold night sky and Earth’s warmer surface. The principle is not new at all (see References)—it has been known since the mid-18th century—but it keeps returning in new guises.
This approach, from the University of California at Davis, uses a Stirling engine as the transducer between thermal energy and mechanical/electrical energy, Figure 1. It was mounted on a flat metal plate embedded into the Earth’s surface for good thermal contact while pointing at the sky.
Figure 1 Nighttime radiative cooling engine operation. (A) Schematic of engine operation at night. Top plate radiatively couples to the night sky and cools below ambient air temperature. Bottom plate is thermally coupled to the ground and remains warmer, as radiative access to the night sky is blocked by the aluminum top plate. This radiative imbalance creates the temperature differential that drives the engine. (B) Downwelling infrared radiation from the sky and solar irradiance are plotted throughout the evening and into the night on 14 August 2023. These power fluxes control the temperature of the emissive top plate. The fluctuations in the downwelling infrared are caused by passing clouds, which emit strongly in the infrared due to high water content. (C) Temperatures of the engine plates compared to ambient air throughout the run. The fluctuations in the top plate and air temperature match the fluctuations in the downwelling infrared. The average temperature decreases as downwelling power decreases. (D) Engine frequency and temperature differential remain approximately constant. Temporary increases in downwelling infrared, which decrease the engine temperature differential, are physically manifested in a slowing of the engine.
Unlike other thermodynamic cycles (such as Rankine, Brayton, Otto, or Diesel), which require phase changes, combustion, or pressurized systems, the Stirling engine can operate passively and continuously with modest temperature differences. This makes it especially suitable for demonstrating mechanical power generation using ambient heat from the surroundings and radiative cooling, without the need for fuels or active control systems.
Most engines that exploit thermal differences first generate heat from some source and work it against the cooler ambient side. However, there’s nothing that says the warmer side can’t sit at ambient temperature while the other side is colder relative to ambient.
Their concept and execution are simple, which is always attractive. The Stirling engine (essentially a piston driving a flywheel) is placed on a 30 × 30 centimeter flat metal panel that acts as a heat-radiating antenna. The entire assembly sits on the ground outdoors at night; the ground acts as the warm side of the engine as the antenna channels the cold of space.
Under best-case operation, the system delivered about 400 milliwatts of electrical power per square meter and was used to drive a small motor. That is about 0.4% of the theoretical maximum efficiency. Depending on your requirements, that areal power density is somewhere between not useful and useful enough for small tasks such as charging a phone or powering a small fan to ventilate greenhouses, Figure 2.
Figure 2 Power conversion analysis and applications of radiative cooling engine. (A) Mechanical power plotted against temperature differential for various cold plate temperatures (TC). (Error bars show standard deviation.). Solid lines represent potential power corresponding to different quality engines denoted by F, the West number. (B) Voltage sweep across the attached DC motor shows maximum power point for extraction of mechanical to electrical power conversion at various engine temperature differentials (note: typical passive sign convention for electrical circuits is used). Solid red lines are quadratic fits of the measured data points (colored circles). Inset shows the dc motor mounted to the engine. (C) Bar graph denotes the remaining available mechanical power and the electrical power extracted (plus motor losses) when the DC motor is attached. (D) Axial fan blade attachment shown along with the hot-wire anemometer used to measure air speed. (E) Air speed in front of the fan is mapped for engine hot and cold plate temperatures of 29°C and 7°C, respectively. White circles indicate the measurement points. (F) Maximum air speed (black dots) and frequency (blue dots) as a function of engine temperature differential. Shaded gray regions show the range of air speeds necessary to circulate CO2 to promote plant growth inside greenhouses and the ASHRAE-recommended air speed for thermal comfort inside buildings.
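As a quick back-of-envelope check on the best-case figure quoted above (a sketch only, using the reported numbers), the 30 × 30 cm panel and roughly 400 mW/m² imply a few tens of milliwatts from the whole assembly:

```python
# Back-of-envelope output estimate from the reported figures.
panel_side_m = 0.30           # 30 x 30 cm panel
areal_power_w_per_m2 = 0.4    # ~400 mW per square meter, best case

area_m2 = panel_side_m ** 2
total_w = areal_power_w_per_m2 * area_m2
print(f"Panel area: {area_m2:.2f} m^2")
print(f"Estimated electrical output: {total_w * 1000:.0f} mW")  # ~36 mW
```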
Of course, there are other considerations, such as harvesting only at night (hmmm…maybe as a complement to solar cells?) and needing a clear sky with dry air for maximum performance. Also, the assembly is, by definition, fully exposed to rain, sun, and wind, which will likely shorten its operating life.
The instrumentation they used was also interesting, as was the thermal-physics analysis they did as part of the graduate-level project. The flywheel of the engine was not only an attention-getter; its inherent “chopping” action also made it easy to count revolutions using a basic light-source and photosensor arrangement. The analysis based on the thermal cycle of the Stirling engine concluded that its Carnot-cycle efficiency was about 13%.
This is all interesting, but where does it stand on the scale of viability and utility? On one side, it is a genuine source of mechanical and, in turn, electrical energy at very low cost. But that holds only under very limited conditions and with real-world constraints.
I think this form of harvesting gets attention because, as I noted upfront, it offers some usable energy at little apparent cost. Further, it’s very understandable, requires no exotic materials or components, and comes with the dramatic visual of the Stirling engine and its flywheel. It tells a good story that gets coverage and, likely, those follow-on grants. They have also filed a provisional patent related to the work; I’d like to see the claims they make.
But when you look closely at the numbers and reality becomes clearer, some of that glamour fades. Perhaps it could be used for a one-time storyline in a “MacGyver-like” TV show script where the hero improvises such a unit, uses it to charge a dead phone, and is able to call for help. Screenwriters out there, are you paying attention?
Until then, you can read their full, readable technical paper “Mechanical power generation using Earth’s ambient radiation,” published in the prestigious journal Science Advances from the American Association for the Advancement of Science; it was even featured on the cover (Figure 3), proving that the “free” aspects of this harvesting and its photo-friendly design really do get attention!

Figure 3 The harvesting innovation was considered sufficiently noteworthy to be featured as the cover and lead story of Science Advances.
What’s your view on the utility and viability of this approach? Do you see any strong, ongoing applications?
Related Content
- Nothing new about energy harvesting
- An energy-harvesting scheme that is nearly useless?
- Niche Energy Harvesting: Intriguing, Innovative, Probably Impractical
- Underwater Energy Harvesting with Data-Link Twist
- Clever harvesting scheme takes a deep dive, literally
- Tilting at MEMS Windmills for Energy Harvesting?
- Energy Harvesting Gets Really Personal
- Lightning as an energy harvesting source?
- What’s that?…A fuel cell that harvests energy from…dirt?
References
- Applied Physics Letters, “Nighttime electric power generation at a density of 50 mW/m2 via radiative cooling of a photovoltaic cell”
- Nature Photonics, “Direct observation of the violation of Kirchhoff’s law of thermal radiation”
Ignoring the regulator’s reference redux

Stephen Woodward’s “Ignoring the regulator’s reference” Design Idea (DI) (see Figure 1) is an excellent, working example of how to include a circuit in the feedback loop of an op amp to stabilize the circuit’s operating point. The same approach has appeared previously in “Improve the accuracy of programmable LM317 and LM337-based power sources” and numerous other places [1][2][3]. I’ll refer to his DI as “the DI” in subsequent text.
Figure 1 The DI’s Figure 1 schematic has been redrawn to emphasize the positioning of the U1 regulator in the A1 op amp’s feedback loop. The Vdac signal controls U1 while ignoring its internal reference voltage.
A few minor tweaks optimize this circuit’s dynamic performance and leave the design equations and comments in the DI unchanged. Let’s consider the case in which U1’s reference voltage is 0.6 V, Vdac varies from 0 to 3 V, and Vo varies from 5 to 0 V.
The DI tells us that in this case, R1a is not populated and that R1b is 150k. It also mentions driving Figure 1’s Vdac from the DACout signal of Figure 2, also found in “A nice, simple, and reasonably accurate PWM-driven 16-bit DAC.”

Figure 2 Each PWM input is an 8-bit DAC. VREF should be at least 3.0 V to support the SN74AC04 output resistances calculable from its datasheet. Ca and C1 – C3 are COG/NPO.
The Figure 2 PWMs could produce a large step change, causing DACout and therefore Vdac to quickly change from 0 to 3 V.
Figure 3 shows how Vo and the output of A1 react to this while driving a hypothetical U1, which is capable of producing an anomaly-free [4] 0-volt output.

Figure 3 Vo and A1’s output from Figure 1 react to a step change in Vdac.
Even though Vo eventually does what it is supposed to, there are several things not to like about these waveforms. Vo exhibits an overshoot and would manifest an undershoot if it didn’t clip at the negative rail (ground). The output of A1 also exhibits clipping and overshooting. Why are these things happening?
The answer is that the current flowing through R5 also flows through R3, causing an immediate change in the output voltage of A1. That change causes a proportional current to flow through R4. However, the presence of C2 prevents an immediate change in Vo and delays compensatory feedback from arriving at A1’s non-inverting input. How can this delay be avoided?
Shorting out R3 makes matters worse. The solution is to remove C2, speeding up the ameliorative feedback. Figure 4 shows the results.

Figure 4 With C2 eliminated, so are the clipping and the over- and undershoots. The A1 output moves only a few millivolts because of the large DC gain of the regulator, and because it is no longer necessary to charge C2 through R4 in response to an input change.
Vo now settles to ½ LSbit of a 16-bit source in 2.5 ms. Changing C3 to 510 pF (10% COG/NPO) reduces that time to 1.4 ms. Smaller values of C3 provide little further advantage.
The Vo-to-VSENSE feedback becomes mostly resistive above 0.159 / (R · C3) Hz, where:
R = R3 + R5 · R1a / (R5 + R1a)
In this case, that’s 1600 Hz, well below the unity gain frequency of pretty much any regulator, and so there should be no stability issues for the overall circuit. Note that A1’s output remains almost exactly equal to the regulator’s reference voltage. This, and the freedom to choose the R5/R1a and R2/R1b ratios, leaves open the option of using an op amp whose inputs and output needn’t approach its positive supply rail.
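For readers who want to plug in their own component values, here is a minimal sketch of the corner-frequency expression above. The resistor values shown are placeholders chosen only to land near the ~1600 Hz quoted in the text; they are not the DI’s actual values.

```python
# Corner frequency of the Vo-to-VSENSE feedback, per the expression above.
# Resistor values below are hypothetical placeholders.

def parallel(a, b):
    """Parallel combination of two resistances."""
    return a * b / (a + b)

R3 = 100e3      # ohms (assumed)
R5 = 200e3      # ohms (assumed)
R1a = 200e3     # ohms (assumed)
C3 = 510e-12    # farads (510 pF, as in the text)

R = R3 + parallel(R5, R1a)
f_corner = 0.159 / (R * C3)   # 0.159 ~= 1/(2*pi)
print(f"Feedback becomes mostly resistive above ~{f_corner:.0f} Hz")
```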
The (original) DI is a solid design that obtains some dynamic performance benefits from reducing the value of one capacitor and eliminating another.
Related Content
- Ignoring the regulator’s reference
- Improve the accuracy of programmable LM317 and LM337-based power sources
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Enabling a variable output regulator to produce 0 volts? Caveat, designer!
References
- https://en.wikipedia.org/wiki/Current_mirror#Feedback-assisted_current_mirror
- https://www.onsemi.com/pdf/datasheet/sa571-d.pdf see section on compandor
- https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/14/CircuitCookbook_2D00_OpAmps.pdf see triangle wave generator, page 90
- Enabling a variable output regulator to produce 0 volts? Caveat, designer!
Active two-way current mirror

EDN Design Ideas (DI) published a design of mine in May of 2025 for a passive two-way current mirror topology that, in analogy to optical two-way mirrors, can reflect or transmit.
That design comprises just two BJTs and one diode. But while its simplicity is nice, its symmetry might not be. That is to say, it might not be precise enough for some applications.
Fortunately, as often happens when the precision of an analog circuit falls short and the required performance can’t be compromised, a fix can consist of adding an RRIO op amp. Then, if we substitute two accurately matched current-sensing resistors and a single MOSFET for the BJTs, the result is the active two-way current mirror (ATWCM) shown in Figure 1.
Figure 1 The active two-way current sink/source mirror. The input current source is mirrored as a sink current when D1 is forward biased, and transmitted as a source current when D1 is reverse biased.
Figure 2 shows how the ATWCM operates when D1 is forward-biased, placing it in mirror mode.

Figure 2 ATWCM in mirror mode, I1 sink current generates Vr, forcing A1 to coax Q1 to mirror I2 = I1.
The operation of the ATWCM in mirror mode couldn’t be more straightforward. Vr = I1R wired to A1’s noninverting input forces it to drive Q1 to conduct I2 such that I2R = I1R.
Therefore, if the resistors are equal, A1’s accuracy-limiting parameters (offset voltage, gain-bandwidth, bias and offset currents, etc.) are adequately small, and Q1 does not saturate, I1 = I2 just as precisely as you like.
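To make the resistor-matching point concrete, here is a small illustrative calculation (not from the DI) of the ideal-op-amp relationship described above, generalized to slightly mismatched sense resistors:

```python
# Ideal-op-amp view of the ATWCM in mirror mode: A1 forces I2*Rb = I1*Ra,
# so any resistor mismatch maps directly into mirror error.
# Values are illustrative only.

I1 = 1.0e-3      # input (sink) current, amps
Ra = 100.0       # sense resistor on the I1 side, ohms
Rb = 100.1       # sense resistor on the I2 side, 0.1% high

I2 = I1 * Ra / Rb
error_pct = (I2 - I1) / I1 * 100
print(f"I2 = {I2 * 1e3:.4f} mA  (mirror error {error_pct:+.2f} %)")
```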
Okay, so I lied. Actually, the operation of the ATWCM in transmission mode is even simpler, as Figure 3 shows.

Figure 3 ATWCM in transmission mode. A reverse-biased D1 means I1 has nowhere to go except through the resistors and (saturated and inverted) Q1, where it is transmitted back out as I2.
I1 flowing through the 2R net resistance forces A1 to rail positive, saturating Q1 and providing a path back to the I2 pin. Since Q1 is biased inverted, its body diode will close the circuit from I1 to I2 until A1 takes over. A1 has nothing to do but act as a comparator.
Flip D1 and substitute a PFET for Q1, and of course, a source/sink will result, shown in Figure 4.

Figure 4 Source/sink two-way mirror with D1 flipped to the opposite polarity and Q1 replaced with a PFET.
Figure 5 shows the circuit in Figure 4 running a symmetrical rail-to-rail tri-wave and square-wave output multivibrator.

Figure 5 Accurately symmetrical tri-wave and square-wave result from inherent A1Q2 two-way mirror symmetry.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A two-way mirror—current mirror that is
- Active current mirror
- A current mirror reduces Early effect
- A two-way Wilson current mirror
Aiding drone navigation with crystal sensing

Designers are looking to reduce the cost of drone systems for a wide range of applications but still need to provide accurate positioning data. This, however, is not as easy as it might appear.
There are several satellite positioning systems, from the U.S.-backed GPS and European Galileo to NavIC in India and Beidou in China, providing data down to the meter. However, these need to be augmented by an inertial measurement unit (IMU) to provide the more accurate positioning data that is vital.

Figure 1 An IMU is vital for the precision control of the drone and peripherals like gimbal that keeps the camera steady. Source: Epson
An IMU is typically a sensor that can measure movement in six directions, along with an accelerometer to detect the amount of movement. The data is then combined with the satellite data and other data from the drone system by the developer of an inertial measurement system (IMS), using custom algorithms that often include machine learning.
The IMU is vital for the precision control of the drone and peripherals such as the gimbal that keeps the camera steady, providing accurate positioning data and compensating for the vibration of the drone. This stability can be implemented in a number of ways with a variety of sensors, but providing accurate information with low noise and high stability for as long as possible has often meant the sensor is expensive with high power consumption.
This is increasingly important for medium altitude long endurance (MALE) drones. These aircraft are designed for long flights at altitudes of between 10,000 and 30,000 feet, and can stay airborne for extended periods, sometimes over 24 hours. They are commonly used for military surveillance, intelligence gathering, and reconnaissance missions through wide coverage.
These MALE drones need a camera system that is reliable and stable in operation across a wide range of temperatures, providing accurate tagging of the position of any data captured.
One way to deliver a highly accurate IMU with lower cost is to use a piezoelectric quartz crystal. This is well established technology where an oscillating field is applied across the crystal and changes in motion are picked up with differential contacts across the crystal.
For a highly stable IMU for a MALE drone, three crystals are used, one for each axis, stimulated at different frequencies in the kilohertz range to avoid crosstalk. The differential output cancels out noise in the crystal and the effect of vibrations.
Precision engineering of piezoelectric crystals for high-stability IMUs
Using a crystal method provides data with low noise, high stability, and low variability. The highly linear response of the piezoelectric crystal enables high-precision measurement of various kinds of movement over a wide range from slow to fast, allowing the IMU to be used in a broad array of applications.
An end-to-end development process allows the design of each crystal to be optimized for the frequencies used for the navigation application along with the differential contacts. These are all optimized with the packaging and assembly to provide the highly linear performance that remains stable over the lifetime of the sensor.
The process draws on 25 years of experience with wet-etch lithography for the sensors, covered by dozens of patents. That produces yields in the high nineties, with average bias variation down to 0.5% from unit to unit.
An initial cut angle on the quartz crystal achieves the frequency balance for the wafer, then the wet etch lithography is applied to the wafer to create a four-point suspended cantilever structure that is 2-mm long. Indentations are etched into the structure for the wire bonds to the outside world.
The four-point structure is a double tuning fork with detection tines and two larger drive tines in the center. The differential output cancels out spurious noise or other signals.
This is simpler to make than micromachined MEMS structures and provides more long-term stability and less variability across the devices.
The differential structure and low crosstalk allow three devices to be mounted closely together without interfering with each other, which helps to reduce the size of the IMU. A low pass filter helps to reduce any risk of crosstalk.
The three-axis crystal sensor is then combined with an accelerometer to form the six-axis IMU. For the MALE drone gimbal applications, this accelerometer must have a high dynamic range to handle the speed and vibration effects of operation in the air. The linearity advantage of using a piezoelectric crystal provides accuracy for sensing the rotation of the sensor and does not degrade with higher speeds.

Figure 2 Piezoelectric crystals bolster precision and stability in IMUs. Source: Epson
This commercial accelerometer is optimized to provide the higher dynamic range and sits alongside a low power microcontroller and temperature sensors, which are not common in low-cost IMUs currently used by drone makers.
The microcontroller technology has been developed for industrial sensors over many years and reduces the power consumption of peripherals while maintaining high performance.
The microcontroller is used to provide several types of compensation, including temperature and aging, and so provides a simple, stable, and high-quality output for the IMU maker. Quartz also provides very predictable operation across a wide temperature range from -40 ⁰C to +85 ⁰C, so the compensation on the microcontroller is sufficient and more compensation is not required in the IMU, reducing the compute requirements.
All of this is also vital for the calibration procedure. Ensuring that the IMU can be easily calibrated is key to keeping the cost down and comes from the inherent stability of the crystal.
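As a purely illustrative sketch, and not Epson’s actual implementation, on-board compensation of this kind often amounts to applying a stored polynomial correction to the raw rate output against a temperature reading; the coefficients below are hypothetical:

```python
# Generic polynomial temperature compensation, shown only as an illustration;
# the coefficients and model form are hypothetical, not Epson's.

def compensate(raw_rate_dps: float, temp_c: float,
               coeffs=(0.02, -0.001, 1e-5)) -> float:
    """Subtract a temperature-dependent bias from the raw gyro rate."""
    c0, c1, c2 = coeffs                          # bias model: c0 + c1*T + c2*T^2
    bias = c0 + c1 * temp_c + c2 * temp_c ** 2
    return raw_rate_dps - bias

print(compensate(raw_rate_dps=0.05, temp_c=-40.0))   # corrected rate at -40 C
print(compensate(raw_rate_dps=0.05, temp_c=+85.0))   # corrected rate at +85 C
```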
Calibration-safe mounting
The mounting technology is also key for the calibration and stability of the sensor. A part mounted to a board using surface-mount technology (SMT) passes through a reflow oven and is exposed to high temperatures that can disrupt the calibration and alter the lifetime of the part in unexpected ways.
Instead, a module with a connector is used, so the 1-in (25 x 25 x 12 mm) part can be soldered to the printed circuit board (PCB). This avoids the need to use reflow assembly for surface-mount devices, where the PCB passes through an oven that can upset the calibration of the sensor.
Space-grade IMU design
A higher performance variant of the IMU has been developed for space applications. Alongside the quartz crystal sensor, a higher performance accelerometer developed in-house is used in the IMU. The quartz sensor is inherently impervious to radiation in low and medium earth orbits and is coupled with a microcontroller that handles the temperature compensation, a key factor for operating in orbits that vary between the cold of the night and the heat of the sun.
The sensor is mounted in a hermetically sealed ceramic package that is backfilled with helium to provide higher levels of sensitivity and reliability than the earth-bound version. This makes the quartz-based sensor suitable for a wide range of space applications.
Next-generation IMU development
The next generation of etch technology being explored now promises to enable a noise level 10 times lower than today with improved temperature stability. These process improvements enable cleaner edges on the cantilever structure to enhance the overall stability of the sensor.
Achieving precise and reliable drone positioning requires the integration of advanced IMUs with satellite data. The use of piezoelectric quartz crystals in IMUs for drone systems offers significant benefits, including low noise, high stability, and reduced costs, while commercial accelerometers and optimized microcontrollers further enhance performance and minimize power consumption.
Mounting and calibration procedures ensure long-term accuracy and reliability to provide stable and power-efficient control for a broad range of systems. All of this is possible through the end-to-end expertise in developing quartz crystals, and designing and implementing the sensor devices, from the etch technology to the mounting capabilities.
David Gaber is group product manager at Epson.
Related Content
- Exploring ceramic resonators and filters
- Drone design: An electronics designer’s point of view
- How to design an ESC module for drone motor control
- Keep your drone flying high with the right circuit protection design
- ST Launches AI-Enabled IMU for Activity Tracking and High-Impact Sensing
Tuneful track-tracing

Another day, another dodgy device. This time, it was the continuity beeper on my second-best DMM. Being bored with just open/short indications, I pondered making something a little more informative.
Perhaps it could have an input stage to amplify the voltage, if any, across current-driven probes, followed by a voltage-controlled tone generator to indicate its magnitude, and thus the probed resistance. Easy! . . . or maybe not, if we want to do it right.
Figure 1 shows the (more or less) final result, which uses a carefully-tweaked amplifying stage feeding a pitch-linear VCO (PLVCO). It also senses when contact has been made, and so draws no power when inactive.
Most importantly, it produces a tone whose musical pitch is linearly related to the sensed resistance: you can hear the difference between fat power traces and long, thin signal ones while probing for continuity or shorts on a PCB without needing to look at a meter.
Figure 1 A power switch, an amplifying stage with some careful offsets, and a pitch-linear VCO driving an output transducer make a good continuity tester. The musical pitch of the tone produced is proportional to the resistance across the probe tips.
This is simpler than it initially looks, so let’s dismantle it. R1 feeds the test probes. If they are open-circuited, p-MOSFET Q1 will be held off, cutting the circuit’s power (ignoring <10 nA leakage).
Any current flowing through the probes will bring Q1.G low to turn it on, powering the main circuit. That also turns Q2 on to couple the probe voltage to A1a.IN+ via R2. Without Q2, A1a’s input protection diodes would draw current when power was switched off.
R1 is shown as 43k for an indication span of 0 to ~24 Ω, or 24 semitones. Other values will change the range, so, for example, 4k3 will indicate up to 2.4 Ω with 0.1-Ω semitones. Adding a switch gave both ranges. (The actual span is up to ~30 Ω—or 3.0 Ω—but accuracy suffers.) Any other values can be used for different scales; the probe current will, of course, change.
A1a amplifies the probe voltage by 1001-ish, determined by R3 and R4. We are working right down to 0 V, which can be tricky. R5 offsets A2a.IN- by ~5 mV, which is more than the MCP6002’s quoted maximum input offset of 3.5 mV. R2 and R6–8 add a slightly greater bias to A1a.IN+ that both nulls out any offset and sets the operating point. This scheme may avert the need for a negative rail in other applications.
Tuning the tones
The A1b section is yet another variant on my basic pitch-linear VCO, the reset pulse being generated by Q4/C3/R13. (For more informative details of the circuit’s general operation, see the original Design Idea.) The ’scope traces in Figure 2 should clarify matters.

Figure 2 Waveforms within the circuit show its operation while probing different resistances.
This type of PLVCO works best with a control voltage centered between the supply rails and swinging by ±20% about that datum, giving a bipolar range of ~±1 octave. Here, we need unipolar operation, starting around that -20% lowest-frequency point.
Therefore, 0 Ω on the input must give ~0.3 Vcc to generate a ~250 Hz tone; 12 Ω, 0.5 Vcc (for ~500 Hz); and 24 Ω, ~0.7 Vcc (~1 kHz). Anything above ~0.8 Vcc will be out of range—and progressively less accurate—and must be ignored.
The output is now a tone whose pitch corresponds to the resistance across the probes, scaled as one semitone per ohm and spanning two octaves for a 24 Ω range (if R1 is 43k).
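Since the scaling is one semitone per ohm starting from roughly 250 Hz at 0 Ω, the resistance-to-pitch relationship works out to a simple exponential. Here is a small sketch using the figures quoted above for the 43k range:

```python
# Resistance-to-pitch mapping for the 43k range: one semitone per ohm,
# ~250 Hz at 0 ohms, two octaves over 24 ohms (figures from the text).

F0_HZ = 250.0            # approximate tone at 0 ohms
OHMS_PER_SEMITONE = 1.0

def tone_hz(r_ohms: float) -> float:
    semitones = r_ohms / OHMS_PER_SEMITONE
    return F0_HZ * 2 ** (semitones / 12)   # 12 semitones per octave

for r in (0, 12, 24, 30):
    print(f"{r:>2} ohms -> {tone_hz(r):6.0f} Hz")
# 0 -> 250 Hz, 12 -> 500 Hz, 24 -> 1000 Hz; the real circuit becomes
# progressively less accurate above 24 ohms and tops out near ~1.6 kHz.
```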
The modified exponential ramp on C2 is now sliced by A2b, using a suitable fraction of the control voltage as a reference, to give a “square” wave at its output—truly square at one point only, but it sounds OK, and this approach keeps the circuit simple. A2a inverts A2b’s output, so they form a simple balanced (or bridge-tied load) driver for an earpiece. (There are problems here, but they can wait.)
R9 and R10 reduce A1a’s output a little as high resistances at the input cause it to saturate, which would otherwise stop A1b’s oscillation. This scheme means that out-of-range resistances still produce an audio output, which is maxed out at ~1.6 kHz, or ~30 Ω. Depending on Q1’s threshold voltage, several tens of kΩs across the probes are enough to switch it on—a tad outside our indication range.
Loud is allowed
Now for that earpiece, and those potential problems. Figure 1’s circuit worked well enough with an old but sensitive ~250-Ω balanced-armature mic/’phone but was fairly hopeless when trying to drive (mostly ~32 Ω) earphones or speakers.
For decent volume, try Figure 3, which is beyond crude, but functional. Note the separate battery, whose use avoids excessive drain on the main one while isolating the main circuit from the speaker’s highish currents.
Again, no power is drawn when the unit is inactive. (Reused batteries—strictly, cells—from disposed-of vapes are often still half-full, and great for this sort of thing! And free.) A2a is now spare . . .

Figure 3 A simple, if rather nasty, way of driving a loudspeaker.
Setting-up is necessary, because offsets are unpredictable, but simple. With a 12-Ω resistance across the probes, adjust R7 to give Vcc/2 at A1b.5. Done!
Comments on the components
The MCP6002 dual op-amp is cheap and adequate. (The ’6022 has a much lower offset but a far higher price, as well as drawing more current. “Zero-offset” devices are yet more expensive, and trimmer R7 would probably still be needed.)
Q3, and especially Q1, must have a low RDS(on) and VGS(th); my usual standby ZVP3306As failed on both counts, though ZVN3306As worked well for Q2/4/5. (You probably have your own favorite MOSFETs and low-voltage RRIO op-amps.) To alter the frequency range, change C2. Nothing else is critical.
As noted above, R1 sets the unit’s sensitivity and can be scaled to suit without affecting anything else. With 43k, the probe current is ~70 µA, which should avoid any possible damage to components on a board-under-test.
(Some ICs’ protection diodes are rated at a hopefully-conservative 100 µA, though most should handle at least 10 mA.) R2 helps guard against external voltage insults, as well as being part of the biasing network.
And that newly-spare half of A2? We can use it to make an active clamp (thanks, Bob Dobkin) to limit the swing from A1a rather than just attenuating it. R1 must be increased—51k instead of 43k—because we no longer need extra gain.
Figure 4 shows the circuit. When A2a’s inverting input tries to rise higher than its non-inverting one—the reference point—D1 clamps it to that reference voltage.

Figure 4 An active clamp is a better way of limiting the maximum control voltage fed to the PLVCO.
The slight frequency changes with supply voltage can be ignored; a 20°C temperature rise gave an upward shift of about a semitone. Shame: with some careful tuning, this could otherwise also have done duty as a tuning fork.
“Pitch-perfect” would be an overstatement, but just like the original PLVCO, this can be used to play tunes! A length of suitable resistance wire stretched between a couple of drawing pins should be a good start . . . now, where’s that half-dead wire-wound pot? Trying to pick out a seasonal “Jingle Bells” could keep me amused for hours (and leave the neighbors enraged for weeks).
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Power amplifiers that oscillate— Part 2: A crafty conclusion.
- Revealing the infrasonic underworld cheaply, Part 2
- A pitch-linear VCO, part 2: taking it further
- 5-V ovens (some assembly required)—part 2
Exploring ceramic resonators and filters

Ceramic resonators and filters occupy a practical middle ground in frequency control and signal conditioning, offering designers cost-effective alternatives to quartz crystals and LC networks. Built on piezoelectric ceramics, these devices provide stable oscillation and selective filtering across a wide range of applications—from timing circuits in consumer electronics to noise suppression in RF designs.
Their appeal lies in balancing performance with simplicity: easy integration, modest accuracy, and reliable operation where ultimate precision is not required.
Getting started with ceramic resonators
Ceramic resonators offer an attractive alternative to quartz crystals for stabilizing oscillation frequencies in many applications. Compared with quartz devices, their ease of mass production, low cost, mechanical ruggedness, and compact size often outweigh the reduced precision in frequency control.
In addition, ceramic resonators are better suited to handle fluctuations in external circuitry or supply voltage. By relying on mechanical resonance, they deliver stable oscillation without adjustment. These characteristics also enable faster rise times and performance that remains independent of drive-level considerations.
Recall that ceramic resonators utilize the mechanical resonance of piezoelectric ceramics. Quartz crystals remain the most familiar resonating devices, while RC and LC circuits are widely used to produce electrical resonance in oscillating circuits. Unlike RC or LC networks, ceramic resonators rely on mechanical resonance, making them largely unaffected by external circuitry or supply-voltage fluctuations.
As a result, highly stable oscillation circuits can be achieved without adjustment. The figure below shows two types of commonly available ceramic resonators.

Figure 1 A mix of common 2-pin and 3-pin ceramic resonators demonstrates their typical package styles. Source: Author
Ceramic resonators are available in both 2-pin and 3-pin versions. The 2-pin type requires external load capacitors for proper oscillation, whereas the 3-pin type incorporates these capacitors internally, simplifying circuit design and reducing component count. Both versions provide stable frequency control, with the choice guided by board space, cost, and design convenience.

Figure 2 Here are the standard circuit symbols for 2-pin and 3-pin ceramic resonators. Source: Author
Turning to basic oscillator circuits, these can generally be grouped into three categories: positive feedback, negative-resistance elements, and delay of transfer time or phase. For ceramic resonators, quartz crystal resonators, and LC oscillators, positive feedback is the preferred circuit approach.
And the most common oscillator circuit for a ceramic resonator is the Colpitts configuration. Circuit design details vary with the application and the IC employed. Increasingly, oscillation circuits are implemented with digital ICs, often using an inverter gate. A typical practical example (455 kHz) with a CMOS inverter is shown below.

Figure 3 A practical oscillator circuit employing a CMOS inverter and ceramic resonator shows its typical configuration. Source: Author
In the above schematic, IC1A functions as an inverting amplifier for the oscillating circuit, while IC1B shapes the waveform and buffers the output. The feedback resistor R1 provides negative feedback around the inverter, ensuring oscillation starts when power is applied.
If R1 is too large and the input inverter’s insulation resistance is low, oscillation may stop due to loss of loop gain. Excessive R1 can also introduce noise from other circuits, while too small a value reduces loop gain.
The load capacitors C1 and C2 provide a 180° phase lag. Their values must be chosen carefully based on application, integrated circuit, and frequency. Undervalued capacitors increase loop gain at high frequencies, raising the risk of spurious oscillation. Since oscillation frequency is influenced by loading capacitance, caution is required when tight frequency tolerance is needed.
Note that the damping resistor R2, sometimes omitted, loosens the coupling between the inverter and feedback circuit, reducing the load on the inverter output. It also stabilizes the feedback phase and limits high-frequency gain, helping prevent spurious oscillation.
Having introduced the basics of ceramic resonators (just another surface scratch), we now shift focus to ceramic filters. The deeper fundamentals of resonator operation can be addressed later or explored through further discussion; for now, the emphasis turns to filter applications.
Ceramic filters and their practical applications
A filter is an electrical component designed to pass or block specific frequencies. Filters are classified by their structures and the materials used. A ceramic filter employs piezoelectric ceramics as both an electromechanical transducer and a mechanical resonator, combining electrical and mechanical systems within a single device to achieve its characteristic response.
Like other filters, ceramic filters possess unique traits that distinguish them from alternatives and make them valuable for targeted applications. They are typically realized in bandpass configurations or as duplexers, but not as broadband low-pass or high-pass filters, since ceramic resonators are inherently narrowband.
In practice, ceramic filters are widely used in IF and RF bandpass applications for radio receivers and transmitters. These RF and IF ceramic filters are low-cost, easy to implement, and well-suited for many designs where the precision and performance of a crystal filter are unnecessary.

Figure 4 A mix of ceramic filters presents examples of their available packages. Source: Author
A quick theory talk: A 455-kHz ceramic filter is essentially a bandpass filter with a sharp frequency response centered at 455 kHz. In theory, attenuation at the center frequency is 0 dB, though in practice insertion loss is typically 2–6 dB. As the input frequency shifts away from 455 kHz, attenuation rises steeply.
Depending on the filter grade, the effective passband spans from about 455 kHz ± 2 kHz for narrow designs and up to ±15 kHz for wider types (in theory often cited as ±10 kHz). Signals outside this range are strongly suppressed, with stopband attenuation reaching 40 dB or more at ±100 kHz.
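As a rough numeric illustration of that response, here is a small sketch that classifies a frequency against the ballpark figures above; it uses the wide-grade passband and is not tied to any specific part’s datasheet:

```python
# Rough classifier for a 455-kHz ceramic bandpass filter, using the
# ballpark figures quoted above (not a specific part's datasheet).

F_CENTER_KHZ = 455.0
PASSBAND_HALF_WIDTH_KHZ = 10.0    # "wide" grade; narrow grades are ~2 kHz
STOPBAND_OFFSET_KHZ = 100.0       # >= 40 dB attenuation out here

def describe(f_khz: float) -> str:
    offset = abs(f_khz - F_CENTER_KHZ)
    if offset <= PASSBAND_HALF_WIDTH_KHZ:
        return "passband (insertion loss roughly 2-6 dB)"
    if offset >= STOPBAND_OFFSET_KHZ:
        return "stopband (attenuation 40 dB or more)"
    return "transition region (attenuation rising steeply)"

for f in (455.0, 460.0, 480.0, 555.0):
    print(f"{f:6.1f} kHz: {describe(f)}")
```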
On a related note, ceramic discriminators function by converting frequency variations into voltage signals, which are then processed into audio, a detection method widely used in FM receivers. FM wave detection is achieved through circuits where the relationship between frequency and output voltage is linear. Common FM detection methods include ratio detection, Foster-Seeley detection, quadrature detection, and differential peak detection.
Now I recall the CDB450C24, a ceramic discriminator designed for FM detection at 450 kHz. Employing piezoelectric ceramics, it provides a stable center frequency and linear frequency-to-voltage conversion, making it well-suited for quadrature detection circuits such as those built with the nostalgic Toshiba TA31136F FM IF detector IC for cordless phones. Compact and cost‑effective, the CDB450C24 exemplifies the role of ceramic discriminators in reliable FM audio detection.

Figure 5 TA31136F IC application circuit shows the practical role of the CDB450C24. Source: Toshiba
As a loosely connected observation, the choice of 450 kHz for ceramic discriminators reflected receiver design practices of the time. AM radios had long standardized on 455 kHz as their intermediate frequency (IF), while FM receivers typically used 10.7 MHz for selectivity.
To achieve cost-effective FM detection, however, many designs employed a secondary IF stage around 450 kHz, where ceramic discriminators could provide stable, narrowband frequency-to-voltage conversion.
This dual-IF approach balanced the high-frequency selectivity of 10.7 MHz with the practical detection capabilities of 450 kHz, making ceramic discriminators like the CDB450C24 a natural fit for FM audio demodulation.
Thus, ceramic filters remain vital for compact, reliable frequency selection, valued for their stability and low cost. Multipole ceramic filters extend this role by combining multiple resonators to sharpen selectivity and steepen attenuation slopes, their real purpose being to separate closely spaced channels and suppress adjacent interference.
Together, they illustrate how ceramic technology continues to balance simplicity with performance across consumer and professional communication systems.
Closing thoughts
Time for a quick pause—but before you step away, consider how ceramic resonators and filters continue to anchor reliable frequency control and signal shaping across modern designs. Their balance of simplicity, cost-effectiveness, and performance makes them a quiet force behind countless applications.
Share your own experiences with these components and keep an eye out for more exploration into the fundamentals that drive today’s electronics.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- SAW-filter lead times stretching
- Murata banks on ceramic technology
- Ceramic packages stage a comeback
- Multilayer ceramics ups performance
- SAW filters and resonators provide cheap and effective frequency control
Beats’ Studio Buds Plus: Tangible improvements, not just marketing fluff

“Plus” tacked onto a product name typically translates into a product-generation extension with little (if any) tangible enhancement. Beats has notably bucked that trend.
I’ve decided that I really like transparent devices:

Not only do they look cool (at least in my opinion; yours might differ), since I can see inside them, I’m able to do “pseudo teardowns” without needing to actually take them apart (inevitably destroying them in the process). Therein my interest in the May 2023-unveiled “Plus” spin of Apple subsidiary Beats’ original Studio Buds earbuds, introduced two years earlier:

As you can see, these are translucent; it’d be a stretch to also call them transparent. Still, I can discern a semblance of what’s inside both the earbuds and their companion storage-and-charging case. And in combination with Beats’ spec-improvement claims:
along with a thorough and otherwise informative teardown video I found of first-gen units:
I think I’ve got a pretty good idea of what’s inside these.
Cool (again, IMHO) looks and an editorial-coverage angle aside, why’d I buy them? After all, I already owned a first-generation Studio Buds set (at left in the following shots, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, and which you’ll also see in other photos in this piece):


Reviewers’ assertions of significant improvements in active noise cancellation (ANC) and battery life with the second-generation version were admittedly tempting:
and like their forebears (and unlike Apple’s own branded earbuds, that is unless you hack ’em), they’re multi-platform compatible versus Apple ecosystem-only, key for this Android guy:
That all said, I confess that what sealed the deal for me was the 10%-off-$84.95 promo price I came across on Woot back in mid-August. Stack that up against the $169.99 MSRP and you can see why I bit on the bait…I actually sprung for two sets, in fact.
An expanded tip suite
Here’s an official unboxing video:
Followed by my own still shots of the process:









Beats added a fourth tip-size option—extra-small—this time around, and the software utility now supports a “fit test” mode to help determine which tip option is optimum for your ears:

Upon pairing them with my Google Pixel 7 smartphone, I was immediately alerted to an available firmware update:



However, although the earbuds themselves were still nearly fully charged, lengthy time spent in the box (on the shelf at the retailer warehouse) had nearly drained the cells in the case. I needed to recharge the latter before I was allowed to proceed (wise move, Beats!):



With the case (and the buds within it) now fully charged, the update completed successfully:




The first- and second-generation cases differ in weight by 1 gram (48 vs 49), according to my kitchen scale:


With the second-generation earbuds set incrementing the total by another gram (58 vs 60):


In both cases, I suspect, the weight increment is associated with increased battery capacity. The aforementioned teardown video indicates that the cells in the first-generation case have a capacity of 400 mAh (1.52 Wh @ 3.8V). The frosty translucence in the second-generation design almost (but doesn’t quite) enable me to discern the battery cell markings inside:

But Apple conveniently stamped the capacity on the back this time: 600 mAh, matching the 50% increase statistic in Beats’ promotional verbiage:

The “button” cells in the earbuds themselves supposedly have a 16% higher capacity than those in the first-generation predecessors. Given that the originals, again per the teardown video, had the model name M1254S2, translating to a 3.7V operating voltage and 60 mAh capacity, I’m guessing that these are the same-dimension 70-mAh M1254S3 successors.
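A quick sanity check of the battery numbers quoted above (simple arithmetic only; the 70-mAh figure is my guess for the second-generation cell, as noted):

```python
# Sanity check of the quoted battery figures.
def wh(mah: float, volts: float) -> float:
    return mah / 1000 * volts

print(f"Gen-1 case energy: {wh(400, 3.8):.2f} Wh")          # ~1.52 Wh, as cited
print(f"Case capacity increase: {(600 - 400) / 400:.0%}")    # 50%, per Beats
print(f"Earbud cell increase:   {(70 - 60) / 60:.0%}")       # ~17%; Beats rounds to 16%
```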
Microphone upgrades
As for inherent output sound quality, I can discern no difference between the two generations:
A result with which Soundguys’ objective (vs my subjective) analysis concurs:

That said, I can happily confirm that the ability to discern music details in high ambient noise environments, not to mention to conduct discernible phone conversations (at both ends of the connection), is notably enhanced with the second-generation design. Beats claims that all three microphones are 3x larger this time around, a key factor in the improvement. Here (at bottom left in each case) are the first- and second-generation feedforward microphone port pairs:

Along with the ANC feedback mics alongside the two generations’ speaker ports:

The main “call” mics are alongside the touch-control switch in the “button” assembly still exposed when the buds are inserted in the wearer’s ears:


I’m guessing an integrated audio DSP upgrade was also a notable factor in the claimed “up to 1.6x” improved ANC (along with up to 2x enhanced transparency). The first-gen Studio Buds leveraged a Cirrus Logic CS47L66 (along with a MediaTek MT2821A to implement Bluetooth functionality); reader guesses as to what’s in use this time are welcome in the comments!
The outcome of these mic and algorithm upgrades? Over to Soundguys again for the results!

The final update is a bit of an enigma, at least to me. Beats has added what it claims are three acoustic vents to the design. Here’s an excerpt from a fuller writeup on the topic:
You’ve probably noticed how some wearables feel more comfortable than others. That’s where acoustic vents come in. They help equalize pressure, reducing that uncomfortable “plugged ear” sensation you might experience with earbuds or other in-ear devices. By doing this, they make your listening experience not only better but also more natural.
The thing is, though, Beats’ own associated image only shows two added vents:

And that’s all I can find, too:

So…
In closing, for your further translucency-blurring visual-inspection purposes, here are some additional images of the right-side earbud, this time standalone and from various positional perspectives:







including another one minus the tip:

and of both the top and bottom of the case:


“Plus” mid-life updates to products are typically little more than new colorway options, or (for smartphones) bigger displays and batteries, atop otherwise identical hardware. It’s nice to see Beats do a more substantive “Plus” upgrade with their latest Studio Buds. And the $75-and-change promo price was also very nice. Reader thoughts are as-always welcomed in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Apple’s Beats Powerbeats Pro: a repair attempt blow-by-blow
- Apple’s Beats Powerbeats Pro: No repair…a teardown, though!
- What’s inside: Beats Powerbeats2
- Apple set to acquire Beats headphone producer for $3.2 billion
The post Beats’ Studio Buds Plus: Tangible improvements, not just marketing fluff appeared first on EDN.
Tiny LCOS microdisplay drives next-gen smart glasses

Omnivision’s OP03021 liquid crystal on silicon (LCOS) panel integrates the display array, driver, and memory into a low-power, single-chip design. The full-color microdisplay delivers a resolution of 1632×1536 pixels at 90 Hz in a compact 0.26-in. optical format, enabling smart glasses to achieve higher resolution and a wider field of view.

The microdisplay features a 3.0-µm pixel pitch and operates with a 90-Hz field-sequential input using a MIPI C-PHY trio interface. Panel dimensions are just 7.986×25.3×2.116 mm, saving board space in wearables such as augmented reality (AR), extended reality (XR), and mixed-reality (MR) smart glasses and head-mounted displays.
The OP03021 is offered in a compact 30-pin FPCA package. Samples are available now, with mass production scheduled to begin in the first half of 2026. For more information, contact a sales representative here.
The post Tiny LCOS microdisplay drives next-gen smart glasses appeared first on EDN.
FMCW LiDAR delivers 4D point clouds

Voyant has announced the Helium family of fully solid-state 4D FMCW LiDAR sensors and modules for simultaneous depth and velocity measurement. Based on a proprietary silicon photonic chip, the platform provides scalable sensing and high-resolution point-cloud data.

Helium employs a dense 2D photonic focal plane array with integrated 2D on-chip beam steering, enabling fully electronic scanning. A 2D array of surface emitters implements FMCW operation in a compact, solid-state architecture with no moving parts.
Key advantages of Helium include:
- Configurable planar array resolution: 12,000–100,000 pixels
- FMCW operation with per-pixel radial velocity measurement
- Software-defined LiDAR enabling adaptive scan patterns and regions of interest
- Ultra-compact form factor: <150 g mass, <50 cm³ volume
Helium sensors and modules will be available in multiple resolution and range configurations, supporting fields of view ranging from wide (up to 180°) to narrow long-range optics.
Voyant is offering early access to Helium for collaborators to explore custom chip resolutions, FoVs, module configurations, multi-sensor fusion, and software-defined scanning. To participate or request more information, contact earlyaccess@voyantphotonics.com.
The post FMCW LiDAR delivers 4D point clouds appeared first on EDN.
Bipolar transistors cut conduction voltage

Diodes has expanded its series of automotive-compliant bipolar transistors with 12 NPN and PNP devices designed to achieve ultra-low VCE(sat). With a saturation voltage of just 17 mV at 1 A and on-resistance as low as 12 mΩ, the DXTN/P 78Q and 80Q series minimize conduction losses by up to 50% versus previous generations, enabling cooler operation and easier thermal management.

The transistors feature collector-emitter voltage ratings (BVCEO) of 30 V, 60 V, and 100 V, and can handle continuous currents up to 10 A (20 A peak), making them suitable for 12‑V, 24‑V, and 48‑V automotive systems. They can be used for gate driving MOSFETs and IGBTs, power line and load switching, low-dropout voltage regulation, DC/DC conversion, and driving motors, solenoids, relays, and actuators.
Rated for continuous operation up to +175°C and offering high ESD robustness (HBM 4 kV, CDM 1 kV), the devices ensure reliable performance in harsh automotive environments. Housed in a compact 3.3×3.3-mm PowerDI3333-8 package, they reduce PCB footprint by up to 75% versus SOT223, while a large underside heatsink delivers low thermal resistance of 4.2°C/W.
The DXTN/P 78Q series is priced from $0.19 to $0.21, while the DXTN/P 80Q series is priced from $0.20 to $0.22, both in 6000-piece quantities. Access product pages and datasheets here.
The post Bipolar transistors cut conduction voltage appeared first on EDN.
MLCC powers efficient xEV resonant circuits

Samsung Electro-Mechanics’ CL32C333JIV1PN# high-voltage MLCC is designed for use in CLLC resonant converters targeting xEV applications such as BEVs and PHEVs. The capacitor provides 33 nF at 1000 V in a compact 1210 (3.2×2.5 mm) package, leveraging a C0G dielectric for high stability.

Maintaining capacitance across –55°C to +125°C with minimal sensitivity to temperature and bias, the device is well suited for high-frequency resonant tanks where electrical consistency directly impacts efficiency and control margin. The surface-mount capacitor enables power electronics designers to reduce component count and footprint in high-voltage CLLC resonant converter designs without compromising reliability.
Alongside the CL32C333JIV1PN#, the company offers two additional 1210-size C0G capacitors. The CL32C103JXV3PN# provides 10 nF at 1250 V, while the CL32C223JIV3PN# provides 22 nF at 1000 V. All three devices are manufactured using proprietary fine-particle ceramic and electrode materials, combined with precision stacking processes, and are optimized for EV charging systems.
The CL32C333JIV1PN#, CL32C103JXV3PN#, and CL32C223JIV3PN# are now in mass production.
The post MLCC powers efficient xEV resonant circuits appeared first on EDN.
Dev kit brings satellite connectivity to IoT

Nordic Semiconductor’s nRF9151 SMA Development Kit (DK) helps engineers build cellular IoT, DECT NR+, and non-terrestrial network (NTN) applications. The kit’s onboard nRF9151 SiP module now features updated modem firmware that enables direct-to-satellite IoT connectivity, adding support for NB-IoT NTN in 3GPP Release 17. The firmware also supports terrestrial LTE-M and NB-IoT networks, along with GNSS.

By replacing internal antennas with SMA connectors, the development board allows direct connection to lab equipment or external antennas for precise RF characterization, power measurements, and field testing. Based on an Arduino Uno–compatible form factor, the board features four user-programmable LEDs, four user-programmable buttons, a Segger J-Link OB debugger, a UART interface via a VCOM port, and a USB connection for debugging, programming, and power.
To accelerate prototyping, the DK includes Taoglas antennas for LTE, NTN, and NR+, along with a Kyocera GNSS antenna. It also provides IoT SIM cards and trial data, enabling immediate terrestrial and satellite connectivity through Deutsche Telekom, Onomondo, and Monogoto.
The nRF9151 SMA DK is available now from Nordic’s distribution partners, including DigiKey, Braemac, and Rutronik. The alpha modem firmware can be downloaded free of charge from the product page linked below.
The post Dev kit brings satellite connectivity to IoT appeared first on EDN.
Electronic design with mechanical manufacturing in mind

Electronics design engineers spend substantial effort on schematics, simulation, and layout. Yet, a component’s long-term success also depends on how well its physical form aligns with downstream mechanical manufacturing processes.
When mechanical design for manufacturing (DFM) is treated as an afterthought, teams can face tooling changes, line stoppages, and field failures that consume the budget and schedule. Building mechanical constraints into design decisions from the outset helps ensure that a concept can transition smoothly from prototype to production without surprises.
The evolving electronic prototyping landscape
Traditional rigid breadboards and perfboards still have value, but they often fall short when a device must conform to the curved housings of wearable form factors. Engineers who prototype only on flat, rigid platforms may validate electrical behavior while missing mechanical interactions such as strain, connector access, and housing interfaces.
Researchers are responding with prototyping approaches that behave more like the eventual product. For example, the MIT researchers who developed the flexible breadboard FlexBoard tested the material by bending it 1,000 times and found it to be fully functional even after repeated deformation.
This bidirectional flexibility allowed the platform to wrap around curved surfaces. It also gave designers a more realistic way to evaluate electronics for wearables, robotics and embedded sensing, where hardware rarely follows a simple planar shape. As these flexible platforms mature, they encourage engineers to think of mechanical behavior not as a late-stage limitation but as a design parameter from the very first version.
Integrating mechanical processes in design
Once a prototype proves the concept, the conversation quickly shifts toward how each part will be manufactured at scale. At this stage, the schematic on paper must reconcile with press stroke limits, tool access, wall thickness, and fixturing. Designing components with specific processes in mind reduces the risk of discovering later that geometry cannot be produced within the budget or timeline.
Precision metal stamping
Metal stamping remains a core process for electrical contacts, terminals, EMI shields, and mini brackets. It excels when parts repeat across high volumes and require consistent form and dimensional control.
A key example is progressive stamping, in which a coil of metal advances through a die set, where multiple stations perform operations in rapid sequence. It strings steps together, so finished features emerge with high repeatability and narrow dimensional spread, making the process suitable for high-volume component manufacturing.
Early collaboration with stamping specialists is beneficial. Material thickness, bend radii, burr direction, and grain orientation all influence tool design and reliability. Features such as stress-relief notches or coined contact areas can often be integrated into the strip layout with little marginal cost once they are considered before the tool is built.
CNC machining
CNC machining often becomes the preferred option where only a few pieces are necessary or shapes are more complicated. It supports complex 3D forms, small production runs, and late-stage changes with fewer up-front tooling costs compared to stamping.
Machined aluminum or copper heatsinks, custom connector housings, and precision mounting blocks are common examples. Designers who plan for machining will benefit from consistent wall thicknesses, accessible tool paths, and tolerances that fit the machine’s capability.
Advanced materials for component durability
The manufacturing method is only part of the process. The base material choice can determine whether a design survives thermal cycles, vibrations, and electrostatic exposure over years of service. Recent work in advanced and responsive materials provides design teams with additional tools to manage these threats. Self-healing polymers and composites are notable examples.
Some of these materials incorporate conductive fillers that redirect electrostatic charge. By steering current away from a single microscopic region, the structure avoids excessive local stress and preserves its functionality for a longer period. For applications such as wearables and portable electronics, this behavior can support longer service intervals and a greater perceived quality.
Engineers are also evaluating high-temperature polymers, filled elastomers, and nanoengineered coatings for use in flexible and stretchable electronics. Each material brings trade-offs in cost, process compatibility, recyclability, and performance. Considering those alongside mechanical processes and board layout helps establish a coherent path from prototype through volume production.
The next generation of electronic products demands a perspective that merges circuit behavior with how parts will be formed, assembled, and protected in real-world environments. Flexible prototyping platforms, process-aware designs for stamping and machining, and careful selection of advanced materials all contribute to this mindset.
When mechanical manufacturing is considered from the get-go, design teams position their work to run reliably on production lines and in the hands of end users.
Ellie Gabel is a freelance writer and associate editor at Revolutionized.
Related Content
- A PCB Design Tool That Mechanical Engineers Will Love
- Design considerations for harsh environment embedded systems
- Mechanical Design Proving Beastly for EEs Who Deal in Bits & Bytes
The post Electronic design with mechanical manufacturing in mind appeared first on EDN.
The DiaBolical dB

Engineers and technicians who work with oscilloscopes are used to seeing waveforms that plot a voltage versus time. Almost all oscilloscopes these days include the Fast Fourier Transform (FFT) to view the acquired waveform in the frequency domain, similar to a spectrum analyzer.
In the frequency domain, the waveforms plot amplitude versus frequency. This view of the signal uses a different scaling. The default vertical scaling of the frequency domain is dBm, or decibels relative to one milliwatt, as shown in Figure 1.
Figure 1 An oscilloscope’s spectrum display (lower grid) uses default vertical units of dBm to display power versus frequency. (Source: Art Pini)
The FFT displays the signal’s frequency spectrum as either power or voltage versus frequency. The default dBm scale measures signal power; alternative units include voltage-based magnitude. In its various forms, the decibel has long confused well-trained technical professionals accustomed to the time domain. If dB is a mystery to you, this article covers the basics you need to know.
The dB was originally a measure of relative power in telephone systems. The unit of measure was named the Bel after Alexander Graham Bell. The decibel (dB) is one-tenth of a Bel and is more commonly used in practice. For electrical applications, the decibel is defined as:
dB = 10 log10 (P2/P1)
Where P1 and P2 are the two power levels being compared.
There are a few key points to note. The first is that the dB is a relative measurement; it measures the ratio of two power levels, P1 and P2, in this example. The second thing is that the dB scale is logarithmic. The log scale is non-linear, emphasizing low-amplitude signals and compressing higher-amplitude signals. This scaling is particularly useful in the frequency domain, where signals tend to exhibit large dynamic ranges.
Based on this definition, some common power ratios and their equivalent dB values are shown in Table 1.
| P2/P1 | dB  |
|-------|-----|
| 2:1   | 3   |
| 4:1   | 6   |
| 10:1  | 10  |
| 100:1 | 20  |
| 1:2   | -3  |
| 1:4   | -6  |
| 1:10  | -10 |
| 1:100 | -20 |
Table 1 Common power ratios and the equivalent decibel values. (Source: Art Pini)
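For readers who prefer executable definitions, the short Python sketch below applies the 10 log10 power-ratio formula and reproduces the rows of Table 1 (the exact values are 3.01 and 6.02 dB, conventionally rounded to 3 and 6). The helper names are mine, not part of any standard library.

```python
import math

def power_ratio_to_db(p2, p1):
    """Convert a power ratio P2/P1 to decibels: dB = 10*log10(P2/P1)."""
    return 10 * math.log10(p2 / p1)

def db_to_power_ratio(db):
    """Invert the definition: P2/P1 = 10^(dB/10)."""
    return 10 ** (db / 10)

# Reproduce a few rows of Table 1; the 3 and 6 dB entries are rounded values.
for ratio in (2, 4, 10, 100, 0.5, 0.1):
    print(f"P2/P1 = {ratio:>6}: {power_ratio_to_db(ratio, 1):6.2f} dB")
```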
The decibel can also compare root-power quantities, such as voltage. The definition of the decibel for voltage ratios, derived from the definition for power ratios, is:
dB = 10 log10[(V2²/R)/(V1²/R)]
= 10 log10(V2/V1)²
= 20 log10(V2/V1)
Where V1 and V2 are the two voltage levels being compared, and R is the terminating resistance.
This derivation uses the fact that an exponent inside a logarithm becomes a multiplier outside it. The variable R, the terminating resistance (usually 50 Ω), cancels in the math but can still affect decibel measurements when different resistance values are involved.
The voltage-based definition of dB yields the following dB values for these voltage ratios, as shown in Table 2.
| V2/V1 | dB  |
|-------|-----|
| 2:1   | 6   |
| 4:1   | 12  |
| 10:1  | 20  |
| 100:1 | 40  |
| 1:2   | -6  |
| 1:4   | -12 |
| 1:10  | -20 |
| 1:100 | -40 |
Table 2 Common voltage ratios and their equivalent decibel values. (Source: Art Pini)
Relative and absolute measurements
As we have seen, the decibel is a relative measure that compares two power or voltage levels. As such, it is perfect for characterizing transmission gain or loss and is used extensively in scattering (s) parameter measurements.
An absolute measurement can be made by referencing the measurement to a known quantity. The standard reference values in electronic applications are the milliwatt (dBm), the microvolt (dBµV), and the volt (dBV).
The decibel is used in various other applications, such as acoustics. The sound pressure level in acoustic applications is also measured in dB, and the standard reference is 20 microPascals (μPa).
Using dBm
Based on the definition of dB for power ratios and using 1 mW (0.001 Watt) as the reference, dBm is calculated as:
dBm = 10 log10 (P2/0.001)
Where P2 is the power of the signal being measured
Converting from measured power in dBm to power in watts uses the same equation in reverse.
P2 = 0.001 × 10^(dBm/10)
For example, the power level in watts (W) for the highest spectral peak is given by the first measure table entry in Figure 1: -5.8 dBm at 5 MHz. The power, in watts, is calculated as follows:
P2 = 0.001 × 10^(−5.8/10)
P2 = 2.63 × 10⁻⁴ W = 0.263 mW
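If a quick sanity check helps, the same conversion takes a few lines of Python (the helper names are mine):

```python
import math

def dbm_to_watts(dbm):
    """P (W) = 0.001 * 10^(dBm/10), referenced to 1 mW."""
    return 0.001 * 10 ** (dbm / 10)

def watts_to_dbm(p_watts):
    """dBm = 10*log10(P / 0.001)."""
    return 10 * math.log10(p_watts / 0.001)

print(dbm_to_watts(-5.8))   # ~2.63e-4 W, i.e., about 0.263 mW
print(watts_to_dbm(0.001))  # 0 dBm by definition
```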
Common power levels and their equivalent dBm values are shown in Table 3.
| Power level | dBm |
|-------------|-----|
| 1 mW        | 0   |
| 2 mW        | 3   |
| 0.5 mW      | -3  |
| 10 mW       | 10  |
| 0.1 mW      | -10 |
| 100 mW      | 20  |
| 0.01 mW     | -20 |
| 1 W         | 30  |
| 10 W        | 40  |
| 100 W       | 50  |
| 1000 W      | 60  |
Table 3 Common power levels and their equivalent dBm values. (Source: Art Pini)
The calculation of absolute voltage values for voltage-based decibel measurements is similar. To calculate the voltage level for a decibel value in dBV, the equation is:
V2 = 1 × 10^(dBV/20)
For a measured dBV value of 0.3 dBV, the equivalent voltage level is:
V2 = 1 × 10^(0.3/20)
V2 = 1.035 volts
Converting from dBV to dBµV is a scaling or multiplication operation. So, if you remember the characteristics of logarithms, multiplication within the logarithm becomes addition, and division becomes subtraction. The conversion requires a simple additive constant, derived below:
dBµV = 20 log10(V2/1×10⁻⁶)
dBµV = 20 log10(V2/1) – 20 log10(10⁻⁶)
But:
dBV = 20 log10(V2/1)
dBµV = dBV + 120
A little basic algebra and the reverse operation is:
dBV = dBµV – 120
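The same relationships in code, using the 1 V and 1 µV references just derived (the helper names are mine):

```python
import math

def dbv_to_volts(dbv):
    """V = 1 V * 10^(dBV/20), referenced to 1 V."""
    return 10 ** (dbv / 20)

def dbv_to_dbuv(dbv):
    """Shift the reference from 1 V to 1 uV: dBuV = dBV + 120."""
    return dbv + 120

print(dbv_to_volts(0.3))   # ~1.035 V, matching the worked example
print(dbv_to_dbuv(0.3))    # 120.3 dBuV
```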
What if the source impedance isn’t 50 Ω?
Typically, RF work utilizes cables and terminations with a characteristic impedance of 50 Ω. In video, the standard impedance is 75 Ω; in audio, it is 600 Ω. Reading dBm correctly on a 50-Ω-input oscilloscope from a source calibrated for a different impedance requires adjustments.
First, it is standard practice to terminate sources with their characteristic impedances. A 75-Ω or 600-Ω system signal source requires an appropriate impedance-matching device to connect to a 50-Ω measuring instrument. The most common is the simple resistive impedance-matching pad (Figure 2).

Figure 2 This schematic of a typical 600 to 50 Ω impedance matching pad reflects a 600 Ω load to the source and provides a 50 Ω source impedance for the measuring instrument. (Source: Art Pini)
The matching pad presents a 600-Ω load to the signal source, while the instrument sees a 50-Ω source, so both devices present the expected impedances. This decreases signal losses by minimizing reflections. The impedance pad is a voltage divider with an insertion loss of 16.63 dB, which must be compensated for in the measurement instrument.
The next step is where the terminating resistances come into play. If the source and load impedances differ, this difference must be considered, as it affects the decibel readings. Going back to the basic definition of decibel:
dB = 10 log10[(V2²/R2)/(V1²/R1)]
Consider how the impedance affects the voltage level equivalent to the one-milliwatt power reference level. The reference voltages equivalent to the one-milliwatt power reference differ between 50 and 600 Ω sides of the measurements:
Pref = 0.001 W = Vref600²/600 = Vref50²/50
dBm600 = 10 log10[V2²/Vref600²]
= 10 log10[V2²/(Vref50² × 600/50)]
= 10 log10[(50/600) × (V2²/Vref50²)]
= 10 log10[V2²/Vref50²] + 10 log10(50/600)
dBm600 = dBm50 – 10.8
The dBm reading on the 50-Ω instrument is 10.8 dB higher than that on the 600-Ω source because the reference power level is different for the two load impedances.
The oscilloscope’s rescale operation can scale the spectrum display to dBm referenced to 600 Ω. Assuming a 600-Ω to 50-Ω impedance matching pad with an insertion loss of 16.63 dB is used, and the above-mentioned -10.8 dB correction factor is added, a net scaling factor of 5.83 dB must be added to the FFT spectrum, as shown in Figure 3.
Figure 3 Using the rescale function of the oscilloscope to recalibrate the instrument to read spectrum levels in dBm relative to a 600-Ω source. (Source: Art Pini)
The 600-Ω source is set to output a zero-dBm signal level. A 600-Ω to 50-Ω impedance matching pad with an insertion loss of 16.63 dB properly terminates the signal source into the oscilloscope’s 50-Ω input termination. The oscilloscope’s rescale function is applied to the FFT of the acquired signal, adding 5.83 dB to the signal’s spectrum display. This yields a near-zero dBm reading at 5 MHz.
The measurement parameter P1 measures the RMS input to the oscilloscope, showing the attenuation of the external impedance-matching pad. The peak-to-peak (P2) and peak voltage (P3) readings are also measured. The peak level of the 5-MHz signal spectrum (P4) is near zero dBm (22 milli-dB). The uncorrected peak spectrum level (P5) is -5.8 dBm.
The vertical scale of the spectrum display is now calibrated to match the 600-Ω source. Note that the signal at 5 MHz reads 0 dBm, which matches the signal source setting of 0 dBm (0.774 Vrms) into the expected 600-Ω load.
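For reference, the two correction factors used above reduce to a couple of lines of arithmetic; the 16.63 dB pad loss is the value quoted earlier:

```python
import math

z_instrument = 50.0    # ohms, oscilloscope input termination
z_source     = 600.0   # ohms, audio source impedance
pad_loss_db  = 16.63   # insertion loss of the 600-to-50 ohm matching pad

# Reference-level shift between dBm referenced to 600 ohms and to 50 ohms:
ref_shift_db = 10 * math.log10(z_instrument / z_source)   # about -10.8 dB
# Net correction added to the 50-ohm FFT readout (matches the ~5.83 dB figure):
net_rescale_db = pad_loss_db + ref_shift_db

print(f"reference shift: {ref_shift_db:.1f} dB")
print(f"net rescale:     {net_rescale_db:.1f} dB")
```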
The decibel
Thanks to its large dynamic range, the decibel is a useful unit of measure in a wide variety of applications, particularly in the frequency domain. Converting between linear and logarithmic scaling takes some getting used to and possibly a lot of math.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
The post The DiaBolical dB appeared first on EDN.
Semiconductor technology trends and predictions for 2026

As we look ahead to 2026, we see intelligence increasingly being embedded within physical products and everyday interactions. This shift will be powered by rapid adoption of digital identity technologies such as near-field communication (NFC) alongside AI and agentic AI tools that automate workflows, improve efficiency, and accelerate innovation across the product lifecycle.
The sharp rise in NFC adoption—with 92% of brands already using or planning to use it in products in the next year—signals appetite to unlock the true value of the connected world. Enabling intelligence in new places gives brands the opportunity to bridge physical and digital experiences for positive social, commercial, and environmental outcomes.
Regulatory milestones, such as the phased rollout of the EU Digital Product Passport, along with sustainability pressures and the need to ensure transparency to drive trust will be key catalysts for edge and item-level AI.
In the year ahead, companies will unlock significant benefits in customer experience, sustainability, compliance, and supply chain efficiency by embedding intelligence from the edge to individual items and devices.
Let’s dig deeper into the technology trends shaping 2026.
- Edge AI is the fastest growing frontier in semiconductors
Driven by the shift from pure inference to on-device training and continuous, adaptive learning, 2026 will see strong growth in edge AI demand. Specialized chips such as low-power machine learning accelerators, sensor-integrated chips, and memory-optimized chips will be used in consumer electronics, smart cities, and industrial IoT.
Next, new packaging approaches will become the proving ground for performance, cost efficiency, and miniaturization in intelligent edge devices.
- Item-level intelligence is accelerating digital transformation
Intelligence will not stop at the device. Over the next 12 months, low-cost sensing, NFC, and edge AI will push computation down to individual items.
The capability to gather real-time data at the item level, moving away from batch data, combined with AI, will enable personalized experiences, automation, and predictive analytics across smart packaging, healthcare and wellness products, retail, and logistics. Applications include real-time tracking, AI-driven personalization, automated supply chain optimization, predictive maintenance, and dynamic authentication.
This marks a fundamental shift as every item becomes a data node and source of intelligence.
- Connected consumer experiences are driving breakthrough NFC adoption
NFC adoption is accelerating alongside the explosion of connected consumer experiences—from wearables and hearables to smart packaging, digital keys and wellness applications. NFC will become a central enabler of trust, personalization, and seamless connectivity.

Figure 1 NFC has become a key enabler in personalization-centric connectivity. Source: Pragmatic Semiconductor
As consumers increasingly expect intelligent product interaction, for example, to track provenance or engage with wellness apps to build a personalized profile and derive usable insights, the opportunity for NFC is clear. Brands will favor ultra-low-cost and thin NFC solutions—where flexible and ultra-thin semiconductors excel—to deliver frictionless, high-quality consumer experiences.
- Heterogeneous integration will unlock design innovation
Heterogeneous integration through chiplets, interposers, and die stacking will become the preferred approach for achieving higher density and improved yields. This is a key enabler for miniaturization and differentiated form factors in facilitating customization for edge AI.
At the same time, the rise of agentic AI-driven EDA tools will lower design barriers and fuel cost-effective innovation through natural language tools. This will ignite startup growth and increase demand for agile, cost-effective foundry design services.
- Compliance shifts from cost to competitive advantage
New regulatory frameworks such as Digital Product Passports, circularity, and Extended Producer Responsibility (EPR) will require authentication, traceability, and lifecycle visibility. Rather than a burden, this presents a strategic opportunity for competitive advantage and market expansion.
Embedded digital IDs with NFC capability allow businesses to secure product authentication, meet compliance and governance expectations, and unlock new value in consumer engagement. As compliance moves from paper systems to embedded intelligence, the opportunity will expand across consumer goods, industrial components, and supply chains.
- Energy constraints are driving efficiencies in semiconductor manufacturing
As semiconductor manufacturing scales to serve AI demand, growing energy consumption in data centers is forcing industry to focus on power-efficient architectures. This is accelerating a shift away from centralized compute toward fully distributed sensing and intelligence at the edge. Edge AI architectures are designed to process data locally rather than transmit it upstream and will be essential to sustaining AI growth without compounding energy constraints.

Figure 2 Semiconductor manufacturing will increasingly adopt circular design principles such as reuse, recycling, and recoverability. Source: Pragmatic Semiconductor
The capability to establish and scale domestic manufacturing will also play a critical role in cutting embedded emissions and enabling more sustainable and efficient supply chains. Semiconductor manufacturing facilities, known as foundries, will be evaluated on their energy and material efficiency, supported by circular design principles such as reuse, recycling, and recoverability.
Companies that can demonstrate strong environmental commitments will gain long-term competitive advantage, attracting customers, partners, and skilled talent.
Intelligence right to the edge
These trends point toward a definitive shift as intelligence moves dynamically into the physical world. Compute will become increasingly distributed and identity embedded, unlocking efficiencies and delivering real-time insights into the fabric of products, infrastructure, and supply chains.
Semiconductor manufacturing will sit at the heart of the next phase of digital transformation. Flexible and ultra-thin chip technologies will enable new classes of innovations, from emerging form factors such as wearables and hearables to higher functional density in constrained spaces, alongside more carbon-efficient manufacturing models.
The implications for businesses are clear. Companies can accelerate innovation, deepen consumer engagement, and turn compliance into a source of competitive advantage. Those that embed connected technologies into their 2026 strategy will be those that are best positioned to take advantage of the digital transformation opportunities ahead.
Richard Price is co-founder and chief technology officer of Pragmatic Semiconductor.
Related Content
- AI at the edge: It’s just getting started
- The AI Future is Now All About the Edge
- Edge AI: Bringing Intelligence Closer to the Source
- Powering E-Paper Displays with NFC Energy Harvesting
- Edge AI powers the next wave of industrial intelligence
The post Semiconductor technology trends and predictions for 2026 appeared first on EDN.
An off-the-shelf digital twin for software-defined vehicles

The complexity of vehicle hardware and software is rising at an unprecedented rate, so traditional development methodologies are no longer sufficient to manage system-level interdependencies among advanced driver assistance systems (ADAS), autonomous driving (AD), and in-vehicle infotainment (IVI) functions.
That calls for a new approach, one that enables automotive OEMs and tier 1s to speed the development of software-defined vehicles (SDVs) with early full-system, virtual integration that mirrors real-world vehicle hardware. That will accelerate both application and low-level software development for ADAS, AD, and IVI and remove the need for design engineers to build their own digital twins before testing software.
It will also reduce time-to-market for critical applications from months to days. Siemens EDA has unveiled what it calls a virtual blueprint for digital twin development. PAVE360 Automotive, a digital twin software, is pre-integrated as an off-the-shelf offering to address the escalating complexity of automotive hardware and software integration.
While system-level digital twins for SDVs using existing technologies can be complex and time-consuming to create and validate, PAVE360 Automotive aims to deliver a fully integrated, system-level digital twin that can be deployed on day one. That reduces the time, effort, and cost required to build such environments from scratch.

Figure 1 PAVE360 Automotive is a cloud-based digital twin that accelerates system-level development for ADAS, autonomous driving, and infotainment. Source: Siemens EDA
“The automotive industry is at the forefront of the software-defined everything revolution, and Siemens is delivering the digital twin technologies needed to move beyond incremental innovation and embrace a holistic, software-defined approach to product development,” said Tony Hemmelgarn, president and CEO, Siemens Digital Industries Software.
Siemens EDA’s digital twin—a cloud-based off-the-shelf offering—allows design engineers to jumpstart vehicle systems development from the earliest phases with customizable virtual reference designs for ADAS, autonomous driving, and infotainment. Moreover, the cloud-based collaboration unifies development with a single digital twin for all design teams.
The Arm connection
Earlier, Siemens EDA joined hands with Arm to accelerate virtual environments for Arm Cortex-A720AE in 2024 and Arm Zena Compute Subsystems (CSS) in 2025. Now Siemens EDA is integrating Arm Zena CSS with PAVE360 Automotive to enable design engineers to start building on Arm-based designs faster and more seamlessly.

Figure 2 Here is how PAVE360’s digital twin works alongside the Arm Zena CSS platform for AI-defined vehicles. Source: Siemens EDA
Furthermore, access to Arm Zena CSS in a digital twin environment such as PAVE360 Automotive can accelerate software development by up to two years. “With Arm Zena CSS available inside Siemens’ pre-integrated PAVE360 Automotive environment, partners can not only customize their solutions leveraging the unique flexibility of the Arm architecture but also validate and iterate much earlier in the development cycle,” said Suraj Gajendra, VP of products and solutions for Physical AI Business Unit at Arm.
PAVE360 Automotive, now made available to key customers, is planned for general availability in February 2026. It will be demonstrated live at CES 2026 in the Auto Hall on 6–9 January 2026.
Related Content
- Why the Cloud Is Essential for SDV Development
- Unveiling the Transformation of Software-Defined Vehicles
- Software-defined vehicle (SDV): A technology to watch in 2025
- Architectural opportunities propel software-defined vehicles forward
- Digital Twins Power Rapid Software Deployment in Autonomous Vehicles
The post An off-the-shelf digital twin for software-defined vehicles appeared first on EDN.
Wide-range tunable RC Schmitt trigger oscillator

In this Design Idea (DI), the classic Schmitt-trigger-based RC oscillator is “hacked” and analyzed using the simulation software QSPICE. You might reasonably ask why one would do this, given that countless such circuits reliably and unobtrusively clock away all over the world, even in space.
Well, problems arise when you want the RC oscillator to be tunable, i.e., replacing the resistor with a potentiometer. Unfortunately, the frequency is inversely proportional to the RC time constant, resulting in a hyperbolic response curve.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Another drawback is the limited tuning range. For high frequencies, R can become so small that the Schmitt trigger’s output voltage sags unacceptably.
The oscillator’s current consumption also increases as the potentiometer resistance decreases. In practice, an additional resistor ≥1 kΩ must always be placed in series with the potentiometer.
The potentiometer’s maximum value determines the minimum frequency. For values >100 kΩ, jitter problems can occur due to hum interference when operating the potentiometer, unless a shielded enclosure is used.
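To see why a linear potentiometer makes for an awkward tuning control, consider a first-order model of a Schmitt-trigger RC oscillator. The sketch below assumes ideal TLC555-style behavior (thresholds at one-third and two-thirds of Vcc and a rail-to-rail output), which gives f = 1/(2 R C ln 2). The 10 nF capacitor value is my own illustrative choice, and the output-sag and loading effects described above are ignored.

```python
import math

# Idealized Schmitt-trigger RC oscillator: thresholds at Vcc/3 and 2*Vcc/3,
# rail-to-rail output, so each half-period is R*C*ln(2).
C = 10e-9  # 10 nF timing capacitor (illustrative value only)

for r_pot in (1e3, 2e3, 5e3, 10e3, 20e3, 50e3, 100e3):
    f = 1.0 / (2.0 * r_pot * C * math.log(2))
    print(f"R = {r_pot/1e3:6.0f} kOhm -> f = {f/1e3:8.2f} kHz")
```

The printout makes the hyperbolic response obvious: halving the resistance doubles the frequency, so most of the tuning action is crammed into the low-resistance end of the potentiometer's travel.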
RC oscillator
Figure 1 shows an RC oscillator modeled with QSPICE’s built-in behavioral Schmitt trigger. It is parameterized as a TLC555 (CMOS 555) regarding switching thresholds and load behavior.
Figure 1 An RC oscillator modeled with QSPICE’s built-in behavioral Schmitt trigger, parameterized as a TLC555 (CMOS 555).
Figure 2 displays the typical triangle wave input and the square wave output. At R1=1 kΩ, the output voltage sag is already noticeable, and the average power dissipation of R1 is around 6 mW, roughly an order of magnitude higher than the dissipation of a low-power CMOS 555.

Figure 2 The typical triangle wave input and the square wave output, where the average power dissipation of R1 is around 6 mW.
Frequency response wrt potentiometer resistance
Next, we examine the oscillator’s frequency response as a function of the potentiometer resistance. R1 is simulated in 100 steps from 1 kΩ to 100 kΩ using the .step param command.
The simulation time must be long enough to capture at least one full period even at the lowest frequency; otherwise, the period duration cannot be measured with the .meas command.
However, with a 3-decade tuning range, far too many periods would be simulated at high frequencies, making the simulation run for a very long time.
Fortunately, QSPICE has a new feature that allows a running simulation to be aborted, after which the new simulation for the next parameter step is executed. The abort criterion is a behavioral voltage source called AbortSim(). It’s not the most elegant or intuitive feature, but it works.
Schmitt trigger oscillator
Figure 3 shows our Schmitt trigger oscillator, but this time with the parameter stepping of R1, the .meas commands for period and frequency measurement, and an auxiliary circuit that triggers AbortSim(). My idea was to build a counter clocked by the oscillator. After a small number of clock pulses—enough for one period measurement—the simulation is aborted.

Figure 3 Schmitt trigger oscillator, this time, with the parameter stepping of R1, the .meas commands for period and frequency measurement, and an auxiliary circuit that triggers AbortSim().
I first tried a 3-stage ripple counter with behavioral D-flops. This worked but wasn’t optimal in terms of computation time.
The step voltage generator in the box in Figure 3 is faster and easier to adjust. A 10-ns monostable is triggered by V(out) of the oscillator and sends short current pulses via the voltage-controlled current source to capacitor C3. The voltage across C3 triggers AbortSim() at ≥0.5 V.
The constant current and C3 are selected so that the 0.5 V threshold is reached after 3 clock cycles of the oscillator, thus starting the next measurement.
Note that the simulation time in the .tran command is set to 5 s, which is never reached due to AbortSim().
The entire QSPICE simulation of the frequency response takes the author’s PC a spectacular 1.5 s, whereas previously with LTSPICE (without the abort criterion) it took many minutes.
Figure 4 shows the frequency (FREQ) versus potentiometer resistance (RPOT) curve in a log plot, interpolated over 100 measurement points.

Figure 4 Frequency versus potentiometer resistance curve in a log plot, interpolated over 100 measurement points.
Final circuit hack
Now that we have the simulation tools for fast frequency measurement, we finally get to the circuit hack in Figure 5. We expand the circuit in Figure 1 with a resistor R2=RPOT in series with C1.

Figure 5 Hacked Schmitt trigger oscillator with an expanded Figure 1 circuit that includes R2=RPOT in series with C1.
Figure 6 illustrates what happens: for R2=0 (blue trace), we see the familiar triangle wave. When R2 is increased (magenta trace), a voltage divider:
V(out)/V(R2) = (R1+R2)/R1
is created if we momentarily ignore V(C1). V(R2) is thus a scaled-down V(out) square wave signal, to which the V(C1) triangle wave voltage is now added.

Figure 6 The typical triangle wave input with the output now reaching very high frequencies without excessively loading V(OUT).
Because the upper and lower switching thresholds of the Schmitt trigger are constant, V(C1) reaches these thresholds faster as V(R2) increases. The more V(R2) approaches the Schmitt trigger hysteresis VHYST, the smaller the V(C1) triangle wave becomes, and the frequency increases.
At V(R2)=VHYST, the frequency would theoretically become infinite. This condition in the original circuit in Figure 1 would mean R1=0, leading to infinitely high I(out). The circuit hack thus allows very high frequencies without excessively loading V(OUT)!
The problem of the steep frequency rise towards infinity at the “end” of the potentiometer still remains. To fix this, we would need a potentiometer that changes its value significantly at the beginning of its range and only slightly at the end. This is easily achieved by wiring a much smaller resistor in parallel with the potentiometer.
Fixing steep frequency rise
In Figure 7, we see a second hack: R1 has been given a very large value.

Figure 7 Giving R1 a large value keeps the circuit’s current consumption low, allowing RPOT to be dimensioned independently of R1.
This keeps the circuit’s current consumption low, especially at high frequencies. The square wave voltage at RPOT is now taken directly from V(OUT) via a separate voltage divider. This allows RPOT to be dimensioned independently of R1.
In the example, I used a common 100 kΩ potentiometer. The remaining resistors are effectively in parallel with the potentiometer regarding AC signals and set the desired characteristic curve.
Despite all measures, the frequency increase is still quite steep at the end of the range, so a 1 kΩ trimmer is recommended for practical application to conveniently set the maximum frequency.
Figure 8 shows the frequency curve of the final circuit. Compared to the curve of the original circuit in Figure 4, a significantly flatter curve profile is evident, along with a larger tuning range.

Figure 8 Frequency versus potentiometer resistance curve in a log plot, interpolated over 100 measurement points, showing a flatter curve profile and larger tuning range.
Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.
Related Content
- Exponentially-controlled vactrols
- Schmitt trigger adapts its own thresholds
- Schmitt trigger provides alternative to 555 timer
- Schmitt trigger uses two transistors
- RC oscillator generates linear triangle wave
The post Wide-range tunable RC Schmitt trigger oscillator appeared first on EDN.
Enabling a variable output regulator to produce 0 volts? Caveat, designer!

For some time now, many of EDN’s Design Ideas (DIs) have dealt with ground-referenced, single-power-supplied voltage regulators whose outputs can be configured to produce zero or near-zero volts [1][2].
In this mode of operation, regulation in response to an AC signal is problematic. This is because the regulator output voltage can’t be more negative than zero. For the many regulators with totem pole outputs, at zero volts, we could hope for the ground-side MOSFET to be indefinitely enabled, and the high side disabled. But that’s not a regulator, it’s a switch.
Wow the engineering world with your unique design: Design Ideas Submission Guide
There might be some devices that act this way when asked to produce 0 volts, but in general, the best that could be hoped for is that the output is simply disabled. In such a case, a load that is solely an energy sink would pull the voltage to ground (woe unto any that are energy sources!).
But is it lollipops and butterflies all the way down to and including zero volts? I decided to test one regulator to see how it behaves.
Testing the regulator
A TPS54821EVM-049 evaluation module employs a TPS54821 buck regulator. I’ve configured its PCB for 6.3-V out and connected it to an 8-Ω load. I’ve also connected a function generator through a 22 kΩ resistor to the regulator’s V_SNS (feedback) pin.
The generator is set to produce a 360 mVp-p square wave across the load. It also provides a variable offset voltage, which is used to set the minimum voltage Vmin of the regulator output’s square-wave. Figure 1 contains several screenshots of regulator operation while it’s configured for various values of Vmin.
Figure 1 Oscilloscope screenshot with Vmin set to (a) 400 mV, (b) 300 mV, (c) 200 mV, (d) 100 mV, (e) 30 mV, (f) 0 mV, (g) below 0 mV. See text for further discussion. The scales of each screenshot are 100 mV and 1 ms per large division. An exception is (g), whose timescale is 100 µs per large division.
As can be seen, the output is relatively clean when Vmin is 400 mV, but gets progressively noisier as it is reduced in 100-mV steps down to 100 mV (Figures 1a – 1d).
But the real problems start when Vmin is set to about 30 mV and some kind of AC signal replaces what would preferably be a DC one; the regulator is switching between open and closed-loop operation (Figure 1e).
We really get into the swing of things when Vmin is set to 0 mV and intermittent signals of about 150 mVp-p arise and disappear (Figure 1f). As the generator continues to be changed in the direction that would drive the regulator output more negative if it were capable, the amplitude of the regulator’s ringing immediately following the waveform’s falling edge increases (Figure 1g). Additionally, the overshoot of its recovery increases.
Why isn’t it on the datasheet?
This behavior might or might not disturb you. But it exists. And there are no guarantees that things would not be worse with different lots of TPS54821 or other switcher or linear regulator types altogether. These could be operating with different loads, feedback networks, and input voltage supplies with varying DC levels and amounts of noise.
There might be a very good reason that typical datasheets don’t discuss operation with output voltages below their references—it might not be possible to specify an output voltage below which all is guaranteed to work as desired. Or maybe it is.
But if it is, then why aren’t such capabilities mentioned? Where is there an IC manufacturer’s datasheet whose first page does not promise to kiss you and offer you a chocolate before you go to bed? (That is, list every possible feature of a product to induce you to buy it.)
Finding the lowest guaranteed output level
Consider a design whose intent is to allow a regulator to produce a voltage near or at zero. Absent any help from the regulator’s datasheet, I’m not sure I’d know how to go about finding a guaranteed output level below which bad things couldn’t happen.
But suppose this could be done. The “Gold-Plated” [1] DI was updated under this assumption. It provides a link to a spreadsheet that accepts the regulator reference voltage and its tolerance, a minimum allowed output voltage, a desired maximum one, and the tolerance of the resistors to be used in the circuit.
It calculates standard E96 resistor values of a specified precision along with the limits of both the maximum and the minimum output voltage ranges [3].
“Standard” regulator results
A similar spreadsheet has been created for the more general “standard” regulator circuit in Figure 2. The spreadsheet can be found at [4].
Figure 2 The “standard” regulator in which a reference voltage Vext, independent of the regulator, is used in conjunction with Rg2 to drive the regulator output to voltages below its reference voltage. For linear regulators, L1 is replaced with a short.
The spreadsheet [4] was run with the requirements shown in Figure 3.

Figure 3 Sample input requirements for the spreadsheet to calculate the resistor values and minimum and maximum output voltage range limits for a Standard regulator design.
The spreadsheet’s calculated voltage limits are shown in Figure 4.

Figure 4 Spreadsheet calculations of the minimum and maximum output voltage range limits for the requirements of Figure 3.
A Monte Carlo simulation was run 10000 times. The limits were confirmed to be close to and within the calculated ones (Figure 5).

Figure 5 Monte Carlo simulation results confirming the limits were consistent with the calculated ones.
A visual of the Monte Carlo results is helpful (Figure 6).

Figure 6 A graph of the Monte Carlo minimum output voltage range and the maximum one for the standard regulator. See text.
The minimum range is larger than the maximum range. This is because two large signals with tolerances are being subtracted to produce relatively small ones. The signals’ nominal values interfere destructively as intended. Unfortunately, the variations due to the tolerances of the two references do not:
OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 ) – Vext · PWM · Rf/Rg2
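To get a feel for why that subtraction hurts, here is a rough Monte Carlo sketch of the standard-circuit equation. The component values, tolerances, and distributions are illustrative choices of mine (a hypothetical design with roughly a 5-V maximum and a 46-mV minimum output), not the values in the article's spreadsheet:

```python
import random

def out_standard(vref, vext, rf, rg1, rg2, pwm):
    """Output of the 'standard' circuit using the equation above."""
    return vref * (1 + rf / rg1 + rf / rg2) - vext * pwm * rf / rg2

def draw(nominal, tol):
    """One uniform draw within +/- tol (0.01 = 1%)."""
    return nominal * (1 + random.uniform(-tol, tol))

# Illustrative nominals and tolerances only, NOT the article's spreadsheet inputs.
VREF, VEXT = 0.6, 2.5                 # volts
RF, RG1, RG2 = 10e3, 1.87e3, 5.05e3   # ohms
V_TOL, R_TOL = 0.01, 0.01             # 1% references, 1% resistors

vmins = [out_standard(draw(VREF, V_TOL), draw(VEXT, V_TOL),
                      draw(RF, R_TOL), draw(RG1, R_TOL), draw(RG2, R_TOL),
                      pwm=1.0)        # 100% duty cycle = lowest output
         for _ in range(10_000)]

print(f"nominal Vmin: {out_standard(VREF, VEXT, RF, RG1, RG2, 1.0)*1e3:.0f} mV")
print(f"Monte Carlo : {min(vmins)*1e3:.0f} mV to {max(vmins)*1e3:.0f} mV")
```

With these loose 1% parts, the minimum output spreads over a few hundred millivolts around its 46-mV nominal and can even land below zero on paper, which is exactly the wide minimum-range behavior described here; the article's spreadsheet works from its own component values and tolerances.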
“Gold-Plated” regulator results
When I released the “Gold-Plated” DI whose basic concept is seen in Figure 7, I did so as a lark. But after applying the aforementioned “standard” regulator’s design criteria to the Gold-Plated design’s spreadsheet [3], it became apparent that the Gold-Plated design has a real value—its ability to more greatly constrain the limits of the minimum output voltage range.
Figure 7 The basic concept of the Gold-Plated regulator. K = 1 + R3/R4 .
The input to the Gold-Plated spreadsheet is shown in Figure 8.

Figure 8 The inputs to the Gold-Plated spreadsheet.
Its calculations of the minimum and maximum output voltage range limits are shown in Figure 9.

Figure 9 The results for the “Gold-Plated” spreadsheet showing maximum and minimum voltage range limits when PWM inputs are at minimum and maximum duty cycles.
The limits resulting from its 10000 run Monte Carlo simulation were again confirmed to be close to and within those calculated by the spreadsheet:

Figure 10 Monte Carlo simulation results of the Gold-Plated spreadsheet, confirming the limits were consistent with the calculated ones.
Again, a visual is helpful, with the Gold-Plated results on the left and the Standard on the right.

Figure 11 Graphs of the Monte Carlo simulation results of the Gold-Plated (left) and Standard (right) designs. The minimum voltage range of the Gold-Plated design is far smaller than that of the Standard.
The Standard regulator’s minimum range magnitude is 161 mV, while that of the Gold-Plated version is only 33 mV. The Gold-Plated’s advantage will increase as the desired Vmin approaches 0 volts. Its benefits are due to the fact that only a single reference is involved in the subtraction of terms:
OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 · PWM · ( 1 – K ) )
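To make the single-reference advantage concrete, a small sensitivity check helps. The values below continue the hypothetical example from the earlier sketch, with the Gold-Plated resistors and K re-chosen (again, values of mine, not the article's) so both circuits sit near a 50-mV minimum:

```python
VREF, VEXT = 0.6, 2.5
RF, RG1, RG2 = 10e3, 1.87e3, 5.05e3     # standard circuit, as before
K = 2.0                                 # K = 1 + R3/R4, here with R3 = R4
RG1_GP, RG2_GP = 1.364e3, 1.212e3       # chosen so the Gold-Plated minimum is ~50 mV

def vmin_standard(vref, vext):
    return vref * (1 + RF/RG1 + RF/RG2) - vext * 1.0 * RF/RG2

def vmin_gold(vref):
    return vref * (1 + RF/RG1_GP + (RF/RG2_GP) * 1.0 * (1 - K))

dv = 0.006  # a 1% error on a 0.6 V reference
d_std = vmin_standard(VREF + dv, VEXT) - vmin_standard(VREF, VEXT)
d_gp  = vmin_gold(VREF + dv) - vmin_gold(VREF)
print(f"standard   : a 1% Vref error moves Vmin by {abs(d_std)*1e3:.1f} mV")
print(f"gold-plated: a 1% Vref error moves Vmin by {abs(d_gp)*1e3:.1f} mV")
```

The same 1% reference error that moves the standard circuit's minimum by roughly 50 mV moves the Gold-Plated minimum by well under a millivolt, because the error now scales a small output rather than being differenced against an independent reference.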
Belatedly, another advantage of the Gold-Plated was discovered: When a load is applied to any regulator, its output voltage falls by a small amount, causing a reduction of ΔV at the Vref feedback pin.
In the Gold-Plated, there is an even larger reduction at the output of its op-amp because of its gain. The result is a reduced drop across Rg2. This acts to increase the output voltage, improving load regulation.
In contrast, while the Standard regulator also sees a ΔV drop at the feedback pin, the external regulator voltage remains steady. The result is an increase in the drop across Rg2, further reducing the output voltage and degrading load regulation.
Summing up
The benefits of the Gold-Plated design are clear, but it’s not a panacea. Whether a Gold-Plated or Standard design is used, designers still must address the question: How low should you go? Caveat, designer!
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content/References
- Gold-Plated PWM-control of linear and switching regulators
- Accuracy loss from PWM sub-Vsense regulator programming
- Gold-Plated DI Github
- Enabling a variable output regulator to produce 0 volts DI Github
The post Enabling a variable output regulator to produce 0 volts? Caveat, designer! appeared first on EDN.
Why memory swizzling is a hidden tax on AI compute

Walk into any modern AI lab, data center, or autonomous vehicle development environment, and you’ll hear engineers talk endlessly about FLOPS, TOPS, sparsity, quantization, and model scaling laws. Those metrics dominate headlines and product datasheets. If you spend time with the people actually building or optimizing these systems, a different truth emerges: Raw arithmetic capability is not what governs real-world performance.
What matters most is how efficiently data moves. And for most of today’s AI accelerators, data movement is tangled up with something rarely discussed outside compiler and hardware circles, that is, memory swizzling.
Memory swizzling is one of the biggest unseen taxes paid by modern AI systems. It doesn’t enhance algorithmic processing efficiency. It doesn’t improve accuracy. It doesn’t lower energy consumption. It doesn’t produce any new insight. Rather, it exists solely to compensate for architectural limitations inherited from decades-old design choices. And as AI models grow larger and more irregular, the cost of this tax is growing.
This article looks at why swizzling exists, how we got here, what it costs us, and how a fundamentally different architectural philosophy, specifically, a register-centric model, removes the need for swizzling entirely.
The problem nobody talks about: Data isn’t stored the way hardware needs it
In any AI tutorial, tensors are presented as ordered mathematical objects that sit neatly in memory in perfect layouts. These layouts are intuitive for programmers, and they fit nicely into high-level frameworks like PyTorch or TensorFlow.
The hardware doesn’t see the world this way.
Modern accelerators—GPUs, TPUs, and NPUs—are built around parallel compute units that expect specific shapes of data: tiles of fixed size, strict alignment boundaries, sequences with predictable stride patterns, and arranged in ways that map into memory banks without conflicts.
Unfortunately, real-world tensors never arrive in those formats. Before the processing even begins, data must be reshaped, re-tiled, re-ordered, or re-packed into the format the hardware expects. That reshaping is called memory swizzling.
You may think of it this way: The algorithm thinks in terms of matrices and tensors; the computing hardware thinks in terms of tiles, lanes, and banks. Swizzling is the translation layer—a translation that costs time and energy.
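A minimal framework-level illustration, with NumPy standing in for an accelerator runtime and an arbitrary tensor shape, shows what that translation costs:

```python
import numpy as np

# A batch of "NHWC" activations (batch, height, width, channels), the layout
# many frameworks use by default. The shape is arbitrary.
x_nhwc = np.random.rand(8, 224, 224, 64).astype(np.float32)

# Many accelerator kernels want NCHW (channels before the spatial dims).
# transpose() only changes the view's strides...
x_view = x_nhwc.transpose(0, 3, 1, 2)
print(x_view.flags['C_CONTIGUOUS'])      # False: no data has moved yet

# ...but feeding a kernel that needs contiguous NCHW forces a full copy:
x_nchw = np.ascontiguousarray(x_view)    # every element is physically rewritten
print(x_nchw.flags['C_CONTIGUOUS'])      # True
print(f"{x_nchw.nbytes/1e6:.0f} MB rewritten just to change the layout")
```

On real hardware the same story plays out in shared memory and DRAM rather than host RAM, but the principle is identical: changing the layout means physically rewriting every element.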
Why hierarchical memory forces us to swizzle
Virtually every accelerator today uses a hierarchical memory stack whose layers, from the top down, encompass registers; shared or scratchpad memory; L1, L2, and sometimes even L3 caches; high-bandwidth memory (HBM); and, at the bottom of the stack, external dynamic random-access memory (DRAM).
Each level has a different size, latency, bandwidth, and access energy, and, rather importantly, different alignment constraints. This is a legacy of CPU-style architecture, where caches hide memory latency. See Figure 1 and Table 1.

Figure 1 See the capacity and bandwidth attributes of a typical hierarchical memory stack in all current hardware processors. Source: VSORA

Table 1 Capacity, latency, bandwidth, and access energy dissipation of a typical hierarchical memory stack in all current hardware processors are shown here. Source: VSORA
GPUs inherited this model, then added single-instruction multiple-thread (SIMT) execution on top. That makes them phenomenally powerful—but also extremely sensitive to how data is laid out. If neighboring threads in a warp don’t access neighboring memory locations, performance drops dramatically. If tile boundaries don’t line up, tensor cores stall. If shared memory bank conflicts occur, everything waits.
TPUs suffer from similar constraints, just with different mechanics. Their systolic arrays operate like tightly choreographed conveyor belts. Data must arrive in the right order and at the right time. If weights are not arranged in block-major format, the systolic fabric can’t operate efficiently.
NPU-based accelerators—from smartphone chips to automotive systems—face the same issues: multi-bank SRAMs, fixed vector widths, and 2D locality requirements for vision workloads. Without swizzling, data arrives “misaligned” for the compute engine, and performance nosedives.
In all these cases, swizzling is not an optimization—it’s a survival mechanism.
The hidden costs of swizzling
Swizzling takes time, sometimes a lot
In real workloads, swizzling often consumes 20–60% of the total runtime. That’s not a typo. In a convolutional neural network, half the time may be spent doing NHWC↔NCHW conversions, that is, converting between two different ways of laying out 4D tensors in memory. In a transformer, vast amounts of time are wasted reshaping Q/K/V tensors, splitting heads, repacking tiles for GEMMs, and reorganizing outputs.
Swizzling burns energy and energy is the real limiter
A single MAC consumes roughly a quarter of a picojoule. Moving a value from DRAM can cost 500 picojoules. In other words, fetching data from DRAM dissipates roughly three orders of magnitude more energy than performing a basic multiply-accumulate operation.
Swizzling requires reading large blocks of data, rearranging them, and writing them back. And this often happens multiple times per layer. When 80% of your energy budget goes to moving data rather than computing on it, swizzling becomes impossible to ignore.
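A back-of-envelope calculation using the figures above, with an invented layer size and access count, shows how quickly movement dominates:

```python
# Back-of-envelope arithmetic using the per-operation energies quoted above
# (~0.25 pJ per MAC, ~500 pJ per DRAM access). The layer size and the number
# of DRAM touches are invented purely for illustration.
E_MAC_PJ, E_DRAM_PJ = 0.25, 500.0

macs            = 1e9                   # multiply-accumulates in the layer
tensor_elements = 10e6                  # elements fetched from DRAM
dram_accesses   = 3 * tensor_elements   # read + swizzle rewrite + read back

compute_pj  = macs * E_MAC_PJ
movement_pj = dram_accesses * E_DRAM_PJ
total_pj    = compute_pj + movement_pj

print(f"compute : {compute_pj/1e9:6.2f} mJ ({compute_pj/total_pj:5.1%})")
print(f"movement: {movement_pj/1e9:6.2f} mJ ({movement_pj/total_pj:5.1%})")
```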
Swizzling inflates memory usage
Most swizzling requires temporary buffers: packed tiles, staging buffers, and reshaped tensors. These extra memory footprints can push models over the limits of L2, L3, or even HBM, forcing even more data movement.
Swizzling makes software harder and less portable
Ask a CUDA engineer what keeps them up at night. Ask a TPU compiler engineer why XLA is thousands of pages deep in layout inference code. Ask anyone who writes an NPU kernel for mobile why they dread channel permutations.
It’s swizzling. The software must carry enormous complexity because the hardware demands very specific layouts. And every new model architecture—CNNs, LSTMs, transformers, and diffusion models—adds new layout patterns that must be supported.
The result is an ecosystem glued together by layout heuristics, tensor transformations, and performance-sensitive memory choreography.
How major architectures became dependent on swizzling
- Nvidia GPUs
Tensor cores require specific tile-major layouts. Shared memory is banked; avoiding conflicts requires swizzling. Warps must coalesce memory accesses; otherwise, efficiency tanks. Even cuBLAS and cuDNN, the most optimized GPU libraries on Earth, are filled with internal swizzling kernels.
- Google TPUs
TPUs rely on systolic arrays. The flow of data through these arrays must be perfectly ordered. Weights and activations are constantly rearranged to align with the systolic fabric. Much of XLA exists simply to manage data layout.
- AMD CDNA, ARM Ethos, Apple ANE, and Qualcomm AI engine
Every one of these architectures performs swizzling in some form: Morton tiling, interleaving, channel stacking, and so on (a Morton-order sketch follows below). It's a universal pattern. Every architecture that uses hierarchical memory inherits the need for swizzling.
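As one concrete example of these layout tricks, the sketch below computes a Morton (Z-order) index by interleaving the bits of the row and column coordinates, so that 2D-local elements land near each other in the 1D address space. This is a generic illustration, not any particular vendor's exact scheme.

```python
# Minimal Morton (Z-order) tiling sketch: interleave row and column bits.
def morton_index(row, col, bits=16):
    idx = 0
    for i in range(bits):
        idx |= ((row >> i) & 1) << (2 * i + 1)
        idx |= ((col >> i) & 1) << (2 * i)
    return idx

# A 4x4 neighborhood maps to a compact range of addresses:
for r in range(4):
    print([morton_index(r, c) for c in range(4)])
# [0, 1, 4, 5]
# [2, 3, 6, 7]
# [8, 9, 12, 13]
# [10, 11, 14, 15]
```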
A different philosophy: Eliminating swizzling at the root
Now imagine stepping back and rethinking AI hardware from first principles. Instead of accepting today’s complex memory hierarchies as unavoidable—the layers of caches, shared-memory blocks, banked SRAMs, and alignment rules—imagine an architecture built on a far simpler premise.
What if there were no memory hierarchy at all? What if, instead, the entire system revolved around a vast, flat expanse of registers? What if the compiler, not the hardware, orchestrated every data movement with deterministic precision? And what if all the usual anxieties—alignment, bank conflicts, tiling strategies, and coalescing rules—simply disappeared because they no longer mattered?
This is the philosophy behind a register-centric architecture. Rather than pushing data up and down a ladder of memory levels, data simply resides in the registers where computation occurs. The architecture is organized not around the movement of data, but around its availability.
That means:
- No caches to warm up or miss
- No warps to schedule
- No bank conflicts to avoid
- No tile sizes to match
- No tensor layouts to respect
- No sensitivity to shapes or strides, and therefore no swizzling at all
In such a system, the compiler always knows exactly where each value lives, and exactly where it needs to be next. It doesn’t speculate, prefetch, tile, or rely on heuristics. It doesn’t cross its fingers hoping the hardware behaves. Instead, data placement becomes a solvable, predictable problem.
The result is a machine where throughput remains stable, latency becomes predictable, and energy consumption collapses because unnecessary data motion has been engineered out of the loop. It’s a system where performance is no longer dominated by memory gymnastics—and where computing, the actual math, finally takes center stage.
The future of AI: Why a register-centric architecture matters
As AI systems evolve, the tidy world of uniform tensors and perfectly rectangular compute tiles is steadily falling away. Modern models are no longer predictable stacks of dense layers marching in lockstep. Instead, they expand in every direction: They ingest multimodal inputs, incorporate sparse and irregular structures, reason adaptively, and operate across ever-longer sequences. They must also respond in real time for safety-critical applications, and they must do so within tight energy budgets—from cars to edge devices.
In other words, the assumptions that shaped GPU and TPU architectures—the expectation of regularity, dense grids, and neat tiling—are eroding. Future workloads are simply not shaped the way the hardware wants them to be.
A register-centric architecture offers a fundamentally different path. Because it operates directly on data where it lives, rather than forcing that data into tile-friendly formats, it sidesteps the entire machinery of memory swizzling. It does not depend on fixed tensor shapes.
It doesn’t stumble when access patterns become irregular or dynamic. It avoids the costly dance of rearranging data just to satisfy the compute units. And as models grow more heterogeneous and more sophisticated, such an architecture scales with their complexity instead of fighting against it.
This is more than an incremental improvement. It represents a shift in how we think about AI compute. By eliminating unnecessary data movement—the single largest bottleneck and energy sink in modern accelerators—a register-centric approach aligns hardware with the messy, evolving reality of AI itself.
Memory swizzling is the quiet tax that every hierarchical-memory accelerator pays. It is fundamental to how GPUs, TPUs, NPUs, and nearly all AI chips operate. It’s also a growing liability. Swizzling introduces latency, burns energy, bloats memory usage, and complicates software—all while contributing nothing to the actual math.
A register-centric architecture eliminates swizzling at the root by removing the hierarchy that makes it necessary. It replaces guesswork and heuristics with deterministic dataflow. It prioritizes locality without requiring rearrangement. It lets the algorithm drive the hardware, not vice versa.
As AI workloads become more irregular, dynamic, and power-sensitive, architectures that keep data stationary and predictable—rather than endlessly reshuffling it—will define the next generation of compute.
Swizzling was a necessary patch for the last era of hardware. It should not define the next one.
Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.
Related Content
- Overcoming the AI memory bottleneck
- AI to Drive Surge in Memory Prices Through 2026
- HBM memory chips: The unsung hero of the AI revolution
- Generative AI and memory wall: A wakeup call for IC industry
- Breaking Through Memory Bottlenecks: The Next Frontier for AI Performance
The post Why memory swizzling is a hidden tax on AI compute appeared first on EDN.