EDN Network

Voice of the Engineer
Address: https://www.edn.com/
Updated: 1 hour 11 min ago

Low-power Wi-Fi 6 MCUs preserve IoT battery life

3 hours 12 min ago

Renesas has announced the RA6W1 dual-band Wi-Fi 6 wireless MCU, to be followed by the RA6W2 Wi-Fi 6 and BLE combo MCU. Based on an Arm Cortex-M33 CPU running at 160 MHz, these low-power microcontrollers dynamically switch between 2.4-GHz and 5-GHz bands in real time, ensuring a stable, high-speed connection.

The RA6W1 and RA6W2 MCUs use Target Wake Time (TWT) to let IoT devices sleep longer, extending battery life and reducing network congestion. They consume as little as 200 nA to 4 µA in deep sleep and under 50 µA while checking for data, enabling devices to stay connected for a year or more on a single battery. This makes them well-suited for applications requiring real-time control, remote diagnostics, and over-the-air updates—for example, environmental sensors, smart home devices, and medical monitors.
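The battery-life claim can be sanity-checked with a simple duty-cycle estimate. The sketch below uses the deep-sleep current quoted above; the battery capacity, active current, and wake schedule are illustrative assumptions, not Renesas figures.

```python
# Rough duty-cycle battery-life estimate for a TWT-scheduled device.
# Sleep current comes from the figures above; capacity, active current,
# and wake schedule are assumed for illustration.

def battery_life_days(capacity_mah, i_sleep_ua, i_active_ma, awake_s_per_hour):
    """Days of operation from a simple sleep/active duty cycle."""
    sleep_s = 3600.0 - awake_s_per_hour
    avg_ma = (i_sleep_ua / 1000.0 * sleep_s
              + i_active_ma * awake_s_per_hour) / 3600.0
    return capacity_mah / avg_ma / 24.0

# 200-mAh cell, 4-uA deep sleep, assumed 40-mA active draw,
# waking for 2 s every hour:
print(round(battery_life_days(200, 4.0, 40.0, 2.0)))  # ~318 days
```

Even with the brief active bursts dominating the average, the result lands in the "year or more" range the announcement cites.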

Alongside the RA6W1 and RA6W2 MCUs, Renesas launched two fully integrated modules designed to reduce development time and accelerate time to market. The Wi-Fi 6 (RRQ61001) and Wi-Fi 6/BLE combo (RRQ61051) modules feature built-in antennas, certified RF components, and wireless protocol stacks that comply with global network standards.

The RA6W1 MCU in WLCSP and FCQFN packages, as well as the RRQ61001 and RRQ61051 modules, are available now. The RA6W2 MCU in a BGA package is scheduled for release in Q1 2026.

Renesas Electronics 

The post Low-power Wi-Fi 6 MCUs preserve IoT battery life appeared first on EDN.

Automotive buck converter is I2C-tuned

3 hours 12 min ago

Optimized for automotive point-of-load (POL) applications, Diodes’ AP61406Q 5.5-V, 4-A synchronous buck converter provides a versatile I2C programming interface. The I2C 3.0-compatible serial interface supports SCL clock rates up to 3.4 MHz and allows configuration of PFM/PWM modes, switching frequencies (1 MHz, 1.5 MHz, 2 MHz, or 2.5 MHz), and output-current limits of 1 A, 2 A, 3 A, and 4 A. The output voltage is adjustable in 20-mV increments.
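To illustrate the 20-mV programming granularity, here is a hypothetical helper that maps a target output voltage to a step code. The base voltage and the linear code mapping are assumptions for illustration only; the AP61406Q datasheet defines the actual register map.

```python
# Hypothetical 20-mV-step output-voltage coding for an I2C-programmable
# buck. V_BASE and the linear code mapping are assumptions, not taken
# from the AP61406Q datasheet.

V_BASE = 0.6    # assumed minimum programmable output, volts
V_STEP = 0.020  # 20-mV increments, per the announcement

def vout_code(v_target):
    """Integer step code for a target output voltage."""
    if v_target < V_BASE:
        raise ValueError("below assumed minimum output")
    return round((v_target - V_BASE) / V_STEP)

print(vout_code(1.8))  # 60 steps above the assumed 0.6-V base
```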

The AP61406Q uses a proprietary gate-driver scheme to suppress switching-node ringing without slowing MOSFET transitions, helping reduce high-frequency radiated EMI. It operates from an input of 2.3 V to 5.5 V and integrates 75-mΩ high-side and 33-mΩ low-side MOSFETs for efficient step-down conversion. Constant on-time (COT) control further minimizes external components, eases loop stabilization, and delivers low output-voltage ripple.

Offered in a W-QFN1520-8/SWP (Type UX) package, the converter is AEC-Q100 qualified for operation from –40°C to +125°C. Its protection suite—including high-side and low-side current-sense protection, UVLO, VIN OVP, peak and valley current limiting, and thermal shutdown—enhances reliability.

AP61406Q product page 

Diodes

The post Automotive buck converter is I2C-tuned appeared first on EDN.

SiC power modules deliver up to 608 A

3 hours 12 min ago

SemiQ continues to expand its Gen3 QSiC MOSFET portfolio with 1200-V power modules offering high current density and low thermal resistance. The new seven-device lineup includes high-current S3 half-bridge, B2T1 six-pack, and B3 full-bridge modules designed to meet the needs of EV chargers, energy storage systems, and industrial motor drives.

Two of the devices handle currents up to 608 A with a junction-to-case thermal resistance of just 0.07 °C/W in a 62‑mm S3 half-bridge format. The three six-pack modules integrate a three-phase power stage into a compact housing, offering on-resistance from 19.5 mΩ to 82 mΩ, an optimized layout, and minimal parasitic effects. The two full-bridge modules combine current handling up to 120 A with on-resistance as low as 8.6 mΩ and a thermal resistance of 0.28 °C/W.
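The quoted junction-to-case thermal resistance translates directly into junction temperature rise per watt of loss. A quick sketch, with an assumed operating point (the current/on-resistance pairing below is illustrative):

```python
# Junction temperature rise above case from conduction loss alone:
# delta_T = I^2 * Rds(on) * Rth(j-c). Operating point is illustrative.

def tj_rise_c(i_drain_a, rds_on_ohm, rth_jc_c_per_w):
    p_cond_w = i_drain_a ** 2 * rds_on_ohm  # conduction loss, W
    return p_cond_w * rth_jc_c_per_w        # deg C above case

# 300 A through an assumed 2.4-mOhm module at the quoted 0.07 degC/W:
print(round(tj_rise_c(300, 0.0024, 0.07), 1))  # ~15.1 degC rise
```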

All parts undergo wafer-level gate-oxide burn-in and are breakdown-tested above 1350 V. Gen3 modules operate at lower gate voltages (18 V/−4.5 V) and reduce both on-resistance and turn-off energy losses by up to 30% versus previous generations.

The power modules are available immediately. Explore SemiQ’s entire line of Gen3 MOSFET power modules here.

SemiQ

The post SiC power modules deliver up to 608 A appeared first on EDN.

Handheld analyzers cut through dense RF traffic

3 hours 13 min ago

With 120-MHz gap-free IQ streaming, Keysight’s N99xxD-Series FieldFox analyzers ensure every signal event is captured. This capability lets users stream and replay complex RF activity to quickly pinpoint issues and verify system performance. The result is deeper analysis and greater confidence that key signal details are not overlooked in the field.

The N99xxD-Series includes 14 handheld models—combo or spectrum analyzers—covering frequencies from 14 GHz to 54 GHz. Each model supports more than 25 software-defined FieldFox applications, including vector network analysis, spectrum and real-time spectrum analysis, noise figure measurement, EMI analysis, pulse signal generation, and direction-finding.

Key capabilities of the N99xxD-Series include:

  • 120-MHz IQ streaming with SFP+ 10-GbE interfaces for uninterrupted data capture
  • Wideband signal analysis and playback for troubleshooting, spectrum monitoring, and interference detection
  • Field-to-lab workflow to recreate real-world signals for lab analysis
  • High RF performance with ±0.1 dB amplitude accuracy without warm-up
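A quick data-rate check shows why SFP+ 10-GbE interfaces are specified for gap-free capture. The sample rate and sample width below are assumptions (120 MHz is the analysis bandwidth, not necessarily the streaming sample rate):

```python
# Sustained IQ stream rate: sample_rate * 2 components (I and Q) * bits.
# Sample rate and width are assumed for illustration.

def iq_rate_gbps(sample_rate_msps, bits_per_component):
    return sample_rate_msps * 1e6 * 2 * bits_per_component / 1e9

# e.g. 150 MSa/s with 16-bit I and Q samples:
print(iq_rate_gbps(150, 16))  # 4.8 Gb/s -- inside a 10-GbE link
```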

A technical overview of Keysight’s FieldFox handheld analyzers and D-Series information can be found here.

Keysight Technologies 

The post Handheld analyzers cut through dense RF traffic appeared first on EDN.

MOSFETs bring 750-V capability to TOLL package

3 hours 13 min ago

Now in mass production, Rohm’s SCT40xxDLL series of SiC MOSFETs in TOLL (TO-Leadless) packages delivers high power-handling capability in a compact, low-profile form factor. According to Rohm, the TOLL package provides roughly 39% better thermal performance than conventional TO-263-7L packages.

The SCT40xxDLL lineup consists of six devices, each rated for a 750-V maximum drain-source voltage, compared to the 650-V limit typical of standard TOLL packages. This higher voltage rating enables lower gate resistance and a larger safety margin for surge voltages, helping to further reduce switching losses.

In AI servers and compact PV inverters, rising power requirements coincide with pressure to reduce system size, increasing the need for higher-density MOSFETs. In slim totem-pole PFC designs with thickness limits near 4 mm, Rohm’s new devices cut footprint to 11.68×9.9 mm (about 26% smaller) and reduce package height to 2.3 mm, about half that of typical devices.

The 750-V SiC MOSFETs are available from distributors such as DigiKey, Mouser, and Farnell. For details and datasheets, click here.

Rohm Semiconductor 

The post MOSFETs bring 750-V capability to TOLL package appeared first on EDN.

Splitting voltage with purpose: A guide to precision voltage dividers

6 hours 24 min ago

Voltage division is not just about ratios; it’s about control, clarity, and purpose. This little guide explores precision voltage dividers with quiet confidence, and sheds light on how they shape signal levels, reference points, and measurement accuracy.

A precision voltage divider produces a specific fraction of its input voltage using carefully matched resistive components. It’s designed for accurate, stable voltage scaling—often used to shape signal levels, generate reference voltages, or condition inputs for measurement. Built with low-tolerance resistors, these dividers ensure consistent performance across temperature and time, making them essential in analog design, instrumentation, and sensor interfacing (Figure 1).
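The scaling itself is the familiar two-resistor ratio; a minimal sketch of the ideal relationship:

```python
# Ideal (unloaded) voltage divider: Vout = Vin * R2 / (R1 + R2),
# where R2 is the resistor between the tap and ground.

def divider_vout(vin, r1, r2):
    return vin * r2 / (r1 + r2)

print(divider_vout(10.0, 90_000, 10_000))  # 10:1 tap -> 1.0 V
```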

Figure 1 Representation of an SOT23 precision resistor-divider illustrates two tightly matched resistors with accessible terminals at both ends and the midpoint. Source: Author

A side note: While the term precision voltage divider broadly refers to any resistor-based circuit that scales voltage, precision resistor-divider typically denotes a tightly matched resistor pair in a single package, for example, SOT23. These integrated devices offer superior ratio accuracy and thermal tracking, making them ideal for reference scaling and threshold setting in precision analog designs.

As an unbiased real-world example, the HVDP08 series from KOA is a thin-film resistor network designed for high-precision, high-voltage divider applications. It supports resistance values up to 51 MΩ, working voltages up to 1,000 V, and resistance ratios as high as 1000:1.

Figure 2 The HVDP08 high-precision, high-voltage divider achieves higher integration while reducing board space requirements and overall assembly overhead. Source: KOA

Similarly, precision decade voltage dividers—specifically engineered for use as input voltage dividers in multimeters and other range-switching instruments—are now widely available. Simply put, precision decade voltage dividers are resistor networks that provide accurate, selectable voltage ratios in powers of ten. One notable example is the EBG Series 1776-X, widely recognized for its precision and reliability.

Figure 3 EBG Series 1776-X precision decade resistors incorporate ceramic protection and laser-trimmed thin films to achieve ultra-tight tolerances. Source: Miba

Moreover, digitally programmable precision voltage dividers—such as the MAX5420 and MAX5421—are optimized for use in digitally controlled gain amplifier configurations. Programmable gain amplifiers (PGAs) allow precise, software-driven control of signal amplification, making them ideal for applications that require dynamic range adjustment, calibration, or sensor interfacing.

Poor man’s precision practice

Precision does not have to be pricey. In this section, we explore how resourceful design choices—clever resistor selection, thoughtful layout, and a dash of calibration—can yield surprisingly accurate voltage dividers without premium components. Whether you are prototyping on a budget or refining a DIY instrument, this hands-on approach proves that precision is within reach.

Achieving precision on a budget starts with clever resistor selection: Choosing resistors with tight tolerances, low temperature coefficients, and stable long-term behavior, even if they are not top-shelf brands. A thoughtful layout ensures minimal parasitic effects; short traces, good grounding, and avoiding thermal gradients all help preserve accuracy. Finally, a dash of calibration—whether through trimming, software correction, or referencing known voltages—can compensate for small mismatches and elevate a humble design into a reliable performer.

While selecting resistors, it’s important to distinguish between absolute and relative tolerance. Absolute tolerance refers to how closely each resistor matches its nominal value, say ±1% of 10 kΩ. Relative tolerance, on the other hand, describes how closely a pair or group of resistors match each other, regardless of their deviation from nominal. In voltage dividers, especially precision ones, relative tolerance often matters more. Even if both resistors drift slightly, as long as they drift together, the ratio—and thus the output voltage—remains stable.

As an aside, ratio tolerance refers to how closely a resistor pair maintains its intended resistance ratio, independent of their absolute values. In precision voltage dividers, this metric is key; even if both resistors drift slightly, a tight ratio tolerance ensures the output voltage remains stable. It’s a subtle but critical factor when accuracy depends more on matching than on nominal values.
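The difference between absolute and ratio tolerance is easy to quantify. The worst-case sketch below assumes each resistor sits at an independent extreme of its tolerance band:

```python
# Worst-case divider ratio when R1 and R2 each vary by +/-tol
# independently (absolute tolerance). A matched pair with tight ratio
# tolerance keeps the ratio near nominal even as both values drift.

def worst_case_ratio(r1, r2, tol):
    lo = r2 * (1 - tol) / (r2 * (1 - tol) + r1 * (1 + tol))
    hi = r2 * (1 + tol) / (r2 * (1 + tol) + r1 * (1 - tol))
    return lo, hi

lo, hi = worst_case_ratio(10_000, 10_000, 0.01)  # two independent +/-1% parts
print(round(lo, 3), round(hi, 3))  # 0.495 0.505 around a 0.5 nominal
```

Two independent ±1% parts can shift a 2:1 ratio by about ±1%, while a packaged pair with, say, 0.05% ratio tolerance holds the same ratio roughly twenty times tighter.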

Having covered the essentials, we now turn to a hands-on example, one that puts theory into practice with accessible components and practical constraints.

Operational amplifier (op-amp) circuits are commonly used to scale the output voltage of digital-to-analog converters (DACs). Two popular configurations—the non-inverting amplifier and the inverting amplifier—can both amplify the signal and adjust its DC offset.

For applications requiring output scaling without offset, the goal is to expand the voltage range of the DAC’s output while maintaining its original polarity. This setup requires the op-amp’s positive supply rail to exceed the desired maximum output voltage.

Figure 4 This output-scaling circuit extends the DAC’s voltage range without altering its polarity. Source: Author

Output voltage formula: VOUT = VIN (1 + RF/RG)

Scaling in action

To scale a DAC output from 0–5 V to 0–10 V, a gain of 2.0 is required.

Using a 10-kΩ feedback resistor (RF) and a 10-kΩ gain resistor (RG), the gain becomes 2. This configuration doubles the DAC’s output voltage while preserving its zero-based reference.
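The arithmetic above follows directly from the output voltage formula:

```python
# Non-inverting op-amp scaling: Vout = Vin * (1 + RF / RG).

def scale_vout(vin, rf, rg):
    return vin * (1 + rf / rg)

# Equal 10-kOhm RF and RG give a gain of 2, mapping 0-5 V to 0-10 V:
print(scale_vout(5.0, 10_000, 10_000))  # 10.0
```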

You can also design op-amp circuits to scale and shift the DAC output by a specific DC offset. This is especially useful when converting a unipolar output, for example, 0 V to 2.5 V, into a bipolar range, for instance, –5 V to +5 V. But that’s a story for another day.

Precision voltage dividers may seem straightforward, but their influence on signal integrity and measurement accuracy runs deep. Whether you are working on analog front-ends, reference rails, or sensor inputs, careful resistor selection and layout choices can make or break performance.

Have a go-to divider trick or layout insight? Drop it in the comments and join the conversation.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Splitting voltage with purpose: A guide to precision voltage dividers appeared first on EDN.

How LLCs unlock innovation in automotive electronics

8 hours 23 min ago

A recent McKinsey mobility survey shows that automobile owners are prioritizing battery range, charging speeds, and reliability when considering an electric vehicle (EV). Automakers are responding to these consumer preferences by developing more resilient power systems with higher power density and advanced battery management systems (BMS) that maximize space while improving performance.

Regardless of manufacturer or vehicle type, EV architecture development prompts digital technology innovations. Yet tried-and-true analog technologies such as integrated magnetics offer measurable benefits, with inductor-inductor-capacitors (LLCs) providing stable voltage regulation and a consistent response to the load changes needed for EV charging.

LLC resonant circuits operating within switched-mode DC/DC power converters deliver wide output voltage control, soft switching in the primary, low voltage in the secondary, and slight changes in switching frequency—all requirements for EVs.

Because resonant converters have soft switching capabilities and can handle high voltages with nearly 98% efficiency, these devices can rapidly charge EVs while minimizing energy losses. Compact LLC resonant converter modules enable easy scalability and adaptability for different voltage requirements.

LLC resonant circuit 

As Figure 1 shows, LLC resonant converters include MOSFET power switches (S1 and S2), a resonant tank circuit, a high-frequency transformer, and a rectifier. S1 and S2 convert an input DC voltage into a high-frequency square wave.

Figure 1 An LLC resonant half-bridge converter with power switches S1 and S2, a resonant tank circuit, a high-frequency transformer, and a rectifier. Source: Texas Instruments

The resonant tank circuit consists of a resonant capacitor (Cr), a resonant inductor (Lr) in series with the capacitor and transformer (T1), and a magnetizing inductor (LM) in parallel with the capacitor and transformer. Using two inductors allows the tank circuit to respond to a broad range of loads and to establish stable control over the entire load range.

Oscillating at the resonant frequency (fR), the resonant tank circuit eliminates square-wave harmonics and outputs a sine wave of the fundamental switching frequency to the input of T1. Operating the circuit at a switching frequency at or near fR causes the resonant current to discharge or charge the capacitance just before the power switch changes state.
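The series resonant frequency follows from Lr and Cr in the usual way; the component values below are illustrative, not from a specific reference design:

```python
import math

# Series resonant frequency of the Lr-Cr tank:
#   fR = 1 / (2 * pi * sqrt(Lr * Cr))
# Component values are illustrative.

def resonant_freq_hz(lr_henry, cr_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(lr_henry * cr_farad))

print(round(resonant_freq_hz(60e-6, 24e-9) / 1e3, 1))  # ~132.6 kHz
```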

By shaping the current waveform, the resonant tank circuit causes S1 and S2 to turn on at 0 V (zero voltage switching) and turn off at 0 A (zero current switching). The resultant soft switching increases efficiency, decreases energy losses, reduces stress on power systems, and eliminates voltage and current spikes that cause electromagnetic interference (EMI). Soft switching also enables LLC resonant converters to handle a wide range of input and output voltages.

T1 provides input/output isolation. Electrically isolating the input and output circuits prevents ground loops, minimizes interference, and keeps voltage fluctuations or transients from propagating between the two sides. After T1 scales the voltage up or down, the rectifier (D1, D2, and CO) converts the sine wave into a stable DC output.

How LLC solutions support high power density 

LLC resonant converters support the growing demand for higher-power-density solutions. Since these converters operate at high switching frequencies while maintaining high efficiency, designers can integrate smaller and lighter transformers and inductors into the LLC package.

Integrating Lr and T1 into a single magnetic unit increases the converter’s power density and circuit efficiency. For EV designers, the size, weight, and cost savings gained make it possible to incorporate more functionality into limited spaces. Optimizing the T1 winding and core structure allows the converter to operate within thermal limits.

Strategically and selectively integrating protection features and intelligent control capabilities into analog controllers reduces system complexity while maintaining performance. Using LLC converters allows manufacturers to move beyond the basics toward adaptive power systems and advanced control methods.

Input power proportional control (IPPC) represents a growing focus on the intelligent power management available through LLC resonant circuits.

As shown in Figure 2, IPPC widens the control range of an LLC converter by modulating the switching frequency and comparing the input power to a control signal. By regulating the output voltage and current, the feedback loop directly controls the converter’s input power.

Figure 2 Simplified application schematic for the Texas Instruments UCC25661-Q1 LLC controller implementing the IPPC scheme. Source: Texas Instruments

Because the control signal is proportional to the input power, limiting the signal’s range also limits the converter’s power output. As a result, the control signal works as a load monitor regardless of any variations in the resonant converter’s output voltage, preventing unwanted system shutdowns while also protecting valuable system components.

LLC applications for LEVs

Light electric vehicles (LEVs) include mopeds, scooters, bikes, and golf carts. Adopting the LLC topology for onboard and external DC/DC converters in an LEV improves the charger efficiency within the battery power and voltage ranges, regardless of the charging architecture. Using an LLC resonant converter also supports the high-power density and efficiency requirements of an LEV while reducing EMI and noise.

When compared to traditional flyback converters and parallel-resonant converters, LLC converters offer specific advantages for LEVs.

One advantage is that LLC converters operate over the wide input and output voltage ranges that match LEV charging requirements. Wide-output LLC converters with IPPC work well for LEVs by supporting constant-current and constant-voltage charging.

Instead of going into burst mode with a low battery voltage, the converter maintains the operating mode and minimizes ripple into the battery. The stable operating mode shortens the time needed to charge the battery and extends battery life.

LLC applications for PHEVs and EVs

Plug-in hybrid electric vehicle (PHEV) and EV architectures can use LLC resonant circuits for the DC/DC converter, BMS, onboard charger (OBC), and traction inverter subsystems. Along with the high efficiency established through zero-voltage switching, resonant converters provide high power density and decreased switching losses.

Figure 3 is a block diagram of a DC/DC converter combined with an active power factor correction (PFC) circuit. The PFC brings the input current and voltage waveforms in phase and increases the system efficiency.

After applying AC power to the input of the PFC stage, the boost voltage from the PFC combines with the filtered voltage at the DC-link capacitor and becomes the input for the DC/DC converter.

Figure 3 DC/DC converter block diagram. Source: Texas Instruments

EV battery management systems monitor and control state of charge, state of health, and residual capacity to maintain the safe operating range of the battery cells.

Within these broad functions, the BMS monitors the voltage, current, and temperature of the batteries and protects against deep discharge, overcharging, overheating, and overcurrent conditions. The cell balancing function of a BMS ensures that each cell in a battery pack has a uniform charging and discharging rate.

Resonant converters provide precise energy management, scalability, and the isolated power needed for a BMS as represented in Figure 4. As EVs incorporate more loads, the power requirements for high- to low-voltage conversion increase and require a higher power density.

Figure 4 An isolated DC/DC converter isolates the high-voltage battery from the low-voltage battery. Source: Texas Instruments

The LLC topology is a good fit for OBC applications because it addresses the need to adapt the output voltage according to the battery’s charging voltage range. LLC resonant converters simply adjust the voltage with the switching frequency.

While traction inverters convert energy stored in the battery into instantaneous multiphase AC power to drive traction motors, LLC resonant circuits operate within the subsystems that support inverters. These subsystems provide input power protection, signal isolation, isolated and non-isolated DC/DC power supplies, and current and voltage sensing.

LLC innovation for LEVs, PHEVs, and EVs

The high efficiency and compact size of LLC resonant circuit modules maximize vehicle range while cutting costs. LLC converters do have limitations, however, when meeting the output capacity requirements of evolving technologies. Limited power capacity and performance degradation under dynamic conditions require a different approach.

As next-generation zone EV architectures become standard, newer PHEVs and EVs will rely on multiple LLC converters distributed throughout the vehicle within the zone control module to optimize distribution, preserve output power stability, and deliver higher power capacity.

New EVs will also use bidirectional DC/DC LLC converters to connect the high-voltage battery with a low-voltage supply, improve charger efficiency, facilitate charging from and discharging to the grid, and reduce space and costs.

Other improvements include producing LLC resonant converters with two transformers and integrating lightweight planar transformers into the tank circuits.

Dual-transformer converters may provide wider-range output voltages while maintaining high efficiency during charging. Using planar transformers in converters reduces the weight and size of converter modules.

EV consumer acceptance

Each innovation represents another step toward widespread consumer acceptance of EVs. In turn, EV adoption reduces greenhouse gas emissions, improves local air quality, and reduces the impact on human health.

Andrew Plummer is a product marketing engineer in Texas Instruments’ high-voltage power business. He focuses on growing the automotive, energy infrastructure, and aerospace and defense sectors. He graduated with a bachelor’s degree in electrical engineering from the University of Florida.

Related Content

The post How LLCs unlock innovation in automotive electronics appeared first on EDN.

Two-wire precision current source with wide current range

Wed, 12/10/2025 - 15:00

An interesting analog design problem is the precision current source. Many good designs are available, but most are the three-wire types that can be used as a positive (see Figure 1) or a negative (Figure 2) polarity source, but not both from the same circuit.

Figure 1 A typical three-wire [power supply (PS), ground, and load] precision positive current source offering an accuracy of 1% or better. The output current is 1.24/R1.

Figure 2 A typical three-wire negative current source, or a current sink.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Two-wire designs exist and have the advantage of being able to serve in either polarity connection. Some of them are simple and cheap but somewhat limited in terms of performance. See Figure 3 for a classic example.

Figure 3 A textbook classic two-wire current source/sink where 40V > (V+ – V-) > 4.5V.

Amazingly, this oldie but goodie comprises just two commodity components, one of which is the single resistor that programs it (R1). Its main limitation is a (conservative) 10 mA minimum output current:

Output current = 1.25 V/R1 (10 mA ≤ Io ≤ 1.5 A)
Accuracy (assuming perfect R1) = ±2%

A recently published high-performance, ingenious, and elegantly simple Design Idea (DI) for a two-wire source comes from frequent contributor Christopher Paul; his circuit significantly extends the precision and voltage compliance of the genre. See it at: “A precision, voltage-compliant current source.”

Meanwhile, my effort is shown in Figure 4. This design takes a different approach to the two-wire topology that allows more than a 1000:1 ratio between maximum and minimum programmed output. It boasts uncompromised precision over the full range.

Here’s how it works.

Figure 4 A two-wire source/sink with 1% or better precision over > 1000:1 output range.

The precision 1.24-V reference Z1 is the heart of the circuit. Start-up resistor R6 provides it with a microamp-level trickle of current on power-up. That’s not much, but all it needs to do is squeak out more than A1’s 100 µV of input offset.

Then, A1’s positive feedback from R2 will take over to regeneratively provide Z1 with the required 80 µA of bias through R5. At this point, the A1 pin 5 will stabilize at 1.240 V, and R3 will pass 10 µA.

That will pull A2 pin 3 positive and coax A2’s to turn on pass transistor Q1. The Io current, passed by Q1, will develop output-proportional negative feedback across R1. 

This will sink 10 µA (1.24 V/R4) through R4, nulling and balancing A2’s non-inverting input and its output. This sets Io to 1.24 V/R1 + 10 µA.
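The programming relationship is easy to work in both directions; a small sketch based on the Io = 1.24 V/R1 + 10 µA expression above:

```python
# Output current of the Figure 4 source: Io = 1.24 V / R1 + 10 uA,
# and the inverse for picking R1 given a target Io.

V_REF = 1.24   # Z1 reference, volts
I_FB = 10e-6   # R4 feedback current, amps

def io_amps(r1_ohms):
    return V_REF / r1_ohms + I_FB

def r1_for_io(io_target_a):
    return V_REF / (io_target_a - I_FB)

print(round(io_amps(124.0) * 1e3, 2))  # mA for R1 = 124 ohms
print(round(r1_for_io(0.100), 1))      # ohms for a 100-mA output
```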

Damping resistors R8 and R9, together with compensation network R7C1, suppress oscillation and other naughty behavior, generally encouraging docile niceness.

The minimum programmable Io budget consists of op-amp current draw (500 µA max) + Z1 bias (82 µA max) + R4 feedback (10.1 µA max) + R6 trickle (4 µA max) = 596 µA. The maximum Io is limited by A2’s output capability and Q1’s safe operating area; 1 A is a conservative ceiling.

Although both op-amp and Q1 are rated for 36 V, don’t overheat them with more voltage than the load compliance requires. Even then, for output in the ampere range, Q1 will definitely need a robust heatsink, and R1 and R8 need to be big and fat.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Two-wire precision current source with wide current range appeared first on EDN.

Inductive sensor simplifies position sensing

Tue, 12/09/2025 - 21:32

Melexis introduces a 16-bit inductive sensor for robotics, industrial, and mobility applications. The new dual-input MLX90514 delivers absolute position with high noise immunity through its SSI output protocol.

Applications such as robotic joint control, industrial motor commutation, and e-mobility drive systems require sensors with high-precision feedback, compact form factor, low-latency response, resilience to environmental stressors, and seamless integration, Melexis explained.

These sensors also need to overcome the limitations of optical encoders and magnetic-based solutions, the company said, which the MLX90514 addresses with its contactless inductive measurement principle. This results in stable and repeatable performance in demanding operational environments.

The inductive measurement principle operates without a permanent magnet, reducing design effort, and the sensor’s PCB-based coil design simplifies assembly and reduces costs, Melexis added.

In addition, the sensor is robust against dust, mechanical wear, and contamination compared with optical solutions. The operating temperature ranges from ‐40°C to 160°C.

Melexis' MLX90514 inductive sensor. (Source: Melexis)

The inductive sensor’s dual-input architecture simultaneously processes signals from two coil sets to compute vernier angles on-chip. This architecture consolidates position sensing in a single IC, reducing external circuitry and enhancing accuracy.
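The on-chip vernier computation follows the classic nonius principle: two tracks whose period counts differ by one yield an unambiguous absolute angle. The sketch below assumes ideal 16- and 15-period tracks; the MLX90514’s actual period counts and refinement steps are internal details.

```python
import math

# Vernier (nonius) angle recovery from two periodic tracks whose
# period counts differ by one. Period counts are assumptions; the
# MLX90514 performs this computation on-chip.

def vernier_angle(theta1, theta2, n1):
    """Absolute mechanical angle (radians) from two electrical angles.

    theta1: electrical angle of the n1-period track
    theta2: electrical angle of the (n1 - 1)-period track
    """
    two_pi = 2.0 * math.pi
    coarse = (theta1 - theta2) % two_pi          # coarse absolute angle
    k = round((n1 * coarse - theta1) / two_pi)   # which period we are in
    return (theta1 + two_pi * k) / n1            # fine absolute angle

# Round-trip check at an arbitrary mechanical angle:
phi = 1.2345
t1 = (16 * phi) % (2 * math.pi)
t2 = (15 * phi) % (2 * math.pi)
print(round(vernier_angle(t1, t2, 16), 4))  # ~1.2345
```

The coarse difference angle alone is noisy in practice; refining it against the fine track, as above, is what recovers full resolution.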

The MLX90514 also ensures synchronized measurements, and supports enhanced functional safety for applications such as measuring both input and output positions of a motor in e-bikes, motorcycles, or robotic joints, according to the company.

Targeting both absolute rotary and linear motion sensing, the MLX90514 provides up to 16-bit position resolution and zero-latency measurement with accuracy better than 0.1° for precise control in demanding environments. Other features include a 5-V supply voltage, coil diameters ranging from 20 mm to 300 mm, and linear displacements of up to 400 mm.

The inductive sensor offers plug-and-play configuration. Engineers only need to program a limited set of parameters, specifically offset compensation and linearization, for faster system integration.

The MLX90514 inductive sensor interface IC is available now. Melexis offers a free online Inductive Simulator tool, which allows users to design and optimize coil layouts for the MLX90514.

The post Inductive sensor simplifies position sensing appeared first on EDN.

Haptic drivers target automotive interfaces

Tue, 12/09/2025 - 21:22

Cirrus Logic Inc. is bringing its expertise in consumer and smartphone haptics to automotive applications with its new family of closed-loop haptic drivers. The CS40L51, CS40L52, and CS40L53 devices mark the company’s first haptic driver solutions that are reliability qualified to the AEC-Q100 automotive standard.

Cirrus Logic's CS40L51, CS40L52, and CS40L53 haptic drivers. (Source: Cirrus Logic Inc.)

The CS40L51/52/53 devices integrate a high-performance 15-V Class D amplifier, boost converter, voltage and current monitoring ADCs, waveform memory, and a Halo Core DSP. In addition to simplifying design, the family’s real-time closed-loop control improves actuator response and expands the frequency bandwidth for improved haptic effects, while the proprietary algorithms dynamically adjust actuator performance, delivering precise and high-definition haptic feedback.

The CS40L51/52/53 haptic drivers are designed to deliver a more immersive and intuitive tactile user experience in a range of in-cabin interfaces, including interactive displays, steering wheels, smart surfaces, center consoles, and smart seats. These devices can operate across varying conditions.

The advanced closed-loop hardware and algorithm improve the response time of the actuator and expand usable frequency bandwidth to create a wider range of haptic effects. Here’s the lineup:

  • The CS40L51 offers advanced sensor-less velocity control (SVC), real-time audio to haptics synchronization (A2H), and active vibration compensation (AVC) for immersive, high-fidelity feedback.
  • The CS40L52 features the advanced closed-loop control SVC that optimizes system haptic performance in real time by reducing response time, increasing frequency bandwidth, and compensating for manufacturing tolerances and temperature variation.
  • The CS40L53 provides a click compensation algorithm to enable consistent haptic feedback across systems by adjusting haptic effects based on the actuator manufacturing characteristics.

The haptic drivers are housed in a 34-pin wettable flank QFN package. They are AEC-Q100 Grade-2 qualified for automotive applications, with an operating temperature from –40°C to 105°C.

Engineering samples of CS40L51, CS40L52, and CS40L53 are available now. Mass production will start in December 2025. Visit www.cirrus.com/automotive-haptics for more information, including data sheets and application materials.

The post Haptic drivers target automotive interfaces appeared first on EDN.

Nordic adds expansion board for nRF54L series development

Tue, 12/09/2025 - 21:08

Nordic Semiconductor introduces the nRF7002 expansion board II (nRF7002 EBII), bolstering the development options for its nRF54L Series multiprotocol system-on-chips (SoCs). The nRF7002 EBII plug-in board adds Wi-Fi 6 capabilities to the nRF54L Series development kits (DKs).

Based on Nordic’s nRF7002 Wi-Fi companion IC, the nRF7002 EBII lets developers using the nRF54L Series multiprotocol SoCs leverage the benefits of Wi-Fi 6, including power-efficiency improvements for battery-powered operation and management of large IoT networks. Target applications include smart home and Matter-enabled devices, industrial sensors, smart city infrastructure, wearables, and medical devices.

Nordic Semiconductor's nRF7002 expansion board II. (Source: Nordic Semiconductor)

The nRF7002 EBII provides seamless hardware and software integration with the nRF54L15 and nRF54LM20 development kits. The EBII supports dual-band Wi-Fi (2.4 GHz and 5 GHz) and advanced Wi-Fi 6 features such as target wake time (TWT), OFDMA, and BSS Coloring, enabling interference-free, battery-powered operation, Nordic said.

The new board also features a dual-band chip antenna for robust connectivity across Wi-Fi bands. The onboard nRF7002 companion IC offers Wi-Fi 6 compliance, as well as backward compatibility with 802.11a/b/g/n/ac Wi-Fi standards. It supports both STA and SoftAP operation modes.

The nRF7002 EBII can be easily integrated with the nRF54L Series development kits via a dedicated expansion header. Developers can use SPI or QSPI interfaces for host communication and use integrated headers for power profiling, making the board suited for energy-constrained designs.

The nRF7002 EBII will be available through Nordic’s distribution network in the first quarter of 2026. It is fully supported in the nRF Connect SDK, Nordic’s software development kit.

The post Nordic adds expansion board for nRF54L series development appeared first on EDN.

Developing high-performance compute and storage systems

Tue, 12/09/2025 - 15:00

Peripheral Component Interconnect Express, or PCI Express (PCIe), is a widely used bus interconnect interface, found in servers and, increasingly, as a storage and GPU interconnect solution. The first version of PCIe was introduced in 2003 by the Peripheral Component Interconnect Special Interest Group (PCI-SIG) as PCIe Gen 1. It was created to replace the original parallel communications bus, PCI.

With PCI, data is transmitted simultaneously across many wires. The host and all connected devices share the same signals for communication; thus, multiple devices share a common set of address, data, and control lines and vie for the same bandwidth.

With PCIe, however, communication is serial and point-to-point, with data sent over dedicated lines to each device, enabling higher bandwidth and faster data transfer. Signals travel over connections known as lanes, each made up of two differential pairs—one for transmitting data, the other for receiving it. Many systems use 16 lanes, but PCIe is scalable, allowing 64 lanes or more in a system.

The PCIe standard continues to evolve, doubling the data transfer rate per generation. The latest is PCIe Gen 7, with 128 GT/s per lane in each direction, meeting the needs of data-intensive applications such as hyperscale data centers, high-performance computing (HPC), AI/ML, and cloud computing.

PCIe clocking

Another differentiating factor between PCI and PCIe is the clocking. PCI uses LVCMOS clocks, whereas PCIe uses differential high-speed current-steering logic (HCSL) and low-power HCSL clocks. These are configured with the spread-spectrum clocking (SSC) scheme.

SSC works by spreading the clock energy across a band of frequencies rather than concentrating it at a single peak frequency. Spreading the energy lowers the large electromagnetic spikes at specific frequencies that could cause electromagnetic interference (EMI) in other components, which in turn protects signal integrity.

On the flip side, SSC introduces jitter (timing variations in the clock signal) because the clock is frequency-modulated in this scheme. To preserve signal integrity, the PCIe specification allots each device a defined jitter budget.

In PCIe systems, there’s a reference clock (REFCLK), typically a 100-MHz HCSL clock, as a common timing reference for all devices on the bus. In the SSC scheme, the REFCLK signal is modulated with a low-frequency (30- to 33-kHz) sinusoidal signal.

To best balance design performance, complexity, and cost, but also to ensure best functionality, different PCIe clocking architectures are used. These are common clock (CC), common clock with spread (CCS), separate reference clock with no spread (SRNS), and separate reference clock with independent spread (SRIS). Both CC and separate reference architectures can use SSC but with a differing amount of modulation (for the spread).

Each clocking scheme has its own advantages and disadvantages when it comes to jitter and transmission complexity. CC is the simplest and cheapest option to use in designs. Here, the transmitter and receiver share the same REFCLK, which each device's PLL multiplies to create the high-speed clock signals needed for data transmission. With the separate clocking schemes, each PCIe endpoint and root complex has its own independent clock source.

PCIe switches

To manage the data traffic on the lanes between different components within a system, PCIe switches are used. They allow multiple PCIe devices, from network and storage cards to graphics and sound cards, to communicate with one another and the CPU simultaneously, optimizing system performance. In cloud computing and AI data centers, PCIe switches connect multiple NICs, GPUs, CPUs, NPUs, and other processors in the servers, all of which require a robust and improved PCIe infrastructure.

PCIe switches play a key role in the next-generation open, hyperscale data center specifications now being developed by a growing worldwide contingent of contributors. This is particularly needed with the advent of ML- and AI-centric data centers, underpinned by HPC systems. PCIe switches are also instrumental in many industrial setups, wired networking, communications systems, and anywhere many high-speed devices must be connected and their data traffic managed effectively.

A well-known brand of PCIe switches is the Switchtec family from Microchip Technology. The Switchtec PCIe switch IP manages the data flow and peer-to-peer transfers between ports, providing flexibility, scalability, and configurability in connecting multiple devices.

The Switchtec Gen 5.0 PCIe high-performance product lineup delivers very low system latency, as well as advanced diagnostics and debugging tools for troubleshooting and fast product development, making it highly suitable for next-generation data center, ML, automotive, communications, defense, and industrial applications, as well as other sectors. Tier 1 data center providers are relying on Switchtec PCIe switches to enable highly flexible compute and storage rack architectures.

Reference design and evaluation kit for PCIe switches

To enable rapid, PCIe-based system development, Microchip has created a validation board reference design, shown in Figure 1, using the Switchtec Gen 5 PCIe Switch Evaluation Kit, known as the PM52100-KIT.

Figure 1: Microchip’s Switchtec Gen 5 PCIe switch reference design validation board (Source: Microchip Technology Inc.)

The reference design helps developers implement the Switchtec Gen 5 PCIe switch into their own systems. The guidelines show designers how to connect and configure the switch and how to reach the best balance for signal integrity and power, as well as meet other critical design aspects of their application.

Fully tested and validated, the reference design streamlines development and speeds time to market. The solution optimizes performance, costs, and board footprint and reduces design risk, enabling fast market entry with a finished product. See the solutions diagram in Figure 2.

Figure 2: Switchtec Gen 5 PCIe solutions setup with other key devices (Source: Microchip Technology Inc.)

As with all Switchtec PCIe switch designs, full access to the Microchip ChipLink diagnostics tool is included, which allows parameter configuration, functional debug, and signal integrity analysis.

As per all PCIe integrations, clock and timing are important aspects of the design. Clock solutions must be highly reliable for demanding end applications, and the Microchip reference design includes complete PCIe Gen 1–5 timing solutions that include the clock generators, buffers, oscillators, and crystals.

Microchip’s ClockWorks Configurator and product selection tool allow easy customization of the timing devices for any application. The tool is used to configure oscillators and clock generators with specific frequencies, among other parameters, for use within the reference design.

The Microchip PM52100-KIT

For firsthand experience of the Switchtec Gen 5 PCIe switch, Microchip provides the PM52100-KIT (Figure 3), a physical board with 52 ports. The kit enables users to experiment with and evaluate the Switchtec family of Gen 5 PCIe switches in real-life projects. The kit was built with the guidance provided by the Microchip reference design.

Figure 3: The Microchip PM52100-KIT (Source: Microchip Technology Inc.)

The kit contains an evaluation board with the necessary firmware and cables. Users can download the ChipLink diagnostic tool by requesting access via a myMicrochip account.

With the ChipLink GUI, which is suitable for Windows, Mac, or Linux systems, the board’s hardware functions can easily be accessed and system status information monitored. It also allows access to the registers in the PCIe switch and configuration of the high-speed analog settings for signal integrity evaluation. The ChipLink diagnostic tool features advanced debug capabilities that will simplify overall system development.

The kit operates with a PCIe host and supports the connection of multiple host entities to multiple endpoint devices.

The PCIe interface contains an edge connector for linking to the host, several PCIe Amphenol Mini Cool Edge I/O connectors to connect the host to endpoints, and connectors for add-in cards. The 0.60-mm Amphenol connector supports high-speed signaling up to Gen 6 PCIe (64-GT/s PAM4), as well as Gen 5 and Gen 4 PCIe. The connector maintains signal integrity, as its design minimizes signal loss and reflections at higher frequencies.

The board’s PCIe clock interface supports a common reference clock (with or without SSC), SRNS, and SRIS. In the common-clock scheme, a single stable, low-jitter clock is shared by both endpoints. The second most common clocking scheme is SRNS, where an independent clock is supplied to each end of the PCIe link; this, too, is supported by the Microchip kit.

Among the kit’s peripherals are two-wire (TWI)/SMBus interfaces; TWI bus access and connectivity to the temperature sensor, fan controller, voltage monitor, GPIO, and TWI expanders; SEEPROM for storage and PCIe switch configuration; and 100 M/GE Ethernet. The kit also includes GPIOs for TWI, SPI, SGPIO, Ethernet, and UART interfaces. There is UART access with a USB Type-B and three-pin connector header.

The included PSX Software Development Kit (integrating GHS’s MULTI development environment) enables the development and testing of the custom PCIe switch functionalities. An EJTAG debugger supports test and debug of custom PSX firmware; a 14-pin EJTAG connector header allows PSX probe connectivity.

Switchtec Gen 5 52-lane PCIe switch reference design

Microchip also offers a Switchtec 52-lane Gen 5 PCIe Switch Reference Design (Figure 4). As with the other reference design, it is fully validated and tested, and it provides the components and tools necessary to thoroughly assess and integrate this design into your systems.

This board includes a 32-bit microcontroller (ATSAME54, based on the Arm Cortex-M4 processor with a floating-point unit) to be configured with Microchip’s MPLAB Harmony software, as well as a CEC1736 root-of-trust controller. The CEC1736 is a 96-MHz Arm Cortex-M4F controller that is used to detect and provide protection for the PCIe system against failure or malicious attacks.

Figure 4: Switchtec Gen 5 52-lane PCIe switch reference design board (Source: Microchip Technology Inc.)

Microchip and the PCIe standard

Microchip continues to be actively involved in the advancement of the PCIe standard, and it regularly participates in PCI-SIG compliance and interoperability events. With its turnkey PCIe reference designs and field-proven interoperable solutions, a high-performance design can be streamlined and brought to market very quickly.

To view the full details of this reference design, bill of materials, and to download the design files, visit https://www.microchip.com/en-us/tools-resources/reference-designs/switchtec-gen-5-pcie-switch-reference-design.

The post Developing high-performance compute and storage systems appeared first on EDN.

A digital filter system (DFS), Part 2

Tue, 12/09/2025 - 15:00

Editor’s note: In this Design Idea (DI), contributor Damian Bonicatto designs a digital filter system (DFS). This is a benchtop filtering system that can apply various filter types to an incoming signal. The filtering range is up to 120 kHz.

In Part 1 of this DI, the DFS’s function and hardware implementation are discussed.

In Part 2 of this DI, the DFS’s firmware and performance are discussed.

Firmware

The firmware, although a bit large, is not too complicated. It is broken into six files to make browsing the code a bit easier. Let’s start with the code for the LCD screen and its attached touch detector.

Wow the engineering world with your unique design: Design Ideas Submission Guide

LCD and touchscreen code

Here might be a good place to show the LCD screens, as they will be discussed below.

The LCD screen in the DFS. From the top left: Splash Screen, Main Screen, Filter Selection Screen, Cutoff or Center Frequency Screen, Digital Gain Screen, About Screen, and Running Screen.

A quick discussion of the screens will give you a good overview of the operation and capabilities of the DFS. The first screen is the opening or Splash Screen—not much here to see, but there is a battery charge icon on the upper right.

Touching this screen anywhere will move you to the Main Screen. The Main Screen shows you the current settings and allows you to jump to the appropriate screen to adjust the filter type, cutoff, or center frequency of the filter, and the digital gain. It also has the “RUN” button that starts the filtering function based on the current settings.

If you touch the “Change Filter Type” on the Main Screen, you will jump to the Filter Selection Screen. This lets you select the filter type you would like to use and whether you want 2-pole or 4-pole filtering. (Selecting a 2-pole filter will give you a roll-off of 12 dB per octave, or 40 dB per decade, while the 4-pole will provide a roll-off of 24 dB per octave, or 80 dB per decade.) When you touch the “APPLY” button, your settings (in green) will be applied to your filter, and you will jump back to the Main Screen.

From the Main Screen, you can also change the cutoff/center frequency; any frequency up to 120 kHz can be set. Hitting “Enter” will bring you back to the Main Screen.

If you want to change the digital gain (the gain applied when the filter is run on a sample, between the ADC and the DAC), touch “Change Output Gain” on the Main Screen. The range for this gain is from 0.01 to 100. Again, “Enter” will take you back to the Main Screen.

On the Main Screen, selecting “About” will take you to a screen showing lots of interesting information, such as your filter settings, your current filter’s coefficients, battery voltage, and charge level. (If you do not have a battery installed, you may see fully charged numbers, as the circuit is reading the charger voltage. There is a #define at the top of the main code you can set to false if you don’t have a battery installed, and this line won’t show.) The last item shown is the incoming USB voltage.

On the Main Screen, hitting “Run” starts the system: it takes in your signal at the input BNC, filters it, and sends it out the output BNC. The Running Screen also shows you the current filter settings.

Now, let’s take a quick look at the mechanics of creating the screens. The LCD uses a few libraries to assist in the display of primitive shapes and colors, text, and to provide some functions for reading the selected touchscreen position. When opening the Arduino code, you will find a file labeled “TFT_for_adjustable_filter_7.ino”. This file contains the code for most of the screen displays and touchscreen functions. The code is too long to list here, but here are a few lines of code setting the background color and displaying the letters “DFS”:

tft.fillScreen(tft.color565(0xe0, 0xe0, 0xe0)); // Grey
tft.setFont(&FreeSansBoldOblique50pt7b);
tft.setTextColor(ILI9341_RED);
tft.setTextSize(1);
tft.setCursor(13, 100);
tft.print("DFS");

A lot of the functions look like this, as they are used to present the data and create the keypads for filter selection, frequency setting, and digital gain setting. In this file are also the functions for monitoring and logically connecting the touch to the appropriate function.

The touchscreen, although included on the LCD, is essentially independent from it. One issue that needed to be solved was that the alignment of the touchscreen over the LCD screen is not accurate, and pixel (0,0) is not aligned with the touch position (0,0) on the touchscreen.

Also, the LCD has 240×320 pixels, and the touch screen has, roughly, 3900×900 positions. So there needs to be some calibration and then a conversion function to convert the touch point to the LCD’s pixel coordinates. You’ll see in the code that, after calibration, the conversion function is very easy using the Arduino map() function.

Other files

There is also a file that contains the initialization code for the ADC. This sets up the sample rate, resolution, and other needed items. Note that this is run at the start of the code, but also when changing the number of poles in the filter. This is due to the need to change the sample rate. Another file contains the code to initialize the DAC, and is also called when the program is started.

Also included is a file with code to pull in factory calibration data for linearizing the ADC. This calibration data is created at the factory and is stored in the microcontroller (micro)—nice feature.

Filtering code

Now to the more interesting code. There is code to route you to the screens and their selections, but let’s skip that and look at how the filtering is accomplished. After you have selected a filter type (low-pass, high-pass, band-pass, or band-stop), the number of poles of the filter (2 or 4), the cutoff or center frequency of the filter, or adjusted the internal digital output gain, the code to calculate the Butterworth IIR filter coefficients is called.

These coefficients are calculated for a direct form II filter algorithm. The coefficients for the filters are floats, which allow the system to create filters accurately throughout a 1 Hz to 120 kHz range. After the coefficients are calculated, the filter can be run. When you touch the “RUN” button, the code begins the execution of the filter.

The first thing that happens in the filter routine is that the ADC is started in a free-running mode. That means it initiates a new ADC reading, waits for the reading to complete, signals (through a register bit) that there is a new reading available, and then starts the cycle again. So, the code must monitor this register bit, and when it is set, the code must quickly get the new ADC reading and filter it, and then send that filtered value to the DAC for output.

After this, it checks for clipping in the ADC or DAC. The input to the DFS is AC-coupled. Then, when it enters the analog front-end (AFE), it is recentered at about 1.65 V. If any ADC sample gets too close to 0 or too close to 4095 (it’s a 12-bit ADC), it is flagged as clipping, and the front panel input clipping LED will light for ~ 0.5 seconds. This could happen if the input signal is too large or the input gain dial has been turned up too far.

Similarly, the DAC output is checked to see if it gets too close to 0 or to 2047 (we’ll explain why the DAC is 11 bits in a moment). If it is, it is flagged as clipping, and the output clipping LED will light for ~0.5 seconds. Clipping of the DAC could happen because the digital gain has been set too high. Note that the output signal could be set too high, using the output gain dial, and it could be clipping the signal, but this would not be detected as the amplified analog back-end (ABE) signal is not monitored in the micro.

Now to why the DAC is 11 bits. In the micro, it is actually a 12-bit DAC, but in an effort to get the sample rate as fast as possible, I discovered that the DAC had an unspec’d slew rate limit of somewhere around 1 volt/microsecond.

To work around this, I divide the signal being passed to the DAC by two so the DAC’s output voltage doesn’t have to slew so far. This is compensated for in the ABE, as the output gain adjust actually spans 2 to 10 but is represented as 1 to 5. [To those of you who are questioning why I didn’t set the reference to 1.65 V (instead of 3.3 V) and use the full 12 bits, the answer is this micro has a couple of issues in its analog reference system that precluded using 1.65 V.]

One last task in the filter “running” loop is to check if “Stop” has been pressed. This is a simple input bit read, so it only takes a few cycles.

A couple of notes on filtering: You may have noticed that there is an extra filter you can select – the “Pass-Thru”. It takes in the ADC reading, “filters” it by simply multiplying the sample by the selected digital gain, and then outputs it to the DAC. I find this useful for checking the system and setup.

Another note on filtering is that you will see, in the code, that a gain offset is added when filtering. The reason for this is that, when using a high-pass or band-pass filter, the DC level is removed from the samples. We need to recenter the result at around 2048 counts. But we can’t just add 2048, as the digital gain also comes into play. You’ll see this compensation calculated using the gain and applied in the filtering routines.

Performance

I managed to get the ADC and DAC to work at 923,645 samples per second (sps) when using any of the 2-pole filters. This is a Nyquist frequency around 462 kHz. Since the selectable upper rate for a filter is 120 kHz, we get an oversample rate of almost 4. For 4-pole filters, the system runs at 795,250 sps, for an oversample rate of around 3.3.

The 4-pole filters are slower as they are created by cascading two 2-pole filters. [To check these sample rates and see if the loop has enough time to complete before the following sample is read, I toggle the D4 digital output as I pass through the loop. I then monitor this on a scope.]

The noise floor of the output signal is fairly good, but as I built this on a protoboard with no ground planes, I think it could be lower if this were built on a PCB with a good ground plane. The limiting factor on my particular board, though, is the spurious free dynamic range (SFDR).

I do get harmonic spurs at various frequencies, but I can get up to a 63 dB SFDR. This is not far from the SFDR spec of the ADC. I did notice that the amplitude of these harmonic spurs changed when I moved cables inside the enclosure, so good cable management may improve this.

Use of AI

Recently, I’ve been using AI in design, and I like to include quick information describing if AI helped in the design—here’s how it worked out. The code to initialize the ADC was quite difficult, mostly because there was a lot of misinformation about this online.

I used Microsoft Copilot and ChatGPT to assist. Both fell victim to some of the online misinformation, but by pointing out some of the errors, I got these AI systems close enough that I could complete the ADC initialization myself.

To design the battery charge indicator on the splash screen, I used the same AI. It was a pure waste of time as it produced something that didn’t look like a battery – it was easier and faster to design it myself.

The code for the filter coefficients was difficult to track down online, and ChatGPT did a much better job of searching than Google.

For circuit design, I tried to use AI for the Sallen-Key filter designs, but it was a failure, just a waste of time. Stick to the IC supplier tools for filter design.

I also tried to use AI to design a nice splash screen, but I wasted more time iterating with the chatbot than it would have taken to design the screen myself.

I also tried to get AI to design an enclosure with a viewable LCD, BNC connectors, etc. To be honest, I didn’t give it a lot of chances, but I couldn’t get it to stop focusing on a flat rectangular box.

Modifications

The code is actually easy to read, and you may want to modify it to add features (like a circular buffer sample delay) or modify things like the filter coefficient calculations. I think you’ll find this DFS has a number of uses in your lab/shop.

The schematic, code, 3D print files, links to various parts, and more information and notes on the design and construction can be downloaded at: https://makerworld.com/en/@user_1242957023/upload

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content

The post A digital filter system (DFS), Part 2 appeared first on EDN.

A quick primer demystifies parabolic microphones

Tue, 12/09/2025 - 09:10

Parabolic microphones offer a fascinating blend of geometry and acoustics, enabling long-distance sound capture with surprising clarity. From dish-shaped reflectors to pinpoint audio focus, this quick primer distills the essentials for engineers and curious minds exploring high-gain directional solutions.

Parabolic microphones are specialized tools designed to capture sounds that are too faint for conventional microphones, or when precise directional focus is essential. By concentrating acoustic energy from a narrow field, a well-designed parabolic microphone can isolate a single sound source amid a noisy environment. These microphones are commonly used in nature recording, sports broadcasting, surveillance, and even drone detection.

Understanding parabolic microphones

Capturing audio for sound reinforcement, broadcast television, live performances, video production, or natural ambience requires choosing the right microphone for the job. Ideally, the sound source should be positioned close to the microphone to maximize signal strength and minimize interference from ambient noise or the microphone’s own self-noise. A signal-to-noise ratio (SNR) in the range of 40–60 dB is typically desirable. For distant or low-level sound sources, the microphone’s self-noise should be especially low—below 10 dBA is recommended.

In applications like sporting events, surveillance, and nature sound recording, getting close to the source is often impractical. Only a well-designed parabolic microphone can deliver suitable SNR under these conditions. Keep in mind that the audio signal level drops by 6 dB every time the distance to the source is doubled, and the farther you are, the more unwanted ambient sounds creep in.

Parabolic microphones address both challenges by combining a narrow polar angle with high forward gain. Much like a telephoto camera lens, a parabolic dish offers greater magnification at long distances—but with a tighter field of view.

At the central focal point of a portable parabolic dish, incoming sound waves converge with remarkable intensity. The concept of using a parabolic (dish-shaped) reflector to capture distant sounds has been around for decades, and with good reason.

Unlike conventional microphones, a parabolic reflector acts as a noiseless acoustic amplifier—boosting signal strength without adding electronic noise. Its frequency response and polar pattern are directly influenced by the size of the dish, with larger reflectors offering narrower pickup angles and extended low-frequency reach.

As shown below, the operating principle of a parabolic microphone is straightforward to visualize. Incoming sound waves reflect off the curved surface of the dish and converge at its focal point, where the microphone element captures and converts them into audio signals with enhanced directionality and gain.

Figure 1 A conceptual diagram of a parabolic microphone demonstrates its acoustic focusing mechanism and focal point geometry. Source: Author

Simply put, a parabolic dish collects sound energy from a broad area and concentrates it onto a single focal point, where a microphone is positioned. By focusing acoustic energy in this way, the dish acts as a mechanical amplifier, boosting signal strength without introducing electronic noise. This passive amplification enables the microphone to capture faint or distant sounds with greater clarity, making the system ideal for applications where electronic gain alone would be insufficient or too noisy.

As an aside, although the parabolic dish is designed to reflect sound waves toward its focal point, not all waves strike the microphone diaphragm at the same angle. For this reason, omnidirectional microphones are typically used; they maintain consistent sensitivity regardless of the direction of incoming sound.

This may seem counterintuitive at first: the parabolic system is highly directional, yet it employs an omnidirectional microphone. The key is that the dish provides directionality, while the mic simply needs to capture all focused energy arriving at the focal point.

Parabolic microphones: Buy or DIY?

A complete parabolic microphone system typically includes one or more microphone elements, a parabolic reflector, a mounting mechanism to position the microphone at the dish’s focal point, an optional preamplifier or headphone amplifier, and a way to handhold the assembly or mount it on a tripod.

If you are looking to acquire such a system, you have three main paths: you can purchase a fully assembled unit that includes the microphone element and, optionally, built-in amplification; you can opt for a nearly complete setup that provides the dish and mounting hardware but leaves the microphone and electronics up to you; or you can build your own from scratch, sourcing each component to suit your specific needs.

A parabolic microphone makes an excellent DIY project for several compelling reasons. The underlying principles are fascinating yet easy to grasp, and construction is not prohibitively complex. More than just a science experiment, the finished system can be genuinely useful—whether for nature recording, surveillance, or acoustic research.

Material costs are typically far lower than those of comparable commercial units, making DIY both practical and rewarding. For that reason, the next section highlights a few key design pointers to help you build your own parabolic microphone.

In addition to the all-important dish diameter, three other factors play a critical role when selecting a reflector for a parabolic microphone: the dish’s focal-length-to-diameter (f/D) ratio, the precision of its parabolic curvature, and the smoothness of its inner surface. Each of these influences how effectively the dish focuses sound and how cleanly the microphone captures it.
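
The f/D ratio follows from two tape-measure numbers. For a paraboloid, the focal length is f = D²/(16d), where D is the rim diameter and d the center depth; the short sketch below (my own, with illustrative dimensions) computes both.

```python
def focal_length_m(diameter_m: float, depth_m: float) -> float:
    """Focal length of a paraboloid from its rim diameter D and
    center depth d: f = D^2 / (16 * d)."""
    return diameter_m ** 2 / (16.0 * depth_m)


def f_over_d(diameter_m: float, depth_m: float) -> float:
    """The f/D ratio; below 0.25 (i.e., D > 4d) the focal point sits
    inside the rim, which helps shield the mic from off-axis sound
    and wind."""
    return focal_length_m(diameter_m, depth_m) / diameter_m


# Example: a 56-cm dish that is 12 cm deep (illustrative numbers).
print(focal_length_m(0.56, 0.12))  # focal point ~16 cm from the vertex
print(f_over_d(0.56, 0.12))
```

Knowing f tells you exactly where to mount the microphone element; knowing f/D tells you whether that mounting point is sheltered by the rim or exposed beyond it.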

Oh, this is just a personal suggestion, but I strongly recommend buying a professionally molded parabolic dish rather than attempting to make one from scratch. The reflector is the most critical part of the system, and its precision directly affects performance.

A quick pick is the plastic parabolic dish from Wildtronics, which offers reliable geometry and build quality for DIY use. My purpose in writing this note is not to endorse any particular parabolic dish, but simply to offer a practical pointer that may help others construct a working parabolic microphone with minimal frustration and cost.

Figure 2 The polycarbonate parabolic reflector is engineered for sound amplification. Source: Wildtronics

Once you have a reliable dish, the rest of the build becomes far more manageable. You can then add mechanical accessories such as mic mounts, handles, tripod brackets, and a suitable windshield to reduce wind noise and protect the microphone.

Vital components like an electret microphone element and supporting audio electronics—whether a simple preamplifier or a full recording interface—complete the setup. With the dish as your foundation, assembling a functional and effective parabolic microphone becomes a rewarding DIY process.

Acoustic sensor for DIY parabolic mic

To wrap up, here is a proven acoustic sensor design for a DIY parabolic microphone, built around the Primo EM272 electret condenser microphone.

Figure 3 Basic schematic of an acoustic sensor for a parabolic mic that is built around the Primo EM272 ECM. Source: Author

In an earlier prototype, I used a slightly tweaked prewired monaural audio amplifier module to process signals from this stage, and it performed exactly as expected.

Enjoyed this quick dive into parabolic microphones? With just enough theory to anchor the fundamentals and a few practice-ready tips to spark experimentation, this post is only the beginning. Whether you are a field recordist, audio tinkerer, or simply acoustically curious, there is more to explore.

Read the full post, and if it strikes a chord, I would value your perspective. Every signal adds clarity.

T.K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post A quick primer demystifies parabolic microphones appeared first on EDN.

Tapo or Kasa: Which TP-Link ecosystem best suits ya?

Mon, 12/08/2025 - 15:00

The “smart home” market segment is, I’ve deduced after many years of observing it (and, in a notable number of “guinea pig” cases, personally participating in it, with no shortage of scars to show for my experiments and experiences), tough for suppliers to enter, and particularly to remain active in, for any reasonable amount of time. I generally “bin” such companies into one of three “buckets”, ordered as follows in increasing “body counts”:

  • Those that end up succeeding long-term as standalone entities (case study: Wyze)
  • Those who end up getting acquired by larger entities (case studies: Blink, Eero, and Ring, all by Amazon, and Nest, by Google)
  • And the much larger list of those who end up fading away (one recent example: Neato Robotics’ robovacs), a demise often preceded by an interim transition of formerly free associated services to paid (one-time or, more commonly, subscription-based) successors as a last-gasp revenue-boosting move, a strategy that typically fails because customers instead bail and switch to competitors.

There’s one other category that also bears mentioning, which I’ll be highlighting today. It involves companies that remain relevant long-term by periodically culling portions, or the entireties, of product lines within the overall corporate portfolio when they become fiscally unpalatable to maintain. A mid-2020 writeup from yours truly showcased a case study of each: Netgear stopped updating dozens of router models’ firmware, leaving them vulnerable to future compromise, in favor of compelling customers to replace the devices with newer models (four-plus years later, I highlighted a conceptually similar example from Western Digital), and Best Buy dumped its Connect smart home device line in its entirety.

A Belkin debacle


Today’s “wall of shame” entry is Belkin. Founded more than 40 years ago, the company today remains a leading consumer electronics brand; just in the past month or so, I’ve bought some multi-port USB chargers, a couple of MagSafe Charging Docks, several RockStar USB-C and Lightning charging-plus-headphone adapters, and a couple of USB-C cables from them. Their Wemo smart plugs, on the other hand…for a long time, truth be told, I was a “frequent flyer” user and fan of ‘em. See, for example, these teardowns and hands-on evaluation articles:

A few years ago, however, my Wemo love affair started to fade. Researchers had uncovered a buffer overflow vulnerability in older units, widely reported in May 2023, that allowed for remote control and broader hacking. But Belkin decided not to fix the flaw, because affected devices were “at the end of their life and, as a result, the vulnerability will not be addressed”. (Whether the flaw could even have been fixed with just a firmware update, making Belkin’s decision purely a business one, or instead reflected a fundamental hardware flaw necessitating much more costly replacements of, and/or refunds for, affected devices was never made clear, to the best of my knowledge.)

Two-plus years later, and earlier this summer, Belkin effectively pulled the plug on the near-entirety of the Wemo product line by announcing the pending severing of devices’ tethers not only to the Wemo mobile app and associated Belkin server-side account and device-management facilities, in a very Insteon-reminiscent move, but also, in the process, the “cloud” link to Amazon’s Alexa partner services. Ironically, the only remaining viable members of the Wemo product line after January 31, 2026 will be a few newer products that are alternatively controllable via the Thread protocol. As I aptly noted within my 2024 CES coverage:

Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin Wemo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors. Therefore the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers.

Enter TP-Link

Clearly, it was time for me to look for a successor smart plug product line supplier and device series. Amazon was the first name that came to mind, but although its branded Smart Plug is highly rated, it’s only controllable via Alexa:

I was looking for an ecosystem that, like Wemo, could be broadly managed, not only by the hardware supplier’s own app and cloud services but also by other smart home standards such as the aforementioned Amazon (Alexa), along with Apple (HomeKit and Siri), Google (Home and Assistant, now transitioning to Gemini), Samsung (SmartThings), and ideally even open-source and otherwise vendor-agnostic services such as IFTTT and Matter-and-Thread.

I also had a specific hardware requirement that still needed to be addressed. The fundamental reason why we’d brought smart plugs into the home in the first place was so that we could remotely turn off the coffee maker in the kitchen if we later realized that we’d forgotten to do so prior to leaving the home; my wife’s bathroom-located curling iron later provided another remote-power-off opportunity. Clearly, standard smart plugs designed for low-wattage lamps and such wouldn’t suffice; we needed high-current-capable switching devices. And this requirement led to the first of several initially confusing misdirections with my ultimately selected supplier (motivated by rave reviews at The Wirecutter and elsewhere), TP-Link.

I admittedly hadn’t remembered until I did research prior to writing this piece that I’d actually already dissected an early TP-Link smart plug, the HS100, back in early 2017. That I’d stuck with Belkin’s Wemo product line for years afterward, admittedly coupled with my increasingly geriatric brain cells, likely explains the memory misfire. That very same device, along with its energy-monitoring HS110 sibling, had launched the company’s Kasa smart home device brand two years earlier, although looking back at the photos I took at the time I did my teardown, I can’t find a “Kasa” moniker anywhere on the device or its packaging, so…🤷‍♂️

My initial research indicated that the TP-Link Kasa HS103 follow-on, introduced a few years later and still available for purchase, would, along with the related HS105 be a good tryout candidate:


The two devices supposedly differed in their (resistive load) current-carrying capacity: 10 A max for the HS103 and 15 A for the HS105. I went looking for the latter, specifically for use with the aforementioned coffee maker and curling iron. But all I could find for sale was the former. It turns out that TP-Link subsequently redesigned and correspondingly up-spec’d the HS103 to also be 15A-capable, effectively obsoleting the HS105 in the process.

Smooth sailing, at least at first

And I’m happy to say that the HS103 ended up being both a breeze to set up and (so far, at least) 100% reliable in operation. Like the HS100 predecessor, along with other conceptually similar devices I’ve used in the past, you first connect to an ad-hoc Wi-Fi connection broadcast by the smart plug, which you use to send it your wireless LAN credentials via the mobile app. Then, once the smart plug reboots and your mobile device also reconnects to that same wireless LAN, they can see and communicate with each other via the Kasa app:

And then, after installing Kasa’s Alexa skill and setting up my first smart plug in it (a one-time step), subsequent devices added via the Kasa app were automatically added in Alexa, too:

Inevitable glitches

The latest published version of the Wirecutter’s coverage had actually recommended a newer, slightly smaller (but still 15A-capable) TP-Link smart plug, the EP10, so I decided to try it next:


Unfortunately, although the setup process was the same, the end result wasn’t:

This same unsuccessful outcome occurred with multiple devices from the first two EP10 four-pack sets I tried, which, like their HS103 forebears, I’d sourced from Amazon. Remembering from past experiences that glitches like this sometimes happen when a smartphone—which has two possible network connections, Wi-Fi and cellular—is used for setup purposes, I first disabled cellular data services on my Google Pixel 7, then tried a Wi-Fi-only iPad tablet instead. No dice.

I wondered if these particular smart plugs, which, like their seemingly more reliable HS103 precursors, are 2.4-GHz Wi-Fi-only, were somehow getting confused by one or more of several quirks peculiar to my Google Nest Wifi wireless network:

  1. The 2.4 GHz and 5 GHz Wi-Fi SSIDs broadcast by any node are the same name, and
  2. Being a mesh configuration, all nodes (both stronger-signal nearby and weaker, more distant, to which clients sometimes connect instead) also have the exact same SSID

Regardless, I couldn’t get them to work no matter what I tried, so I sent them back for a refund…

Location awareness

…only to then have the bright idea that it’d be cool to take both an HS103 and an EP10 apart and see if there was any hardware deviation that might explain the functional discrepancy. So, I picked up another EP10 combo, this one a two-pack. And on a “third time’s the charm” hunch (and/or maybe just fueled by stubbornness), I tried setting one of them up again. Of course, it worked just fine this time 🤷‍♂️

This time, I decided to try a new use case: controlling a table lamp in our dining room so that it automatically turned on at dusk and off again the next morning. We’d historically used an archaic mechanical timer for lamp power control, an approach that was not only noisy in operation but also needed to be reset after each premises electricity outage, since the timer didn’t embed a rechargeable battery to act as a temporary backup power source and keep time:

The mechanical timer was also clueless about the varying sunrise and sunset times across the year, not to mention the twice-yearly daylight saving time transitions. Its smart plug successor, which knows where it is and what day and time it is (whenever it’s powered up and network-connected, of course), has no such limitations:
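
The scheduling logic such a plug effectively implements is simple once the almanac lookup exists. The sketch below is a hypothetical illustration, not TP-Link firmware: `lamp_should_be_on` assumes the dusk and dawn times are fetched fresh each day for the plug’s location, which is exactly why seasonal drift and DST transitions come along for free.

```python
from datetime import datetime, time


def lamp_should_be_on(now: datetime, dusk: time, dawn: time) -> bool:
    """Dusk-to-dawn rule a location-aware smart plug effectively
    applies. dusk/dawn would come from a solar almanac lookup for the
    plug's location (hypothetical, not shown), refreshed daily, so
    varying sunset times and DST are handled automatically."""
    t = now.time()
    # On during the overnight window that wraps past midnight.
    return t >= dusk or t < dawn


# Early-December evening (illustrative almanac times):
print(lamp_should_be_on(datetime(2025, 12, 8, 18, 30),
                        dusk=time(16, 45), dawn=time(7, 10)))  # True
```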

Rebrands and migrations

Spec changes…inconsistent setup outcomes…there’s one more bit of oddity to share in closing. As this video details:

“Kasa” was TP-Link’s original smart home device brand, predominantly marketed and sold in North America. The company, for reasons that remain unclear to me and others, subsequently, in parallel, rolled out another product line branded as “Tapo” across the rest of the world. Even today, if you revisit the “smart plugs” product page on TP-Link’s website, whose link I first shared earlier in this writeup, you’ll see a mix of Kasa- and Tapo-branded products. The same goes for wall switches, light bulbs, cameras, and other TP-Link smart home devices. And historically, you needed to have both mobile apps installed to fully control a mixed-brand setup in your home.

Fortunately, TP-Link has made some notable improvements of late, from which, reading between the lines, I deduce that a full transition to Tapo is the ultimate intended outcome. As I tested and confirmed for myself just a couple of days ago, it’s now possible to manage both legacy Kasa and newer Tapo devices using the same Tapo app; they also leverage a common TP-Link user account:

They all remain visible to Alexa, too, and there’s a separate Tapo skill that can also be set up:

along with, as with Kasa, support for other services:

Further hands-on evaluation

To wit: driven by curiosity as to whether device functional deviations are being fueled by (in various cases) hardware differences, firmware-only tweaks, or combinations of the two, I’ve taken advantage of a 30%-off Black Friday (week) promotion to also pick up a variety of other TP-Link smart plugs from Amazon’s Resale (formerly Warehouse) area, for both functional and teardown analysis in the coming months:

  • Kasa EP25 (added Apple HomeKit support, also with energy monitoring)
  • Tapo P105 (seeming Tapo equivalent to the Kasa EP10)
  • Tapo P110M (Matter compatible, also with energy monitoring)
  • Tapo P115 (energy monitoring)
  • Tapo P125 (added Apple HomeKit support)

Some of these devices look identical to others, at least from the outside, while in other cases dimensions and button-and-LED locations differ product-to-product. But for us engineers, it’s what’s on the inside that counts. Stand by for further writeups in this series throughout 2026. And until then, let me know your thoughts on what I’ve covered so far in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Tapo or Kasa: Which TP-Link ecosystem best suits ya? appeared first on EDN.

Silicon MOS quantum dot spin qubits: Roads to upscaling

Mon, 12/08/2025 - 12:51

Using quantum states for processing information has the potential to swiftly address complex problems that are beyond the reach of classical computers. Over the past decades, tremendous progress has been made in developing the critical building blocks of the underlying quantum computing technology.

In its quest to develop useful quantum computers, the quantum community focuses on two basic pillars: developing ‘better’ qubits and enabling ‘more’ qubits. Both need to be simultaneously addressed to obtain useful quantum computing technology.

The main metrics for quantifying ‘better’ qubits are their long coherence time—reflecting their ability to store quantum information for a sufficient period, as a quantum memory—and the high qubit control fidelity, which is linked to the ‘errors’ in controlling the qubits: sufficiently low control errors are a prerequisite for successfully performing a quantum error correction protocol.

The demand for ‘more’ qubits is driven by practical quantum computation algorithms, which require the number of (interconnected) physical qubits to be in the millions, and even beyond. Similarly, quantum error correction protocols only work when the errors are sufficiently low: otherwise, the error correction mechanism actually ‘increases’ error, and the protocols diverge.

Of the various quantum computing platforms that are being investigated, one stands out: silicon (Si) quantum dot spin qubit-based architectures for quantum processors, the ‘heart’ of a future quantum computer. In these architectures, nanoscale electrodes define quantum dot structures that trap a single electron (or hole), its spin states encoding the qubit.

Si spin qubits with long coherence times and high-fidelity quantum gate operations have been repeatedly demonstrated in lab environments and are therefore a well-established technology with realistic prospects. In addition, the underlying technology is intimately linked with CMOS manufacturing technologies, offering the possibility of wafer-scale uniformity and yield, an important stepping stone toward realizing ‘more’ qubits.

A sub-class of Si spin qubits uses metal-oxide-semiconductor (MOS) quantum dots to confine the electrons, a structure that closely resembles a traditional MOS transistor. The small size of the Si MOS quantum dot structure (~100 nm) offers an additional advantage to upscaling.

Low qubit charge noise: A critical requirement to scale up

In the race toward upscaling, Si spin qubit technology can potentially leverage advanced 300-mm CMOS equipment and processes known for offering high yield, high uniformity, high accuracy, high reproducibility, and high-volume manufacturing: the result of more than 50 years of down-selection and optimization. However, the processes developed for CMOS may not be the most suitable for fabricating Si spin quantum dot structures.

Si spin qubits are extremely sensitive to noise coming from their environment. Charge noise, arising from the quantum dot gate stack and the direct qubit environment, is one of the most widely identified causes of reduced fidelity and coherence. Two-qubit ‘hero’ devices with low charge noise have been repeatedly demonstrated in the lab using academic-style techniques such as ‘lift off’ to pattern the quantum dot gate structures.

This technique is ‘gentle’ enough to preserve a good quality Si/SiO2 interface near the quantum dot qubits. But this well-controlled fabrication technique cannot offer the required large-scale uniformity needed for large-scale systems with millions of qubits.

On the other hand, industrial fabrication techniques, such as subtractive etching in plasma chambers filled with charged ions and lithography-based patterning that relies on such etch processes, easily degrade device and interface quality, increasing the charge noise of Si/SiO2-based quantum dot structures.

First steps in the lab-to-fab transition: Low charge noise and high-fidelity qubit operations achieved on an optimized 300-mm CMOS platform

Imec’s journey toward upscaling Si spin qubit devices began about seven years ago, with the aim of developing a customized 300-mm platform for Si quantum dot structures. Seminal work led to a publication in npj Quantum Information in 2024, highlighting the maturity of imec’s 300-mm fab-based qubit processes toward large-scale quantum computers.

Through careful optimization and engineering of the Si/SiO2-based MOS gate stack with a poly-Si gate, charge noise levels of 0.6 µeV/√Hz at 1 Hz were demonstrated, the lowest values achieved on a fab-compatible platform at the time of publication. The values could be demonstrated repeatedly and reproducibly.
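
To put that figure in context, charge noise in such gate stacks is typically 1/f-like, so the 1-Hz value anchors the whole spectrum. The sketch below is my own extrapolation; the spectral exponent is an assumed typical value, not a number reported by imec.

```python
def charge_noise_asd_uev(f_hz: float, s_1hz: float = 0.6,
                         alpha: float = 0.5) -> float:
    """1/f-like charge-noise amplitude spectral density in
    ueV/sqrt(Hz), anchored to the reported 0.6 ueV/sqrt(Hz) at 1 Hz.
    alpha = 0.5 in amplitude corresponds to a 1/f power spectrum and
    is an assumed typical exponent, not taken from the paper."""
    return s_1hz / f_hz ** alpha


print(charge_noise_asd_uev(1.0))    # 0.6 at the 1-Hz anchor point
print(charge_noise_asd_uev(100.0))  # an order of magnitude lower
```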

Figure 1 These Si MOS quantum dot structures are fabricated using imec’s optimized 300-mm fab-compatible integration flow. Source: imec

More recently, in partnership with the quantum computing company Diraq, the potential of imec’s 300-mm platform was further validated. The collaborative work, published in Nature, showed high-fidelity control of all elementary qubit operations in imec’s Si quantum dot spin qubit devices. Fidelities above 99.9% were reproducibly achieved for qubit preparation and measurement operations.

Fidelity values systematically exceeding 99% were shown for one- and two-qubit gate operations, the operations performed on qubits to control their state and entangle them. These values are not arbitrarily chosen. In fact, whether quantum error correction ‘converges’ (net error reduction) or ‘diverges’ (the net error introduced by the quantum error correction machinery increases) crucially depends on a so-called threshold value of about 99%. Hence, fidelity values over 99% are required for large-scale quantum computers to work.

Figure 2 Schematic of a two-qubit Diraq device on a 300-mm wafer shows the full-wafer, single-die, and single-device level. Source: imec

Charge noise was also measured to be very low, in line with the previous results from the npj Quantum Information paper. Gate set tomography (GST) measurements shed light on the residual errors: given the low charge noise values, the coupling between the qubit and the few remaining residual nuclear-spin-carrying Si isotopes (29Si) turned out to be the main factor limiting fidelity in these devices. These insights show that even higher fidelities can be achieved through further isotopic enrichment of the Si layer with 28Si.

In the above studies, the 300-mm processes were optimized for spin qubit devices in an overlapping gate device architecture. Following this scheme, three layers of gates are patterned in an overlapping and more or less self-aligned configuration to isolate and confine an electron. This multilayer gate architecture, extensively studied and optimized within the quantum community, offers a useful vehicle to study individual qubit metrics and small-scale arrays.

Figure 3 Illustration of a triple quantum dot design uses overlapping gates; electrons are shown as yellow dots. The gates reside in three different layers: GL1, GL2, and GL3, as presented at IEDM 2025. Source: imec

The next step in upscaling: Using EUV for gate patterning to provide higher yield, process control, and overlay accuracy

Thus far, imec has used a wafer-scale, 300-mm e-beam writer to print the three gate layers that are central to the overlapping gate architecture. Although this 300-mm-compatible technique facilitates greater design flexibility and small pitches between quantum dots, it comes with a downside: its slow writing speed does not allow printing full 300-mm wafers in a reasonable process time.

At IEDM 2025, imec for the first time demonstrated the use of single-print 0.33 NA EUV lithography to pattern the three gate layers of the overlapping gate architecture. EUV lithography has by now become the mainstay for industrial CMOS fabrication of advanced (classical) technology nodes; imec’s work demonstrates that it can be equally used to define and fabricate good quantum dot qubits. This means a significant leap forward in upscaling Si spin qubit technology.

Full 300-mm wafers can now be printed with high yield and process control, thereby fully exploiting the reproducibility of the high-quality qubits shown in previous works. EUV lithography brings an additional advantage: it allows the different gates to be printed with higher overlay accuracy than with e-beam tools. That benefits the quality of the qubits and permits more aggressive dot-to-dot pitches.

Figure 4 TEM and SEM images, after patterning the gate layers with EUV, highlight critical dimensions, as presented at IEDM 2025. Source: imec

The imec researchers demonstrated robust reproducibility, full-wafer room temperature functionality, and good quantum dot and qubit metrics at 10 mK. Charge noise values were also comparable to measurements on similar ‘ebeam-lithography’ devices.

Inflection point: Moving to scalable quantum dot arrays to address the wiring bottleneck

The overlapping gate architecture, however, is not scalable to the large quantum dot arrays that will be needed to build a quantum processor. The main bottleneck is connectivity: each qubit needs individual control and readout wiring, making the interconnect requirements very different from those of classical electronic circuits. In the case of overlapping gates, wiring fanout is provided by the different gate layers, and this imposes serious limitations on the number of qubits the system can have.

Several years ago, a research group at HRL Laboratories in the United States came up with a more scalable approach to gate integration: the single-layer gate device architecture. In this architecture, the gates that are needed to isolate the electrons—the so-called barrier and plunger gates—are fabricated in one and the same layer, more closely resembling how classical CMOS transistors are built and interconnected using a multilayer back end of line (BEOL).

Today, research groups worldwide are investigating how large quantum dot arrays can be implemented in such a single-layer gate architecture, while ensuring that each qubit can be accessed by external circuits. At first sight, the most obvious way is a 2D lattice, similar to integrating large memory arrays in classic CMOS systems.

But eventually, this approach will hit a wiring scaling wall as well. The NxN quantum dot array requires a large number of BEOL layers for interconnecting the quantum dots. Additionally, ensuring good access for reading and controlling qubits that are farther away from the peripheral charge sensors becomes challenging.

A trilinear quantum dot architecture: An imec approach

At IEDM 2021, imec therefore proposed an alternative, smart way of interconnecting neighboring silicon qubits: the bilinear array. The design is based on topologically mapping a 2D square lattice to form a bilinear design, where alternating rows of the lattice are shifted into two rows (or 1D arrays).

While the odd rows of the 2D lattice are placed into an upper 1D array, the even rows are moved to a lower 1D array. In this configuration, all qubits remain addressable while maintaining the target connectivity of four in the equivalent 2D square lattice array. These arrays are conceptually scalable as they can further grow in one dimension, along the rows.
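
A toy version of that row-splitting step can be sketched in a few lines. The in-array ordering below is a plausible illustration of mine; the exact interleaving imec uses to preserve the connectivity of four may differ.

```python
def to_bilinear(n_rows: int, n_cols: int):
    """Toy bilinear mapping: every site (r, c) of an n_rows x n_cols
    2D lattice lands in exactly one of two 1D arrays, odd-indexed
    rows in the upper array and even-indexed rows in the lower one."""
    upper = [(r, c) for r in range(1, n_rows, 2) for c in range(n_cols)]
    lower = [(r, c) for r in range(0, n_rows, 2) for c in range(n_cols)]
    return upper, lower


upper, lower = to_bilinear(4, 4)
print(len(upper), len(lower))  # all 16 lattice sites, split 8 and 8
```

The key property the test below checks is that the mapping is a clean partition: no lattice site is lost and none is duplicated, so every qubit of the original 2D array remains addressable.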

Recently, the imec researchers expanded this idea toward a trilinear quantum dot device architecture that is compatible with the single-layer gate integration approach. With this trilinear architecture, a third linear array of (empty) quantum dots is introduced between the upper and lower rows. This extra layer of quantum dots now serves as a shuttling array, enabling qubit connectivity via the principle of qubit shuttling.

Figure 5 View the concept of mapping a 2D lattice onto a bilinear design and expanding that design to a trilinear architecture. The image illustrates the principle of qubit shuttling for the interaction between qubits 6 and 12. Source: imec

Figure 6 Top view of a 3×5 trilinear single gate array is shown with plunger (P) and barrier (B) gates placed in a single layer, as presented at IEDM 2025. Source: imec

The video below explains how that works. In the trilinear array, single-qubit interactions and some two-qubit interactions can happen directly between nearest neighbors, the same way as in the bilinear architecture. Other two-qubit interactions can be performed through the ‘shuttle bus’ composed of empty quantum dots. Take a non-nearest-neighbor interaction between two qubits as an example.

The video shows schematics, conceptual operation, and manufacturing of the trilinear quantum dot architecture. Source: imec

The first qubit is moved to the middle array, shuttled along this array to the desired site to perform the two-qubit operation with a second, target qubit, and shuttled back. These ‘all-to-all’ qubit interactions were not possible using the bilinear approach. Note that these interactions can only be reliably performed with high-fidelity quantum operations to ensure that no information is lost during the shuttling operation.
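
As a sketch of that control flow (operation names and column indexing are illustrative inventions of mine, not imec’s control vocabulary), a non-nearest-neighbor gate decomposes into a shuttle-out, gate, shuttle-back sequence:

```python
def shuttle_schedule(src_col: int, dst_col: int):
    """Toy operation schedule for a non-nearest-neighbor two-qubit
    gate in the trilinear array: drop the source qubit onto the empty
    middle 'shuttle bus', walk it column by column to the target
    site, apply the gate, and walk it back."""
    step = 1 if dst_col > src_col else -1
    ops = [("move_to_bus", src_col)]
    ops += [("shuttle", c, c + step) for c in range(src_col, dst_col, step)]
    ops.append(("two_qubit_gate", dst_col))
    ops += [("shuttle", c, c - step) for c in range(dst_col, src_col, -step)]
    ops.append(("move_back", src_col))
    return ops


for op in shuttle_schedule(2, 5):
    print(op)
```

Every intermediate step is a shuttle through an empty dot, which is why the fidelity of the shuttling operation itself is as critical as the gate fidelity: each hop is another chance to lose the quantum information in transit.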

But how can this trilinear quantum dot architecture address the wiring bottleneck? The reason is the simplified BEOL structure: only two metal layers are needed to interconnect all the quantum dots. For the upper and lower 1D arrays, barrier and plunger gates can connect to one and the same metal layer (M1); the middle ‘shuttle’ array can partly connect to the same M1 layer, partly to a second metal layer (M2). Alongside the linear array, charge sensors can be integrated to measure the state of the quantum dots for qubit readout.

The architecture is also scalable in terms of the number of qubits, as the array can further grow along the rows. If that approach at some point hits a scaling wall, it can potentially be expanded to four, five, or even more linear arrays, ‘simply’ by adding more BEOL layers.

Using EUV lithography to process the trilinear quantum dot architecture: A world first

At IEDM 2025, imec showed the feasibility of using EUV lithography for patterning the critical layers of this trilinear quantum dot architecture. Single-print 0.33 NA EUV lithography was used to print the single-layer gate, the gate contacts, and the two BEOL metal layers and vias.

Figure 7 Single-layer gate trilinear array is shown after EUV lithography and gate etch with TEM cross sections in X and Y directions, as presented at IEDM 2025. Source: imec

One of the main challenges was achieving a very tight pitch across all the different layers without pitch relaxation. The gate layer was patterned with a bidirectional gate pitch of 40 nm. It was the first time ever that such an ‘unconventional’ gate structure was printed using EUV lithography, since EUV lithography for classical CMOS applications mostly targets unidirectional patterns. Next, 22-nm contact holes were printed with <2.5 nm (mean + 3 sigma) contact-to-gate overlay in both directions. The two metal layers, M1 and M2, were patterned with metal pitches on the order of 50 nm.

Figure 8 From top to bottom, see the trilinear array (a-c) after M1 and (d-f) after M2 patterning, as presented at IEDM 2025. Source: imec

In the race for upscaling, the use of EUV lithography allows full 300-mm wafers to be processed with high yield, uniformity, and overlay accuracy between the critical structures. First measurements already revealed a room temperature yield of 90% across the wafer, and BEOL functionality was confirmed using dedicated test structures.

The use of single-patterning EUV lithography additionally contributes to cost reduction by avoiding complex multi-patterning schemes, and it improves the overall resolution of the printed features. Moreover, the complexity and asymmetry of the 2D structure cannot be achieved with double-patterning techniques.

The outlook: Upscaling and further learnings

In pursuit of enabling quantum systems with increasingly more qubits, imec made major strides: first, reproducibly achieving high-fidelity unit cells on two-qubit devices; second, transitioning from ebeam to EUV lithography for patterning critical layers; and third, moving from overlapping gate architectures to a single-layer gate configuration.

Adding EUV to imec’s 300-mm fab-compatible Si spin qubit platform will enable printing high-quality quantum dot structures across a full 300-mm wafer with high yield, uniformity, and alignment accuracy.

The trilinear quantum dot architecture, compliant with the single-layer gate approach, will allow upscaling the number of qubits by addressing the wiring bottleneck. Currently, work is ongoing to electrically characterize the trilinear array, and to study the impact of both the single-layer gate approach and the use of EUV lithography on the qubit fidelities.

The trilinear quantum dot architecture is a stepping stone toward truly large-scale quantum processors based on silicon quantum dot qubits. It may not ultimately be the optimal architecture for quantum operations involving millions of qubits, and clear bottlenecks remain.

But it’s a step in the learning process toward scalability and allows de-risking the technology around it. It will enhance our understanding of large-scale qubit operations, qubit shuttling, and BEOL integration. And it will allow exploring the expandability of the architecture toward a larger number of arrays.

In parallel, imec will continue working on the overlapping gate structure which can offer very high qubit fidelities. These high-quality qubits can be used as a probe to further study and optimize the qubit’s gate stack, understand the limiting noise mechanisms, tweak and optimize the control modules, and develop the measurement capability for larger scale systems in a systematic, step-by-step approach—leveraging the process flexibility offered by imec’s 300-mm infrastructure.

It’s a viable research vehicle in the quest for better qubits, providing learnings much faster than any large-scale quantum dot architecture. It can help increase our fundamental knowledge of two-qubit systems, an area in which there is still much to learn.

Sofie Beyne, project manager for quantum computing at imec, started her career at Intel, working as an R&D reliability engineer on advanced nodes in the Logic Technology Development department. She rejoined imec in 2023 to focus on bilateral projects around spin qubits.

Clement Godfrin, device engineer at imec, specializes in the dynamics of single high-spin nuclei, also called qudits, either to implement proof-of-principle quantum algorithms on the single nuclear spin of a molecular magnet system, or to run quantum error correction protocols on a single donor nuclear spin.

Stefan Kubicek, integration engineer at imec, has been involved in CMOS front-end integration development from 130-nm CMOS node to 14-nm FinFET node. He joined imec in 1998, and since 2016, he has been working on the integration of spin qubits.

Kristiaan De Greve, imec fellow and program director for quantum computing at imec, also holds the Proximus Chair in Quantum Science and Technology and is a professor of electrical engineering at KU Leuven. He moved to imec in 2019 from Harvard University, where he was a fellow in the physics department and where he retains a visiting position.

Related Content

The post Silicon MOS quantum dot spin qubits: Roads to upscaling appeared first on EDN.

The Big Allis generator sixty years ago 

Fri, 12/05/2025 - 15:00

Think back to the electrical power blackout that struck the Northeast United States just over sixty years ago, on November 9, 1965. It had a huge consequence for Consolidated Edison in New York City.

Their power-generating facility in Ravenswood had been equipped with a generator made by Allis-Chalmers, as shown in the following screenshots.

Figure 1 Ravenswood power generating facility and the Big Allis power generator.

That generator was the largest of its kind in the world at the time; larger generators were built only in later years. It was so big that some experts opined that such a generator would not even work. Because of its size and its manufacturer’s name, it came to be called “Big Allis”.

Big Allis had a major design flaw. The bearings that supported the generator’s rotor were protected by oil pumps that were powered from the Big Allis generator itself.

When the power grid collapsed, Big Allis stopped delivering power, which then shut down the pumps delivering the oil pressure that had been protecting the rotor bearings.

With no oil pressure, the bearings were severely damaged as the rotor slowed down to a halt. One newspaper article described the bearings as having been ground to dust. It took months to replace those bearings and to provide their oil pumps with separate diesel generators devoted solely to maintaining the protective oil pressure.

So far as I know, Big Allis is still in service, even through the later 1977 and 2003 blackouts, so I guess that those 1965 revisions must have worked out.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post The Big Allis generator sixty years ago  appeared first on EDN.

Design specification: The cornerstone of an ASIC collaboration

Fri, 12/05/2025 - 09:51

Engaging with an ASIC development partner can take many forms. The intended chip may be as simple as a microcontroller, as sophisticated as an AI-based edge computing system-on-chip (SoC), or even a large language model (LLM) AI accelerator for data centers. The customer design team may include experienced ASIC design, verification, and test engineers or comprise only application experts. Each customer relationship is different.

Yet they all share one fundamental need. The customer and the ASIC developer must agree, in great detail, on what they are trying to build. That is the role of the design specification document. At Faraday, this document is the cornerstone of conversations between customers and chip design teams, covering critical decisions throughout the design process. The topics can range from initial feasibility estimation through sign-off and beyond.

If the design specification is so important, an obvious question arises: how do you construct a specification that will result in a successful ASIC design experience? The short answer is that a successful design specification is a joint effort between the customer and the ASIC development partner.

That calls for a comprehensive, cooperative, checklist-driven procedure for creating the design specification—one that meshes smoothly with customers’ design teams, whether they start with only a wish list of features or with a detailed design plan. It also works across the wide range of sizes and complexities in today’s ASIC landscape.

What the design specification does

The design specification will serve many purposes during the ASIC design. Fundamentally, it will list the design requirements for the ASIC implementation team. As such, it will serve as a shopping list of silicon IP to be included in the design, an outline of the architecture that integrates that IP, and a guide for integration, verification, and testing.

Less obviously, the design specification can be a point of reference for discussions that will take place between the customer and the design team. What exactly should this block do? How much power can we allocate to this function? Does this alternative approach to implementation work for you? All these discussions can begin with the design specification.

Also, the specification can be invaluable for tasks where the customer is often uninvolved. For example, knowing the design intent and how the chip will be used can be priceless in developing verification plans, self-test architectures, test benches, and manufacturing test strategies. Information from the design specification is vital for detailed design activities, such as determining clock architectures and power-management strategy, in which the customer would typically not be directly involved.

Key elements of the design specification

So, what goes into the design specification? There are several important categories of information. The most obvious is a set of functional requirements—what the chip is supposed to do. Often, this will be a list of features, but it may be much more detailed. It’s also essential that the specifications include performance, power, and area requirements.

These will influence many conversations, from our initial feasibility assessment to foundry and process selection, library selection, and power-management strategies. And much of this information will be included in the specifications.

It’s also essential to capture a description of the system in which the chip will operate, including the other components. For example, which SPI flash chip will be used for an external flash? The minor differences in SPI protocols between memory chips can determine which SPI controller IP we select.

Another essential kind of system information is more physical: the thermal and mechanical environment. Heat sinks, passive or forced-air cooling, and so on will influence power management and package design.
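The specification categories above—functional requirements, PPA targets, and the system context—can be pictured as a structured document. The sketch below is illustrative only: the field names and values are assumptions for this article, not Faraday’s actual checklist format.

```python
from dataclasses import dataclass, field

@dataclass
class PpaTargets:
    max_power_mw: float        # power budget
    target_freq_mhz: float     # performance target
    max_die_area_mm2: float    # area budget

@dataclass
class SystemContext:
    # Surrounding components, e.g. the exact SPI flash part family,
    # since protocol differences drive IP selection
    external_components: list[str] = field(default_factory=list)
    cooling: str = "passive"   # thermal environment: passive, forced-air, heat sink

@dataclass
class DesignSpec:
    features: list[str]        # functional requirements
    ppa: PpaTargets
    system: SystemContext

# Hypothetical example entry:
spec = DesignSpec(
    features=["CPU core", "SPI flash controller", "UART"],
    ppa=PpaTargets(max_power_mw=250.0, target_freq_mhz=200.0, max_die_area_mm2=4.0),
    system=SystemContext(external_components=["quad-SPI NOR flash (example)"]),
)
```

Capturing the spec in a machine-readable form like this makes it easy for both teams to diff revisions and spot missing items during review.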

Not just what, but how

The specification is not just a list of requirements. It also captures jointly developed design plans for implementing the chip. Chief among these is the overall architecture.

The architecture of an ASIC may be implicit in its function. For instance, a microcontroller may be a CPU core, some memory on a memory bus, and some peripheral controllers on a low-speed peripheral bus. However, a more elaborate SoC may have several CPU cores clustered around a shared cache and a hierarchy of buses, determined by the bandwidth and latency requirements of particular data flows between the memory and IP instances. If the customer hasn’t already decided on architecture, the design team will develop a proposal and review it with the customer.

Figure 1 An example of the proposed architecture for the customer is highlighted in a comprehensive diagram that describes the architecture and provides additional information. Source: Faraday Technology Corp.

In some cases, the customer will already have an architecture in place. This may be because the chip extends an existing product family. Or it may use a network-on-chip (NoC) scheme or something entirely original, such as a data-flow architecture designed to accelerate a particular algorithm. In these cases, the ASIC designer’s role is to ensure that the information in the specification is complete enough to capture the design intent unambiguously, to drive the selection of IP with the proper interfaces, and to adequately inform the chip layout.

The specification may also include information about specific IP blocks. If a block is a controller for a standard interface—say, a USB controller—then the specification needs enough additional information (for instance, that it must be a Gen3 USB host with power delivery) to allow the design team to select the appropriate IP.

In some cases, a functional block may be something unique. This often happens when the IP is customer specific. In these cases, the customer must provide enough detail for the design team to create and test the block. This may be simply a detailed functional description. Or it may require pseudo-code or Verilog code for critical portions of the block.

Pulling it together

Altogether, the design specification becomes an agreed-upon statement of what the customer requires and what the design team is designing. But which parts of the document come from the customer, which are jointly written, and which are supplied by the ASIC designer for the customer to review vary widely from case to case.

At Faraday, we have developed a formal process, called e-cooking, to collect the data. The process begins with a request for a quote from our sales organization. This RFQ will often contain much of the information we need for the design specification.

With the RFQ in hand, we assign an engineer to the project in a role we call a technical consultant (TC). The TC begins working through a design checklist to transfer information from the RFQ to the design specification.

When an item is missing or requires more detail, the TC will contact the customer, explain what further information we need and why, and obtain the necessary data. If the item requires information the customer can’t provide—for instance, a choice of logic libraries—the TC can ask the Faraday design team for input, which we then share with the customer for review.

The completed design specification document is a blueprint for the chip design. It will provide information regarding architectural and IP selection, verification, test plans, and packaging choices. It will also explain the statement of work, which describes which design tasks will be done by the customer and which by the ASIC designer.

Figure 2 Technical consultants and engineers enter all project information into the e-cooking system, a tool that tracks the chip’s content. Source: Faraday Technology Corp.

The e-cooking process aims to capture customers’ design intent and the work they have already done toward implementation (Figure 2). The designers enter information into the tool, such as the actual cell size and name, silicon area, quantity, spacing, and I/O.

Next, the ASIC designer reviews any suggestions for changes or additional data with the customer team. That brings clarity on what the ASIC designer intends to implement at the start of the project. By the end of the project, the only surprises are how smoothly the two teams worked together and how well the delivered chip met the customer’s expectations.

Barry Lai heads the System Development and Chip Design department at Faraday Technology Corp., a leading provider of ASIC design services and IP. With 20 years of experience in IC design, Barry specializes in SoC integration, specification definition, digital design, low-power design, and integration automation.

Related Content

The post Design specification: The cornerstone of an ASIC collaboration appeared first on EDN.

High-voltage SiC MOSFETs power critical energy systems

Thu, 12/04/2025 - 21:53

Navitas is now sampling 2.3-kV and 3.3-kV SiC MOSFETs in power-module, discrete, and known-good-die (KGD) formats. Leveraging fourth-generation GeneSiC Trench-Assisted Planar (TAP) technology, these ultra-high-voltage devices offer improved reliability and performance for mission-critical energy-infrastructure applications.

According to Navitas, the TAP architecture uses a multistep electric-field management profile that significantly reduces voltage stress and improves blocking performance compared with trench and conventional planar SiC MOSFETs. In addition to increased long-term reliability and avalanche robustness, TAP incorporates an optimized source contact that enables higher cell-pitch density and improved current spreading. Together, these advances deliver better switching figures of merit and lower on-resistance at elevated temperatures.

Packaging options include the SiCPAK G+ power module, which uses epoxy-resin potting to deliver more than a 60% improvement in power-cycling lifetime and over a 10% improvement in thermal-shock reliability compared with similar silicone-gel–potted designs. Discrete SiC MOSFETs are offered in TO-247 and TO-263-7 packages, while KGD products provide system manufacturers with greater flexibility for custom SiC power-module development. AEC-Plus–grade SiC devices are qualified to standards that exceed conventional AEC-Q101 and JEDEC requirements.

To request samples of the ultra-high-voltage SiC MOSFETs, contact Navitas at info@navitassemi.com.

Navitas Semiconductor 

The post High-voltage SiC MOSFETs power critical energy systems appeared first on EDN.

Thermistors suppress inrush currents

Thu, 12/04/2025 - 21:53

S series NTC thermistors from TDK Electronics handle steady-state currents up to 35 A and absorb energy up to 750 J. They enable reliable inrush current suppression in switch-mode power supplies, frequency converters, photovoltaic inverters, UPS systems, and soft-start motors.

The S series includes two leaded variants—the S30 and S36—with disk diameters of 30 mm and 36 mm, respectively. The S30 features 7.5-mm lead spacing and a maximum power handling of 19 W, while the larger S36 has 19-mm lead spacing and extends power handling to 25 W. Both variants are rated for a wide climatic category of 55/170/21 in accordance with IEC 60068-1 requirements.

The S30 (ordering code B57130S0M000) and S36 (B57136S0M100) families cover base resistance values of 2 Ω to 15 Ω and 2 Ω to 20 Ω, respectively. They support continuous currents ranging from 12 A to 25 A (S30) and 10 A to 35 A (S36). Permissible capacitances can reach up to 13,050 µF at 240 VAC (see datasheet for details). The table below summarizes the key electrical characteristics of the S30 and S36 variants.
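The reason the base (cold) resistance matters for inrush suppression can be seen with a simple worst-case estimate: at switch-on into a discharged bulk capacitor, the cold NTC resistance is essentially the only thing limiting the peak current. The sketch below uses that simplified model (ignoring source impedance and capacitor ESR, so real peaks are somewhat lower); the 2-Ω value corresponds to the low end of the S30 range.

```python
import math

def peak_inrush(v_ac_rms, r_cold_ohm):
    """Worst-case peak inrush current if switch-on occurs at the
    mains voltage peak, limited only by the cold NTC resistance."""
    v_peak = v_ac_rms * math.sqrt(2)  # peak of the AC waveform
    return v_peak / r_cold_ohm

# 240 VAC mains, 2-ohm cold resistance (low end of the S30 range):
i_peak = peak_inrush(240.0, 2.0)
print(f"worst-case peak inrush ~ {i_peak:.0f} A")  # ~170 A
```

Without the NTC, the same estimate with only milliohms of wiring and ESR in the path would yield kiloamp-scale peaks, which is what the thermistor is there to prevent; once warm, its resistance drops so that steady-state dissipation stays low.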

Datasheets for the S30 series and S36 series are available on the TDK Electronics website.

TDK Electronics 

The post Thermistors suppress inrush currents appeared first on EDN.
