EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 1 hour 35 min ago

Beyond the current smart grid management systems

Wed, 11/05/2025 - 09:07

Modernizing the electric grid involves more than upgrading control systems with sophisticated software—it requires embedding sensors and automated controls across the entire system. It’s not only the digital brains that manage the network but also the physical devices, like the motors that automate switch operations, which serve as the system’s hands.

Only by integrating sensors and robust controls throughout the entire grid can we fully realize the vision of a smart, flexible, high-capacity, efficient, and reliable power infrastructure.

Source: Bison

The drive to modernize the power grid

The need for increased capacity and greater flexibility is driving the modernization of the power grid. The rapid electrification of transportation and HVAC systems, combined with the rise of artificial intelligence (AI) technologies, is placing unprecedented demands on the energy network.

To meet these challenges, the grid must become more dynamic, capable of supporting new technologies while optimizing efficiency and ensuring reliability.

Integrating distributed energy resources (DERs), such as rooftop solar panels, battery storage, and wind farms, adds further complexity. So, advanced fault detection, self-healing capabilities, and more intelligent controls are essential to managing these resources effectively. Grid-level energy storage solutions, like battery buffers, are also critical for balancing supply and demand as the energy landscape evolves.

At the same time, the grid must address the growing need for resilience. Aging infrastructure, much of it built decades ago, struggles to meet today’s energy demands. Upgrading these outdated systems is vital to ensuring reliability and avoiding costly outages that disrupt businesses and communities.

The increasing frequency of climate-related disasters, including hurricanes, wildfires, and heat waves, highlights the urgency of a resilient grid. Modernizing the grid to withstand and recover from extreme weather events is therefore no longer optional; it’s essential for the stability of our energy future.

The challenges posed by outdated infrastructure and climate-related disasters are accelerating the adoption of advanced technologies like Supervisory Control and Data Acquisition (SCADA) systems and Advanced Distribution Management Systems (ADMS). These innovations enhance grid visibility, allowing operators to monitor and manage energy flow in real time. This level of control is crucial for quickly addressing disruptions and preventing widespread outages.

Additionally, ADMS makes the grid smarter and more efficient by leveraging predictive analytics. ADMS can forecast energy demand, identify potential issues before they occur, and optimize the flow of electricity across the grid. It also supports condition-based predictive maintenance, allowing utilities to address equipment issues proactively based on real-time data and usage patterns.

The key to successful digitization: Fully integrated systems

Smart grids follow the broader global shift toward digitization, aligning with advancements in Industry 4.0, where the smart factory goes beyond advanced software and analytics. It is a complete system that integrates IoT sensors, robotics, and distributed controls throughout the production line, creating a setup that’s more productive, flexible, and transparent.

By offering real-time visibility into the production process and component conditions, these automated systems streamline operations, minimize downtime, boost productivity, lower labor costs, and enhance preventive maintenance.

Similarly, smart grids operate as fully integrated systems that rely heavily on a network of advanced sensors, controls, and communication technologies.

Devices such as phasor measurement units (PMUs) provide real-time monitoring of electrical grid stability. Other essential sensors include voltage and current transducers, power quality transducers, and temperature sensors, which monitor key parameters to detect and prevent potential issues. Smart meters also support two-way communication between utilities and consumers, enabling real-time energy usage tracking, dynamic pricing, and demand response capabilities.

The role of motorized switch operators in grid automation

Among the various distributed components in today’s grid infrastructure, motorized switch operators are some of the most critical. These devices automate switchgear functions, eliminating the need for manual operation of equipment such as circuit breakers, load break switches, air- and SF6-insulated disconnects, and medium- or high-voltage sectionalizers.

By automating these processes, motorized switch operators enhance precision, speed, and safety. They reduce the risk of human error and ensure smoother grid operations. Moreover, these devices integrate seamlessly with SCADA and ADMS, enabling real-time monitoring and control for improved efficiency and reliability across the grid.

Motorized switch operators aren’t just valuable for supporting the smart grid; they also offer practical business benefits on their own, even without smart grid integration. Automating switch operations eliminates the need to send out trucks and personnel every time a switch needs to be operated. This saves significant time, reduces service disruptions, and lowers fleet operation and labor costs.

Motorized switch operators also improve safety. During storms or emergencies, sending crews to remote or hazardous locations can be dangerous. Underground vaults, for example, can flood, turning them into high-voltage safety hazards. Automating these tasks ensures that switches can be operated without putting workers at risk.

The importance of a reliable motor and gear system

When automating switchgear operation, the reliability of the motor and gear system is crucial. These components must perform flawlessly every time, ensuring consistent operation in all conditions, from routine use to extreme situations like storms or grid emergencies.

Given that the switchgear in power grids is designed to operate reliably for decades, motor operators must be engineered with exceptional durability and dependability to meet, and ideally exceed, these long-term performance requirements.

Standard off-the-shelf motors often fail to meet the specific demands of medium- and high-voltage switchgear systems. General-purpose motors are typically not engineered to withstand extreme environmental conditions or the high number of operational cycles required in the power grid.

At the same time, utilities need to modernize infrastructure without expanding vault sizes, and switchgear OEMs want to enhance functionality without altering layouts. A “drop-in” solution offers a seamless and straightforward way to integrate advanced automation into existing systems, saving time, reducing costs, and minimizing downtime.

To meet the unique challenges of medium- and high-voltage switchgear, motor and gear systems must balance two critical constraints—compact size and limited amperage—while still delivering exceptional performance in speed and torque.

Here’s why these attributes matter:

  • Compact size: Space is at a premium in power grid applications, especially for retrofits where manual switchgear is being converted to automated systems. So, motors must fit within the existing contours and confined spaces of switchgear installations. Even for new equipment, utilities demand compact designs to avoid costly expansions of service vaults or installation areas.
  • Limited amperage draw: Motors often need to operate on as little as 5 amps, far less than what’s typical for other applications. Developing a motor and gear system that performs reliably within such constraints is essential to ensuring compatibility with power grid environments.
  • High speed: Fast operation is critical for the safe and effective functioning of switchgear. The ability to open and close switches rapidly minimizes the risk of dangerous electrical arcs, which can cause severe equipment damage, pose safety hazards, and lead to cascading power grid failures.
  • High torque: Overcoming the significant spring force of switchgear components requires motors with high torque. This ensures smooth and consistent operation, even under demanding conditions.

The challenge lies in meeting all four of these requirements. Compact size and low amperage requirements often compromise the speed and torque needed for reliable performance. That’s why motor and gear systems must be specifically engineered and rigorously tested to meet the stringent demands of medium- and high-voltage switchgear applications. Only purpose-built solutions can provide the durability, efficiency, and reliability required to support the long-term stability of the power grid.
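To put rough numbers on that tension (illustrative values, not from any specific product): a 24-V control supply at the 5-A limit gives an electrical budget of 24 V × 5 A = 120 W. Mechanical power is torque × speed, so even a lossless drivetrain delivering 120 W at 300 rpm (about 31 rad/s) can produce only 120 W / 31 rad/s ≈ 3.8 N·m; any higher torque must come from gear reduction, which directly trades away operating speed.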

Meeting environmental and installation demands

Beyond size, power, and performance considerations, motor and gear systems for medium- and high-voltage switchgear must also meet stringent environmental and installation requirements.

For example, these systems are often exposed to extreme weather conditions, requiring watertight designs to ensure durability in harsh environments. This is especially critical for applications where switchgear is housed in underground vaults that may be prone to flooding or moisture intrusion. Additionally, using specialized lubrication that performs well in both high and low temperature extremes is essential to maintain reliability and efficiency.

Equally important is the ease of installation. Rotary motors provide a significant advantage over linear actuators in this regard. Linear actuators require precise calibration, a process that is time-consuming, labor-intensive, and potentially error-prone; rotary motors eliminate this complexity. Their straightforward setup not only reduces installation time but also enhances reliability by eliminating the need for manual adjustments.

To address the diversity of designs in switchgear systems produced by various OEMs, it is essential to work with a motor and gear manufacturer capable of delivering customized solutions. Retrofits often demand a tailored approach due to the unique configurations and requirements of different equipment. Partnering with a company that not only offers bespoke solutions but also has deep expertise in power grid applications is critical.

Future-proofing systems with reliable automation

Automating switchgear operation is a vital step in advancing the modernization of power grids, forming a critical component of smart grid development. Reliable, high-performance motor operators enhance operational efficiency and ensure longevity, providing a solid foundation for evolving power systems.

No matter where a utility is in its modernization journey, investing in durable and efficient motorized switch operators delivers lasting value. This forward-thinking approach not only enhances current operations but also ensures systems are ready to adapt and evolve as modernization advances.

Gary Dorough has advanced from sales representative to sales director for the Western United States and Canada during his 25-year stint at Bison, an AMETEK business. His experience includes 30 years of utility industry collaboration on harmonics mitigation and 15 years developing automated DC motor operators for medium-voltage switchgear systems.

Related Content

The post Beyond the current smart grid management systems appeared first on EDN.

A tutorial on instrumentation amplifier boundary plots—Part 1

Wed, 11/05/2025 - 05:11

In today’s information-driven society, there’s an ever-increasing preference to measure phenomena such as temperature, pressure, light, force, voltage and current. These measurements can be used in a plethora of products and systems, including medical diagnostic equipment, home heating, ventilation and air-conditioning systems, vehicle safety and charging systems, industrial automation, and test and measurement systems.

Many of these measurements require highly accurate signal-conditioning circuitry, which often includes an instrumentation amplifier (IA), whose purpose is to amplify differential signals while rejecting signals common to the inputs.

The most common issue when designing a circuit containing an IA is the misinterpretation of the boundary plot, also known as the common mode vs. output voltage, or VCM vs. VOUT plot. Misinterpreting the boundary plot can cause issues, including (but not limited to) signal distortion, clipping, and non-linearity.

Figure 1 depicts an example where the output of an IA such as the INA333 from Texas Instruments has distortion because the input signal violates the boundary plot (Figure 2).

Figure 1 Instrumentation amplifier output distortion is caused by VCM vs. VOUT violation. Source: Texas Instruments

Figure 2 This is how VOUT is limited by VCM. Source: Texas Instruments

This series about IAs will explain common- versus differential-mode signaling, basic operation of the traditional three-operational-amplifier (op amp) topology, and how to interpret and calculate the boundary plot.

This first installment will cover the common- versus differential-mode voltage and IA topologies, and show you how to derive the internal node equations and transfer function of a three-op-amp IA.

The IA topologies

While there are a variety of IA topologies, the traditional three-op-amp topology shown in Figure 3 is the most common and therefore will be the focus of this series. This topology has two stages: input and output. The input stage is made of two non-inverting amplifiers. The non-inverting amplifiers have high input impedance, which minimizes loading of the signal source.

Figure 3 This is what a traditional three-op-amp IA looks like. Source: Texas Instruments

The gain-setting resistor, RG, allows you to select any gain within the operating region of the device (typically 1 V/V to 1,000 V/V). The output stage is a traditional difference amplifier. The ratio of R2 to R1 sets the gain of the difference amplifier. The balanced signal paths from the inputs to the output yield an excellent common-mode rejection ratio (CMRR). Finally, the output voltage, VOUT, is referenced to the voltage applied to the reference pin, VREF.

Even though three-op-amp IAs are the most popular topology, other topologies, such as the two-op-amp IA, offer unique benefits (Figure 4). This topology has high input impedance and single-resistor-programmable gain. But since the signal path to the output for each input (V+IN and V-IN) is slightly different, this topology degrades CMRR performance, especially over frequency. As a result, this type of IA is typically less expensive than the traditional three-op-amp topology.

Figure 4 The schematic shows a two-op-amp IA. Source: Texas Instruments

The IA shown in Figure 5 has a two-op-amp IA input stage. The third op amp, A3, is the output stage, which applies gain to the signal. Two external resistors set the gain. Because of the imbalanced signal paths, this topology also has degraded CMRR performance (<90 dB). Therefore, devices with this topology are typically less expensive than traditional three-op-amp IAs.

Figure 5 A two-op-amp IA is shown with output gain stage. Source: Texas Instruments

While the aforementioned topologies are the most prevalent, there are several unique IAs, including current mirror, current feedback, and indirect current feedback.

Figure 6 depicts the current mirror topology. This type of IA is preferable because it enables an input common-mode range that extends to both supply voltage rails, also known as rail-to-rail input. However, this benefit comes at the expense of bandwidth. Compared to two-op-amp IAs, this topology yields better CMRR performance (100 dB or greater). Finally, this topology requires two external resistors to set the gain.

Figure 6 This is what the current mirror topology looks like. Source: Texas Instruments

Figure 7 shows a simplified schematic of the current feedback topology. This topology leverages super-beta transistors (Q1 and Q2) to buffer the input signal and force it across the gain-setting resistor, RG. The resulting current flows through R1 and R2, which create voltages at the outputs of A1 and A2. The difference amplifier, A3, then rejects the common-mode signal.

Figure 7 Simplified schematic displays the current feedback topology. Source: Texas Instruments

This topology is advantageous because super-beta transistors yield low input offset voltage, low offset voltage drift, low input bias current, and low input noise (current and voltage).

Figure 8 depicts the simplified schematic of an indirect current feedback IA. This topology has two transconductance amplifiers (gm1 and gm2) and an integrator amplifier (gm3). The differential input voltage is converted to a current (IIN) by gm1. The gm2 stage converts the feedback voltage (VFB-VREF) into a current (IFB). The integrator amplifier matches IIN and IFB by changing VOUT, thereby adjusting VFB.

Figure 8 This schematic highlights the indirect current feedback topology. Source: Texas Instruments

One significant difference when compared to the previous topology is the rejection of the common-mode signal. In current feedback IAs (and similar architectures), the common-mode signal is rejected by the output stage difference amplifier, A3. Indirect current feedback IAs, however, reject the common-mode signal immediately at the input (gm1). This provides excellent CMRR performance at DC and over frequency, independent of gain.

CMRR performance does not degrade if there is impedance on the reference pin (unlike other traditional IAs). Finally, this topology requires two resistors to set the gain, which may deliver excellent performance across temperature if the resistors have well-matched drift behavior.

Common- and differential-mode voltage

The common-mode voltage is the average voltage at the inputs of a differential amplifier. A differential amplifier is any amplifier (including op amps, difference amplifiers and IAs) that amplifies a differential signal while rejecting the common-mode voltage.

In the simplest representation, the differential source, VD, drives the non-inverting terminal while the inverting terminal connects to a constant voltage, VCM. Figure 9 depicts a more realistic definition of the input signal, where two voltage sources represent VD. Each source has half the magnitude of VD. Performing Kirchhoff’s voltage law around the input loop proves that the two representations are equivalent.
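In equation form, using the standard definitions: VCM = (V+IN + V-IN)/2 and VD = V+IN - V-IN, so the inputs can be written as V+IN = VCM + VD/2 and V-IN = VCM - VD/2.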

Figure 9 The above schematic shows an alternate definition of common- and differential-mode voltages. Source: Texas Instruments

Three-op-amp IA analysis

Understanding the boundary plot requires an understanding of three-op-amp IA fundamentals. Figure 10 depicts a traditional three-op-amp IA with an input signal—with the input and output nodes of A1, A2, and A3 labeled.

Figure 10 A three-op-amp IA is shown with input signal and node labels. Source: Texas Instruments

Equation 1 depicts the overall transfer function of the circuit in Figure 10 and defines the gain of the input stage, GIS, and the gain of the output stage, GOS. Notice that the common-mode voltage, VCM, does not appear in the output-voltage equation, because an ideal IA completely rejects common-mode input signals.
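For an ideal three-op-amp IA, Equation 1 takes the standard form:

VOUT = GIS × GOS × VD + VREF, where GIS = 1 + 2RF/RG and GOS = R2/R1      (Equation 1)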

Noninverting amplifier input stage

Figure 11 depicts a simplified circuit that enables the derivation of node voltages VIA1 and VOA1.

Figure 11 The schematic shows a simplified circuit for VIA1 and VOA1. Source: Texas Instruments

Equation 2 calculates VIA1:
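By the ideal op-amp virtual short, A1’s inverting node simply follows its non-inverting input:

VIA1 = V-IN = VCM - VD/2      (Equation 2)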

The analysis for VOA1 simplifies by applying the input-virtual-short property of ideal op amps. The voltage that appears at the RG pin connected to the inverting terminal of A2 is the same as the voltage at V+IN. Superposition results are shown in Equation 3, which simplifies to Equation 4.
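With the far end of RG held at V+IN, the standard superposition result is:

VOA1 = V-IN × (1 + RF/RG) - V+IN × (RF/RG)      (Equation 3)

VOA1 = VCM - (VD/2) × (1 + 2RF/RG)      (Equation 4)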

Applying a similar analysis to A2 (Figure 12) yields Equation 5, Equation 6 and Equation 7.

Figure 12 This is a simplified circuit for VIA2 and VOA2. Source: Texas Instruments
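Mirroring the A1 analysis:

VIA2 = V+IN = VCM + VD/2      (Equation 5)

VOA2 = V+IN × (1 + RF/RG) - V-IN × (RF/RG)      (Equation 6)

VOA2 = VCM + (VD/2) × (1 + 2RF/RG)      (Equation 7)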

Difference amplifier output stage

Figure 13 shows that A3, R1 and R2 make up the difference amplifier output stage, whose transfer function is defined in Equation 8.

Figure 13 The above schematic displays difference amplifier input (VDIFF). Source: Texas Instruments
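For the standard difference amplifier:

VOUT = (R2/R1) × (VOA2 - VOA1) + VREF = GOS × VDIFF + VREF      (Equation 8)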

Equation 9, Equation 10 and Equation 11 use the equations for VOA1 and VOA2 to derive VDIFF in terms of the differential input signal, VD, as well as RF and the gain-setting resistor, RG.

Substituting Equation 11 for VDIFF in Equation 8 yields Equation 12, which is the same as Equation 1.
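Condensing Equations 9 to 11, and then substituting into Equation 8:

VDIFF = VOA2 - VOA1 = VD × (1 + 2RF/RG)      (Equations 9 to 11)

VOUT = VD × (1 + 2RF/RG) × (R2/R1) + VREF      (Equation 12)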

In most IAs, the gain of the output stage is 1 V/V. If the gain of the output stage is 1 V/V, Equation 12 simplifies to Equation 13:
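VOUT = VD × (1 + 2RF/RG) + VREF      (Equation 13)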

Figure 14 is used to determine the equations for nodes VOA3 and VIA3.

Figure 14 This diagram highlights difference amplifier internal nodes. Source: Texas Instruments

The equation for VOA3 is the same as VOUT, as shown in Equation 14:
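VOA3 = VOUT = GIS × GOS × VD + VREF      (Equation 14)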

Using superposition as shown in Equation 15 determines the equation for VIA3. The voltage at the non-inverting node of A3 sets the amplifier’s common-mode voltage. Therefore, only VOA2 and VREF affect VIA3.
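For the resistor divider at A3’s non-inverting input, superposition gives:

VIA3 = VOA2 × R2/(R1 + R2) + VREF × R1/(R1 + R2)      (Equation 15)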

Since GOS=R2/R1, Equation 15 can be rewritten as Equation 16:
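VIA3 = (VOA2 × GOS + VREF) / (1 + GOS)      (Equation 16)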

Part 2 highlights

The second part of this series will use the equations from the first part to plot each internal amplifier’s input common-mode and output-swing limitation as a function of the IA’s common-mode voltage.

Peter Semig is an applications manager in the Precision Signal Conditioning group at Texas Instruments (TI). He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.

Related Content

The post A tutorial on instrumentation amplifier boundary plots—Part 1 appeared first on EDN.

ADI upgrades its embedded development platform for AI

Tue, 11/04/2025 - 21:45

Analog Devices, Inc. simplifies embedded AI development with its latest CodeFusion Studio release, offering a new bring-your-own-model capability, unified configuration tools, and a Zephyr-based modular framework for runtime profiling. The upgraded open-source embedded development platform delivers advanced abstraction, AI integration, and automation tools to streamline the development and deployment of ADI’s processors and microcontrollers (MCUs).

CodeFusion Studio 2.0 is now the single entry point for development across all ADI hardware, supporting 27 products today, up from five when the platform was first introduced in 2024.

Jason Griffin, ADI’s managing director, software and AI strategy, said the release of CodeFusion Studio 2.0 is a major leap forward in ADI’s developer-first journey, bringing an open extensible architecture across the company’s embedded ecosystem with innovation focused on simplicity, performance, and speed.

CodeFusion Studio 2.0 streamlines embedded AI development. (Source: Analog Devices Inc.)

A major goal of CodeFusion Studio 2.0 is to help teams move faster from evaluation to deployment, Griffin said. “Everything from SDK [software development kit] setup and board configuration to example code deployment is automated or simplified.”

Griffin calls it a “complete evolution of how developers build on ADI technology,” by unifying embedded development, simplifying AI deployment, and providing performance visibility in one cohesive environment. “For developers and customers, this means faster design cycles, fewer barriers, and a shorter path from idea to production.”

A unified platform and streamlined workflow

CodeFusion Studio 2.0, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools, and optimization capabilities. The unified configuration tools reduce complexity across ADI’s hardware ecosystem.

The new Zephyr-based modular framework enables runtime AI/ML workload profiling, offering layer-by-layer analysis and integration with ADI’s heterogeneous platforms. This eliminates toolchain fragmentation, which simplifies ML deployment and reduces complexity, Griffin noted.

“One of the biggest challenges that developers face with multicore SoCs [system on chips] is juggling multiple IDEs [integrated development environments], toolchains, and debuggers,” Griffin explained. “Each core, whether Arm, DSP [digital signal processor], or MPU [microprocessor], comes with its own setup, and that fragmentation slows teams down.

“In CodeFusion Studio 2.0, that changes completely,” he added. “Everything now lives in a single unified workspace. You can configure, build, and debug every core from one environment, with shared memory maps, peripheral management, and consistent build dependencies. The result is a streamlined workflow that minimizes context switching and maximizes focus, so developers spend less time on setup and more time on system design and optimization.”

CodeFusion Studio System Planner has also been updated to support multicore applications and expanded device compatibility. It now includes interactive memory allocation, improved peripheral setup, and streamlined pin assignment.

CodeFusion Studio 2.0 adds interactive memory allocation. (Source: Analog Devices Inc.)

The growing complexity in managing cores, memory, and peripherals in embedded systems is becoming overwhelming, Griffin said. The system planner gives “developers a clear graphical view of the entire SoC, letting them visualize cores, assign peripherals, and define inter-core communication all in one workspace.”

In addition, with cross-core awareness, the environment validates shared resources automatically.

Another challenge is system optimization, which is addressed with multicore profiling tools, including the Zephyr AI profiler, system event viewer, and ELF file explorer.

“Understanding how the system behaves in real time, and finding where your performance can improve is where the Zephyr AI profiler comes in,” Griffin said. “It measures and optimizes AI workflows across ADI hardware from ultra-low-power edge devices to high-performance multicore systems. It supports frameworks like TensorFlow Lite Micro and TVM, profiling latency, memory and throughput in a consistent and streamlined way.”

Griffin said the system event viewer acts like a built-in logic analyzer, letting developers monitor events, set triggers, and stream data to see exactly how the system behaves. It’s invaluable for analyzing synchronization and timing across cores, he said.

The ELF file explorer provides a graphical map of memory and flash usage, helping teams make smarter optimization decisions.

CodeFusion Studio 2.0 also gives developers the ability to download SDKs, toolchains, and plugins on demand, with optional telemetry for diagnostics and multicore support.

Doubling down on AI

CodeFusion Studio 2.0 simplifies the development of AI-enabled embedded systems with support for complete end-to-end AI workflows. This enables developers to bring their own models and deploy them across ADI’s range of processors, from low-power edge devices to high-performance DSPs.

“We’ve made the workflow dramatically easier,” Griffin said. “Developers can now import, convert, and deploy AI models directly to ADI hardware. No more stitching together separate tools. With the AI deployment tools, you can assign models to specific cores, verify compatibility, and profile performance before runtime, ensuring every model runs efficiently on the silicon right from the start.”

Manage AI models with CodeFusion Studio 2.0 from import to deployment. (Source: Analog Devices Inc.)

Easier debugging

CodeFusion Studio 2.0 also adds new integrated debugging features that bring real-time visibility across multicore and heterogeneous systems, enabling faster issue resolution, shorter debug cycles, and more intuitive troubleshooting in a unified debug experience.

One of the toughest parts of embedded development is debugging multicore systems, Griffin noted. “Each core runs its own firmware on its own schedule, often with its own toolchain, making full visibility a challenge.”

CodeFusion Studio 2.0 solves this problem, he said. “Our new unified debug experience gives developers real-time visibility across all cores—CPUs, DSPs, and MPUs—in one environment. You can trace interactions, inspect shared resources, and resolve issues faster without switching between tools.”

Developers spend more than 60% of their time on debugging, Griffin said, and ADI wanted to address this challenge and reduce that time sink.

CodeFusion Studio 2.0 now includes core dump analysis and advanced GDB integration, which includes custom JSON and Python scripts for both Windows and Linux with multicore support.

A big advance is debugging with multicore GDB, core dump analysis, and RTOS awareness working together in one intelligent, uniform experience, Griffin said.

“We’ve added core dump analysis, built around Zephyr RTOS, to automatically extract and visualize crash data; it helps pinpoint root causes quickly and confidently,” he continued. “And the new GDB toolbox provides advanced scripting, performance tracing, and automation, making it the most capable debugging suite ADI has ever offered.”

The ultimate goal is to accelerate development and reduce risk for customers, which is what the unified workflows and automation provide, he added.

Future releases are expected to focus on deeper hardware-software integration, expanded runtime environments, and new capabilities, targeting growing developer requirements in physical AI.

CodeFusion Studio 2.0 is now available for download. Other resources include documentation and community support.

The post ADI upgrades its embedded development platform for AI appeared first on EDN.

32-bit MCUs deliver industrial-grade performance

Tue, 11/04/2025 - 21:04

GigaDevice Semiconductor Inc. launches a new family of high-performance GD32 32-bit general-purpose microcontrollers (MCUs) for a range of industrial applications. The GD32F503/505 32-bit MCUs expand the company’s portfolio based on the Arm Cortex-M33 core. Applications include digital power supplies, industrial automation, motor control, robotic vacuum cleaners, battery management systems, and humanoid robots.

GigaDevice's GD32F503/505 32-bit MCUs. (Source: GigaDevice Semiconductor Inc.)

Built on the Arm v8-M architecture, the GD32F503/505 series offers flexible memory configurations, high integration, and built-in security functions, and features an advanced digital signal processor, a hardware accelerator, and a single-precision floating-point unit. The GD32F505 operates at a frequency of 280 MHz, while the GD32F503 runs at 252 MHz. Both devices achieve up to 4.10 CoreMark/MHz and 1.51 DMIPS/MHz.

The series offers up to 1024 KB of Flash and 192 KB of SRAM. Users can allocate code-flash, data-flash, and SRAM locations through scatter loading based on their specific application, which lets them tailor memory resources to their requirements, GigaDevice said.

The GD32F503/505 series also integrates a set of peripheral resources, including three analog-to-digital converters with a sampling rate of up to 3 MSPS (supporting up to 25 channels), one fast comparator, and one digital-to-analog converter. For connectivity, it supports up to three SPIs, two I2Ss, two I2Cs, three USARTs, two UARTs, two CAN-FDs, and one USBFS interface.

The timing system features one 32-bit general-purpose timer, five 16-bit general-purpose timers, two 16-bit basic timers, and two 16-bit PWM advanced timers. This translates into precise and flexible waveform control and robust protection mechanisms for applications such as digital power supplies and motor control.

The operating voltage range of the GD32F503/505 series is 2.6 V to 3.6 V, and it operates over the industrial-grade temperature range of -40°C to 105°C. It also offers three power-saving modes for maximizing power efficiency.

These MCUs also provide high-level ESD protection with contact discharge up to 8 kV and air discharge up to 15 kV. Their HBM/CDM immunity is stable at 4,000 V/1,000 V even after three zap tests, demonstrating reliability margins that exceed conventional standards for sectors such as industrial equipment and home appliances, GigaDevice said.

In addition, the MCUs provide multi-level protection of code and data, supporting firmware upgrades, integrity and authenticity verification, and anti-rollback checks. Device security includes a secure boot and secure firmware update platform, along with hardware security features such as user secure storage areas. Other features include a built-in hardware security engine integrating SHA-256 hash algorithms, AES-128/256 encryption algorithms, and a true random number generator. Each device has a unique independent UID for device authentication and lifecycle management.

A multi-layered hardware security mechanism is centered around multi-channel watchdogs, power and clock monitoring, and hardware CRC. In addition, the GD32F5xx series’ software test library is certified in Germany to IEC 61508 SC3 (SIL 2/SIL 3) for functional safety. The series provides a complete safety package, including key documents such as a safety manual, FMEDA report, and safety self-test library.

The GD32 MCUs feature a full-chain development ecosystem. This includes the free GD32 Embedded Builder IDE, GD-LINK debugging, and the GD32 all-in-one programmer. Tool providers such as Arm, KEIL, IAR, and SEGGER also support this series, spanning compilation, development, and trace debugging.

The GD32F503/505 series is available in several package types, including LQFP100/64/48, QFN64/48/32, and BGA64. Samples are available, along with datasheets, software libraries, ecosystem guides, and supporting tools. Development boards are available on request. Mass production is scheduled to start in December. The series will be available through authorized distributors.

The post 32-bit MCUs deliver industrial-grade performance appeared first on EDN.

Board-to-board connectors reduce EMI

Tue, 11/04/2025 - 20:49

Molex LLC develops a quad-row shield connector, claiming the industry’s first space-saving, four-row signal pin layout with a metal electromagnetic interference (EMI) shield. The quad-row board-to-board connector achieves up to a 25-dB reduction in EMI compared to a non-shielded quad-row solution. These connectors are suited for space-constrained applications such as smart watches and other wearables, mobile devices, AR/VR applications, laptops, and gaming devices.
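For scale, a 25-dB improvement corresponds to roughly an 18× reduction in interfering field amplitude (10^(25/20) ≈ 17.8), or about a 316× reduction in coupled power.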

Shielding protects connectors from external electromagnetic noise, such as nearby components and far-field devices, that can cause signal degradation, data errors, and system faults. The quad-row connectors eliminate the need for external shielding parts and grounding by incorporating the EMI shield, which saves space and simplifies assembly. The integrated shield also improves reliability and signal integrity.

Molex's quad-row board-to-board connectors. (Source: Molex LLC)

The quad-row board-to-board connectors, first released in 2020, offer a 30% size reduction over dual-row connector designs, Molex said. The space-saving staggered-circuit layout positions pins across four rows at a signal contact pitch of 0.175 mm.

Targeting EMI challenges at 2.4 GHz to 6 GHz and higher, the quad-row layout, with the addition of an EMI shield, mitigates both electromagnetic and radio frequency (RF) interference, as well as the signal-integrity issues that such noise creates.

The quad-row shield meets stringent EMI/EMC standards to lower regulatory testing burdens and speed product approvals, Molex said.

The new design also addresses the most significant requirements related to signal interference and incremental power. These include how best to achieve 80 times the signal connections and four times the power delivery of a single-pin connector, Molex said.

The quad-row connectors offer a 3-A current rating to meet customer requirements for high power in a compact design. Other specifications include a voltage rating of 50 V, a dielectric withstanding voltage of 250 V, and a rated insulation resistance of 100 megohms.

Samples of the quad-row shield connectors are available now to support custom inquiries. Commercial availability will follow in the second quarter of 2026.

The post Board-to-board connectors reduce EMI appeared first on EDN.

5-V ovens (some assembly required)—part 2

Tue, 11/04/2025 - 17:28

In the first part of this Design Idea (DI), we looked at simple ways of keeping critical components at a constant temperature using a linear approach. In this second part, we’ll investigate something PWM-based, which should be more controllable and hence give better results.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Adding PWM to the oven

As before, this starts with a module based on a TO-220 package, the tab of which makes a decent hotplate on which our target component(s) can be mounted. Figure 1 shows this new circuit, which compares the voltage from a thermistor/resistor pair with a tri-wave and uses the result to vary the duty cycle of the heating current. Varying the amplitude and level of that tri-wave lets us tune the circuit’s performance.

This looks too simple and perhaps obvious to be completely original, but a quick search found nothing very similar. At least this was designed from scratch.

Figure 1 A tri-wave oscillator, a thermistor, and a comparator work together to pulse-width modulate the current through R7, the main heating element. Q1 switches that current and also helps with the heating.

U1a forms a conventional oscillator running at around 1 kHz. Neither the frequency nor the exact wave-shape on C1 is critical. R1 and R2+R3 determine the tri-wave’s offset, and R4 its amplitude. U1b compares the voltage across the thermistor with the tri-wave, as shown in Figure 2. When the temperature is low so that voltage is higher than any part of the tri-wave, U1b’s output will be solidly low, turning on Q1 to heat up R7 as fast as possible.

As the temperature rises, the voltages start to overlap and proportional control kicks in, progressively reducing the on-time so that the heat input is proportional to the difference between the actual and target temperatures. By the time the set-point has been reached, the on-time is down to ~18%. This scheme minimizes or even eliminates overshoot. (Thermal time-constants—ignored for the moment—can upset this a little.)

Figure 2 Oscilloscope captures showing the operation of Figure 1’s circuit.

Once the circuit is stable, Th1 will have the same resistance as R6, or 3.36 kΩ at our nominal target of 50°C (or 50.03007…°C, assuming perfect components), so Figure 1’s point B will be at half-rail. To keep that balance, the tri-wave must be offset upwards so that slicing gives our 18% figure at the set-point. Setting R3 to 1k0 achieved that. The performance after starting can be seen in Figure 3. (The first 40 seconds or so is omitted because it’s boring.)
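For the curious, that nominal figure can be checked with the thermistor's beta model. Here is a minimal Python sketch, assuming a common 10-kΩ (at 25°C) NTC with β = 4200 K (both values are assumptions for illustration, not Th1's actual datasheet parameters):

import math

# Assumed NTC parameters -- substitute the real part's datasheet values.
R0 = 10_000.0   # resistance at T0, in ohms (assumed: 10 k at 25 degC)
T0 = 298.15     # reference temperature, in kelvin (25 degC)
BETA = 4200.0   # beta constant, in kelvin (assumed)

def ntc_temperature(r_ohms):
    # Invert the beta model R = R0 * exp(BETA * (1/T - 1/T0)) for T.
    inv_t = 1.0 / T0 + math.log(r_ohms / R0) / BETA
    return 1.0 / inv_t - 273.15  # degC

# Balance point: Th1 equals R6 (3.36 k), so point B sits at half-rail.
print(ntc_temperature(3360.0))  # ~50.0 degC with the assumed constants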

Figure 3 From cold, Figure 1’s circuit stabilizes in two to three minutes. The upper trace is U1b’s output, heavily filtered. Also shown are Th1’s temperature (magenta) and that of the hotplate as measured by an external thermistor probe (cyan).

The use of Q1 as an over-driven emitter follower needs some explanation. First thoughts were to use an NPN Darlington or an n-MOSFET as a switch (with U1b’s inputs swapped), but that meant that the collector or drain—which we want to use as a hotplate—would be flapping up and down at the switching frequency.

While the edges are slowish, they could still couple capacitively to a target device: potentially bad news. With a PNP Darlington, the collector can be at ground, give or take a handful of millivolts. (The fine copper wire used to connect the module to the outside world has a resistance of about 1 Ω per meter.) Q1 drops ~1.3 V and so provides about a third of the heating, rather like the corresponding device in Part 1. This is a good reason to stay with the idea of using a TO-220’s tab as that hotplate—at least for the moment. Q1 could be a p-MOSFET, but R7 would then need to be adjusted to suit its (highly variable) VGS(on): fiddly and unrealistic.

LED1 starts to turn on once the set-point is near and becomes brighter as the duty cycle falls. This worked as well in practice as the long-tailed pair approach used in Part 1’s Figure 4.

The duty cycle is given as 18%, but where does that figure come from? It’s the proportion of the input heat that leaks out once the circuit has stabilized, and that depends on how well the module is thermally insulated and how thin the lead-out wires are. With a maximum heating current of 120 mA (600 mW in), practical tests gave that 18% figure, implying that ~108 mW is being lost. With a temperature differential of ~30°C, that corresponds to an overall thermal resistance of ~280°C/W. (Many DIL ICs are quoted as around 100°C/W.)

Some more assembly required

The final build is mechanically quite different and uses a custom-built hotplate instead of a TO-220’s tab. It’s shown in Figure 4.

Figure 4 Our new hotplate is a scrap of copper sheet with the heater resistors glued to it symmetrically, with Th1 on one side and room for the target component(s) on the other. The third picture shows it fixed to the lower block of insulating foam, with fine wires meandered and ready for terminating. Not shown: an extra wire to ground the copper. Please excuse the blobby epoxy. I’d never get a job on a production line.

R7 now comprises four 33-Ω resistors in series/parallel, which are epoxied towards the ends of a piece of copper, two on each side, with Th1 centered on one side. The other side becomes our hotplate area, with a sweet spot directly above the thermistor. Thermally, it is symmetrical, so that—all other things being equal, which they rarely are—our target component will be heated exactly like Th1.

The drive circuit is a variant on Figure 1, the main difference being Q1, which can now be a small but low-RON n-MOSFET as it’s no longer intended to dissipate any power. R3 and R4 are changed to give a tri-wave amplitude of ~500 mV pk–pk at a frequency of ~500 Hz to optimize the proportional control. Figure 5 and Figure 6 show the schematic and its performance. It now stabilizes within a degree after one minute and perhaps a tenth after two, with decent tracking between the internal (Th1) and hotplate temperatures. The duty cycle is higher, largely owing to the different construction; more (and bulkier) insulation would have reduced it, improving efficiency.

Figure 5 The driving circuit for the new hotplate.

Figure 6 How Figure 5’s circuit performs.

The intro to Part 1 touched on my original oven, which needed to stabilize the operation of a logarithmically tuned oscillator. It used a circuit similar to Part 1’s Figure 5 but had a separate power transistor, whose dissipation was wasted. The logging diode was surrounded by a thermally-insulated cradle of heating resistors and the control thermistor.

It worked well and still does, but these circuits improve on it. Time for a rebuild? If so, I’ll probably go for the simplest, Part 1/Figure 1 approach. For higher-power use, Figure 5 (above) could probably be scaled to use different heating resistors fed from a separate and larger voltage. Time for some more experimental fun, anyway.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content

The post 5-V ovens (some assembly required)—part 2 appeared first on EDN.

Achieving analog precision via components and design, or just trim and go

Tue, 11/04/2025 - 07:28

I finally managed to get to a book that has been on my “to read” list for quite a while: “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” (2022) by James Vincent (Figure 1). It traces the evolution of measurement, its uses, and its motivations, and how measurement has shaped our world, from ancient civilizations to the modern day. On a personal note, I found the first two-thirds of the book a great read, but in my opinion, the last third wandered and meandered off topic. Regardless, it was a worthwhile book overall.

Figure 1 This book provides a fascinating journey through the history of measurement and how different advances combined over the centuries to get us to our present levels. Source: W. W. Norton & Company Inc.

Several chapters deal with the way measurement and the nascent science of metrology were used in two leading manufacturing entities of the early 20th century: Rolls-Royce and Ford Motor Company, and the manufacturing differences between them.

Before looking at the differences, you have to “reset” your frame of reference and recognize that even low-to-moderate volume production in those days involved a lot of manual fabrication of component parts, as the “mass production” machinery we now take as a given often didn’t exist or could only do rough work.

Rolls-Royce made (and still makes) fine motor cars, of course. Much of their quality was not just in the finish and accessories; the car was entirely mechanical except for the ignition system. They featured a finely crafted and tuned powertrain. It’s not a myth that you could balance a filled wine glass on the hood (bonnet) while the engine was running and not see any ripples in the liquid’s surface. Furthermore, you could barely hear the engine at all at a time when cars were fairly noisy.

They achieved this level of performance using careful and laborious manual adjustments, trimming, filing, and balancing of the appropriate components to achieve that near-perfect balance and operation. Clearly, this was a time-consuming process requiring skilled and experienced craftspeople. It was “mass production” only in terms of volume, but not in terms of production flow as we understand it today.

In contrast, Henry Ford focused on mass production with interchangeable parts that would meet design objectives immediately when assembled. Doing so required advances in measurement of the components at Ford’s factory to weed out incoming substandard parts and statistical analysis of quality, conformance, and deviations. Ford also sent specialists to suppliers’ factories to improve both their production processes and their own metrology.

Those were the days

Of course, those were different times in terms of industrial production. When the Wright brothers needed a gasoline engine for the 1903 Flyer, few “standard” engine choices were available, and none came close to their size, weight, and output-power needs.

So, their in-house mechanic Charlie Taylor machined an aluminum engine block, fabricated most parts, and assembled an engine (1903 Wright Engine) in just six weeks using a drill press and a lathe; it produced 12 horsepower, 50% above the 8 horsepower that their calculations indicated they needed (Figure 2).

Figure 2 Perhaps in an ultimate do-it-yourself project, Charlie Taylor, the Wright brothers’ mechanic, machined the aluminum engine block, fabricated most of the parts, and then assembled the complete engine in six weeks (reportedly working only from rough sketches). Source: Wright Brothers Aeroplane Company

Which approach is better—fine adjusting and trims, or use of a better design and superior components? There’s little doubt that the “quality by components” approach is the better tactic in today’s world where even customized cars make use of many off-the-shelf parts.

Moreover, the required volume for a successful car-production line mandates avoiding hand-tuning of individual vehicles to make their components plug-and-play properly. Even Rolls-Royce now uses the Ford approach, of course; the alternative is impractical for modern vehicles except for special trim and accessories.

Single unit “perfection” uses both approaches

In some cases, both calibration and use of better topology and superior components combine for a unique design. Not surprisingly, a classic example is one of the first EDN articles by the late analog-design genius Jim Williams, “This 30-ppm scale proves that analog designs aren’t dead yet”. Yes, I have cited it in previous blogs, and that’s no accident (Figure 3).

Figure 3 This 1976 EDN article by Jim Williams set a standard for analog signal-chain technical expertise and insight that has rarely been equaled. Source: EDN

In the article, he describes his step-by-step design concept and fabrication process for a portable weigh scale that would offer 0.02% absolute accuracy (0.01 lb over a 300-pound range). Yet, it would never need adjustment to be put into use. Even though this article is nearly 50 years old, it still has relevant lessons for our very different world.

I believe that Jim did a follow-up article about 20 years later, where he revisited and upgraded that design using newer components, but I can’t find it online.

Today’s requirements were unimaginable—until recently

Use of in-process calibration is advancing due to techniques such as laser-based interferometry. For example, in semiconductor wafer-processing equipment, the positional accuracy of the carriage that moves over the wafer needs to be in the sub-micrometer range.

While this level of performance can be achieved with friction-free air bearings, they cannot be used in extreme-ultraviolet (EUV) systems since those operate in an ultravacuum environment. Instead, high-performance mechanical bearings must be used, even though they are inferior to air bearings.

There are micrometer-level errors in the x-axis and y-axis, and the two axes are also not perfectly orthogonal, resulting in a system-level error typically greater than several micrometers across the 300 × 300-mm plane. To compensate, manufacturers add interferometry-based calibration of the mechanical positioning systems to determine the error topography of a mechanical platform.

For example, with a 300-mm wafer, the grid is scanned in 10-mm steps, and the interferometer determines the actual position. This value is compared against the motion-encoder value to determine a corrective offset. After this mapping, the system accuracy is improved by a factor of 10 and can achieve an absolute accuracy of better than 0.5 µm in the x-y plane.
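Here is a sketch of how such a correction map can be applied, assuming a 10-mm calibration grid over the 300-mm travel and simple bilinear interpolation between grid points (the data layout and interpolation choice are illustrative, not any particular vendor's algorithm):

import numpy as np

STEP = 10.0   # mm between calibration grid points, per the scan described above
N = 31        # grid points covering 0 to 300 mm inclusive

# Offset tables: (interferometer reading - encoder reading) captured once
# during calibration; zeros here stand in for real measured data.
offsets_x = np.zeros((N, N))
offsets_y = np.zeros((N, N))

def corrected_position(x_enc, y_enc):
    # Bilinearly interpolate the calibration offsets at an encoder position.
    i, j = x_enc / STEP, y_enc / STEP
    i0, j0 = int(i), int(j)
    i1, j1 = min(i0 + 1, N - 1), min(j0 + 1, N - 1)
    fx, fy = i - i0, j - j0
    def interp(table):
        return ((1 - fx) * (1 - fy) * table[i0, j0]
                + fx * (1 - fy) * table[i1, j0]
                + (1 - fx) * fy * table[i0, j1]
                + fx * fy * table[i1, j1])
    return x_enc + interp(offsets_x), y_enc + interp(offsets_y)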

Maybe too smart?

Of course, there are times when you can be a little too clever in the selection of components when working to improve system performance. Many years ago, I worked for a company making controllers for large machines, and there was one circuit function that needed coarse adjustment for changes in ambient temperature. The obvious way would have been to use a thermistor or a similar component in the circuit.

But our lead designer—a circuit genius by any measure—had a “better” idea. Since the design used a lot of cheap, 100-kΩ pullup resistors with poor temperature coefficient, he decided to use one of those instead of the thermistor in the stabilization loop, as they were already on the bill of materials. The bench prototype and pilot-run units worked as expected, but the regular production units had poor performance.

Long story short: our brilliant designer had based circuit stabilization on the deliberately poor tempco of these re-purposed pull-up resistors and the associated loop dynamic range. However, our in-house purchasing agent got a good deal on some resistors of the same value and size, but with a much tighter tempco. After all, getting a better component that was functionally and physically identical for less money seemed like a win-win.

That was fine for the pull-up role, but it meant that the transfer function of temperature to resistance was severely compressed. Identifying that problem took a lot of aggravation and time.

What’s your preferred approach to achieving a high-precision, accurate, stable analog-signal chain and front-end? Have you used both methods, or are you inherently partial to one over the other? Why?

Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.

Related Content

The post Achieving analog precision via components and design, or just trim and go appeared first on EDN.

LED illumination addresses ventilation (at the bulb, at least)

Mon, 11/03/2025 - 15:37

The bulk of the technologies and products based on them that I encounter in my everyday interaction with the consumer electronics industry are evolutionary (and barely so in some cases) versus revolutionary in nature. A laptop computer, a tablet, or a smartphone might get a periodic CPU-upgrade transplant, for example, enabling it to complete tasks a bit faster and/or a bit more energy-efficiently than before. But the task list is essentially the same as was the case with the prior product generation…and the generation before that…and…not to mention that the generational-cadence physical appearance also usually remains essentially the same.

Such cadence commonality is also the case with many LED light bulbs I’ve taken apart in recent years, in no small part because they’re intended to visually mimic incandescent precursors. But SANSI has taken a more revolutionary tack, in the process tackling an issue—heat—with which I’ve repeatedly struggled. Say what you (rightly) will about incandescent bulbs’ inherent energy inefficiency, along with the corresponding high temperature output that they radiate—there’s a fundamental reason why they were the core heat source for the Easy-Bake Oven, after all:

But consider, too, that they didn’t integrate any electronics; the sole failure points were the glass globe and filament inside it. Conversely, my installation of both CFL and LED light bulbs within airflow-deficient sconces in my wife’s office likely hastened both their failure and preparatory flickering, due to degradation of the capacitors, voltage converters and regulators, control ICs and other circuitry within the bulbs as well as their core illumination sources.

Evolutionary vs revolutionary

That’s why SANSI’s comparatively fresh approach to LED light bulb design, which I alluded to in the comments of my prior teardown, has intrigued me ever since I first saw and immediately bought both 2700K “warm white” and 5000K “daylight” color-temperature multiple-bulb sets on sale at Amazon two years ago:

They’re smaller A15, not standard A19, in overall dimensions, although the E26 base is common between the two formats, so they can generally still be used in place of incandescent bulbs (although, unlike incandescents, these particular LED light bulbs are not dimmable):

Note, too, their claimed 20% brighter illumination (900 vs 750 lumens) and 5x estimated longer usable lifetime (25,000 hours vs 5,000 hours). Key to that latter estimation, however, is not only the bulb’s inherent improved ventilation:

Versus metal-swathed and otherwise enclosed-circuitry conventional LED bulb alternatives:

But it is also the ventilation potential (or not) of wherever the bulb is installed, as the “no closed luminaires” warning included on the sticker on the left side of the SANSI packaging makes clear:

That said, even if your installation situation involves plenty of airflow around the bulb, don’t forget that the orientation of the bulb is important, too. Specifically, since heat rises, if the bulb is upside-down with the LEDs underneath the circuitry, the latter will still tend to get “cooked”.

Perusing our patient

Enough of the promo pictures. Let’s now look at the actual device I’ll be tearing down today, starting with the remainder of the box-side shots, in each case, and as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Open ‘er up:

lift off the retaining cardboard layer, and here’s our 2700K four-pack, which (believe it or not) had set me back only $4.99 ($1.25/bulb) two years back:

The 5000K ones I also bought at that same time came as a two-pack, also promo-priced, this time at $4.29 ($2.15/bulb). Since they ended up being more expensive per bulb, and because I have only two of them, I’m not currently planning on also taking one of them apart. But I did temporarily remove one of them and replace it in the two-pack box with today’s victim, so you could see the LED phosphor-tint difference between them. 5000K on left, 2700K on right; I doubt there’s any other design difference between the two bulbs, but you never know…🤷‍♂️

Aside from the aforementioned cardboard flap for position retention above the bulbs and a chunk of Styrofoam below them (complete with holes for holding the bases’ end caps in place):

There’s no other padding inside, which might have proven tragic if we were dealing with glass-globe bulbs or flimsy filaments. In this case, conversely, it likely suffices. Also note the cleverly designed sliver of literature at the back of the box’s insides:

Now, for our patient, with initial overview perspectives of the top:

Bottom:

And side:

Check out all those ventilation slots! Also note the clips that keep the globe in place:

Before tackling those clips, here are six sequential clockwise-rotation shots of the side markings. I’ll leave it to you to mentally “glue” the verbiage snippets together into phrases and sentences:

Diving in for illuminated understanding

Now for those clips. Downside: they’re (understandably, given the high voltage running around inside) stubborn. Upside: no even-more-stubborn glue!

Voila:

Note the glimpses of additional “stuff” within the base, thanks to the revealing vents. Full disclosure and identification of the contents is our next (and last) aspiration:

As usual, twist the end cap off with a pair of tongue-and-groove slip-joint (“Channellock”) pliers:

and the ceramic substrate (along with its still-connected wires and circuitry, of course) dutifully detaches from the plastic base straightaway:

Not much to see on the ceramic “plate” backside this time, aside from the 22µF 200V electrolytic capacitor poking through:

Integrated and otherwise simple = Cheap

The frontside is where most of the “action” is:

At the bottom is a mini-PCB that mates the capacitor and wires’ soldered leads to the ceramic substrate-embedded traces. Around the perimeter, of course, is the series-connected chain of 17 (if I’ve counted correctly) LEDs with their orange-tinted phosphor coatings, spectrum-tuned to generate the 2700K “warm white” light. And the three SMD resistors scattered around the substrate, two next to an IC in the upper right quadrant (33Ω, marked “33R0”, and 20Ω, marked “20R0”) and another (330kΩ, marked “334”) alongside a device at left, are also obvious.
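Those markings follow the standard SMD resistor conventions: an “R” serves as the decimal point, while the last digit of a three-digit code is a power-of-ten multiplier. Here is a quick illustrative Python decoder of the generic convention, not anything specific to this bulb:

def decode_smd(code):
    """Decode common SMD resistor markings to ohms."""
    if "R" in code:                       # "33R0" -> 33.0 ohms
        return float(code.replace("R", "."))
    *digits, mult = code                  # "334" -> 33 * 10**4 = 330 kilohms
    return int("".join(digits)) * 10 ** int(mult)

for mark in ("33R0", "20R0", "334"):
    print(mark, "->", decode_smd(mark), "ohms")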

Those two chips ended up generating the bulk of the design intrigue, in the latter case still an unresolved mystery (at least to me). The one at upper right is marked, alongside a company logo that I’d not encountered before, as follows:

JWB1981
1PC031A

The package also looks odd; the leads on both sides are asymmetrically spaced, and there’s an additional (fourth) lead on one side. But thanks to one of the results from my Google search on the first-line term, in the form of a Hackaday post that then pointed at an informative video:

This particular mystery has, at least I believe, been solved. Quoting from the Hackaday summary (with hyperlinks and other augmentations added by yours truly):

The chip in question is a Joulewatt JWB1981, for which no datasheet is available on the internet [BD note: actually, here it is!]. However, there is a datasheet for the JW1981, which is a linear LED driver. After reverse-engineering the PCB, bigclivedotcom concluded that the JWB1981 must [BD note: also] include an onboard bridge rectifier. The only other components on the board are three resistors, a capacitor, and LEDs.

The first resistor limits the inrush current to the large smoothing capacitor. The second resistor is to discharge the capacitor, while the final resistor sets the current output of the regulator. It is possible to eliminate the smoothing capacitor and discharge resistor, as other LED circuits have done, which also allows the light to be dimmable. However, this results in a very annoying flicker of the LEDs at the AC frequency, especially at low brightness settings.

Compare the resultant schematic shown in the video with one created by EDN’s Martin Rowe, done while reverse-engineering an A19 LED light bulb at the beginning of 2018, and you’ll see just how cost-effective a modern design approach like this can be.

That only leaves the chip at left, with two visible soldered contacts (one on each end), and bare on top save for a cryptic rectangular mark (which leaves Google Lens thinking it’s the guts of a light switch, believe it or not). It’s not referenced in “Big Clive’s” deciphered design, and I can’t find an image of anything like it anywhere else. Diode? Varistor to protect against voltage surges? Resettable fuse to handle current surges? Multiple of these? Something(s) else? Post your [educated, preferably] guesses, along with any other thoughts, in the comments!

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, a Senior Analyst at BDTI, and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post LED illumination addresses ventilation (at the bulb, at least) appeared first on EDN.

Makefile vs. YAML: Modernizing verification simulation flows

Mon, 11/03/2025 - 12:12

Automation has become the backbone of modern SystemVerilog/UVM verification environments. As designs scale from block-level modules to full system-on-chips (SoCs), engineers rely heavily on scripts to orchestrate compilation, simulation, and regression. The effectiveness of these automation flows directly impacts verification quality, turnaround time, and team productivity.

For many years, the Makefile has been the tool of choice for managing these tasks. With its rule-based structure and wide availability, Makefile offered a straightforward way to compile RTL, run simulations, and execute regressions. This approach served well when testbenches were relatively small and configurations were simple.

However, as verification complexity exploded, the limitations of Makefile have become increasingly apparent. Mixing execution rules with hardcoded test configurations leads to fragile scripts that are difficult to scale or reuse across projects. Debugging syntax-heavy Makefiles often takes more effort than writing new tests, diverting attention from coverage and functional goals.

These challenges point toward the need for a more modular and human-readable alternative. YAML, a structured configuration language, addresses many of these shortcomings when paired with Python for execution. Before diving into this solution, it’s important to first examine how today’s flows operate and where they struggle.

Current scenario and challenges

In most verification environments today, Makefile remains the default choice for controlling compilation, simulation, and regression. A single Makefile often governs the entire flow—compiling RTL and testbench sources, invoking the simulator with tool-specific options, and managing regressions across multiple testcases. While this approach has been serviceable for smaller projects, it shows clear limitations as complexity increases.

Below is an outline of key challenges.

  • Configuration management: Test lists are commonly hardcoded in text or CSV files, with seeds, defines, and tool flags scattered across multiple scripts. Updating or reusing these settings across projects is cumbersome.
  • Readability and debugging: Makefile syntax is compact but cryptic, which makes debugging errors non-trivial. Even small changes can cascade into build failures, demanding significant engineer time.
  • Scalability: As testbenches grow, adding new testcases or regression suites quickly bloats the Makefile. Managing hundreds of tests or regression campaigns becomes unwieldy.
  • Tool dependence: Each Makefile is typically tied to a specific simulator, for instance, VCS, Questa, or Xcelium. Porting the flow to a different tool requires major rewrites.
  • Limited reusability: Teams often reinvent similar flows for different projects, with little opportunity to share or reuse scripts.

These challenges shift the engineer’s focus away from verification quality and coverage goals toward the mechanics of scripting and tool debugging. Therefore, the industry needs a cleaner, modular, and more portable way to manage verification flows.

Makefile-based flow

A traditional Makefile-based verification flow centers around a single file containing multiple targets that handle compilation, simulation, and regression tasks. See the representative structure below.
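The original article presents this structure as a figure. As a rough stand-in, here is a minimal sketch of such a Makefile, assuming a VCS-based UVM flow; the file lists, test names, and directory layout are hypothetical:

SIMV   = simv
TEST  ?= sanity_test
SEED  ?= 1

compile:
	vcs -full64 -sverilog -timescale=1ns/1ps \
		-f rtl.f -f tb.f -o $(SIMV) -l logs/compile.log

run: compile
	./$(SIMV) +UVM_TESTNAME=$(TEST) +ntb_random_seed=$(SEED) \
		-l logs/$(TEST)_$(SEED).log

regress:
	for t in $$(cat testlist.txt); do \
		$(MAKE) run TEST=$$t; \
	done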

This approach offers clear strengths: immediate familiarity for software engineers, no additional tool requirements, and straightforward dependency management. For small teams with stable tool chains, this simplicity remains compelling.

However, significant challenges emerge with scale. The compact syntax becomes problematic; escaped backslashes, shell expansions, and dependencies create arcane scripting rather than readable configuration. Debug cycles lengthen with cryptic error messages, and modifications require deep Make expertise.

Tool coupling is evident in the above structure—compilation flags, executable names, and runtime arguments are VCS-specific. Supporting Questa requires duplicating rules with different syntax, creating synchronization challenges.

So, maintenance overhead grows exponentially. Adding tests requires multiple modifications, parameter changes demand careful shell escaping, and regression management quickly outgrows Make’s capabilities, forcing hybrid scripting solutions.

These drawbacks motivate the search for a more human-readable, reusable configuration approach, which is where YAML’s structured, declarative format offers compelling advantages for modern verification flows.

YAML-based flow

YAML (YAML Ain’t Markup Language) provides a human-readable data serialization format that transforms verification flow management through structured configuration files. Unlike Makefile’s imperative commands, YAML uses declarative key-value pairs with intuitive indentation-based hierarchy.

The YAML configuration structure below replaces complex Makefile logic:
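The original shows this as a figure; the following is a minimal sketch of what such a configuration might look like, with all key names hypothetical:

project: soc_verif
tool: vcs

compile:
  files:
    - rtl.f
    - tb.f
  defines: [SIMULATION]
  timescale: 1ns/1ps

simulate:
  logdir: logs
  waves: false

tests:
  - name: sanity_test
    seed: 1
  - name: random_stress
    seed: random
    iterations: 100
    plusargs: [+enable_coverage]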

The modular structure becomes immediately apparent through organized directory hierarchies. As shown in Figure 1, a well-structured YAML-based verification environment separates configurations by function and scope, enabling different team members to modify their respective domains without conflicts.

Figure 1 The block diagram highlights the YAML-based verification directory structure. Source: ASICraft Technologies

Block-level engineers manage component-specific test configurations (IP1 and IP2), while integration teams focus on pipeline and regression management. Instead of monolithic Makefiles, teams can organize configurations across focused files: build.yml for compilation settings, sim.yml for simulation parameters, and various test-specific YAML files grouped by functionality.

Advanced YAML features like anchors and aliases eliminate configuration duplication using the DRY (Don’t Repeat Yourself) principle.
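For instance, an anchor (&) can define shared simulation defaults once, which aliases (*) and merge keys then reuse per test; a minimal sketch with hypothetical keys:

defaults: &sim_defaults
  tool: vcs
  coverage: true
  timeout: 3600

smoke_test:
  <<: *sim_defaults      # inherits tool, coverage, timeout
  iterations: 1

nightly_regression:
  <<: *sim_defaults
  iterations: 100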

Tool independence emerges naturally since YAML contains only configuration data, not tool-specific commands. The same YAML files can drive VCS, Questa, or XSIM simulations through appropriate Python parsing scripts, eliminating the need for multiple Makefiles per tool.

Of course, YAML alone doesn’t execute simulations; it needs a bridge to EDA tools. This is achieved by pairing YAML with lightweight Python scripts that parse configurations and generate appropriate tool commands.

Implementation of YAML-based flow

The transition from YAML configuration to actual EDA tool execution follows a systematic four-stage process, as illustrated in Figure 2. This implementation addresses the traditional verification challenge where engineers spend excessive time writing complex Makefiles and managing tool commands instead of focusing on verification quality.

Figure 2 The YAML-to-EDA bridge translates YAML configuration into EDA tool execution in four stages. Source: ASICraft Technologies

YAML files serve as comprehensive configuration containers supporting diverse verification needs.

  • Project metadata: Project name, descriptions, and version control
  • Tool configuration: EDA tool selection, licenses, and version specifications
  • Compilation settings: Source files, include directories, definitions, timescale, and tool-specific flags
  • Simulation parameters: Tool flags, snapshot paths, and log directory structures
  • Test specifications: Test names, seeds, plusargs, and coverage options
  • Regression management: Test lists, reporting formats, and parallel execution settings

Figure 3 The four phases of the Python YAML parsing workflow. Source: ASICraft Technologies

The Python implementation demonstrates the complete flow pipeline. Starting with a simple YAML configuration:
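A minimal illustrative configuration, with hypothetical field names, might look like:

tool: vcs
files: [design.sv, tb_top.sv]
flags: [-full64, -sverilog]
test: sanity_test
seed: 42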

The Python script below loads and processes this configuration:
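Here is a sketch of such a script, assuming PyYAML is installed and the hypothetical run.yml shown above:

import subprocess
import yaml  # PyYAML

with open("run.yml") as f:
    cfg = yaml.safe_load(f)          # Load/parse: YAML -> Python dict

# Extract: pull values out of the dictionary
tool  = cfg["tool"]
files = " ".join(cfg["files"])
flags = " ".join(cfg["flags"])

# Build commands: assemble tool-specific shell commands
if tool == "vcs":
    compile_cmd = f"vcs {flags} {files} -o simv"
    run_cmd = f"./simv +UVM_TESTNAME={cfg['test']} +ntb_random_seed={cfg['seed']}"
else:
    raise ValueError(f"unsupported tool: {tool}")

# Display/execute: show the commands, then optionally launch them
print(compile_cmd)
print(run_cmd)
# subprocess.run(compile_cmd, shell=True, check=True)  # uncomment to execute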

When executed, the Python script produces clear output, showing the command translation, as illustrated below:
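With the sketch above, the two printed commands would look something like this (illustrative, not actual tool output):

vcs -full64 -sverilog design.sv tb_top.sv -o simv
./simv +UVM_TESTNAME=sanity_test +ntb_random_seed=42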

The complete processing workflow operates in four systematic phases, as detailed in Figure 3.

  1. Load/parse: The PyYAML library converts YAML file content into native Python dictionaries and lists, making configuration data accessible through standard Python operations.
  2. Extract: The script accesses configuration values using dictionary keys, retrieving tool names, file lists, compilation flags, and simulation parameters from the structured data.
  3. Build commands: The parser intelligently constructs tool-specific shell commands by combining extracted values with appropriate syntax for the target simulator (VCS or Xcelium).
  4. Display/execute: Generated commands are shown for verification or directly executed through subprocess calls, launching the actual EDA tool operations.

This implementation creates true tool-agnostic operation. The same YAML configuration generates VCS, Questa, or XSIM commands by simply updating the tool specification. The Python translation layer handles all syntax differences, making flows portable across EDA environments without configuration changes.

The complete pipeline—from human-readable YAML to executable simulation commands—demonstrates how modern verification flows can prioritize engineering productivity over infrastructure complexity, enabling teams to focus on test quality rather than tool mechanics.

Comparison: Makefile vs. YAML

Both approaches have clear strengths and weaknesses that teams should evaluate based on their specific needs and constraints. Table 1 provides a systematic comparison across key evaluation criteria.

Table 1 Flow comparison between Makefile and YAML. Source: ASICraft Technologies

Where Makefiles work better

  • Simple projects with stable, unchanging requirements
  • Small teams already familiar with Make syntax
  • Legacy environments where changing infrastructure is risky
  • Direct execution needs required for quick debugging without intermediate layers
  • Incremental builds where dependency tracking is crucial

Where YAML excels

  • Growing complexity with multiple test configurations
  • Multi-tool environments supporting different simulators
  • Team collaboration where readability matters
  • Frequent modifications to test parameters and configurations
  • Long-term maintenance across multiple projects

The reality is that most teams start with Makefiles for simplicity but eventually hit scalability walls. YAML approaches require more extensive initial setup but pay dividends as projects grow. The decision often comes down to whether you’re optimizing for immediate simplicity or long-term scalability.

For established teams managing complex verification environments, YAML-based flows typically provide better return on investment (ROI). However, teams should consider practical factors like migration effort and existing tool integration before making the transition.

Choosing between Makefile and YAML

The challenges with traditional Makefile flows are clear: cryptic syntax that’s hard to read and modify, tool-specific configurations that don’t port between projects, and maintenance overhead that grows with complexity. As verification environments become more sophisticated, these limitations consume valuable engineering time that should focus on actual test development and coverage goals.

The YAML-based flows address these fundamental issues through human-readable configurations, tool-independent designs, and modular structures that scale naturally. Teams can simply describe verification intent—run 100 iterations with coverage—while the flow engine handles all tool complexity automatically. The same approach works from block-level testing to full-chip regression suites.
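That intent (“run 100 iterations with coverage”) might be captured in just a few lines of YAML; the key names here are hypothetical:

nightly_regression:
  iterations: 100
  coverage: true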

Key benefits realized with YAML

  • Faster onboarding: New team members understand YAML configurations immediately.
  • Reduced maintenance: Configuration changes require simple text edits, not scripting.
  • Better collaboration: Clear syntax eliminates the “Makefile expert” bottleneck.
  • Tool flexibility: Switch between VCS, Questa, or XSIM without rewriting flows.
  • Project portability: YAML configurations move cleanly between different projects.

The choice between Makefile and YAML approaches ultimately depends on project complexity and team goals. Simple, stable projects may continue benefiting from Makefile simplicity. However, teams managing growing test suites, multiple tools, or frequent configuration changes will find YAML-based flows providing better long-term returns on their infrastructure investment.

Meet Sangani is an ASIC verification engineer at ASICraft Technologies.

Hitesh Manani is a senior ASIC verification engineer at ASICraft Technologies.

Shailesh Kavar is an ASIC verification technical manager at ASICraft Technologies.

Related Content

The post Makefile vs. YAML: Modernizing verification simulation flows appeared first on EDN.

Computer-on-module architectures drive sustainability

Fri, 10/31/2025 - 23:46

Sustainability has moved from corporate marketing to a board‑level mandate. For technology companies, this shift is more than meeting environmental, social, and governance frameworks; it reflects the need to align innovation with environmental and social responsibility among all key stakeholders.

Regulators are tightening reporting requirements while investors respond favorably to sustainable strategies. Customers also want tangible progress toward these goals. The debate is no longer about whether sustainability belongs in technology roadmaps but how it should be implemented.

The hidden burden of embedded and edge systems

Electronic systems power a multitude of devices in our daily lives. From industrial control systems and vital medical technology to household appliances, these systems usually run around the clock for years on end. Consequently, operating them requires a lot of energy.

Usually, electronic systems are part of a larger ecosystem and are difficult to replace in the event of failure. When this happens, complete systems are often discarded, resulting in significant electronic waste.

Rapid advances in technology make this issue more pronounced. Processor architectures, network interfaces, and security protocols become obsolete in shorter cycles than they did just a few years ago. As a result, organizations often retire complete systems after a brief service life, even though the hardware still meets its original requirements. The continual need to update to newer standards drives up costs and can undermine sustainability goals.

Embedded and edge systems are foundational technologies driving critical infrastructure in industrial automation, healthcare, and energy applications. As such, the same issues with short product lifecycles and limited upgradeability put them in the same unfortunate bucket of electronic waste and resource consumption.

Bridging the gap between performance demands and sustainability targets requires rethinking system architectures. This is where off-the-shelf computer-on-module (COM) designs come in, offering a path to extended lifecycles and reduced waste while simultaneously future-proofing technology investments.

How COMs extend product lifecycles

Open embedded computing standards such as COM Express, COM-HPC, and Smart Mobility Architecture (SMARC) separate computing components—including processors, memory, network interfaces, and graphics—from the rest of the system. By separating the parts from the whole, they allow updates by swapping modules rather than requiring a complete system redesign.

This approach reduces electronic waste, conserves resources, and lowers long‑term costs, especially in industries where certifications and mechanical integration make complete redesigns prohibitively expensive. These sustainability benefits go beyond waste reduction: A modular system is easier to maintain, repair, and upgrade, meaning fewer devices end up prematurely as electronic waste.


Open standards that enable longevity

To simplify the development and manufacturing of COMs and to ensure interchangeability across manufacturers, consortia such as the PCI Industrial Computer Manufacturing Group (PICMG) promote and ratify open standards.

One of the most central standards in the embedded sector is COM Express. This standard defines various COM sizes, such as Type 6 or Type 10, to address different application areas; it also offers a seamless transition from legacy interfaces to modern differential interfaces, including DisplayPort, PCI Express, USB 3.0, or SATA. COM Express, therefore, serves a wide range of use cases from low-power handheld medical equipment to server-grade industrial automation infrastructure.

Expanding on these efforts, COM-HPC is the latest PICMG standard. Addressing high-performance embedded edge and server applications, COM-HPC arose from the need to meet increasing performance and bandwidth requirements that previous standards couldn’t achieve. COM-HPC COMs are available with three pinout types and six sizes for simplified application development. Target use cases range from powerful small-form-factor devices to graphics-oriented multi-purpose designs and robust multi-core edge servers.

COM-HPC, including congatec’s credit-card-sized COM-HPC Mini, provides high performance and bandwidth for all AI-powered edge computing and embedded server applications. (Source: congatec)

Alongside COM Express and COM-HPC, the Standardization Group for Embedded Technologies developed the SMARC standard to meet the demands of power-saving, energy-efficient designs requiring a small footprint. Similar in size to a credit card, SMARC modules are ideal for mobile and portable embedded devices, as well as for any industrial application that requires a combination of small footprint, low power consumption, and established multimedia interfaces.

As credit-card-sized COMs, SMARC modules are designed for size-, weight-, power-, and cost-optimized AI applications at the rugged edge. (Source: congatec)

As a company with close involvement in developing COM Express, COM-HPC, and SMARC, congatec is invested in the long-term success of more sustainable architectures. Offering designs for common carrier boards that can be used for different standards and/or modules, congatec’s approach allows product designers to use a single carrier board across many applications, as they simply swap the module when upgrading performance, removing the need for complex redesigns.

Virtualization as a path to greener systems

On top of modular design, extending hardware lifecycles requires intelligent software management. Hypervisors, software tools that create and manage virtual machines, add an important software layer to the sustainability benefits of COM architectures.

Virtualization allows multiple workloads to coexist securely on a single module, meaning that separate boards aren’t required to run essential tasks such as safety, real-time control, and analytics. This consolidation simultaneously lowers energy consumption while decreasing the demand for the raw materials, manufacturing, and logistics associated with more complex hardware.

Hypervisors such as congatec aReady.VT are real-time virtualization software tools that consolidate functionality that previously required multiple dedicated systems in a single hardware platform. (Source: congatec)

Enhancing sustainability through COM-based designs

The rapid adoption of technologies such as edge AI, real‑time analytics, and advanced connectivity has inspired industries to strive for scalable platforms that also meet sustainability goals. COM architectures are a great example, demonstrating that high performance and environmental responsibility are compatible. They show technology and business leaders that designing sustainability into product architectures and technology roadmaps, rather than treating it as an afterthought, makes good practical and financial sense.

With COM-based modules already providing a flexible and field-proven foundation, the embedded sector is off to a good start in shrinking environmental impact while preserving long-term innovation capability.

The post Computer-on-module architectures drive sustainability appeared first on EDN.

Solar-powered cars: is it “déjà vu” all over again?

Fri, 10/31/2025 - 14:56

I recently came across a September 18 article by the “future technology” editor at The Wall Street Journal, “Solar-Powered Cars and Trucks Are Almost Here” (sorry, behind paywall, but your local library may have free access). The author was positively gushing about companies such as Aptera Motors (California), which will “soon” be selling all-solar-powered cars. On a full daylight charge, they can do a few tens of miles, then it’s time to park in the Sun for that totally guilt-free “fill up.”

Figure 1 The Aptera solar-powered three-wheel “car” can go between 15 and 40 miles on a full all-solar charge. Source: Aptera Motors

The article focused on the benefits and innovations, such as how Aptera claims to have developed solar panels that withstand road hazards, including rocks kicked up at high speed, and similar advances.

The solar exposure-versus-distance numbers are very modest, to be polite. While people living in a sunny environment could add up to 40 miles (64 km) of range a day in summer months, from panels alone, that drops to around 15 miles (24 km) a day in northern climates in winter. Aptera says its front-wheel-drive version goes from 0 to 60 mph (96 km/hour) in 6 seconds, and has a top speed of 101 mph (163 km/hr).
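The arithmetic behind those range numbers is simple to sanity-check. Assuming roughly 700 W of on-vehicle solar capacity and consumption near 100 Wh/mile (figures Aptera has cited; treat both as assumptions here), a sunny summer day delivering about six equivalent sun-hours yields:

\[ \text{range} \approx \frac{700\,\mathrm{W} \times 6\,\mathrm{h}}{100\,\mathrm{Wh/mile}} = 42\ \text{miles} \]

Two winter sun-hours in a northern climate give about 14 miles by the same math, consistent with the article’s 15-mile figure.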

The article also mentions that Aptera is planning to sell its ruggedized panels to Telo Trucks, a San Carlos (Calif) maker of a 500-horsepower mini-electric truck estimated to ship next year, which uses solar panels to extend its range by 15 to 30 supplemental miles per day.

Then I closed my eyes and thought, “Wait, haven’t I heard this story before?” Sure enough, I looked through my notes and saw that I had commented on Aptera’s efforts and those of others back in a 2021 blog, “Are solar-powered cars the ultimate electric vehicles?” Perhaps it’s no surprise, but the timeline then was also “coming soon.”

The laws of physics conspire to make this a very tough project. This sort of ambitious effort requires advances across multiple disciplines: the materials for the vehicle itself, batteries, rugged solar panels, battery-management electronics; it’s a long list. These are closely tied to key ratios, beginning with power-to-weight and energy-to-weight.

Did I mention it’s a three-wheel vehicle (with all the stability issues that brings), seats two people, and is technically classified as a motorcycle despite its fully enclosed cabin? Or that it has to meet vehicle safety mandates and regulations? And won’t drivers likely need power-draining air conditioning unless they drive open-air, especially as the vehicle by definition needs to be parked in the sun?

I don’t intend to disparage the technology, innovation, and hard work (and money) they have put into the project. Nonetheless, no matter how you look at it, it’s a lot of effort and retail price (estimated to be around $40,000) for a modest 15 to 40 miles of solar-derived range. That’s a lot of dollar pain for very modest environmental gain, if any.

Is the all-solar vehicle analogous to the flying car?
Given today’s technology and that of the foreseeable future, I think the path of a truly viable all-solar car (at any price) is similar to that other recurrent dream: the flying car. Many social observers say that this hybrid vehicle (a different meaning of “hybrid” here, of course: part car, part aircraft) was brought into popular culture in 1962 by the TV show The Jetsons, but there had been articles in magazines such as Popular Science even before that date.

Figure 2 The flying car that is often discussed was likely inspired by the 1962 animated series “The Jetsons.” Source: Thejetsons.fandom.com

Roughly every ten years since then, the dream resurfaces and there’s a wave of articles in the general media about all the new flying cars under development and in road/air testing, and how actual showroom models are “just around the corner.” However, it seems we are always approaching but never making the turn around that corner; Terrafugia’s massive publicity wave, followed by bankruptcy, is just one example.

The problem for flying cars, however attractive the concept may be, is that the priority needs and constraints for a ground vehicle, such as a car, are not aligned with those of an aircraft; in fact, they often contradict each other.

It’s difficult enough in any vehicle-engineering design to find a suitable balance among tradeoffs and constraints; after all, that’s what engineering is about. For the flying car, however, it is not so much about finding the balance point as it is about reconciling dramatically opposing issues. In addition, both classes of vehicles are subject to many regulatory mandates related to safety, and those add significant complexity.

Sometimes, it’s nearly impossible to “square the circle” and come up with a viable and acceptable solution to opposing requirements. Literally, “to square the circle” refers to the geometry challenge of constructing a square with the same area as a given circle but using only a compass and straightedge, a problem posed by the ancient Greeks and which was proven impossible in 1882. Metaphorically, the phrase means to attempt or solve something that seems impossible, such as combining two fundamentally different or incompatible things.

What’s the future for these all-solar “cars”? Unlike talking heads, pundits, and journalists, I’ll admit that I have no idea. They may never happen, they may become an expensive “toy” for some, or they may capture a small but measurable market share. Once prototypes are out on the street getting some serious road mileage, further innovations and updates may make them more attractive and perhaps less costly—again, I don’t know (nor does anyone).

Given the uncertainties associated with solar-powered and flying cars, why do they get so much attention? That’s an easy question to answer: they are fun and fairly easy to write about, and the coverage gets noticed. After all, they are more exciting to present, and more likely to attract readers, than silicon-carbide MOSFETs.

What’s your sense of the reality of solar-powered cars? Are they a dream with too many real-world limitations? Will they be a meaningful contribution to environmental issues, or an expensive virtue-signaling project—assuming they make it out of the garage and become highway-rated, street-legal vehicles?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References

The post Solar-powered cars: is it “déjà vu” all over again? appeared first on EDN.

The next RISC-V processor frontier: AI

Fri, 10/31/2025 - 12:56

The RISC-V Summit North America, held on 22-23 October 2025 in Santa Clara, California, showcased the latest CPU cores featuring new vector processors, high-speed interfaces, and peripheral subsystems. These CPU cores were accompanied by reference boards, software design kits (SDKs), and toolchains.

The show also provided a sneak peek of the RISC-V design ecosystem, which is maturing fast with the RVA23 application profile and RISC-V Software Ecosystem (RISE), a Linux Foundation project. The emerging ecosystem encompasses compilers, system libraries, language runtimes, simulators, emulators, system firmware, and more.

“The performance gap between high-end Arm and RISC-V CPU cores is narrowing and a near parity is projected by the end of 2026,” said Richard Wawrzyniak, principal analyst for ASIC, SoC and IP at The SHD Group. He named Andes, MIPS, Nuclei Systems, and SiFive as market leaders in RISC-V IP. Wawrzyniak also mentioned new entrants such as Akeana, Tenstorrent, and Ventana.

Andes, boasting 20 years of expertise in the semiconductor IP business, was a prominent presence in the corridors of the RISC-V Summit in Santa Clara. It’s a founding member of RISC-V International and a pure-play IP vendor. At the RISC-V Summit, Andes displayed its processor lineup, including AX45, AX46, AX66, and Cuzco.

Figure 1 The processor lineup was showcased at the RISC-V Summit in Santa Clara. Source: Andes

Andes claims that these RISC-V processors, featuring powerful compute and efficient control, provide the architectural diversity required in artificial intelligence (AI) applications. AX45 and AX46 processors have been taped out and are shipping in volumes. Andes also provides in-chip firmware, tester software, on-board software, and on-cloud software as part of its hardware IP monitoring offerings.

Though RISC-V is enjoying a robust deployment in automotive, Internet of Things (IoT), and networking, AI was all the rage on the RISC-V Summit floor. “If RISC-V has a tailwind, it’s AI,” Wawrzyniak said.

RISC-V world’s AI moment

Andes claims it’s driving RISC-V into the AI world with features such as advanced vector processing. And that its RISC-V processors are powering devices from the battery-sipping edge to high-performance data centers. Andes also claims that 38% of its revenue comes from AI designs.

Companies like Andes can also bring differentiation and efficiency to AI processor designs through automated custom extensions. “We are getting there, and the deployment speed is impressive,” said Dr. Charlie Su, president and CTO of Andes Technology.

Figure 2 Meta deployed two generations of AI accelerators for training and inference using RISC-V vector/scalar cores. Source: Andes

“RISC-V is getting better for AI applications in data centers,” said Ty Garibay, president of Condor Computing. “RVA23 has a massive investment in features for data center-class AI designs.” Condor Computing, a wholly owned subsidiary of Andes, founded in 2023, develops high-performance RISC-V IPs and is based in Austin, Texas.

Wawrzyniak of SHD Group acknowledges that AI applications are driving the adoption of RISC-V-enabled system-on-chips (SoCs). “The heterogeneous nature of SoCs has created opportunities for multiple CPU architectures,” he said. “These SoCs can support both RISC-V and other ISAs, allowing applications to pick the best core for each function.”

Moreover, the diverse needs for AI acceleration are fueling the demand for RISC-V. “RISC-V CPU IP vendors can more easily introduce new and more powerful CPU cores, which extends the reach of RISC-V into AI applications that require greater compute power,” Wawrzyniak said.

During his keynote, Wawrzyniak said that initial RISC-V deployments were driven by embedded applications such as networking, smart sensors, storage, and wearables. “RISC-V is now transitioning to higher-end applications like ADAS and data centers as AI expands to those applications.”

RISC-V processor duo

At the RISC-V Summit, Andes provided more details about its new application processors. It showcased AX66, a mid-range application processor, and Cuzco, a high-end application processor; both are RVA23-compliant. AX66—incorporating up to 8 cores—features dual vector pipes with VLEN=128 and a 4-wide front-end decode. It has a shared L3 cache of up to 32 MB.

Figure 3 AX66 is a 64-bit multicore CPU IP for developing a high-performance quad-decode 13-stage superscalar out-of-order processor. Source: Andes

On the higher end, Cuzco features time-based scheduling with a time resource matrix to determine instruction issue cycles after decoding, thereby reducing logic complexity and dynamic power for wide machines. Cuzco’s decode is either 6-wide or 8-wide, and it has 8 execution pipelines (2 per slice).

Cuzco incorporates up to 8 cores and offers a shared L3 cache of up to 256 MB. The Cuzco RISC-V processor has been implemented at 5-nm nodes with 8 execution pipelines and 7 million gates. It features a 2-MB L2 configuration and targets 2.5-GHz operation.

Figure 4 The Cuzco design represents the first in a new class of RISC-V CPUs aimed at data center-class performance while maintaining power efficiency and area benefits. Source: Andes

For the development of these RISC-V processors, the AndeSight integrated development environment (IDE) helps design engineers generate files for LLVM to recognize new instructions. Then there is AndesAIRE software, which facilitates graph-level optimization for pruning and quantization as well as back-end-aware optimization for fusion and allocation.

For OS support, the processors comply with RVA22 and RVA23 profiles and SoC hardware and software platforms. Andes also provides additional support to ensure that the Linux kernel is upstream-compatible.

Cuzco, unveiled at Hot Chips 2025 earlier this year, features a time-based out-of-order microarchitecture engineered to deliver high performance and efficiency across compute-intensive applications in AI, data center, networking, and automotive markets. Andes provided a preview of this out-of-order CPU at the RISC-V Summit.

Condor Computing developed the Cuzco RISC-V core, which is fully integrated into the Andes toolchain and ecosystem. Condor recently completed full hardware emulation of its new CPU IP while successfully booting Linux and other operating systems.

“Condor’s microarchitecture combines advanced out-of-order execution with novel hardware techniques to dramatically boost performance-per-watt and silicon efficiency,” Andes CTO Su said. “It’s ideally suited for demanding CPU workloads in AI, automotive compute, applications processing, and beyond.”

The first customer availability of the Cuzco RISC-V processor is expected in the fourth quarter of 2025.

The RISC-V adoption

According to Wawrzyniak, chip designers are now looking at both Arm and RISC-V processor architectures. “The RISC-V ISA and its rising ecosystem have interjected competition once again into the SoC design landscape.”

Furthermore, the custom RISC-V ISA extensions empower innovation and tailored performance. Not surprisingly, therefore, the adoption of RISC-V by large technology companies such as Broadcom, Google, Meta, MediaTek, Qualcomm, Renesas, and Samsung continues to validate the utility of the RISC-V ISA in the semiconductor industry.

RISC-V, once an academic exercise, has come a long way since its launch in May 2010 at the University of California, Berkeley. However, as Krste Asanovic, chief architect at SiFive, said during his keynote, RISC-V will continue to evolve across different verticals, and it’ll be around for a long time.

Related Content

The post The next RISC-V processor frontier: AI appeared first on EDN.

1,200-V diodes offer low loss, high efficiency

Thu, 10/30/2025 - 22:24

Taiwan Semiconductor launches a new series of automotive-grade, low-loss diodes in three popular industry-standard packages. They provide an automotive-level performance upgrade for existing designs and the low power dissipation required for higher-power rectification applications.

Taiwan Semi’s 1,200-V PLA/PLD series diodes in a ThinDPAK package. (Source: Taiwan Semiconductor)

The 1,200-V PLA/PLD series diodes, rated at 15 A, 30 A, or 60 A, all feature low forward voltage (1.3 Vf max), low reverse leakage (<10 µA at 25°C), and high junction temperature (175°C Tj max). They are available in three packages—ThinDPAK, D2PAK-D, and TO-247BD—for design flexibility.
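Those specs put a simple upper bound on conduction loss. If, say, the 30-A part carried its full rated average current at the maximum forward voltage (a worst-case sketch that ignores reverse-leakage and switching effects):

\[ P_{\mathrm{cond}} \le V_F \times I_F = 1.3\,\mathrm{V} \times 30\,\mathrm{A} = 39\,\mathrm{W} \]

This is why even a 0.1-V reduction in forward voltage saves roughly 3 W per diode at full current.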

These 1,200-V diodes provide easy drop-in replacements using an industry-standard pinout to improve efficiency in existing designs, according to the company. They can be used in a variety of applications such as three-phase AC/DC converters, server and computing power (including AI power) systems, EV charging stations, on-board battery chargers, Vienna rectifiers, totem pole and bridgeless topologies, inverters and UPS systems, and general-purpose rectification in high-power systems.

The new PLA/PLD series is offered in six models manufactured to automotive-quality standards. Two of the models, the PLAD15QH (ThinDPAK) and PLDS30QH (D2PAK-D), are fully AEC-Q qualified for automotive applications. The other four models include the PLAD15Q (ThinDPAK), PLDS30Q (D2PAK-D), PLAH30Q (TO-247BD), and PLAH60Q (TO-247BD).

The PLA/PLD series is sampling now and in stock at DigiKey and Mouser. Production lead time is 8-14 weeks ARO. Design resources include datasheets, SPICE models, Foster and Cauer thermal models, and CAD files (symbol, footprint, and 3D model).

The post 1,200-V diodes offer low loss, high efficiency appeared first on EDN.

Wirewound resistors operate in harsh environments

Thu, 10/30/2025 - 21:56

Bourns Inc. launches its series of Riedon precision wirewound resistors. These passive devices meet application requirements for high accuracy and long-term stability. They offer a wide resistance range of up to 6 megohms (MΩ) with ultra-low resistance tolerances (as low as ±0.005 percent).

Bourns’ Riedon precision wirewound resistors. (Source: Bourns Inc.)

This rugged, high-precision resistor series is offered in multiple axial, radial, and square package sizes and in a variety of lead configurations for greater design flexibility. They feature non-inductive multi-Pi cores, protective encapsulation technology, and a low standard temperature coefficient of ±2 ppm/°C.

These features help minimize inductance and noise while maintaining stability and efficiency even under high heat and harsh electrical conditions, Bourns said.

The series is 100 percent acceptance tested and RoHS-compliant. Applications include measurement equipment, bridge circuits, load cells and strain gauges, imaging systems, current sensing equipment, and high-frequency circuit designs.

The Riedon wirewound resistors are available now. Custom solutions are also available to meet specific customer requirements.

Last year, Bourns expanded its Riedon power resistor family with the launch of 11 product series, including wirewound resistors and current-sense resistors. They feature high power ratings, low temperature coefficients (TCRs), a wide resistance range, and an extended temperature range.

These resistors are available in numerous packaging options, including wirewound through-hole and surface mount; surface-mount metal film; and bare/coated metal element resistors. They target a variety of applications, including battery energy storage systems, industrial power supplies, motor drives, smart meters, telecom 5G remote radio and baseband units, and current sensing.

The post Wirewound resistors operate in harsh environments appeared first on EDN.

Sony debuts image sensor with MIPI A-PHY link

Thu, 10/30/2025 - 18:09

According to Sony, the IMX828 CMOS image sensor is the industry’s first to integrate a MIPI A-PHY interface for connecting automotive cameras, sensors, and displays with their ECUs. The built-in serializer-deserializer physical layer removes the need for external serializer chips, enabling more compact, lower-power camera systems.

The IMX828 offers 8-Mpixel resolution (effective pixels) and a 150-dB high dynamic range. Its pixel structure achieves a high saturation level of 47 kcd/m², allowing accurate recognition of high-luminance objects such as red traffic signals and LED taillights.

A low-power parking-surveillance mode detects motion to help reduce theft and vandalism risk. Images are captured at low resolution and frame rate to keep power consumption under 100 mW. When motion is detected, the sensor alerts the ECU and switches to normal imaging mode.

Sony plans to obtain AEC-Q100 Grade 2 qualification before mass production begins. The IMX828 meets ISO 26262 requirements, with hardware metrics conforming to ASIL-B and the development process to ASIL-D. Sample shipments are expected to start in November 2025. A datasheet was not available at the time of this announcement.

Sony Semiconductor Solutions 

The post Sony debuts image sensor with MIPI A-PHY link appeared first on EDN.

EIS-powered chipset improves EV battery monitoring

Thu, 10/30/2025 - 18:09

NXP’s battery management chipset integrates electrochemical impedance spectroscopy (EIS) to enable lab-grade vehicle diagnostics. The system comprises three devices: the BMA7418 18-channel Li-Ion cell controller, BMA6402 communication gateway, and BMA8420 battery junction box monitor. Together, they deliver hardware-based synchronization of all cell measurements within a high-voltage battery pack with nanosecond precision.

By embedding EIS directly in hardware, the chipset supports real-time, high-frequency monitoring of battery health. Accurate impedance measurements, combined with in-chip discrete Fourier transformation, help OEMs manage faster and safer charging, detect early signs of degradation, and simplify overall system design.

EIS sends controlled excitation signals through the battery and analyzes frequency responses to reveal cell aging, temperature shifts, or micro shorts. NXP’s system uses an integrated excitation source with a pre-charge circuit, while DC link capacitors provide secondary energy storage for greater efficiency.
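At each excitation frequency, EIS boils down to computing the complex impedance:

\[ Z(\omega) = \frac{V(\omega)}{I(\omega)} = |Z(\omega)|\,e^{\,j\theta(\omega)} \]

Shifts in magnitude and phase across frequency are what reveal the aging, temperature, and micro-short signatures described above; the in-chip discrete Fourier transformation extracts V(ω) and I(ω) from the time-domain excitation and response.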

The complete BMS solution is expected to be available by the beginning of 2026, with enablement software running on NXP’s S32K358 automotive microcontroller. Read more about the chipset here.

NXP Semiconductors 

The post EIS-powered chipset improves EV battery monitoring appeared first on EDN.

Compact oscillator fits tight AI interconnects

Thu, 10/30/2025 - 18:09

Housed in a 6-pin, 2.0×1.6-mm LGA package, Mixed-Signal Devices’ MS1180 crystal oscillator conserves space in AI data center infrastructure. Factory-programmed to provide any frequency from 10 MHz to 1000 MHz with under 1-ppb resolution, it is well-suited for 1.6T and 3.2T optical modules, active optical cables, active electrical cables, and other size-constrained interconnect devices.

The MS1180 is optimized for key networking frequencies—156.25 MHz, 312.5 MHz, 491.52 MHz, and 625 MHz—and maintains low RMS phase jitter of 28.3 fs to 43.1 fs when integrated from 12 kHz to 20 MHz. It offers ±20-ppm frequency stability from –40 °C to +105 °C. Power-supply-induced phase noise is –114 dBc for 50-mV supply ripples at 312.5 MHz, with a supply-jitter sensitivity of 0.1 fs/mV (measured with 50-mVpp ripple from 50 kHz to 1 MHz on VDD pin).
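For scale, the ±20-ppm stability spec at the 156.25-MHz networking frequency corresponds to a worst-case frequency deviation of:

\[ \Delta f = 156.25\,\mathrm{MHz} \times (\pm 20 \times 10^{-6}) = \pm 3.125\,\mathrm{kHz} \]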

Supporting multiple output formats (CML, LVDS, EXT LVDS, LVPECL, HCSL), the device runs from a single 1.8-V supply with an internal regulator.

The MS1180 crystal oscillator is sampling now to strategic partners and Tier 1 customers. Production volumes are expected to ramp in Q1 2026.

MS1180 product page   

Mixed-Signal Devices  

The post Compact oscillator fits tight AI interconnects appeared first on EDN.

Retimer boosts USB 3.2 and DP in auto cockpits

Thu, 10/30/2025 - 18:09

A bit-level retimer from Diodes, the PI2DPT1021Q enables high-speed USB and DisplayPort (DP) connectivity in automotive smart cockpits and infotainment systems. The 10-Gbps bidirectional device supports USB 3.2 and DP 1.4 standards for various automotive USB Type-C applications.

The retimer has 4:4 channels, configurable via I²C for different modes: four-lane DP, two-lane DP with one-lane USB 3.2 Gen 2, or one- or two-lane USB 3.2 Gen 2. It is AEC-Q100 Grade 2 qualified and operates over a temperature range of -40°C to +105°C.

To maintain signal integrity, the PI2DPT1021Q offers receiver adaptive equalization that compensates for channel losses up to -23 dB at 5 GHz. It also provides low latency (<1 ns) from signal input to output, ensuring good interoperability between USB and DP devices. Additional features include jitter cleaning, an adaptive continuous-time linear equalizer (CTLE), and a 3-tap transmitter with selectable adjustment.

The PI2DPT1021Q retimer costs $1.65 each in lots of 5000 units.

PI2DPT1021Q product page 

Diodes

The post Retimer boosts USB 3.2 and DP in auto cockpits appeared first on EDN.

GaN flyback converter supplies up to 75 W

Thu, 10/30/2025 - 18:09

ST’s VIPerGaN50W houses a 700-V GaN power transistor, flyback controller, and gate driver in a compact 5×6-mm QFN package. The quasi-resonant offline converter delivers up to 75 W from high-line input (185–265 VAC) or 50 W across the full universal input range (85–265 VAC). It uses a proprietary technique that ensures chargers and power supplies operate silently at all load levels.

Along with zero voltage switching (ZVS), the VIPerGaN50W includes dynamic blanking time, which minimizes switching losses by limiting the frequency. It also offers adjustable valley synchronization delay to maximize efficiency at any input line and load condition. A valley-lock feature stabilizes skipped cycles to prevent audible switching noise.

At no load, the converter’s standby power drops below 30 mW thanks to adaptive burst mode, helping meet stringent ecodesign regulations. Advanced power-management features ensure the output-power capability and switching frequency remain stable, even when the supply voltage changes.

In production now, the VIPerGaN50W is priced from $1.09 each in lots of 1000 units.

VIPerGaN50W product page

STMicroelectronics

The post GaN flyback converter supplies up to 75 W appeared first on EDN.

RISC-V Summit spurs new round of automotive support

Thu, 10/30/2025 - 17:44

The adoption of RISC-V and its open standards in automotive applications continues to accelerate, as the architecture’s flexibility and scalability particularly benefit the automotive industry’s shift to software-defined vehicles (SDVs). Several RISC-V IP core and development tool providers recently announced advances and partnerships to drive RISC-V adoption in automotive applications.

In July 2025, the first Automotive RISC-V Ecosystem Summit, hosted by Infineon Technologies AG, was held in Munich. Infineon believes cars will change more in the next five years than in the last 50, and that as traditional architectures reach their limits, RISC-V will be a game-changer, enabling collaboration between software and hardware.

RISC-V icon symbol. (Source: Adobe Stock)

However, RISC-V adoption will require an ecosystem to deliver new technologies for the automotive industry. The summit showcased RISC-V solutions and technologies ready for automotive, particularly for SDVs, bringing together RISC-V players in areas such as compute IP, software, and development solutions.

Fast-forward to October 2025: several RISC-V players expanded the enabling ecosystem for automotive with key collaborations ahead of the RISC-V Summit. Quintauris, for example, announced several partnerships, including with Andes Technology Corp., Everspin Technologies, Tasking, and Lauterbach GmbH, all focused on advancing RISC-V for automotive and other safety-critical applications.

The Quintauris strategic partnership with Andes, a provider of RISC-V processor cores, brings Andes’s RISC-V processor IP into Quintauris’s RISC-V-based portfolio, consisting of profiles, reference architectures, and software components. The partnership will focus on automotive, industrial, and edge computing applications. It kicks off with the integration of the 32-bit ISO 26262–certified processor in the AndesCore processor series with Quintauris’s automotive real-time reference architecture.

Quintauris is also teaming up with Everspin to bring its advanced memory solutions—magnetoresistive RAM technologies—into Quintauris’s reference architectures and real-time platforms for automotive, industrial, and edge applications. This partnership addresses the need for memory subsystems to meet the high standards for performance and functional safety in automotive applications.

In the development tools space, Quintauris announced a new partnership with Tasking to bolster RISC-V development in the automotive industry. Tasking delivers certifiable development tools for safety-critical embedded software, and Quintauris will integrate Tasking’s RISC‑V compiler into its upcoming RISC‑V reference platform.

Addressing embedded systems debugging, the new Quintauris and Lauterbach collaboration focuses on safety-critical industries such as automotive. Under the partnership, Lauterbach’s TRACE32 toolset for embedded systems, including its debug and trace suite, will be integrated into the Quintauris RISC-V reference platform. The TRACE32 toolset provides debugging, traceability, and system analysis tools.

Lauterbach also announced in October that its TRACE32 development tools support Tenstorrent’s system-on-chips (SoCs) and chiplets for RISC-V and AI-based workloads in the automotive, client, and server sectors. Tenstorrent’s automotive and robotics base die SoC targets automotive applications in SDVs. The SoC implements at least eight 64-bit superscalar, out-of-order TT-Ascalon RISC-V cores with vector and hypervisor ISA extensions, along with RISC-V-based AI accelerators and additional RISC-V cores for system and communication management.

The TRACE32 development tools allow simultaneous debugging of the TT-Ascalon RISC-V processors and other cores implemented on the chip, from pre-silicon development to prototyping on silicon and in-field debugging on electronic control units.

Also helping to accelerate the global adoption of RISC-V, Tenstorrent and CoreLab Technology are collaborating on an open-architecture computing platform for automotive edge and robotics applications. The Atlantis computing platform addresses demanding AI computing requirements, delivering a scalable, safety-ready CPU IP portfolio. The platform will leverage Tenstorrent’s RISC-V CPU IP and CoreLab Technology’s energy-efficient IP and SoC solutions.

Designed to deliver on performance, power efficiency, low total cost of ownership, and customization, all RISC-V CPU cores in the platform support deep customization, enabling customers to tailor their compute resources for their applications, according to Tenstorrent.

The automotive industry demands that ecosystem players meet stringent functional safety and security standards. To meet these requirements, Codasip recently announced that two of its high-performance embedded processor cores, the Codasip L735 and Codasip L739, have received TÜV SÜD certification for functional safety.

The L735 is certified up to ASIL-B and the L739 is certified up to ASIL-D, defined by the ISO 26262 standard. Both products are also compliant with ISO/SAE 21434 for cybersecurity in automotive development. In addition, Codasip’s IP development process is certified to both ISO 26262 and ISO/SAE 21434.

The L735 and L739 cores are part of the Codasip 700 family. The L735 includes safety mechanisms such as error-correcting code on caches and tightly coupled memories, a memory protection unit, and support for RISC-V RERI to provide standardized error reporting. The L739 adds dual-core lockstep, enabling ASIL-D certification.

Capability Hardware Enhanced RISC Instructions (CHERI) variants are available for both products. CHERI security technology protects against memory safety vulnerabilities. Codasip is standardizing a CHERI extension for RISC-V in collaboration with other members of the CHERI Alliance.

The post RISC-V Summit spurs new round of automotive support appeared first on EDN.
