Power pole collapse

Two or three days ago, as of this writing, there was a power pole collapse in Bellmore, NY, at the intersection of Bellmore Avenue and Sunrise Highway. The collapsed pole is seen in Figure 1, lying across two westbound lanes of Sunrise Highway. The traffic lights are dark.
Figure 1 Collapsed power pole in Bellmore, NY, temporarily knocking out power.
Going to Google Maps, I took a close look at a photograph of the same pole taken three months earlier, back in July, when it was still standing (Figure 2).

Figure 2 The leaning power pole and its damaged wood in July 2025.
The wood at the base of the leaning power pole was clearly, obviously, and indisputably in a state of severe decrepitude.
An older picture of this same pole on Google Maps, taken in December 2022 (Figure 3), shows this pole to have been damaged even at that time. Clearly, the local power utility company had, by inexcusable neglect, allowed that pole damage to remain unaddressed, which had thus allowed the collapse to happen.

Figure 3 Google Maps image of a power pole showing damage as early as December 2022.
Sunrise Highway is an extremely busy roadway. It is only by sheer blind luck that nobody was injured or killed by this event.
A replacement pole was later installed where the old pole had fallen. The new pole stands exactly vertical, but how many other power poles out there are in a condition as unsafe as that of the fallen pole in Bellmore?
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- A tale about loose cables and power lines
- Shock hazard: filtering on input power lines
- Why do you never see birds on high-tension power lines?
- Misplaced insulator proves fatal
- Ground strikes and lightning protection of buried cables
System-level tool streamlines quantum design workflows

Quantum System Analysis software, part of Keysight’s Quantum EDA platform, simulates quantum architectures at the system level. It unifies electromagnetic, circuit, and quantum dynamics domains to enable early design validation in a single environment.

By modeling the full quantum workflow—from initial design through system-level experiments—the software reduces reliance on costly cryogenic testing and shortens time-to-validation. It includes tools for optimizing dilution fridge input lines to manage thermal noise and estimate qubit temperatures. A time dynamics simulator models quantum system evolution using Hamiltonians derived from EM or circuit simulations, accurately emulating experiments such as Rabi and Ramsey pulsing to reveal qubit behavior.
Quantum System Analysis supports superconducting qubit platforms and can be extended to other modalities such as spin qubits. It complements Quantum Layout, QuantumPro EM, and Quantum Circuit Simulation tools.
CAN FD transceiver boosts space data rates

Microchip’s ATA6571RT radiation-tolerant CAN FD transceiver supports data rates to 5 Mbps for high-reliability space communications. Well-suited for satellites and spacecraft, it withstands a total ionizing dose of 50 krad(Si) and offers single-event latch-up immunity up to 78 MeV·cm²/mg at +125 °C.

Unlike conventional CAN transceivers limited to 1 Mbps, the ATA6571RT handles CAN FD frames with payloads up to 64 bytes, reducing bus load and improving throughput. It ensures robust, efficient data transmission under harsh space conditions, while backward compatibility with classic CAN enables an easy upgrade path for existing systems.
A cyclic redundancy check (CRC) algorithm enhances error detection and reliability in safety-critical applications. The transceiver also delivers improved EMC and ESD performance, along with low current consumption in sleep and standby modes while retaining full wake-up capability. It interfaces directly with 3-V to 3.6-V microcontrollers through the VIO pin.
The ATA6571RT transceiver costs $210 each in 10-unit quantities. A timeline for availability was not provided at the time of this announcement.
Reconfigurable modules ease analog design

Chameleon adaptive analog modules from Okika provide preprogrammed, ready-to-use analog functions in a compact 14×11-mm form factor. Each module employs the company’s FlexAnalog field-programmable analog array (FPAA) architecture—a reconfigurable matrix with over 40 configurable analog modules (CAMs), including filters, amplifiers, and oscillators. Nonvolatile memory and reconfiguration circuitry are integrated on the module.

Chameleon simplifies analog designs that need flexibility without the complexity of firmware or digital control. Modules function as fixed analog blocks straight out of the box—no microcontroller or firmware required. Configurations can be reprogrammed in-system or using any 3.3-V SPI EEPROM programmer.
Anadigm Designer 2 software provides parameter-tuning and filter-design tools for simulating and verifying Chameleon’s performance. Typical applications include programmable active filters, sensor signal conditioning and linearization, and industrial automation and adaptive control, as well as research and prototyping.
Chameleon extends the FlexAnalog platform into an application-ready design that adapts easily. For pricing and availability, contact sales@okikadevices.com.
Vertical GaN advances efficiency and power density

onsemi has developed power semiconductors based on a vertical GaN (vGaN) architecture that improves efficiency, power density, and ruggedness. These GaN-on-GaN devices conduct current vertically through the semiconductor, supporting higher operating voltages and faster switching frequencies.

Most commercially available GaN devices are built on silicon or sapphire substrates, which conduct current laterally. onsemi’s GaN-on-GaN technology enables vertical current flow in a monolithic die, handling voltages up to and beyond 1200 V while delivering higher power density, better thermal stability, and robust performance under extreme conditions. Compared with lateral GaN semiconductors, vGaN devices are roughly three times smaller.
These advantages translate to significant system-level benefits. High-end power systems using vGaN can cut energy and heat losses by nearly 50%, while reducing size and weight. The technology enables smaller, lighter, and more efficient systems for AI data centers, electric vehicles, and other electrification applications.
onsemi is now sampling 700-V and 1200-V vGaN devices to early access customers.
SoC delivers dual-mode Bluetooth for edge devices

Ambiq’s Apollo510D Lite SoC provides both Bluetooth Classic and BLE 5.4 connectivity, enabling always-on intelligence at the edge. It is powered by a 32-bit Arm Cortex-M55 processor running at up to 250 MHz with Helium vector processing and Ambiq’s turboSPOT dynamic scaling. A dedicated Cortex-M4F network coprocessor operating at up to 96 MHz handles wireless and sensor-fusion tasks.

According to Ambiq, its Subthreshold Power Optimized Technology (SPOT) delivers 16× faster performance and up to 30× better AI energy efficiency than comparable M4- or M33-based devices. The SoC’s BLE 5.4 radio subsystem provides +14 dBm transmit power, while dual-mode capability supports low-power audio streaming and backward compatibility with Classic Bluetooth.
The Apollo510D Lite integrates 2 MB of RAM and 2 MB of nonvolatile memory with dedicated instruction/data caches for faster execution. It also includes secureSPOT 3.0 and Arm TrustZone to enable secure boot, firmware updates, and data protection across connected devices.
Along with the Apollo510D Lite (dual-mode Bluetooth), Ambiq’s lineup includes the Apollo510 Lite (no BLE radio) and the Apollo510B Lite (BLE-only). The Apollo510 Lite series is sampling now, with volume production expected in Q1 2026.
Dual-range motion sensor simplifies IIoT system designs

STMicroelectronics debuts the tiny ISM6HG256X three-in-one motion sensor in a 2.5 × 3-mm package for data-hungry industrial IoT (IIoT) systems, while also supporting edge AI applications. The IMU combines simultaneous low-g (±16 g) and high-g (±256 g) acceleration detection with a high-performance precision gyroscope for angular rate measurement, covering everything from subtle motion and vibration to severe shocks.
“By integrating an accelerometer with dual full-scale ranges, it eliminates the need for multiple sensors, simplifying system design and reducing overall complexity,” ST said.
The ISM6HG256X is suited for IIoT applications such as asset tracking, worker safety wearables, condition monitoring, robotics, factory automation, and black box event recording.
In addition, the embedded edge processing and self-configurability support real-time event detection and context-adaptive sensing, which are needed for asset tracking sensor nodes, wearable safety devices, continuous industrial equipment monitoring, and automated factory systems.
(Source: STMicroelectronics)
Key features of the MEMS motion sensor are the unique machine-learning core and finite state machine, together with adaptive self-configuration and sensor fusion low power (SFLP). In addition, thanks to the SFLP algorithm, 3D orientation tracking is also possible with a few µA of current consumption, according to ST.
These features are designed to bring edge AI directly into the sensor to autonomously classify detected events, which supports real-time, low-latency performance, and ultra-low system power consumption.
The ISM6HG256X is available now in a surface-mount package that can withstand harsh industrial environments from -40°C to 105°C. Pricing starts at $4.27 for orders of 1,000 pieces from the eSTore and through distributors. It is part of ST’s longevity program, ensuring long-term availability of critical components for at least 10 years.
Also available to help with development are the new X-NUCLEO-IKS5A1 industrial expansion board, the MEMS Studio design environment, and the X-CUBE-MEMS1 software libraries. These tools help implement functions such as high-g and low-g fusion, sensor fusion, context awareness, asset tracking, and calibration.
The ISM6HG256X will be showcased in a dedicated STM32 Summit Tech Dive, “From data to insight: build intelligent, low-power IoT solutions with ST smart sensors and STM32,” on November 20.
LIN motor driver improves EV AC applications

As precise control of cabin airflow and temperature becomes more critical in vehicles to enhance passenger comfort as well as to support advanced thermal management systems, Melexis introduces the MLX81350 LIN motor driver for air conditioning (AC) flaps and automated air vents in electric vehicles (EVs). The MLX81350 delivers a balanced combination of performance, system integration, and cost efficiency to meet these requirements.
The fourth-generation automotive LIN motor driver, built on high-voltage silicon-on-insulator technology, delivers up to 5 W (0.5 A) per motor and provides quiet and efficient motor operation for air conditioning flap motors and electronic air vents.
(Source: Melexis)
In addition to flash programmability, Melexis said the MLX81350 offers high robustness and function density while reducing bill-of-materials complexity. It integrates both analog and digital circuitry, providing a single-chip solution that is fully compliant with industry-standard LIN 2.x/SAE J2602 and ISO 17987-4 specifications for LIN slave nodes.
The MLX81350 features a new software architecture that enhances performance and efficiency over the previous generation. This enhancement includes improved stall detection and the addition of sensorless, closed-loop field-oriented control. This enables smoother motor operation, lower current consumption, and reduced acoustic noise to better support automotive HVAC and thermal management applications, Melexis said.
However, the MLX81350 still maintains pin-to-pin compatibility with its predecessors for easier migration of existing designs.
The LIN motor driver offers a rich set of peripherals to support advanced motor control and system integration, including a configurable RC clock (24–40 MHz), four general-purpose I/Os (digital and analog), one high-voltage input, 5× 16-bit motor PWM timers, two 16-bit general timers, and a 13-bit ADC with <1.2-µs conversion time across multiple channels, as well as UART, SPI, and I²C master or slave interfaces. The LIN interface enables seamless communication within vehicle networks and provides built-in protection and diagnostic features, including over-current, over-voltage, and temperature shutdown, to ensure safe and reliable operation in demanding automotive environments.
The MLX81350 is designed according to ASIL B (ISO 26262) and offers flexible wake-up options via LIN, external pins, or an internal wake-up timer. Other features include a low standby current consumption (25 µA typ.; 50 µA max.) and internal voltage regulators that allow direct powering from the 12-V battery, supporting an operating voltage range of 5.5 V to 28 V.
The MLX81350 is available now. The automotive LIN motor driver is offered in SO-8 EP and QFN-24 packages.
OKW’s plastic enclosures add new custom features

OKW can now supply its plastic enclosures with bespoke internal metal brackets and mounting plates for displays and other large components. The company’s METCASE metal enclosures division designs and manufactures the custom aluminum parts in-house.
(Source: OKW Enclosures Inc.)
One recent project of this type involved OKW’s CARRYTEC handheld enclosures. Two brackets fitted to the lid allowed a display to be flush mounted; a self-adhesive label covered the join between screen and case. Another mounting plate, fitted in the base, was designed to support a power supply.
Custom brackets and supports can be configured to fit existing PCB pillars in OKW’s standard plastic enclosures. Electronic components can then be installed on the brackets’ standoffs.
CARRYTEC (IP 54 optional) is ideal for medical and laboratory electronics, test/measurement, communications, mobile terminals, data collection, energy management, sensors, Industry 4.0, machine building, construction, agriculture and forestry.
The enclosures feature a robust integrated handle with a soft padded insert. They can accommodate screens from 8.4″ to 13.4″. Interfaces are protected by inset areas on the underside. A 5 × AA battery compartment can also be fitted (machining is required).
These housings can be specified in off-white (RAL 9002) ABS (UL 94 HB) or UV-stable lava ASA+PC (UL 94 V-0) in sizes S 8.74″ × 8.07″ × 3.15″, M 10.63″ × 9.72″ × 1.65/3.58″, and L 13.70″ × 11.93″ × 4.61″.
In addition to the custom metal brackets and mounting plates, other customizing services include machining, lacquering, printing, laser marking, decor foils, RFI/EMI shielding, and installation and assembly of accessories.
For more information, view the OKW website: https://www.okwenclosures.com/en/news/blog/BLG2510-metal-brackets-for-plastic-enclosures.htm
A current mirror reduces Early effect

It’s just a fact of life. A BJT wired in common emitter, even after compensating for the effects of device and temperature variations, still isn’t a perfect current source.
Wow the engineering world with your unique design: Design Ideas Submission Guide
One of the flaws in the ointment is the Early effect of collector voltage on collector current. It can sometimes be estimated from datasheet parameters if output admittance (hoe) is specified (Ee ~ hoe / test current). A representative value is 1% per volt. Figure 1 shows its mischief in action in the behavior of a simple current mirror, where:
I2 = I1(1 + Vcb/Va)
Va ≈ 100 V
Ierr = Vcb/Va ≈ 1%/V
Figure 1 Current mirror without emitter degeneration.
If the two transistors are matched, I2 should equal I1. But instead, Q2’s collector current may increase by 1% per Vcb volt. A double-digit Vcb may create a double-digit percentage error. That would make for a rather foggy “mirror”!
Fortunately, a simple trick for mitigating Early is well known to skilled practitioners of our art. (Please see the footnote). Emitter degeneration is based on an effect that’s 4000 times stronger than the effect of Vcb on Ic.
That’s the effect of Vbe on collector current, and it can easily reduce Ee by two orders of magnitude. Figure 2 shows how it works:
I2 ≈ I1(1 + Vcb/Va – R(I2 – I1)/26 mV)
Ierr ≈ (Vcb/Va)/(Vr/26 mV + 1)

Figure 2 Current mirror with emitter degeneration
Equal resistors R added in series with both emitters will develop voltages Vr = I1*R and I2*R that will be equal if the currents are equal. But if the currents differ (e.g., because of Early), then a Vbe differential will appear…duh…
This is useful because the Vbe differential will oppose the initial current differential, and the effect is large, even if Vr is small. Figure 3 shows how dramatically this reduces Ierr.

Figure 3 A normalized Early effect (y-axis) versus emitter degeneration voltage Ve = Ia*R (x-axis). Note that just 50 mV reduces Early by 3:1. That’s indeed a “long way”!
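To put rough numbers on these relationships, here is a minimal Python sketch that evaluates the Ierr expression above, with and without degeneration. The Vcb, Va, and Vr values are illustrative, not taken from the figures.

```python
# Quick numeric check of the relations above; Vcb, Va, and Vr are illustrative.
VA = 100.0      # Early voltage, volts (representative value from the text)
VCB = 10.0      # collector-base voltage on Q2, volts
VT = 0.026      # thermal voltage at room temperature, volts

def early_error(vcb, va, vr=0.0):
    """Fractional mirror error: Ierr = (Vcb/Va) / (Vr/VT + 1)."""
    return (vcb / va) / (vr / VT + 1.0)

print(f"No degeneration:   Ierr = {early_error(VCB, VA):.2%}")
for vr in (0.025, 0.050, 0.100, 0.250):   # emitter degeneration voltage Vr = I1*R
    print(f"Vr = {vr * 1000:3.0f} mV:       Ierr = {early_error(VCB, VA, vr):.2%}")
```

With Vcb = 10 V and Va = 100 V, the bare mirror shows a 10% error, while 50 mV of degeneration brings it down to roughly 3.4%, the 3:1 improvement noted in the figure caption.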
Footnote
An earlier DI covered current mirrors and the Early effect: “A two-way mirror—current mirror that is.” In the grand tradition of editor Aalyia’s DI kitchen, frequent and expert commentator Ashutosh suggested how emitter degeneration could improve performance:
asa70
May 27, 2025
Regarding degen, i’ve found that a half volt at say 1mA FS helps match the in and out currents much better even at a tenth of the current, even for totally randomly selected transistors. I suppose it is because the curves will be closer at smaller currents, so that even a 50 mV drop goes a long way
Ashutosh certainly nailed it! 50mV does go a long (3:1) way!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A two-way mirror—current mirror that is
- A two-way Wilson current mirror
- Use a current mirror to control a power supply
- A comparison between mirrors and Hall-effect current sensing
Designing a thermometer with a 3-digit 7-segment indicator

Transforming a simple 10-kΩ NTC thermistor into a precise digital thermometer is a great example of mixed-signal design in action. Using a mixed-signal IC—AnalogPAK SLG47011—this design measures temperatures from 0.1°C to 99.9°C with impressive accuracy and efficiency.
SLG47011’s analog-to-digital converter (ADC) with programmable gain amplifier (PGA) captures precise voltage readings, while its memory table and width converter drive a 3-digit dynamic 7-segment display. Each digit lights up in rapid sequence, creating a stable indication for a user, a neat demonstration of efficient multiplexing.
Compact, flexible, and self-contained, this design shows how one device can seamlessly handle sensing, computation, and display control—no microcontroller required.
Operating principle
The circuit schematic of the thermometer with a 3-digit 7-segment indicator is shown in Figure 1.

Figure 1 The circuit schematic displays a thermometer with 3-digit 7-segment indicator. Source: Renesas
A VDIV = 1.8-V supply feeds PIN 7 through the resistive divider RT/(R + RT), where R = 5.6 kΩ. PIN 8 activates the first digit, while PIN 6 activates the second digit and decimal point. PIN 4 activates the third digit.
The signal from PIN 7 goes to the single-ended input of the PGA (buffer mode, mode #6) and then to ADC CH0 for further sampling. The allowable temperature range measured by the thermometer is 0.1°C to 99.9°C (or 273.25 K to 373.05 K).
The voltage (VIN) after the resistive divider is equal to:

The ADC converts this voltage to a 10-bit code using the formula:

Where:
- RT is the resistance of the NTC thermistor:

- R0 = 10,000 Ω is the resistance at ambient temperature T0 (25°C or 298.15 K)
- B = 4050 K is a constant of the thermistor
- VIN is the voltage on PIN 7
- 1024 represents the 10-bit resolution of the ADC (2¹⁰)
- 1620 represents the internal Vref in mV
- VINdec is VIN in 10-bit decimal format
The maximum value of VINdec is 1023.
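As a quick cross-check of the relationships above, the following Python sketch evaluates the thermistor beta model, the divider output, and the resulting 10-bit code at a few temperatures. It assumes the standard NTC beta equation implied by the listed parameters; the exact scaling used inside the design file may differ slightly.

```python
import math

# Circuit constants from the article
VDIV = 1.8       # divider supply, V
R    = 5600.0    # fixed divider resistor, ohms
R0   = 10000.0   # thermistor resistance at T0, ohms
B    = 4050.0    # thermistor beta constant, K
T0   = 298.15    # 25 degC in kelvin
VREF = 1.620     # internal ADC reference, V

def rt(t_celsius):
    """NTC beta model: RT = R0 * exp(B * (1/T - 1/T0))."""
    t_kelvin = t_celsius + 273.15
    return R0 * math.exp(B * (1.0 / t_kelvin - 1.0 / T0))

def vin(t_celsius):
    """Divider output on PIN 7: VIN = VDIV * RT / (R + RT)."""
    r_t = rt(t_celsius)
    return VDIV * r_t / (R + r_t)

def vindec(t_celsius):
    """10-bit ADC code: VINdec = VIN * 1024 / 1620 mV, clamped to 1023."""
    return min(1023, int(vin(t_celsius) * 1024.0 / VREF))

for t in (0.1, 25.0, 99.9):
    print(f"t = {t:5.1f} degC  RT = {rt(t):8.0f} ohm  VIN = {vin(t):.3f} V  code = {vindec(t)}")
```

At 25°C, for example, RT equals R0, VIN is about 1.15 V, and the resulting ADC code is roughly 729.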
The NTC thermistor resistances for the minimum and maximum value of the temperature are calculated using equations below:

The maximum and minimum voltages after the resistive divider correspond to the thermistor resistances at 0.1°C and 99.9°C, respectively.
The relationship between the measured temperature and VIN for the applied parameters of the circuit is shown in Figure 2.

Figure 2 Graph shows the relationship between temperature and VIN.
Thermometer design
The GreenPAK IC-based thermometer design is shown in Figure 3. Download free Go Configure Software Hub to open the design file and see how the functionality is carried out.

Figure 3 Thermometer with 3-digit 7-segment indicator design is built around a mixed-signal IC. Source: Renesas
The SLG47011 mixed-signal IC contains a memory table macrocell that can hold 4096 12-bit words. This space is enough to store the values of each of the three indicator digits for each VINdec (1024 * 3 = 3072 values in total). In other words, the 3n word of the memory table corresponds to the first digit, the 3n + 1 to the second digit, and the 3n + 2 to the third digit of each corresponding T, where n = VINdec.
The ADC output value is sent to the MathCore macrocell, where it’s multiplied by three. This value is then used as a memory table address. Assuming that the ADC output is 1000, the MathCore output is 3000. This means that the memory table values at 3000, 3001, and 3002 addresses will be used and will correspond to the indicator’s first, second, and third digits accordingly.
Data from the MathCore output goes to the IN+ CH0 input of the multichannel DCMP macrocell. This data is compared with the data on the IN- CH0 input, which is taken from the Data Buffer0 output. Data Buffer0 stores the data from the CNT11/DLY11/FSM0 macrocell, which operates in Counter/FSM mode.
The Counter/FSM is reset to “1” when a HIGH signal from the ADC data-ready output arrives and starts counting upward. The multichannel DCMP OUT0 output is connected to the Keep input of CNT11/DLY11/FSM0. This means that when the CNT11/DLY11/FSM0 current value is equal to the MathCore output value, the DCMP OUT0 output is HIGH, and the Keep input of CNT11/DLY11/FSM0 is also HIGH, keeping the counted value for further addressing to the memory table.
At the same time, together with CNT11/DLY11/FSM0, the Memory Control Counter is counting upward from 0 and sets the memory table address.
Thus, when the ADC measures a certain voltage value, the previously described comparison operation will point to the corresponding voltage value stored in the memory table—three consecutively recorded digits, which are then dynamically displayed on the 7-segment display.
The memory table’s stored data then goes to the width converter macrocell, which converts the serial 12-bit input into a parallel 12-bit output (Table 1).

Table 1 The above data highlights width converter connections. Source: Renesas
The inverter enables the decimal point (DP) through PIN 16 based on the state of 3-bit LUT0 (second digit).
To dynamically display the temperature, the digits will be ON sequentially with a period of 300 μs. The period is set by the CNT2/DLY2 macrocell (in Reset Counter Mode). The 3-bit LUT4 sets the clock of the width converter based on its synchronization with the CNT11/DLY11/FSM0 clock and the state of DCMP OUT0.
The P DLY, DFF8, and 3-bit LUT12 macrocells form a state counter for the Up/Down input of the Memory Control Counter macrocell based on the state of the second digit (falling edge on OUT2 of the width converter).
When the first digit is ON, the Memory Control Counter counts upward by 1; when the second digit is first set ON, the state counter is set LOW, forcing the Memory Control Counter to count down once it has already activated the third digit. The second digit is therefore activated again, and the state counter goes HIGH, forcing the Memory Control Counter to count upward once it has already activated the first digit. Thus, all three digits are sequentially activated until a new measured value arrives from the ADC macrocell.
CNT8/DLY8, CNT12/DLY12/FSM1, and 3-bit LUT7 are used to properly turn on the ADC after the first turn-on when POR arrives, as well as during further operation when the ADC is turned on and off. CNT12/DLY12/FSM1 provides a period of 1.68 s, which results in the thermometer value being updated every 1.68 s.
Memory table filling algorithm
The algorithm below is shown for a VDIV voltage of 1.8 V and a resistive divider of 5.6 kΩ and RT.
First, the resistance value of RT (Ω) at ambient temperature T is calculated using the formula:

Second, the value of the temperature t (°C) for a determined RT value is calculated by:

Then, the calculated t (°C) values are rounded to one decimal place.
For each VINdec value, three values are assigned in the memory table as follows: each VINdec corresponds to three consecutive values in the memory table 3n, 3n + 1, and 3n + 2, where n = VINdec.
Three separate columns for each of the values of 3n, 3n + 1, and 3n + 2 should be created. They each correspond to the first, second, and third digits of the indicator, respectively. The first column is assigned to the first digit of the rounded t value. The second column is assigned to the second digit, and the third column is assigned to the third digit.
For each digit of each column, a 7-bit binary value is found (m11 – m5), corresponding to the activation of the corresponding digit of the 7-segment display (Table 2).

Table 2 The above data highlights the 7-segment code. Source: Renesas
When the measured temperature tmeas falls outside the 0.1°C to 99.9°C range (tmeas < 0.1°C or tmeas > 99.9°C), the 0 – L symbols should be displayed on the indicator. The third digit is not activated in this case.
The next step is to add 5 more bits (m4 – m0) to the right of this value to get a 12-bit number.
The ninth bit (m3) is responsible for turning on the first digit, the tenth bit (m2) is responsible for turning on the second digit, and the eleventh bit (m1) for the third digit. Since a 7-segment indicator with a common cathode is used, turning on the digit is done with a LOW level (0). Therefore, for the first column (with words of type 3n), the ninth bit (m3) will equal 0, while the tenth (m2) and the eleventh (m1) bits will equal 1.
For the second column (with words of type 3n + 1), the tenth bit (m2) will be equal to 0, while the ninth (m3) and eleventh (m1) bits will be equal to 1. For the third column (with words of type 3n + 2), the eleventh (m1) bit will be equal to 0, while the ninth (m3) and tenth (m2) bits will be equal to 1.
The twelfth bit (m0) is not used, so its value does not affect the design. The resulting 3072 binary 12-bit values must then be converted to hex.
The required values for the memory table are now determined; they need to be sorted in ascending order of the Word index and inserted into the appropriate location in the software. For a better understanding of the connections between the memory table and the width converter, view Figure 4.

Figure 4 The above diagram highlights connections between the memory table and the width converter. Source: Renesas
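For readers who prefer to generate the table programmatically rather than by hand, the Python sketch below builds the 3072 words following the scheme just described. The segment-to-bit mapping, the out-of-range placeholder pattern, and the values of the unspecified m4 and m0 bits are assumptions for illustration only; the authoritative encoding is the one given in Table 2 and the design file.

```python
import math

# Standard common-cathode 7-segment patterns, bit order a..g (assumed mapping;
# the actual segment-to-pin assignment comes from Table 2 and may differ).
SEG = {
    0: 0b1111110, 1: 0b0110000, 2: 0b1101101, 3: 0b1111001, 4: 0b0110011,
    5: 0b1011011, 6: 0b1011111, 7: 0b1110000, 8: 0b1111111, 9: 0b1111011,
}

R, R0, B, T0, VDIV, VREF = 5600.0, 10000.0, 4050.0, 298.15, 1.8, 1.620

def temp_from_code(n):
    """Invert the divider/ADC chain: code -> VIN -> RT -> t (degC)."""
    vin = n * VREF / 1024.0
    if vin <= 0 or vin >= VDIV:
        return None
    rt = R * vin / (VDIV - vin)
    t_kelvin = 1.0 / (1.0 / T0 + math.log(rt / R0) / B)
    return round(t_kelvin - 273.15, 1)

def word(segments, enable):
    """Pack a 12-bit word: m11..m5 = segments, m3/m2/m1 = active-low digit
    enables; m4 and m0 are left at 0 (placeholders, not specified here)."""
    m3, m2, m1 = enable
    return (segments << 5) | (m3 << 3) | (m2 << 2) | (m1 << 1)

table = []
for n in range(1024):                      # one three-word group per VINdec
    t = temp_from_code(n)
    if t is None or not (0.1 <= t <= 99.9):
        digits = (0, 0, 0)                 # placeholder; article shows "0 - L" here
    else:
        tenths = round(t * 10)
        digits = (tenths // 100, (tenths // 10) % 10, tenths % 10)
    table.append(word(SEG[digits[0]], (0, 1, 1)))   # word 3n   -> first digit
    table.append(word(SEG[digits[1]], (1, 0, 1)))   # word 3n+1 -> second digit
    table.append(word(SEG[digits[2]], (1, 1, 0)))   # word 3n+2 -> third digit

print(len(table), "words;", " ".join(f"{w:03X}" for w in table[3*729:3*729+3]))
```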
Test results
Figure 5 shows the result of measuring a temperature of around 17°C with respect to data obtained by a multimeter thermocouple.

Figure 5 Temperature range is set up to room temperature of around 17°C. Source: Renesas
Figure 6 shows the result of measuring a temperature of around 59°C with respect to data obtained by a multimeter thermocouple.

Figure 6 The measurement results show a temperature of around 59°C. Source: Renesas
Figure 7 shows the result of measuring a temperature of around 70°C with respect to data obtained by multimeter thermocouple.

Figure 7 The measurement results show a temperature of around 70°C. Source: Renesas
The mixed-signal integration
This design illustrates a practical approach to implementing a compact digital thermometer using the SLG47011 mixed-signal chip. Its ADC with PGA enables precise indirect temperature measurement, while the memory table and width converter manage dynamic control of the 3-digit 7-segment indicator.
By adjusting the resistive divider and updating the memory table, engineers can easily redefine the measurement range to suit different applications. The result is a straightforward and flexible thermometer design that effectively demonstrates mixed-signal integration in practice.
Myron Rudysh is application engineer at Renesas Electronics.
Nazar Ftomyn is application engineer at Renesas Electronics.
Yaroslav Chornodolskyi is application engineer at Renesas Electronics.
Bohdan Kholod is senior product development engineer at Renesas Electronics.
Related Content
- Thermometers perform calibration checks
- Thermal Imaging Sensors for Fever Detection
- Electronic Thermometer Project by LM35 and LM3914
- Measure temperature precisely with an infrared thermometer
- Electronic Thermometer with CrowPanel Pico 2.8 Inch 320×240 TFT LCD
The shift from Industry 4.0 to 5.0

The future of the global industry will be defined by the integration of AI with robotics and IoT technologies. AI-enabled industrial automation will transform manufacturing and logistics across automotive, semiconductors, batteries, and beyond. IDTechEx predicts that the global sensor market will reach $255 billion by 2036, with sensors for robotics, automation, and IoT poised as key growth markets.
From edge AI and IoT sensors for connected devices and equipment (Industry 4.0) to collaborative robots, or cobots (Industry 5.0), technology innovations are central to future industrial automation solutions. As industry megatrends and enabling technologies increasingly overlap, it’s worth evaluating the distinct value propositions of Industry 4.0 and Industry 5.0, as well as the roadmap for key product adoption in each.
Sensor and robotics technology roadmap for Industry 4.0 and Industry 5.0 (Source: IDTechEx)
What are Industry 4.0 and Industry 5.0?
Industry 4.0 emerged in the 2010s with IoT and cloud computing, transforming traditionally logic-controlled automated production systems into smart factories. Miniaturized sensors and industrial robotics enable repetitive tasks to be automated in a controlled and predictable manner. IoT networking, cloud processing, and real-time data management unlock productivity gains in smart factories through efficiency improvements, downtime reductions, and optimized supply chain integration.
Industry 4.0 technologies have gained significant traction in many high-volume, low-mix product markets, including consumer electronics, automotive, logistics, and food and beverage. Industrial robots have been key to automation in many sectors, excelling at tasks such as material handling, palletizing, and quality inspection in manufacturing and assembly applications.
If Industry 4.0 is characterized by cyber-physical systems, then Industry 5.0 is all about human-robot collaboration. Collaborative and humanoid robots better accommodate changing tasks and facilitate safer, more natural interaction with human operators—areas where traditional robots struggle.
Cobots are designed to work closely with humans without the need for direct control. AI models trained on tailored, application-specific datasets are employed to make cobots fully autonomous, with self-learning and intelligent behaviors.
The distinction between Industry 4.0 and Industry 5.0 technologies is ambiguous, particularly as products in both categories increasingly integrate AI. Nevertheless, technology innovations continue to enable the next generation of Industry 4.0 and Industry 5.0 products.
Intelligent sensors for Industry 4.0
In 2025, the big trend within Industry 4.0 is moving from connected to intelligent industrial systems using AI. AI models built and trained on real operation data are being embedded into sensors and IoT solutions to automate decision-making and offer predictive functionality. Edge AI sensors, digital twinning, and smart wearable devices are all key enabling technologies promising to boost productivity.
Edge-AI-enabled sensors are hitting the market, employing on-board neural processor units with AI models to carry out data inference and prediction on endpoint devices. Edge AI cameras capable of image classification, segmentation, and object detection are being commercialized for machine vision applications. Sony’s IMX500 edge AI camera module has seen early adoption in retail, factory, and logistics markets, while Cognex’s AI-powered 3D vision system gains traction for in-line quality inspection in EV battery and PCB manufacturing.
With over 15% of production costs arising from equipment failure in many industries, edge AI sensors monitoring equipment performance and automating maintenance can mitigate risks. Analog Devices, STMicroelectronics, TDK, and Siemens all now offer in-sensor or co-packaged machine-learning vibration and temperature sensors for industrial predictive maintenance. Predictive maintenance has been slow to take off, however, with industrial equipment suppliers and infrastructure service providers (rail, wind, and marine assets) being early adopters.
Simulating and modeling industrial operational environments is becoming more feasible and valuable as sensor data volume grows. Digital twins can be built using camera and position sensor data collected on endpoint devices. Digital twins enable performance simulation and maintenance forecasting to maximize productivity and minimize operational downtime. Proof-of-concept use cases include remote equipment operation, digital staff training, and custom AI model development.
Beyond robotics and automation, industrial worker safety is still a challenge. The National Safety Council estimates that the total cost of U.S. work injuries was $177 billion in 2023, with high incident rates in construction, logistics, agriculture, and manufacturing industries.
Smart personal protection equipment with temperature, motion, and gas sensors can monitor worker activity and environmental conditions, giving managers oversight to ensure safety. Wearable IoT skin patches offering hydration and sweat analysis are also emerging in the mining and oil and gas industries, reducing risk by proactively addressing the physiological and cognitive effects of dehydration.
Human-robot collaboration for Industry 5.0
Industry 4.0 relies heavily on automation, making it ideal for high-volume, low-mix manufacturing. As the transition to Industry 5.0 takes place, warehouse operators are seeking greater flexibility in their supply chains to support low-volume, high-mix production.
A defining aspect of Industry 5.0 is human-robot collaboration, with cobots being a core component of this concept. Humanoid robots are also designed to work alongside humans, aligning them with Industry 5.0 principles. However, as of late 2025, their technology and safety standards are still developing, so in most factory settings, they are deployed with physical separation from human workers.
Ten-year humanoid robot hardware market forecast (2025–2035) (Source: IDTechEx)
Humanoid robots, widely perceived as embodied AI, are projected to grow rapidly over the next 10 years. IDTechEx forecasts that the humanoid robot hardware market is set to take off in 2026, growing to reach $25 billion by 2035. This surge is fueled by major players like Tesla and BYD, who plan a more than tenfold expansion in humanoid deployment in their factories between 2025 and 2026.
As of 2025, despite significant hype around humanoid robots, there are still limited real-world applications where they fit. Among industrial applications, the automotive and logistics sectors have attracted the most interest. In the short- to mid-term, the automotive industry is expected to lead humanoid adoption, driven by the historic success of automation, large-scale production demands, and stronger cost-negotiation power.
Lightweight and slow-moving cobots, designed to work next to human operators without physical separation, have also gained significant momentum in recent years. Cobots are ideal options for small and mid-sized enterprises due to their low cost, small footprint, ease of programming, flexibility, and low power consumption.
Cobots could tackle a key industry pain point: the risk of shutdown to entire production lines when a single industrial robot malfunctions, due to the need to ensure human operators can safely enter robot working zones for inspection. Cobots could be an ideal solution to mitigate this, as they can work closely and flexibly with human operators.
The most compelling application of cobots is in the automotive industry for assembly, welding, surface polishing, and screwing. Cobots are also attractive in high-mix, low-volume production industries such as food and beverage.
Limited technical capabilities and high costs currently restrict wider cobot adoption. However, alternative business models are emerging to address these challenges, including cobot-as-a-service and try-first-and-buy-later models.
Outlook for Industry X.0
AI, IoT, and robotics are mutually enabling technologies, with industrial automation applications positioned firmly within this nexus and poised to capitalize on advancements.
Key challenges for Industry X.0 technologies are long return-on-investment (ROI) timelines and bespoke application requirements. Industrial IoT sensor networks take an average of two years to generate returns, while humanoid robots in warehouses require 18 months of pilot testing before broader use. However, economies-of-scale cost reductions and supporting infrastructure can ease ROI concerns, while long-term productivity gains will also offset high upfront costs.
The next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making. With IDTechEx forecasting that humanoid and cobot adoption will take off by the end of the decade, the 2030s are set to be defined by Industry 5.0.
A precision, voltage-compliant current source
A simple current source
It has long been known that the combination of a depletion-mode MOSFET (or, before these were available, a JFET) and a resistor makes a simple, serviceable current source such as that seen on the right side of Figure 1.
Figure 1 Current versus voltage characteristics of a DN2540 depletion mode MOSFET and the circuit of a simple current source made with one, both courtesy of Microchip.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This is evident from the figure’s left side, which shows the drain current versus drain voltage characteristics for various gate-source voltages of a DN2540 MOSFET. Once the drain voltage rises above a certain point, further increases cause only very slight rises in drain current (not visible on this scale). This simple circuit might suffice for many applications, except for the fact that the VGS required for a specific drain current will vary over temperature and production lots. Something else is needed to produce a drain current with any degree of precision.
Alternative current source circuits
And so, we might turn to something like the circuits of Figure 2.

Figure 2 A current source with a more predictable current, left (IXYS) and a voltage regulator which could be employed as a current source with a more predictable current, right (TI). Source: IXYS and Texas Instruments
In these circuits, we see members of the ‘431 family regulating MOSFET source and BJT emitter voltages. The Texas Instruments circuit on the right demonstrates the need for an oscillation-prevention capacitor, and my experience has been that this is also needed with the IXYS circuit on the left.
Although RL1, RS, and R1 pass precise, well-regulated currents to the transistors in their respective circuits, resistors RB and R do not. RB’s current is subject to a not well-controlled VGS, and R’s is affected by whatever variations there might be in VBATT.
The MOSFET circuit is a true two-terminal current source, so a load can be connected in series with the current source at its positive or negative terminal. But then the load is always subjected to the poorly-controlled RB current.
The BJT is part of a three-terminal circuit, and for a load to avoid the VBATT-influenced current through R, it could only be connected between VBATT and the BJT collectors. Even so, variations in VBATT could produce currents, which lead to voltages that are not entirely rejected at the TLA431 cathode, and so would produce uncontrolled currents in the BJTs and therefore in the load.
A true two-terminal current source
Figure 3 addresses these limitations in circuit performance. In analyzing it, as always, I rely on datasheet maximum and minimum values whenever they are available, but resort to and state that I’m employing typical values when they are not.

Figure 3 This circuit delivers predictable currents to U1 and M1 and therefore to a load. It’s a true two-terminal current source which accommodates load connection to both low and high side.
U1 establishes 1.24 · ( 1 + R4 / R3 ) volts at VS and adds a current of VS / (R4 + R3) to the MOSFET drain.
An additional drain current comes from:
2 · (VS – VBE(Q2)) / (R2 + R5)
The “2” is due to the fact that R2 and R1 currents are identical (discounting the Early effect on Q1). The current through R1 is nearly constant regardless of the value of VGS. This current provides what U1 needs to operate.
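As a rough sanity check, the short sketch below combines the two current contributions described above. The resistor and VBE values are placeholders chosen only for illustration and are not taken from Figure 3.

```python
# Rough sanity check of the two current contributions described above.
VREF_431 = 1.24    # '431 reference voltage, V
VBE_Q2   = 0.65    # assumed Q2 base-emitter drop, V (placeholder)

R3, R4 = 1_000.0, 4_000.0        # set VS = 1.24 * (1 + R4/R3); placeholder values
R2, R5 = 33_000.0, 33_000.0      # set the Q1/Q2 bias current; placeholder values

VS = VREF_431 * (1.0 + R4 / R3)               # regulated source voltage
I_divider = VS / (R3 + R4)                    # current through the R3 + R4 string
I_bias    = 2.0 * (VS - VBE_Q2) / (R2 + R5)   # Q1/Q2 contribution (the factor of 2)
print(f"VS = {VS:.2f} V, total drain/load current ~ {(I_divider + I_bias) * 1e3:.2f} mA")
```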
The precision of the total DC current through the load is limited by the tolerances of R1 through R5, the U1 reference’s accuracy, and the value of the BJT’s temperature-dependent VBE drop. (U1’s maximum feedback reference current over its operating temperature is a negligible 1 µA.)
U1 requires a minimum of 100 µA to operate, so R5 is chosen to provide it with 150 µA. Per its On Semi datasheet, at this current and over Q1’s operating temperature range, the 2N3906’s typical VCE saturation voltage is 50 mV. Add that to the 15mV drop across R1 for a total of 65 mV, which is the smallest achievable VSG value.
Accordingly, we are some small but indeterminate amount shy of the maximum drain current guaranteed for the part (at 25°C, 25 V VDS, and 0 V VGS only) by its datasheet. At the other extreme, under otherwise identical conditions, a VGS of -3.5 V will guarantee a drain current of less than 10 µA. For such, U1 and the circuit as a whole will operate properly at a VS of 5 VDC.
Higher temperatures might require a more negative VGS by a maximum of -4.5 mV/°C and, therefore, possibly larger values of VS and, accordingly, of R5. This would be to ensure that U1’s cathode voltage remains above 1.24 V under all conditions.
D2 is selected for a Zener voltage which, when added to D1’s voltage drop, is greater than VS, but is less than the lesser of the maximum allowed cathode-anode voltage of U1 (18 V) and the maximum allowed VGS of M1 (20 V). D1‘s small capacitance shields the rest of the circuit from the Zener capacitance, which might otherwise induce oscillations. The diodes are probably not needed, but they provide cheap protection. Neither passes current or affects circuit performance during normal operation. C1 ensures stable operation.
U1 strives to establish a constant voltage at VS regardless of the DC and AC voltage variations of the unregulated supply V1. Working against it in descending order of impact are the magnitude of the conductance of the R3 + R4 resistor string, U1‘s falling loop gain with frequency, and M1’s large Rds and small Cds. Still, the circuit built around the 400-V VDS-capable M1 achieves some surprisingly good results in the test circuit of Figure 4.

Figure 4 Circuit used to test the impedance of the Figure 3 current source.
Table 1 and Figure 5 list and display some measurements. Impedances in megohms are calculated using the formula RLOAD · 10^(−dB(VLOAD/VGEN)/20) / 1×10⁶.

Table 1 Impedances of the current source of Figure 3 at various frequencies, evaluated using the circuit of Figure 4.

Figure 5 Plotted curves of Figure 3 current source impedance from the data in Table 1.
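For convenience, the conversion from a measured VLOAD/VGEN attenuation to impedance can be scripted as below; the RLOAD value is a placeholder for illustration.

```python
# Converting the measured attenuation (in dB) to source impedance, per the
# formula above; RLOAD here is a placeholder value.
def impedance_megohms(db_vload_vgen, r_load_ohms):
    """Z (Mohm) ~= RLOAD * 10^(-dB/20) / 1e6, valid while Z >> RLOAD."""
    return r_load_ohms * 10.0 ** (-db_vload_vgen / 20.0) / 1e6

for db in (-60.0, -80.0, -100.0):
    print(f"{db:6.1f} dB with RLOAD = 1 kohm -> Z ~ {impedance_megohms(db, 1_000.0):.1f} Mohm")
```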
Observations
There are several conclusions that can be drawn from the curves in Figure 5. The major one is that at low frequencies, the AC impedance Z is roughly inversely proportional to current. A more insightful way to express this is that Z is proportional to R3 + R4, which sets the current. With larger resistance, current variations produce larger voltages for the ‘431 IC to use for regulation; that is, there’s more gain available in the circuit’s feedback loop to increase impedance.
Another phenomenon is that in the 1 and 10-mA current curves, the impedance rises much more quickly as frequency increases above 1 kHz. This is consistent with the fact that the TLVH431B gain is more or less flat from DC to 1 kHz and falls thereafter. The following phenomenon masks this effect somewhat at the higher 100 mA current.
Finally, at all currents, there is an advantage to operating at higher values of VDS. This is especially apparent at the highest current, 100 mA. And this is consistent with the fact that for the characteristic curves of the DN2540 MOSFET seen in Figure 1, higher VDS voltages are required at higher currents before the curves become horizontal.
Precision current source
A precision, high-impedance, moderate- to high-voltage-compliant current source has been introduced. Its two-terminal nature means that a load in series with it can be connected to the source’s positive or negative end. Unlike earlier designs, the ‘431 regulator IC’s operating current is independent of both the source’s supply voltage and of its MOSFET’s VGS voltage. The result is a more predictable DC current as well as higher AC impedances than would otherwise be obtainable.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- A high-performance current source
- PWM-programmed LM317 constant current source
- Simple, precise, bi-directional current source
- A negative current source with PWM input and LM337 output
- Programmable current source requires no power supply
Back EMF and electric motors: From fundamentals to real-world applications

Let us begin this session by revisiting a nostalgic motor control IC—the AN6651—designed to control the rotation speed of compact DC motors used in tape recorders, record players, and similar devices.
The figure below shows the AN6651’s block diagram and a typical application circuit, both sourced from a 1997 Panasonic datasheet. These retouched visuals offer a glimpse into the IC’s internal architecture and its practical role in analog motor control.

Figure 1 Here is the block diagram and application circuit of the AN6651 motor control IC. Source: Panasonic
Luckily, for those still curious to give it a try, the UTC AN6651—today’s counterpart to the legacy AN6651—is readily available from several sources.
Before we dive deeper, here is a quick question—why did I choose to begin with the AN6651? It’s simply because this legacy chip elegantly controls motor speed using back electromotive force (EMF) feedback—a clever analog technique that keeps rotation stable without relying on external sensors.
In analog systems, this approach is especially elegant: the IC monitors the voltage generated by the motor itself (its back EMF), which is proportional to speed. By adjusting the drive current to maintain a target EMF, the chip effectively regulates motor speed under varying loads and supply conditions.
And yes, this post dives into back EMF (BEMF) and electric motors. Let’s get started.
Understanding back EMF in everyday motors
A spinning motor also acts like a generator, as its coils moving through magnetic fields induce an opposing voltage called back EMF. This back EMF reduces the current flowing through the motor once it’s up to speed.
At that point, only enough current flows to overcome friction and do useful work—far less than the surge needed to get it spinning. Actually, it takes very little time for the motor to reach operating speed—and for the current to drop from its high initial value.
This self-regulating behaviour of back EMF is central to motor efficiency and protection. As the mechanical load rises and the motor begins to slow, back EMF decreases, allowing more current to flow and generate the required torque. Under light or no-load conditions, the motor speeds up, increasing back EMF and limiting current draw.
This dynamic ensures that the motor adjusts its power consumption based on demand, preventing excessive current that could overheat the windings or damage components. In essence, back EMF reflects motor speed and actively stabilizes performance, a principle rooted in classical DC motor theory.
It’s worth noting that back EMF plays a critical role as a natural current limiter during normal motor operation. When motor speed drops—whether due to a brownout or excessive mechanical loading—the resulting reduction in back EMF allows more current to flow through the windings.
However, if left unchecked, this surge can lead to overheating and permanent damage. Maintaining adequate speed and load conditions helps preserve the protective function of back EMF, ensuring safe and efficient motor performance.
Armature feedback method in motion control
Armature feedback is a form of self-regulating (passive) speed control that uses back EMF and has been employed for decades in audio tape transport mechanisms, luxury toys, and other purpose-built devices. It remains widely used in low-cost motor control systems where precision sensors or encoders are impractical.
This approach leverages the motor’s ability to act as a generator: as the motor rotates, it produces a voltage proportional to its speed. Like any generator, the output also depends on the strength of the magnetic field flux.
Now let’s take a quick look at how to measure back EMF using a minimalist hardware setup.

Figure 2 The above blueprint presents a minimalist hardware setup for measuring the back EMF of a DC motor. Source: Author
Just to elaborate, when the MOSFET is ON, current flows from the power supply through the motor to ground, during which back EMF cannot be measured. When the MOSFET is OFF, the motor’s negative terminal floats, allowing back EMF to be measured. A microcontroller can generate the required PWM signal to drive the MOSFET.
Likewise, its onboard analog-to-digital converter (ADC) can measure the back EMF voltage relative to ground for further processing. Note that since the ADC measures voltage relative to ground, a lower input value corresponds to a higher back EMF.
That is, measuring the motor’s speed using back EMF involves two alternating steps: first, run the motor for a brief period; then, remove the drive signal. Due to inertia in the motor and mechanical system, the rotor continues to spin momentarily, and this coasting phase provides a window to sample the back EMF voltage and estimate the motor’s rotational speed.
The reference signal can then be routed to the PWM section, where the drive power is fine-tuned to maintain steady motor operation.
Still, in most cases, since the PWM driver outputs armature voltage as pulses, back EMF can also be measured during the intervals between those pulses. Note that when the transistor switches off, a strong inductive spike is generated, and the recirculation current flows through the antiparallel flyback diode. Therefore, a brief delay is required to allow the back EMF voltage to settle before measurement.
Notably, a high-side P-channel MOSFET can be used as a motor driver transistor instead of a low-side N-channel MOSFET. Likewise, discrete op-amps—rather than dedicated ICs—can also govern motor speed, but that is a topic for another day.
And while this is merely a blueprint, its flexibility allows it to be readily adapted for measuring the back EMF—and thus the RPM—of nearly any DC motor. With just a few tweaks, this low-cost approach can be extended to support a wide range of motor control applications—sensorless, scalable, and easy to implement. Naturally, it takes time, technical skill, and a bit of patience—but you can master it.
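For anyone who wants to experiment, here is a minimal sketch of the drive-then-coast sampling loop described above, written in MicroPython style. The pin numbers, timings, duty step, and speed target are placeholders and will need tuning for a real board and motor.

```python
# A minimal MicroPython-style sketch of the coast-and-sample idea described
# above. Pin numbers, timings, and the speed target are placeholders.
from machine import Pin, PWM, ADC
import time

pwm  = PWM(Pin(5), freq=20_000)     # gate drive for the low-side MOSFET
bemf = ADC(Pin(34))                 # senses the motor's floating terminal

duty = 30_000                       # 16-bit duty value (0..65535)
TARGET = 20_000                     # desired back-EMF reading (ADC counts)

while True:
    pwm.duty_u16(duty)              # drive phase: motor powered
    time.sleep_ms(20)

    pwm.duty_u16(0)                 # coast phase: MOSFET off, terminal floats
    time.sleep_us(500)              # let the flyback/inductive spike settle
    reading = 65_535 - bemf.read_u16()   # lower ADC value = higher back EMF

    # Crude proportional correction toward the target speed
    duty = max(0, min(65_535, duty + (TARGET - reading) // 8))
```

The crude proportional correction at the end is only meant to show where a real speed-control loop would act on the sampled back EMF.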
Back EMF and the BLDC motor
Back EMF in BLDC motors acts like a built-in feedback system, helping the motor regulate its speed, boost efficiency, and support smooth sensorless control. The shape of this feedback signal depends on how the motor is designed, with trapezoidal and sinusoidal waveforms being the most common.
While challenges like low-speed control and waveform distortion can arise, understanding and managing back EMF effectively opens the door to unlocking the full potential of BLDC motors in everything from fans to drones to electric vehicles.
So, what are the key effects of back EMF in BLDC motors? Let us take a closer look:
- Design influence: The shape of the back EMF waveform—trapezoidal or sinusoidal—directly affects control strategy, acoustic noise, and how smoothly the motor runs. Trapezoidal designs suit simpler, cost-effective controllers, while sinusoidal profiles offer quieter, more refined motion.
- Position estimation: Back EMF is widely used in sensorless control algorithms to estimate rotor position.
- Speed control: Back EMF is directly tied to rotor speed, making it a reliable signal for regulating motor speed without external sensors.
- Speed limitation: Back EMF eventually balances the supply voltage, limiting further acceleration unless voltage is increased.
- Current modulation: As the motor spins faster, back EMF increases, reducing the effective voltage across the windings and limiting current flow.
- Torque impact: Since back EMF opposes the applied voltage, it affects torque production. At high speeds, stronger back EMF draws less current, resulting in lower torque.
- Efficiency optimization: Aligning commutation with back EMF waveform improves performance and reduces losses.
- Regenerative braking: In some systems, back EMF is harnessed during braking to feed energy back into the power supply or battery, a valuable feature in electric vehicles and battery-powered devices where efficiency matters.
Oh, I nearly skipped over a few clever tricks that make BLDC motor control even more efficient. One of them is back EMF zero crossing—a sensorless technique where the controller detects when the voltage of an unpowered phase crosses zero, allowing it to time commutation events without physical sensors. To avoid false triggers from electrical noise or switching artifacts, this signal often needs debouncing, either through filtering or timing thresholds.
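The sketch below illustrates the debouncing idea in plain Python on a made-up phase-voltage trace: a sign change only counts as a zero crossing if the new polarity persists for a few consecutive samples.

```python
# Illustrative back-EMF zero-crossing detection with a simple consecutive-sample
# debounce; the sample trace and threshold are made up for demonstration.
def find_zero_crossings(samples, debounce=3):
    """Return indices where the (offset-removed) phase voltage changes sign
    and holds the new sign for `debounce` consecutive samples."""
    crossings = []
    last_sign = samples[0] >= 0
    i = 1
    while i < len(samples):
        sign = samples[i] >= 0
        if sign != last_sign:
            # Accept only if the new polarity persists -> rejects switching noise
            if all((s >= 0) == sign for s in samples[i:i + debounce]):
                crossings.append(i)
                last_sign = sign
            i += debounce
        else:
            i += 1
    return crossings

phase = [-5, -3, 1, -1, 2, 4, 6, 5, 3, 1, -2, -4, -6]   # noisy example trace
print(find_zero_crossings(phase))
```

In the example trace, the brief positive blip near the start is rejected, and only the two sustained polarity changes are reported.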
But this method does not work at startup, when the rotor is not spinning fast enough to generate usable back EMF. That is where open-loop acceleration comes in: the motor is driven with fixed timing until it reaches a speed where back EMF becomes detectable and closed-loop control can take over.
For smoother and more precise performance, field-oriented control (FOC) goes a step further. It transforms motor currents into a rotating reference frame, enabling accurate torque and flux control. Though traditionally used in permanent magnet synchronous motors (PMSMs), FOC is increasingly applied to sinusoidal BLDC motors for quieter, more refined motion.
A vast number of ICs nowadays make sensorless motor control feel like a walk in the park. As an example, below you will find the application schematic of the DRV10983 motor IC, which elegantly integrates power MOSFETs for driving a three-phase sensorless BLDC motor.

Figure 3 Application schematic of the DRV10983 chip, illustrating its function as a three-phase sensorless motor driver with integrated power MOSFETs. Source: Texas Instruments
That wraps things up for now. I have talked a lot, but there is plenty more to uncover. If this did not quench your thirst, stay tuned—more insights are brewing.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Driver Precision, Efficiency Get a Boost from Microelectronics
- Brushless DC Motors – Part I: Construction and Operating Principles
- Brushless DC Motors–Part II: Control Principles
- Back EMF method detects stepper motor stall: Pt. 1–The basics
- Back EMF method detects stepper motor stall: Pt. 2–Torque effects and detection circuitry
The post Back EMF and electric motors: From fundamentals to real-world applications appeared first on EDN.
Beyond the current smart grid management systems

Modernizing the electric grid involves more than upgrading control systems with sophisticated software—it requires embedding sensors and automated controls across the entire system. It’s not only the digital brains that manage the network but also the physical devices, like the motors that automate switch operations, which serve as the system’s hands.
Only by integrating sensors and robust controls throughout the entire grid can we fully realize the vision of a smart, flexible, high-capacity, efficient, and reliable power infrastructure.

Source: Bison
The drive to modernize the power grid
The need for increased capacity and greater flexibility is driving the modernization of the power grid. The rapid electrification of transportation and HVAC systems, combined with the rise of artificial intelligence (AI) technologies, is placing unprecedented demands on the energy network.
To meet these challenges, the grid must become more dynamic, capable of supporting new technologies while optimizing efficiency and ensuring reliability.
Integrating distributed energy resources (DERs), such as rooftop solar panels, battery storage, and wind farms, adds further complexity. So, advanced fault detection, self-healing capabilities, and more intelligent controls are essential to managing these resources effectively. Grid-level energy storage solutions, like battery buffers, are also critical for balancing supply and demand as the energy landscape evolves.
At the same time, the grid must address the growing need for resilience. Aging infrastructure, much of it built decades ago, struggles to meet today’s energy demands. Upgrading these outdated systems is vital to ensuring reliability and avoiding costly outages that disrupt businesses and communities.
The increasing frequency of climate-related disasters, including hurricanes, wildfires, and heat waves, highlights the urgency of a resilient grid. Therefore, modernizing the grid to withstand and recover from extreme weather events is no longer optional; it’s essential for the stability of our energy future.
The challenges posed by outdated infrastructure and climate-related disasters are accelerating the adoption of advanced technologies like Supervisory Control and Data Acquisition (SCADA) systems and Advanced Distribution Management Systems (ADMS). These innovations enhance grid visibility, allowing operators to monitor and manage energy flow in real time. This level of control is crucial for quickly addressing disruptions and preventing widespread outages.
Additionally, ADMS makes the grid smarter and more efficient by leveraging predictive analytics. ADMS can forecast energy demand, identify potential issues before they occur, and optimize the flow of electricity across the grid. It also supports conditional predictive maintenance, allowing utilities to address equipment issues proactively based on real-time data and usage patterns.
The key to successful digitization: Fully integrated systems
Smart grids follow the broader global shift toward digitization, aligning with advancements in Industry 4.0, where a smart factory goes beyond advanced software and analytics to become a complete system that integrates IoT sensors, robotics, and distributed controls throughout the production line, creating operations that are more productive, flexible, and transparent.
By offering real-time visibility into the production process and component conditions, these automated systems streamline operations, minimize downtime, boost productivity, lower labor costs, and enhance preventive maintenance.
Similarly, smart grids operate as fully integrated systems that rely heavily on a network of advanced sensors, controls, and communication technologies.
Devices such as phasor measurement units (PMUs) provide real-time monitoring of electrical grid stability. Other essential sensors include voltage and current transducers, power quality transducers, and temperature sensors, which monitor key parameters to detect and prevent potential issues. Smart meters also provide two-way communication between utilities and consumers, enabling real-time energy usage tracking, dynamic pricing, and demand response capabilities.
The role of motorized switch operators in grid automation
Among the various distributed components in today’s modern grid infrastructure, motorized switch operators are among the most critical. These devices automate switchgear functions, eliminating the need for manual operation of equipment such as circuit breakers, load break switches, air and SF6 insulated disconnects, and medium- or high-voltage sectionalizers.
By automating these processes, motorized switch operators enhance precision, speed, and safety. They reduce the risk of human error and ensure smoother grid operations. Moreover, these devices integrate seamlessly with SCADA and ADMS, enabling real-time monitoring and control for improved efficiency and reliability across the grid.
Motorized switch operators aren’t just valuable for supporting the smart grid; they also offer practical business benefits on their own, even without smart grid integration. Automating switch operations eliminates the need to send out trucks and personnel every time a switch needs to be operated. This saves significant time, reduces service disruptions, and lowers fleet operation and labor costs.
Motorized switch operators also improve safety. During storms or emergencies, sending crews to remote or hazardous locations can be dangerous. Underground vaults, for example, can flood, turning them into high-voltage safety hazards. Automating these tasks ensures that switches can be operated without putting workers at risk.
The importance of a reliable motor and gear system
When automating switchgear operation, the reliability of the motor and gear system is crucial. These components must perform flawlessly every time, ensuring consistent operation in all conditions, from routine use to extreme situations like storms or grid emergencies.
Given that the switchgear in power grids is designed to operate reliably for decades, motor operators must be engineered with exceptional durability and dependability to ensure they surpass these long-term performance requirements.
Standard off-the-shelf motors often fail to meet the specific demands of medium- and high-voltage switchgear systems. General-purpose motors are typically not engineered to withstand extreme environmental conditions or the high number of operational cycles required in the power grid.
On the other hand, utilities need to modernize infrastructure without expanding vault sizes, and switchgear OEMs want to enhance functionality without altering layouts. A “drop-in” solution offers a seamless and straightforward way to integrate advanced automation into existing systems, saving time, reducing costs, and minimizing downtime.
To meet the unique challenges of medium- and high-voltage switchgear, motor and gear systems must balance two critical constraints—compact size and limited amperage—while still delivering exceptional performance in speed and torque.
Here’s why these attributes matter:
- Compact size: Space is at a premium in power grid applications, especially for retrofits where manual switchgear is being converted to automated systems. So, motors must fit within the existing contours and confined spaces of switchgear installations. Even for new equipment, utilities demand compact designs to avoid costly expansions of service vaults or installation areas.
- Limited amperage draw: Motors often need to operate on as little as 5 amps, far less than what’s typical for other applications. Developing a motor and gear system that performs reliably within such constraints is essential to ensuring compatibility with power grid environments.
- High speed: Fast operation is critical for the safe and effective functioning of switchgear. The ability to open and close switches rapidly minimizes the risk of dangerous electrical arcs, which can cause severe equipment damage, pose safety hazards, and lead to cascading power grid failures.
- High torque: Overcoming the significant spring force of switchgear components requires motors with high torque. This ensures smooth and consistent operation, even under demanding conditions.
The challenge lies in meeting all four of these requirements. Compact size and low amperage requirements often compromise the speed and torque needed for reliable performance. That’s why motor and gear systems must be specifically engineered and rigorously tested to meet the stringent demands of medium- and high-voltage switchgear applications. Only purpose-built solutions can provide the durability, efficiency, and reliability required to support the long-term stability of the power grid.
Meeting environmental and installation demands
Beyond size, power, and performance considerations, motor and gear systems for medium- and high-voltage switchgear must also meet stringent environmental and installation requirements.
For example, these systems are often exposed to extreme weather conditions, requiring watertight designs to ensure durability in harsh environments. This is especially critical for applications where switchgear is housed in underground vaults that may be prone to flooding or moisture intrusion. Additionally, using specialized lubrication that performs well in both high and low temperature extremes is essential to maintain reliability and efficiency.
Equally important is the ease of installation. Rotary motors provide a significant advantage over linear actuators in this regard. Unlike linear actuators, which require precise calibration (a time-consuming, labor-intensive, and potentially error-prone process), rotary motors avoid this complexity. Their straightforward setup not only reduces installation time but also enhances reliability by eliminating the need for manual adjustments.
To address the diversity of designs in switchgear systems produced by various OEMs, it is essential to work with a motor and gear manufacturer capable of delivering customized solutions. Retrofits often demand a tailored approach due to the unique configurations and requirements of different equipment. Partnering with a company that not only offers bespoke solutions but also has deep expertise in power grid applications is critical.
Future-proofing systems with reliable automation
Automating switchgear operation is a vital step in advancing the modernization of power grids, forming a critical component of smart grid development. Reliable, high-performance motor operators enhance operational efficiency and ensure longevity, providing a solid foundation for evolving power systems.
No matter where a utility is in its modernization journey, investing in durable and efficient motorized switch operators delivers lasting value. This forward-thinking approach not only enhances current operations but also ensures systems are ready to adapt and evolve as modernization advances.
Gary Dorough has advanced from sales representative to sales director for the Western United States and Canada during his 25-year stint at Bison, an AMETEK business. His experience includes 30 years of utility industry collaboration on harmonics mitigation and 15 years developing automated DC motor operators for medium-voltage switchgear systems.
Related Content
- Building a smarter grid
- Energy Generation and Storage in the Smart Grid
- Smart-grid standards: Progress through integration
- Spanish Startup Secures Smart Grids from Cyberattacks
- Smart Grids and AI: The Future of Efficient Energy Distribution
The post Beyond the current smart grid management systems appeared first on EDN.
A tutorial on instrumentation amplifier boundary plots—Part 1

In today’s information-driven society, there’s an ever-increasing preference to measure phenomena such as temperature, pressure, light, force, voltage and current. These measurements can be used in a plethora of products and systems, including medical diagnostic equipment, home heating, ventilation and air-conditioning systems, vehicle safety and charging systems, industrial automation, and test and measurement systems.
Many of these measurements require highly accurate signal-conditioning circuitry, which often includes an instrumentation amplifier (IA), whose purpose is to amplify differential signals while rejecting signals common to the inputs.
The most common issue when designing a circuit containing an IA is the misinterpretation of the boundary plot, also known as the common mode vs. output voltage, or VCM vs. VOUT plot. Misinterpreting the boundary plot can cause issues, including (but not limited to) signal distortion, clipping, and non-linearity.
Figure 1 depicts an example where the output of an IA such as the INA333 from Texas Instruments has distortion because the input signal violates the boundary plot (Figure 2).

Figure 1 Instrumentation amplifier output distortion is caused by VCM vs. VOUT violation. Source: Texas Instruments

Figure 2 This is how VOUT is limited by VCM. Source: Texas Instruments
This series about IAs will explain common- versus differential-mode signaling, basic operation of the traditional three-operational-amplifier (op amp) topology, and how to interpret and calculate the boundary plot.
This first installment will cover the common- versus differential-mode voltage and IA topologies, and show you how to derive the internal node equations and transfer function of a three-op-amp IA.
The IA topologies
While there are a variety of IA topologies, the traditional three-op-amp topology shown in Figure 3 is the most common and therefore will be the focus of this series. This topology has two stages: input and output. The input stage is made of two non-inverting amplifiers. The non-inverting amplifiers have high input impedance, which minimizes loading of the signal source.

Figure 3 This is what a traditional three-op-amp IA looks like. Source: Texas Instruments
The gain-setting resistor, RG, allows you to select any gain within the operating region of the device (typically 1 V/V to 1,000 V/V). The output stage is a traditional difference amplifier. The ratio of R2 to R1 sets the gain of the difference amplifier. The balanced signal paths from the inputs to the output yield an excellent common-mode rejection ratio (CMRR). Finally, the output voltage, VOUT, is referenced to the voltage applied to the reference pin, VREF.
Even though the three-op-amp IA is the most popular topology, other topologies such as the two-op-amp IA offer unique benefits (Figure 4). This topology has high input impedance and single-resistor-programmable gain. But since the signal path to the output for each input (V+IN and V-IN) is slightly different, this topology has degraded CMRR performance, especially over frequency. It is therefore typically less expensive than the traditional three-op-amp topology.

Figure 4 The schematic shows a two-op-amp IA. Source: Texas Instruments
The IA shown in Figure 5 has a two-op-amp IA input stage. The third op amp, A3, is the output stage, which applies gain to the signal. Two external resistors set the gain. Because of the imbalanced signal paths, this topology also has degraded CMRR performance (<90 dB). Therefore, devices with this topology are typically less expensive than traditional three-op-amp IAs.

Figure 5 A two-op-amp IA is shown with output gain stage. Source: Texas Instruments
While the aforementioned topologies are the most prevalent, there are several unique IAs, including current mirror, current feedback, and indirect current feedback.
Figure 6 depicts the current mirror topology. This type of IA is preferable because it enables an input common-mode range that extends to both supply voltage rails, also known as the rail-to-rail input. However, this benefit comes at the expense of bandwidth. Compared to two-op-amp IAs, this topology yields better CMRR performance (100 dB or greater). Finally, this topology requires two external resistors to set the gain.

Figure 6 This is what the current mirror topology looks like. Source: Texas Instruments
Figure 7 shows a simplified schematic of the current feedback topology. This topology leverages super-beta transistors (Q1 and Q2) to buffer the input signal and force it across the gain-setting resistor, RG. The resulting current flows through R1 and R2, which create voltages at the outputs of A1 and A2. The difference amplifier, A3, then rejects the common-mode signal.

Figure 7 Simplified schematic displays the current feedback topology. Source: Texas Instruments
This topology is advantageous because super-beta transistors yield a low input offset voltage, offset voltage drift, input bias current, and input noise (current and voltage).
Figure 8 depicts the simplified schematic of an indirect current feedback IA. This topology has two transconductance amplifiers (gm1 and gm2) and an integrator amplifier (gm3). The differential input voltage is converted to a current (IIN) by gm1. The gm2 stage converts the feedback voltage (VFB-VREF) into a current (IFB). The integrator amplifier matches IIN and IFB by changing VOUT, thereby adjusting VFB.

Figure 8 This schematic highlights the indirect current feedback topology. Source: Texas Instruments
One significant difference from the previous topology is where the common-mode signal is rejected. In current feedback IAs (and similar architectures), the common-mode signal is rejected by the output stage difference amplifier, A3. Indirect current feedback IAs, however, reject the common-mode signal immediately at the input (gm1). This provides excellent CMRR performance at DC and over frequency, independent of gain.
CMRR performance does not degrade if there is impedance on the reference pin (unlike other traditional IAs). Finally, this topology requires two resistors to set the gain, which may deliver excellent performance across temperature if the resistors have well-matched drift behavior.
Common- and differential-mode voltage
The common-mode voltage is the average voltage at the inputs of a differential amplifier. A differential amplifier is any amplifier (including op amps, difference amplifiers and IAs) that amplifies a differential signal while rejecting the common-mode voltage.
In the simplest representation of the input signal, the differential source, VD, sits on the noninverting terminal while the inverting terminal connects to a constant voltage, VCM. Figure 9 depicts a more realistic definition of the input signal where two voltage sources, each with half the magnitude of VD, represent VD. Performing Kirchhoff’s voltage law around the input loop proves that the two representations are equivalent.

Figure 9 The above schematic shows an alternate definition of common- and differential-mode voltages. Source: Texas Instruments
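For reference, a compact way to state the same decomposition, written as a sketch using the article’s V+IN and V-IN labels (the algebra itself is standard):

$$ V_{CM} = \frac{V_{+IN} + V_{-IN}}{2}, \qquad V_D = V_{+IN} - V_{-IN} $$

so the two input voltages can be rewritten as

$$ V_{+IN} = V_{CM} + \frac{V_D}{2}, \qquad V_{-IN} = V_{CM} - \frac{V_D}{2}. $$

Subtracting the second expression from the first recovers $V_{+IN} - V_{-IN} = V_D$, which is the equivalence Figure 9 illustrates.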
Three-op-amp IA analysis
Understanding the boundary plot requires an understanding of three-op-amp IA fundamentals. Figure 10 depicts a traditional three-op-amp IA with an input signal—with input and output nodes A1, A2 and A3 labeled.

Figure 10 A three-op-amp IA is shown with input signal and node labels. Source: Texas Instruments
Equation 1 depicts the overall transfer function of the circuit in Figure 10 and defines the gain of the input stage, GIS, and the gain of the output stage, GOS. Notice that the common-mode voltage, VCM, does not appear in the output-voltage equation, because an ideal IA completely rejects common-mode input signals.

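Written out, the standard form that Equation 1 takes for this topology is sketched below; it is reconstructed from the article’s definitions of GIS, GOS, RF, and RG rather than copied verbatim, so treat it as a reference:

$$ V_{OUT} = G_{IS}\,G_{OS}\,V_D + V_{REF}, \qquad G_{IS} = 1 + \frac{2 R_F}{R_G}, \qquad G_{OS} = \frac{R_2}{R_1} $$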
Noninverting amplifier input stage
Figure 11 depicts a simplified circuit that enables the derivation of node voltages VIA1 and VOA1.

Figure 11 The schematic shows a simplified circuit for VIA1 and VOA1. Source: Texas Instruments
Equation 2 calculates VIA1:

The analysis for VOA1 simplifies by applying the input-virtual-short property of ideal op amps. The voltage that appears at the RG pin connected to the inverting terminal of A2 is the same as the voltage at V+IN. Superposition results are shown in Equation 3, which simplifies to Equation 4.


Applying a similar analysis to A2 (Figure 12) yields Equation 5, Equation 6 and Equation 7.

Figure 12 This is a simplified circuit for VIA2 and VOA2. Source: Texas Instruments



Difference amplifier output stage
Figure 13 shows that A3, R1 and R2 make up the difference amplifier output stage, whose transfer function is defined in Equation 8.

Figure 13 The above schematic displays difference amplifier input (VDIFF). Source: Texas Instruments

Equation 9, Equation 10 and Equation 11 use the equations for VOA1 and VOA2 to derive VDIFF in terms of the differential input signal, VD, as well as RF and the gain-setting resistor, RG.



Substituting Equation 11 for VDIFF in Equation 8 yields Equation 12, which is the same as Equation 1.

In most IAs, the gain of the output stage is 1 V/V, in which case Equation 12 simplifies to Equation 13:

Figure 14 determines the equations for nodes VOA3 and VIA3.

Figure 14 This diagram highlights difference amplifier internal nodes. Source: Texas Instruments
The equation for VOA3 is the same as VOUT, as shown in Equation 14:

Using superposition as shown in Equation 15 determines the equation for VIA3. The voltage at the non-inverting node of A3 sets the amplifier’s common-mode voltage. Therefore, only VOA2 and VREF affect VIA3.

Since GOS=R2/R1, Equation 15 can be rewritten as Equation 16:

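Given the statement that only VOA2 and VREF affect VIA3, Equations 15 and 16 presumably reduce to the standard resistor-divider superposition sketched here; this is a reconstruction under that assumption, not a verbatim copy of the original equations:

$$ V_{IA3} = V_{OA2}\,\frac{R_2}{R_1 + R_2} + V_{REF}\,\frac{R_1}{R_1 + R_2} = \frac{G_{OS}\,V_{OA2} + V_{REF}}{1 + G_{OS}} $$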
Part 2 highlights
The second part of this series will use the equations from the first part to plot each internal amplifier’s input common-mode and output-swing limitation as a function of the IA’s common-mode voltage.
Peter Semig is an applications manager in the Precision Signal Conditioning group at Texas Instruments (TI). He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.
Related Content
- Instrumentation amplifier input-circuit strategies
- Discrete vs. integrated instrumentation amplifiers
- New Instrumentation Amplifier Makes Sensing Easy
- Instrumentation amplifier VCM vs VOUT plots: part 1
- Instrumentation amplifier VCM vs. VOUT plots: part 2
The post A tutorial on instrumentation amplifier boundary plots—Part 1 appeared first on EDN.
ADI upgrades its embedded development platform for AI

Analog Devices, Inc. simplifies embedded AI development with its latest CodeFusion Studio release, offering a new bring-your-own-model capability, unified configuration tools, and a Zephyr-based modular framework for runtime profiling. The upgraded open-source embedded development platform delivers advanced abstraction, AI integration, and automation tools to streamline the development and deployment of ADI’s processors and microcontrollers (MCUs).
CodeFusion Studio 2.0 is now the single entry point for development across all ADI hardware, supporting 27 products today, up from five when the platform was first introduced in 2024.
Jason Griffin, ADI’s managing director, software and AI strategy, said the release of CodeFusion Studio 2.0 is a major leap forward in ADI’s developer-first journey, bringing an open extensible architecture across the company’s embedded ecosystem with innovation focused on simplicity, performance, and speed.
CodeFusion Studio 2.0 streamlines embedded AI development. (Source: Analog Devices Inc.)
A major goal of CodeFusion Studio 2.0 is to help teams move faster from evaluation to deployment, Griffin said. “Everything from SDK [software development kit] setup and board configuration to example code deployment is automated or simplified.”
Griffin calls it a “complete evolution of how developers build on ADI technology,” by unifying embedded development, simplifying AI deployment, and providing performance visibility in one cohesive environment. “For developers and customers, this means faster design cycles, fewer barriers, and a shorter path from idea to production.”
A unified platform and streamlined workflow
CodeFusion Studio 2.0, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools, and optimization capabilities. The unified configuration tools reduce complexity across ADI’s hardware ecosystem.
The new Zephyr-based modular framework enables runtime AI/ML workload profiling, offering layer-by-layer analysis and integration with ADI’s heterogeneous platforms. This eliminates toolchain fragmentation, which simplifies ML deployment and reduces complexity, Griffin noted.
“One of the biggest challenges that developers face with multicore SoCs [system on chips] is juggling multiple IDEs [integrated development environments], toolchains, and debuggers,” Griffin explained. “Each core, whether Arm, DSP [digital signal processor], or MPU [microprocessor], comes with its own setup, and that fragmentation slows teams down.
“In CodeFusion Studio 2.0, that changes completely,” he added. “Everything now lives in a single unified workspace. You can configure, build, and debug every core from one environment, with shared memory maps, peripheral management, and consistent build dependencies. The result is a streamlined workflow that minimizes context switching and maximizes focus, so developers spend less time on setup and more time on system design and optimization.”
CodeFusion Studio System Planner also is updated to support multicore applications and expanded device compatibility. It now includes interactive memory allocation, improved peripherals setup, and streamlined pin assignment.
CodeFusion Studio 2.0 adds interactive memory allocation (Source: Analog Devices Inc.)
The growing complexity in managing cores, memory, and peripherals in embedded systems is becoming overwhelming, Griffin said. The system planner gives “developers a clear graphical view of the entire SoC, letting them visualize cores, assign peripherals, and define inter-core communication all in one workspace.”
In addition, with cross-core awareness, the environment validates shared resources automatically.
Another challenge is system optimization, which is addressed with multicore profiling tools, including the Zephyr AI profiler, system event viewer, and ELF file explorer.
“Understanding how the system behaves in real time, and finding where your performance can improve is where the Zephyr AI profiler comes in,” Griffin said. “It measures and optimizes AI workflows across ADI hardware from ultra-low-power edge devices to high-performance multicore systems. It supports frameworks like TensorFlow Lite Micro and TVM, profiling latency, memory and throughput in a consistent and streamlined way.”
Griffin said the system event viewer acts like a built-in logic analyzer, letting developers monitor events, set triggers, and stream data to see exactly how the system behaves. It’s invaluable for analyzing synchronization and timing across cores, he said.
The ELF file explorer provides a graphical map of memory and flash usage, helping teams make smarter optimized decisions.
CodeFusion Studio 2.0 also gives developers the ability to download SDKs, toolchains, and plugins on demand, with optional telemetry for diagnostic and multicore support.
Doubling down on AI
CodeFusion Studio 2.0 simplifies the development of AI-enabled embedded systems with support for complete end-to-end AI workflows. This enables developers to bring their own models and deploy them across ADI’s range of processors, from low-power edge devices to high-performance DSPs.
“We’ve made the workflow dramatically easier,” Griffin said. “Developers can now import, convert, and deploy AI models directly to ADI hardware. No more stitching together separate tools. With the AI deployment tools, you can assign models to specific cores, verify compatibility, and profile performance before runtime, ensuring every model runs efficiently on the silicon right from the start.”
Manage AI models with CodeFusion Studio 2.0 from import to deployment (Source: Analog Devices Inc.)
Easier debugging
CodeFusion Studio 2.0 also adds new integrated debugging features that bring real-time visibility across multicore and heterogeneous systems, enabling faster issue resolution, shorter debug cycles, and more intuitive troubleshooting in a unified debug experience.
One of the toughest parts of embedded development is debugging multicore systems, Griffin noted. “Each core runs its own firmware on its own schedule often with its own toolchain making full visibility a challenge.”
CodeFusion Studio 2.0 solves this problem, he said. “Our new unified debug experience gives developers real-time visibility across all cores—CPUs, DSPs, and MPUs—in one environment. You can trace interactions, inspect shared resources, and resolve issues faster without switching between tools.”
Developers spend more than 60% of their time debugging, Griffin said, and ADI wanted to address this challenge and reduce that time sink.
CodeFusion Studio 2.0 now includes core dump analysis and advanced GDB integration, which includes custom JSON and Python scripts for both Windows and Linux with multicore support.
A big advance is debugging with multicore GDB, core dump analysis, and RTOS awareness working together in one intelligent, uniform experience, Griffin said.
“We’ve added core dump analysis, built around Zephyr RTOS, to automatically extract and visualize crash data; it helps pinpoint root causes quickly and confidently,” he continued. “And the new GDB toolbox provides advanced scripting performance, tracing and automation, making it the most capable debugging suite ADI has ever offered.”
The ultimate goal is to accelerate development and reduce risk for customers, which is what the unified workflows and automation provides, he added.
Future releases are expected to focus on deeper hardware-software integration, expanded runtime environments, and new capabilities, targeting growing developer requirements in physical AI.
CodeFusion Studio 2.0 is now available for download. Other resources include documentation and community support.
The post ADI upgrades its embedded development platform for AI appeared first on EDN.
32-bit MCUs deliver industrial-grade performance

GigaDevice Semiconductor Inc. launches a new family of high-performance GD32 32-bit general-purpose microcontrollers (MCUs) for a range of industrial applications. The GD32F503/505 32-bit MCUs expand the company’s portfolio based on the Arm Cortex-M33 core. Applications include digital power supplies, industrial automation, motor control, robotic vacuum cleaners, battery management systems, and humanoid robots.
(Source: GigaDevice Semiconductor Inc.)
Built on the Arm v8-M architecture, the GD32F503/505 series offers flexible memory configurations, high integration, and built-in security functions, and features an advanced digital signal processor, hardware accelerator and a single-precision floating-point unit. The GD32F505 operates at a frequency of 280 MHz, while the GD32F503 runs at 252 MHz. Both devices achieve up to 4.10 CoreMark/MHz and 1.51 DMIPS/MHz.
The series offers up to 1024 KB of Flash and 192 KB of SRAM. Users can allocate code-flash, data-flash, and SRAM location through scatter loading based on their specific application, which allows users to tailor memory resources according to their requirements, GigaDevice said.
The GD32F503/505 series also integrates a set of peripheral resources, including three analog-to-digital converters with a sampling rate of up to 3 MSPS (supporting up to 25 channels), one fast comparator, and one digital-to-analog converter. For connectivity, it supports up to three SPIs, two I2Ss, two I2Cs, three USARTs, two UARTs, two CAN-FDs, and one USBFS interface.
The timing system features one 32-bit general-purpose timer, five 16-bit general-purpose timers, two 16-bit basic timers, and two 16-bit PWM advanced timers. This translates into precise and flexible waveform control and robust protection mechanisms for applications such as digital power supplies and motor control.
The operating voltage range of the GD32F503/505 series is 2.6 V to 3.6 V, and it operates over the industrial-grade temperature range of -40°C to 105°C. It also offers three power-saving modes for maximizing power efficiency.
These MCUs also provide high-level ESD protection with contact discharge up to 8 kV and air discharge up to 15 kV. Their HBM/CDM immunity is stable at 4,000 V/1,000 V even after three Zap tests, demonstrating reliability margins that exceed conventional standards for industries such as industrial and home appliances, GigaDevice said.
In addition, the MCUs provide multi-level protection of code and data, supporting firmware upgrades, integrity and authenticity verification, and anti-rollback checks. Device security includes a secure boot and secure firmware update platform, along with hardware security features such as user secure storage areas. Other features include a built-in hardware security engine integrating SHA-256 hash algorithms, AES-128/256 encryption algorithms, and a true random number generator. Each device has a unique independent UID for device authentication and lifecycle management.
A multi-layered hardware security mechanism is centered around multi-channel watchdogs, power and clock monitoring, and hardware CRC. In addition, the GD32F5xx series’ software test library is certified to the German IEC 61508 SC3 (SIL 2/SIL 3) for functional safety. The series provides a complete safety package, including key documents such as a safety manual, FMEDA report, and safety self-test library.
The GD32 MCUs feature a full-chain development ecosystem. This includes the free GD32 Embedded Builder IDE, GD-LINK debugging, and the GD32 all-in-one programmer. Tool providers such as Arm, KEIL, IAR, and SEGGER also support this series, including compilation development and trace debugging.
The GD32F503/505 series is available in several package types, including LQFP100/64/48, QFN64/48/32, and BGA64. Samples are available, along with datasheets, software libraries, ecosystem guides, and supporting tools. Development boards are available on request. Mass production is scheduled to start in December. The series will be available through authorized distributors.
The post 32-bit MCUs deliver industrial-grade performance appeared first on EDN.
Board-to-board connectors reduce EMI

Molex LLC develops a quad-row shield connector, claiming the industry’s first space-saving, four-row signal pin layout with a metal electromagnetic interference (EMI) shield. The quad-row board-to-board connector achieves up to a 25-dB reduction in EMI compared to a non-shielded quad-row solution. These connectors are suited for space-constrained applications such as smart watches and other wearables, mobile devices, AR/VR applications, laptops, and gaming devices.
Shielding protects connectors from external electromagnetic noise such as nearby components and far-field devices that can cause signal degradation, data errors, and system faults. The quad-row connectors eliminate the need for external shielding parts and grounding by incorporating the EMI shields, which saves space and simplifies assembly. It also improves reliability and signal integrity.
(Source: Molex LLC)
The quad-row board-to-board connectors, first released in 2020, offer a 30% size reduction over dual-row connector designs, Molex said. The space-saving staggered-circuit layout positions pins across four rows at a signal contact pitch of 0.175 mm.
Targeting EMI challenges at 2.4-6 GHz and higher, the quad-row layout with the addition of an EMI shield mitigates both electromagnetic and radio frequency (RF) interference, as well as signal integrity issues that create noise.
The quad-row shield meets stringent EMI/EMC standards to lower regulatory testing burdens and speed product approvals, Molex said.
The new design also addresses the most significant requirements related to signal interference and incremental power delivery. These include how best to achieve 80 times the signal connections and four times the power delivery compared to a single-pin connector, Molex said.
The quad-row connectors offer a 3-A current rating to meet customer requirements for high power in a compact design. Other specifications include a voltage rating of 50 V, a dielectric withstanding voltage of 250 V, and a rated insulation resistance of 100 megohms.
Samples of the quad-row shield connectors are available now to support custom inquiries. Commercial availability will follow in the second quarter of 2026.
The post Board-to-board connectors reduce EMI appeared first on EDN.
5-V ovens (some assembly required)—part 2

In the first part of this Design Idea (DI), we looked at simple ways of keeping critical components at a constant temperature using a linear approach. In this second part, we’ll investigate something PWM-based, which should be more controllable and hence give better results.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Adding PWM to the oven
As before, this starts with a module based on a TO-220 package, the tab of which makes a decent hotplate on which our target component(s) can be mounted. Figure 1 shows this new circuit, which compares the voltage from a thermistor/resistor pair with a tri-wave and uses the result to vary the duty cycle of the heating current. Varying the amplitude and level of that tri-wave lets us tune the circuit’s performance.
This looks too simple and perhaps obvious to be completely original, but a quick search found nothing very similar. At least this was designed from scratch.
Figure 1 A tri-wave oscillator, a thermistor, and a comparator work together to pulse-width modulate the current through R7, the main heating element. Q1 switches that current and also helps with the heating.
U1a forms a conventional oscillator running at around 1 kHz. Neither the frequency nor the exact wave-shape on C1 is critical. R1 and R2+R3 determine the tri-wave’s offset, and R4 its amplitude. U1b compares the voltage across the thermistor with the tri-wave, as shown in Figure 2. When the temperature is low so that voltage is higher than any part of the tri-wave, U1b’s output will be solidly low, turning on Q1 to heat up R7 as fast as possible.
As the temperature rises, the voltages start to overlap and proportional control kicks in, progressively reducing the on-time so that the heat input is proportional to the difference between the actual and target temperatures. By the time the set-point has been reached, the on-time is down to ~18%. This scheme minimizes or even eliminates overshoot. (Thermal time-constants—ignored for the moment—can upset this a little.)

Figure 2 Oscilloscope captures showing the operation of Figure 1’s circuit.
Once the circuit is stable, Th1 will have the same resistance as R6, or 3.36 kΩ at our nominal target of 50°C (or 50.03007…°C, assuming perfect components), so Figure 1’s point B will be at half-rail. To keep that balance, the tri-wave must be offset upwards so that slicing gives our 18% figure at the set-point. Setting R3 to 1k0 achieved that. The performance after starting can be seen in Figure 3. (The first 40 seconds or so is omitted because it’s boring.)

Figure 3 From cold, Figure 1’s circuit stabilizes in two to three minutes. The upper trace is U1b’s output, heavily filtered. Also shown are Th1’s temperature (magenta) and that of the hotplate as measured by an external thermistor probe (cyan).
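Before moving on to Q1, a small, hypothetical Python sketch may help show why slicing the tri-wave gives proportional control and why the upward offset sets the on-time at balance. It models the comparator by computing what fraction of each carrier period the thermistor-derived error voltage exceeds the triangular carrier; the carrier amplitude and offset are illustrative assumptions, picked only so that zero error lands near the ~18% duty cycle reported for the set-point, not Figure 1’s actual component values.

```python
# Hypothetical model of the proportional PWM action: the heater conducts for
# the fraction of each carrier period during which the thermistor-derived
# error voltage exceeds the triangular carrier. Values are illustrative.

def heater_duty(error_v, tri_offset_v=0.16, tri_pkpk_v=0.5):
    """Fraction of each carrier period the heater conducts.

    error_v      : thermistor-bridge voltage minus the half-rail set-point (V);
                   positive means the oven is below its target temperature.
    tri_offset_v : how far the carrier's center sits above the set-point (V).
    tri_pkpk_v   : peak-to-peak amplitude of the triangular carrier (V).
    """
    top = tri_offset_v + tri_pkpk_v / 2.0
    bottom = tri_offset_v - tri_pkpk_v / 2.0
    if error_v >= top:          # far below temperature: heat continuously
        return 1.0
    if error_v <= bottom:       # well above temperature: heater off
        return 0.0
    return (error_v - bottom) / tri_pkpk_v   # linear proportional band

if __name__ == "__main__":
    for err_mv in (600, 300, 100, 0, -50, -150):
        duty = heater_duty(err_mv / 1000.0)
        print(f"error = {err_mv:+4d} mV -> duty = {duty * 100:5.1f}%")
```

In the real circuit the comparator output is active-low into the PNP Q1, but the on-time fraction it produces follows the same linear relationship between full heating and fully off.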
The use of Q1 as an over-driven emitter follower needs some explanation. First thoughts were to use an NPN Darlington or an n-MOSFET as a switch (with U1b’s inputs swapped), but that meant that the collector or drain—which we want to use as a hotplate—would be flapping up and down at the switching frequency.
While the edges are slowish, they could still couple capacitively to a target device: potentially bad news. With a PNP Darlington, the collector can be at ground, give or take a handful of millivolts. (The fine copper wire used to connect the module to the outside world has a resistance of about 1 Ω per meter.) Q1 drops ~1.3 V and so provides about a third of the heating, rather like the corresponding device in Part 1. This is a good reason to stay with the idea of using a TO-220’s tab as that hotplate—at least for the moment. Q1 could be a p-MOSFET, but R7 would then need to be adjusted to suit its (highly variable) VGS(on): fiddly and unrealistic.
LED1 starts to turn on once the set-point is near and becomes brighter as the duty cycle falls. This worked as well in practice as the long-tailed pair approach used in Part 1’s Figure 4.
The duty cycle is given as 18%, but where does that figure come from? It’s the proportion of the input heat that leaks out once the circuit has stabilized, and that depends on how well the module is thermally insulated and how thin the lead-out wires are. With a maximum heating current of 120 mA (600 mW in), practical tests gave that 18% figure, implying that ~108 mW is being lost. With a temperature differential of ~30°C, that corresponds to an overall thermal resistance of ~280°C/W. (Many DIL ICs are quoted as around 100°C/W.)
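As a quick check of that arithmetic, using the figures above of 600 mW maximum heating power, an 18% steady-state duty cycle, and a ~30°C differential:

$$ P_{loss} \approx 0.18 \times 600\ \text{mW} \approx 108\ \text{mW}, \qquad \theta_{overall} \approx \frac{\Delta T}{P_{loss}} \approx \frac{30\ ^{\circ}\text{C}}{0.108\ \text{W}} \approx 280\ ^{\circ}\text{C/W} $$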
Some more assembly required
The final build is mechanically quite different and uses a custom-built hotplate instead of a TO-220’s tab. It’s shown in Figure 4.

Figure 4 Our new hotplate is a scrap of copper sheet with the heater resistors glued to it symmetrically, with Th1 on one side and room for the target component(s) on the other. The third picture shows it fixed to the lower block of insulating foam, with fine wires meandered and ready for terminating. Not shown: an extra wire to ground the copper. Please excuse the blobby epoxy. I’d never get a job on a production line.
R7 now comprises four 33-Ω resistors in series/parallel, which are epoxied towards the ends of a piece of copper, two on each side, with Th1 centered on one side. The other side becomes our hotplate area, with a sweet spot directly above the thermistor. Thermally, it is symmetrical, so that—all other things being equal, which they rarely are—our target component will be heated exactly like Th1.
The drive circuit is a variant on Figure 1, the main difference being Q1, which can now be a small but low-RON n-MOSFET as it’s no longer intended to dissipate any power. R3 and R4 are changed to give a tri-wave amplitude of ~500 mV pk–pk at a frequency of ~500 Hz to optimize the proportional control. Figure 5 and Figure 6 show the schematic and its performance. It now stabilizes within a degree after one minute and perhaps a tenth after two, with decent tracking between the internal (Th1) and hotplate temperatures. The duty cycle is higher, largely owing to the different construction; more (and bulkier) insulation would have reduced it, improving efficiency.

Figure 5 The driving circuit for the new hotplate.

Figure 6 How Figure 5’s circuit performs.
The intro to Part 1 touched on my original oven, which needed to stabilize the operation of a logarithmically tuned oscillator. It used a circuit similar to Part 1’s Figure 5 but had a separate power transistor, whose dissipation was wasted. The logging diode was surrounded by a thermally-insulated cradle of heating resistors and the control thermistor.
It worked well and still does, but these circuits improve on it. Time for a rebuild? If so, I’ll probably go for the simplest, Part 1/Figure 1 approach. For higher-power use, Figure 5 (above) could probably be scaled to use different heating resistors fed from a separate and larger voltage. Time for some more experimental fun, anyway.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- 5-V ovens (some assembly required)—part 1
- Fixing a fundamental flaw of self-sensing transistor thermostats
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Dropping a PRTD into a thermistor slot—impossible?
The post 5-V ovens (some assembly required)—part 2 appeared first on EDN.



