EDN Network

Voice of the Engineer
https://www.edn.com/

Kioxia adds 245-TB SSD to enterprise lineup

Fri, 07/25/2025 - 17:31

Raising the bar for enterprise NVMe SSDs, Kioxia’s LC9 drive delivers 245.76 TB in both U.2 (2.5-inch) and E3.L form factors. It joins the previously announced 122.88-TB model in the U.2 (2.5-inch) and E3.S form factors, expanding the LC9 series to meet the performance and efficiency demands of generative AI—while helping replace multiple power-hungry HDDs.

LC9 drives feature a PCIe 5.0 interface, offering up to 128 GT/s via a Gen5 single x4 or dual x2 configuration. They are compliant with NVMe 2.0 and NVMe-MI 1.2c specifications and meet many of the requirements in the Open Compute Project (OCP) Datacenter NVMe SSD specification v2.5.

These high-capacity SSDs integrate multiple 8-TB devices, each built from 32 stacked dies of 2-Tb BiCS8 3D QLC NAND in a compact 154-ball package. The drives also incorporate a controller and firmware, along with features such as die failure recovery, parity protection, and power loss protection. Dual-port operation enables high availability, and data security options include SIE, SED, and planned FIPS 140-3 compliance.

The LC9 series of SSDs is now sampling to select customers.

Kioxia

The post Kioxia adds 245-TB SSD to enterprise lineup appeared first on EDN.

When a ring isn’t really a ring

Thu, 07/24/2025 - 18:07

Early in my engineering career, I worked with a couple of colleagues on an outside project. We had a concept for a security system for gaming arcades. At the time, arcades were very popular, hosting games like Pac-Man, Space Invaders, and Pinball. One of the business problems, though, was the theft of coins from the gaming machines. Apparently, when staff members were emptying the coin boxes, they would pocket a handful of coins. Theft in these arcades was said to be around 25%.

Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale

Our concept for preventing these thefts was a device that consisted of two parts. One micro-based device was installed in each of the arcade games. This counted the coins as they entered the slot. Then, periodically, the total coin count and game ID were transmitted, via the power line, to the back office. In the back office was the receiver. It monitored the power line and collected all the transmissions from the various games. This back-office device was also connected to a telephone landline, and once a day, the central office would call into the back-office device to have the daily data sent to it. The hand count of coins could then be reconciled with the electronic coin count from all the machines.

My colleagues and I divided up the work, with one doing the schematic and PCB prototypes. Another did the enclosures, labeling, etc. I did the firmware for the two pieces of equipment. After many months of evening work, we had a system that performed just as we expected. We also got a test site identified to install a complete system. As the arcade was more than 1000 miles away, we had someone at the other end install the system. After a few days, we got a call from the arcade operator telling us the office device would not answer the phone call into it. The hardware design was rechecked to see if the opto-isolator, signaling the firmware of a high voltage on the ring line, was designed correctly to take into account lower-level ring voltages—no issue there. This issue fell on me as it appeared to be a firmware issue. I tested my firmware dozens of times with various changes using an actual landline—it always worked. After many days of testing, I announced that I could not find any issues.

As a last resort, we had the hardware engineer fly to the arcade site with a raft of test equipment. After only a few hours, he called and said he had found the issue. The standard for ringing on a landline is defined by ANSI T1.401-1988 section 5.4.2, which I followed for the firmware. According to this standard, the ring cadence consists of 2 seconds of ringing followed by 4 seconds of silence. The phone system in the town where the arcade was located followed this…sort of. During the ring, there was a short dropout (about 80 ms, if I remember correctly). So, what the firmware saw was about 1 second of ring, no ring for 80 ms, then 920 ms of ring, and then 4 seconds of silence. The firmware, noting that the ring was only one second long, determined that it wasn’t a valid ring and therefore wouldn’t answer. The discovery of the issue was long, difficult, and expensive. The fix was easy to implement in firmware. After updating the firmware, the arcade system worked very well (we never got rich off it, though…another, non-technical, story).

The takeaway here is not how to construct landline phone answering firmware; those days are long gone. But the lesson here is that when you have an issue, suspect everything. We continued to have discussions on why the system would not answer the phone when we knew it was sensing the ring. We never thought that maybe the cadence, defined by an ANSI standard, would not be correct. Why the town’s telephone ring system had an 80 ms gap was never discovered, but it obviously didn’t meet the spec. So, if you can’t find a problem in your device, maybe it’s the other device(s) you’re connecting to. And at that point, the other system needs to be checked against its specs.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.


The post When a ring isn’t really a ring appeared first on EDN.

PCB design tips for EMI and thermal management in 800G systems

Thu, 07/24/2025 - 10:34

As the industry accelerates toward 800G Ethernet and optical interconnects, engineers face new challenges in managing electromagnetic interference (EMI) while ensuring signal integrity at unprecedented speeds. The transition to 112G pulse amplitude modulation 4-level (PAM4) SerDes introduces faster edge rates and dense spectral content, elevating the risk of radiated and conducted emissions.

Simultaneously, compact module form factors such as QSFP-DD and OSFP force high-speed lanes, DC-DC converters, and control circuitry into tight spaces, increasing the potential for crosstalk and noise coupling. Power delivery noise, insufficient shielding, and poor return path design can easily transform an 800G design from lab success to compliance failure during emissions testing.

To avoid late-stage surprises, it’s critical to address EMI systematically from the PCB level up, balancing stack-up, routing, and grounding decisions with high-speed signal integrity and practical manufacturability.

This article provides engineers with actionable PCB design strategies to reduce EMI in 800G systems while maintaining high performance in data center and telecom environments.

Layout considerations

For chip-to-chip 112G PAM4 signaling, the key frequency is the Nyquist frequency, which is half of the baud rate. PAM4 encodes 2 bits per symbol.

  • Therefore, the baud rate (symbol rate) is half of the bit rate. For 112 Gbps, the baud rate is 112 Gbps / 2 = 56 Gbaud (gigabaud).
  • The Nyquist frequency is half of the baud rate. So, the Nyquist frequency for 112G PAM4 is 56 Gbaud / 2 = 28 GHz.
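The two steps above can be sketched as a quick calculation (an illustrative snippet; the function name is mine, not from any library):

```python
def pam4_nyquist_ghz(bit_rate_gbps: float) -> float:
    """Return the Nyquist frequency (GHz) for a PAM4 link.

    PAM4 carries 2 bits per symbol, so the baud rate is half the
    bit rate; the Nyquist frequency is half the baud rate.
    """
    baud_rate_gbaud = bit_rate_gbps / 2   # 2 bits per symbol
    return baud_rate_gbaud / 2            # Nyquist = baud rate / 2

# 112 Gbps PAM4 -> 56 Gbaud -> 28 GHz Nyquist
print(pam4_nyquist_ghz(112))  # 28.0
```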

The maximum insertion loss at 29 GHz for 112G medium-range PAM4 is 20 dB. Megtron 7 offers a low dissipation factor (Df) of 0.003 at 29 GHz, which is adequate for 112G. A Df of 0.003 is squarely in the “very low loss” category. It means that the material will dissipate a minimal amount of the signal’s energy, allowing more of the original signal strength to reach the receiver.

This helps preserve the critical amplitude differences between the PAM4 levels, enabling a lower bit error rate (BER). Low-cost FR-4 material typically has a Df of 0.015, which is excessive for 112G PAM4.

Aperture and shielding effectiveness

To avoid EMI, the wavelength relationship is essential, especially when considering wires or openings that may serve as unintentional antennas. An EMI shield’s seam, slot, or hole can all function as a slot antenna. When this opening’s dimensions get close to a sizable portion of an interfering signal’s wavelength, it turns into an effective radiator, letting EMI escape, perhaps failing the radiated emission test in an anechoic chamber.

As a general guideline, the maximum size of any aperture should be less than λ/20 (one-twentieth of the wavelength) of the highest frequency of concern to achieve efficient EMI shielding. See Figure 1 for typical airflow management openings.

Figure 1 Airflow apertures and shielded ventilation are shown for airflow management. Source: Author

The wavelength is calculated as λ = c / f = (3 × 10^8) / (28 × 10^9) = 10.7 mm

Opening dimension = λ / 20 = 0.536 mm

To reduce EMI problems, all apertures for equipment that operate at or are vulnerable to 28-GHz signals should ideally be less than 0.536 mm. The permitted dimensions for apertures decrease with increasing frequencies.
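The λ/20 guideline can be expressed as a short helper (a minimal sketch, assuming c = 3 × 10^8 m/s as in the text; the function name is illustrative):

```python
def max_aperture_mm(freq_hz: float, fraction: float = 1 / 20) -> float:
    """Largest shield opening (mm) per the lambda/20 shielding guideline."""
    c = 3e8  # speed of light, m/s
    wavelength_mm = c / freq_hz * 1000
    return wavelength_mm * fraction

# At 28 GHz the wavelength is ~10.7 mm, so apertures should stay below ~0.54 mm
print(round(max_aperture_mm(28e9), 3))  # 0.536
```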

Routing guidelines and via stub impact at 112G PAM4

The spacing rule between two differential pairs is different for TX-to-TX and TX-to-RX pairs. Generally, the allowed serpentine routing length for 112G PAM4 is less than at previous speeds. Serpentine lines have less impact on a weakly coupled differential pair.

A via stub is the unused portion of a through-hole via that extends beyond the layer where the signal transitions (Figure 2). For example, if a signal goes from the top layer to an inner layer via a through-hole, the part of the via extending from that inner layer to the bottom of the board forms a stub.

Figure 2 The diagram provides an overview of PCB via stub. Source: Author

f = c / (4 × L × √ε_eff)

where:

f = resonant frequency of the via stub = 28 GHz

c = speed of light = 3 × 10^8 m/s

L = length of the via stub = 1.533 mm = 60.35 mils

ε_eff = effective dielectric constant = 3.05 at 28 GHz

A via stub length of ~60 mils will resonate near 28 GHz in Megtron 7. For 112G PAM4 designs, this length is too long and can cause serious signal integrity issues.
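Plugging the numbers above into the quarter-wave stub formula confirms the resonance (an illustrative sketch; the function name is mine):

```python
import math

def stub_resonance_ghz(stub_len_mm: float, eps_eff: float) -> float:
    """Quarter-wave resonant frequency (GHz) of a via stub: f = c/(4*L*sqrt(eps_eff))."""
    c = 3e8  # speed of light, m/s
    f_hz = c / (4 * stub_len_mm * 1e-3 * math.sqrt(eps_eff))
    return f_hz / 1e9

# A ~60-mil (1.533-mm) stub in Megtron 7 (eps_eff ~ 3.05) resonates near 28 GHz
print(round(stub_resonance_ghz(1.533, 3.05), 1))  # 28.0
```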

Power considerations

Generally, 800G transceivers consume between 13 W and 18 W per port for short reach, though the exact value is given in the module manufacturer’s datasheet. These transceivers use eight 112G lanes to carry 800G. A 1RU appliance with 32 QSFP-DD ports would need a 25.6-Tbps switch. See Figure 3 for a simplified diagram of a 1RU appliance with one ASIC.

Figure 3 Airflow management is shown for 1U high-speed systems incorporating a single ASIC. Source: Author

  • Power consumption for 112G PAM4 SerDes is high (typically 0.5–1.0 W per lane). For example, an 8-lane SerDes system will consume, in the worst case, Power = 8 × 1 W = 8 W.
  • With Tcase_max = 90°C and Tambient_max = 50°C, Rth = (90 – 50) / 8 = 5°C/W. System designers should ensure the heatsink and thermal interface material together provide ≤ 5°C/W.
  • Required airflow is CFM = Q × 3.16 / ΔT, where Q is the power to be dissipated (watts), ΔT is the allowable air temperature rise across the system (°C), and 3.16 is a conversion factor. For a 2,000-W system with a 15°C rise, CFM = 2000 × 3.16 / 15 = 421.
  • In 1RU, engineers use multiple 40 × 40 × 56 mm high-RPM fans for airflow distribution, each typically pushing ~25–30 CFM. Fans required = 421 / 25 = 16.8 ≈ 17 fans. Accommodating this many fans is difficult because external power supplies occupy rear space.
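The thermal arithmetic in these bullets can be bundled into a short script (an illustrative sketch; the 2,000-W and 15°C figures are the example values above, and the function names are mine):

```python
import math

def thermal_budget_c_per_w(power_w: float, tcase_max: float, tambient_max: float) -> float:
    """Maximum allowed case-to-ambient thermal resistance (deg C per W)."""
    return (tcase_max - tambient_max) / power_w

def airflow_cfm(q_watts: float, delta_t_c: float, conv: float = 3.16) -> float:
    """Required airflow using the CFM = Q * 3.16 / dT rule of thumb."""
    return q_watts * conv / delta_t_c

def fans_needed(cfm_required: float, cfm_per_fan: float) -> int:
    """Round up to a whole number of fans."""
    return math.ceil(cfm_required / cfm_per_fan)

print(thermal_budget_c_per_w(8, 90, 50))   # 5.0 deg C/W for the 8-W SerDes
cfm = airflow_cfm(2000, 15)
print(round(cfm))                          # 421 CFM
print(fans_needed(cfm, 25))                # 17 fans at ~25 CFM each
```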

Design recommendations

As 800G hardware and 112G PAM4 SerDes become standard in next-generation data center and telecom systems, engineers face a multifaceted design challenge: maintaining signal integrity, controlling EMI, and managing thermal constraints within high-density 1RU systems.

Careful PCB material selection, such as low-loss Megtron 7, precise routing to minimize via stub resonance, and disciplined aperture management for shielding are essential to avoid signal degradation and EMI test failures. Simultaneously, the high-power density of 800G optics and SerDes require advanced thermal design, airflow planning, and redundancy considerations to meet operational and reliability targets.

By systematically addressing EMI and thermal factors early in the design cycle, engineers can confidently build 800G systems that pass compliance testing while delivering high performance under real-world conditions. Doing so not only avoids costly late-stage redesigns but also ensures robust deployment of high-speed systems critical for the evolving demands of cloud and AI workloads.

Ujjwal Sharma is a hardware engineer specializing in high-speed system design, signal/power integrity, and optical modules for data center hardware.


The post PCB design tips for EMI and thermal management in 800G systems appeared first on EDN.

PWM + Quadrac = Pure Power Play

Wed, 07/23/2025 - 16:58

It’s just a fact, I’m curiously fond of topologies that combine PWM switching and filtering circuitry with power handling devices like adjustable voltage regulator chips. This scheme makes power-capable DACs with double-digit wattage outputs. For example, “0 V to -10 V, 1.5 A LM337 PWM power DAC.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

The simple circuit in Figure 1 joins this favored family but makes its siblings look weak and wimpy by upping the power ante by more than a factor of 10. It attains output capabilities over a kilowatt and gets there with a total parts count of only nine inexpensive discretes. Here’s how it works.

Figure 1 The quadrac Q2 conduction-angle triggering time constant = R1C1 / DF, where DF is the PWM duty factor from 0 to 100%.

The power control method in play is variable AC phase angle conduction via a quadrac (also sometimes called an alternistor). Quadracs are bidirectional thyristors that comprise the dual functions of a triac (to do the power switching) and an integrated diac (to trigger the triac).

They’re popular in applications like variable-speed power tools and lamp dimmers because they’re cheap, efficient, and durable. What’s also nice is that the only support components they need for AC power control are a small potentiometer and a timing capacitor (both also cheap) to adjust triggering delay and thereby the phase angle of conduction, thence the power output.

Q2 is wired in exactly that traditional way, except that opto-isolator Q1 and R1 fill the role of the pot. The duty factor (DF) of Q1’s PWM input sets its average conductance and thereby the effective trigger delay: from a minimum of ~1.7 ms at DF = 1, for an upper output of ~95% of full power, down to a DF = 0 delay that’s longer than the entire 8.33-ms AC half-cycle. Which is to say: OFF. The PWM cycle rate isn’t critical but should be at least 10 kHz to avoid possible annoying beat frequencies, since it’s not synchronized with the 60-Hz AC cycle.

The relationship between DF, phase angle, and percent power output is equal to the time integral of (Vpk·sin(r))², which is shown in Figure 2.

Figure 2 The (Vpk·sin(r))² power output versus the PWM DF. The right axis is the voltage of the trigger capacitor (C1), the left axis is the fraction of full output power versus trigger phase, and the x-axis is the AC phase in radians.
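Integrating sin² over the conducting portion of the half-cycle gives a closed form for power versus trigger delay; this sketch (assuming 60-Hz mains, hence the 8.33-ms half-cycle, with a function name of my own) reproduces the ~95% figure quoted above:

```python
import math

def power_fraction(trigger_delay_ms: float, half_cycle_ms: float = 8.33) -> float:
    """Fraction of full load power delivered when conduction starts
    trigger_delay_ms into each AC half-cycle.

    Integrating sin(r)^2 from the trigger phase theta to pi, normalized
    by the full half-cycle integral, gives:
        P/Pfull = 1 - theta/pi + sin(2*theta)/(2*pi)
    """
    theta = math.pi * trigger_delay_ms / half_cycle_ms
    return 1 - theta / math.pi + math.sin(2 * theta) / (2 * math.pi)

# A ~1.7-ms trigger delay yields roughly 95% of full power
print(round(power_fraction(1.7), 2))  # 0.95
```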

Because Q1, unlike Q2, isn’t bidirectional, the D1-4 diode bridge is necessary to keep it upright despite 60-Hz phase reversals. Q1’s typical current transfer ratio of 80% makes ~10 mA of PWM drive current necessary. Current limiter R2’s 330 Ω assumes a 5-V rail and a low impedance driver and will need adjustment if either assumption is violated. The Vc1 trigger voltage is 38 V ±5 V with ±3 V max asymmetry. These tolerances place a limit on DF versus power precision.

The full-throttle power output efficiency is around 99%, but Q2’s max junction temperature rating is only 110°C. Adequate heatsinking of Q2 will therefore be wise if outputs greater than 200 W and/or toasty ambient temperatures are expected.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


The post PWM + Quadrac = Pure Power Play appeared first on EDN.

A battery backup for a solar-mains hybrid lamp 

Tue, 07/22/2025 - 17:35
1. The solar-mains hybrid lamp

In the April 4, 2024, issue of EDN, the design of a solar mains hybrid lamp (HL) was featured. The lamp receives power from both a solar panel and a mains power supply to turn on an array of LED lamps. Even when solar power is widely variable, it supplies a constant light output by dynamically drawing balanced power from the mains supply. Also, it tracks the maximum power point very closely. 

Wow the engineering world with your unique design: Design Ideas Submission Guide

1.1 Advantages

The advantages of the HL are as follows:

  1. It utilizes all the solar power generated and draws only the necessary power from the grid to maintain constant light output.
  2. It does not inject power into the grid; hence, it does not contribute to any grid-related issues.
  3. It uses a localized power flow with short cabling, resulting in negligible transmission losses.
  4. It uses DC operation, resulting in a simple, reliable, and low-cost system.
  5. Generated PV power is utilized even if the grid fails, thus acting as an emergency lamp in the event of a grid failure during the daytime.
  6. It has a lengthy lifespan of 15 years with minimal or no maintenance, resulting in a good return on investment.
1.2 Disadvantages

The limitations of the HL are as follows: 

  1. It does not provide light if the grid fails after sunset.
  2. Solar power is not utilized outside of office hours or on holidays.

As mentioned above, the HL’s utility can be fully realized in places such as hospitals, airports, and malls, as it can be used every day of the week.

In offices that are open for work only 5 days per week, the generated PV power will be wasted on weekends and outside of office hours (early mornings and evenings). 

For such applications, to fully utilize the generated PV power, a battery backup scheme is proposed. It is designed as an optional add-on feature to the existing HL. The PV power, which would otherwise go to waste, can now be stored in the battery whenever the HL is not in use. The stored energy can be utilized instead of mains power on workdays to reduce the electricity bill. In cases where the grid fails, it will work as an emergency lamp. 

2. Battery backup block diagram

The block diagram of the proposed scheme is shown in Figure 1. It consists of an HL having an array of 9 LED lamps, A1 to A9. Each lamp has five 1-W white LEDs connected in series, mounted on a metal core PCB (MCPCB). For more details, refer to the previous article, “Solar-mains HL.” Here, the HL is used as is, without any changes. 

The PV voltage (Vpv) is supplied through a two-pole two-way switch S1 to the HL. Switch S1A is used to connect the PV panel to either the lamp or to the battery. As shown in the figure, the PV panel is connected to the battery through an Overvoltage Cutoff circuit. This circuit disconnects PV power when the battery voltage reaches its maximum value of Vb(MAX). 

A single-pole two-way switch S2 is used to select either MAINS or BAT to feed power to the VM terminal of the HL. When S2 is in the BAT position, battery power is fed through the undervoltage trip circuit. Whenever the battery voltage drops to the minimum value Vb(MIN), the HL is disconnected from the battery. Switch S1B is used to disconnect the battery/mains power to the HL when S1 is in the CHARGE position.

Figure 1 The proposed add-on battery backup system for HL.

Note: This simple battery cutoff and trip circuit has been implemented to prove the concept of battery backup using the existing HL. In the final design, the Overvoltage Cutoff circuit should be replaced with a solar charge controller, which will track the maximum power point as the battery charges. Readily available off-the-shelf solar charge controllers could be used. The selection of a solar charge controller is given in Section 5.

Here are the lamp specifications: 

  1. Solar PV panel: 30 Wp, Vmp = 17.5 V, Imp = 1.7 A
  2. Adapter specifications: Va = 18 V, current 2 A
  3. Lead-acid battery: 6 V, 5 Ah (3 batteries connected in series)
  4. Battery nominal voltage Vb = 18 V, Vb(MAX) = 19 V, Vb(MIN) = 17 V
  5. Lamp power output: 30 W
3. Overvoltage and undervoltage circuits 

The circuit diagram of the battery Overvoltage Cutoff and Undervoltage Trip is shown in Figure 2. Three lead-acid batteries (6 V, 5 Ah) connected in series are used for storing solar energy. The battery is connected to the solar panel Vpv through a P-channel MOSFET T1 (IRF9540). The Schottky diode D1 (1N5822) is connected in series to prevent the battery from getting discharged into the solar panel when it is not producing any power. 

T1 is controlled using comparator CMP1 of IC1 (LM393). The battery voltage is sensed using the potential divider R6 and R7. The reference to the comparator non-inverting pin (3) is generated from a +12-V power supply implemented using the IC2 (LM431) shunt regulator. If the battery voltage is lower than the reference voltage, the CMP1 output (pin 1) is high. This turns on transistor T3, which turns on T1. The green LED_G indicates that the battery is being charged.

Figure 2 The circuit diagram of Overvoltage Cutoff and Undervoltage Trip circuits.

The battery is connected to the load through MOSFET T2 (IRF9540). T2 is controlled using comparator CMP2 of IC1. The battery voltage is sensed using the potential divider R14 and R15, and is connected to the non-inverting terminal (Pin 5). The reference voltage is connected to the inverting terminal (Pin 6). 

So long as the battery voltage is higher than the reference, the CMP2 output remains high. This drives transistor T4, which turns on T2. When the battery voltage drops below the reference, T2 is turned off, thus disconnecting the lamp load. LED_R indicates the battery voltage is within the Vb(MIN) and Vb(MAX) range.

Figure 3 shows the PCB assembled according to the circuit diagram in Figure 2. The connections for the solar panel Vpv, battery Vb, and battery output Vb+ (through the MOSFET T2) are made using three 2-pin screw terminals. 

Figure 3 The assembled PCB for the battery overvoltage cutoff and undervoltage trip circuit.

Figure 4 shows the interconnections of the battery charger circuit with the HL.

Figure 4 A top view of the interconnections of the battery charger circuit with the HL.

The modes of operation of this circuit are captured in Table 1. When S1 is in the CHARGE position, the PV voltage is supplied to the batteries for charging. In this mode, the position of S2 does not affect the charging process. 

When S1 is in the PV position, the HL turns ON. Using S2 we can select either mains power or battery power.

S1        S2       Function
CHARGE    X        Battery charging
PV        MAINS    Hybrid with mains power
PV        BAT      Hybrid with battery power

Table 1 Operating modes of the battery backup circuit: battery charging, hybrid with mains power, and hybrid with battery power. 

4. Integration and testing

Figure 5 shows the integration of the battery protection circuit with the HL and three batteries. The cable from the PV panel is connected to the 2-pin screw terminal labeled as Vpv. Three 6-V batteries in series are connected to the screw terminal Vb. A DC socket labeled Va is mounted for plugging into the adapter pin. In the photograph, S1 is in CHARGE position, so the battery is being charged using PV power. In this case, the position of S2 is irrelevant and will not affect the charging process.

Figure 5 An image of the circuit in Battery Charging mode. The green LED indicates the battery is being charged from the PV panel. The red LED indicates battery power is available for use.

Figure 6 shows the HL turned on using PV power and a battery. In this case, S1 is in the PV position, and S2 is in the BAT position. Note that the LED lamp array (A1 to A9) is facing downwards. On the HL PCB, there are nine red and nine green indicator LEDs. Each pair of LEDs represents 11% of the total power. The photograph shows four green LEDs are ON, which means 44% of the power is coming from solar. The remaining 55% of power is being drawn from the battery. The green and red LED combination changes as the sunlight varies. 

Figure 6 The lamp in Hybrid mode. Four green LEDs indicate 44% of the power is coming from the PV panel. Five red LEDs indicate 55% of the power is being drawn from the battery.

5. Design Example of a 90-W HL with battery backup

Here, the design of a 90-W HL with a battery backup is proposed. The nominal working voltage selected is 48 V. 

5.1 HL specs

The specifications for the HL design are as follows: 

  1. Solar Panel Specifications: Power = 30 Wp, Vmp = 17.5 V, Imp = 1.7 A
  2. Number of Solar Panels connected in series: 3
  3. Solar Array Voltage: Vpv = 3 x 17.5 = 52.5 V; Voc = 60 V
  4. Number of LEDs in each MCPCB (A1 to A9): 15 white LEDs of 1 Watt each.
  5. Forward voltage of LED: 3.12 V
  6. Voltage across each lamp (A1 to A9): 15 x 3.12 = 46.8 V
  7. Current through LED lamps: 0.2 A (selected) 
  8. Current limiting resistor [1]: R1 to R9 = (52.5 – 46.8)/0.2 = 28.5 Ω (select 27 Ω, 2 W)
  9. Adapter specifications: 48 V, 2 A

As stated earlier, this lamp can be used without a battery backup in facilities that are open all seven days a week. In these applications, the solar power generated is fully utilized, so the cost of this lamp is minimal. The deployment of a large number of such lamps can significantly reduce the electricity bill. 

However, in offices that operate 5 days a week, the power generated during weekends goes to waste. In cases where another load can utilize the available PV power on weekends, such as a pump, vacuum cleaner, or a battery that needs charging, the PV panel’s output can be connected to that load. This way, we can still use the HL as is. However, if there is no other load that can utilize the PV power, then we must resort to battery backup.

5.2 Battery selection

The battery selection can be as follows: 

  1. Lithium-ion Battery: 13S (13 cells in series), Nominal voltage 48 V
  2. Battery voltages: Vb(MIN) = 42 V, Vb = 46.8 V, Vb(MAX) = 54.6 V
  3. Energy storage capacity (24 Ah):  48 x 24 = 1152 Wh
  4. Solar energy generation per day: 90 W x 6 hrs = 540 Wh
  5. Battery storage: 1152 Wh / 540 Wh = 2.1, or about 2 days of generation
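The storage sizing above can be checked with the example numbers (a minimal sketch; the function name is mine):

```python
def autonomy_days(batt_v: float, batt_ah: float, panel_w: float, sun_hours: float) -> float:
    """Days of generated solar energy the battery can hold."""
    storage_wh = batt_v * batt_ah      # 48 V x 24 Ah = 1152 Wh
    daily_wh = panel_w * sun_hours     # 90 W x 6 h = 540 Wh/day
    return storage_wh / daily_wh

print(round(autonomy_days(48, 24, 90, 6), 1))  # ~2.1 days
```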
5.3 Solar charge controller specs

A wide range of solar charge controllers is available on the market. To select a suitable charge controller, the following specifications are provided as guidelines:

  1. Battery type: Li-ion, LiFePO4
  2. Nominal Voltage: 48 V
  3. Controller type: MPPT
  4. Maximum output current: 5 A
  5. Protections: Battery reverse polarity, solar panel reversal, short circuit protection, battery overvoltage cutoff, battery low voltage trip.

Note that the open-circuit voltage (Voc) of the solar array is 60 V; therefore, the selected components should have a voltage rating greater than 60 V. 

This design is for a 90-W HL; however, higher-wattage lamps can also be designed. In that case, the lamp MCPCB selected should have a higher power rating. Alternatively, the number of MCPCBs can be increased to around 16. This way, the array can be arranged in a 4×4 layout. With an increased number of arrays, both the hardware and software of the HL have to be upgraded. 

It may be possible to connect two MCPCBs in parallel to increase the lamp power. However, in this case, the two MCPCBs should have a matching LED array forward voltage. This will ensure equal division of lamp current. 

5.4 Scheduling

The design shown here uses manual switches, which can be replaced with semiconductor switches. In this case, the operation of the HL can be automated with a weekly programming cycle. On weekdays, it will work in hybrid mode, in which we can select either mains power or battery power. The duration of battery power consumption can be planned to ensure that the battery is available for charging during weekends. 

6. Storing the HL’s excess energy

The solar-mains HL proposed earlier provides constant light irrespective of the sunlight conditions. It is a very cost-effective design and can be deployed in large numbers to reduce electricity costs. However, if it is not used on all 7 days of the week, then the solar power gets wasted. To avoid any power wastage, a battery backup system has been proposed here as an add-on feature. Using batteries, the excess solar energy can be stored. The battery backup also makes this lamp work as an emergency lamp during grid failures.  

Vijay Deshpande recently retired after a 30-year career focused on power electronics and DSP projects, and now works mainly on solar PV systems.


The post A battery backup for a solar-mains hybrid lamp  appeared first on EDN.

How to prevent overvoltage conditions during prototyping

Tue, 07/22/2025 - 16:40

The good thing about being a field applications engineer is that you get to work on many different circuits, often all at the same time. While this is interesting, it also presents problems. Jumping from one circuit to another involves disconnecting a spaghetti of leads and probes, and the chance for something going wrong increases exponentially with the number of wires involved.

It’s often the most basic things that are overlooked. While the probes and leads are checked and double checked to ensure everything is in place, if the voltage on the bench power supply is not adjusted correctly, the damage can be catastrophic, causing hours of rework.

The circuit described in this article helps save the day. Being a field applications engineer also results in a myriad of evaluation boards being collected, each in a state of modification, some of which can be repurposed for personal use. This circuit is based on an overvoltage/reverse voltage protection component, designed to protect downstream electronics from incorrect voltages being applied in automotive circuits.

Such events are caused by the automotive battery being connected the wrong way or a load dump event where the alternator becomes disconnected from the battery, causing a rise in voltage applied to the electronics.

Circuit’s design details

As shown in Figure 1, MAX16126 is a load dump protection controller designed to protect downstream electronics from over-/reverse-voltage faults in automotive circuits. It has an internal charge pump that drives two back-to-back N-channel MOSFETs to provide a low loss forward path if the input voltage is within a certain range, configured using external resistors. If the input voltage goes too high or too low, the drive to the gates of the MOSFETs is removed and the path is blocked, collapsing the supply to the load.

Figure 1 This is how the over-/reverse-voltage protection circuit works. Source: Analog Devices Inc.

The MAX16127 is similar to the MAX16126, but in the case of an overvoltage, it oscillates the MOSFETs to maintain the voltage across the load. If a reverse voltage occurs on the input, an internal 1-MΩ resistor between the GATE and SRC pins of the MAX16126 ensures MOSFETs Q1 and Q2 are held off, so the negative voltage does not reach the output. The MOSFETs are connected in opposing orientations to ensure the body diodes don’t conduct current.

The undervoltage pin, UVSET, is used to configure the minimum trip threshold of the circuit while the overvoltage pin, OVSET, is used to configure the maximum trip threshold. There is also a TERM pin connected via an internal switch to the input pin and this switch is open circuited when the part is in shutdown, so the resistive divider networks on the UVSET and OVSET pins don’t load the input voltage.

In this design, the UVSET pin is tied to the TERM pin, so the MOSFETs are turned on when the device reaches its minimum operating voltage of 3 V. The OVSET pin is connected to a potentiometer, which is adjusted to change the overvoltage trip threshold of the circuit.

To set the trip threshold to the maximum voltage, the potentiometer is adjusted to its minimum value; likewise, for the minimum trip threshold, the potentiometer is set to its maximum value. The IC switches off the MOSFETs when the OVSET pin rises above 1.225 V.

The overvoltage clamping range should be limited to between 5 V and 30 V, so resistors are inserted above and below the potentiometer to set the upper and lower thresholds. There are Zener diodes connected across the UVSET and OVSET pins to limit the voltage of these pins to less than 5.1 V.

Assuming a 47-kΩ potentiometer is used, the upper and lower resistor values of Figure 1 can be calculated.

To achieve a trip threshold of 30 V, Equation 1 is used:

30 V = 1.225 V × (R2 + 47 kΩ + R3) / R3     (Equation 1)

To achieve a trip threshold of 5 V, Equation 2 is used:

5 V = 1.225 V × (R2 + 47 kΩ + R3) / (R3 + 47 kΩ)     (Equation 2)

Equating the previous equations gives Equation 3:

30 V × R3 = 5 V × (R3 + 47 kΩ)     (Equation 3)

So,

25 × R3 = 235 kΩ, giving R3 = 9.4 kΩ

From this,

R2 = (30 V / 1.225 V) × R3 − 47 kΩ − R3 ≈ 174 kΩ

Using preferred values, let R3 = 10 kΩ and R2 = 180 kΩ. This gives an upper limit of 29 V and a lower limit of 5.09 V. This is perfect for a 30 V bench power supply.
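The divider arithmetic is easy to double-check with a few lines of Python. This is a quick sketch only: the resistor designations follow the article's Figure 1, and the 1.225-V OVSET threshold and 47-kΩ potentiometer are the values quoted in the text above.

```python
# Verify the OVSET divider thresholds for the circuit described above.
# R2 is the top resistor, R3 the bottom resistor, with the 47-kOhm
# potentiometer between them (values from the article).

V_OVSET = 1.225        # OVSET trip threshold, volts
R_POT = 47e3           # potentiometer, ohms
R2, R3 = 180e3, 10e3   # preferred values chosen in the article, ohms

total = R2 + R_POT + R3

# Pot at minimum: only R3 forms the bottom leg -> maximum trip voltage
v_trip_max = V_OVSET * total / R3

# Pot at maximum: bottom leg is R3 + pot -> minimum trip voltage
v_trip_min = V_OVSET * total / (R3 + R_POT)

print(round(v_trip_max, 2))  # 29.03, matching the article's 29 V
print(round(v_trip_min, 2))  # 5.09, matching the article's 5.09 V
```

Both results agree with the upper and lower limits stated for the preferred values, which is reassuring for a 30-V bench supply.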

Circuit testing

Figure 2 shows the prototype PCB. The trip threshold voltage was adjusted to 12 V and the circuit was tested.

Figure 2 The modified evaluation kit used for circuit testing. Source: Analog Devices Inc.

The lower threshold measured 5.06 V and the upper threshold measured 28.5 V. With a 10-V input and a 1-A load, the drop from input to output was 19 mV, which aligns with the MOSFET datasheet ON-resistance of about 10 mΩ (two MOSFETs in series give roughly 20 mΩ, or about 20 mV at 1 A).

Figure 3 shows the response of the circuit when a 10-V step was applied. The yellow trace is the input voltage, and the blue trace shows the output voltage. The trip threshold was set to 12 V, so the input voltage is passed through to the output with very little voltage drop.

Figure 3 A 10-V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The input voltage was increased to 15 V and retested. Figure 4 shows that the output voltage stays at 0 V.

Figure 4 A 15-V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The input voltage was reversed, and a –7 V step was applied to the input, with the results shown in Figure 5.

Figure 5 A –7 V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The negative input voltage was increased to –15 V and reapplied to the input of the circuit. The results are shown in Figure 6.

Figure 6 A –15 V step is applied to the input of MAX16126. Source: Analog Devices Inc.

Caution should be exercised when probing the gate pins of the MOSFETs when the input is taken to a negative voltage. Referring to Figure 1, the body diode of Q1 pulls the two source pins toward VIN, which is at a negative voltage. There is an internal 1-MΩ resistor between the GATE and SRC connections of the MAX16126, so when a ground-referenced 1-MΩ oscilloscope probe is attached to the gate pins of the MOSFETs, the probe acts like a 1-MΩ pull-up resistor to 0 V.

As the input is pulled negative, a resistive divider is formed between 0 V, the gate voltage, and the source of Q2, which is being pulled negative by the body diode of Q1. When the input is pulled below minus twice the turn-on voltage of Q2, this MOSFET turns on and the output starts to go negative. Using a higher-impedance oscilloscope probe avoids this problem.
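The loading effect can be sketched numerically. The two 1-MΩ resistances come from the text above; the MOSFET turn-on voltage used here is a hypothetical example value, not a datasheet figure.

```python
# Sketch of the probe-loading effect: the scope probe (to 0 V) and the
# internal gate-source resistor form a divider, so the gate sits halfway
# between 0 V and the (negative) source voltage.

R_PROBE = 1e6   # ground-referenced scope probe, ohms
R_INT = 1e6     # internal gate-source resistor in the MAX16126, ohms

def gate_source_voltage(v_source):
    """Return Vgs of Q2 when its source sits at v_source (volts)."""
    v_gate = v_source * R_INT / (R_PROBE + R_INT)  # divider midpoint
    return v_gate - v_source                       # Vgs = -v_source / 2

VTH = 3.0  # hypothetical turn-on voltage for Q2, volts

# Vgs reaches VTH once the input falls to -2 * VTH, so Q2 starts to turn on
print(gate_source_voltage(-2 * VTH))   # 3.0 V, right at the threshold
print(gate_source_voltage(-15.0))      # 7.5 V, Q2 is fully enhanced
```

This matches the observation above that the MOSFET turns on once the input goes below twice its turn-on voltage, and why a higher-impedance probe (weaker pull-up) avoids the effect.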

A simple modification to the MAX16126 evaluation kit provides reassuring protection from user-generated load dump events caused by momentary lapses in concentration when testing circuits on the bench. If the components in the evaluation kit are used, the circuit presents a low loss protection circuit that is rated to 90 V with load currents up to 50 A.

Simon Bramble specializes in analog electronics and power. He has spent his career in analog electronics and worked at Maxim and Linear Technology, both now part of Analog Devices Inc.

Related Content

The post How to prevent overvoltage conditions during prototyping appeared first on EDN.

Firmware-upgrade functional defection and resurrection

Mon, 07/21/2025 - 17:37

My first job out of college was with Intel, in the company’s nonvolatile memory division. After an initial couple of years dabbling with specialty EPROMs, I was the first member from that group to move over to the then-embryonic flash memory team to launch the company’s first BootBlock storage device, the 28F001BX. Your part number decode is correct: it was a whopping 1 Mbit (not Gbit!) in capacity 😂. Its then-uniqueness derived from two primary factors:

  • Two separately erasable blocks, asymmetrical in size
  • One of which (the smaller block) was hardware-lockable to prevent unintentional alteration of its contents, allowing graceful recovery in case the main (larger) block, holding the bulk of the system firmware, somehow got corrupted.

The 28F001BX single-handedly (in admitted coordination with Intel’s motherboard group, the first to adopt it) kickstarted the concept of upgradable BIOS for computers already in the field. Its larger-capacity successors did the same thing for digital cellular phones, although by then I was off working on even larger capacity devices with even more (symmetrical, this time) erase blocks for solid-state storage subsystems…which we now refer to as SSDs, USB flash sticks, and the like. This all may explain why in-system firmware updates (which involve much larger code payloads nowadays, of course)—both capabilities and pitfalls—have long been of interest to me.

The concept got personal not too long ago. Hopefully, at least some of you have by now read the previous post in my ongoing EcoFlow portable power station (and peripheral) series, which covered the supplemental Smart Extra Battery I’d gotten for my DELTA 2 main unit:

Here’s what they look like stacked, with the smart extra battery on top and the XT150 cable interconnecting them, admittedly unkempt:

The timeline

Although that earlier writeup was published on April 23, I’d actually submitted it on March 11. A bit more than a week post-submission, the DELTA 2 locked up. A week (and a day) after the earlier writeup appeared at EDN.com, I succeeded in bringing it back to life (also the day before my birthday, ironically). And in between those two points in time, a surrogate system also entered my life. The paragraphs that follow will delve into more detail on all these topics, including the role that firmware updates played at both the tale’s beginning and end points.

A locked-up DELTA 2

To start, let’s rewind to mid-March. For about a week, every time I went into the furnace room where the gear was stored, I’d hear the fan running on the DELTA 2. This wasn’t necessarily atypical; every time the device fired up its recharge circuits to top off the battery, the fan would briefly go on. And everything looked normal remotely, through the app:

But eventually, the fan-running repetition, seemingly more than mere coincidence, captured my attention, and I punched the DELTA 2’s front panel power button to see what was going on. What I found was deeply disturbing. For one thing, the smart extra battery was no longer showing as recognized by the main unit, even though it was still connected. And more troubling, in contrast to what the app was telling me, the display indicated the battery pack was drained. Not to mention the bright red indicator, suggestive that the battery pack was actually dead:

So, I tried turning the DELTA 2 off, which led to my next bout of woe. It wouldn’t shut down, no matter how long I held the power button. I tried unplugging it, no luck. It kept going. And going. I realized that I was going to need to leave it unplugged with the fan whining away, while in parallel I reached out to customer support, until the battery drained (the zeroed-out integrated display info was obviously incorrect, but I had no idea whether the “full” report from the app was right, either). Three days later, it was still going. I eventually plugged an illuminated workbench light into one of its AC outlets, whose incremental current draw finally did the trick.

I tried plugging the DELTA 2 back in. It turned on but wouldn’t recharge. It also still ignored subsequent manual power-down attempts, requiring that I again drain the battery to force a shutoff. And although it now correctly reported a zeroed battery charge status, the dead-battery icon was now joined by another error message, this one indicating overload of the device output(s) (?):

At this point, I paused and pondered what might have gone wrong. I’d owned the DELTA 2 for about six months at that point, and I’d periodically installed firmware updates to it via the app running on my phone (and in response to new-firmware-available notices displayed in that app) with no issues. But I’d only recently added the Smart Extra Battery to the mix. Something amiss about the most recent firmware rev apparently didn’t like the peripheral’s presence, I guessed:

So, while I was waiting for customer service to respond, I hit up Reddit. And lo and behold, I found that others had experienced the exact same issue:

Resuscitation

It turns out that V1.0.1.182 wasn’t the most recent firmware rev available, but for reasons that to this day escape me (but seem to be longstanding company practice), EcoFlow didn’t make the V1.0.1.183 successor generally available. Instead, I needed to file a ticket with technical support, providing my EcoFlow account info and my unit’s serial number, along with a description of the issue I was having, and requesting that they “push” the new version to me through the app. I did so, and with less than 24 hours of turnaround, they did so as well:

Fingers crossed, I initiated the update to the main unit:

Which succeeded:

Unfortunately, for unknown reasons, the subsequent firmware update attempt on the smart extra battery failed, rendering it inaccessible (only temporarily, thankfully, it turned out):

And even on the base unit, I still wasn’t done. Although it was now once again responding normally to front-panel power-off requests, its display was still wonky:

However, a subsequent reset and recalibration of the battery management system (BMS), which EcoFlow technical support hadn’t clued me in on but Reddit research had suggested might also be necessary, kicked off (and eventually completed) the necessary recharge cycle successfully:

(Longstanding readers may remember my earlier DJI drone-themed tutorial on what the BMS is and why periodic battery cycling to recalibrate it is necessary for lithium-based batteries):

And re-attempt of the smart extra battery firmware update later that day was successful as well:

Voila: everything was now back to normal. Hallelujah:

That said, I think I’ll wait for a critical mass of other brave souls to tackle the V1.0.1.200 firmware update more recently made publicly available, before following their footsteps:

The surrogate

And what of that “surrogate system” that “also entered my life”, which I mentioned earlier in this piece? This writeup’s already running long, so I won’t delve into too much detail on this part of the story here, saving it for a separate planned post to come. But the “customer service” folks I mentioned I’d initially reached out to, prior to my subsequent direct connection to technical support, were specific to EcoFlow’s eBay storefront, where I’d originally bought the DELTA 2.

They ended up sending me a DELTA 3 Plus and DELTA 3 Series Smart Extra Battery (both of which I’ve already introduced in prior coverage) as replacements, presumably operating under the assumption that my existing units were dead parrots, not just resting. They even indicated that I didn’t need to bother sending the DELTA 2-generation devices back to them; I should just responsibly dispose of them myself. “Teardown” immediately popped into my head; here’s an EcoFlow-published video I’d already found as prep prior to their subsequent happy restoration:

And here are the DELTA 3 successors, both standalone:

and alongside their predecessors. The much shorter height (and consequent overall decreased volume) of the DELTA 3 Series Smart Extra Battery versus its precursor is particularly striking:

As previously mentioned, I’ll have more on the DELTA 3 products in dedicated coverage to come shortly. Until then, I welcome your thoughts in the comments on what I’ve covered here, whether in general or related to firmware-update snafus you’ve personally experienced!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Firmware-upgrade functional defection and resurrection appeared first on EDN.

Two new runtime tools to accelerate edge AI deployment

Mon, 07/21/2025 - 16:03

While traditional artificial intelligence (AI) frameworks often struggle in ultra-low-power scenarios, two new edge AI runtime solutions aim to accelerate the deployment of sophisticated AI models in battery-powered devices like wearables, hearables, Internet of Things (IoT) sensors, and industrial monitors.

Ambiq Micro, the company that develops low-power microcontrollers using sub-threshold transistors, has unveiled two new edge AI runtime solutions optimized for its Apollo system-on-chips (SoCs). These developer-centric tools—HeliosRT (runtime) and HeliosAOT (ahead-of-time)—offer deployment options for edge AI across a wide range of applications, spanning from digital health and smart homes to industrial automation.

Figure 1 The new runtime tools allow developers to deploy sophisticated AI models in battery-powered devices. Source: Ambiq

The industry has seen numerous failures in the edge AI space because users dislike it when the battery runs out in an hour. It’s imperative that devices running AI can operate for days, even weeks or months, on battery power.

But what’s edge AI, and what’s causing failures in the edge AI space? Edge AI is anything that’s not running on a server or in the cloud; for instance, AI running on a smartwatch or home monitor. The problem is that AI is power-intensive, and sending data to the cloud over a wireless link is also power-intensive. Moreover, cloud computing is expensive.

“What we aim is to take the low-power compute and turn it into sophisticated AI,” said Carlos Morales, VP of AI at Ambiq. “Every model that we create must go through runtime, which is firmware that runs on a device to take the model and execute it.”

LiteRT and HeliosAOT tools

LiteRT, formerly known as TensorFlow Lite, is the firmware-oriented runtime of the TensorFlow platform. HeliosRT, a performance-enhanced implementation of LiteRT, is tailored for energy-constrained environments and is compatible with existing TensorFlow workflows.

HeliosRT optimizes custom AI kernels for the Apollo510 chip’s vector acceleration hardware. It also improves numeric support for audio and speech processing models. Finally, it delivers up to 3x gains in inference speed and power efficiency over standard LiteRT implementations.

Next, HeliosAOT introduces a ground-up, ahead-of-time compiler that transforms TensorFlow Lite models directly into embedded C code for edge AI deployment. “AOT interpretation, which developers can perform on their PC or laptop, produces C code, and developers can take that code and link it to the rest of the firmware,” Morales said. “So, developers can save a lot of memory on the code size.”

HeliosAOT provides a 15–50% reduction in memory footprint compared to traditional runtime-based deployments. Furthermore, with granular memory control, it enables per-layer weight distribution across the Apollo chip’s memory hierarchy. It also streamlines deployment with direct integration of generated C code into embedded applications.

Figure 2 HeliosRT and HeliosAOT tools are optimized for Apollo SoCs. Source: Ambiq

“HeliosRT and HeliosAOT are designed to integrate seamlessly with existing AI development pipelines while delivering the performance and efficiency gains that edge applications demand,” said Morales. He added that both solutions are built on Ambiq’s sub-threshold power optimized technology (SPOT).

HeliosRT is now available in beta via the neuralSPOT SDK, while a general release is expected in the third quarter of 2025. On the other hand, HeliosAOT is currently available as a technical preview for select partners, and general release is planned for the fourth quarter of 2025.

Related Content

The post Two new runtime tools to accelerate edge AI deployment appeared first on EDN.

Did connectivity sunsetting kill your embedded-system battery?

Fri, 07/18/2025 - 22:12

You’re likely familiar with the concept of “sunsetting,” where a connectivity standard or application is scheduled to be phased out, such that users who depend on it are often simply “out of luck.” It’s frustrating, as it can render an established working system that is doing its job properly either partially or totally useless. The industry generally rationalizes sunsetting as an inevitable consequence of the progress and new standards not only superseding old ones but making them obsolete.

Sunsetting can leave unintended or unknowing victims, but it goes far beyond just loss of connectivity, and I am speaking from recent experience. My 2019 ICE Subaru Outback wouldn’t start despite its fairly new battery; it was totally dead as if the battery was missing. I jumped the battery and recharged it by running the car for about 30 minutes, but it was dead again the next morning. I assumed it was either a defective charging system or a low- or medium-resistance short circuit somewhere.

(As an added punch to the gut, with the battery dead, there was no way to electronically unlock the doors or get to the internal hood release, so it seemed it would have to be towed. Fortunately, the electronic key fob has a tiny “secret” metal key that can be used in its old-fashioned, back-up mechanical door lock just for such situations.)

I jump-started it again and drove directly to the dealer, who verified the battery and charging system were good. Then the service technician pulled a technical rabbit out of his hat—apparently, this problem was no surprise to the service team.

The vampire (drain) did it—but not the usual way

The reason for the battery being drained is subtle but totally avoidable. It was an aggravated case of parasitic battery drain (often called “vampire drain” or “standby power”; I prefer the former) where the many small functions in the car still drain a few milliamps each as their keep-alive current. The aggregate vampire power drawn by the many functions in the car, even when the car is purportedly “off,” can kill the battery.

Subaru used 3G connectivity to link the car to their basic Starlink Safety and Security emergency system, a free feature even if you don’t pay for its many add-on subscription functions (I don’t). However, 3G cellular service is being phased out or “sunsetted” in industry parlance. Despite this sunsetting, the car’s 3G transponder, formally called a Telematics Data Communication Module (TDCM or DCM), just kept trying, thus killing the battery.

The dealer was apologetic and replaced the 3G unit at no cost with a 4G-compatible unit that they conveniently had in stock. I suspect they were prepared for this occurrence all along and were hoping to keep it quiet. There have been some class-action suits and settlements on this issue, but the filing deadline had passed, so I was out of luck on that.

An open-market replacement DCM unit is available for around $500. While the dealer pays less, it’s still not cheap, and swapping them is complicated and time-consuming. It takes at least an hour for physical access, setup, software initialization, and check-out—if you know what you are doing. There are many caveats in the 12-page instruction DCM section for removal and replacement of the module (Figure 1) as well as in the companion 14-page guide for the alternative Data Communication Module (DCM) Bypass Box (Figure 2), which details some tricky wire-harness “fixing.”
Figure 1 The offending unit is behind the console (dashboard) and takes some time to remove and then replace. Source: Subaru via NHTSA

Figure 2 There are also some cable and connector issues of which the service technician must be aware and use care. Source: Subaru via NHTSA

While automakers impose strict limits on the associated standby drain current for each function, it still adds up and can kill the battery of a car parked and unused for anywhere from a few days to a month. The period depends on the magnitude of the drain and the battery’s condition. I strongly suspect that the 3G link transponder uses far more power than any of the other functions, so it’s a more worrisome vampire.
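A back-of-envelope calculation shows why one stuck transponder dominates the aggregate drain. All the numbers below are illustrative assumptions for a typical car, not Subaru or TDCM figures.

```python
# Rough estimate of how long a parked car survives a vampire drain.
# Capacity, usable fraction, and currents are illustrative assumptions.

battery_capacity_ah = 50.0   # typical lead-acid car battery
usable_fraction = 0.5        # below ~50% charge, cranking gets doubtful
normal_drain_ma = 30.0       # assumed aggregate keep-alive current
stuck_modem_ma = 200.0       # assumed 3G module retrying constantly

def days_to_flat(drain_ma):
    """Days until the usable capacity is consumed by a constant drain."""
    usable_mah = battery_capacity_ah * 1000 * usable_fraction
    return usable_mah / drain_ma / 24

print(round(days_to_flat(normal_drain_ma)))                   # 35 days
print(round(days_to_flat(normal_drain_ma + stuck_modem_ma)))  # 5 days
```

Under these assumptions, a healthy car lasts about a month parked, while a retry-happy radio shrinks that to under a week, consistent with the few-days-to-a-month range cited above.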

Sunsetting + vampire drain = trouble

What’s the problem here? Although 3G was being sunsetted, that was not the real problem; discontinuing a standard is inevitable at some point. Further, there could also be many other reasons for not being able to connect, even if 3G was still available, such as being parked in a concrete garage. After all, both short- and long-term link problems should be expected.

No, the problem is a short-sighted design that allowed a secondary, non-core function over which you have little or no control (here, the viability of the link) to become a priority and single-handedly drain power and deplete the battery. Keep in mind that the car is perfectly safe to use without this connectivity feature being available.

There’s no message to the car’s owner that something is wrong; it just keeps chugging away, attempting to fulfill its mission, regardless of the fact that it depletes the car’s battery. It has a mission objective and nothing will stop it from trying to complete it, somewhat like the relentless title character in the classic 1984 film The Terminator.

A properly vetted design would include a path that says if connectivity is lost for any reason, keep trying for a while and then go to a much lower checking rate, and perhaps eventually stop.
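That retry policy is simple to express in code. Here is a minimal sketch of such an exponential-backoff schedule; the intervals and give-up deadline are illustrative assumptions, not values from any automaker's firmware.

```python
# Sketch of a "keep trying, then back off, then stop" retry policy for a
# connectivity module. Intervals and deadline are illustrative.

def retry_schedule(base_s=60, max_interval_s=3600, give_up_after_s=86400):
    """Yield wait times (seconds) between connection attempts."""
    elapsed, interval = 0, base_s
    while elapsed < give_up_after_s:
        yield interval
        elapsed += interval
        interval = min(interval * 2, max_interval_s)  # exponential backoff

schedule = list(retry_schedule())
print(schedule[:7])   # [60, 120, 240, 480, 960, 1920, 3600]
print(len(schedule))  # 29 attempts over ~24 h, then the module stops
```

Instead of hammering the battery forever, the module makes a handful of quick retries, settles into an hourly check, and after a day gives up entirely (or could drop to, say, one attempt per ignition cycle).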

This embedded design problem is not just an issue for cars. What if the 3G or other link was part of a hard-to-reach, long-term data-collection system that was periodically reporting, but also had internal memory to store the data? Or perhaps it was part of a closed-loop measurement and control that could function autonomously, regardless of reporting functionality?

Continuously trying to connect despite the cost in power is a case of the connectivity tail not only wagging the core-function dog but also beating it to death. It is not a case of an application going bad due to forced “upgrades” leading to incompatibilities (you probably have your own list of such stories). Instead, it’s a design oversight of allowing a secondary, non-core function to take over the power budget (in some cases, also the CPU), thus disabling all the functionality.

Have you ever been involved with a design where a non-critical function was inadvertently allowed to demand and get excessive system resources? Have you ever been involved with a debug challenge or product-design review where this unpleasant fact had initially been overlooked, but was caught in time?

Whatever happens, I will keep checking to see how long 4G is available in my area. The various industry “experts” say 10 to 15 years, but these experts are often wrong! Will 4G connectivity sunset before my car does? And if it does, will the car’s module keep trying to connect and, once again, kill the battery? That remains to be seen!

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References

The post Did connectivity sunsetting kill your embedded-system battery? appeared first on EDN.

Evaluation board powers small robotics and drones

Fri, 07/18/2025 - 21:05

The EPC91118 reference design from EPC integrates power, sensing, and control on a compact circular PCB for humanoid robot joints and UAVs. Driven by the EPC23104 GaN-based power stage, the three-phase BLDC inverter delivers up to 10 A RMS steady-state output and 15 A RMS pulsed.

Complementing the GaN power stage are all the key functions for a complete motor drive inverter, including a microcontroller, rotor shaft magnetic encoder, regulated auxiliary rails, voltage and current sensing, and protection features. Housekeeping supplies are derived from the inverter’s main input, with a 5-V rail powering the GaN stage and a 3.3-V rail supplying the controller, sensors, and RS-485 interface. All these functions fit on a 32-mm diameter board, expanding to 55 mm including an external frame for mechanical integration.

The inverter’s small size allows integration directly into humanoid joint motors. GaN’s high switching frequency allows the use of compact MLCCs in place of bulkier electrolytic capacitors, helping reduce overall size while enhancing reliability. With a footprint reportedly 66% smaller than comparable silicon MOSFET designs, the EPC91118 enables a space-saving motor drive architecture.

EPC91118 reference design boards are priced at $394.02 each. The EPC23104 eGaN power stage IC costs $2.69 each in 3000-unit reels. Both are available for immediate delivery from Digi-Key.

EPC91118 product page

Efficient Power Conversion

The post Evaluation board powers small robotics and drones appeared first on EDN.

Real-time AI fuels faster, smarter defect detection

Fri, 07/18/2025 - 21:05

TDK SensEI’s edgeRX Vision system, powered by advanced AI, accurately detects defects in components as small as 1.0×0.5 mm in real time. Operating at speeds up to 2000 parts per minute, it reduces false positives and enhances efficiency in high-throughput manufacturing.

AI-driven vision systems now offer real-time processing, improved label efficiency, and multi-modal interaction through integration with language models. With transformer-based models like DINOv2 and SAM enabling versatile vision tasks without retraining, edge-based solutions are more scalable and cost-effective than ever—making this a timely entry point for edgeRX Vision in high-volume manufacturing.

edgeRX Vision integrates with the company’s edgeRX sensors and industrial machine health monitoring platform. By enhancing existing hardware infrastructure, it helps minimize unnecessary machine stoppages. Together, the system offers manufacturers a smart, integrated approach to demanding production challenges.

Request a demonstration of the edgeRX Vision defect detection system via the product page link below.

edgeRX Vision product page

TDK SensEI

The post Real-time AI fuels faster, smarter defect detection appeared first on EDN.

Open-source plugin streamlines edge AI deployment

Fri, 07/18/2025 - 21:05

Analog Devices and Antmicro have released AutoML for Embedded, a tool that simplifies AI deployment on edge devices. Part of Antmicro’s hardware-agnostic, open-source Kenning framework, it automates model selection and optimization for resource-constrained systems. The tool helps users deploy models more easily without deep expertise in AI or embedded development.

AutoML for Embedded is a Visual Studio Code plugin designed to integrate seamlessly into existing development workflows. It works with CodeFusion Studio and supports direct deployment to ADI’s MAX78002 AI accelerator MCU and MAX32690 ultra-low power MCU. The tool also enables rapid prototyping and testing through Renode-based simulation and Zephyr RTOS workflows. Its support for general-purpose, open-source tools allows flexible model optimization without locking developers into a specific platform.

With step-by-step tutorials, reproducible pipelines, and example datasets, users can move from raw data to edge AI deployment quickly without needing data science expertise. AutoML for Embedded is available now on the Visual Studio Code Marketplace and GitHub. Additional resources are available on the ADI developer portal.

AutoML for Embedded product page 

Analog Devices

The post Open-source plugin streamlines edge AI deployment appeared first on EDN.

Foundry PDK drives reliable automotive chip design

Fri, 07/18/2025 - 21:05

SK keyfoundry, in collaboration with Siemens EDA Korea, has introduced a 130-nm automotive process design kit (PDK) compatible with Calibre PERC software. The process node supports both schematic and layout verification, including interconnect reliability checks. With this PDK, fabless companies in Korea and abroad can optimize automotive power semiconductor designs while performing detailed reliability verification.

According to Siemens, while the 130-nm process has been a reliable choice for analog and power semiconductor designs, growing design complexity has made it harder to meet performance targets. The new PDK from SK keyfoundry enables designers to use Siemens’ Calibre PERC with the foundry’s process technology, supporting layout-level verification that accounts for manufacturing constraints.

SK keyfoundry aims to deepen collaboration with Siemens through optimized design solutions, enhanced manufacturing reliability, and a stronger foundry market position.

To learn more about Siemens’ Calibre PERC reliability verification software, click here.

Siemens Digital Industries Software 

SK keyfoundry 

The post Foundry PDK drives reliable automotive chip design appeared first on EDN.

SiC diodes maintain stable, efficient switching

Fri, 07/18/2025 - 21:05

Nexperia’s 1200-V, 20-A SiC Schottky diodes contribute to high-efficiency power conversion in AI server infrastructure and solar inverters. The PSC20120J comes in a D2PAK Real-2-Pin (TO-263-2) surface-mount package, while the PSC20120L uses a TO-247 Real-2-Pin (TO-247-2) through-hole package. Both thermally stable plastic packages ensure reliable operation up to +175°C.

These Schottky diodes offer temperature-independent capacitive switching and virtually zero reverse recovery, resulting in a low figure of merit (QC×VF). Their switching performance remains consistent across varying current levels and switching speeds.

Built on a merged PiN Schottky (MPS) structure, the diodes also provide strong surge current handling, as shown by their high peak forward current (IFSM). This robustness reduces the need for external protection circuitry, helping engineers simplify designs, improve efficiency, and shrink system size in high-voltage, harsh-environment applications.

Use the product page links below to view datasheets and check availability for the PSC20120J and PSC20120L SiC Schottky diodes.

PSC20120J product page 

PSC20120L product page

Nexperia

The post SiC diodes maintain stable, efficient switching appeared first on EDN.

Electronic water softener design ideas to transform hard water

Thu, 07/17/2025 - 11:39

If you are tired of scale buildup, scratchy laundry, or cloudy glassware, it’s probably time to take hard water into your own hands, literally. This blog delves into inventive, affordable, and unexpectedly easy design concepts for building your own electronic water softener.

Whether you are an engineer armed with blueprints or a hands-on do-it-yourself enthusiast ready to roll up your sleeves, the pointers shared here will help you transform a persistent plumbing issue into a smooth-flowing success.

So, what’s an electronic water softener (descaler)? It’s a simple oscillator circuit tailored to create a magnetic field around a water pipe to reduce the chances of smaller deposits sticking to the inside of the pipes.

The concept of water conditioning is not new; it dates back to the 1930s. Hard water has a high concentration of minerals, the most abundant being calcium. This mineral makeup is what makes the water “hard” and reduces the effectiveness of soaps and detergents. Over time, these tiny deposits can stick to the inside of pipes; clog filters, faucets, and shower heads; and leave residue on kettles.

The idea behind the electronic/electromagnetic water softener is that a magnetic field around the water pipe causes calcium particles to clump together. Such a system consists of two coils wound around the water pipe with a gap between them.

The circuit driving them is often a high frequency oscillator that generates pulses of 15 kHz or so. As a result, large particles are formed, which pass through the water pipe and do not cling to the inside.

Thus, the electronic water softener operates by wrapping coils of wire around the incoming water main to pass a magnetic field through the water. This encourages the calcium in the water to stay in solution, preventing it from clinging to taps and kettles. The electromagnetic flux is also said to make the water physically softer by breaking up the hard mineral clusters.

Below is a visual summary of the process.

Figure 1 The original image was sourced from Google Images and has been retouched by the author for visual clarity.

Most electronic descalers operate with two coils to increase the time for which the water is exposed to the electromagnetic waveform, but a few use only one coil.

Figure 2 Here is how electronic descalers operate with two coils or one coil. Source: Author

A quick inspection of the most common water softener circuits found on the web shows that the drive frequency is about 2 to 20 kHz with a 5- to 15-V amplitude. The coils wound outside the pipe are simple 20- to 30-turn inductors made of 18 to 24 SWG insulated copper wire.

It has also been noted that neither the material of the water pipe (PVC or metal) nor its diameter has a significant effect on the descaler's efficiency.

When I stumbled upon a blogpost from 2013, it felt like the perfect moment to explore the idea more deeply. This marks the beginning of a hands-on learning journey—less of a formal project and more of a series of small, practical experiments and functional blueprints.

The focus is not on making a polished product, but on picking up new skills and exploring where the process leads. So, after learning from several sources about how electronic water softeners work, I decided to give it a try.

The first step in my process involved developing a universal (and exploratory) driver circuit for the pipe coil(s). The outcome is shown below.

Figure 3 The schematic shows a driver circuit for the pipe coil. Source: Author

Below is the list of parts.

  • C1 and C2: 470 uF/25 V
  • C3: 1,000 uF/25 V
  • D1: 1N4007
  • L1: 470 uH/1 A
  • IC1: MC34151

Note that the single-layer coil L2 on the 20-mm diameter PVC water pipe is made of around 60 turns of 18 AWG insulated wire. The single-layer coil on the pipe has an inductance of about 20 uH when measured with an LCR meter. The 470 uH drum core inductor L1 (an empirically selected part) throttles the peak current through the pipe coil L2.
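The 20-uH figure is easy to sanity-check with Wheeler's single-layer solenoid approximation. The sketch below assumes a coil radius of about 10.5 mm (pipe radius plus half the wire diameter) and a ~1.1-mm winding pitch for insulated 18 AWG wire, giving a winding length of about 66 mm; both are estimates, not measurements from the article.

```python
# Estimate the inductance of the single-layer pipe coil with
# Wheeler's formula: L(uH) = a^2 * n^2 / (9a + 10b), a and b in inches.
# Assumed geometry: 20-mm PVC pipe (coil radius ~10.5 mm including wire),
# 60 turns of 18 AWG wire at ~1.1-mm pitch -> ~66-mm winding length.
def wheeler_uH(radius_mm, length_mm, turns):
    a = radius_mm / 25.4   # coil radius in inches
    b = length_mm / 25.4   # winding length in inches
    return (a * a * turns * turns) / (9 * a + 10 * b)

L = wheeler_uH(10.5, 66.0, 60)
print(round(L, 1))  # roughly 20 uH, in line with the LCR-meter reading
```

The estimate lands within a couple of microhenries of the measured value, which suggests the winding was indeed close-wound in a single layer.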

A single-channel MOSFET gate driver is adequate for IC1 in this setup; however, I opted for the MC34151 gate driver during prototyping as it was readily on hand. Next comes a slightly different blueprint for the pipe coil driver.

Figure 4 Arduino Uno was used to drive the pulse input of the pipe coil driver circuitry. Source: Author

To drive the pulse input of the pipe coil driver circuitry, an Arduino Uno was used (just for convenience) to generate a sweeping frequency between 500 Hz and 5 kHz (the adapted code is available upon request). Although chosen without a rigorous technical justification, this empirically tuned range has performed noticeably better in some targeted zones.
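Since the author's Arduino code is only available on request, here is a hypothetical sketch (in Python, for illustration) of the timing math behind such a sweep: stepping the frequency from 500 Hz to 5 kHz and computing the half-period delay, in microseconds, that a square-wave output pin would need at each step.

```python
# Hypothetical sketch of the frequency-sweep timing used to drive the
# pulse input: step from 500 Hz to 5 kHz and compute the half-period
# delay in microseconds for a square-wave GPIO toggle at each frequency.
def sweep_half_periods_us(f_start=500, f_stop=5000, f_step=500):
    return [round(1e6 / (2 * f)) for f in range(f_start, f_stop + 1, f_step)]

delays = sweep_half_periods_us()
print(delays[0], delays[-1])  # 1000 us at 500 Hz, 100 us at 5 kHz
```

On an actual Arduino, each entry would translate to a pair of delayMicroseconds() calls around pin toggles, or a call to tone() with the swept frequency.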

At this stage, opting for a microcontroller-based oscillator or pulse generator is advisable to ensure scalability and facilitate future enhancements. That said, a solution using discrete components continues to be a valid choice (an adaptable textbook pointer is provided below).

Figure 5 An adaptable textbook circuit for the discrete-component approach. Source: Author

Nevertheless, the setup ought to be capable of delivering a pulsed current that generates time-varying magnetic fields within the water pipe, thereby inducing an internal electric field. For optimal induction efficiency, a square-wave pulsed current is generally recommended.

The experiment is still ongoing, and I am drawing a tentative conclusion at this stage. But for now, it’s your chance to dive in, experiment, and truly make it your own.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Electronic water softener design ideas to transform hard water appeared first on EDN.

What makes today’s design debugging so complex

Wed, 07/16/2025 - 17:51

Why does the task of circuit debugging keep getting more complex year by year? It's no longer a matter of looking at the schematic diagram and tracing the signal flow path from input to output. Here is a sneak peek at the factors leading to a steady increase in the challenges of debugging electronic circuits. It shows how the intermingled software/hardware approach has made prototyping electronic designs so complex.

Read the full blog on EDN’s sister publication, Planet Analog.

Related content

The post What makes today’s design debugging so complex appeared first on EDN.

Headlights In Massachusetts

Wed, 07/16/2025 - 16:25

From January 5, 2024, please see: “The dangers of light glare from high-brightness LEDs.”

I have just become aware that at least one state has wisely chosen to address the safety issue of automotive headlight glare. As to the remaining forty-nine states, I have not yet seen any indication(s) of similar statutes. Now please see the following screenshots and links:

One question at hand of course is how well the Massachusetts statute will be enforced. What may be on the books is one thing but what will happen on the road remains to be seen.

I am hopeful.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post Headlights In Massachusetts appeared first on EDN.

Another weird 555 ADC

Tue, 07/15/2025 - 20:11

Integrating ADCs that provide accurate results without requiring a precision integrator capacitor has been around for a long time. A venerable example is that multimeter favorite, the dual-slope ADC. That classic topology uses just one integrator to alternately accumulate both incoming signal and complementary voltage references with the same RC time constant. It thus automatically ratios out time constant tolerance. Slick. 

This Design Idea (DI) will describe a (possibly) new integrating converter that reaches a similar goal of accurate conversions without needing an accurate capacitor. But it gets there via a significantly different route. Along the route, it picks up some advantageous wrinkles.

Wow the engineering world with your unique design: Design Ideas Submission Guide

As Figure 1 shows, the design starts off with an old friend, the 555 analog timer.

Figure 1 Op-amp A1 continuously integrates the incoming Vin signal, thus minimizing noise. Conversion occurs in alternating phases, T- and T+. The T-/T+ phase duration ratio is independent of the RC time constant, is therefore insensitive to C1 tolerance, and contains both Vin magnitude and polarity information.

Incoming signal Vin is summed with the voltage at node X and accumulated by differential integrator A1. A conversion cycle begins when A1’s output (node Y) reaches 4.096 V and lifts timer U1’s threshold pin (Thr) through the R2/R3 divider to the 2.048-V reference supplied by voltage reference Z1. This switches on U1’s Dch pin, grounding A1’s noninverting input through the R4/R5 divider, outputs a zero to the GPIO bit (node Z), and begins the T- phase as A1’s output ramps down. The duration of this T- phase is given by:

T- = R1C1/(1 + Vin/Vfullscale)

Vfullscale = ±2.048 V × (R1/R6) = ±0.683 V

The T- phase ends when A1’s output reaches U1’s trigger (Trg) voltage set to 1.024 V by Z1 and U1’s internal 2:1 divider. See the LMC555 datasheet for the gritty details.

This starts the T+ conversion phase with an output of one on the GPIO bit, and the release of Dch by U1, which drives A1’s noninverting input to 1.024 V, set by Z1 and the R4/R5 divider. The T+ positive-going ramp continues until A1’s output reaches the 4.096 VThr threshold described above and initiates the next conversion cycle. 

T+ phase duration is:

T+ = R1C1/(1 – Vin/Vfullscale)

 This frenetic frenzy of activity is summarized in Figure 2.

Figure 2 Various conversion signals found at circuit nodes X, Y, and Z.

Meanwhile, the GPIO pin is assumed to be connected to a suitable microcontroller counter/timer peripheral that accumulates the T- and T+ durations for a chosen resolution and conversion rate. A timing resolution between 1 µs and 100 ns should work for the subsequent Vin calculation. This brings up that claim of immunity to integrator capacitor tolerance you might be wondering about.

The durations of the T+ and T- ramps are proportional to C1, as shown in Figure 3.

Figure 3 Black = Vin, Red = T+ duration in ms, Blue = T- duration, C1 = 0.001 µF.

However, software arithmetic saves the day (and maybe even my reputation!) because recovery of Vin from the raw phase duration timeouts involves a bit of divide-and-conquer.

Vin = Vfullscale ((1 – (T-/T+))/(1 + (T-/T+)))

And, of course, when T- is divided by T+, the R1C1 terms conveniently disappear, taking sensitivity to C1 tolerance away with them!
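This cancellation is easy to verify numerically. The sketch below uses the T- and T+ expressions and the ±0.683-V full scale from the text, runs the conversion arithmetic for two wildly different values of C1, and recovers the same Vin both times:

```python
# Round-trip check of the conversion arithmetic: T- and T+ both scale
# with R1*C1, but the recovered Vin depends only on their ratio.
VFS = 0.683  # full-scale voltage from the text

def phases(vin, r1c1):
    t_minus = r1c1 / (1 + vin / VFS)
    t_plus = r1c1 / (1 - vin / VFS)
    return t_minus, t_plus

def recover(t_minus, t_plus):
    r = t_minus / t_plus
    return VFS * (1 - r) / (1 + r)

vin = 0.25
for c1 in (1e-9, 1e-6):             # 0.001 uF vs 1 uF
    tm, tp = phases(vin, 1e6 * c1)  # R1 = 1 MOhm
    print(round(recover(tm, tp), 6))  # 0.25 both times -- C1 cancels
```

The absolute phase durations differ by three orders of magnitude, but the ratio, and therefore the recovered voltage, does not.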

A final word about Vfullscale. The ±0.683 V figure derived above is a minimum value, but any larger span can be easily accommodated by adding one resistor (R8) and changing another (R1). Here’s the scale-changing arithmetic:

R1 = 1M * Vfullscale/0.683

R8 = 1/(1/1M – 1/R1)
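As a quick numerical check of this scale-changing arithmetic, computing the resistor values for a ±10-V span gives R1 of about 14.64 MΩ (which rounds to the 15-MΩ ballpark) and R8 of about 1.07 MΩ:

```python
# Scale-changing arithmetic from the text, evaluated for a +/-10-V span.
def scale_resistors(vfs, r_base=1e6, vfs_min=0.683):
    r1 = r_base * vfs / vfs_min       # R1 = 1M * Vfullscale / 0.683
    r8 = 1 / (1 / r_base - 1 / r1)    # R8 = 1 / (1/1M - 1/R1)
    return r1, r8

r1, r8 = scale_resistors(10.0)
print(round(r1 / 1e6, 2), round(r8 / 1e6, 3))  # ~14.64 MOhm and ~1.073 MOhm
```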

 For example, ±10 V is illustrated in Figure 4.

Figure 4 A ±10-V Vin span is easily accommodated – if you can find a 15 MΩ precision resistor.

Note that R1 would probably need to be a series string to get to 15 MΩ using OTS resistors.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Another weird 555 ADC appeared first on EDN.

Circuits to help verify matched resistors

Tue, 07/15/2025 - 17:06

Analog designers often need matched resistors for their circuits [1]. The best solution is to buy integrated resistor networks [2], but what can you do if the parts vendors do not offer the desired values or matching grade?

Wow the engineering world with your unique design: Design Ideas Submission Guide

The circuit in Figure 1 can help. It is made of two voltage dividers (a Wheatstone bridge) followed by an instrumentation amplifier, IA, with a gain of 160. R3 is the reference resistor, and R4 is its match. The circuit subtracts the voltages coming out of the two dividers and amplifies the difference.

Figure 1 The intuitive solution is a circuit made of a Wheatstone bridge and an instrumentation amplifier.

Calculations show that the circuit provides a perfectly linear response between output voltage and resistor mismatch (see Figure 2). The slope of the line is 1 V per 1% of resistor mismatch; for example, a Vout of -1 V means -1% deviation between R3 and R4.

Figure 2 Circuit response is perfectly linear with a 1:1 ratio between output voltage and resistor mismatch.
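The 1-V-per-1% slope can be reproduced numerically. The sketch below assumes a 2.5-V bridge excitation; that value is not stated in the article, but it is the one that makes the gain of 160 produce the stated slope.

```python
# Wheatstone bridge followed by an IA with a gain of 160. The 2.5-V
# excitation is an assumption chosen to reproduce the stated
# 1-V-per-1% slope; the article does not give it explicitly.
def bridge_out(r3, r4, vexc=2.5, gain=160):
    # One divider sits at the midpoint (perfect match); the other
    # carries the mismatched R4. The IA amplifies the difference.
    return gain * vexc * (r4 / (r3 + r4) - 0.5)

print(round(bridge_out(10_000, 9_900), 3))  # about -1.005 V for a -1% mismatch
```

Note the output is not exactly -1.000 V: the bridge response is only linear to first order, though over a ±1% mismatch the deviation is well under 1%.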

A possible drawback is the price: instrumentation amplifiers with power supplies of ±5 V or more start at about USD 6.20. Figure 3 shows another circuit using a dual op-amp, which is 2.6 times cheaper than the cheapest instrumentation amplifier.

Figure 3 This circuit also provides a perfect 1:1 response, but at a lower cost.

The transfer function is:

Assuming,

converts the transfer function into the form,

If the term within the brackets equals unity and R5 equals R6, the transfer function becomes:

In other words, the output voltage equals the percentage deviation of R4 with respect to R3. This voltage can be positive, negative, or, in the case of a perfect match between R3 and R4, zero.

The circuit was tested with R3 = 10.001 kΩ and R4 = 10 kΩ ±1%. As Figure 4 shows, the transfer function is perfectly linear (the R² correlation factor equals unity) and provides a one-to-one relation between output voltage and resistor mismatch. The slope of the line is adjusted to unity using potentiometer R2 and the two end values of R4. A minor offset is present due to the imperfect match between R5 and R6 and the offset voltage VIO of the op-amps.

Figure 4 The transfer function provides a convenient one-to-one reading.

An amusing detail is that the circuit can be used to find a matched pair of resistors, R5 and R6, for itself. As mentioned before, it is better to buy a network of matched resistors. It may look expensive, but it is worth the money.

Equation 3 shows that circuit sensitivity can be increased by increasing R7 and/or VREF. For example, if R7 goes up to 402 kΩ, the slope of the response line will increase to 10 V per 1% of resistor mismatch. A mismatch of 0.01% will generate an output voltage of 100 mV, which can be measured with high confidence.
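Since the slope scales directly with R7, these numbers are easy to verify. The 40.2-kΩ base value used below is inferred from the stated 1-V-per-1% slope; the article only gives the 402-kΩ end point.

```python
# Sensitivity scales with R7: ten times the resistance gives ten times
# the slope. The 40.2-kOhm base value is inferred, not stated.
def slope_v_per_pct(r7, r7_base=40.2e3, slope_base=1.0):
    return slope_base * r7 / r7_base

slope = slope_v_per_pct(402e3)  # 10 V per 1% of mismatch
print(slope * 0.01)             # 0.1 V (100 mV) for a 0.01% mismatch
```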

Watch the current capacity of VREF and the op-amps when you deal with small resistors. A reference resistor of 100 Ω, for example, will draw 25 mA from VREF into the output of the first op-amp. Another 2.5 mA will flow through R5.

Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.

 Related Content

References

  1. Bill Schweber. The why and how of matched resistors (a two part series). https://www.powerelectronictips.com/the-why-and-how-of-matched-resistors-part-1/.
  2. Art Kay. Should you use discrete resistors or a resistor network? https://www.planetanalog.com/should-you-use-discrete-resistors-or-a-resistor-network/ .

The post Circuits to help verify matched resistors appeared first on EDN.

Real-time motor control for robotics with neuromorphic chips

Tue, 07/15/2025 - 10:06

Robotic controls started with simple direct-current motors. Engineers had limited control over mobility because few feedback mechanisms existed. Now, neuromorphic chips are entering the field, mimicking the way the human brain functions. Their relevance to future robotic endeavors is unprecedented, especially as electronic design engineers move through and beyond Industry 4.0.

Here is how to explore real-time controllers and create better robots.

Robotics is a resource-intensive field, especially when depending on antiquated hardware. As corporations aim for greater sustainability, neuromorphic technologies promise better energy efficiency. Studies are proving the value of adjusting mapping algorithms to lower electrical needs.

Implementing these chips at scale could yield substantial power cuts, saving operations countless dollars in waste heat and energy. Some designs are so lightweight that they cut energy usage by 99% while requiring only 180 kilobytes of memory.

The real-time capabilities are also vital. The chips react to event-specific triggers; that’s crucial because facilities managing high demand with complex processes require responsive motor controls. Every interaction is a chance for the chip to learn and adapt to the next situation. This includes recognizing patterns, experiencing sensory stimuli, and altering range of motion.

How neuromorphic chips enable real-time motor control

Neuromorphic models change operations by earning greater trust from human operators. Because of their event-driven processing, they move from task to task with lower latency than conventional microcontrollers. Engineers could also potentially communicate with the technology using brain-computer interfaces to monitor activity or refine algorithms.

Parallelism is also an inherent aspect of these neural networks, allowing robots to process several information streams simultaneously. In production or testing settings, the ability to interpret spatial or sensory cues makes neuromorphic chips superior because their decision-making is more likely to produce humanlike outcomes.

Case studies of the SpiNNaker neural hardware demonstrated how a multicore neuromorphic platform can delegate tasks to different units such as synaptic processing. It validated how well these models achieve load balancing to optimize computational power and output.

Chips with robust parallelism are less likely to produce faulty results because the computations are delegated to separate parts, collating into a more reasonable action. Compared to traditional robotics, this also lowers the risk of system failure because the spiking neurons will not overload the equipment.

Design considerations for engineers

Neuromorphic chips are advantageous, but interoperability concerns may arise with existing motor drivers and sensors. Engineers can also encounter problems as they program the models and toolchains. They may not conventionally operate with spiking neural networks, commonly found in machinery replicating neuron activity. The chips could render some software or coding obsolete or fail to communicate signals effectively.

Experts will need to tinker with signal timing to ensure information processes promptly in response to specific events. They will also need to use tools and data to predict trends to stay ahead of the competition. Companies will be exploring the scalability of neuromorphic equipment and new applications rapidly, so determining various industries’ needs can inform an organization about the features to prioritize.

Some early applications that could expand include:

  • Swarm robotics
  • Autonomous vehicles
  • Cobots
  • Brain-computer interfaces

Engineers should feel inspired and encouraged to continue developing real-time motor controls with neuromorphic solutions. Doing so will craft self-driven, capable machinery that will change everything from construction sites to production lines. The applications will be nearly endless as robots begin to function with humanlike brains.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.

Related Content

The post Real-time motor control for robotics with neuromorphic chips appeared first on EDN.
