Tesla’s wireless-power “dream” gets closer to reality—maybe

You are likely at least slightly aware of the work that famed engineer, scientist, and researcher Nikola Tesla did in the early 1900s in his futile attempt to wirelessly transmit usable power via a 200-foot tower. The project is described extensively on many credible web sites, such as “What became of Nikola Tesla’s wireless dream?” and “Tesla’s Tower at Wardenclyffe” as well as many substantive books.
Since Tesla, there have been numerous other efforts to transmit power without wires using RF (microwave and millimeter waves) and optical wavelengths. Of course, both “bands” are wireless and governed by Maxwell’s equations, but there are very different practical implications.
Proponents of wirelessly transmitted power see it as a power-delivery approach for both stationary and moving targets, including drones and larger aircraft—very ambitious objectives, for sure. We are not talking about near-field charging for devices such as smartphones, nor the “trick” of wirelessly lighting a fluorescent bulb positioned a few feet away from a desktop Tesla coil. We are talking about substantial distances and power.
Most early efforts to beam power were confined to microwave frequencies because of the technologies then available. However, microwaves require relatively large antennas to focus the transmitted beam, so millimeter-wave or optical links are likely to work better.
The latest efforts and progress have been in the optical spectrum. These systems use a fiber-optic-based laser for a tightly confined beam. The “receivers” for optical power transmission are specialized photovoltaic cells optimized to convert a very narrow wavelength of light into electric power with very high efficiency. The reported efficiencies can exceed 70%, more than double that of a typical broader-spectrum solar cell.
In one design from Powerlight Technologies, the beam is contained within a virtual enclosure that senses an object impinging on it—such as a person, bird, or even airborne debris—and triggers the equipment to cut power to the main beam before any damage is done (Figure 1). The system monitors the volume the beam occupies, along with its immediate surroundings, allowing the power link to automatically reestablish itself when the path is once again clear.

Figure 1 This free-space optical-power path link includes a safety “curtain” which cuts off the beam within a millisecond if there is a path interruption. Source: Powerlight Technologies
Although this is nominally listed as a “power” project, as with any power-related technology, there’s a significant amount of analog-focused circuitry and components involved. These provide raw DC power to the laser driver and to the optical-conversion circuits, lasers, overall system management at both ends, and more.
Recent progress raises effectiveness
In May 2025, DARPA’s Persistent Optical Wireless Energy Relay (POWER) program achieved several new records for transmitting power over distance in a series of tests in New Mexico. The team’s POWER Receiver Array Demo (PRAD) recorded more than 800 watts of power delivered during a 30-second transmission from a laser 8.6 kilometers (5.3 miles) away. Over the course of the test campaign, more than a megajoule of energy was transferred.
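A quick sanity check on those figures takes only a few lines of arithmetic, shown here in Python and using nothing beyond the numbers quoted above:

```python
# Back-of-the-envelope check of the PRAD figures quoted above.
power_w = 800          # >800 W reported at the receiver
burst_s = 30           # duration of the record transmission
distance_km = 8.6

burst_energy_j = power_w * burst_s
print(f"One 30-s burst at {distance_km} km: {burst_energy_j / 1e3:.0f} kJ")
# -> 24 kJ per burst, so the >1-MJ campaign total implies on the order of
#    forty such transmissions (or longer and/or lower-power runs).
print(f"Bursts needed for 1 MJ: {1e6 / burst_energy_j:.0f}")
```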
In the never-ending power-versus-distance challenge, the previous greatest reported distance records for an appreciable amount of optical power (>1 microwatt) were 230 watts of average power at 1.7 kilometers for 25 seconds and a lesser (but undisclosed) amount of power at 3.7 kilometers (Figure 2).

Figure 2 The POWER Receiver Array Demo (PRAD) set the records for power and distance for optical power beaming; the graphic shows how it compares to previous notable efforts. Source: DARPA
To achieve the power and distance record, the power receiver array used a new receiver technology designed by Teravec Technologies with a compact aperture for the laser beam to shine into, ensuring that very little light escapes once it has entered the receiver. Inside the receiver, the laser strikes a parabolic mirror that reflects the beam onto dozens of photovoltaic cells to convert the energy back to usable power (Figure 3).

Figure 3 In the optical power-beaming receiver designed for PRAD, the laser enters the center aperture, strikes a parabolic mirror, and reflects onto dozens of photovoltaic cells (left) arranged around the inside of the device to convert the energy back to usable power (right). Source: Teravec Technologies
While it may seem logical to use a mirror or lens to redirect laser beams, the project team instead found that diffractive optics were a better choice because they handle monochromatic light efficiently. The team used additive manufacturing to create the optics and included an integrated cooling system.
Further details on this project are hard to come by, but that’s almost beside the point. The key message is that there has been significant progress. As is usually the case, some of it leverages progress in other disciplines, and much of it is “home made.” Nonetheless, there are significant technical costs, efficiency burdens, and limitations due to atmospheric density—especially at lower altitudes and ground level.
Do you think advances in various wireless-transmission components and technologies will reach the point where this becomes a viable power-delivery approach for broader uses beyond highly specialized ones? Can it be made to work for moving targets as well as stationary ones? Or will this be one of those technologies where success is always “just around the corner”? And finally, is there any relationship between this project and the work on directed laser-energy systems that “shoot” drones out of the sky, which has parallels to the beam generation/emission part?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- What…You’re Using Lasers for Area Heating?
- Forget Tesla coils and check out Marx generators
- Pulsed high-power systems are redefining weapons
- Measuring powerful laser output takes a forceful approach
The post Tesla’s wireless-power “dream” gets closer to reality—maybe appeared first on EDN.
Intel releases more details about Panther Lake AI processor

Intel Corp. unveils new details about its next-generation client processor for AI PCs, the Core Ultra series 3, code-named Panther Lake, which is expected to begin shipping later this year. The company also gives a peek into its Xeon 6+ server processor, code-named Clearwater Forest, expected to launch in the first half of 2026.
Core Ultra series 3 client processor (Source: Intel Corp.)
Panther Lake is the company’s first product built on the advanced Intel 18A semiconductor process, the first 2-nanometer-class node manufactured in the United States. It delivers up to 15% better performance per watt and 30% improved chip density compared with Intel 3, thanks to two key advances—RibbonFET and PowerVia.
The RibbonFET transistor architecture, Intel’s first new transistor architecture in more than a decade, delivers greater scaling and more efficient switching for better performance and energy efficiency. The PowerVia backside power-delivery system improves power flow and signal delivery.
Also contributing to its greater flexibility and scalability is Foveros, Intel’s advanced packaging and 3D chip stacking technology for integrating multiple chiplets into advanced SoCs.
Panther Lake
The Core Ultra series 3 processors offer scalable AI PC performance, targeting a range of consumer and commercial AI PCs, gaming devices, and edge solutions. Intel said the multi-chiplet architecture offers flexibility across form factors, segments, and price points.
The Panther Lake processors offer Lunar Lake-level power efficiency and Arrow Lake-class performance, according to Intel. They offer up to 16 CPU cores, up to 96 GB of LPDDR5, and up to 180 TOPS across the platform. They also feature new P- and E-cores, along with a new GPU and next-generation IPU 7.5 and NPU 5, delivering higher performance and greater efficiency than previous generations.
Key features include up to 16 new performance-cores (P-cores) and efficient-cores (E-cores) delivering more than 50% faster CPU performance versus the previous generation; 30% lower power consumption versus Lunar Lake; and a new Intel Xe3 Arc GPU with up to 12 Xe cores delivering more than 50% faster graphics performance versus the previous generation, along with up to 12 ray tracing units and up to 16 MB of L2 cache.
Panther Lake also features the next-gen NPU 5 with up to 50 trillion operations per second (TOPS), offering more than 40% higher TOPS/area versus Lunar Lake and 3.8× the TOPS of Arrow Lake-H.
The IPU 7.5 offers AI-based noise reduction and local tone mapping. It delivers 16-MP stills and 120 frames per second slow motion and supports up to three concurrent cameras. It also offers a 1.5-W reduction in power with hardware staggered HDR compared to Lunar Lake.
Other features include enhanced power management, up to 12 lanes PCIe 5, integrated Thunderbolt 4, integrated Intel Wi-Fi 7 (R2) and dual Intel Bluetooth Core 6, and LPCAMM support.
Panther Lake will also extend to edge applications including robotics, Intel said. A new Intel Robotics AI software suite and reference board is available with AI capabilities to develop robots using Panther Lake for both controls and AI/perception. The suite includes vision libraries, real-time control frameworks, AI inference engines, orchestration-ready modules, and hardware-aware tuning.
Panther Lake will begin ramping high-volume production this year, with the first SKU scheduled to ship before the end of the year. General market availability will start in January 2026.
Recommended: Intel’s confidence shows as it readies new processors on 18A
Clearwater Forest
Intel also provided a sneak peek into the Xeon 6+, its first 18A-based server processor. It is also touted as the company’s most efficient server processor. Both Panther Lake and Clearwater Forest, built on Intel 18A, are being manufactured at Intel’s new Fab 52, the company’s fifth high-volume fab at its Ocotillo campus in Chandler, Arizona.
Xeon 6+ server processor (Source: Intel Corp.)
Clearwater Forest is Intel’s next-generation E-core processor, featuring up to 288 E-cores and a 17% increase in instructions per cycle (IPC) over the previous generation. It is expected to offer significant improvements in density, throughput, and power efficiency, and Intel plans to launch the Xeon 6+ in the first half of 2026. This server processor series targets hyperscale data centers, cloud providers, and telcos.
The post Intel releases more details about Panther Lake AI processor appeared first on EDN.
Broadcom debuts 102.4-Tbits/s CPO Ethernet switch

Broadcom Inc. launches the Tomahawk 6 – Davisson (TH6-Davisson), the company’s third-generation co-packaged optics (CPO) Ethernet switch, delivering the bandwidth, efficiency, and reliability needed for next-generation AI networks. The TH6-Davisson advances power efficiency and traffic stability to provide the higher optical-interconnect performance required to scale up and scale out AI clusters.
The trend toward CPO in data centers is driven by the need to increase bandwidth and lower energy consumption. With the TH6-Davisson, Broadcom claims the industry’s first 102.4 Tbits/s of optically enabled switching capacity, doubling the bandwidth of any CPO switch available today. This sets a new benchmark for data-center performance, Broadcom said.
(Source: Broadcom)
Designed for power efficiency, the TH6-Davisson heterogeneously integrates TSMC Compact Universal Photonic Engine (TSMC COUPE) technology-based optical engines with advanced substrate-level multi-chip packaging. This is reported to dramatically reduce the need for signal conditioning and minimize trace loss and reflections, resulting in a 70% reduction in optical interconnect power consumption. This is more than 3.5× lower than traditional pluggable optics, delivering a significant improvement in energy efficiency for hyperscale and AI data centers, Broadcom said.
In addition to power efficiency, the TH6-Davisson Ethernet switch addresses link stability, which has become a critical bottleneck as AI training jobs scale, the company added, with even minor interruptions causing losses in XPU and GPU utilization.
The TH6-Davisson solves this challenge by directly integrating optical engines onto a common package with the Ethernet switch. The integration eliminates many of the sources of manufacturing and test variability inherent in pluggable transceivers, resulting in significantly improved link flap performance and higher cluster reliability, according to Broadcom.
In addition, operating at 200 Gbits/s per channel, TH6-Davisson doubles the line rate and overall bandwidth of Broadcom’s second-generation TH5-Bailly CPO solution. It seamlessly interconnects with DR-based transceivers as well as NPO and CPO optical interconnects running at 200 Gbits/s per channel, enabling connectivity with advanced NICs, XPUs, and fabric switches.
The TH6-Davisson BCM78919 supports a scale-up cluster size of 512 XPUs and up to 100,000+ XPUs in two-tier networks at 200 Gbits/s per link. Other features include 16 × 6.4 Tbits/s Davisson DR optical engines and field-replaceable ELSFP laser modules.
Broadcom is now developing its fourth-generation CPO solution. The new platform will double per-channel bandwidth to 400 Gbits/s and deliver higher levels of energy efficiency.
The TH6-Davisson BCM78919 is IEEE 802.3 compliant and interoperable with existing 400G and 800G standards. Broadcom is currently sampling the Ethernet switch to its early access customers and partners.
The post Broadcom debuts 102.4-Tbits/s CPO Ethernet switch appeared first on EDN.
Analog frequency doublers

High school trigonometry combined with four-quadrant multipliers can be exploited to yield sinusoidal frequency doublers. Nothing non-linear is involved, which means no possibly stringent filtering requirements.
Starting with some sinusoidal signal and needing to derive new sinusoidal signals at multiples of the original sinusoidal frequency, a little trigonometry and four-quadrant multipliers can be useful. Consider the following SPICE simulation in Figure 1.

Figure 1 Two analog frequency doublers, A1 + U1 and A2 + U2, in cascade to form a frequency quadrupler.
The above sketch shows the pair A1 and U1 configured as a frequency doubler from V1 to V2, and the pair A2 and U2 configured as another frequency doubler from V2 to V3. Together, the two of them form a frequency quadrupler from V1 to V3. With more circuits, you can make an octupler and so on within the bandwidth limits of the active semiconductors, of course.
Frequency doubler operation is based on these trigonometric identities:
sin² (x) = 0.5 * ( 1 – cos (2x) ) and cos² (x) = 0.5 * ( 1 + cos (2x) )
sin² (x) = 0.5 – 0.5 * cos (2x) and cos² (x) = 0.5 + 0.5* cos (2x)
Take your pick; both equations yield a DC offset plus a sinusoid at twice the frequency you started with. Do a DC block as with C1 and R1 above, and you are left with a doubled-frequency sinusoid at half the original amplitude. Follow that up with a times-two gain stage, and you have made a sinusoid at twice the original frequency and at the same amplitude with which you started.
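The math is easy to verify numerically. The short Python sketch below is a stand-in for the SPICE run, not a model of the A1/U1 hardware: it squares a unit-amplitude sine, removes the DC term just as C1 and R1 do, applies the times-two gain, and checks the result against a cosine at twice the frequency.

```python
import numpy as np

f0 = 1e3                              # input frequency, Hz
t = np.linspace(0, 2e-3, 2000, endpoint=False)
vin = np.sin(2 * np.pi * f0 * t)      # unit-amplitude input sinusoid

squared = vin ** 2                    # four-quadrant multiplier: sin^2(x)
ac_only = squared - squared.mean()    # DC block (C1/R1) removes the 0.5 offset
vout = 2 * ac_only                    # times-two gain stage restores amplitude

# Compare against the identity: sin^2(x) = 0.5 - 0.5*cos(2x)
expected = -np.cos(2 * 2 * np.pi * f0 * t)
print("max error:", np.max(np.abs(vout - expected)))   # ~1e-15, numerical noise
```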
This way of doing things takes less stuff than having to do some non-linear process on the input sinusoid to generate a harmonic comb and then having to filter out everything except the one frequency you want.
Although there might actually be some other harmonics at each op-amp output, depending on how non-ideal the multiplier and op-amp might be, this process does not nominally generate other unwanted harmonics. Such harmonics as might incidentally arise won’t require a high-performance filter for their removal.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Frequency doubler with 50 percent duty cycle
- A 50 MHz 50:50-square wave output frequency doubler/quadrupler
- Frequency doubler operates on triangle wave
- Fast(er) frequency doubler with square wave output
- Triangle waves drive simple frequency doubler
The post Analog frequency doublers appeared first on EDN.
Dual-input inductive sensor simplifies design

Melexis introduces the MLX90514, a dual-input inductive sensor IC that simultaneously processes signals from two sets of coils to compute differential or vernier angles on-chip. The inductive sensor targets automotive applications, such as steering torque feedback, steering angle sensing, and steering rack motor control.
(Source: Melexis)
Traditionally, designers have combined two single-channel ICs or used magnetic sensors for many applications, Melexis said. However, with the move to electrification, autonomy, and advanced driver-assistance systems (ADAS), vehicle control systems have become more complex, particularly in areas such as steering torque feedback and steering rack motor control (including steer-by-wire implementations), which need dual-channel position sensing to deliver accurate torque and angle measurements.
By integrating differential and vernier angle calculations on-chip, the MLX90514 reduces processing demands on the host system, enabling smaller and more streamlined sensor designs. Computing complex position information (such as differential or vernier angles) directly at the sensor also eliminates the need for multiple ICs, which reduces design complexity and component count.
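Melexis has not published its on-chip algorithm, but the vernier principle behind such calculations is easy to illustrate. The Python sketch below, using hypothetical period counts of 5 and 4, shows how two periodic angle readings can be combined into one absolute angle; it is only an illustration of the principle, not the MLX90514’s implementation.

```python
import math

def vernier_angle(theta_a_deg, theta_b_deg, n_a=5, n_b=4, full_range_deg=360.0):
    """Recover an absolute angle from two periodic sensor readings.

    theta_a_deg, theta_b_deg: electrical angles (0..360) from coil sets with
    n_a and n_b periods per mechanical revolution (hypothetical values here).
    """
    # With n_a - n_b == 1, the wrapped difference of the two electrical angles
    # advances exactly once over the full mechanical range: a coarse estimate.
    diff = (theta_a_deg - theta_b_deg) % 360.0
    coarse = diff / 360.0 * full_range_deg
    # Use the coarse value to pick the correct period of the fine channel,
    # then refine with the higher-resolution reading.
    period = full_range_deg / n_a
    k = round((coarse - theta_a_deg / n_a) / period)
    return (k * period + theta_a_deg / n_a) % full_range_deg

# Example: a mechanical angle of 123.4 deg seen by 5- and 4-period coil sets.
mech = 123.4
a = (mech * 5) % 360.0
b = (mech * 4) % 360.0
print(round(vernier_angle(a, b), 3))   # -> 123.4
```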
The MLX90514 is Melexis’ first dual inductive application-specific standard product (ASSP). It offers several interface options—including SENT, SPC, and PWM for a standalone module, and SPI for embedded modules—with integrated on-chip processing. The SENT/SPC output accommodates up to a 24-bit payload, enabling high-fidelity transmission of two synchronized 12-bit channels, which is required for high-accuracy torque and angle sensing.
Key features include zero-latency synchronized dual-channel operation, external pulse-width-modulation (PWM) signal integration that allows reading PWM signals from external sources, and the capability to handle small inductive signals, which supports compact coil designs and tighter printed-circuit-board layouts for smaller sensing modules.
The MLX90514 enables ASIL-D-compliant sensing systems, as a Safety Element out of Context (SEooC), for automotive steering torque and angle applications. The inductive interface sensor is available now.
The post Dual-input inductive sensor simplifies design appeared first on EDN.
Reference designs advance AI factories

Schneider Electric offers two reference designs co-engineered with NVIDIA to accelerate deployment of AI-ready infrastructure for AI factories. The controls reference design uses a plug-and-play MQTT architecture to bridge OT and IT systems, enabling operators to access and act on data from every layer.
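Schneider has not published its topic scheme, but the general shape of an MQTT bridge between OT equipment and IT consumers is simple to sketch. The snippet below uses the common paho-mqtt client with made-up broker address and topic names (they are assumptions, not Schneider’s actual architecture) to publish a cooling-loop telemetry sample that any management layer can subscribe to.

```python
# Minimal sketch of an OT-to-IT MQTT publish, in the spirit described above.
# Broker address and topic hierarchy are illustrative assumptions only.
# Requires the paho-mqtt package.
import json
import time

import paho.mqtt.publish as publish

BROKER = "broker.example.local"          # hypothetical plant-side broker

def publish_cooling_telemetry(loop_id, supply_temp_c, flow_lpm):
    """Publish one liquid-cooling telemetry sample as retained JSON."""
    payload = json.dumps({
        "loop": loop_id,
        "supply_temp_c": supply_temp_c,
        "flow_lpm": flow_lpm,
        "ts": time.time(),
    })
    publish.single(
        topic=f"site/ai-factory/cooling/{loop_id}/telemetry",
        payload=payload,
        hostname=BROKER,
        qos=1,
        retain=True,        # late IT-side subscribers still see the last value
    )

# publish_cooling_telemetry("cdu-01", supply_temp_c=32.5, flow_lpm=180.0)
```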

The first reference design integrates power management and liquid cooling controls with NVIDIA Mission Control software, enabling smooth orchestration of AI clusters. It also supports Schneider’s data-center reference designs for NVIDIA Grace Blackwell systems, giving operators precise control over power and cooling to meet the demands of accelerated AI workloads.
The second reference design supports AI factories running NVIDIA GB300 NVL72 systems at up to 142 kW per rack. It delivers a complete blueprint for facility power, cooling, IT space, and lifecycle software, compatible with both ANSI and IEC standards. Using Schneider’s validated models and digital twins, operators can plan high-density AI data halls, optimize designs, and ensure efficiency, reliability, and scalability for NVIDIA Blackwell Ultra systems.
For more information about these new reference designs, as well as other data-center reference designs developed with NVIDIA, click here.
The post Reference designs advance AI factories appeared first on EDN.
Adaptable gate driver powers 48-V automotive systems

ST’s L98GD8 multichannel gate driver offers flexible output configurations in 48-V automotive power systems. Its eight independent, configurable outputs can drive MOSFETs as individual power switches or as high- and low-side pairs in up to two H-bridges for DC motor control. The device also supports peak-and-hold operation for electrically actuated valves.

Programmable gate current helps minimize MOSFET switching noise to meet EMC requirements. The driver operates from a 3.8-V to 58-V battery supply and a 4.5-V to 5.5-V VDD supply. Its I/O is compatible with both 3.3-V and 5-V logic levels.
To ensure safety and reliability, each output provides comprehensive diagnostics, including short-to-battery, short-to-ground, and open-load conditions. Output status is continuously monitored through dedicated SPI registers. The L98GD8 features fast overcurrent shutdown with dual-redundant failsafe pins, battery undervoltage detection, and an ADC for monitoring battery voltage and die temperature. Additional safety functions include Built-In Self-Test (BIST), Hardware Self-Check (HWSC), and a Communication Check (CC) watchdog timer.
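ST’s datasheet defines the actual register map, so treat the following as a shape-of-the-code sketch only: it shows how a host might poll a per-channel status register over SPI (here via the Linux spidev Python binding) for the short-to-battery, short-to-ground, and open-load flags described above. The register addresses, frame format, and bit positions are invented placeholders, not the real L98GD8 protocol.

```python
# Illustrative only: the addresses, frame layout, and bit masks below are
# hypothetical placeholders, NOT the real L98GD8 SPI map (see ST's datasheet).
import spidev

READ = 0x80                 # hypothetical read flag in the command byte
STATUS_CH_BASE = 0x10       # hypothetical base address of per-channel status

FAULT_BITS = {
    0x01: "short-to-ground",
    0x02: "short-to-battery",
    0x04: "open-load",
}

spi = spidev.SpiDev()
spi.open(0, 0)              # bus 0, chip-select 0
spi.max_speed_hz = 1_000_000
spi.mode = 1

def read_channel_status(channel):
    """Poll one output channel's diagnostic register and decode fault flags."""
    cmd = READ | (STATUS_CH_BASE + channel)
    resp = spi.xfer2([cmd, 0x00])          # 16-bit frame: command, dummy byte
    status = resp[1]
    return [name for bit, name in FAULT_BITS.items() if status & bit]

for ch in range(8):
    faults = read_channel_status(ch)
    print(f"OUT{ch}: {'OK' if not faults else ', '.join(faults)}")
```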
The L98GD8 driver is available now, with prices starting at $3.94 each in lots of 1000 units.
The post Adaptable gate driver powers 48-V automotive systems appeared first on EDN.
Thales introduces quantum-safe smartcard

According to Thales, its MultiApp 5.2 Premium PQC is Europe’s first quantum-resistant smartcard to receive high-level security certification from ANSSI (the French National Cybersecurity Agency). Certified to the EAL6+ level under the Common Criteria framework, the smartcard also uses digital signature algorithms standardized by NIST in the U.S.

The MultiApp 5.2 Premium PQC leverages post-quantum cryptography to protect digital identity data in ID cards, health cards, and driving licenses. This new generation of cryptographic signatures is designed to withstand the vast computational power of quantum computers, both today and in the future.
“This first certification for a solution incorporating post-quantum cryptography reflects ANSSI’s commitment to supporting innovation, while upholding the highest cybersecurity standards,” said Franck Sadmi, Head of National Certification Center, French Cybersecurity Agency (ANSSI). “The joint work of Thales, CEA-Leti IT Security Evaluation Facility, and ANSSI is a strong signal that Europe is ready to lead the way in post-quantum security, enabling organizations and governments to deploy solutions that anticipate future risks, rather than waiting for quantum computers to become mainstream.”
The post Thales introduces quantum-safe smartcard appeared first on EDN.
HDR sensor improves automotive cabin monitoring

Joining Omnivision’s Nyxel NIR line, the OX05C1S global-shutter HDR image sensor targets in-cabin driver and occupant monitoring systems (DMS and OMS). The 5-Mpixel sensor, with 2.2-µm backside-illuminated pixels, captures clear images of the entire cabin, enhancing algorithm accuracy even under challenging high-brightness conditions.

The OX05C1S leverages Nyxel technology to achieve high quantum efficiency at the 940-nm NIR wavelength, improving DMS and OMS performance in low-light environments. On-chip RGB-IR separation reduces the need for a dedicated image signal processor and backend processing.
With package dimensions of 6.61×5.34 mm, the OX05C1S is 30% smaller than the previous-generation OX05B (7.94×6.34 mm), providing greater mechanical design flexibility for in-cabin camera integration. Lens compatibility with the OX05B enables reuse of existing optics, simplifying system upgrades and reducing overall design cost.
The OX05C1S sensor is offered in both color filter array (RGB-IR) and monochrome configurations. Samples are available now, with mass production scheduled for 2026.
The post HDR sensor improves automotive cabin monitoring appeared first on EDN.
Vishay launches extensive line of inductors

Expanding its line of inductors and frequency control devices (FCDs), Vishay has added more than 2000 new SKUs across nearly 100 series. The broader offering simplifies sourcing and supports more applications with wider inductance and voltage ranges, improved noise suppression, and additional sizes for compact PCB layouts.

Recent additions include wireless charging inductors, common-mode chokes, high-current ferrite impedance beads, and TLVR inductors, along with nearly 15 new FCD products. To meet the demand for diversified manufacturing, the company is expanding production in Asia, Mexico, and the Dominican Republic. IHLP series power inductors are now shipping from the company’s Gomez Palacio, Durango, Mexico facility.
Product rollouts will continue through 2025, with additional series scheduled to launch in the coming months. In total, Vishay expects to surpass 3000 new SKUs of inductors and FCDs, supporting design activity across industrial, telecom, and consumer markets.
The post Vishay launches extensive line of inductors appeared first on EDN.
Watchdog versus the truck

One of the first jobs I had when I got out of college was for a company that designed and manufactured monitors for large trucks, the kind used in mining operations. This company was a small entity with around 25 employees and a couple of engineers. The main product was a monitor that sat on the dash of these trucks and watched over things like oil pressure, coolant temperature and level, hydraulic pressure, etc. Variations of this monitor had 4, 5, or 6 indicator lights that lit if the monitored point went out of spec. An alarm also sounded, and the truck was shut down by a relay connection.
Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale
Another engineer and I decided it was time to bring this analog monitor into the microprocessor era. The idea was to monitor the same functions, but with only one indicator light and an LCD showing the issue. Along with the alarming function, we could also add more information on the LCD, like temperature and pressure readings. It wasn’t a very complex design. Micros at the time didn’t typically have watchdog circuits, so we added one of the few external watchdogs then available. Our concern was that some transient would throw the micro off course, and we wanted the watchdog to reset the monitor in that case. The 24-V input voltage and all sensor inputs had some level of transient suppression (but, after several decades, I have forgotten what the circuit consisted of).
We completed a design, and it worked very well on the bench. Next, we hit it with various transients that we could generate. Not having access to any transient test equipment, we had to invent some methods to test this. Worse, we had no specs or general information on what kind of transients these trucks can experience, but we ourselves were satisfied that it was ready for a beta test.
After testing, we sent the monitor to a local mining company to have it installed on a working truck. We also sent a harness system with leads long enough to get to the sensors located around the truck. The company called us after they got the monitor mounted on the dash and all the sensors wired to the harness, so a visit was scheduled to test the monitor on a running truck.
I need to stop at this point to describe the truck. It was a 175-ton dump truck. There are bigger trucks now, but it was very large for the time. Picture tires 10 feet high and a 1600 HP diesel/electric generator system powering electric motors turning each wheel. The driver’s cab was about 18 feet off the ground and was reached using an attached ladder. The driver and the two of us climbed this ladder to begin the test.
To add to the pressure, there were a dozen or so managers and workers on the ground watching the tests. The mining company managers gave the go-ahead to begin. The driver started the truck (quite a roar)—the monitor fired up, and the LCD began showing the status of the monitored points… great!
After a few seconds, the truck shut down… not great. We looked at each other—a few seconds later, the truck roared alive again—monitor working—a couple more seconds, the truck shuts down—a few seconds later, the truck restarts, etc., etc., etc.
After a half dozen of these cycles, we told the driver to shut the truck down. We couldn’t tie up the million-dollar truck any longer, so we could not do any more investigation. We packed up our equipment and left with our heads down.
Back at the shop, we talked through what went on. We concluded that the monitor’s micro was disrupted by an unknown transient. The watchdog then discovered the code running amok and tripped the shutdown relay. The watchdog then rebooted the micro, resetting the relay, which allowed the truck to restart itself.
One of the major design issues was that some sensors required tens of feet of wire and were unshielded single leads (most sensors used chassis ground). These single wires (or should I call them antennas) could have been close to various relays and electric actuators on the truck, or worse yet, near the cabling used for the generator-to-motor system. Also, the watchdog, which did discover the issue, did not fulfill its function—it allowed the truck to restart.
This is where “Tales from the Cube” articles tell us how they fixed the issue by adding a larger resistor, fixing a bad solder joint, or reworking a reversed diode. In this tale, there is no happy ending. The boss didn’t want to continue with the project, and I’m sure the customer was not impressed. The project was cancelled. So why did I write this up?
I thought it was a good example of what can happen on engineering projects—sometimes they fail (moving from the lab to the field often exposes design issues), and sometimes you don’t get a chance to fix the design. Young engineers should understand this and not be disenchanted when it happens. Don’t let it get you down. Remember, we learn a lot from failure.
Shortly after this project, we got the opportunity to design a full, micro-based dashboard for a large articulated truck. One of the things we designed was a fiber-optic data-transfer system to the back portion of the truck. This minimized the length of the sensor wires that had provided antennas for transients. In this design, the system worked flawlessly.
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- When a ring isn’t really a ring
- In the days of old, when engineers were bold
- Software sings the cold-weather blues
- Going against the grain dust
The post Watchdog versus the truck appeared first on EDN.
PWM nonlinearity that software can’t fix

There’s been interest recently here in the land of Design Ideas (DIs) in a family of simple interface circuits for pulse width modulation (PWM) control of generic voltage regulators (both linear and switching). Members of the family rely on the regulator’s internal voltage reference and a discrete FET connected in series with the regulator’s programming voltage divider.
PWM uses the FET as a switch to modulate the bottom resistor (R1) of the divider, so that the 0 to 100% PWM duty factor (DF) varies the time-averaged effective conductance of R1 from 0 to 100% of its nominal value. This variation programs the regulator output from Vo = Vs (its feedback pin reference voltage) at DF = 0 to Vo = Vs(R2/R1 + 1) at DF = 100%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Some of these circuits establish a linear functionality between DF and Vo. Figure 1 is an example of that genre as described in “PWM buck regulator interface generalized design equations.”
Figure 1 PWM programs Vo linearly where Vo = Vs(R2/(R1/DF) + 1).
For others, like Figure 2’s concept designed by frequent contributor Christopher Paul and explained in “Improve PWM controller-induced ripple in voltage regulators”…it’s nonlinear…

Figure 2 PWM programs Vo nonlinearly where Vo = Vs(R2/(R1a/DF + R1b + R1c) + 1).
Note that for clarity, Figure 2 does not include many exciting details of Paul’s innovative design. See his article at the link for the whole story.
The nonlinearity problem
However, to explore the implications of Figure 2’s nonlinearity a bit further, take the example circuit values provided in Paul’s DI:
R1a = 2490 Ω
R1b = 2490 Ω
R1c = 4990 Ω
Vs = 0.800 V
R2 = 53600 Ω
These values, assuming 8-bit PWM resolution, produce the response curves shown in Figure 3.

Figure 3 The 8-bit PWM setting versus DF = X/255. The left axis (blue curve) is Vo = 0.8(53600/(2490/(X/255) + 7480) + 1). The right axis (red curve) is Vo volts increment per PWM least significant bit (LSBit) increment.
Paul says of this nonlinear response: “Although the output voltage is no longer a linear function of the PWM duty cycle, a simple software-based lookup table renders this a mere inconvenience. (Yup, ‘we can fix it in software!’)” Of course, he’s absolutely right: For any chosen Vo, a corresponding DF can be easily calculated and stored in a small (256-entry) lookup table.
However, translating from the computed DF to an integer 8-bit PWM code is a different matter. Figure 3’s increment-vs-increment red curve provides an important caveat to Paul’s otherwise accurate statement.
If the conversion from 8-bit 0 to 255 code to the 0.8 V to 5.1 V, or 4.3V Vo span, were linear, then each LSBit increment would bump Vo by a constant 15.8 mV (= 4.3 V/256). But it isn’t.
And, as Figure 3’s red curve shows, due to the strong nonlinearity of the conversion, the 8-bit resolution criterion is exceeded for all PWM codes < 75 and Vo < 3.77 V = 74% of full scale.
And it gets worse: For Vo values down near Vs = 0.8 V, the LSBit increment soars to 67 mV (= 4.3 V/64). This, therefore, equates to a resolution of not 8 bits, but barely 6.
The fix
Unfortunately, there’s very little any software fix can do about that, which might make the nonlinearity more than just an “inconvenience” for some applications. So what could fix it?
The nonlinearity basically arises from the fact that only a fraction (R1a) of the total R1abc resistance is modulated by PWM. As the PWM DF changes, that fraction of the effective divider resistance changes, which in turn changes the rate of change of Vo versus DF. In fact, it changes it by quite a lot.
Getting to specifics, in the example from Paul’s DI, the chosen values make the modulated resistance R1a only 25% of the total R1 resistance at DF = 100%, with this proportion increasing to 100% as DF goes to 0%. This is obviously a big change, concentrated toward lower DF.
A clue to a possible (at least partial) fix is found back in the observation that the nonlinearity and resolution loss originally arose from the fact that only a small fraction (25% R1a) of the total R1abc resistance is modulated by PWM. So, perhaps a bigger R1a fraction of R1abc could recover some of the lost resolution.
As an experiment, I changed Paul’s R1 resistor values to the following.
R1a = 7960 Ω
R1b = 1000 Ω
R1c = 1000 Ω
This makes R1a now 80% of R1abc instead of only 25%. Figure 4 illustrates the effect on the response curves. 
Figure 4 The impact of making R1a 80% of R1abc. The left axis (blue curve) is Vo = 0.8(53600/(7960/(X/255) + 2000) + 1). The right axis (red curve) is Vo volts increment per PWM LSBit increment.
Figure 4’s blue Vo-versus-PWM curve is obviously still nonlinear, but significantly less so. Perhaps the more important improvement, though, is to the red curve: where Figure 3’s resolution eroded at the left end of the curve to 67 mV per PWM LSBit (barely 6 bits), Figure 4 maxes out at 21 mV, or 7.7 bits.
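Those resolution figures are easy to reproduce. The Python sketch below evaluates the Figure 3 and Figure 4 transfer functions (using the resistor values listed earlier) over all 256 PWM codes and reports the worst-case step per LSBit, expressed as an equivalent number of bits via log2(span/worst step).

```python
import math

VS, R2 = 0.8, 53600.0

def vo(code, r1a, r1bc):
    """Vo for an 8-bit PWM code, with R1a modulated and R1b + R1c fixed (ohms)."""
    if code == 0:
        return VS                     # DF = 0: output sits at the reference
    df = code / 255.0
    return VS * (R2 / (r1a / df + r1bc) + 1.0)

for label, r1a, r1bc in (("Figure 3 (R1a = 25% of R1abc)", 2490.0, 7480.0),
                         ("Figure 4 (R1a = 80% of R1abc)", 7960.0, 2000.0)):
    v = [vo(c, r1a, r1bc) for c in range(256)]
    steps = [b - a for a, b in zip(v, v[1:])]
    span = v[-1] - v[0]
    worst = max(steps)
    print(f"{label}: span {span:.2f} V, worst step {worst * 1e3:.1f} mV, "
          f"~{math.log2(span / worst):.1f} effective bits")
# Prints roughly 67 mV / ~6 bits for the first resistor set and
# 21 mV / ~7.7 bits for the second, matching the text.
```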
Is this a “fix?” Well, obviously, 7.7 bits is better than 6 bits, but it’s still not 8 bits, so resolution recovery isn’t perfect. Also, my arbitrary shuffling of R1 ratios is almost certain to adversely impact the spectacular ripple attenuation cited in Christopher Paul’s original article. Mid-frequency loop gain may also suffer from the heavier loading on C2 and R2 imposed by the reduced R1c value. This could lead to a possible deterioration of the transient response and noise rejection. Perhaps C2 could be increased to moderate that effect.
Still, it would be fair to call it a start at a fix for nonlinearity that lay beyond the reach of software.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Improve PWM controller-induced ripple in voltage regulators
- PWM buck regulator interface generalized design equations
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Brute force mitigation of PWM Vdd and ground “saturation” errors
The post PWM nonlinearity that software can’t fix appeared first on EDN.
A closer look at isolated comparators

How do isolated comparators differ from standard comparators? What are their primary applications in analog and power electronics? Here is a brief review of this critical building block and what design engineers need to understand about its application. The article also presents a few popular isolated comparators and what makes them suitable for specific designs.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- Designing with comparators
- Comparators Rival Respect of Sibling Amps
- Understanding the analog voltage comparator
- Understanding precision comparator applications
- Hysteresis–Understanding more about the analog voltage comparator
The post A closer look at isolated comparators appeared first on EDN.
Dropping a PRTD into a thermistor slot—impossible?

Up front: some background. The air-temperature sensor attached to my (home-brew) rain gauge became flaky. Short-term solution: fix it (done). Longer-term goal: improve it (read on).
That sensor is a standard Vishay NTC (negative temperature coefficient) thermistor: 10k at 25°C and with a beta value of 3977. In conjunction with a load resistor, it feeds a PIC microcontroller (MCU), which samples the resulting voltage (8 bits) for radio-linking back to base for processing and display. Figure 1 shows the utterly conventional circuit together with its response to temperature.
Figure 1 A basic thermistor circuit, together with its calculated response.
The load resistor’s value of 15699 Ω may seem strange, but that is the thermistor’s resistance at 15°C, the mid-point of the desired -9 to +40°C measuring range. Around every 30 seconds, the PIC strobes it for just long enough for the reading to settle.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The plot shows the calculated response together with a straight line running through the two actual calibration points of 0°C (melting, crushed ice) and 30°C (comparison with a known-good thermometer). That response was calculated using the extended Steinhart–Hart equations rather than the less accurate exponential approximation. Steinhart and Hart (S–H) are to NTC thermistors as Callendar and Van Dusen are to platinum resistance temperature detectors (PRTDs), modifying the exponential curve just as Callendar–Van Dusen (CVD) tweaks an otherwise straight line.
The relevant Wikipedia article is, of course, informative. Still, a brief and useful guide to the S–H equations, complete with all the necessary constants, can be found on page 4 of Vishay’s relevant datasheet. Curiously, their tables of resistance versus temperature show truncated rather than rounded values, so they quote our device’s R15 as 15698 ohms rather than 15699. The S–H figure is 15698.76639545805…, give or take a few pico-ohms.
You’ll notice that Figure 1’s plot is upside down! That is deliberate, so a higher temperature shows a higher output, though the voltage actually falls. I think that’s more intuitive; you may disagree.
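For anyone wanting to reproduce Figure 1’s curve, the divider arithmetic takes only a few lines. The Python sketch below uses the simpler beta approximation rather than the extended S–H fit the text rightly prefers (only because both beta-model parameters, 10k at 25°C and beta = 3977, are quoted above), and it assumes the sampled voltage is taken across the thermistor, consistent with the note that the voltage falls as the temperature rises.

```python
import math

R25, BETA = 10_000.0, 3977.0     # thermistor parameters given above
R_LOAD = 15_699.0                # load resistor = thermistor's R at 15 degC

def r_ntc(t_c):
    """NTC resistance from the beta approximation (the S-H fit is more accurate;
    beta is used here only because both of its parameters appear in the text)."""
    return R25 * math.exp(BETA * (1.0 / (t_c + 273.15) - 1.0 / 298.15))

def adc_code(t_c):
    """8-bit ratiometric reading, assuming the PIC samples the voltage across
    the thermistor, which falls as temperature rises (hence the inverted plot)."""
    ratio = r_ntc(t_c) / (r_ntc(t_c) + R_LOAD)
    return round(255 * ratio)

for t in (-9, 0, 15, 30, 40):
    code = adc_code(t)
    print(f"{t:4d} degC: R = {r_ntc(t):7.0f} ohm, code = {code:3d}, "
          f"plotted (inverted) = {255 - code:3d}")
```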
Matching an RTD to an NTC
That straight line, derived from the S–H values at 0 and 30°C, is the key to this idea. Making the PRTD generate a signal that matches it will avoid any major changes to the processing code, especially the calibration points, and it will also provide a much wider range with greater accuracy than an NTC. Because the voltage from the thermistor circuit is ratiometric, the PRTD must output a level that is a proportion of the supply.
To do that, we amplify the voltage developed across the PRTD, compensate for the CVD departure from linearity, and add an offset. The simplest circuit that can do all these is shown in Figure 2a.

Figure 2 Probably the simplest circuit (2a) that can give an output from a PRTD to match a thermistor’s response, with a slightly better variant (2b). These are both flawed, and the component values are not optimized. They are to show the principle, not the practice.
That simplicity leads to complications, because pretty much every component in Figure 2a interacts with every other one. It’s bad enough to design, even with ideal (simulated) parts, but final calibration could require hours of iterative frustration. Buffering the offset voltage, as shown in Figure 2b, helps, but that extra op-amp can be put to better use.
A practical circuit
If we split the circuit into two, life becomes easier. Figure 3 shows how.

Figure 3 The final, workable circuit. Amplification and offsetting are now separate, making calibration much easier.
The processor turns Q1 on to deliver power. (The previously active-high GPIO pin powering the thermistor must now be active-low to drive Q1’s gate, and that was the only code change needed.) The FDC604 has a low RDS(ON) of a few tens of milliohms, so it drops only 100 µV or so, which is insignificant, even if the measuring ADC’s reference is the Vdd rail. (Offsets within the MCU itself will probably be greater.) Because the circuit is only active for a millisecond every half minute or so, self-heating of the RTD can be ignored. Consumption was about 3 mA at 5 V or 2 mA at 3.3 V.
R1 feeds current through the RTD, producing a voltage that is amplified by A1a, whose gain can be trimmed by R5. R6 feeds back into the RTD and R1 to compensate for both CVD and the varying drive to the RTD as its resistance changes. Its value is fairly critical: 33k works well enough for our purposes, but 31k95—33k||1M0—is almost perfect, with a predicted error of way under 1 millidegree over a 100°C span—theoretically—so we’ll use that. Obviously, this is ridiculous overkill with 8-bit output sampling, but if a single extra resistor can eliminate one source of errors, it’s worth going for.
A1b now amplifies the signal further (and inverts it) and applies a trimmable offset. Its output as a fraction of the supply voltage is now directly proportional to the PRTD’s temperature. Note that the gain of this stage is preset: R7 and R8 should be selected so that their ratio is as close as possible to 3.9, though their absolute values are not critical. The result is shown in Figure 4.

Figure 4 Plotting the output against the RTD’s resistance now gives a result that is almost indistinguishable from the straight-line target, the (idealized) error corresponding to much less than 1 millidegree. This shows the performance limit for this circuit; don’t expect to match it in real life.
Modeling and plotting
A simple program (Python plus Pygame) to plot the circuit’s operation at different scales made it easy to see the effects of changing both R6 and A1a’s gain, with the error curve tilting (gain error) and bending (compensation error). That curve needs to be as straight and flat as possible.
Modeling the first section needed iteration, starting with a (notional) unit voltage feeding R1 and ~0.7 driving R6. Calculating the voltage across the PRTD and amplifying that gave the stage’s output, ready to feed back into R6 for recalculating V_RTD. (Repeating until successive results matched to eight significant figures took no more than ten iterations.) The section representing A1b was trivial: take A1a’s output and multiply by 3.9 while subtracting the offset.
As a cross-check, I put the derived values into LTspice and got almost the same results. The slight differences are probably because even simulated op-amp gain stages have finite performance, unlike multiplication signs.
The program also generated Table 1, which may prove useful. It shows the resistance of the PRTD at various temperatures (centered on 15°C) together with the output voltage referred to Vdd and given as a proportion of it. That output is also shown, scaled from 0–255 in both decimal and hex.
The long numbers the program generated have been rounded to more reasonable lengths, which, deliberately, are still more accurate than most test kits can resolve. Too many digits may be useful; too few never are.

Table 1 The PRTD’s resistance and Figure 3’s output calculated against temperature, centered on 15°C. The output is shown as decimals, both raw and rounded, and hex.
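Table 1’s resistance column follows the Callendar–Van Dusen equation. As a cross-check, the sketch below assumes a standard IEC 60751 Pt100 element (the later bench test substitutes a 100-Ω resistor for the PRTD, which points to that class of device) and shows how little the resistance departs from a straight line drawn through the 0°C and 30°C calibration points; that small bow is what R6 is there to absorb.

```python
# Standard IEC 60751 Callendar-Van Dusen coefficients; a Pt100 element is
# assumed here, since the text never names the exact PRTD.
R0, A, B = 100.0, 3.9083e-3, -5.775e-7    # ohm, 1/degC, 1/degC^2

def r_prtd(t_c):
    """PRTD resistance; the extra CVD term below 0 degC is negligible here."""
    return R0 * (1.0 + A * t_c + B * t_c * t_c)

# Straight line through the two calibration points used earlier: 0 and 30 degC.
slope = (r_prtd(30.0) - r_prtd(0.0)) / 30.0

print(" T (degC)   R (ohm)   CVD departure from line (mohm)")
for t in range(-10, 51, 10):
    line = r_prtd(0.0) + slope * t
    print(f"{t:9d}  {r_prtd(t):8.3f}  {1000 * (r_prtd(t) - line):10.1f}")
```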
Compensating for long leads
As it stands, the circuit does not lend itself to true 3- or 4-wire compensation for the length of the leads to the RTD—unnecessary with an NTC’s multi-kΩ resistance. However, using a 4-wire Kelvin connection, where the power-feed and sensing lines are separate, should work well and reduce the cable’s effect, as shown in Figure 5. With less than a meter separating the RTD from the circuitry, I used speaker cable. (Copper’s TCR is close to that of a PRTD.)

Figure 5 Long leads to a PRTD can cause offset errors. Using a 4-wire Kelvin arrangement minimizes these. If the µC’s A–D has external reference-voltage pins, they can be driven from the circuit for (notionally) improved accuracy.
Figure 5 also shows how accuracy could be improved by driving the ADC’s reference pins from the circuit’s power rails, though this is academic for coarse sampling. It would also compensate for any voltage drop across Q1, should that be important. Q1 could then even be omitted, the circuit being powered directly from an active-high pin. That would drop the rail voltage, which wouldn’t matter if it were fed back to REF+.
This circuit is optimized for a center temperature of 25°C, as that is the point at which most thermistors are specified, with the load resistor equaling the R(25) value. Unlike the 15°-centered version in Figure 3, I’ve not built or tried it, but believe it to be clean. Its plot—error curve included—looked very close to that in Figure 4, but shifted by 10°C.
Errors, both theoretical and practical
The input offset voltage of op-amps changes with temperature and is a potential source of errors. The quoted figure for the MCP6002 is ±2 µV/°C (typ.), which is good but not insignificant. Heating the circuit by ~40°C (with a 100R resistor replacing the PRTD) gave an output shift corresponding to less than 0.05°, which is acceptable, and in line with calculations. (An old hairdryer is part of my workbench kit.) Here, the circuitry and the PRTD will both be outside, and thus at about the same temperature.
So how does it perform in reality? It’s now built and calibrated exactly as in Figure 3, but not yet installed, allowing testing with a PRTD simulator kludged up from resistors, both fixed and variable, plus switches so the resistance can be connected to either the circuit or a (well-calibrated) meter for precise adjustment. Checking at simulated temperatures from -10 to +50°C showed errors ranging from zero at -10° to -0.22° at +50° with either 3.3 V or 5 V supplies. This could be improved with extra fiddling (I suspect a slight mismatch in R7/8’s ratio; available parts had unhelpful spreads), but the errors are less than the MCU’s 8-bit resolution (~0.351 degrees/count, or ~2.85 counts/degree), so it’ll do the job it’s intended for, and do it well.
While this approach doesn’t substitute for a “proper” PRTD circuit, it does make a nice drop-in replacement for a thermistor, giving a wider measurement range with much better linearity while needing no extra processing. I hope the true experts in the field won’t find too many problems with it. BTW, “expert” derives etymologically from “stuff you’ve learned the hard way: been there, done that, worn the hair shirt”. Never trust an armchair expert unless you’re shopping for comfortable seating.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- DIY RTD for a DMM
- Improved PRTD circuit is product of EDN DI teamwork
- Fake contacts, bounced to order
- Calculation of temperature from PRTD resistance
The post Dropping a PRTD into a thermistor slot—impossible? appeared first on EDN.
Next-gen UWB radio to enable radar sensing and data streaming applications

Since the early 2000s, ultra-wideband (UWB) technology has gradually found its way into a variety of commercial applications that require secure and fine-ranging capabilities. Well-known examples are handsfree entry solutions for cars and buildings, locating assets in warehouses, hospitals, and factories, and navigation support in large spaces like airports and shopping malls.
A characteristic of UWB wireless signal transmission is the emission of very short pulses in the time domain. Impulse-radio (IR) UWB technology takes this to the extreme by transmitting pulses lasting nanoseconds or even picoseconds. Consequently, in the frequency domain, it occupies a bandwidth that is much wider than ‘narrowband’ wireless communication techniques like Wi-Fi and Bluetooth.
UWB technology operates over a broad frequency range (typically 6 to 10 GHz) and uses channel bandwidths of around 500 MHz and higher. Because of that, its ranging accuracy is much higher than that of narrowband technologies.
Today, UWB can provide cm- to mm-level location information between a transmitter (TX) and receiver (RX) that are typically 10-15 meters apart. In addition, enhancements to the UWB physical layer—as part of the adoption of the IEEE 802.15.4z amendment to the IEEE standard for low-rate wireless networks—have been instrumental in enabling secure ranging capabilities.

Figure 1 Here is a representation of UWB and narrowband signal transmission, in the (top) frequency and (bottom) time domain. Source: imec
Over the years, imec has contributed significantly to advancing UWB technology and overcoming the challenges that have hindered its widespread adoption. That includes reducing its power consumption, enhancing its bit rate, increasing its ranging precision, making the receiver chip more resilient against interference from other wireless technologies operating in the same frequency band, and enabling cost-effective CMOS silicon chip implementations.
Imec researchers developed multiple generations of UWB radio chips, compliant with the IEEE 802.15.4z standard for ranging and communication. Imec’s transmitter circuits operate through innovative pulse shape and modulation techniques, enabled by advanced polar transmitter, digital phase-locked loop (PLL), and ring oscillator-based architectures—offering mm-scale ranging precision at low power consumption.
At the receiver side, circuit design innovations have contributed to an outstanding interference resilience while minimizing power consumption. The various generations of UWB prototype transmitter and transceiver chips have all been fabricated with cost-effective CMOS-compatible processing techniques and are marked by small silicon areas.
The potential of UWB for radar sensing
Encouraged by the outstanding performance of UWB technology, experts have been claiming for some time that UWB’s potential is much larger than ‘accurate and secure ranging.’ They were seeing opportunities in radar-like applications which, as opposed to ranging, employ a single device that emits UWB pulses and analyzes the reflected signals to detect ‘passive’ objects.
When combined with UWB’s precise ranging capabilities, this could broaden the applications to automotive use cases such as in-cabin presence detection and monitoring the occupants’ gestures and breathing, aimed at increasing their safety.
Or think about smart homes, where UWB radar sensors could be used to adjust the lighting environment based on people’s presence. In nursing homes, the technology could be deployed to initiate an alert based on fall detection without the need for intrusive camera monitoring.
Enabling such UWB use cases will be facilitated by IEEE 802.15.4ab, the next-generation standard for wireless technology, which is expected to be officially released around year-end. 802.15.4ab will offer multiple enhancements, including radar functionality in IR-UWB devices, turning them into sensing-capable devices.
Fourth gen IR-UWB radio compliant with 802.15.4z/ab
At the 2025 Symposium on VLSI Technology and Circuits (VLSI 2025), imec presented its fourth-generation UWB transceiver, compliant with the baseline for radar sensing as defined by preliminary versions of 802.15.4ab. Baseline characteristics include, among others, enhanced modulation supported by high data rates.
Additionally, imec’s UWB radar sensing technology implements unique features offering enhanced radar sensing capabilities (such as extended range) and a record-high data rate of 124.8 Mbps integrated in a system-on-chip (SoC). Being also compliant with the current 802.15.4z standard, the new radio combines its radar sensing capabilities with communication and secure ranging.

Figure 2 The photograph captures fourth-generation IR-UWB radio system. Source: imec
A unique feature of imec’s IR-UWB radar sensing system is the 2×2 MIMO architecture, with two transmitters and two receivers configured in full duplex mode. In this configuration, a duplexer controls whether the transceiver operates in transmit or receive mode. Also, the TXs and RXs are paired together—TX1-RX1, TX1-RX2, and TX2-RX2—connected by the duplexer.
This allows the radar to operate in transmit and receive mode simultaneously without using RF switches to toggle from one mode to the other. This approach reduces the nearest distance at which the radar can operate—a metric traditionally limited by the time needed to switch between the two modes.
Imec’s full-duplex-based radar can operate in the range between 30 cm and 3 m, a breakthrough achievement. In this full-duplex MIMO configuration, the nearest distance is only restricted by the radar’s 500-MHz bandwidth.
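That 30-cm figure follows directly from the channel bandwidth: for a pulsed radar, the achievable range resolution (and, with no transmit/receive switching dead time, the closest usable distance) is on the order of c/2B. A two-line check:

```python
C = 3.0e8     # speed of light, m/s
B = 500e6     # UWB channel bandwidth, Hz
print(f"Range resolution c/(2B) = {C / (2 * B):.2f} m")   # 0.30 m, i.e., 30 cm
```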
The IR-UWB 2TRX radar physically implements two antenna elements, each antenna being shared between one TX and one RX. The 2×2 MIMO full-duplex configuration, however, creates a virtual array of three antennas, which substantially improves the radar’s angular resolution while reducing its area.
Compared with state-of-the-art single-input-single-output (SISO) radars, the radar occupies a 1.7× smaller area with 2.5× fewer antennas, making it a highly performant, compact, and cost-effective solution. Advanced techniques are used to isolate the TX from the RX signals, resulting in >30 dB of isolation over a 500-MHz bandwidth.

Figure 3 This architecture of the 2TRX was presented at VLSI 2025. Source: imec
Signal transmission relies on a hybrid analog/digital polar transmitter, introducing filtering effects in the analog domain for signal modulation. This results in a clean transmit signal spectrum, supporting the good performance and low power operation of the UWB radar sensor.
Finally, in addition to the MIMO-based analog/RF part, the UWB radar sensing device features an advanced digital baseband (or modem), responsible for signal processing. This component extracts relevant information such as the distance between the radar and the object, and an estimation of the angle of arrival.
Proof-of-concept: MIMO radar for in-cabin monitoring
The features of IR-UWB MIMO-based radar technology are particularly attractive for automotive use cases, where the UWB radar can be used not only to detect whether someone is present in the car, for example, child presence detection, but also to map the vehicle’s occupancy and monitor vital signs such as breathing. This capability is currently on the roadmap of several automotive OEMs and tier-1 suppliers.
But today, no radar technology can deliver this functionality with the required accuracy. Particularly challenging is achieving the angular resolution needed to detect two targets at the same (short) distance from the radar. In addition, for breathing monitoring, small movements of the target must be discerned within a period of a few seconds.

Figure 4 The in-cabin IR-UWB radar was demonstrated at PIMRC 2025. Source: imec
At the 2025 IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (IEEE PIMRC 2025), imec researchers presented the first proof-of-concept showing the ability of the IR-UWB MIMO radar system to perform two in-cabin sensing tasks: occupancy detection and breathing-rate estimation. In-cabin measurements were carried out inside a small car.
The UWB platform was placed in front of an array of two in-house-developed antenna elements mounted in the center of the car ceiling, close to the rear-view mirror. The distance from the antennas to the center of the driver and front-passenger seats was 55 cm.
The experimental results confirm high precision for estimating the angle of arrival and the breathing rate. For instance, in a scenario where both the driver and passenger seats are occupied, the UWB radar system achieves standard deviations of less than 1.90 degrees and 2.95 bpm for the angle-of-arrival and breathing-rate estimates, respectively.

Figure 5 Extracted breathing signals for driver and passenger were presented at PIMRC 2025. Source: imec
Imec researchers also highlight an additional benefit of using UWB technology for in-cabin monitoring: the TRX architecture, which is already used in some cars for keyless entry, can be repurposed for radar applications, cutting overall costs.
High data rate opens doors to data streaming applications
In addition to radar sensing capabilities, this IR-UWB transceiver offers another feature that sets it apart from existing UWB solutions: it provides a record-high data rate of 124.8 Mbps, the highest data rate that is still compatible with the upcoming 802.15.4ab standard.
This is about a factor of 20 higher than the 6.8-Mbps data rate currently used in ranging and communication applications; it results from optimization of both the analog front-end and the digital baseband. The high data rate also comes with low energy per bit—much lower than that of Wi-Fi—especially on the transmit side.
These features will unlock new applications in both audio and video data streaming. Possible use cases are next-generation smart glasses or VR/AR devices, for which the UWB TRX’s small form factor is an added advantage.
Adding advanced ranging to UWB portfolio
In the last two decades, IEEE 802.15.4z-compliant UWB technology has proven its ability to support mass-market secure-ranging and localization deployments, enabling use cases across the automotive, smart industry, smart home, and smart building markets. Supported by the upcoming IEEE 802.15.4ab standard, emerging UWB devices can now also be equipped with radar functionality.
Imec’s fourth generation of IR-UWB technology is the first (publicly reported) 802.15.4ab compliant radar-sensing device, showing robust radar-sensing capabilities; it’s suitable for automotive as well as smart home use cases. The record high data rate also shows UWB’s potential to tap new markets: low-power data streaming for smart glasses or AR/VR devices.
The IEEE 802.15.4ab standard supports yet another feature: advanced ranging. This will enhance the link budget for signal transmission, translating into a fourfold increase in the ranging distance—up to 100 m in the case of a free line of sight. This feature is expected to significantly enhance the user experience for keyless entry solutions for cars and smart buildings.
Not only can it improve the operating distance, but it can also better address challenging environments such as when the signal is blocked by another object, for example, body blocking. Ongoing developments will enable this advanced ranging capability as a new feature in imec’s fifth generation of UWB technology.
The future looks bright for UWB technology. Not only do technological advances follow each other at a rapid pace, but ongoing standardization efforts help shape current and future UWB applications.
Christian Bachmann is the portfolio director of wireless and edge technologies at imec. He oversees UWB and Bluetooth programs enabling next-generation low-power connectivity for automotive, medical, consumer, and IoT applications. He joined imec in 2011 after working with Infineon Technologies and the Graz University of Technology.
Related Content
- Ultra-wideband tech gets a boost in capabilities
- The transformative force of ultra-wideband (UWB) radar
- All Ultra-Wideband (UWB) systems are not created equal
- A short primer on ultra-wideband (UWB) radar technology
- A look at the many lives of ultra-wideband (UWB) standard
The post Next-gen UWB radio to enable radar sensing and data streaming applications appeared first on EDN.
A digital frequency detector

I designed the circuit in Figure 1 as part of a data transmission system that uses on-off keying (OOK) modulation on a 400-kHz carrier.
I needed to detect the presence of the carrier by distinguishing it from other signals of different frequencies. The carrier had already been converted to a digital signal with 5-V logic levels. I also wanted to avoid using programmable devices and timers based on RC circuits.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The resulting circuit is made up of four chips, including a crystal time base. In brief, this system measures the time between the rising edges of the received signal on a cycle-by-cycle basis. Thus, it detects if the incoming signal is valid or not in a short time (approximately one carrier cycle, that is ~2.5 µs). This is done independently of the signal duty cycle and in less time than other systems, such as a phase-locked loop (PLL), which may take several cycles to detect a frequency.
Figure 1 A digital frequency detector circuit that senses the presence of a 400-kHz carrier, distinguishing it from signals of other frequencies, after the carrier has been converted to digital form using 5-V logic.
In the schematic, IC1A and IC1B are the 6.144 MHz crystal oscillator and a buffer, respectively. For X1, I used a standard quartz crystal salvaged from an old microprocessor board.
The flip-flops IC2A and IC2B are interconnected such that a rising edge at the IC2A clock input (connected to the signal input) produces, through the IC2A inverted (Q-bar) output and the IC2B active-low reset input, a low logic level at the IC2B Q output. Immediately afterwards, the low logic level resets IC2A, thereby leaving IC2B ready to receive a rising edge at its clock input, which causes its Q output to return to high again. Since the IC2B clock input is continuously receiving the 6.144-MHz clock, the low logic level at its output will have a very short duration. That very narrow pulse presets IC3, which takes its counting outputs to “0000”.
If IC4A is in a reset condition, that pulse will also set it in the way explained below, with the effect of releasing IC4B by deactivating its active-low set input (pin 4 of IC4) and enabling IC3 by pulling its enable input low.
From that instant, IC3 will count the 6.144-MHz pulses and, if the next rising edge of the input signal occurs while IC3’s count is at “1110” or “1111”, IC1C’s output will be at a low level, so the IC4B output will go high, indicating that a cycle with approximately the correct period (2.5 µs) has been received. Simultaneously, IC3 will be preset to start a new count. If the next rising edge occurred when the IC3 count was not yet at “1110”, IC3 would still be preset, but the circuit output would go low. This last scenario corresponds to an input frequency higher than 400 kHz.
On the contrary, if a time longer than a valid period passes after the last rising edge, the circuit behaves as follows. When the IC3 count reaches the value “1111”, a 6.144-MHz clock pulse will occur at the signal input instead of a rising edge. This will make the IC4A Q output take the low level that IC3 presents at the IC4A data input.
The low level at the IC4A Q output will set IC4B, and the circuit output will go low. As the IC4A Q output is also connected to its own active-low reset input, that low level, caused by a pulse at its clock input, will prevent that flip-flop from responding to further clock pulses. From then on, the only way of taking IC4A out of that state will be by applying a low level (which could be a very narrow pulse, as in this case) at its active-low set input (pin 10 of IC4). That establishes a forbidden condition for an instant, making IC4A first pull both Q and Q-bar high, and then immediately change Q-bar to low.
As a result of the circuit logic and timing, after a complete cycle with a period of approximately 2.5 µs is received, the circuit output goes high and remains in that state until a shorter cycle is received, or until a longer time than the correct period elapses without a complete cycle.
Testing the circuit
I tested the circuit with signals from 0 to 10 MHz. The frequencies between 384 kHz and 405 kHz, or periods between 2.47 µs and 2.60 µs, produced a high level at the output. These values correspond to approximately 15 to 16 pulses of the 6.144-MHz clock; the first of those pulses is used to end the presetting of counter IC3, so it is not counted.
Frequencies lower than 362 kHz or higher than 433 kHz produced a low logic level. For frequencies between 362 kHz and 384 kHz and between 405 kHz and 433 kHz, the circuit produced pulses at the output. That means that for an input period between 2.31 µs and 2.47 µs or between 2.60 µs and 2.76 µs, there will be some likelihood that the output will be in a high or low logic state. That state will depend on the phase difference between the input signal and the 6.144 MHz clock.
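Those acceptance and rejection bands follow directly from the count window. A short check of the arithmetic (my own sketch, not part of the original design):

```python
F_CLK = 6.144e6   # crystal clock, Hz

# The output is declared valid when a new input rising edge arrives while
# the counter holds "1110" or "1111", i.e., 15 or 16 clock periods after
# the preset (the first pulse ends the preset and is not counted).
for n_clocks in (15, 16):
    period = n_clocks / F_CLK
    print(f"{n_clocks} clocks -> period {period * 1e6:.2f} us, "
          f"frequency {1 / period / 1e3:.1f} kHz")
# 15 clocks -> 2.44 us (409.6 kHz); 16 clocks -> 2.60 us (384.0 kHz).
# Compare with the measured 384-kHz to 405-kHz acceptance band reported above.
```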
Figure 2 shows a five-pulse 400-kHz burst (lower trace) applied to the input of the circuit. The upper trace is the output; it can be seen that, after the first cycle has been measured, the output goes high and stays in that state as more 2.5-µs cycles keep arriving. After a time slightly longer than 2.5 µs without a complete cycle (~2.76 µs), the output goes low.

Figure 2 A five-pulse 400-kHz burst applied to the input of the digital frequency detector circuit (lower trace), and the circuit output (upper trace), which goes high after the first cycle has been measured.
Ariel Benvenuto is an electronics engineer with a PhD in physics and works as a researcher with IFIS Litoral in Santa Fe, Argentina.
Related Content
- Divider generates accurate 455kHz square-wave signal
- Frequency and phase locked loops
- Simplifying PLL Design
- Demystifying the PLL
The post A digital frequency detector appeared first on EDN.
Can a smart ring make me an Ultrahuman being?

In last month’s smart ring overview coverage, I mentioned two things that are particularly relevant to today’s post:
- I’d be following it up with a series of more in-depth write-ups, one per ring introduced in the overview, the first of which you’re reading here, and
- Given the pending ITC (International Trade Commission) block of further shipments of RingConn and Ultrahuman smart rings into the United States (save for warranty replacements for existing owners), a ruling announced a few days prior to my submission of the overview writeup to Aalyia, I planned to prioritize the RingConn and Ultrahuman posts in the hopes of getting them published prior to the October 21 deadline, in case US readers were interested in purchasing either of them ahead of time. (Note, too, that the ITC ruling doesn’t affect readers in other countries, of course.)
Since the Ultrahuman Ring AIR was the first one that came into my possession, I’ll dive into its minutiae first. To start, I’ll note, in revisiting the photo from last time of all three manufacturers’ rings on my left index finger, that the Ultrahuman ring’s “Raw Titanium” color scheme option (it’s the one in the middle, straddling the Oura Gen3 Horizon to its left and the RingConn Gen 2 to its right) most closely matches the patina of my wedding band:

Here’s the Ultrahuman Ring AIR standalone:

Next up is sizing, discussed upfront in last month’s write-up. Ultrahuman is the only one of the three that offers a sizing app as a (potential) alternative to obtaining a kit, although candidly, I don’t recommend it, at least from my experiences with it. Take a look at the screenshots I took when using it again yesterday in prepping for this piece (and yes, I intentionally picked a size-calibrating credit card from my wallet whose account number wasn’t printed on the front!):
I’ll say upfront that the app was easy to figure out and use, including the ability to optionally disable “flash” supplemental illumination (which I took advantage of because with it “on”, the app labeled my speckled desktop as a “noisy background”).
That said, first off, it’s iOS-only, so folks using Android smartphones will be SOL unless they alternatively have an Apple tablet available (as I did; these were taken using my iPad mini 6). Secondly, the app’s finger-analysis selection was seemingly random (ring and middle finger on my right hand, but only middle finger on my left hand…in neither case the index finger, which was my preference). Thirdly, app sizing estimates undershot by one or multiple sizes (depending on the finger) what the kit indicated was the correct size. And lastly, the app was inconsistent use-to-use; the first time I’d tried it in late May, here’s what I got for my left hand (I didn’t also try my right hand then because it’s my dominant one and I therefore wasn’t planning on wearing the smart ring on it anyway):

Next, let’s delve a bit more into the previously mentioned seeming firmware-related battery life issue I came across with my initial ring. Judging from the June 2024 date stamps of the documentation on Ultrahuman’s website, the Ring AIR started shipping mid-last year (following up on the thicker and heavier but functionally equivalent original Ultrahuman R1).
Nearly a year later, when mine came into my possession, new firmware updates were still being released at a surprisingly (at least to me) rapid clip. As I’d mentioned last month, one of them had notably degraded my ring’s battery life from the normal week-ish to a half day, as well as extending the recharge time from less than an hour to nearly a full day. And none of the subsequent firmware updates I installed led to normal-operation recovery, nor did my attempted full battery drain followed by an extended delay before recharge in the hope of resetting the battery management system (BMS). I should also note at this point that other Redditors have reported that firmware updates not only killed rings’ batteries but also permanently neutered their wireless connectivity.
What happened to the original ring? My suspicion is that it actually had something to do with an inherently compromised (coupled with algorithm-worsened) charging scheme that led to battery overcharge and subsequent damage. Ultrahuman bundles a USB-C-to-USB-C cable with the ring, which would imply (incorrectly, as it turns out) that the ring charging dock circuitry can handle (including down-throttling the output as needed) any peak-wattage USB-C charger that you might want to feed it with, including (but not limited to) USB-PD-capable ones.
In actuality, product documentation claims that you should connect the dock to a charger with only a maximum output of 5W/2A. After doing research on Amazon and elsewhere, I wasn’t able to find any USB-C chargers that were that feeble. So, to get there at all, I had to dig out of storage an ancient Apple 5W USB-A charger, which I then mated to a third-party USB-A-to-USB-C cable.

That all said, following in the footsteps of others on the Ultrahuman subreddit who’d had similar experiences (and positive results), I reached out to the Reddit forum moderators (who are Ultrahuman employees, including the founder and CEO!) and after going through a few more debugging steps they’d suggested (which I’d already tried, but whatevah), got shipped a new ring.
It’s been stable through multiple subsequent firmware updates, with the stored charge dropping only ~10-15% per day (translating to the expected week-ish of between-charges operating life). And the pace of new firmware releases has also now notably slowed, suggestive of either increasing code stability or a refocus on development of the planned new product that aspires to avoid Oura patent infringement…I’m hoping for the more optimistic former option!
Other observations
More comments, some of which echo general points made in last month’s write-up:
- Since this smart ring, like those from Oura, leverages wireless inductive charging, docks are ring-size-specific. If you go up or down a size or a few, you’ll need to re-purchase this accessory (one comes with each ring, so this is specifically a concern if, like me, you’ve already bought extras for travel, elsewhere in the house, etc.)

- There’s no battery case available that I’ve come across, not even a third-party option.
- That 10-15% per day battery drop metric I just mentioned is with the ring in its initial (sole) “Turbo” operating mode, not with the subsequently offered (and now default) “Chill” option. I did drop it down to “Chill” for a couple of days, which decreased the per-day battery-level drop by a few percent, but nothing dramatic. That said, my comparative testing wasn’t extensive, so my results should be viewed as anecdotal, not scientific. Quoting again from last month’s writeup:
Chill Mode is designed to intelligently manage power while preserving the accuracy of your health data. It extends your Ring AIR battery life by up to 35% by tracking only what matters, when it matters. Chill Mode uses motion and context-based intelligence to track heart rate and temperature primarily during sleep and rest.
- It (like the other smart rings I also tested) misinterpreted keyboard presses and other finger-and-hand movements as steps, leading to over-measurement results, especially on my dominant right hand.
- While Bluetooth LE connectivity extends battery life compared to a “vanilla” Bluetooth alternative, it also notably reduces the ring-to-phone connection range. Practically speaking, this isn’t a huge deal, though, since the data is viewed on the phone. The act of picking the phone up (assuming your ring is also on your body) will also prompt a speedy close-proximity preparatory sync.
- Unlike Oura (and like RingConn), Ultrahuman provides membership-free full data capture and analysis capabilities. That said, the company sells optional Powerplug software add-ons to further expand app functionality, along with extended warranties that, depending on the duration, also include one free replacement ring in case your sizing changes due to, for example, ring-encouraged and fitness-induced weight loss.
- The app will also automatically sync with other health services, such as Fitbit and Android’s built-in Health Connect. That said, I wonder (but haven’t yet tested to confirm or deny) what happens if, for example, I wear both the ring and an inherently Fitbit-cognizant Google Pixel Watch (or, for that matter, my Garmin or Withings smartwatches).



- One other curious note: Ultrahuman claims that it’s been manufacturing rings not only in its headquarters country, India, but also in the United States since last November in partnership with a contractor, SVtronics. And in fact, if you look at Amazon’s product page for the Ring AIR, you’ll be able to select between “Made in India” and “Made in USA” product ordering options. Oura, conversely, has indicated that it believes the claimed images of US-located manufacturing facilities are “Photoshop edits” with no basis in reality. I don’t know, nor do I particularly care, what the truth is here. I bring it up only to exemplify the broader contentious nature of ongoing interactions between Oura and its upstart competitors (also including pointed exchanges with RingConn).
Speaking of RingConn, and nearing 1,600 words at this point, I’m going to wrap up my Ultrahuman coverage and switch gears for my other planned post for this month. Time (and ongoing litigation) will tell, I guess, as to whether I have more to say about Ultrahuman in the future, aside from the previously mentioned (and still planned) teardown of my original ring. Until then, reader thoughts are, as always, welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- Smart ring allows wearer to “air-write” messages with a fingertip
- The 2025 CES: Safety, Longevity and Interoperability Remain a Mess
- Can wearable devices help detect COVID-19 cases?
The post Can a smart ring make me an Ultrahuman being? appeared first on EDN.
Universal homing sensor: A hands-on guide for makers, engineers

A homing sensor is a device used in certain machines to detect a fixed reference point, allowing the machine to determine its exact starting position. When powered on, the machine moves until it triggers the sensor, so it can accurately track movement from that point onward. It’s essential for precision and repeatability in automated motion systems.
Selecting the right homing sensor can have a big impact on accuracy, dependability, and overall cost. Here is a quick rundown of the three main types:
Mechanical homing sensors: These operate through direct contact—switches or levers—to determine position.
- Advantages: Straightforward, budget-friendly, and easy to install.
- Drawbacks: Prone to wear over time, slower to respond, and less accurate.
Magnetic homing sensors: Relying on magnetic fields, often via Hall effect sensors, these do not require physical contact.
- Advantages: Long-lasting, effective in harsh environments, and maintenance-free.
- Drawbacks: Can be affected by magnetic interference and usually offer slightly less resolution than optical sensors.
Optical homing sensors: These use infrared light paired with slotted discs or reflective surfaces for detection.
- Advantages: Extremely precise, quick response time, and no mechanical degradation.
- Drawbacks: Sensitive to dust and misalignment and typically come at a higher cost.
In clean, high-precision applications like 3D printers or CNC machines, optical sensors shine. For more demanding or industrial environments, magnetic sensors often strike the right balance. And if simplicity and low cost are top priorities, mechanical sensors remain a solid choice.

Figure 1 Magnetic, mechanical, and optical homing sensors are available in standard configurations. Source: Author
The following parts of this post detail the design framework of a universal homing sensor adapter module.
We will start with a clean, simplified schematic of the universal homing sensor adapter module. Designed for broad compatibility, it accepts logic-level inputs—including both CMOS and TTL-compatible signals—from nearly any homing sensor head, whether it’s mechanical, magnetic, or optical, making it a flexible choice for diverse applications.

Figure 2 A minimalistic design highlights the inherent simplicity of constructing a universal homing sensor module. Source: Author
The circuit is simple, economical, and built using easily sourced, budget-friendly components. True to form, the onboard test button (SW1) mirrors the function of a mechanical homing sensor, offering a convenient stand-in for setup and troubleshooting tasks.
The 74LVC1G07 (IC1) is a single buffer with an open-drain output. Its inputs accept signals from both 3.3 V and 5 V devices, enabling seamless voltage translation in mixed-signal environments. Schmitt-trigger action at all inputs ensures reliable operation even with slow input rise and fall times.
Optional flair: LED1 is not strictly necessary, but it offers a helpful visual cue. I tested the setup with a red LED and a 1-kΩ resistor (R3)—simple, effective, and reassuringly responsive.
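Because the 74LVC1G07 output is open drain, the module can only pull its output line low; the high level has to come from a pull-up resistor to whatever logic supply the host controller uses. The bounding calculation below uses my own rule-of-thumb numbers (sink-current budget, stray capacitance, rise-time target), not datasheet limits:

```python
# Bounding an open-drain pull-up: strong enough for a clean rise time,
# weak enough to keep the sink current modest when the output is low.
V_PULL = 5.0      # controller-side logic supply, V
I_SINK = 4e-3     # sink-current budget when the output is low, A (assumed)
C_LOAD = 100e-12  # stray wiring plus input capacitance, F (assumed)
T_RISE = 1e-6     # acceptable 10-90% rise-time target, s (assumed)

r_min = V_PULL / I_SINK          # below this, low-state current grows
r_max = T_RISE / (2.2 * C_LOAD)  # above this, the RC rise gets too slow
print(f"Pull-up window: {r_min / 1e3:.1f} kOhm to {r_max / 1e3:.1f} kOhm")
# -> roughly 1.2 kOhm to 4.5 kOhm with these assumptions
```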
As usual, I whipped up a quick-and-dirty breadboard prototype using an SMD adapter PCB (SOT-353 to DIP-6) to host the core chip (Figure 3). I have skipped the prototype photo for now—there is only a tiny chip in play, and the breadboard layout does not offer much visual clarity anyway.

Figure 3 A good SMD adapter PCB gives even the tiniest chip time to shine. Source: Author
A personal note: I procured the 74LVC1G07 chip from Robu.in.
Just before the setup reaches its close, note that machine homing involves moving an axis toward its designated homing sensor—a specific physical location where a sensor or switch is installed. When the axis reaches this point, the controller uses it as a reference to accurately determine the axis position. For reliable operation, it’s essential that the homing sensor is mounted precisely in its intended location on the machine.
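To make that sequence concrete, here is a minimal sketch of the usual seek, back-off, and slow re-approach homing routine a motion controller runs against such a sensor. The step_axis() and sensor_triggered() helpers are hypothetical placeholders for whatever stepper-driver and input-read calls your controller actually provides:

```python
def home_axis(step_axis, sensor_triggered,
              fast_step_mm=0.5, slow_step_mm=0.02, backoff_mm=2.0):
    """Generic homing routine: fast seek, back off, then slow re-approach."""
    # 1) Move toward the sensor quickly until it trips.
    while not sensor_triggered():
        step_axis(-fast_step_mm)          # negative = toward home

    # 2) Back off until the sensor releases, clearing any hysteresis.
    moved = 0.0
    while sensor_triggered() or moved < backoff_mm:
        step_axis(+slow_step_mm)
        moved += slow_step_mm

    # 3) Re-approach slowly so the trigger point is repeatable.
    while not sensor_triggered():
        step_axis(-slow_step_mm)

    return 0.0   # this position becomes the axis zero reference
```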
While wrapping up, here are a few additional design pointers for those exploring alternative options, since we have only touched on a straightforward approach so far. Let’s take a closer look at a few randomly picked additional components and devices that may be better suited for the homing task:
- SN74LVC1G16: Inverting buffer featuring Schmitt-trigger input and open-drain output; ideal for signal conditioning and noise immunity.
- SN74HCS05: Hex inverter with Schmitt-trigger inputs and open-drain outputs; useful for multi-channel logic interfacing.
- TCST1103/1202/1300: Transmissive optical sensor with phototransistor output; ideal for applications that require position sensing or the detection of an object’s presence or absence.
- TCRT5000: Reflective optical sensor; ideal for close-proximity detection.
- MLX75305: Light-to-voltage sensor (EyeC series); converts ambient light into a proportional voltage signal, suitable for optical detection.
- OPBxxxx Series: Photologic slotted optical switches; designed for precise object detection and position sensing in automation setups.
Moreover, compact inductive proximity sensors like the Omron E2B-M18KN16-M1-B1 are often used as homing sensors to detect metal targets—typically a machine part or actuator—at a fixed reference point. Their non-contact operation ensures reliable, repeatable positioning with minimal wear, ideal for robotic arms, linear actuators, and CNC machines.

Figure 4 The Omron E2B-M18KN16-M1-B1 inductive proximity sensor supports homing applications by detecting metal targets at fixed reference points. That enables precise, contactless positioning in industrial setups. Source: Author
Finally, if this felt comfortably familiar, take it as a cue to go further; question the defaults, reframe the problem, and build what no datasheet dares to predict.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Reflective Object Sensors
- Smart PIR Sensor for Smart Homes
- Inductive Proximity Switch w/ Sensor
- The role of IoT sensors in smart homes and cities
- Radar sensors in home, office, school, factory and more
The post Universal homing sensor: A hands-on guide for makers, engineers appeared first on EDN.
Amazon and Google: Can you AI-upgrade the smart home while being frugal?

The chronological proximity of Amazon and Google’s dueling new technology and product launch events on Tuesday and Wednesday of this week was highly unlikely to have been a coincidence. Which company, therefore, reacted to the other? Judging solely from when the events were first announced, which is the only data point I have as an outsider, it looks like Google was the one who initially put the stake in the ground on September 2nd with an X (the service formerly known as Twitter) post, with Amazon subsequently responding (not to mention scheduling its event one day earlier in the calendar) two weeks later, on September 15.
Then again, who can say for sure? Maybe Amazon started working on its event ahead of Google, and simply took longer to finalize the planning. We’ll probably never know for sure. That said, it also seems from the sidelines that Amazon might have also gotten its hands on a leaked Google-event script (to be clear, I’m being completely facetious with what I just said). That’s because, although the product specifics might have differed, the overall theme was the same: both companies are enhancing their existing consumer-residence ecosystems with AI (hoped-for) smarts, something that they’ve both already announced as an intention in the past:
- Amazon, with a generative AI evolution-for-Alexa allusion two years ago, subsequently assigned the “Alexa+” marketing moniker back in February, and
- Google, which foreshadowed the smart home migration to come within its announcement of the Google Assistant-to-Gemini transition for mobile devices back in March.
Quoting from one of Google’s multiple event-tied blog posts as a descriptive example of what both companies seemingly aspire to achieve:
The idea of a helpful home is one that truly takes care of the people inside it. While the smart home has shown flashes of that promise over the last decade, the underlying AI wasn’t anywhere as capable as it is today, so the experience felt transactional, not conversational. You could issue simple commands, but the home was never truly conversational and seldom understood your context.
Today, we’re taking a massive step toward making the helpful home a reality with a fundamentally new foundation for Google Home, powered by our most capable AI yet, Gemini. This new era is built on four pillars: a new AI for your home, a redesigned app, new hardware engineered for this moment and a new service to bring it all together.
Amazon’s hardware “Hail Mary”
Of the two companies, Amazon has probably got the most to lose if it fumbles the AI-enhancement service handoff. That’s because, as Ars Technica’s coverage title aptly notes, “Alexa’s survival hinges on you buying more expensive Amazon devices”:
Amazon hasn’t had a problem getting people to buy cheap, Alexa-powered gadgets. However, the Alexa in millions of homes today doesn’t make Amazon money. It’s largely used for simple tasks unrelated to commerce, like setting timers and checking the weather. As a result, Amazon’s Devices business has reportedly been siphoning money, and the clock is ticking for Alexa to prove its worth.
I’m ironically a case study of Amazon’s conundrum. Back in early March, when the Alexa+ early-access program launched, I’d signed up. I finally got my “Your free Early Access to Alexa+ starts now” email on September 24, a week and a day ago, as I’m writing this on October 2. But I haven’t yet upgraded my service, which is admittedly atypical behavior for a tech enthusiast such as myself.
Why? Price isn’t the barrier in my particular case (though it likely would be for others less Amazon-invested than me); mine’s an Amazon Prime-subscribing household, so Alexa+ is bundled versus costing $19.99 per month for non-subscribers. Do the math, though, and it’s hard to see why anyone wouldn’t go the bundle-with-Prime route (which, I’d argue, is Amazon’s core motivation); Prime is $14.99 per month or $139/year right now.
So, if it’s not the service price tag, then what alternatively explains my sloth? It’s the devices—more accurately, my dearth of relevant ones—with the exception of the rarely-used Alexa app on my smartphones and tablets (which, ironically, I generally fire up only when I’m activating a new standalone Alexa-cognizant device).
Alexa+ is only supported on newer-generation hardware, whereas more than half (and the dominant share in regular use) of the devices currently activated in my household are first-generation Echoes, early-generation Echo Dots, and a Tap. With the exception of the latter, which I sometimes need to power-cycle before it’ll start streaming Amazon Music-sourced music again, they’re all still working fine, at least for the “transactional” (per Google’s earlier lingo) functions I’ve historically tasked them with.
And therefore, as an example of “chicken and the egg” paralysis, in the absence of their functional failure, I’m not motivated to proactively spend money to replace them in order to gain access to additional Alexa+ services that might not end up rationalizing the upfront investment.
Speakers, displays, and stylus-augmented e-book readers
Amazon unsurprisingly announced a bevy of new devices this week, strangely none of which seemingly justified a press release or, come to think of it, even an event video, in stark contrast to Apple’s prerecorded-only approach (blog posts were published a’plenty, however). Many of the new products are out-of-the-box Alexa+ capable and, generally speaking, they’re also more expensive than their generational precursors. First off is the curiously reshaped (compared to its predecessor) Echo Studio, in both graphite (shown) and “glacier” white color schemes:

There’s also a larger version of the now-globular Echo Dot (albeit still smaller than the also-now-globular Echo Studio), called the Echo Dot Max, with the same two color options:

And two also-redesigned-outside smart displays, the Echo Show 11 and latest-generation Echo Show 8, which basically (at least to me) look like varying-sized Echo Dots with LCDs stuck to their fronts. They both again come in both graphite and glacier white options:


and also have optional, added-price, more position-adjustable stands:

This new hardware raises the perhaps-predictable question: Why is my existing hardware not Alexa+ capable? Assuming all the deep learning inference heavy lifting is being done on the Amazon “cloud”, what resource limitations (if any) exist with the “edge” devices already residing in my (at least semi-) smart home?
Part of the answer might be with my assumption in the prior sentence; perhaps Amazon is intending for them to have limited (at least) ongoing standalone functionality if broadband goes down, which would require beefier processing and memory than that included with my archaic hardware. Perhaps, too, even if all the AI processing is done fully server-side, Amazon’s responsiveness expectations aren’t adequately served by my devices’ resources, in this case also including Wi-Fi connectivity. And yes, to at least some degree, it may just be another “obsolescence by design” case study. Sigh. More likely, my initial assumption was over-simplistic and at least a portion of the inference functions suite is running natively on the edge device using locally stored deep learning models, particularly for situations where rapid response time (vs edge-to-cloud-and-back round-trip extended latency) is necessary.
Other stuff announced this week included three new stylus-inclusive, therefore scribble-capable, Kindle Scribe 11” variants, one with a color screen, which this guy, who tends to buy—among other content—comics-themed e-books that are only full-spectrum appreciable on tablet and computer Kindle apps, found intriguing until he saw the $629.99-$679.99 price tag (in fairness, the company also sells stylus-less, but notably less expensive Colorsoft models):

and higher-resolution indoor and outdoor Blink security cameras, along with a panorama-stitching two-camera image combiner called the Blink Arc:

Speaking of security cameras, Ring founder Jamie Siminoff, who had previously left Amazon post-acquisition, has returned and was on hand this week to personally unveil also-resolution-bumped (this time branded as Retinal Vision) indoor- and outdoor-intended hardware, including an updated doorbell camera model:

Equally interesting to me are Ring’s community-themed added and enhanced services: Familiar Faces, Alexa+ Greetings, and (for finding lost dogs) Search Party. And then there’s this notable revision of past stance, passed along as a Wired coverage quote absent personal commentary:
It’s worth noting that Ring has brought back features that allow law enforcement to request footage from you in the event of an incident. Ring customers can choose to share video, and they can stay anonymous if they opt not to send the video. “There is no access that we’re giving police to anything other than the ability to, in a very privacy-centric way, request footage from someone who wants to do this because they want to live in a safe neighborhood,” Siminoff tells WIRED.
A new software chapter
Last, but not least (especially in the last case) are several upgraded Fire TVs, still Fire OS-based:

and a new 4K Fire TV Stick, the latter the first out-of-box implementation example of Amazon’s newfound Linux embrace (and Linux-derived Android about-face), Vega OS:

We’d already known for a while that Amazon was shutting down its Appstore, but its Fire OS-to-Vega OS transition is more recent. Notably, there’s no more local app sideloading allowed; all apps come down from the Amazon cloud.
Google’s more modest (but comprehensive) response
Google’s counterpunch was more muted, albeit notably (and thankfully, from a skip-the-landfill standpoint) more inclusive of upgrades for existing hardware versus the day-prior comparative fixation on migrating folks to new devices, and reflective of a company that’s fundamentally a software supplier (with a software-licensing business model). Again from Wired’s coverage:
This month, Gemini will launch on every Google Assistant smart home device from the last decade, from the original 2016 Google Home speaker to the Nest Cam Indoor 2016. It’s rolling out in Early Access, and you can sign up to take part in the Google Home app.
There’s more:
Google is bringing Gemini Live to select Google Home devices (the Nest Audio, Google Nest Hub Max, and Nest Hub 2nd Gen, plus the new Google Home Speaker). That’s because Gemini Live has a few hardware dependencies, like better microphones and background noise suppression. With Gemini Live, you’ll be able to have a back-and-forth conversation with the chatbot, even have it craft a story to tell kids, with characters and voices.
But note the fine print, which shouldn’t be a surprise to anyone who’s already seen my past coverage: “Support doesn’t include third-party devices like Lenovo’s smart displays, which Google stopped updating in 2023.”
One other announced device, an upgraded smart speaker visually reminiscent of Apple’s HomePod mini, won’t ship until early next year.

And, as the latest example of Google’s longstanding partnership with Walmart, the latter retailer has also launched a line of onn.-branded, Gemini-supportive security cameras and doorbells:

That’s what I’ve got for you today; we’ll have to see what, if anything else, Apple has for us before the end of the year, and whether it’ll take the form of an event or just a series of press releases. Until then, your fellow readers and I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling the Echo Studio, Amazon’s Apple HomePod foe
- Amazon’s Echo Auto Assistant: Legacy vehicle retrofit-relevant
- Lenovo’s Smart Clock 2: A “charged” device that met a premature demise
- The 2025 Google I/O conference: A deft AI pivot sustains the company’s relevance
- Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
The post Amazon and Google: Can you AI-upgrade the smart home while being frugal? appeared first on EDN.
PoE basics and beyond: What every engineer should know

Power over Ethernet (PoE) is not rocket science, but it’s not plug-and-play magic either. This short primer walks through the basics with a few practical nudges for those curious to try it out.
It’s a technology that delivers electrical power alongside data over standard twisted-pair Ethernet cables. It enables a single RJ45 cable to supply both network connectivity and power to powered devices (PDs) such as wireless access points, IP cameras, and VoIP phones, eliminating the need for separate power cables and simplifying installation.
PoE essentials: From devices to injectors
Any network device powered via PoE is known as a powered device or PD, with common examples including wireless access points, IP security cameras, and VoIP phones. These devices receive both data and electrical power through Ethernet cables from power sourcing equipment (PSE), which is classified as either “endspan” or “midspan.”
An endspan—also called an endpoint—is typically a PoE-enabled network switch that directly supplies power and data to connected PDs, eliminating the need for a separate power source. In contrast, when using a non-PoE network switch, an intermediary device is required to inject power into the connection. This midspan device, often referred to as a PoE injector, sits between the switch and the PD, enabling PoE functionality without replacing existing network infrastructure. A PoE injector sends data and power together through one Ethernet cable, simplifying network setups.

Figure 1 A PoE injector is shown with auto negotiation that manages power delivery safely and efficiently. Source: http://poe-world.com
The above figure shows a PoE injector with auto negotiation, a safety and compatibility feature that ensures power is delivered only when the connected device can accept it. Before supplying power, the injector initiates a handshake with the PD to detect its PoE capability and determine the appropriate power level. This prevents accidental damage to non-PoE devices and allows precise power delivery—whether it’s 15.4 W for Type 1, 25.5 W for Type 2, or up to 90 W for newer Type 4 devices.
Note at this point that the original IEEE 802.3af-2003 PoE standard provides up to 15.4 watts of DC power per port. This was later enhanced by the IEEE 802.3at-2009 standard—commonly referred to as PoE+ or PoE Plus—which supports up to 25.5 watts for Type 2 devices, making it suitable for powering VoIP phones, wireless access points, and security cameras.
To meet growing demands for higher power delivery, the IEEE introduced a new standard in 2018: IEEE 802.3bt. This advancement significantly increased capacity, enabling up to 60 watts (Type 3) and 90 watts (Type 4) of power at the source by utilizing all four pairs of wires in Ethernet cabling, compared to earlier standards that used only two pairs.
As indicated previously, VoIP phones were among the earliest applications of PoE. Wireless access points (WAPs) and IP cameras are also ideal use cases, as all these devices require both data connectivity and power.

Figure 2 This PoE system is powering a fixed wireless access (FWA) device.
As a sidenote, an injector delivers power over the network cable, while a splitter extracts both data and power—providing an Ethernet output and a DC plug.
A practical intro to PoE for engineers and DIYers
So, PoE simplifies device deployment by delivering both power and data over a single cable. For engineers and DIYers looking to streamline installations or reduce cable clutter, PoE offers a clean, scalable solution.
This brief section outlines foundational use cases and practical considerations for first-time PoE users. No deep dives: just clear, actionable insights to help you get started with smarter, more efficient connectivity.
Up next is the tried-and-true schematic of a passive PoE injector I put together some time ago for an older IP security camera (24 VDC/12 W).

Figure 3 Schematic demonstrates how a passive PoE injector powers an IP camera. Source: Author
In this setup, the LAN port links the camera to the network, and the PoE port delivers power while completing the data path. As a cautionary note, use a passive PoE injector only when you are certain of the device’s power requirements. If you are unsure, take time to review the device specifications. Then, either configure a passive injector to match your setup or choose an active PoE solution with integrated negotiation and protection.
Fundamentally, most passive PoE installations operate across a range of voltages, with 24 V often serving as a practical middle ground. Even lower voltages, such as 12 V, can be viable depending on cable length and power requirements. However, passive PoE should never be applied to devices not explicitly designed to accept it; doing so risks damaging the Ethernet port’s magnetics.
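Before settling on a voltage, it is worth estimating the drop over the actual cable run. The quick estimate below assumes roughly 9.4 Ω per 100 m per conductor, a typical figure for 24-AWG Cat5e, and a passive injector that carries power on one pair per leg; the 24-V, 12-W camera numbers match the injector described above, while everything else is illustrative:

```python
# Rough voltage-drop estimate for a passive PoE run (one pair per power leg).
R_CONDUCTOR_PER_M = 0.094   # ohm/m, ~9.4 ohm per 100 m for 24-AWG Cat5e (assumed)
V_SOURCE = 24.0             # injector supply voltage, V
P_LOAD   = 12.0             # camera power draw, W
LENGTH_M = 50.0             # one-way cable length, m (illustrative)

# The two conductors of a pair in parallel form each power leg; the supply
# and return legs add up to the total loop resistance.
r_leg  = (R_CONDUCTOR_PER_M / 2) * LENGTH_M
r_loop = 2 * r_leg

i_approx = P_LOAD / V_SOURCE            # first-order load-current estimate
v_drop   = i_approx * r_loop
print(f"Loop resistance: {r_loop:.2f} ohm")
print(f"Approx. drop: {v_drop:.2f} V -> ~{V_SOURCE - v_drop:.1f} V at the camera")
```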
Unlike active PoE standards, passive PoE delivers power continuously without any form of negotiation. In its earliest and simplest form, it leveraged unused pairs in Fast Ethernet to transmit DC voltage—typically using pins 4–5 for positive and 7–8 for negative, echoing the layout of 802.3af Mode B. As Gigabit Ethernet became common, passive PoE evolved to use transformers that enabled both power and data to coexist on the same pins, though implementations vary.
Seen from another angle, PoE technology typically utilizes the two unused twisted pairs in standard Ethernet cables—but this applies only to 10BASE-T and 100BASE-TX networks, which use two pairs for data transmission.
In contrast, 1000BASE-T (Gigabit Ethernet) employs all four twisted pairs for data, so PoE is delivered differently—by superimposing power onto the data lines using a method known as phantom power. This technique allows power to be transmitted without interfering with data, leveraging the center tap of Ethernet transformers to extract the common-mode voltage.
PoE primer: Surface touched, more to come
Though we have only skimmed the surface, it’s time for a brief wrap-up.
Fortunately, even beginners exploring PoE projects can get started quickly, thanks to off-the-shelf controller chips and evaluation boards designed for immediate use. For instance, the EV8020-QV-00A evaluation board—shown below—demonstrates the capabilities of the MP8020, an IEEE 802.3af/at/bt-compliant PoE-powered device.

Figure 4 MPS showcases the EV8020-QV-00A evaluation board, configured to evaluate the MP8020’s IEEE 802.3af/at/bt-compliant PoE PD functionality. Source: MPS
Here are my quick picks for reliable, currently supported PoE PD interface ICs—the brains behind PoE:
- TI TPS23730 – IEEE 802.3bt Type 3 PD with integrated DC-DC controller
- TI TPS23731 – No-opto flyback controller; compact and efficient
- TI TPS23734 – Type 3 PD with robust thermal performance and DC-DC control
- onsemi NCP1081 – Integrated PoE-PD and DC-DC converter controller; 802.3at compliant
- onsemi NCP1083 – Similar to NCP1081, with auxiliary supply support for added flexibility
- TI TPS2372 – IEEE 802.3bt Type 4 high-power PD interface with automatic MPS (maintain power signature) and autoclass
Similarly, leading semiconductor manufacturers offer a broad spectrum of PSE controller ICs for PoE applications—ranging from basic single-port controllers to sophisticated multi-port managers that support the latest IEEE standards.
As a notable example, TI’s TPS23861 is a feature-rich, 4-channel IEEE 802.3at PSE controller that supports auto mode, external FET architecture, and four-point detection for enhanced reliability, with optional I²C control and efficient thermal design for compact, cost-effective PoE systems.
In short, fantastic ICs make today’s PoE designs smarter and more efficient, especially in dynamic or power-sensitive environments. Whether you are refining an existing layout or venturing into high-power applications, now is the time to explore, prototype, and push your PoE designs further. I will be here.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- More opportunities for PoE
- A PoE injector with a “virtual” usage precursor
- Simple circuit design tutorial for PoE applications
- Power over Ethernet (PoE) grows up: it’s now PoE+
- Power over Ethernet (PoE) to Power Home Security & Health Care Devices
The post PoE basics and beyond: What every engineer should know appeared first on EDN.



