EDN Network

Voice of the Engineer

A closer look at isolated comparators

Wed, 10/08/2025 - 14:45

How do isolated comparators differ from standard comparators? What are their primary applications in analog and power electronics? Here is a brief review of this critical building block and what design engineers need to understand about its application. The article also presents a few popular isolated comparators and what makes them suitable for specific designs.

Read the full article at EDN’s sister publication, Planet Analog.

Related Content

The post A closer look at isolated comparators appeared first on EDN.

Dropping a PRTD into a thermistor slot—impossible?

Tue, 10/07/2025 - 19:33

Up front: some background. The air-temperature sensor attached to my (home-brew) rain gauge became flaky. Short-term solution: fix it (done). Longer-term goal: improve it (read on).

That sensor is a standard Vishay NTC (negative temperature coefficient) thermistor: 10k at 25°C and with a beta value of 3977. In conjunction with a load resistor, it feeds a PIC microcontroller (MCU), which samples the resulting voltage (8 bits) for radio-linking back to base for processing and display. Figure 1 shows the utterly conventional circuit together with its response to temperature.

Figure 1 A basic thermistor circuit, together with its calculated response.

The load resistor’s value of 15699 Ω may seem strange, but that is the thermistor’s resistance at 15°C, the mid-point of the desired -9 to +40°C measuring range. Around every 30 seconds, the PIC strobes it for just long enough for the reading to settle.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The plot shows the calculated response together with a straight line running through the two actual calibration points of 0°C (melting, crushed ice) and 30°C (comparison with a known-good thermometer). That response was calculated using the extended Steinhart–Hart equations rather than the less accurate exponential approximation. Steinhart and Hart (S-H) are to NTC thermistors as Callendar and Van Dusen are to platinum resistance temperature detectors (PRTDs), modifying the exponential curve just as Callendar-Van Dusen (CVD) tweaks an otherwise straight line.

The relevant Wikipedia article is, of course, informative. Still, a brief and useful guide to the S–H equations, complete with all the necessary constants, can be found on page 4 of Vishay’s relevant datasheet. Curiously, their tables of resistance versus temperature show truncated rather than rounded values, so they quote our device’s R15 as 15698 ohms rather than 15699. The S–H figure is 15698.76639545805…, give or take a few pico-ohms.

You’ll notice that Figure 1’s plot is upside down! That is deliberate, so a higher temperature shows a higher output, though the voltage actually falls. I think that’s more intuitive; you may disagree.

Matching an RTD to an NTC

That straight line, derived from the S–H values at 0 and 30°C, is the key to this idea. Making the PRTD generate a signal that matches it will avoid any major changes to the processing code, especially the calibration points, and it will also provide a much wider range with greater accuracy than an NTC. Because the voltage from the thermistor circuit is ratiometric, the PRTD must output a level that is a proportion of the supply.

To do that, we amplify the voltage developed across the PRTD, compensate for the CVD departure from linearity, and add an offset. The simplest circuit that can do all these is shown in Figure 2a.

Figure 2 Probably the simplest circuit (2a) that can give an output from a PRTD to match a thermistor’s response, with a slightly better variant (2b). These are both flawed, and the component values are not optimized. They are to show the principle, not the practice.

That simplicity leads to complications, because pretty much every component in Figure 2a interacts with every other one. It’s bad enough to design, even with ideal (simulated) parts, but final calibration could require hours of iterative frustration. Buffering the offset voltage, as shown in Figure 2b, helps, but that extra op-amp can be put to better use.

A practical circuit

If we split the circuit into two, life becomes easier. Figure 3 shows how.

Figure 3 The final, workable circuit. Amplification and offsetting are now separate, making calibration much easier.

The processor turns Q1 on to deliver power. (The previously active-high GPIO pin powering the thermistor must now be active-low to drive Q1’s gate, and that was the only code change needed.) The FDC604 has a low RDS(ON) of a few tens of milliohms, so it drops only 100 µV or so, which is insignificant, even if the measuring ADC’s reference is the Vdd rail. (Offsets within the MCU itself will probably be greater.) Because the circuit is only active for a millisecond every half minute or so, self-heating of the RTD can be ignored. Consumption was about 3 mA at 5 V or 2 mA at 3.3 V.

R1 feeds current through the RTD, producing a voltage that is amplified by A1a, whose gain can be trimmed by R5. R6 feeds back into the RTD and R1 to compensate for both CVD and the varying drive to the RTD as its resistance changes. Its value is fairly critical: 33k works well enough for our purposes, but 31k95—33k||1M0—is almost perfect, with a predicted error of way under 1 millidegree over a 100°C span—theoretically—so we’ll use that. Obviously, this is ridiculous overkill with 8-bit output sampling, but if a single extra resistor can eliminate one source of errors, it’s worth going for.

A1b now amplifies the signal further (and inverts it) and applies a trimmable offset. Its output as a fraction of the supply voltage is now directly proportional to the PRTD’s temperature. Note that the gain of this stage is preset: R7 and R8 should be selected so that their ratio is as close as possible to 3.9, though their absolute values are not critical. The result is shown in Figure 4.

Figure 4 Plotting the output against the RTD’s resistance now gives a result that is almost indistinguishable from the straight-line target, the (idealized) error corresponding to much less than 1 millidegree. This shows the performance limit for this circuit; don’t expect to match it in real life.

Modeling and plotting

A simple program (Python plus Pygame) to plot the circuit’s operation at different scales made it easy to see the effects of changing both R6 and A1a’s gain, with the error curve tilting (gain error) and bending (compensation error). That curve needs to be as straight and flat as possible.

Modeling the first section needed iteration, starting with a (notional) unit voltage feeding R1 and ~0.7 driving R6. Calculating the voltage across the PRTD and amplifying that gave the stage’s output, ready to feed back into R6 for recalculating V_RTD. (Repeating until successive results matched to eight significant figures took no more than ten iterations.) The section representing A1b was trivial: take A1a’s output and multiply by 3.9 while subtracting the offset.
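To make that iteration concrete, here is a rough Python sketch of the loop. The PRTD is modeled with the standard IEC 60751 Callendar-Van Dusen coefficients for a Pt100 (an assumption, since the article does not state the element's R0 or constants), and the R1 value and first-stage gain below are illustrative placeholders rather than Figure 3's actual values; only R6 (33k∥1M0 ≈ 31.95k) comes from the text.

```python
# A rough sketch of the fixed-point iteration described above. The
# Callendar-Van Dusen constants are the standard IEC 60751 values for a
# Pt100; R0, R1, and the first-stage gain are illustrative assumptions,
# NOT Figure 3's actual values. Only R6 (33k || 1M0) comes from the text.
R0 = 100.0                            # assumed Pt100 element
A_CVD, B_CVD = 3.9083e-3, -5.775e-7   # IEC 60751 coefficients, T >= 0 degC

def r_prtd(t_c):
    """PRTD resistance from the Callendar-Van Dusen equation (T >= 0 degC)."""
    return R0 * (1.0 + A_CVD * t_c + B_CVD * t_c ** 2)

R1 = 3_300.0                          # assumed feed resistor from the supply
R6 = 33e3 * 1e6 / (33e3 + 1e6)        # 33k || 1M0 ~= 31.95 k, per the text
GAIN_A1A = 30.0                       # assumed first-stage gain (set by R5)

def stage1(t_c, v_supply=1.0, digits=8, max_iter=20):
    """A1a output: iterate because V_RTD depends on that output via R6."""
    rp = r_prtd(t_c)
    v_a1a = 0.7 * v_supply            # starting guess, as in the article
    for _ in range(max_iter):
        # Assumed RTD node: R1 to the supply, R6 to A1a's output, RTD to ground.
        g = 1.0 / R1 + 1.0 / R6 + 1.0 / rp
        v_rtd = (v_supply / R1 + v_a1a / R6) / g
        v_new = GAIN_A1A * v_rtd
        if abs(v_new - v_a1a) < 10 ** -digits:
            break
        v_a1a = v_new
    return v_a1a

def output(t_c, offset=0.5):
    """A1b, per the article's model: multiply by 3.9, subtract the offset."""
    return 3.9 * stage1(t_c) - offset
```

With placeholder values like these, the loop settles within about ten passes, in line with the behavior described above; substituting the real component values and sweeping the temperature is all that is needed to regenerate the kind of data behind Figure 4 and Table 1.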

As a cross-check, I put the derived values into LTspice and got almost the same results. The slight differences are probably because even simulated op-amp gain stages have finite performance, unlike multiplication signs.

The program also generated Table 1, which may prove useful. It shows the resistance of the PRTD at various temperatures (centered on 15°C) together with the output voltage referred to Vdd and given as a proportion of it. That output is also shown, scaled from 0–255 in both decimal and hex.

The long numbers the program generated have been rounded to more reasonable lengths, which, deliberately, are still more accurate than most test kits can resolve. Too many digits may be useful; too few never are.

Table 1 The PRTD’s resistance and Figure 3’s output calculated against temperature, centered on 15°C. The output is shown as decimals, both raw and rounded, and hex.

Compensating for long leads

As it stands, the circuit does not lend itself to true 3- or 4-wire compensation for the length of the leads to the RTD—unnecessary with an NTC’s multi-kΩ resistance. However, using a 4-wire Kelvin connection, where the power-feed and sensing lines are separate, should work well and reduce the cable’s effect, as shown in Figure 5. With less than a meter separating the RTD from the circuitry, I used speaker cable. (Copper’s TCR is close to that of a PRTD.)

Figure 5 Long leads to a PRTD can cause offset errors. Using a 4-wire Kelvin arrangement minimizes these. If the µC’s A–D has external reference-voltage pins, they can be driven from the circuit for (notionally) improved accuracy.

Figure 5 also shows how accuracy could be improved by driving the ADC’s reference pins from the circuit’s power rails, though this is academic for coarse sampling. It would also compensate for any voltage drop across Q1, should that be important. Q1 could then even be omitted, the circuit being powered directly from an active-high pin. That would drop the rail voltage, which wouldn’t matter if it were fed back to REF+.

This circuit is optimized for a center temperature of 25°C, as that is the point at which most thermistors are specified, with the load resistor equaling the R(25) value. Unlike the 15°C-centered version in Figure 3, I've not built or tried this one, but I believe it to be sound. Its plot—error curve included—looked very close to that in Figure 4, but shifted by 10°C.

Errors, both theoretical and practical

The input offset voltage of op-amps changes with temperature and is a potential source of errors. The quoted figure for the MCP6002 is ±2 µV/°C (typ.), which is good but not insignificant. Heating the circuit by ~40°C (with a 100R resistor replacing the PRTD) gave an output shift corresponding to less than 0.05°, which is acceptable, and in line with calculations. (An old hairdryer is part of my workbench kit.) Here, the circuitry and the PRTD will both be outside, and thus at about the same temperature.

So how does it perform in reality? It’s now built and calibrated exactly as in Figure 3, but not yet installed, allowing testing with a PRTD simulator kludged up from resistors, both fixed and variable, plus switches so the resistance can be connected to either the circuit or a (well-calibrated) meter for precise adjustment. Checking at simulated temperatures from -10 to +50°C showed errors ranging from zero at -10° to -0.22° at +50° with either 3.3 V or 5 V supplies. This could be improved with extra fiddling (I suspect a slight mismatch in R7/8’s ratio; available parts had unhelpful spreads), but the errors are less than the MCU’s 8-bit resolution (~0.351 degrees/count, or ~2.85 counts/degree), so it’ll do the job it’s intended for, and do it well.

While this approach doesn’t substitute for a “proper” PRTD circuit, it does make a nice drop-in replacement for a thermistor, giving a wider measurement range with much better linearity while needing no extra processing. I hope the true experts in the field won’t find too many problems with it. BTW, “expert” derives etymologically from “stuff you’ve learned the hard way: been there, done that, worn the hair shirt”. Never trust an armchair expert unless you’re shopping for comfortable seating.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content

The post Dropping a PRTD into a thermistor slot—impossible? appeared first on EDN.

Next-gen UWB radio to enable radar sensing and data streaming applications

Tue, 10/07/2025 - 11:19

Since the early 2000s, ultra-wideband (UWB) technology has gradually found its way into a variety of commercial applications that require secure and fine-ranging capabilities. Well-known examples are handsfree entry solutions for cars and buildings, locating assets in warehouses, hospitals, and factories, and navigation support in large spaces like airports and shopping malls.

A characteristic of UWB wireless signal transmission is the emission of very short pulses in the time domain. In impulse-radio (IR) UWB technology, this is taken to the extreme by transmitting pulses of nanoseconds or even picoseconds. Consequently, in the frequency domain, it occupies a bandwidth that is much wider than wireless ‘narrowband’ communication techniques like Wi-Fi and Bluetooth.

UWB technology operates over a broad frequency range (typically from 6 to 10 GHz) and uses channel bandwidths of around 500 MHz and higher. And because of that, its ranging accuracy is much higher than that of narrowband technologies.

Today, UWB can provide cm- to mm-level location information between a transmitter (TX) and receiver (RX) that are typically 10-15 meters apart. In addition, enhancements to the UWB physical layer—as part of the adoption of the IEEE 802.15.4z amendment to the IEEE standard for low-rate wireless networks—have been instrumental in enabling secure ranging capabilities.

Figure 1 Here is a representation of UWB and narrowband signal transmission, in the (top) frequency and (bottom) time domain. Source: imec

Over the years, imec has contributed significantly to advancing UWB technology and overcoming the challenges that have hindered its widespread adoption. That includes reducing its power consumption, enhancing its bit rate, increasing its ranging precision, making the receiver chip more resilient against interference from other wireless technologies operating in the same frequency band, and enabling cost-effective CMOS silicon chip implementations.

Imec researchers developed multiple generations of UWB radio chips, compliant with the IEEE 802.15.4z standard for ranging and communication. Imec’s transmitter circuits operate through innovative pulse shape and modulation techniques, enabled by advanced polar transmitter, digital phase-locked loop (PLL), and ring oscillator-based architectures—offering mm-scale ranging precision at low power consumption.

At the receiver side, circuit design innovations have contributed to an outstanding interference resilience while minimizing power consumption. The various generations of UWB prototype transmitter and transceiver chips have all been fabricated with cost-effective CMOS-compatible processing techniques and are marked by small silicon areas.

The potential of UWB for radar sensing

Encouraged by the outstanding performance of UWB technology, experts have been claiming for some time that UWB’s potential is much larger than ‘accurate and secure ranging.’ They were seeing opportunities in radar-like applications which, as opposed to ranging, employ a single device that emits UWB pulses and analyzes the reflected signals to detect ‘passive’ objects.

When combined with UWB’s precise ranging capabilities, this could broaden the applications to automotive use cases such as in-cabin presence detection and monitoring the occupants’ gestures and breathing, aimed at increasing their safety.

Or think about smart homes, where UWB radar sensors could be used to adjust the lighting environment based on people’s presence. In nursing homes, the technology could be deployed to initiate an alert based on fall detection without the need for intrusive camera monitoring.

Enabling such UWB use cases will be facilitated by IEEE 802.15.4ab, the next-generation standard for wireless technology, which is expected to be officially released around year-end. 802.15.4ab will offer multiple enhancements, including radar functionality in IR-UWB devices, turning them into sensing-capable devices.

Fourth gen IR-UWB radio compliant with 802.15.4z/ab

At the 2025 Symposium on VLSI Technology and Circuits (VLSI 2025), imec presented its fourth-generation UWB transceiver, compliant with the baseline for radar sensing as defined by preliminary versions of 802.15.4ab. Baseline characteristics include, among others, enhanced modulation supported by high data rates.

Additionally, imec’s UWB radar sensing technology implements unique features offering enhanced radar sensing capabilities (such as extended range) and a record-high data rate of 124.8 Mbps integrated in a system-on-chip (SoC). Being also compliant with the current 802.15.4z standard, the new radio combines its radar sensing capabilities with communication and secure ranging.

Figure 2 The photograph captures the fourth-generation IR-UWB radio system. Source: imec

A unique feature of imec’s IR-UWB radar sensing system is the 2×2 MIMO architecture, with two transmitters and two receivers configured in full duplex mode. In this configuration, a duplexer controls whether the transceiver operates in transmit or receive mode. Also, the TXs and RXs are paired together—TX1-RX1, TX1-RX2, and TX2-RX2—connected by the duplexer.

This allows the radar to simultaneously operate in transmit and receive mode without having to use RF switches to toggle from one mode to the other. This way of working enables reducing the nearest distance over which the radar can operate—a metric that is traditionally limited by the time needed to switch between both modes.

Imec’s full-duplex-based radar can operate in the range between 30 cm and 3 m, a breakthrough achievement. In this full-duplex MIMO configuration, the nearest distance is only restricted by the radar’s 500-MHz bandwidth.

The IR-UWB 2TRX radar physically implements two antenna elements, each antenna being shared between one TX and one RX. The 2×2 MIMO full-duplex configuration, however, effectively creates a virtual array of three antennas, which substantially improves the radar's angular resolution while reducing its area consumption.
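The "three virtual antennas" figure follows from the usual MIMO-radar construction, in which each TX/RX pairing contributes a virtual element at the sum of its TX and RX positions; a toy sketch (positions assumed, in units of the physical antenna spacing):

```python
# Toy illustration of the virtual array: each TX/RX pairing contributes a
# virtual element at the sum of its TX and RX positions (positions below
# are assumed, in units of the physical antenna spacing).
tx = [0, 1]                # TX1, TX2 on the two shared physical antennas
rx = [0, 1]                # RX1, RX2 on the same two antennas
virtual = sorted({t + r for t in tx for r in rx})
print(virtual)             # [0, 1, 2] -> three distinct virtual elements
```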

Compared with state-of-the-art single-input-single-output (SISO) radars, this radar occupies a 1.7x smaller area with 2.5x fewer antennas, making it a highly performant, compact, and cost-effective solution. Advanced techniques are used to isolate the TX from the RX signals, resulting in >30 dB of isolation over a 500-MHz bandwidth.

Figure 3 This architecture of the 2TRX was presented at VLSI 2025. Source: imec

Signal transmission relies on a hybrid analog/digital polar transmitter, introducing filtering effects in the analog domain for signal modulation. This results in a clean transmit signal spectrum, supporting the good performance and low power operation of the UWB radar sensor.

Finally, in addition to the MIMO-based analog/RF part, the UWB radar sensing device features an advanced digital baseband (or modem), responsible for signal processing. This component extracts relevant information such as the distance between the radar and the object, and an estimation of the angle of arrival.

Proof-of-concept: MIMO radar for in-cabin monitoring

The features of IR-UWB MIMO-based radar technology are particularly attractive for automotive use cases, where the UWB radar can be used not only to detect whether someone is present in the car, for example, child presence detection, but also to map the vehicle’s occupancy and monitor vital signs such as breathing. This capability is currently on the roadmap of several automotive OEMs and tier-1 suppliers.

But today, no radar technology can deliver this functionality with the required accuracy. Particularly challenging is achieving the angular resolution needed to detect two targets at the same (short) distance from the radar. In addition, for breathing monitoring, small movements of the target must be discerned within a period of a few seconds.

Figure 4 The in-cabin IR-UWB radar was demonstrated at PIMRC 2025. Source: imec

At the 2025 IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (IEEE PIMRC 2025), imec researchers presented the first proof-of-concept, showing the ability of the IR-UWB MIMO radar system to perform two in-cabin sensing tasks: occupancy detection and breathing rate estimation. In-cabin measurements were carried out inside a small car.

The UWB platform was placed in front of an array of two in-house-developed antenna elements mounted in the center of the car ceiling, close to the rear-view mirror. The distance from the antennas to the center of the driver and front passenger seats was 55 cm.

The experimental results confirm achieving a high precision for estimating the angle-of-arrival and breathing rate. For instance, for a scenario where both passenger and driver seats are occupied, the UWB radar system achieves a standard deviation of less than 1.90 degrees and 2.95 bpm, for angle-of-arrival and breathing rate estimations, respectively.

Figure 5 Extracted breathing signals for driver and passenger were presented at PIMRC 2025. Source: imec

Imec researchers also highlight an additional benefit of using UWB technology for in-cabin monitoring: the TRX architecture, which is already used in some cars for keyless entry, can be re-purposed for the radar applications, cutting the overall costs.

High data rate opens doors to data streaming applications

In addition to radar sensing capabilities, this IR-UWB transceiver offers another feature that sets it apart from existing UWB solutions: it provides a record-high data rate of 124.8 Mbps, the highest data rate that is still compatible with the upcoming 802.15.4ab standard.

This is about a factor of 20 higher than the 6.8 Mbps data rate currently in use in ranging and communication applications; it results from an optimization of both the analog front-end and digital baseband. The high data rate also comes with a low energy per bit—much lower than consumed by Wi-Fi—especially at the transmit side.

These features will unlock new applications in both audio and video data streaming. Possible use cases are next-generation smart glasses or VR/AR devices, for which the UWB TRX’s small form factor is an added advantage.

Adding advanced ranging to UWB portfolio

In the last two decades, IEEE 802.15.4z-compliant UWB technology has proven its ability to support mass-market secure-ranging and localization deployments, enabling use cases across the automotive, smart industry, smart home, and smart building markets. Supported by the upcoming IEEE 802.15.4ab standard, emerging UWB devices can now also be equipped with radar functionality.

Imec’s fourth generation of IR-UWB technology is the first (publicly reported) 802.15.4ab compliant radar-sensing device, showing robust radar-sensing capabilities; it’s suitable for automotive as well as smart home use cases. The record high data rate also shows UWB’s potential to tap new markets: low-power data streaming for smart glasses or AR/VR devices.

The IEEE 802.15.4ab standard supports yet another feature: advanced ranging. This will enhance the link budget for signal transmission, translating into a fourfold increase in the ranging distance—up to 100 m in the case of a free line of sight. This feature is expected to significantly enhance the user experience for keyless entry solutions for cars and smart buildings.

Not only can it improve the operating distance, but it can also better address challenging environments such as when the signal is blocked by another object, for example, body blocking. Ongoing developments will enable this advanced ranging capability as a new feature in imec’s fifth generation of UWB technology.

The future looks bright for UWB technology. Not only do technological advances follow each other at a rapid pace, but ongoing standardization efforts help shape current and future UWB applications.

Christian Bachmann is the portfolio director of wireless and edge technologies at imec. He oversees UWB and Bluetooth programs enabling next-generation low-power connectivity for automotive, medical, consumer, and IoT applications. He joined imec in 2011 after working with Infineon Technologies and the Graz University of Technology.

Related Content

The post Next-gen UWB radio to enable radar sensing and data streaming applications appeared first on EDN.

A digital frequency detector

Mon, 10/06/2025 - 19:21

I designed the circuit in Figure 1 as a part of a data transmission system that has a carrier frequency of 400 kHz using on-off keying (OOK) modulation.

I needed to detect the presence of the carrier by distinguishing it from other signals of different frequencies. The signal had already been converted to digital form with 5-V logic. I wanted to avoid using programmable devices and timers based on RC circuits.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The resulting circuit is made up of four chips, including a crystal time base. In brief, this system measures the time between the rising edges of the received signal on a cycle-by-cycle basis. Thus, it detects if the incoming signal is valid or not in a short time (approximately one carrier cycle, that is ~2.5 µs). This is done independently of the signal duty cycle and in less time than other systems, such as a phase-locked loop (PLL), which may take several cycles to detect a frequency.

Figure 1 A digital frequency detector circuit that detects the presence of a 400-kHz carrier, distinguishing it from signals of other frequencies, after it has been converted to digital using 5-V logic.

How it works

In the schematic, IC1A and IC1B are the 6.144 MHz crystal oscillator and a buffer, respectively. For X1, I used a standard quartz crystal salvaged from an old microprocessor board.

The flip-flops IC2A and IC2B are interconnected such that a rising edge at the IC2A clock input (connected to the signal input) produces, through its output and IC2B input, a low logic level at IC2B Q output. Immediately afterwards, the low logic level resets IC2A, thereby leaving IC2B ready to receive a rising edge at its clock input, which causes its Q output to return to high again. Since the IC2B clock input is continuously receiving the 6.144 MHz clock, the low logic level at its output will have a very short duration. That very narrow pulse presets IC3, which takes its counting outputs to “0000”.

If IC4A is in a reset condition, that pulse will also set it in the way explained below, with the effect of releasing IC4B by deactivating its input (pin 4 of IC4) and enabling IC3 by pulling its input low.

From that instant, IC3 will count the 6.144 MHz pulses, and, if the next rising edge of the input signal occurs when IC3’s count is at “1110” or “1111”, IC1C’s output will be at a low level, so the IC4B output will go high, indicating that a cycle with about the correct period (2.5µs) has been received. Simultaneously, IC3 will be preset to start a new count. If the next rising edge occurred when the IC3 count was not yet at “1110”, IC3 would still be preset, but the circuit output would go low. This last scenario corresponds to an input frequency higher than 400 kHz.

On the contrary, if, after the last rising edge, a longer time than a valid period passes, the functioning of the circuit will be the following. When the IC3 count reaches the value “1111”, a 6.144 MHz clock pulse will occur at the signal input instead of a rising edge. This will make the IC4A Q output take the low level present at the IC3 output and the IC4A data input.

The low level at IC4A Q output will set IC4B, and the circuit output will go low. As IC4A Q output is also connected to its own data input, that low level caused by a pulse at its clock input will prevent that flip-flop from responding to further clock pulses. From then on, the only way of taking IC4A out of that state will be by applying a low level (could be a very narrow pulse, as in this case) at its input (pin 10 of IC4). That would establish a forbidden condition for an instant, making IC4A first pull both Q and Q̄ high, and then immediately change to low.

As a result of the circuit logic and timing, after a complete cycle with a period of approximately 2.5 µs is received, the circuit output goes high and remains in that state until a shorter cycle is received, or until a longer time than the correct period elapses without a complete cycle.
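As a sanity check on that timing, here is a small Python sketch, not a gate-level model of Figure 1, that counts 6.144-MHz clock periods between consecutive rising edges of the input and accepts a cycle when the counter would be sitting at "1110" or "1111". The phase handling is deliberately crude, since the real behavior at the band edges depends on the input-to-clock phase, as the testing results below show.

```python
# A sketch of the measurement principle, not a gate-level model of
# Figure 1: count 6.144-MHz clock periods between consecutive rising
# edges of the input, and accept the cycle if the counter would be
# sitting at "1110" or "1111" when the next edge arrives (the first
# clock pulse ends the counter preset, so it is not counted).
F_CLK = 6.144e6
T_CLK = 1.0 / F_CLK

def cycle_is_valid(period_s, phase_s=0.0):
    """phase_s crudely models the input-to-clock phase, which decides the
    outcome in the border zones described in the testing section."""
    clock_edges = int((period_s + phase_s) / T_CLK)   # edges seen in one cycle
    count = clock_edges - 1                           # first edge only ends the preset
    return count in (14, 15)                          # counter at "1110" or "1111"

# Nominal acceptance window implied by 15-16 clock periods per cycle:
print(f"{F_CLK / 16 / 1e3:.1f} kHz to {F_CLK / 15 / 1e3:.1f} kHz")  # ~384.0 to ~409.6 kHz
print(cycle_is_valid(1 / 400e3))   # True: a 2.5-us (400-kHz) cycle
print(cycle_is_valid(1 / 500e3))   # False: too short a period
```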

Testing the circuit

I tested the circuit with signals from 0 to 10 MHz. Frequencies between 384 kHz and 405 kHz, or periods between 2.47 µs and 2.6 µs, produced a high level at the output. These values correspond to approximately 15 to 16 pulses of the 6.144 MHz clock; the first of those pulses is used to end the presetting of counter IC3, so it is not counted.

Frequencies lower than 362 kHz or higher than 433 kHz produced a low logic level. For frequencies between 362 kHz and 384 kHz and between 405 kHz and 433 kHz, the circuit produced pulses at the output. That means that for an input period between 2.31 µs and 2.47 µs or between 2.60 µs and 2.76 µs, there will be some likelihood that the output will be in a high or low logic state. That state will depend on the phase difference between the input signal and the 6.144 MHz clock.

Figure 2 shows a five-pulse 400 kHz burst (lower trace), which is applied to the input of the circuit. The upper trace is the output; it can be seen that, after the first cycle has been measured, the output goes high and stays in that state as more 2.5-µs cycles keep arriving. After a time slightly longer than 2.5 µs without a complete cycle (~2.76 µs), the output goes low.

Figure 2 A five-pulse 400-kHz burst applied to the input of the digital frequency detector circuit (lower trace) and the resulting output (upper trace), which goes high after the first cycle has been measured.

Ariel Benvenuto is an electronics engineer with a PhD in physics and works in research with IFIS Litoral in Santa Fe, Argentina.

Related Content

The post A digital frequency detector appeared first on EDN.

Can a smart ring make me an Ultrahuman being?

Mon, 10/06/2025 - 17:53

In last month’s smart ring overview coverage, I mentioned two things that are particularly relevant to today’s post:

  • I’d be following it up with a series of more in-depth write-ups, one per ring introduced in the overview, the first of which you’re reading here, and
  • Given the pending ITC (International Trade Commission) block of further shipments of RingConn and Ultrahuman smart rings into the United States (save for warranty replacements for existing owners), a ruling announced a few days prior to my submission of the overview writeup to Aalyia, I planned to prioritize the RingConn and Ultrahuman posts in the hopes of getting them published prior to the October 21 deadline, in case US readers were interested in purchasing either of them ahead of time (note, too, that the ITC ruling doesn't affect readers in other countries, of course).
Color compatibility

Since the Ultrahuman Ring AIR was the first one that came into my possession, I’ll dive into its minutiae first. To start, I’ll note, in revisiting the photo from last time of all three manufacturers’ rings on my left index finger, that the Ultrahuman ring’s “Raw Titanium” color scheme option (it’s the one in the middle, straddling the Oura Gen3 Horizon to its left and the RingConn Gen 2 to its right) most closely matches the patina of my wedding band:

Here’s the Ultrahuman Ring AIR standalone:

Skip the app

Next up is sizing, discussed upfront in last month’s write-up. Ultrahuman is the only one of the three that offers a sizing app as a (potential) alternative to obtaining a kit, although candidly, I don’t recommend it, at least from my experiences with it. Take a look at the screenshots I took when using it again yesterday in prepping for this piece (and yes, I intentionally picked a size-calibrating credit card from my wallet whose account number wasn’t printed on the front!):

I’ll say upfront that the app was easy to figure out and use, including the ability to optionally disable “flash” supplemental illumination (which I took advantage of because with it “on”, the app labeled my speckled desktop as a “noisy background”).

That said, first off, it’s iOS-only, so folks using Android smartphones will be SOL unless they alternatively have an Apple tablet available (as I did; these were taken using my iPad mini 6). Secondly, the app’s finger-analysis selection was seemingly random (ring and middle finger on my right hand, but only middle finger on my left hand…in neither case the index finger, which was my preference). Thirdly, app sizing estimates undershot by one or multiple sizes (depending on the finger) what the kit indicated was the correct size. And lastly, the app was inconsistent use-to-use; the first time I’d tried it in late May, here’s what I got for my left hand (I didn’t also try my right hand then because it’s my dominant one and I therefore wasn’t planning on wearing the smart ring on it anyway):

Sub-par charging

Next, let’s delve a bit more into the previously mentioned seeming firmware-related battery life issue I came across with my initial ring. Judging from the June 2024 date stamps of the documentation on Ultrahuman’s website, the Ring AIR started shipping mid-last year (following up on the thicker and heavier but functionally equivalent original Ultrahuman R1).

Nearly a year later, when mine came into my possession, new firmware updates were still being released at a surprisingly (at least to me) rapid clip. As I’d mentioned last month, one of them had notably degraded my ring’s battery life from the normal week-ish to a half day, as well as extending the recharge time from less than an hour to nearly a full day. And none of the subsequent firmware updates I installed led to normal-operation recovery, nor did my attempted full battery drain followed by an extended delay before recharge in the hope of resetting the battery management system (BMS). I should also note at this point that other Redditors have reported that firmware updates not only killed rings’ batteries but also permanently neutered their wireless connectivity. 

What happened to the original ring? My suspicion is that it actually had something to do with an inherently compromised (coupled with algorithm-worsened) charging scheme that led to battery overcharge and subsequent damage. Ultrahuman bundles a USB-C-to-USB-C cable with the ring, which would imply (incorrectly, as it turns out) that the ring charging dock circuitry can handle (including down-throttling the output as needed) any peak-wattage USB-C charger that you might want to feed it with, including (but not limited to) USB-PD-capable ones.

In actuality, product documentation claims that you should connect the dock to a charger with only a maximum output of 5W/2A. After doing research on Amazon and elsewhere, I wasn’t able to find any USB-C chargers that were that feeble. So, to get there at all, I had to dig out of storage an ancient Apple 5W USB-A charger, which I then mated to a third-party USB-A-to-USB-C cable.

That all said, following in the footsteps of others on the Ultrahuman subreddit who’d had similar experiences (and positive results), I reached out to the Reddit forum moderators (who are Ultrahuman employees, including the founder and CEO!) and after going through a few more debugging steps they’d suggested (which I’d already tried, but whatevah), got shipped a new ring.

It’s been stable through multiple subsequent firmware updates, with the stored charge dropping only ~10-15% per day (translating to the expected week-ish of between-charges operating life). And the pace of new firmware releases has also now notably slowed, suggestive of either increasing code stability or a refocus on development of the planned new product that aspires to avoid Oura patent infringement…I’m hoping for the more optimistic former option!

Other observations

More comments, some of which echo general points made in last month’s write-up:

  • Since this smart ring, like those from Oura, leverages wireless inductive charging, docks are ring-size-specific. If you go up or down a size or a few, you’ll need to re-purchase this accessory (one comes with each ring, so this is specifically a concern if, like me, you’ve already bought extras for travel, elsewhere in the house, etc.)

  • There’s no battery case available that I’ve come across, not even a third-party option.
  • That 10-15% per day battery drop metric I just mentioned is with the ring in its initial (sole) “Turbo” operating mode, not with the subsequently offered (and now default) “Chill” option. I did drop it down to “Chill” for a couple of days, which decreased the per-day battery-level drop by a few percent, but nothing dramatic. That said, my comparative testing wasn’t extensive, so my results should be viewed as anecdotal, not scientific. Quoting again from last month’s writeup:

Chill Mode is designed to intelligently manage power while preserving the accuracy of your health data. It extends your Ring AIR battery life by up to 35% by tracking only what matters, when it matters. Chill Mode uses motion and context-based intelligence to track heart rate and temperature primarily during sleep and rest.

  • It (like the other smart rings I also tested) misinterpreted keyboard presses and other finger-and-hand movements as steps, leading to over-measurement results, especially on my dominant right hand.
  • While Bluetooth LE connectivity extends battery life compared to a “vanilla” Bluetooth alternative, it also notably reduces the ring-to-phone connection range. Practically speaking, this isn’t a huge deal, though, since the data is viewed on the phone. The act of picking the phone up (assuming your ring is also on your body) will also prompt a speedy close-proximity preparatory sync.
  • Unlike Oura (and like RingConn), Ultrahuman provides membership-free full data capture and analysis capabilities. That said, the company sells optional Powerplug software add-ons to further expand app functionality, along with extended warranties that, depending on the duration, also include one free replacement ring in case your sizing changes due to, for example, ring-encouraged and fitness-induced weight loss.
  • The app will also automatically sync with other health services, such as Fitbit and Android’s built-in Health Connect. That said, I wonder (but haven’t yet tested to confirm or deny) what happens if, for example, I wear both the ring and an inherently Fitbit-cognizant Google Pixel Watch (or, for that matter, my Garmin or Withings smartwatches).

  • One other curious note: Ultrahuman claims that it’s been manufacturing rings not only in its headquarters country, India, but also in the United States since last November in partnership with a contractor, SVtronics. And in fact, if you look at Amazon’s product page for the Ring AIR, you’ll be able to select between “Made in India” and “Made in USA” product ordering options. Oura, conversely, has indicated that it believes the claimed images of US-located manufacturing facilities are “Photoshop edits” with no basis in reality. I don’t know, nor do I particularly care, what the truth is here. I bring it up only to exemplify the broader contentious nature of ongoing interactions between Oura and its upstart competitors (also including pointed exchanges with RingConn).

Speaking of RingConn, and nearing 1,600 words at this point, I’m going to wrap up my Ultrahuman coverage and switch gears for my other planned post for this month. Time (and ongoing litigation) will tell, I guess, as to whether I have more to say about Ultrahuman in the future, aside from the previously mentioned (and still planned) teardown of my original ring. Until then, reader thoughts are, as always, welcomed in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Can a smart ring make me an Ultrahuman being? appeared first on EDN.

Universal homing sensor: A hands-on guide for makers, engineers

Mon, 10/06/2025 - 09:51

A homing sensor is a device used in certain machines to detect a fixed reference point, allowing the machine to determine its exact starting position. When powered on, the machine moves until it triggers the sensor, so it can accurately track movement from that point onward. It’s essential for precision and repeatability in automated motion systems.

Selecting the right homing sensor can have a big impact on accuracy, dependability, and overall cost. Here is a quick rundown of the three main types:

Mechanical homing sensors: These operate through direct contact, using switches or levers to determine position.

  • Advantages: Straightforward, budget-friendly, and easy to install.
  • Drawbacks: Prone to wear over time, slower to respond, and less accurate.

Magnetic homing sensors: Relying on magnetic fields, often via Hall effect sensors, these do not require physical contact.

  • Advantages: Long-lasting, effective in harsh environments, and maintenance-free.
  • Drawbacks: Can be affected by magnetic interference and usually offer slightly less resolution than optical sensors.

Optical homing sensors: These use infrared light paired with slotted discs or reflective surfaces for detection.

  • Advantages: Extremely precise, quick response time, and no mechanical degradation.
  • Drawbacks: Sensitive to dust and misalignment and typically come at a higher cost.

In clean, high-precision applications like 3D printers or CNC machines, optical sensors shine. For more demanding or industrial environments, magnetic sensors often strike the right balance. And if simplicity and low cost are top priorities, mechanical sensors remain a solid choice.

Figure 1 Magnetic, mechanical, and optical homing sensors are available in standard configurations. Source: Author

The following parts of this post detail the design framework of a universal homing sensor adapter module.

We will start with a clean, simplified schematic of the universal homing sensor adapter module. Designed for broad compatibility, it accepts logic-level inputs—including both CMOS and TTL-compatible signals—from nearly any homing sensor head, whether it’s mechanical, magnetic, or optical, making it a flexible choice for diverse applications.

Figure 2 A minimalistic design highlights the inherent simplicity of constructing a universal homing sensor module. Source: Author

The circuit is simple, economical, and built using easily sourced, budget-friendly components. True to form, the onboard test button (SW1) mirrors the function of a mechanical homing sensor, offering a convenient stand-in for setup and troubleshooting tasks.

The 74LVC1G07 (IC1) is a single buffer with an open-drain output. Its inputs accept signals from both 3.3 V and 5 V devices, enabling seamless voltage translation in mixed-signal environments. Schmitt-trigger action at all inputs ensures reliable operation even with slow input rise and fall times.

Optional flair: LED1 is not strictly necessary, but it offers a helpful visual cue. I tested the setup with a red LED and a 1-kΩ resistor (R3)—simple, effective, and reassuringly responsive.

As usual, I whipped up a quick-and-dirty breadboard prototype using an SMD adapter PCB (SOT-353 to DIP-6) to host the core chip (Figure 3). I have skipped the prototype photo for now—there is only a tiny chip in play, and the breadboard layout does not offer much visual clarity anyway.

Figure 3 A good SMD adapter PCB gives even the tiniest chip time to shine. Source: Author

A personal note: I procured the 74LVC1G07 chip from Robu.in.

Just before the setup reaches its close, note that machine homing involves moving an axis toward its designated homing sensor—a specific physical location where a sensor or switch is installed. When the axis reaches this point, the controller uses it as a reference to accurately determine the axis position. For reliable operation, it’s essential that the homing sensor is mounted precisely in its intended location on the machine.
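To put that in context, here is an illustrative sketch of a typical two-pass homing routine. The motion and sensor helpers are hypothetical placeholders for whatever API the machine controller actually exposes, and the sensor is assumed to be active-low, as an open-drain output with a pull-up would be.

```python
# An illustrative two-pass homing routine. The callables are hypothetical
# placeholders for whatever motion/IO API the controller provides; the
# sensor is assumed active-low (open-drain output, pulled high when idle).
def home_axis(read_sensor, move_at, stop, move_by, set_origin,
              fast_mm_s=5.0, creep_mm_s=0.5, backoff_mm=2.0):
    move_at(-fast_mm_s)              # fast approach toward the home sensor
    while read_sensor() != 0:        # wait for the output to be pulled low
        pass
    stop()
    move_by(+backoff_mm)             # back off the trigger point
    move_at(-creep_mm_s)             # slow second approach for repeatability
    while read_sensor() != 0:
        pass
    stop()
    set_origin(0.0)                  # latch this position as the reference
```

The slow second approach is what gives the repeatability the paragraph above asks for: the trigger point is always crossed at the same low speed, regardless of where the axis started.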

While wrapping up, here are a few additional design pointers for those exploring alternative options, since we have only touched on a straightforward approach so far. Let’s take a closer look at a few randomly picked additional components and devices that may be better suited for the homing task:

  • SN74LVC1G16: Inverting buffer featuring Schmitt-trigger input and open-drain output; ideal for signal conditioning and noise immunity.
  • SN74HCS05: Hex inverter with Schmitt-trigger inputs and open-drain outputs; useful for multi-channel logic interfacing.
  • TCST1103/1202/1300: Transmissive optical sensor with phototransistor output; ideal for applications that require position sensing or the detection of an object’s presence or absence.
  • TCRT5000: Reflective optical sensor; ideal for close-proximity detection.
  • MLX75305: Light-to-voltage sensor (EyeC series); converts ambient light into a proportional voltage signal, suitable for optical detection.
  • OPBxxxx Series: Photologic slotted optical switches; designed for precise object detection and position sensing in automation setups.

Moreover, compact inductive proximity sensors like the Omron E2B-M18KN16-M1-B1 are often used as homing sensors to detect metal targets—typically a machine part or actuator—at a fixed reference point. Their non-contact operation ensures reliable, repeatable positioning with minimal wear, ideal for robotic arms, linear actuators, and CNC machines.

Figure 4 The Omron E2B-M18KN16-M1-B1 inductive proximity sensor supports homing applications by detecting metal targets at fixed reference points. That enables precise, contactless positioning in industrial setups. Source: Author

Finally, if this felt comfortably familiar, take it as a cue to go further; question the defaults, reframe the problem, and build what no datasheet dares to predict.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Universal homing sensor: A hands-on guide for makers, engineers appeared first on EDN.

Amazon and Google: Can you AI-upgrade the smart home while being frugal?

Fri, 10/03/2025 - 18:10

The chronological proximity of Amazon and Google’s dueling new technology and product launch events on Tuesday and Wednesday of this week was highly unlikely to have been a coincidence. Which company, therefore, reacted to the other? Judging solely from when the events were first announced, which is the only data point I have as an outsider, it looks like Google was the one who initially put the stake in the ground on September 2nd with an X (the service formerly known as Twitter) post, with Amazon subsequently responding (not to mention scheduling its event one day earlier in the calendar) two weeks later, on September 15.

Then again, who can say for sure? Maybe Amazon started working on its event ahead of Google, and simply took longer to finalize the planning. We’ll probably never know for sure. That said, it also seems from the sidelines that Amazon might have also gotten its hands on a leaked Google-event script (to be clear, I’m being completely facetious with what I just said). That’s because, although the product specifics might have differed, the overall theme was the same: both companies are enhancing their existing consumer-residence ecosystems with AI (hoped-for) smarts, something that they’ve both already announced as an intention in the past:

Quoting from one of Google’s multiple event-tied blog posts as a descriptive example of what both companies seemingly aspire to achieve:

The idea of a helpful home is one that truly takes care of the people inside it. While the smart home has shown flashes of that promise over the last decade, the underlying AI wasn’t anywhere as capable as it is today, so the experience felt transactional, not conversational. You could issue simple commands, but the home was never truly conversational and seldom understood your context.

 Today, we’re taking a massive step toward making the helpful home a reality with a fundamentally new foundation for Google Home, powered by our most capable AI yet, Gemini. This new era is built on four pillars: a new AI for your home, a redesigned app, new hardware engineered for this moment and a new service to bring it all together.

Amazon’s hardware “Hail Mary”

Of the two companies, Amazon has probably got the most to lose if it fumbles the AI-enhancement service handoff. That’s because, as Ars Technica’s coverage title aptly notes, “Alexa’s survival hinges on you buying more expensive Amazon devices”:

Amazon hasn’t had a problem getting people to buy cheap, Alexa-powered gadgets. However, the Alexa in millions of homes today doesn’t make Amazon money. It’s largely used for simple tasks unrelated to commerce, like setting timers and checking the weather. As a result, Amazon’s Devices business has reportedly been siphoning money, and the clock is ticking for Alexa to prove its worth.

I’m ironically a case study of Amazon’s conundrum. Back in early March, when the Alexa+ early-access program launched, I’d signed up. I finally got my “Your free Early Access to Alexa+ starts now” email on September 24, a week and a day ago, as I’m writing this on October 2. But I haven’t yet upgraded my service, which is admittedly atypical behavior for a tech enthusiast such as myself.

Why? Price isn’t the barrier in my particular case (though it likely would be for others less Amazon-invested than me); mine’s an Amazon Prime-subscribing household, so Alexa+ is bundled versus costing $19.99 per month for non-subscribers. Do the math, though, and why anyone wouldn’t go the bundle-with-Prime route is the question (which, I’d argue, is Amazon’s core motivation); Prime is $14.99 per month or $139/year right now.

So, if it’s not the service price tag, then what alternatively explains my sloth? It’s the devices—more accurately, my dearth of relevant ones—with the exception of the rarely-used Alexa app on my smartphones and tablets (which, ironically, I generally fire up only when I’m activating a new standalone Alexa-cognizant device).

Alexa+ is only supported on newer-generation hardware, whereas more than half (and the dominant share in regular use) of the devices currently activated in my household are first-generation Echoes, early-generation Echo Dots, and a Tap. With the exception of the latter, which I sometimes need to power-cycle before it’ll start streaming Amazon Music-sourced music again, they’re all still working fine, at least for the “transactional” (per Google’s earlier lingo) functions I’ve historically tasked them with.

And therefore, as an example of “chicken and the egg” paralysis, in the absence of their functional failure, I’m not motivated to proactively spend money to replace them in order to gain access to additional Alexa+ services that might not end up rationalizing the upfront investment.

Speakers, displays, and stylus-augmented e-book readers

Amazon unsurprisingly announced a bevy of new devices this week, strangely none of which seemingly justified a press release or, come to think of it, even an event video, in stark contrast to Apple’s prerecorded-only approach (blog posts were published aplenty, however). Many of the new products are out-of-the-box Alexa+ capable and, generally speaking, they’re also more expensive than their generational precursors. First off is the curiously reshaped (compared to its predecessor) Echo Studio, in both graphite (shown) and “glacier” white color schemes:

There’s also a larger version of the now-globular Echo Dot (albeit still smaller than the also-now-globular Echo Studio), called the Echo Dot Max, with the same two color options:

And two also-redesigned-outside smart displays, the Echo Show 11 and latest-generation Echo Show 8, which basically (at least to me) look like varying-sized Echo Dots with LCDs stuck to their fronts. They both again come in both graphite and glacier white options:

and also have optional, added-price, more position-adjustable stands:

This new hardware raises the perhaps-predictable question: Why is my existing hardware not Alexa+ capable? Assuming all the deep learning inference heavy lifting is being done on the Amazon “cloud”, what resource limitations (if any) exist with the “edge” devices already residing in my (at least semi-) smart home?

Part of the answer might lie in my assumption in the prior sentence; perhaps Amazon intends for these devices to have (at least limited) ongoing standalone functionality if broadband goes down, which would require beefier processing and memory than that included with my archaic hardware. Perhaps, too, even if all the AI processing is done fully server-side, Amazon’s responsiveness expectations aren’t adequately served by my devices’ resources, in this case also including Wi-Fi connectivity. And yes, to at least some degree, it may just be another “obsolescence by design” case study. Sigh. More likely, my initial assumption was over-simplistic and at least a portion of the inference functions suite is running natively on the edge device using locally stored deep learning models, particularly for situations where rapid response time (vs edge-to-cloud-and-back round-trip extended latency) is necessary.

Other stuff announced this week included three new stylus-inclusive, therefore scribble-capable, Kindle Scribe 11” variants, one with a color screen, which this guy, who tends to buy—among other content—comics-themed e-books that are only full-spectrum appreciable on tablet and computer Kindle apps, found intriguing until he saw the $629.99-$679.99 price tag (in fairness, the company also sells stylus-less, but notably less expensive Colorsoft models):

and higher-resolution indoor and outdoor Blink security cameras, along with a panorama-stitching two-camera image combiner called the Blink Arc:

A curious blue re-embrace

Speaking of security cameras, Ring founder Jamie Siminoff, who had previously left Amazon post-acquisition, has returned and was on hand this week to personally unveil also-resolution-bumped (this time branded as Retinal Vision) indoor- and outdoor-intended hardware, including an updated doorbell camera model:

Equally interesting to me are Ring’s community-themed added and enhanced services: Familiar Faces, Alexa+ Greetings, and (for finding lost dogs) Search Party. And then there’s this notable revision of past stance, passed along as a Wired coverage quote absent personal commentary:

It’s worth noting that Ring has brought back features that allow law enforcement to request footage from you in the event of an incident. Ring customers can choose to share video, and they can stay anonymous if they opt not to send the video. “There is no access that we’re giving police to anything other than the ability to, in a very privacy-centric way, request footage from someone who wants to do this because they want to live in a safe neighborhood,” Siminoff tells WIRED.

A new software chapter

Last, but not least (especially in the last case) are several upgraded Fire TVs, still Fire OS-based:

and a new 4K Fire TV Stick, the latter the first out-of-box implementation example of Amazon’s newfound Linux embrace (and Linux-derived Android about-face), Vega OS:

We’d already known for a while that Amazon was shutting down its Appstore, but its Fire OS-to-Vega OS transition is more recent. Notably, there’s no more local app sideloading allowed; all apps come down from the Amazon cloud.

Google’s more modest (but comprehensive) response

Google’s counterpunch was more muted, albeit notably (and thankfully, from a skip-the-landfill standpoint) more inclusive of upgrades for existing hardware versus the day-prior comparative fixation on migrating folks to new devices, and reflective of a company that’s fundamentally a software supplier (with a software-licensing business model). Again from Wired’s coverage:

This month, Gemini will launch on every Google Assistant smart home device from the last decade, from the original 2016 Google Home speaker to the Nest Cam Indoor 2016. It’s rolling out in Early Access, and you can sign up to take part in the Google Home app.

There’s more:

Google is bringing Gemini Live to select Google Home devices (the Nest Audio, Google Nest Hub Max, and Nest Hub 2nd Gen, plus the new Google Home Speaker). That’s because Gemini Live has a few hardware dependencies, like better microphones and background noise suppression. With Gemini Live, you’ll be able to have a back-and-forth conversation with the chatbot, even have it craft a story to tell kids, with characters and voices.

But note the fine print, which shouldn’t be a surprise to anyone who’s already seen my past coverage: “Support doesn’t include third-party devices like Lenovo’s smart displays, which Google stopped updating in 2023.”

One other announced device, an upgraded smart speaker visually reminiscent of Apple’s HomePod mini, won’t ship until early next year.

And, as the latest example of Google’s longstanding partnership with Walmart, the latter retailer has also launched a line of onn.-branded, Gemini-supportive security cameras and doorbells:

That’s what I’ve got for you today; we’ll have to see what, if anything else, Apple has for us before the end of the year, and whether it’ll take the form of an event or just a series of press releases. Until then, your fellow readers and I await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Amazon and Google: Can you AI-upgrade the smart home while being frugal? appeared first on EDN.

PoE basics and beyond: What every engineer should know

Fri, 10/03/2025 - 11:42

Power over Ethernet (PoE) is not rocket science, but it’s not plug-and-play magic either. This short primer walks through the basics with a few practical nudges for those curious to try it out.

It’s a technology that delivers electrical power alongside data over standard twisted-pair Ethernet cables. It enables a single RJ45 cable to supply both network connectivity and power to powered devices (PDs) such as wireless access points, IP cameras, and VoIP phones, eliminating the need for separate power cables and simplifying installation.

PoE essentials: From devices to injectors

Any network device powered via PoE is known as a powered device or PD, with common examples including wireless access points, IP security cameras, and VoIP phones. These devices receive both data and electrical power through Ethernet cables from power sourcing equipment (PSE), which is classified as either “endspan” or “midspan.”

An endspan—also called an endpoint—is typically a PoE-enabled network switch that directly supplies power and data to connected PDs, eliminating the need for a separate power source. In contrast, when using a non-PoE network switch, an intermediary device is required to inject power into the connection. This midspan device, often referred to as a PoE injector, sits between the switch and the PD, enabling PoE functionality without replacing existing network infrastructure. A PoE injector sends data and power together through one Ethernet cable, simplifying network setups.

Figure 1 A PoE injector is shown with auto negotiation that manages power delivery safely and efficiently. Source: http://poe-world.com

The above figure shows a PoE injector with auto negotiation, a safety and compatibility feature that ensures power is delivered only when the connected device can accept it. Before supplying power, the injector initiates a handshake with the PD to detect its PoE capability and determine the appropriate power level. This prevents accidental damage to non-PoE devices and allows precise power delivery—whether it’s 15.4 W for Type 1, 25.5 W for Type 2, or up to 90 W for newer Type 4 devices.

Note at this point that the original IEEE 802.3af-2003 PoE standard provides up to 15.4 watts of DC power per port. This was later enhanced by the IEEE 802.3at-2009 standard—commonly referred to as PoE+ or PoE Plus—which supports up to 25.5 watts for Type 2 devices, making it suitable for powering VoIP phones, wireless access points, and security cameras.

To meet growing demands for higher power delivery, the IEEE introduced a new standard in 2018: IEEE 802.3bt. This advancement significantly increased capacity, enabling up to 60 watts (Type 3) and 90 watts (Type 4) of power at the source by utilizing all four pairs of wires in Ethernet cabling, compared to earlier standards that used only two pairs.
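
To make those type-versus-power relationships concrete, here is a minimal Python sketch (my own illustration, not part of any standard’s text) that maps each IEEE type to its commonly cited source (PSE) and powered-device (PD) budgets and picks the lowest type that covers a given load. Verify the exact wattages against the relevant standard before committing a design.

```python
# Commonly cited IEEE PoE power budgets; confirm against the standard for real designs.
POE_TYPES = {
    "Type 1 (IEEE 802.3af)": {"pse_w": 15.4, "pd_w": 12.95},
    "Type 2 (IEEE 802.3at)": {"pse_w": 30.0, "pd_w": 25.5},
    "Type 3 (IEEE 802.3bt)": {"pse_w": 60.0, "pd_w": 51.0},
    "Type 4 (IEEE 802.3bt)": {"pse_w": 90.0, "pd_w": 71.3},
}

def minimum_poe_type(pd_load_w: float) -> str:
    """Return the lowest PoE type whose guaranteed PD power covers the load."""
    for name, budget in POE_TYPES.items():  # dict preserves insertion order
        if pd_load_w <= budget["pd_w"]:
            return name
    raise ValueError("Load exceeds the IEEE 802.3bt Type 4 PD budget")

print(minimum_poe_type(20.0))  # a 20-W access point lands on Type 2 (802.3at)
```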

As indicated previously, VoIP phones were among the earliest applications of PoE. Wireless access points (WAPs) and IP cameras are also ideal use cases, as all these devices require both data connectivity and power.

Figure 2 This PoE system is powering a fixed wireless access (FWA) device.

As a sidenote, an injector delivers power over the network cable, while a splitter extracts both data and power—providing an Ethernet output and a DC plug.

A practical intro to PoE for engineers and DIYers

So, PoE simplifies device deployment by delivering both power and data over a single cable. For engineers and DIYers looking to streamline installations or reduce cable clutter, PoE offers a clean, scalable solution.

This brief section outlines foundational use cases and practical considerations for first-time PoE users. No deep dives: just clear, actionable insights to help you get started with smarter, more efficient connectivity.

Up next is the tried-and-true schematic of a passive PoE injector I put together some time ago for an older IP security camera (24 VDC/12 W).

Figure 3 Schematic demonstrates how a passive PoE injector powers an IP camera. Source: Author

In this setup, the LAN port links the camera to the network, and the PoE port delivers power while completing the data path. As a cautionary note, use a passive PoE injector only when you are certain of the device’s power requirements. If you are unsure, take time to review the device specifications. Then, either configure a passive injector to match your setup or choose an active PoE solution with integrated negotiation and protection.

Fundamentally, most passive PoE installations operate across a range of voltages, with 24 V often serving as a practical middle ground. Even lower voltages, such as 12 V, can be viable depending on cable length and power requirements. However, passive PoE should never be applied to devices not explicitly designed to accept it; doing so risks damaging the Ethernet port’s magnetics.
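
Because cable length and load current determine how much of that voltage actually reaches the device, a quick drop estimate is worth doing before wiring a passive injector. The following Python sketch is illustrative only; the per-conductor resistance and the two-pair wiring assumption are my own typical-value assumptions, not figures from this article.

```python
# Rough passive-PoE voltage-drop estimate (first-order, illustrative only).
# Assumptions: ~0.094 ohm per meter per 24 AWG conductor (typical Cat5e figure),
# with power carried on two pairs, i.e., two conductors paralleled per leg.
OHMS_PER_M_PER_CONDUCTOR = 0.094

def voltage_at_device(v_source: float, p_load_w: float, cable_m: float) -> float:
    r_leg = OHMS_PER_M_PER_CONDUCTOR * cable_m / 2  # two conductors in parallel
    r_loop = 2 * r_leg                              # supply leg plus return leg
    i_load = p_load_w / v_source                    # first-order load current
    return v_source - i_load * r_loop

# 24-V injector feeding a 12-W camera over 30 m of cable: roughly 22.6 V arrives
print(round(voltage_at_device(24.0, 12.0, 30.0), 2))
```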

Unlike active PoE standards, passive PoE delivers power continuously without any form of negotiation. In its earliest and simplest form, it leveraged unused pairs in Fast Ethernet to transmit DC voltage—typically using pins 4–5 for positive and 7–8 for negative, echoing the layout of 802.3af Mode B. As Gigabit Ethernet became common, passive PoE evolved to use transformers that enabled both power and data to coexist on the same pins, though implementations vary.

Seen from another angle, PoE technology typically utilizes the two unused twisted pairs in standard Ethernet cables—but this applies only to 10BASE-T and 100BASE-TX networks, which use two pairs for data transmission.

In contrast, 1000BASE-T (Gigabit Ethernet) employs all four twisted pairs for data, so PoE is delivered differently—by superimposing power onto the data lines using a method known as phantom power. This technique allows power to be transmitted without interfering with data, leveraging the center tap of Ethernet transformers to extract the common-mode voltage.

PoE primer: Surface touched, more to come

Though we have only skimmed the surface, it’s time for a brief wrap-up.

Fortunately, even beginners exploring PoE projects can get started quickly, thanks to off-the-shelf controller chips and evaluation boards designed for immediate use. For instance, the EV8020-QV-00A evaluation board—shown below—demonstrates the capabilities of the MP8020, an IEEE 802.3af/at/bt-compliant PoE-powered device.

Figure 4 MPS showcases the EV8020-QV-00A evaluation board, configured to evaluate the MP8020’s IEEE 802.3af/at/bt-compliant PoE PD functionality. Source: MPS

Here are my quick picks for reliable, currently supported PoE PD interface ICs—the brains behind PoE:

  • TI TPS23730 – IEEE 802.3bt Type 3 PD with integrated DC-DC controller
  • TI TPS23731 – No-opto flyback controller; compact and efficient
  • TI TPS23734 – Type 3 PD with robust thermal performance and DC-DC control
  • onsemi NCP1081 – Integrated PoE-PD and DC-DC converter controller; 802.3at compliant
  • onsemi NCP1083 – Similar to NCP1081, with auxiliary supply support for added flexibility
  • TI TPS2372 – IEEE 802.3bt Type 4 high-power PD interface with automatic MPS (maintain power signature) and autoclass

Similarly, leading semiconductor manufacturers offer a broad spectrum of PSE controller ICs for PoE applications—ranging from basic single-port controllers to sophisticated multi-port managers that support the latest IEEE standards.

As a notable example, TI’s TPS23861 is a feature-rich, 4-channel IEEE 802.3at PSE controller that supports auto mode, external FET architecture, and four-point detection for enhanced reliability, with optional I²C control and efficient thermal design for compact, cost-effective PoE systems.

In short, fantastic ICs make today’s PoE designs smarter and more efficient, especially in dynamic or power-sensitive environments. Whether you are refining an existing layout or venturing into high-power applications, now is the time to explore, prototype, and push your PoE designs further. I will be here.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post PoE basics and beyond: What every engineer should know appeared first on EDN.

DMD powers high-resolution lithography

Thu, 10/02/2025 - 21:44

With over 8.9 million micromirrors, TI’s DLP991UUV digital micromirror device (DMD) enables maskless digital lithography for advanced packaging. Its 4096×2176 micromirror array, 5.4-µm pitch, and 110-Gpixel/s data rate remove the need for costly mask technology while providing scalability and precision for increasingly complex designs.

The DMD is a spatial light modulator that controls the amplitude, direction, and phase of incoming light. Paired with the DLPC964 controller, the DLP991UUV DMD supports high-speed continuous data streaming for laser direct imaging. Its resolution enables large 3D-print build sizes, fine feature detail, and scanning of larger objects in 3D machine vision applications.

Offering the highest resolution and smallest mirror pitch in TI’s Digital Light Processing (DLP) portfolio, the DLP991UUV provides precise light control for industrial, medical, and consumer applications. It steers UV wavelengths from 343 nm to 410 nm and delivers up to 22.5 W/cm² at 405 nm.

Preproduction quantities of the DLP991UUV are available now on TI.com.

DLP991UUV product page 

Texas Instruments 

The post DMD powers high-resolution lithography appeared first on EDN.

Co-packaged optics enables AI data center scale-up

Thu, 10/02/2025 - 21:44

AIchip Technologies and Ayar Labs unveiled a co-packaged optics (CPO) solution for multi-rack AI clusters, providing extended reach, low latency, and high radix. The joint development tackles AI infrastructure data-movement bottlenecks by replacing copper interconnects with CPO in large-scale accelerator deployments.

The offering integrates Ayar’s TeraPHY optical engines with AIchip’s advanced packaging on a common substrate, bringing optical I/O directly to the AI accelerator interface. This enables over 100 Tbps of scale-up bandwidth per accelerator and supports more than 256 optical scale-up ports per device. TeraPHY is also protocol agnostic, allowing flexible integration with customer-designed chiplets and fabrics.

The co-packaged solution scales multi-rack networks without the power and latency penalties of pluggable optics by shortening electrical traces and placing optical I/O close to the compute core. With UCIe support and flexible protocol endpoints at the package boundary, it integrates alongside compute tiles, memory, and accelerators while maintaining performance, signal integrity, and thermal requirements.

Both companies are working with select customers to integrate co-packaged optics into next-generation AI accelerators and scale-up switches. They will provide collateral, reference architectures, and build options to qualified design teams.

Ayar Labs 

The post Co-packaged optics enables AI data center scale-up appeared first on EDN.

Platform speeds AI from prototype to production

Thu, 10/02/2025 - 21:44

Purpose-built for Lantronix Open-Q system-on-modules (SOMs), EdgeFabric.ai is a no-code development platform for designing and deploying edge AI applications. According to Lantronix, it helps customers move AI from prototype to production in minutes instead of months, without needing a team of AI experts.

The visual orchestration platform integrates with Open-Q hardware and leading AI model ecosystems, automatically configuring performance across Qualcomm GPUs, DSPs, and NPUs. It streamlines data pipelines with drag-and-drop workflows for AI, video, and sensors, while delivering real-time visualization. Prebuilt templates support common use cases such as surveillance, anomaly detection, and safety monitoring.

EdgeFabric.ai auto-generates production-ready code in Python and C++, making it easy to build and adjust pipelines, fine-tune parameters, and adapt workflows quickly.

Learn more about the EdgeFabric.ai platform here. For details on Open-Q SOMs, visit SOM solutions. Lantronix also offers engineering services for development support.

Lantronix

The post Platform speeds AI from prototype to production appeared first on EDN.

Dual-core MCUs drive motor-control efficiency

Thu, 10/02/2025 - 21:44

RA8T2 MCUs from Renesas integrate dual processors for real-time motor control in advanced factory automation and robotics. They pair a 1-GHz Arm Cortex-M85 core with an optional 250-MHz Cortex-M33 core, combining high-speed operation, large memory, timers, and analog functions on a single chip.

The Cortex-M85 with Helium technology accelerates DSP and machine-learning workloads, enabling AI functions that predict motor maintenance needs. In dual-core variants, the embedded Cortex-M33 separates real-time control from general-purpose tasks to further enhance system performance.

RA8T2 devices integrate up to 1 MB of MRAM and 2 MB of SRAM, including 256 KB of TCM for the Cortex-M85 and 128 KB of TCM for the Cortex-M33. For high-speed networking in factory automation, they offer multiple interfaces, such as two Gigabit Ethernet MACs with DMA and a two-port EtherCAT slave. A 32-bit, 14-channel timer delivers PWM functionality up to 300 MHz.

The RA8T2 series of MCUs is available now through Renesas and its distributors.

RA8T2 product page

Renesas Electronics 

The post Dual-core MCUs drive motor-control efficiency appeared first on EDN.

Image sensor provides ultra-high dynamic range

Thu, 10/02/2025 - 21:43

Omnivision’s OV50R40 50-Mpixel CMOS image sensor delivers single-exposure HDR up to 110 dB with second-generation TheiaCel technology. It also reduces power consumption by ~20% compared with the previous-generation OV50K40, enabling longer HDR video capture.

Aimed at high-end smartphones and action cameras, the OV50R40 achieves ultra-high dynamic range in any lighting. Built on PureCel Plus‑S stacked die technology, the color sensor supports 100% coverage quad phase detection for improved autofocus. It features an active array of 8192×6144 with 1.2‑µm pixels in a 1/1.3‑in. format and supports premium 8K video with dual analog gain (DAG) HDR and on-sensor crop zoom.

The sensor also supports 4-cell binning, producing 12.5‑Mpixel resolution at 120 fps. For 4K video at 60 fps, it provides 3-channel HDR with 4× sensitivity, ensuring enhanced low-light performance.

The OV50R40 is now sampling, with mass production planned for Q1 2026.

OV50R40 product page 

Omnivision

The post Image sensor provides ultra-high dynamic range appeared first on EDN.

Thermally enhanced packages—hot or not?

Thu, 10/02/2025 - 17:43

The relentless pursuit of performance in sectors such as AI, cloud computing, and autonomous driving is creating a heat crisis. As the next generation of processors demand more power in smaller spaces, the switched-mode power supply (SMPS) is being pushed to its thermal limit. SMPS’s integrated circuit (IC) packages have traditionally used a large thermal pad on the bottom side of the package, known as a die attach paddle (DAP), to dissipate the majority of the heat through the printed circuit board (PCB). But as power density increases, relying on only one side of the package to dissipate heat quickly becomes a serious constraint.

A thermally enhanced package is a type of IC package designed to dissipate heat from both the top and bottom surfaces. In this article, we’ll explore the standard thermal metrics of IC packages, along with the composition, top-side cooling methods, and thermal benefits of a thermally enhanced package.

Thermal metrics of IC packages

In order to understand what a thermally-enhanced package is and why it is beneficial, it’s important to first understand the terminology for describing the thermal performance of an IC package. Three foundational metrics of thermal resistance are the junction-to-ambient thermal resistance (RθJA), the junction-to-case (top) thermal resistance (RθJC(top)), and the junction-to-board thermal resistance (RθJB).

Thermal resistance measures the opposition to the flow of heat in a medium. In IC packages, thermal resistance is usually measured in Celsius rise per watt dissipated (°C/W), or how much the temperature rises when the IC dissipates a certain amount of power.

RθJA measures the thermal resistance between the junction (J) (the silicon die itself), and the ambient air (A) around the IC. RθJC(top) measures the thermal resistance specifically between (J) and the top (t) of the case (C) or package mold. RθJB measures the thermal resistance specifically between (J) and the PCB on which the package is mounted.

RθJA significantly depends on its subcomponents—both RθJC(top) and RθJB. The lower the RθJA, the better, because it clearly indicates that there will be a lower temperature rise per unit of power dissipated. Power IC designers spend a lot of time and resources to come up with new ways to lower RθJA. A thermally enhanced package is one such way.
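
As a quick worked example of how these metrics are used (my own sketch; the 2-W dissipation and 55°C ambient operating point are arbitrary assumptions, while the RθJA values come from Table 1 later in this article), junction temperature is simply the ambient temperature plus the dissipated power multiplied by RθJA:

```python
# Junction-temperature estimate: Tj = TA + P * RthetaJA
def t_junction(t_ambient_c: float, p_dissipated_w: float, r_theta_ja_c_per_w: float) -> float:
    return t_ambient_c + p_dissipated_w * r_theta_ja_c_per_w

# RthetaJA values quoted later in this article (Table 1), evaluated at an
# assumed 2 W dissipated and 55 degC ambient:
for package, r_ja in {"standard QFN": 21.6, "thermally enhanced QFN": 21.0}.items():
    print(package, round(t_junction(55.0, 2.0, r_ja), 1), "degC")
```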

Thermally enhanced package composition

A thermally enhanced package is a quad flat no-lead (QFN) package that has both a bottom-side DAP and a top-side cutout of the molding to directly expose the back of the silicon die to the environment. Figure 1 shows the gray backside of the die for the Texas Instruments (TI) LM61495T-Q1 buck converter.

Figure 1 The LM61495T-Q1 buck converter in a thermally enhanced package. Source: Texas Instruments

Exposing the die on the top side of the package does two things: it lowers the RθJC(top) compared to an IC package that completely molds over the die, and enables a direct connection between the die and an external heat sink, which can significantly reduce RθJA.

RθJC(top) in a thermally enhanced package

A low RθJC(top) allows heat to escape more effectively from the top of the device. Typically, heat escapes through the package mold and then to the air, but in a thermally enhanced package, it escapes directly from the die to the air. This helps reduce the device temperature and reduces the risk of thermal shutdown and long-term heat stress issues. The thermally enhanced package also has a lower RθJA, which makes it possible for a converter to handle more current and operate in hotter environments.

Figure 2 shows a series of IC junction temperature measurements taken across output current for both the LM61495T-Q1 in the thermally enhanced package and TI’s LM61495-Q1 buck converter in the standard QFN package under two common operating conditions.

Test conditions: VOUT = 5 V, FSW = 400 kHz, TA = 25°C

Figure 2 Output current vs. junction temperature for the LM61495-Q1 and LM61495T-Q1 with no heat sink. Source: Texas Instruments

Clearly, even with no heat sink attached, the thermally enhanced package runs slightly cooler, simply because more heat dissipates out of the top of the package and into the air. The RθJA for a thermally enhanced package is slightly lower, demonstrating that, even if only marginally, this package type provides better thermals than the standard QFN with top-side molding, even without any additional thermal management techniques. Table 1 lists the official thermal metrics found in both devices’ data sheets.

Part number | Package type | RθJA (evaluation module) (°C/W) | RθJC(top) (°C/W) | RθJB (°C/W)
LM61495-Q1 | Standard QFN | 21.6 | 19.2 | 12.2
LM61495T-Q1 | Thermally enhanced package QFN | 21 | 0.64 | 11.5
Table 1 Comparing data sheet-derived thermal metrics for the LM61495-Q1 and LM61495T-Q1. Source: Texas Instruments

Top-side cooling vs QFN

Combining the near-zero RθJC(top) of the exposed top side with an effective heat sink significantly reduces the RθJA of an IC in a thermally enhanced package. There are three significant improvements when compared to the same IC in a standard QFN package under otherwise similar operating conditions:

  • Higher switching-frequency operation.
  • Higher output-current capability.
  • Operation at higher ambient temperatures.

For any SMPS under a given input voltage (VIN), output voltage (VOUT) condition and supplying a given output current, the maximum switching frequency will be thermally limited. Within every switching period, there are switching losses and conduction losses that dissipate as heat. Switching more frequently dissipates more power in the IC, leading to an increased IC junction temperature. This can be frustrating for engineers because switching at higher frequencies enables the use of a smaller buck inductor, and therefore a smaller overall solution size and lower cost.

Under the same operating conditions, using the thermally enhanced package and a heat sink, the heat dissipated in each switching period is now more easily channeled out of the IC, leading to a lower junction temperature and enabling a higher switching frequency without hitting the IC’s junction temperature limit. Just don’t exceed the maximum switching frequency recommendation of the device as outlined in the data sheet.

The benefits of using a smaller inductor are especially pronounced in higher-current multiphase designs that require an inductor for every phase. Figure 3 shows a simplified four-phase design capable of supplying 24 A at 3.3 VOUT at 2.2 MHz using the TI LM64AA2-Q1 step-down converter. If the design were to overheat and the switching frequency had to be reduced to 400 kHz, you would have to replace all four inductors with larger inductors (in terms of both inductance and size), inflating the overall solution cost and size substantially.

Figure 3 Simplified schematic of a single-output, four-phase step-down converter design using the LM644A2-Q1 step-down converter in the thermally enhanced package. Source: Texas Instruments

Conversely, for any SMPS under a given VIN, VOUT condition, and operating at a specific switching frequency, the maximum output current will be thermally limited. When discussing the current limit of an IC, it’s important to clarify that for all high-side FET integrated SMPSs, there is a data sheet-specified high-side current limit that bounds the possible output current.

Upon reaching the current-limit setpoint, the high-side FET turns off, and the IC may enter a hiccup interval to reduce the operating temperature until the overcurrent condition goes away. But even before reaching the current limit, it is very possible for an IC to overheat from a high output-current requirement. This is especially true, again, at higher frequencies. As long as you don’t exceed the high-side current limit, using an IC in the thermally enhanced package with a heat sink can extend the maximum possible output current to a level at which the standard QFN IC alone would overheat.

There is another constant to make the thermally enhanced package versus the standard QFN package comparison valid, and that is the ambient temperature (TA). TA is a significant factor when considering how much power an SMPS can deliver before it starts to overheat.

For example, a buck converter may be able to easily do a 12VIN-to-5VOUT conversion and support a continuous 6 A of current while switching at 2.2 MHz when the TA is 25°C, but not at 105°C. So, there is yet a third way to look at the benefit that a thermally enhanced package can provide. Assuming the VIN, VOUT, output current, and maximum switching frequency are constant, a thermally enhanced package used with a heat sink can enable an SMPS to operate at a meaningfully higher TA compared to a standard QFN package with no heat sink.
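
Turning that relationship around gives the allowable dissipation for a given ambient temperature. The sketch below is my own back-of-the-envelope illustration: the 125°C junction limit is an assumption (typical for automotive-grade converters; check the specific data sheet), and the 10°C/W “with heat sink” figure is a hypothetical placeholder rather than a measured value from this article.

```python
# Allowable IC dissipation before reaching an assumed junction-temperature limit
T_J_MAX_C = 125.0  # assumed absolute junction limit; verify in the data sheet

def p_max_w(t_ambient_c: float, r_theta_ja_c_per_w: float) -> float:
    return (T_J_MAX_C - t_ambient_c) / r_theta_ja_c_per_w

for ta in (25.0, 85.0, 105.0):
    no_sink = p_max_w(ta, 21.6)    # standard QFN, Table 1 value
    with_sink = p_max_w(ta, 10.0)  # hypothetical effective RthetaJA with a heat sink
    print(f"TA={ta:>5} degC: {no_sink:.2f} W (no heat sink), {with_sink:.2f} W (heat sink)")
```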

Figure 4 uses a current derating curve to demonstrate both the higher output current capability and operation at a higher TA. In an experiment using the LM61495-Q1 and LM61495T-Q1 buck converters, we measured the output current against the TA in a standard QFN package without a heat sink and in a thermally enhanced package QFN connected to an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Other than the package and the heat sink, all other conditions are constant: operating conditions, PCB, and measurement instrumentation.

Test conditions: VIN = 12 V, VOUT = 3.3 V, FSW = 2.2 MHz

Figure 4 Output current vs. ambient temperature of the LM61495-Q1 with no heat sink and the LM61495T-Q1 with an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Source: Texas Instruments

When TA reaches about 83˚C, the standard QFN package hits its thermal shutdown threshold, and the output current begins to collapse. As TA increases further, the device cycles into and out of thermal shutdown, and the maximum achievable output current that the device can deliver is necessarily reduced until TA reaches a steady 125˚C. At this point, the converter may not be able to sustain even 5 A without overheating.

Compare this to the thermally enhanced package QFN connected to a heat sink. The first instance of thermal shutdown now doesn’t occur until about 117˚C. That’s an increase of 34˚C, or 40%, in the TA reached before hitting thermal shutdown. The LM61495-Q1 is a 10-A buck converter, meaning that its recommended maximum output current is 10 A. But in this case, with a thermally enhanced package and effective heat sinking, a continuous 11-A output was achievable up to 117˚C; in other words, a 10% increase in maximum continuous output current even at a high TA.

Methods of top-side cooling

Figure 5, Figure 6, and Figure 7 show some of the most common methods of top-side cooling. Stand-alone heat sinks are simple and readily available in many different forms, materials, and sizes, but are sometimes impractical in small-form-factor designs.

Figure 5 Stand-alone fin-type heat sink; these are simple and readily available but sometimes impractical in small-form-factor designs. Source: Texas Instruments

Cold plates are very effective in dissipating heat but are more complex and costlier to implement (Figure 6).

Figure 6 Cold plate-type heat sink; these are very effective in dissipating heat but are more complex and costlier to implement. Source: Texas Instruments

Using the metal housing containing the power supply and the surrounding electronics as a heat sink is compact, effective, and relatively inexpensive if the housing already exists. As shown in Figure 7, this is done by creating a pillar or dimple that connects the IC to the housing to enable efficient heat transfer. For power supplies powering processors, it’s likely that this method is already helping dissipate heat on the processor. Adding an additional dimple or pillar that now gives heat-sink access to the power supply is often a simple change, making it a very popular method, especially for processor power.

Figure 7 Contact-with-housing heat sink where a pillar or dimple connects the IC to the housing to enable efficient heat transfer. Source: Texas Instruments

There are many ways to implement heat sinking, but that doesn’t mean that they are all equally effective. The size, material, and form of the heat sink matter. The type and amount of thermal interface material used between the IC and the heat sink matter, as does its placement. It is important to optimize all of these factors for the design at hand.

Comparing heat sinks

Figure 8 shows another current derating curve. It compares two different types of heat sinks, each mounted on the LM61495T-Q1. For reference, the figure includes the performance of the standard QFN package with no heat sink.

Test conditions: VIN = 24 V, VOUT = 3.3 V, FSW = 2.2 MHz

Figure 8 Output current versus the ambient temperature of the LM61495-Q1 with no heat sink, the LM61495T-Q1 with an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink, and with an aluminum plate heat sink. Source: Texas Instruments

For a visualization of these heat sinks, see Figure 9 and Figure 10, which show a top-down view of the PCB and a clear view of how the heat sinks are mounted to the IC and PCB. The heat sink shown in Figure 9 is a commercially available, off-the-shelf product. To reiterate, it is a 45 mm by 45 mm aluminum alloy heat sink with a base that is 3mm thick and pin-type fins that extend the surface area and allow omnidirectional airflow.

Figure 9 The LM61495T-Q1 evaluation board with the off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Source: Texas Instruments

Figure 10 shows a custom heat sink that is essentially just a 50 mm by 50 mm aluminum plate with a 2 mm thickness and a small pillar that directly touches the IC. This heat sink was designed to mimic the contact-with-housing method, as it is very similar in size and material to the types of housing seen in real applications.

Figure 10 The LM61495T-Q1 evaluation board with a custom aluminum plate heat sink to mimic the contact-with-housing method. Source: Texas Instruments

Under the same conditions, the stand-alone heat sink provides a major benefit compared to the standard QFN package with no heat sink. The standard QFN package hits thermal shutdown around 67°C TA. For the stand-alone heat-sink setup, thermal shutdown isn’t triggered until the TA reaches about 111°C, which is a major improvement. However, the aluminum plate heat-sink setup doesn’t hit thermal shutdown at all. With the aluminum plate setup, the converter is still able to supply a continuous 10-A current at the highest TA tested (125˚C), demonstrating both the importance of choosing the correct heat sink for the system requirements as well as the popularity of the contact-with-housing method.

Addressing modern thermal challenges

Power supply designers increasingly deal with thermal challenges as modern applications demand more power and smaller form factors in hotter spaces. Standard QFN packaging has long relied on dissipating the majority of generated heat through the bottom side of the package to the PCB. A thermally enhanced package QFN uses both the top and bottom sides of the package to improve heat flow out of the IC, essentially paralleling the thermal impedance paths and reducing the effective thermal impedance.

Combining a thermally enhanced package with effective heat sinking results in significant thermal benefits and enables higher-power-density designs. Because these benefits all derive from reducing the effective RθJA, designers can realize just one or all of them in varying degrees: increasing the maximum switching frequency to reduce solution size and cost, enabling a higher maximum output current for higher power conversion, or enabling operation at a higher TA.

Jonathan Riley is a Senior Product Marketing Engineer for Texas Instruments’ Switching Regulators organization. He holds a BS in Electrical Engineering from the University of California Santa Cruz. At TI, Jonathan works in the crossroads of marketing and engineering to ensure TI’s Switching Regulator product line continues to evolve ahead of the market and enable customers to power the technologies of tomorrow.

 Related Content

Additional resources

The post Thermally enhanced packages—hot or not? appeared first on EDN.

Past, present, and future of hard disk drives (HDDs)

Thu, 10/02/2025 - 15:44

Where do HDDs stand after the advent of SDDs? Are they a thing of the past now, or do they still have a life? While HDDs store digital data, what’s their relation to analog technology? Here is a fascinating look at HDD’s past, present, and future, accompanied by data from the industry. The author also raises a very valid point: while their trajectory is very similar to the world of semiconductors, why don’t HDDs have their own version of Moore’s Law?

Read the full article at EDN’s sister publication, Planet Analog.

Related Content

The post Past, present, and future of hard disk drives (HDDs) appeared first on EDN.

Improve PWM controller-induced ripple in voltage regulators

Wed, 10/01/2025 - 19:45

Simple linear and switching voltage regulators with feedback networks of the type shown in Figure 1 are legion. Their output voltages are the reference voltage at the feedback (FB) pin multiplied by 1 + Rf / Rg. Recommended values of Cf from 100 pF to 10nF increase the amount of feedback at higher frequencies, or at least ensure it is not reduced by stray capacitances at the feedback pin.

Figure 1 The configurations of common regulators and their feedback networks. A linear regulator is shown on the left and a switcher on the right.

Modifying this structure to incorporate PWM control of the output voltage requires some thought, and both Stephen Woodward and I have presented several Design Ideas (DIs) that address this.

Wow the engineering world with your unique design: Design Ideas Submission Guide

I’ve suggested disconnecting Rg from ground and driving it from a heavily filtered (op-amp-based) PWM signal supplied by a 74xx04-type logic inverter. Although this can result in excellent ripple suppression, it has a disadvantage: the need for an inverter power supply accurate and stable enough not to degrade the regulator’s 1% or better reference voltage.

Stephen has proposed switching the disconnected Rg leg between ground and open with a MOSFET. The beauty of this is that no new reference is needed. Although the output voltage is no longer a linear function of the PWM duty cycle, a simple software-based lookup table renders this a mere inconvenience. (Yup, “we can fix it in software!”)

A general scheme to mitigate PWM controller-induced ripple should be flexible enough to accommodate different regulators, regulator reference voltages, output voltage ranges, and PWM frequencies. In selecting one, here are some possible traps to be aware of:

  • Nulling by adding an out-of-phase version of the ripple signal is at the mercy of component tolerances.
  • Cheap ceramics, such as the ubiquitous X7R, have DC voltage and temperature-sensitive capacitances. If used, the circuit must tolerate these undesirable traits.
  • Schemes which connect capacitors between ground and the feedback pin will reduce loop feedback at higher frequencies. The result could be degradation of line and load transient responses.

Circuit

With this in mind, consider the circuit of Figure 2, capable of operation from 0.8 V to a little more than 5 V.

Note: If the regulator output is capable of operation below the FB voltage, a resistor could be connected between FB and a higher DC supply voltage to enable this. For outputs of 0 V, the current through it would have to equal VFB / Rf. The value of Rf would have to be increased to maintain operation to 5 V. However, this approach requires a reference voltage of suitable quality, and much of the advantage of using a MOSFET is lost.

Figure 2 A specific instance of a PWM-controlled regulator with ripple suppression. Only a linear regulator is shown, but the adaptation for switcher operation entails only the addition of an inductor and a filter capacitor.

The low capacitance MOSFET has a maximum on-resistance of under 2 Ω at a VGS of 2.5 V or more. Cg1 and Cg2 see maximum DC voltages of 0.8 V (up to 1.25 V in some regulators). Their capacitive accuracies are not critical, and at these low voltages, they barely budge when 10-V or higher-rated X7R capacitors are employed.

Cf can see a significant DC voltage, however. Here, you might get away with an X7R, but a 10-nF (voltage-insensitive) C0G is cheap. The value of Cf was chosen to aid in ripple management. If it were not present, the ripple would be larger and proportional to the value of Rf. With a 10-nF Cf, larger values of Rf for higher output voltages would have no effect on the PWM-induced ripple; smaller ones could only reduce it. The largest peak-to-peak ripple occurs at duty cycles from 30 to 40%.

The filtering supplied by the three capacitors produces a sinusoidal ripple waveform of amplitude 5.7 µV peak-to-peak. For a 16-bit ADC with a full scale of 5 V, the peak-to-peak amplitude is less than 1 LSbit.
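
A quick arithmetic check (my own, using the 5-V full scale and 16-bit resolution stated above) confirms that the filtered ripple indeed stays below one LSB:

```python
# One LSB of a 16-bit, 5-V full-scale ADC versus the filtered PWM ripple
full_scale_v = 5.0
lsb_v = full_scale_v / 2**16      # about 76.3 uV
ripple_pp_v = 5.7e-6              # peak-to-peak ripple from the text above
print(round(lsb_v * 1e6, 1), "uV per LSB; ripple below 1 LSB:", ripple_pp_v < lsb_v)
```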

Flexibility

You might have a requirement for a wider or narrower range of output voltages. Feel free to modify Rf accordingly without a penalty in ripple amplitude.

Ripple amplitude will scale in proportion to the regulator’s reference voltage. The design assumes a regulator whose optimum FB-to-ground resistance is 10 kΩ. If it’s necessary to change this for the regulator of your choice, scale the three Rg resistors by the same factor Z. Because the resistors and three capacitors implement a 3rd-order filter, the ripple will scale in accordance with Z⁻³. To keep the same ripple amplitude, scale the three capacitors by 1/Z. You might want to scale the capacitors’ values for some other reason, even if the resistors are unchanged.

Changing the PWM frequency by a factor F will change the ripple amplitude by a factor of F⁻³. But too high a frequency could encounter accuracy problems due to the parasitic capacitances and unequal turn-on/turn-off times of the MOSFET.

Some regulators might not tolerate a Cf of a value large enough to aid in ripple suppression. Usually, these will tolerate a resistor Rcf in series with Cf. In such cases, ripple will be increased by a factor K equal to the square root of ( 1 + Rcf · 2π · fPWM · Cf ), and the waveform might no longer be sinusoidal. But increasing Cg1 and Cg2 by the square root of K will compensate to yield approximately the same suppression as offered by the design with Rcf equal to 0. If all else fails, there is always the possibility of adding an Rg4 and a Cg3 to provide another stage of filtering.
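
These scaling rules are easy to capture in a few lines of code. The sketch below is my own illustration: the Rcf, PWM-frequency, and Cf numbers in the example call are hypothetical placeholders, not values taken from Figure 2.

```python
# Scaling rules from the two preceding paragraphs (illustrative sketch)
import math

def ripple_scale(z: float = 1.0, f: float = 1.0) -> float:
    """Relative ripple after scaling the Rg resistors by Z and the PWM frequency by F."""
    return z**-3 * f**-3

def k_factor(rcf_ohm: float, f_pwm_hz: float, cf_farad: float) -> float:
    """Ripple increase factor K when a series resistor Rcf must be added to Cf."""
    return math.sqrt(1 + rcf_ohm * 2 * math.pi * f_pwm_hz * cf_farad)

# Hypothetical example: Rcf = 1 kOhm, fPWM = 25 kHz, Cf = 10 nF
k = k_factor(1e3, 25e3, 10e-9)
print(round(k, 2), "-> scale Cg1 and Cg2 by", round(math.sqrt(k), 2), "to compensate")
```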

Tying it all together

 A flexible approach has been introduced for the suppression of PWM control-induced ripple in linear and switching regulators. Simple rules have been presented for the use and modification of the Figure 2 circuit for operation over different output voltage ranges, PWM frequencies, preferred resistances between ground and the regulator’s feedback pin, and tolerances for moderately large capacitances between the FB pins and the output.

The limitations of capacitors with sensitivities to DC voltages are recognized. These components are used appropriately and judiciously. Dependency on component matching is avoided. Standard feedback network structures are maintained or, at worst, subjected to minor modifications only; specifically, feedback at higher frequencies is not reduced from that recommended by the regulator manufacturer. This maintains the specified line and load transient responses.

Addendum

Once again, the Comments section of DIs has shown its worth. And it’s Deja vu all over again; value was provided by the redoubtable Stephen Woodward. In an earlier DI, he pointed out that regulators generally do not tolerate negative voltages at their feedback pins. But if there is a capacitor Cf of more than a few hundred picofarads connected from the output to this pin, as I have recommended in this DI, and the output is shorted or rapidly discharged, this capacitor could couple a negative voltage to that pin and damage the part. To protect against this, add the components shown in the following figure.

Figure 3 Add these components to protect the FB pin from output rapid negative voltage changes.

In normal operation and during startup, the CUS10S30 Schottky diode looks like an open circuit and it, Cc, and the 1 MΩ resistor have a negligible effect on circuit operation. Cc prevents the flow of diode reverse current, which could otherwise produce output voltage errors. If Vout transitions to ground rapidly, Cc and the diode prevent any negative voltage from appearing at the junction of the capacitors. Rc provides a cheap “just in case” limit of the current into the FB pin from that voltage transient if it somehow saw a negative voltage. (Check the maximum FB pin current to ensure that no significant error-inducing voltages develop across Rc.) When the circuit has settled, the voltage across Cc is discharged, and the circuit is ready to restart normally.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Improve PWM controller-induced ripple in voltage regulators appeared first on EDN.

A transistor thermostat for DAC voltage references

Wed, 10/01/2025 - 17:19

Frequent contributor Christopher Paul recently provided us with a painstakingly conservatively error-budget-analyzed Design Idea (DI) for a state-of-the-art pursuit of a 16-bit-perfection PWM DAC.

The DI presented below, while shamelessly kibitzing on Chris’ excellent design process and product, should in no way be construed as criticism or even a suggested modification. It is neither. It’s just a voyage into the strange land of ultimate precision.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In his pursuit of perfect precision, Christopher creatively coped with the limitations of the “art.” Perhaps the most intractable of these limitations in the context of his design was the temperature coefficient of available moderately priced precision voltage references. His choice of the excellent REF35xx family of references, for example, exhibits a temperature coefficient (tempco) of 12 ppm/°C = 0.8 lsb/°C = 55 lsb over 0 to 70°C, reducing this element of conversion precision to only an effective 10.2 bits.

Since that was more than an order of magnitude worse than other error factors (e.g., DNL, INL, ripple) in Christopher’s simple and elegant (and nice!) design, it got me musing about what possibilities might exist to mediate it. 

Let me candidly admit upfront that my musing was unconstrained by a concern for the practical damage such possibilities might imply towards the simplicity and elegance of the design. This included damage, such as doubling the parts count and vastly increasing the power consumption.

But with those caveats out of the way, here we go.

The obvious possibility that came to mind, of course, was what if we reduced the importance of thermal instability of the reference by the simple (and brute-force) tactic of putting it in a thermostat? Over the years, we’ve seen lots of DIs for using transistors as sensors and heaters (sometimes combining both functions in the same device) for controlling the temperature of single components. Figure 1 illustrates the thermo-mechanics of such a scheme for this application. 

Figure 1 Thermally coupling the transistor sensor/heater to the DAC voltage reference to stabilize its temperature.

A nylon machine screw clamps the heatsink hotspot of a TO-220-packaged transistor (TIP31G) in a cantilever fashion onto the surface of the reference. A foam O-ring provides a modicum of thermal insulation. A dab of thermal grease on the mating surfaces will improve thermal coupling.

Figure 2 shows the electronics of the thermostat. Here’s how that works.

Figure 2 Q1 is a combo heater/sensor for a ±1°C thermostat, nominal setpoint ~70°C. R3 = 37500/(Vref – 0.375).

Q1 is the core of the thermostat. Under the control of gated multivibrator U1, it alternates between a temperature measurement when U1’s “Out” pin is low, and heating when U1’s “Out” pin goes high. Setpoint corresponds to Q1 Vbe = 375 mV as generated by the voltage divider R3/R4, detected by comparator A1, and timed by U1. 

I drew Figure 2 with the R3/R4 divider connected to +5 V, but in practice, this might not be the ideal choice. The thermostat setpoint will change by ~1.6°C per 1% change in Vref, so sub-percentage-point Vref stability is crucial to achieve optimal 16-bit DAC performance. The +5-V supply rail may therefore not be stable enough, and using the thermostatted DAC reference itself would be (much) better.

Any Vref of adequate stability and at least 365 mV may be used by simply setting R3 = 37500/(Vref – 0.375). For the same reason, R3 and R4 should be 1% or better metal film types. The point isn’t setpoint accuracy, which matters little, but stability, which matters much.
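
As a quick illustration of that R3 formula (my own numbers; neither reference voltage below is specified in this design), two candidate references work out as follows:

```python
# R3 selection for the setpoint divider: R3 = 37500 / (Vref - 0.375)
def r3_ohms(vref_v: float) -> float:
    return 37500.0 / (vref_v - 0.375)

for vref in (2.5, 5.0):  # illustrative reference voltages
    print(vref, "V ->", round(r3_ohms(vref)), "ohms")  # ~17.6 k and ~8.1 k
```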

Vbe > 375mV indicates Q1 junction temp < setpoint, which gates U1 on. This allows U1 “Out” to transition to +5 V. This turns on driver transistor Q3, supplying ~20 mA to the Q1, Q2 pair. Q2 functions as a basic current regulator, limiting Q1’s heating current to ~0.7 V/1.5 Ω = 470 mA and therefore heating power to 2 W

The feedback loop thus established, Q1 Vbe to A1 to U1 to Q3 to Q1, adjusts the U1 duty cycle from 0 to 95%, and thereby tweaks the heating power to maintain thermostasis. Note that I omitted pinout numbers on A1 to accommodate the possibility that it might be contained in a multifunction chip (e.g., a quad) used elsewhere in the DAC.

Q.E.D. But wait! What are C2 and R2 for? Their reason for being, in general terms, is to be found in “Fixing a fundamental flaw of self-sensing transistor thermostats.”

As “Fixing…” explains, a fundamental limitation on the accuracy of thermostats like Figure 1 is as follows. The junction temperature (Tj) that we can actually measure is only an imperfect approximation of what we’re really interested in: controlling the package temperature (Tc). Figure 3 shows why.

Figure 3 The fatal flaw of Figure 1: the junction temperature is an imperfect approximation of the package temperature.

Because of the nonzero thermal impedance (Rjc) between the transistor junction and the surface of its case, an error term is introduced that’s proportional to that impedance and the heating power:

Terr = Tj – Tc = Rjc*Pj

In the TIP31 datasheet, Rjc is specified in the “Thermal Characteristics” section as 3.125 °C/W. Therefore, as Pj goes from 0 to 2 W, Terr would go from 0 to 6.25 °C. Recalling that the REF35 has a 12 ppm/°C tempco, that would leave us with 12 x 6.25 = 75 ppm = 5 lsb DAC drift. 
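
Here is the same error budget worked through in a few lines (my own restatement of the numbers in this paragraph and the earlier tempco figure):

```python
# Junction-to-case error term and the resulting DAC drift
R_JC_C_PER_W = 3.125   # TIP31 junction-to-case thermal resistance, degC/W
TEMPCO_PPM = 12.0      # REF35 reference tempco, ppm/degC
BITS = 16

def dac_drift_lsb(p_heat_w: float) -> float:
    t_err_c = R_JC_C_PER_W * p_heat_w   # Terr = Rjc * Pj
    drift_ppm = TEMPCO_PPM * t_err_c    # reference shift in ppm
    return drift_ppm * 1e-6 * 2**BITS   # expressed in 16-bit LSBs

print(round(R_JC_C_PER_W * 2.0, 2), "degC Terr at 2 W;",
      round(dac_drift_lsb(2.0), 1), "LSB drift")  # 6.25 degC, ~4.9 LSB
```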

That’s 11x better than the 55-lsb tempco error we started with, but it’s still quite a way from true 16-bit accuracy. Can we do even better?

Just like the R11, R12, C2 network in Figure 2 of “Fixing a fundamental flaw of self-sensing transistor thermostats” that adds a Pj proportional Terr correction to the thermostat setpoint, that’s what R2 and C2 do here in this DI. C2 accumulates a ~23 ms average of 0 to 100% heating duty cycle = 0 to 700 mV, and adds through R2 a proportional 0 to 14 mV = 0 to 6.25°C Terr correction to the setpoint for net ±1°C stable thermostasis and < 1 lsb reference instability.

Now Q.E.D!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post A transistor thermostat for DAC voltage references appeared first on EDN.

An off-line power supply

Tue, 09/30/2025 - 16:07

One of my electronics interests is building radios, particularly those featured in older UK electronics magazines such as Practical Wireless, Everyday Electronics, Radio Constructor, and The Maplin Magazine. Most of those radios are designed to run on a 9-V disposable PP3 battery.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Using 9 V instead of the 3 V found in many domestic radios allows the transistors in these often-simple circuits to operate with a higher gain. PP3 batteries are, at a minimum, expensive in circuits consuming tens of mA and are—I suspect—hard to recycle. A more environmentally friendly solution was needed.

In the past, I’ve used single 3.6-V lithium-ion (Li-ion) cells from discarded e-cigarettes [1] with cheap combined charger and DC-DC converter modules found on eBay. They provide a nice, neat solution when housed in a small plastic box, but unfortunately generate a lot of electromagnetic interference (EMI), which falls within the shortwave band of frequencies (3 to 30 MHz) where a lot of the radios I build operate. I needed another solution that was EMI-free and environmentally friendly.

Solution

One solution is to eliminate the DC-DC converter and string together three or more Li-ion cells in a battery pack (B1) with a variable linear regulator (IC1) to generate the required 9 V (V1) as shown in Figure 1. Li-ion cells, like all electronic components, have tolerances. The two most important parameters are cell capacity and open circuit voltage. Differences in these parameters between cells in series lead to uneven charging and ultimately stressing of some cells, leading to their eventual degradation [2]. To even out these differences, Li-ion battery packs often contain a battery management system (BMS) to ensure that cells charge evenly.

Figure 1 Li-ion battery pack, with 3 or more Li-ion cells, and a variable linear regulator to generate the required 9 V.

As luck would have it, on the local buy-nothing group in Ottawa, Canada, where I live, someone was giving away a Mastercraft 18-V Li-ion battery with charger as shown in Figure 2. The person offering it had misplaced the drill, so there was little expense for me. Upon opening the battery pack, it was indeed found to contain a battery management system (BMS). This seemed like an ideal solution.

Figure 2 The Mastercraft 18-V Li-ion battery and charger obtained locally.

Circuit

The next step was to make a linear voltage regulator to drop 18 V to 9 V. This, in itself, is not particularly environmentally friendly, as it is only 50% efficient, and any dropped battery voltage will be dissipated as heat. However, assuming renewable power generation is used as the source, this would prove a more environmentally friendly solution compared to using disposable batteries.

In one of my boxes of old projects, I found a constant current nickel-cadmium (NiCad) battery charger. It was based around an LM317 linear voltage regulator in a nice black plastic enclosure sold by Maplin Electronics as a “power supply” box. The NiCad battery hadn’t been used for over 20 years, so this project would be a repurpose. A schematic of the rewired power supply is shown in Figure 3.

Figure 3 The power supply schematic with four selectable output voltages—6, 9, 12, and 13.8 V.

In Figure 3, switch S1 functions as both the power switch and the output-voltage selector. Four different output voltages are selectable based on current needs: 6 V, 9 V, 12 V, and 13.8 V can be chosen by adjusting the ratio of R2 to R3-R6, as shown in the LM317 datasheet [3]. R2 is usually 220 Ω and develops 1.23 V across it; the remaining output voltage is developed across R3-R6. To get the exact values, parallel combinations are used as shown in Table 1.

Resistor # | Resistors (Ω) | Combined value (Ω)
3 | 910, 18k, 15k | 819
4 | 1.5k, 22k, 33k | 1.35k
5 | 2.2k, 15k | 1.92k
6 | 2.2k | 2.2k

Table 1 Different values of paralleled R3 to R6 resistors and their combined value.
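
For a sanity check of Table 1 against the target outputs, the standard LM317 output equation can be evaluated for each combined resistance. This is my own sketch: it assumes the datasheet-typical 1.25-V reference and 50-µA adjust-pin current, whereas the author’s build develops about 1.23 V across R2, so the real outputs differ slightly.

```python
# LM317 output-voltage estimate for the combined R3-R6 values in Table 1
V_REF = 1.25   # typical LM317 reference voltage, volts
I_ADJ = 50e-6  # typical adjust-pin current, amperes
R2 = 220.0     # upper divider resistor, ohms

def vout(r_lower_ohm: float) -> float:
    return V_REF * (1 + r_lower_ohm / R2) + I_ADJ * r_lower_ohm

for r in (819, 1350, 1920, 2200):
    print(r, "ohms ->", round(vout(r), 2), "V")  # ~5.94, 8.99, 12.26, 13.86 V
```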

A photograph of the finished power supply with a Li-ion battery attached is shown in Figure 4.

Figure 4 A photograph of the finished power supply with four selectable output voltages that can be adjusted via a knob.

Results

Crimp-type spade connectors were fitted to the two input wires, which mated well with the terminals of the Li-ion battery. Maybe at some point, I will 3D-print a full connector for the battery. With the resistor values shown in Figure 3, the actual output voltages produced are 5.96 V, 9.03 V, 12.15 V, and 13.8 V. While these are not exactly the designed values, due to the use of preferred resistor values, it is of little consequence: the output voltage of disposable batteries varies over their operating life, and there is, of course, a voltage drop due to cables. With this power supply, though, the output voltage remains constant even as the Li-ion battery’s voltage drops as it discharges.

Portable power

Although the power supply was intended for powering radio projects, it has other uses where portable power is needed and a DC-DC converter is too noisy, like sensitive instrumentation or some audiophile preamplifier [4]. 

Gavin Watkins is the founder of GapRF, a producer of online EDA tools focusing on the RF supply chain. When not doing that, he is happiest noodling around in his lab, working on audio electronics and RF projects, and restoring vintage equipment.

Related Content

References

  1. Reusing e-cigarette batteries in a e-bike, https://globalnews.ca/news/10883760/powering-e-bike-disposable-vapes/
  2. BU-808: How to Prolong Lithium-based Batteries, https://batteryuniversity.com/article/bu-808-how-to-prolong-lithium-based-batteries
  3. LM317 regulator datasheet, https://www.ti.com/lit/ds/symlink/lm317.pdf
  4. Battery powered hifi preamp, https://10audio.com/dodd_battery_pre/

The post An off-line power supply appeared first on EDN.

(Dis)assembling the bill-of-materials list for measuring blood pressure on the wrist

Mon, 09/29/2025 - 17:07

More than a decade ago, I visited my local doctor’s office, suffering from either kidney stone or back-spasm pain (I don’t recall which; at the time, it could have been either, or both, for that matter). As usual, the assistant logged my height and weight on the hallway scale, then my blood pressure in the examination room. I recall her measuring the latter, then re-measuring it, then hurriedly leaving the room with a worried look on her face and an “I’ll be back in a minute” comment. Turns out, my systolic blood pressure reading was near 200; she and the doctor had been conferring on whether to rush me to the nearest hospital in an ambulance.

Fortunately, a painkiller dropped my blood pressure below the danger point (spikes are a common body response to transient acute pain) in a timely manner, but the situation more broadly revealed that my pain-free ongoing blood pressure was still at the stage 2 hypertension level. My response was three-fold:

Traditional measurement techniques

Before continuing, here’s a quick definition of the two data points involved in blood pressure:

  • Systolic blood pressure is the first (top/upper) number. It measures the pressure your blood is pushing against the walls of your arteries when the heart beats.
  • Diastolic blood pressure is the second (bottom/lower) number. It measures the pressure your blood is pushing against your artery walls while the heart muscle rests between beats.

How is blood pressure traditionally measured at the doctor’s office or a hospital, specifically via a device called a sphygmomanometer in conjunction with a stethoscope? Thanks for asking:

Your doctor will typically use the following instruments in combination to measure your blood pressure:

  • a cuff that can be inflated with air,
  • a pressure meter (manometer) for measuring the air pressure inside the cuff, and
  • a stethoscope for listening to the sound the blood makes as it flows through the brachial artery (the major artery found in your upper arm).

 To measure blood pressure, the cuff is placed around the bare and extended upper arm, and inflated until no blood can flow through the brachial artery. Then the air is slowly let out of the cuff. As soon as blood starts flowing into the arm, it can be heard as a pounding sound through the stethoscope. The sound is produced by the rushing of the blood and the vibration of the vessel walls. The systolic pressure can be read from the meter once the first sounds are heard. The diastolic blood pressure is read once the pounding sound stops.

Home monitoring devices

What about at home? Here, there’s no separate stethoscope—or another person trained in listening to it and discerning what’s heard, for that matter—involved. And no, there isn’t a microphone integrated in the cuff to listen to the brachial artery, coupled with digital signal processing to analyze the microphone outputs, either (admittedly, that was Mr. Engineer here’s initial theory, until a realization of the bill-of-materials cost involved to implement the concept compelled me to do research on alternative approaches). This Reddit thread, specifically the following post within it, was notably helpful:

Pressure transducer within the machine. The pressure transducer can feel the pressure within the cuff. The air pressure in the cuff is the same at the end of the line in the machine.

So, like a manual BP cuff, the computer pumps air into the cuff until it feels a pulse. The pressure transducer actually senses the change in cuff pressure as the heartbeat.

That pulse is only looked at a little, get a relative beats per minute from the cuff. Now that the cuff can sense the pulse, keep pumping air until the pulse stops being sensed. That’s systolic. Now slowly and gently release air until you feel the pulse again. Check it against the rate number you had earlier. If it’s close, keep releasing air until you lose the sense. The last pressure that you had the pulse is the diastolic.

 It grabs the two numbers very similarly to how you do it with your ears and a stethoscope. But, it is able to measure the pressure directly and look at the pressure many times per second, instead of your eyes and ears listening to the pulse and watching the gauge.

That’s where the specific algorithm inside the computer takes over. They’re all black magic as to exactly how they interpret pulse. Peaks from baseline, rise and fall, rising wave, falling wave, lots of ways to count pulses on a line. But all of them can give you a heart rate from just a blood pressure cuff.

Another Redditor explained the process a bit differently in that same thread, specifically in terms of exactly when the systolic value is ascertained:

OK, imagine your arm is a like a balloon and your heartbeat is a drummer inside. The cuff squeezes the balloon tight, no drumming gets out. As it slowly lets air out, the first quiet drumbeat you “hear” is your systolic. When the drumming gets too lazy to rattle the balloon, that’s your diastolic. The machine just listens for those drum‑beats via pressure wobbles in the cuff, no extra pulse sensor needed!
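
Both explanations describe the same basic oscillometric trick. To make it slightly more concrete, here’s a minimal Python sketch of the general idea only (not any particular monitor’s proprietary algorithm): sample the cuff pressure, subtract the slow deflation ramp to isolate the tiny heartbeat-induced “wobbles,” and read systolic and diastolic from where their amplitude envelope crosses fixed fractions of its peak. The synthetic pressure trace, the filter, and the threshold ratios are all illustrative assumptions on my part:

import numpy as np

def simulate_cuff_deflation(fs=100, start_mmhg=180.0, stop_mmhg=40.0,
                            deflate_rate=3.0, hr_bpm=72, map_mmhg=93.0):
    # Synthetic cuff-pressure trace: a slow linear deflation ramp plus small
    # heartbeat-induced oscillations whose amplitude peaks near mean arterial
    # pressure (MAP). All of the numbers are illustrative, not clinical.
    t = np.arange(0, (start_mmhg - stop_mmhg) / deflate_rate, 1.0 / fs)
    ramp = start_mmhg - deflate_rate * t
    amplitude = 3.0 * np.exp(-((ramp - map_mmhg) / 20.0) ** 2)
    return ramp + amplitude * np.sin(2 * np.pi * (hr_bpm / 60.0) * t)

def estimate_bp(pressure, fs=100, sys_ratio=0.5, dia_ratio=0.8):
    # Subtract a one-second moving average (the deflation ramp) to isolate the
    # pulse "wobbles", smooth their rectified amplitude into an envelope, then
    # read systolic/diastolic where that envelope crosses fixed fractions of
    # its peak. Commercial monitors refine this considerably.
    window = np.ones(fs) / fs
    baseline = np.convolve(pressure, window, mode="same")
    envelope = np.convolve(np.abs(pressure - baseline), window, mode="same")
    envelope[:fs] = envelope[-fs:] = 0.0      # discard filter edge effects
    peak = int(np.argmax(envelope))
    sys_i = np.where(envelope[:peak] >= sys_ratio * envelope[peak])[0][0]
    dia_i = peak + np.where(envelope[peak:] >= dia_ratio * envelope[peak])[0][-1]
    return baseline[sys_i], baseline[dia_i], baseline[peak]

systolic, diastolic, mean_arterial = estimate_bp(simulate_cuff_deflation())
print("~%.0f/%.0f mmHg, MAP ~%.0f mmHg" % (systolic, diastolic, mean_arterial))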

I came across a couple of nuances in a teardown of a different machine than the one we’ll be looking at today. First off, particularly note the following bolded-by-me emphasis phrase:

The system seems to be quite simple – a DC motor drives a pump (PUMP-924A) to inflate the cuff. The port to the cuff is actually a tee, with the other port heading towards a solenoid valve that is venting to atmosphere by default. When the unit starts, it does a bit of a leak-check which inflates the cuff to a small value (20mmHg) and sits there for a bit to also ensure that the user isn’t moving about, and detect if the cuff is too tight or too loose. From there, it seems to inflate at a controlled pressure rate, which requires running the motor at variable speed depending on the tightness of the cuff and the pressure in the cuff.

Note, too, the following functional deviation of the device showcased at “Dr. Gough’s Tech Zone” (by Dr. Gough Lui, with the most excellent tagline “Reversing the mindless enslavement of humans by technology”) from the previous definition I’d quoted, which had described measuring systolic and diastolic pressure on the cuff-deflation phase of the entire process:

As a system that measures on the inflation stroke, it’s quicker but I do have my hesitations about its accuracy.
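
Out of engineering curiosity, here’s a toy Python sketch of what that start-up sequence might look like: a brief leak-and-fit check at roughly 20 mmHg, followed by inflation at a controlled rate achieved by trimming the pump drive up or down (the “variable speed” behavior noted above). The cuff model, gains, and thresholds are made-up illustrative numbers, not anything extracted from this (or any other) device’s firmware:

class SimulatedCuff:
    # Crude cuff/pump model: pressure rises with pump duty and sags with a
    # small leak. Gains and rates are made-up numbers purely for illustration.
    def __init__(self, pump_gain=15.0, leak_rate=0.2):
        self.pressure = 0.0            # mmHg
        self.pump_gain = pump_gain     # mmHg per second at 100% pump duty
        self.leak_rate = leak_rate     # mmHg per second lost to leakage

    def step(self, duty, dt):
        delta = (duty * self.pump_gain - self.leak_rate) * dt
        self.pressure = max(0.0, self.pressure + delta)
        return self.pressure

def leak_check(cuff, dt=0.05, target=20.0, hold_s=3.0, max_drop=2.0):
    # Inflate to ~20 mmHg, stop pumping, and watch for excessive droop
    # (a loose cuff, a bad seal, or a fidgeting user).
    while cuff.step(duty=0.5, dt=dt) < target:
        pass
    start = cuff.pressure
    for _ in range(int(hold_s / dt)):
        cuff.step(duty=0.0, dt=dt)
    return (start - cuff.pressure) <= max_drop

def controlled_inflation(cuff, target=180.0, rate=5.0, dt=0.05, kp=0.02):
    # Track a fixed inflation rate (mmHg/s) by nudging the pump duty up or
    # down, which is what running the motor "at variable speed" amounts to.
    duty, last = 0.3, cuff.pressure
    while cuff.pressure < target:
        cuff.step(duty, dt)
        measured_rate = (cuff.pressure - last) / dt
        duty = min(1.0, max(0.0, duty + kp * (rate - measured_rate)))
        last = cuff.pressure
    return cuff.pressure

cuff = SimulatedCuff()
print("leak check passed:", leak_check(cuff))
print("inflated to %.1f mmHg" % controlled_inflation(cuff))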

Wrist cuff-monitoring pros and cons

When I decided to start regularly measuring my own blood pressure at home, I initially grabbed Samsung’s BW-325S, a wrist-located, cuff-based monitor I’d had sitting around unused through multiple residence transitions (which explains the packaging’s like-new condition; I won’t pretend it saw frequent use). The republished version of the press release I found online includes a 2006 copyright date:

I quickly discovered, however, that its results were inconsistent (when consecutive readings were taken experimentally only a few minutes apart, to clarify; day-to-day deviations would have been expected). Some of this was likely due to imperfect arm-and-hand positioning on my part. And since I was single at the time, I didn’t have a partner around to help me put it on; an upper-arm cuff-based device, conversely, leaves both hands free for placement purposes. That said, my research also suggests that upper-arm cuff-located devices are inherently more reliable than wrist-cuff alternatives (or than approaches that measure pulse rate via photoplethysmography, computer-vision facial analysis, or other techniques, for that matter).

I’ve now transitioned to using an Omron BP786N upper-arm cuff device, which also includes Bluetooth connectivity for smartphone data-logging and -archiving purposes.

Dissecting the Samsung BW-325S

Having retired my wrist cuff device, I’ll be tearing it down today to satisfy my own curiosity (and hopefully at least some of yours as well). Afterwards, assuming I’m able to reassemble it in fully functional condition, I’ll probably go ahead and donate it, in the spirit of “ballpark accuracy is better than nothing at all.” That said, I’ll include a note for the recipient suggesting periodic cross-checks against another device, whether at home, at a pharmacy, or at a medical clinic.

Opening and emptying the box reveals some literature:

along with our patient, initially housed within a rugged plastic case convenient for travel (and as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes).

Open Sesame:

I briefly popped in a couple of AAA batteries to show you what the display looks like near-fully digit-populated on measurement startup:

More generally, here are some perspectives of the device from various vantage points, and with the cuff both coiled and extended:

There are two screw heads visible on both the right side, whose sticker is also info-rich:

And the left, specifically inside the hard-to-access battery compartment (another admitted reason why I decided to retire the device):

You know what comes next, right?

Easy peasy:

Complete with a focus shift:

The inside of the top half of the case is comparatively unmemorable, unless you’re into the undersides of front-panel buttons:

That’s more like it:

Look closely (lower left corner, specifically) and you’ll see what looks like evidence that one of the screws that supposedly holds the PCB in place has been missing since the device left the factory:

Turns out, however, that this particular “hole” doesn’t go all the way through; it’s just a raised disc formed in the plastic, to fit inside the PCB hole (thereby holding the PCB in place, horizontally at least). Why, versus a proper hole and associated screw? I dunno (BOM cost reduction?). Nevertheless, let’s remove the other (more accurately: only) screw:

Now we can flip the assembly over:

And rotate it 90° to expose the innards to full view.

We want to pump you up

The pump, valve, and associated tubing are located underneath the PCB:

Directly below the battery compartment is another (white-color) hole, into which fits the pressure transducer attached to the PCB underside:

“Dr. Gough” notes in the teardown of his unit that “The pressure sensor appears to be a differential part with the other side facing inside the case for atmospheric pressure perhaps.”

Speaking of “the other side,” there’s an entire other side of the PCB that we haven’t seen yet. Doing so requires first carefully peeling the adhesive-attached display away:

Revealing, along with some passives, the main control/processing/display IC marked as follows:

86CX23
HL8890
076SATC22 [followed by an unrecognized company logo]

Its supplier, identity, and details remain (definitively, at least) unknown to me, unfortunately, despite plenty of online research (and for what it’s worth, others are baffled as well). Some distributor-published references indicate that the original developer is Sonix, although that company’s website suggests it focuses exclusively on semiconductor fabrication, packaging, and test technologies and equipment. Others have found this same chip in blood pressure monitoring devices from Health & Life, a Taiwan-based personal medical equipment company (the apparent source of the HL in the product code), which makes me wonder whether Samsung simply relabeled and sold a blood pressure monitor originally designed and built by Health & Life (in retrospect, note the “Healthy Living” branding all over the device and its packaging), or just bought up Health & Life’s excess IC inventory. Insights, readers?

The identity of the other IC in this photo (to the right of the 86CX23-HL) was thankfully easier to ascertain and matched my in-advance suspicion of its function. After cleaning away the glue with isopropyl alcohol and my fingernail, I faintly discerned the following three-line marking:

ATMEL716
24C08AN
C277 D

It’s an Atmel (now Microchip Technology) 24C08 8 Kbit I²C-compatible 2-wire serial EEPROM, presumably used to store logged user data in a nonvolatile fashion that survives system battery expiration, removal, and replacement steps.
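
If you’re curious about what’s stored inside, a 24C08-class EEPROM is easy to dump from anything with an I²C master. Here’s a minimal Python sketch using the smbus2 package on a Linux host such as a Raspberry Pi. The bus number, and the assumption that the chip responds at the standard 0x50 address family (a 24C08 presents its 1 KB as four 256-byte blocks on consecutive I²C addresses), are mine; check them against your own wiring before trusting the output:

from smbus2 import SMBus

EEPROM_SIZE = 1024   # 24C08: 8 Kbit = 1 KB
BLOCK_SIZE = 256     # addressed as four 256-byte blocks
BASE_ADDR = 0x50     # assumption: block n responds at BASE_ADDR + n (A2 low)
CHUNK = 16           # small reads stay well under SMBus transfer limits

def dump_24c08(bus_num=1):
    # Read every block sequentially and return the full memory image.
    data = bytearray()
    with SMBus(bus_num) as bus:
        for block in range(EEPROM_SIZE // BLOCK_SIZE):
            for offset in range(0, BLOCK_SIZE, CHUNK):
                data += bytes(bus.read_i2c_block_data(BASE_ADDR + block,
                                                      offset, CHUNK))
    return bytes(data)

if __name__ == "__main__":
    image = dump_24c08()
    # Simple hex dump, 16 bytes per row.
    for row in range(0, len(image), 16):
        print("%03X: %s" % (row, image[row:row + 16].hex(" ")))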

All that’s left is to reverse my steps and put everything back together carefully. Reinsert a couple of batteries, press the front panel switch, and…

Huzzah! It lives to measure another person another day! Conceptually, at least… and worry not, dear readers: that 180 millimeters of mercury (mmHg) systolic measurement is not accurate. Wrapping up at this point, I await your thoughts in the comments!

 Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post (Dis)assembling the bill-of-materials list for measuring blood pressure on the wrist appeared first on EDN.

Hybrid system resolves edge AI’s on-chip memory conundrum

Fri, 09/26/2025 - 16:52

Edge AI—enabling autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives—can now adapt learning models on the fly while keeping energy consumption and hardware wear under tight control.

It’s made possible by a hybrid memory system that combines the best traits of two previously incompatible technologies—ferroelectric capacitors and memristors—into a single, CMOS-compatible memory stack. This novel architecture has been developed by scientists at CEA-Leti in collaboration with other French microelectronics research centers.

Their work has been published in a paper titled “A Ferroelectric-Memristor Memory for Both Training and Inference” in Nature Electronics. It explains how it’s possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems.

 

The on-chip memory conundrum

Edge AI requires both inference (reading data to make decisions) and learning, a.k.a. training (updating models based on new data), to happen on-chip without burning through energy budgets or running up against hardware constraints. For on-chip memory, however, memristors are considered suitable for inference, while ferroelectric capacitors (FeCAPs) are better suited to learning tasks.

Resistive random-access memories or memristors excel at inference because they can store analog weights. Moreover, they are energy-efficient during read operations and better support in-memory computing. However, while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.

On the other hand, ferroelectric capacitors allow rapid, low-energy updates, but their read operations are destructive, making them unsuitable for inference. Consequently, design engineers face the choice of either favoring inference and outsourcing training to the cloud or carrying out training with high costs and limited endurance.

This led French scientists to adopt a hybrid approach in which forward and backward passes use low-precision weights stored in analog form in memristors, while updates are achieved using higher-precision FeCAPs. “Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper on this new hybrid memory system.

How hybrid approach works

The CEA-Leti team developed this hybrid system by engineering a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode memory device can operate as a FeCAP or a memristor, depending on its electrical formation.

In other words, the same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state. Here, a digital-to-analog transfer method, requiring no formal DAC, converts hidden weights in FeCAPs into conductance levels in memristors.
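
Conceptually, the scheme echoes the “hidden weight” approach used in quantized neural-network training. The toy Python sketch below illustrates that general idea only, not CEA-Leti’s actual circuit or algorithm: high-precision integer hidden weights (standing in for the FeCAPs) accumulate small gradient updates, while the forward and backward passes use a coarse quantized copy (standing in for the memristor conductance levels) that is periodically refreshed from the hidden weights’ most-significant bits. The bit widths, level counts, and toy regression task are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn a small linear map y = W_true @ x from noisy samples.
W_true = 0.5 * rng.normal(size=(4, 8))

HIDDEN_BITS = 16      # FeCAP-like high-precision hidden-weight storage
ANALOG_LEVELS = 16    # memristor-like coarse conductance levels
REFRESH_EVERY = 32    # periodic MSB -> conductance reprogramming interval
SCALE = 2.0 / 2 ** (HIDDEN_BITS - 1)   # signed integer units -> weight units

def to_analog(hidden_int):
    # Keep only the most-significant information: quantize the hidden weights
    # down to a handful of conductance levels spanning roughly [-2, +2].
    step = 4.0 / (ANALOG_LEVELS - 1)
    return np.clip(np.round(hidden_int * SCALE / step) * step, -2.0, 2.0)

hidden = np.zeros(W_true.shape, dtype=np.int64)   # high-precision "FeCAP" store
analog = to_analog(hidden)                        # low-precision "memristor" copy

for i in range(2000):
    x = rng.normal(size=8)
    y = W_true @ x + 0.01 * rng.normal(size=4)
    y_hat = analog @ x                 # forward pass uses the analog weights
    grad = np.outer(y_hat - y, x)      # backward pass, also low precision
    hidden -= np.round(0.01 * grad / SCALE).astype(np.int64)  # tiny hidden update
    np.clip(hidden, -(2 ** (HIDDEN_BITS - 1)), 2 ** (HIDDEN_BITS - 1) - 1,
            out=hidden)
    if i % REFRESH_EVERY == 0:
        analog = to_analog(hidden)     # periodic reprogramming step

print("mean-squared weight error: %.4f" % np.mean((to_analog(hidden) - W_true) ** 2))

The point mirrored here is that each individual update is far too small to represent at memristor precision; that is exactly why a higher-precision hidden store is needed for learning, even though coarse analog weights suffice for inference.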

The hardware for this hybrid system was fabricated and tested on an 18,432-device array using standard 130-nm CMOS technology, integrating both memory types and their periphery circuits on a single chip.

CEA-Leti has acknowledged funding support for this design undertaking from the European Research Council and the French Government’s France 2030 grant.

Related Content

The post Hybrid system resolves edge AI’s on-chip memory conundrum appeared first on EDN.
