Can a smart ring make me an Ultrahuman being?

In last month’s smart ring overview coverage, I mentioned two things that are particularly relevant to today’s post:
- I’d be following it up with a series of more in-depth write-ups, one per ring introduced in the overview, the first of which you’re reading here, and
- Given the pending ITC (International Trade Commission) block of further shipments of RingConn and Ultrahuman smart rings into the United States, save for warranty replacements for existing owners (a ruling announced a few days prior to my submission of the overview writeup to Aalyia), I planned to prioritize the RingConn and Ultrahuman posts in the hopes of getting them published prior to the October 21 deadline, in case US readers were interested in purchasing either of them ahead of time (note, too, that the ITC ruling doesn’t affect readers in other countries, of course).
Since the Ultrahuman Ring AIR was the first one that came into my possession, I’ll dive into its minutiae first. To start, I’ll note, in revisiting the photo from last time of all three manufacturers’ rings on my left index finger, that the Ultrahuman ring’s “Raw Titanium” color scheme option (it’s the one in the middle, straddling the Oura Gen3 Horizon to its left and the RingConn Gen 2 to its right) most closely matches the patina of my wedding band:

Here’s the Ultrahuman Ring AIR standalone:

Next up is sizing, discussed upfront in last month’s write-up. Ultrahuman is the only one of the three that offers a sizing app as a (potential) alternative to obtaining a kit, although candidly, I don’t recommend it, at least from my experiences with it. Take a look at the screenshots I took when using it again yesterday in prepping for this piece (and yes, I intentionally picked a size-calibrating credit card from my wallet whose account number wasn’t printed on the front!):
I’ll say upfront that the app was easy to figure out and use, including the ability to optionally disable “flash” supplemental illumination (which I took advantage of because with it “on”, the app labeled my speckled desktop as a “noisy background”).
That said, first off, it’s iOS-only, so folks using Android smartphones will be SOL unless they alternatively have an Apple tablet available (as I did; these were taken using my iPad mini 6). Secondly, the app’s finger-analysis selection was seemingly random (ring and middle finger on my right hand, but only middle finger on my left hand…in neither case the index finger, which was my preference). Thirdly, app sizing estimates undershot by one or multiple sizes (depending on the finger) what the kit indicated was the correct size. And lastly, the app was inconsistent use-to-use; the first time I’d tried it in late May, here’s what I got for my left hand (I didn’t also try my right hand then because it’s my dominant one and I therefore wasn’t planning on wearing the smart ring on it anyway):

Next, let’s delve a bit more into the previously mentioned, seemingly firmware-related battery life issue I came across with my initial ring. Judging from the June 2024 date stamps of the documentation on Ultrahuman’s website, the Ring AIR started shipping mid-last year (following up on the thicker and heavier but functionally equivalent original Ultrahuman R1).
Nearly a year later, when mine came into my possession, new firmware updates were still being released at a surprisingly (at least to me) rapid clip. As I’d mentioned last month, one of them had notably degraded my ring’s battery life from the normal week-ish to a half day, as well as extending the recharge time from less than an hour to nearly a full day. And none of the subsequent firmware updates I installed led to normal-operation recovery, nor did my attempted full battery drain followed by an extended delay before recharge in the hope of resetting the battery management system (BMS). I should also note at this point that other Redditors have reported that firmware updates not only killed rings’ batteries but also permanently neutered their wireless connectivity.
What happened to the original ring? My suspicion is that it actually had something to do with an inherently compromised (coupled with algorithm-worsened) charging scheme that led to battery overcharge and subsequent damage. Ultrahuman bundles a USB-C-to-USB-C cable with the ring, which would imply (incorrectly, as it turns out) that the ring charging dock circuitry can handle (including down-throttling the output as needed) any peak-wattage USB-C charger that you might want to feed it with, including (but not limited to) USB-PD-capable ones.
In actuality, product documentation claims that you should connect the dock to a charger with only a maximum output of 5W/2A. After doing research on Amazon and elsewhere, I wasn’t able to find any USB-C chargers that were that feeble. So, to get there at all, I had to dig out of storage an ancient Apple 5W USB-A charger, which I then mated to a third-party USB-A-to-USB-C cable.

That all said, following in the footsteps of others on the Ultrahuman subreddit who’d had similar experiences (and positive results), I reached out to the Reddit forum moderators (who are Ultrahuman employees, including the founder and CEO!) and after going through a few more debugging steps they’d suggested (which I’d already tried, but whatevah), got shipped a new ring.
It’s been stable through multiple subsequent firmware updates, with the stored charge dropping only ~10-15% per day (translating to the expected week-ish of between-charges operating life). And the pace of new firmware releases has also now notably slowed, suggestive of either increasing code stability or a refocus on development of the planned new product that aspires to avoid Oura patent infringement…I’m hoping for the more optimistic former option!
Other observations
More comments, some of which echo general points made in last month’s write-up:
- Since this smart ring, like those from Oura, leverages wireless inductive charging, docks are ring-size-specific. If you go up or down a size or a few, you’ll need to re-purchase this accessory (one comes with each ring, so this is specifically a concern if, like me, you’ve already bought extras for travel, elsewhere in the house, etc.)

- There’s no battery case available that I’ve come across, not even a third-party option.
- That 10-15% per day battery drop metric I just mentioned is with the ring in its initial (sole) “Turbo” operating mode, not with the subsequently offered (and now default) “Chill” option. I did drop it down to “Chill” for a couple of days, which decreased the per-day battery-level drop by a few percent, but nothing dramatic. That said, my comparative testing wasn’t extensive, so my results should be viewed as anecdotal, not scientific. Quoting again from last month’s writeup:
Chill Mode is designed to intelligently manage power while preserving the accuracy of your health data. It extends your Ring AIR battery life by up to 35% by tracking only what matters, when it matters. Chill Mode uses motion and context-based intelligence to track heart rate and temperature primarily during sleep and rest.
- It (like the other smart rings I also tested) misinterpreted keyboard presses and other finger-and-hand movements as steps, leading to over-measurement results, especially on my dominant right hand.
- While Bluetooth LE connectivity extends battery life compared to a “vanilla” Bluetooth alternative, it also notably reduces the ring-to-phone connection range. Practically speaking, this isn’t a huge deal, though, since the data is viewed on the phone. The act of picking the phone up (assuming your ring is also on your body) will also prompt a speedy close-proximity preparatory sync.
- Unlike Oura (and like RingConn), Ultrahuman provides membership-free full data capture and analysis capabilities. That said, the company sells optional Powerplug software add-ons to further expand app functionality, along with extended warranties that, depending on the duration, also include one free replacement ring in case your sizing changes due to, for example, ring-encouraged and fitness-induced weight loss.
- The app will also automatically sync with other health services, such as Fitbit and Android’s built-in Health Connect. That said, I wonder (but haven’t yet tested to confirm or deny) what happens if, for example, I wear both the ring and an inherently Fitbit-cognizant Google Pixel Watch (or, for that matter, my Garmin or Withings smartwatches).



- One other curious note: Ultrahuman claims that it’s been manufacturing rings not only in its headquarters country, India, but also in the United States since last November in partnership with a contractor, SVtronics. And in fact, if you look at Amazon’s product page for the Ring AIR, you’ll be able to select between “Made in India” and “Made in USA” product ordering options. Oura, conversely, has indicated that it believes the claimed images of US-located manufacturing facilities are “Photoshop edits” with no basis in reality. I don’t know, nor do I particularly care, what the truth is here. I bring it up only to exemplify the broader contentious nature of ongoing interactions between Oura and its upstart competitors (also including pointed exchanges with RingConn).
Speaking of RingConn, and nearing 1,600 words at this point, I’m going to wrap up my Ultrahuman coverage and switch gears for my other planned post for this month. Time (and ongoing litigation) will tell, I guess, as to whether I have more to say about Ultrahuman in the future, aside from the previously mentioned (and still planned) teardown of my original ring. Until then, reader thoughts are, as always, welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- Smart ring allows wearer to “air-write” messages with a fingertip
- The 2025 CES: Safety, Longevity and Interoperability Remain a Mess
- Can wearable devices help detect COVID-19 cases?
Universal homing sensor: A hands-on guide for makers, engineers

A homing sensor is a device used in certain machines to detect a fixed reference point, allowing the machine to determine its exact starting position. When powered on, the machine moves until it triggers the sensor, so it can accurately track movement from that point onward. It’s essential for precision and repeatability in automated motion systems.
Selecting the right homing sensor can have a big impact on accuracy, dependability, and overall cost. Here is a quick rundown of the three main types:
Mechanical homing sensors: These operate through direct contact, using switches or levers to determine position.
- Advantages: Straightforward, budget-friendly, and easy to install.
- Drawbacks: Prone to wear over time, slower to respond, and less accurate.
Magnetic homing sensors: Relying on magnetic fields, often via Hall effect sensors, these do not require physical contact.
- Advantages: Long-lasting, effective in harsh environments, and maintenance-free.
- Drawbacks: Can be affected by magnetic interference and usually offer slightly less resolution than optical sensors.
Optical homing sensors: These use infrared light paired with slotted discs or reflective surfaces for detection.
- Advantages: Extremely precise, quick response time, and no mechanical degradation.
- Drawbacks: Sensitive to dust and misalignment and typically come at a higher cost.
In clean, high-precision applications like 3D printers or CNC machines, optical sensors shine. For more demanding or industrial environments, magnetic sensors often strike the right balance. And if simplicity and low cost are top priorities, mechanical sensors remain a solid choice.

Figure 1 Magnetic, mechanical, and optical homing sensors are available in standard configurations. Source: Author
The following parts of this post detail the design framework of a universal homing sensor adapter module.
We will start with a clean, simplified schematic of the universal homing sensor adapter module. Designed for broad compatibility, it accepts logic-level inputs—including both CMOS and TTL-compatible signals—from nearly any homing sensor head, whether it’s mechanical, magnetic, or optical, making it a flexible choice for diverse applications.

Figure 2 A minimalistic design highlights the inherent simplicity of constructing a universal homing sensor module. Source: Author
The circuit is simple, economical, and built using easily sourced, budget-friendly components. True to form, the onboard test button (SW1) mirrors the function of a mechanical homing sensor, offering a convenient stand-in for setup and troubleshooting tasks.
The 74LVC1G07 (IC1) is a single buffer with an open-drain output. Its inputs accept signals from both 3.3 V and 5 V devices, enabling seamless voltage translation in mixed-signal environments. Schmitt-trigger action at all inputs ensures reliable operation even with slow input rise and fall times.
Optional flair: LED1 is not strictly necessary, but it offers a helpful visual cue. I tested the setup with a red LED and a 1-kΩ resistor (R3)—simple, effective, and reassuringly responsive.
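Since IC1’s output is open-drain, the signal line needs a pull-up to the host controller’s logic rail before a microcontroller can make sense of it. Below is a minimal C sketch of how a host MCU might poll and debounce that line; the gpio_* and millis() helpers, the pin number, and the debounce window are hypothetical placeholders standing in for whatever HAL your controller provides.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL hooks: substitute your MCU vendor's GPIO and timer API. */
extern bool     gpio_read(int pin);          /* returns true when the pin reads high      */
extern void     gpio_enable_pullup(int pin); /* only needed if an external R isn't fitted */
extern uint32_t millis(void);                /* free-running millisecond tick             */

#define HOME_SENSOR_PIN  4   /* assumed wiring: IC1 open-drain output to this MCU pin */
#define DEBOUNCE_MS      5   /* bounce window for SW1 / mechanical heads (assumption) */

void home_sensor_init(void)
{
    gpio_enable_pullup(HOME_SENSOR_PIN);  /* the open-drain output can only pull low */
}

/* IC1 pulls the line LOW when the sensor (or test button SW1) is active,
 * so "triggered" means reading logic 0 on the pin. */
static bool home_sensor_raw(void)
{
    return !gpio_read(HOME_SENSOR_PIN);
}

/* Report a trigger only after the active level has been stable for DEBOUNCE_MS. */
bool home_sensor_triggered(void)
{
    if (!home_sensor_raw())
        return false;

    uint32_t t0 = millis();
    while ((millis() - t0) < DEBOUNCE_MS) {
        if (!home_sensor_raw())
            return false;   /* bounced back: not a real trigger */
    }
    return true;
}
```

Debouncing mostly matters for the mechanical test button and contact-type sensor heads; optical and Hall-effect heads behind the Schmitt-trigger buffer are usually clean enough that the window can be shortened or dropped.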
As usual, I whipped up a quick-and-dirty breadboard prototype using an SMD adapter PCB (SOT-353 to DIP-6) to host the core chip (Figure 3). I have skipped the prototype photo for now—there is only a tiny chip in play, and the breadboard layout does not offer much visual clarity anyway.

Figure 3 A good SMD adapter PCB gives even the tiniest chip time to shine. Source: Author
A personal note: I procured the 74LVC1G07 chip from Robu.in.
Before wrapping up the setup, note that machine homing involves moving an axis toward its designated homing sensor—a specific physical location where a sensor or switch is installed. When the axis reaches this point, the controller uses it as a reference to accurately determine the axis position. For reliable operation, it’s essential that the homing sensor is mounted precisely in its intended location on the machine.
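To make that sequence concrete, here is a bare-bones single-axis homing routine in C: drive toward the sensor, back off, then re-approach slowly to latch a repeatable zero. The stepper_*, home_sensor_triggered(), and axis_set_position() functions are hypothetical stand-ins for your own motion stack, and the speeds, back-off distance, and travel limit are illustrative assumptions rather than values for any particular machine.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical motion/sensor hooks: map these onto your own firmware. */
extern void stepper_move_steps(int32_t steps, uint32_t steps_per_s); /* blocking relative move          */
extern bool home_sensor_triggered(void);                             /* e.g., the debounced read above  */
extern void axis_set_position(int32_t steps);                        /* define current position as zero */

#define FAST_HOMING_SPS   2000   /* coarse approach speed (assumption)       */
#define SLOW_HOMING_SPS    200   /* fine re-approach speed (assumption)      */
#define BACKOFF_STEPS      400   /* pull clear of the sensor before re-probe */
#define MAX_TRAVEL_STEPS 50000   /* give up if the sensor is never seen      */

bool home_axis(void)
{
    int32_t travelled = 0;

    /* 1. Coarse approach: step toward the sensor until it trips or travel runs out. */
    while (!home_sensor_triggered()) {
        if (travelled++ >= MAX_TRAVEL_STEPS)
            return false;                    /* sensor missing or miswired */
        stepper_move_steps(-1, FAST_HOMING_SPS);
    }

    /* 2. Back off so the second approach starts from a known, untripped state. */
    stepper_move_steps(BACKOFF_STEPS, FAST_HOMING_SPS);

    /* 3. Slow re-approach for a repeatable trigger point (a production version
     *    would bound this loop as well). */
    while (!home_sensor_triggered())
        stepper_move_steps(-1, SLOW_HOMING_SPS);

    /* 4. Latch this position as the machine origin. */
    axis_set_position(0);
    return true;
}
```

The slow second approach is what gives homing its repeatability; the coarse pass only gets the axis into the neighborhood of the sensor.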
While wrapping up, here are a few additional design pointers for those exploring alternative options, since we have only touched on a straightforward approach so far. Let’s take a closer look at a few randomly picked additional components and devices that may be better suited for the homing task:
- SN74LVC1G16: Inverting buffer featuring Schmitt-trigger input and open-drain output; ideal for signal conditioning and noise immunity.
- SN74HCS05: Hex inverter with Schmitt-trigger inputs and open-drain outputs; useful for multi-channel logic interfacing.
- TCST1103/1202/1300: Transmissive optical sensor with phototransistor output; ideal for applications that require position sensing or the detection of an object’s presence or absence.
- TCRT5000: Reflective optical sensor; ideal for close-proximity detection.
- MLX75305: Light-to-voltage sensor (EyeC series); converts ambient light into a proportional voltage signal, suitable for optical detection.
- OPBxxxx Series: Photologic slotted optical switches; designed for precise object detection and position sensing in automation setups.
Moreover, compact inductive proximity sensors like the Omron E2B-M18KN16-M1-B1 are often used as homing sensors to detect metal targets—typically a machine part or actuator—at a fixed reference point. Their non-contact operation ensures reliable, repeatable positioning with minimal wear, ideal for robotic arms, linear actuators, and CNC machines.

Figure 4 The Omron E2B-M18KN16-M1-B1 inductive proximity sensor supports homing applications by detecting metal targets at fixed reference points. That enables precise, contactless positioning in industrial setups. Source: Author
Finally, if this felt comfortably familiar, take it as a cue to go further; question the defaults, reframe the problem, and build what no datasheet dares to predict.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Reflective Object Sensors
- Smart PIR Sensor for Smart Homes
- Inductive Proximity Switch w/ Sensor
- The role of IoT sensors in smart homes and cities
- Radar sensors in home, office, school, factory and more
Amazon and Google: Can you AI-upgrade the smart home while being frugal?

The chronological proximity of Amazon and Google’s dueling new technology and product launch events on Tuesday and Wednesday of this week was highly unlikely to have been a coincidence. Which company, therefore, reacted to the other? Judging solely from when the events were first announced, which is the only data point I have as an outsider, it looks like Google was the one who initially put the stake in the ground on September 2nd with an X (the service formerly known as Twitter) post, with Amazon subsequently responding (not to mention scheduling its event one day earlier in the calendar) two weeks later, on September 15.
Then again, who can say for sure? Maybe Amazon started working on its event ahead of Google, and simply took longer to finalize the planning. We’ll probably never know for sure. That said, it also seems from the sidelines that Amazon might have also gotten its hands on a leaked Google-event script (to be clear, I’m being completely facetious with what I just said). That’s because, although the product specifics might have differed, the overall theme was the same: both companies are enhancing their existing consumer-residence ecosystems with AI (hoped-for) smarts, something that they’ve both already announced as an intention in the past:
- Amazon, with a generative AI evolution-for-Alexa allusion two years ago, subsequently assigned the “Alexa+” marketing moniker back in February, and
- Google, which foreshadowed the smart home migration to come within its announcement of the Google Assistant-to-Gemini transition for mobile devices back in March.
Quoting from one of Google’s multiple event-tied blog posts as a descriptive example of what both companies seemingly aspire to achieve:
The idea of a helpful home is one that truly takes care of the people inside it. While the smart home has shown flashes of that promise over the last decade, the underlying AI wasn’t anywhere as capable as it is today, so the experience felt transactional, not conversational. You could issue simple commands, but the home was never truly conversational and seldom understood your context.
Today, we’re taking a massive step toward making the helpful home a reality with a fundamentally new foundation for Google Home, powered by our most capable AI yet, Gemini. This new era is built on four pillars: a new AI for your home, a redesigned app, new hardware engineered for this moment and a new service to bring it all together.
Amazon’s hardware “Hail Mary”
Of the two companies, Amazon has probably got the most to lose if it fumbles the AI-enhancement service handoff. That’s because, as Ars Technica’s coverage title aptly notes, “Alexa’s survival hinges on you buying more expensive Amazon devices”:
Amazon hasn’t had a problem getting people to buy cheap, Alexa-powered gadgets. However, the Alexa in millions of homes today doesn’t make Amazon money. It’s largely used for simple tasks unrelated to commerce, like setting timers and checking the weather. As a result, Amazon’s Devices business has reportedly been siphoning money, and the clock is ticking for Alexa to prove its worth.
I’m ironically a case study of Amazon’s conundrum. Back in early March, when the Alexa+ early-access program launched, I’d signed up. I finally got my “Your free Early Access to Alexa+ starts now” email on September 24, a week and a day ago, as I’m writing this on October 2. But I haven’t yet upgraded my service, which is admittedly atypical behavior for a tech enthusiast such as myself.
Why? Price isn’t the barrier in my particular case (though it likely would be for others less Amazon-invested than me); mine’s an Amazon Prime-subscribing household, so Alexa+ is bundled, versus costing $19.99 per month for non-subscribers. Do the math, though, and it’s hard to see why anyone wouldn’t go the bundle-with-Prime route (which, I’d argue, is Amazon’s core motivation); Prime is $14.99 per month or $139/year right now.
So, if it’s not the service price tag, then what alternatively explains my sloth? It’s the devices—more accurately, my dearth of relevant ones—with the exception of the rarely-used Alexa app on my smartphones and tablets (which, ironically, I generally fire up only when I’m activating a new standalone Alexa-cognizant device).
Alexa+ is only supported on newer-generation hardware, whereas more than half (and the dominant share in regular use) of the devices currently activated in my household are first-generation Echoes, early-generation Echo Dots, and a Tap. With the exception of the latter, which I sometimes need to power-cycle before it’ll start streaming Amazon Music-sourced music again, they’re all still working fine, at least for the “transactional” (per Google’s earlier lingo) functions I’ve historically tasked them with.
And therefore, as an example of “chicken and the egg” paralysis, in the absence of their functional failure, I’m not motivated to proactively spend money to replace them in order to gain access to additional Alexa+ services that might not end up rationalizing the upfront investment.
Speakers, displays, and stylus-augmented e-book readers
Amazon unsurprisingly announced a bevy of new devices this week, strangely none of which seemingly justified a press release or, come to think of it, even an event video, in stark contrast to Apple’s prerecorded-only approach (blog posts were published a’plenty, however). Many of the new products are out-of-the-box Alexa+ capable and, generally speaking, they’re also more expensive than their generational precursors. First off is the curiously reshaped (compared to its predecessor) Echo Studio, in both graphite (shown) and “glacier” white color schemes:

There’s also a larger version of the now-globular Echo Dot (albeit still smaller than the also-now-globular Echo Studio), called the Echo Dot Max, with the same two color options:

And two also-redesigned-outside smart displays, the Echo Show 11 and latest-generation Echo Show 8, which basically (at least to me) look like varying-sized Echo Dots with LCDs stuck to their fronts. They both again come in both graphite and glacier white options:


and also have optional, added-price, more position-adjustable stands:

This new hardware begs the perhaps-predictable question: Why is my existing hardware not Alexa+ capable? Assuming all the deep learning inference heavy lifting is being done on the Amazon “cloud”, what resource limitations (if any) exist with the “edge” devices already residing in my (at least semi-) smart home?
Part of the answer might be with my assumption in the prior sentence; perhaps Amazon is intending for them to have limited (at least) ongoing standalone functionality if broadband goes down, which would require beefier processing and memory than that included with my archaic hardware. Perhaps, too, even if all the AI processing is done fully server-side, Amazon’s responsiveness expectations aren’t adequately served by my devices’ resources, in this case also including Wi-Fi connectivity. And yes, to at least some degree, it may just be another “obsolescence by design” case study. Sigh. More likely, my initial assumption was over-simplistic and at least a portion of the inference functions suite is running natively on the edge device using locally stored deep learning models, particularly for situations where rapid response time (vs edge-to-cloud-and-back round-trip extended latency) is necessary.
Other stuff announced this week included three new stylus-inclusive, therefore scribble-capable, Kindle Scribe 11” variants, one with a color screen, which this guy, who tends to buy—among other content—comics-themed e-books that are only full-spectrum appreciable on tablet and computer Kindle apps, found intriguing until he saw the $629.99-$679.99 price tag (in fairness, the company also sells stylus-less, but notably less expensive Colorsoft models):

and higher-resolution indoor and outdoor Blink security cameras, along with a panorama-stitching two-camera image combiner called the Blink Arc:

Speaking of security cameras, Ring founder Jamie Siminoff, who had previously left Amazon post-acquisition, has returned and was on hand this week to personally unveil also-resolution-bumped (this time branded as Retinal Vision) indoor- and outdoor-intended hardware, including an updated doorbell camera model:

Equally interesting to me are Ring’s community-themed added and enhanced services: Familiar Faces, Alexa+ Greetings, and (for finding lost dogs) Search Party. And then there’s this notable revision of past stance, passed along as a Wired coverage quote absent personal commentary:
It’s worth noting that Ring has brought back features that allow law enforcement to request footage from you in the event of an incident. Ring customers can choose to share video, and they can stay anonymous if they opt not to send the video. “There is no access that we’re giving police to anything other than the ability to, in a very privacy-centric way, request footage from someone who wants to do this because they want to live in a safe neighborhood,” Siminoff tells WIRED.
A new software chapter
Last, but not least (especially in the last case) are several upgraded Fire TVs, still Fire OS-based:

and a new 4K Fire TV Stick, the latter the first out-of-box implementation example of Amazon’s newfound Linux embrace (and Linux-derived Android about-face), Vega OS:

We’d already known for a while that Amazon was shutting down its Appstore, but its Fire OS-to-Vega OS transition is more recent. Notably, there’s no more local app sideloading allowed; all apps come down from the Amazon cloud.
Google’s more modest (but comprehensive) response
Google’s counterpunch was more muted, albeit notably (and thankfully, from a skip-the-landfill standpoint) more inclusive of upgrades for existing hardware versus the day-prior comparative fixation on migrating folks to new devices, and reflective of a company that’s fundamentally a software supplier (with a software-licensing business model). Again from Wired’s coverage:
This month, Gemini will launch on every Google Assistant smart home device from the last decade, from the original 2016 Google Home speaker to the Nest Cam Indoor 2016. It’s rolling out in Early Access, and you can sign up to take part in the Google Home app.
There’s more:
Google is bringing Gemini Live to select Google Home devices (the Nest Audio, Google Nest Hub Max, and Nest Hub 2nd Gen, plus the new Google Home Speaker). That’s because Gemini Live has a few hardware dependencies, like better microphones and background noise suppression. With Gemini Live, you’ll be able to have a back-and-forth conversation with the chatbot, even have it craft a story to tell kids, with characters and voices.
But note the fine print, which shouldn’t be a surprise to anyone who’s already seen my past coverage: “Support doesn’t include third-party devices like Lenovo’s smart displays, which Google stopped updating in 2023.”
One other announced device, an upgraded smart speaker visually reminiscent of Apple’s HomePod mini, won’t ship until early next year.

And, as the latest example of Google’s longstanding partnership with Walmart, the latter retailer has also launched a line of onn.-branded, Gemini-supportive security cameras and doorbells:

That’s what I’ve got for you today; we’ll have to see what, if anything else, Apple has for us before the end of the year, and whether it’ll take the form of an event or just a series of press releases. Until then, your fellow readers and I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling the Echo Studio, Amazon’s Apple HomePod foe
- Amazon’s Echo Auto Assistant: Legacy vehicle retrofit-relevant
- Lenovo’s Smart Clock 2: A “charged” device that met a premature demise
- The 2025 Google I/O conference: A deft AI pivot sustains the company’s relevance
- Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
PoE basics and beyond: What every engineer should know

Power over Ethernet (PoE) is not rocket science, but it’s not plug-and-play magic either. This short primer walks through the basics with a few practical nudges for those curious to try it out.
It’s a technology that delivers electrical power alongside data over standard twisted-pair Ethernet cables. It enables a single RJ45 cable to supply both network connectivity and power to powered devices (PDs) such as wireless access points, IP cameras, and VoIP phones, eliminating the need for separate power cables and simplifying installation.
PoE essentials: From devices to injectors
Any network device powered via PoE is known as a powered device or PD, with common examples including wireless access points, IP security cameras, and VoIP phones. These devices receive both data and electrical power through Ethernet cables from power sourcing equipment (PSE), which is classified as either “endspan” or “midspan.”
An endspan—also called an endpoint—is typically a PoE-enabled network switch that directly supplies power and data to connected PDs, eliminating the need for a separate power source. In contrast, when using a non-PoE network switch, an intermediary device is required to inject power into the connection. This midspan device, often referred to as a PoE injector, sits between the switch and the PD, enabling PoE functionality without replacing existing network infrastructure. A PoE injector sends data and power together through one Ethernet cable, simplifying network setups.

Figure 1 A PoE injector is shown with auto negotiation that manages power delivery safely and efficiently. Source: http://poe-world.com
The above figure shows a PoE injector with auto negotiation, a safety and compatibility feature that ensures power is delivered only when the connected device can accept it. Before supplying power, the injector initiates a handshake with the PD to detect its PoE capability and determine the appropriate power level. This prevents accidental damage to non-PoE devices and allows precise power delivery—whether it’s 15.4 W for Type 1, 25.5 W for Type 2, or up to 90 W for newer Type 4 devices.
Note at this point that the original IEEE 802.3af-2003 PoE standard provides up to 15.4 watts of DC power per port. This was later enhanced by the IEEE 802.3at-2009 standard—commonly referred to as PoE+ or PoE Plus—which supports up to 25.5 watts for Type 2 devices, making it suitable for powering VoIP phones, wireless access points, and security cameras.
To meet growing demands for higher power delivery, the IEEE introduced a new standard in 2018: IEEE 802.3bt. This advancement significantly increased capacity, enabling up to 60 watts (Type 3) and circa 100 watts (Type 4) of power at the source by utilizing all four pairs of wires in Ethernet cabling compared to earlier standards that used only two pairs.
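For firmware or quick power-budget checks, it helps to keep these per-type ceilings in one place. The C table below captures the nominal IEEE 802.3af/at/bt figures, listing both the power sourced at the PSE port and the power guaranteed at the PD after worst-case cable loss (standards discussions, including the paragraphs above, sometimes quote one side or the other); treat it as a reference sketch rather than device-specific data.

```c
#include <stddef.h>
#include <stdio.h>

/* Nominal IEEE 802.3af/at/bt per-port power limits (watts).
 * pse_w: maximum power sourced at the PSE port.
 * pd_w : minimum power guaranteed at the PD after worst-case cable loss. */
struct poe_type {
    const char *name;
    double pse_w;
    double pd_w;
};

static const struct poe_type poe_types[] = {
    { "Type 1 (802.3af, PoE)",  15.4, 12.95 },
    { "Type 2 (802.3at, PoE+)", 30.0, 25.5  },
    { "Type 3 (802.3bt)",       60.0, 51.0  },
    { "Type 4 (802.3bt)",       90.0, 71.3  },
};

int main(void)
{
    for (size_t i = 0; i < sizeof(poe_types) / sizeof(poe_types[0]); i++)
        printf("%-24s  PSE: %5.1f W   PD: %5.2f W\n",
               poe_types[i].name, poe_types[i].pse_w, poe_types[i].pd_w);
    return 0;
}
```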
As indicated previously, VoIP phones were among the earliest applications of PoE. Wireless access points (WAPs) and IP cameras are also ideal use cases, as all these devices require both data connectivity and power.

Figure 2 This PoE system is powering a fixed wireless access (FWA) device.
As a sidenote, an injector delivers power over the network cable, while a splitter extracts both data and power—providing an Ethernet output and a DC plug.
A practical intro to PoE for engineers and DIYers
So, PoE simplifies device deployment by delivering both power and data over a single cable. For engineers and DIYers looking to streamline installations or reduce cable clutter, PoE offers a clean, scalable solution.
This brief section outlines foundational use cases and practical considerations for first-time PoE users. No deep dives: just clear, actionable insights to help you get started with smarter, more efficient connectivity.
Up next is the tried-and-true schematic of a passive PoE injector I put together some time ago for an older IP security camera (24 VDC/12 W).

Figure 3 Schematic demonstrates how a passive PoE injector powers an IP camera. Source: Author
In this setup, the LAN port links the camera to the network, and the PoE port delivers power while completing the data path. As a cautionary note, use a passive PoE injector only when you are certain of the device’s power requirements. If you are unsure, take time to review the device specifications. Then, either configure a passive injector to match your setup or choose an active PoE solution with integrated negotiation and protection.
Fundamentally, most passive PoE installations operate across a range of voltages, with 24 V often serving as practical middle ground. Even lower voltages, such as 12 V, can be viable depending on cable length and power requirements. However, passive PoE should never be applied to devices not explicitly designed to accept it; doing so risks damaging the Ethernet port’s magnetics.
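Cable drop is the main thing to check before settling on a low passive-PoE voltage. The short C sketch below estimates the voltage actually arriving at the device for a given injector voltage, load power, and cable length; the 0.094 Ω/m per-conductor resistance is an assumption typical of 24 AWG Cat5e, and the constant-power load model is a simplification, so adjust both for your own cable and device.

```c
#include <math.h>
#include <stdio.h>

/* Assumptions (adjust for your installation):
 *  - roughly 0.094 ohm per metre per conductor (24 AWG Cat5e)
 *  - passive PoE over the spare pairs: two conductors in parallel for each
 *    leg, so loop resistance is approximately length * r_conductor        */
#define R_CONDUCTOR_OHM_PER_M  0.094

/* Solve for the voltage at the powered device, treating it as a
 * constant-power load: Vpd^2 - Vinj*Vpd + P*Rloop = 0.
 * Returns a negative value if the cable run cannot support the load. */
double pd_voltage(double v_injector, double load_w, double cable_m)
{
    double r_loop = cable_m * R_CONDUCTOR_OHM_PER_M;
    double disc   = v_injector * v_injector - 4.0 * load_w * r_loop;

    if (disc < 0.0)
        return -1.0;
    return (v_injector + sqrt(disc)) / 2.0;
}

int main(void)
{
    /* Example: the 24 VDC / 12 W camera from Figure 3, on an assumed 30 m run. */
    double v = pd_voltage(24.0, 12.0, 30.0);
    if (v < 0.0)
        printf("Cable run too long for this load.\n");
    else
        printf("Voltage at the camera: %.1f V\n", v);
    return 0;
}
```

For the example values, the camera still sees roughly 22.5 V; dropping the injector to 12 V on the same run would leave noticeably less headroom, which is why 24 V is often the practical middle ground mentioned above.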
Unlike active PoE standards, passive PoE delivers power continuously without any form of negotiation. In its earliest and simplest form, it leveraged unused pairs in Fast Ethernet to transmit DC voltage—typically using pins 4–5 for positive and 7–8 for negative, echoing the layout of 802.3af Mode B. As Gigabit Ethernet became common, passive PoE evolved to use transformers that enabled both power and data to coexist on the same pins, though implementations vary.
Seen from another angle, PoE technology typically utilizes the two unused twisted pairs in standard Ethernet cables—but this applies only to 10BASE-T and 100BASE-TX networks, which use two pairs for data transmission.
In contrast, 1000BASE-T (Gigabit Ethernet) employs all four twisted pairs for data, so PoE is delivered differently—by superimposing power onto the data lines using a method known as phantom power. This technique allows power to be transmitted without interfering with data, leveraging the center tap of Ethernet transformers to extract the common-mode voltage.
PoE primer: Surface touched, more to come
Though we have only skimmed the surface, it’s time for a brief wrap-up.
Fortunately, even beginners exploring PoE projects can get started quickly, thanks to off-the-shelf controller chips and evaluation boards designed for immediate use. For instance, the EV8020-QV-00A evaluation board—shown below—demonstrates the capabilities of the MP8020, an IEEE 802.3af/at/bt-compliant PoE-powered device.

Figure 4 MPS showcases the EV8020-QV-00A evaluation board, configured to evaluate the MP8020’s IEEE 802.3af/at/bt-compliant PoE PD functionality. Source: MPS
Here are my quick picks for reliable, currently supported PoE PD interface ICs—the brains behind PoE:
- TI TPS23730 – IEEE 802.3bt Type 3 PD with integrated DC-DC controller
- TI TPS23731 – No-opto flyback controller; compact and efficient
- TI TPS23734 – Type 3 PD with robust thermal performance and DC-DC control
- onsemi NCP1081 – Integrated PoE-PD and DC-DC converter controller; 802.3at compliant
- onsemi NCP1083 – Similar to NCP1081, with auxiliary supply support for added flexibility
- TI TPS2372 – IEEE 802.3bt Type 4 high-power PD interface with automatic MPS (maintain power signature) and autoclass
Similarly, leading semiconductor manufacturers offer a broad spectrum of PSE controller ICs for PoE applications—ranging from basic single-port controllers to sophisticated multi-port managers that support the latest IEEE standards.
As a notable example, TI’s TPS23861 is a feature-rich, 4-channel IEEE 802.3at PSE controller that supports auto mode, external FET architecture, and four-point detection for enhanced reliability, with optional I²C control and efficient thermal design for compact, cost-effective PoE systems.
In short, fantastic ICs make today’s PoE designs smarter and more efficient, especially in dynamic or power-sensitive environments. Whether you are refining an existing layout or venturing into high-power applications, now is the time to explore, prototype, and push your PoE designs further. I will be here.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- More opportunities for PoE
- A PoE injector with a “virtual” usage precursor
- Simple circuit design tutorial for PoE applications
- Power over Ethernet (PoE) grows up: it’s now PoE+
- Power over Ethernet (PoE) to Power Home Security & Health Care Devices
DMD powers high-resolution lithography

With over 8.9 million micromirrors, TI’s DLP991UUV digital micromirror device (DMD) enables maskless digital lithography for advanced packaging. Its 4096×2176 micromirror array, 5.4-µm pitch, and 110-Gpixel/s data rate remove the need for costly mask technology while providing scalability and precision for increasingly complex designs.

The DMD is a spatial light modulator that controls the amplitude, direction, and phase of incoming light. Paired with the DLPC964 controller, the DLP991UUV DMD supports high-speed continuous data streaming for laser direct imaging. Its resolution enables large 3D-print build sizes, fine feature detail, and scanning of larger objects in 3D machine vision applications.
Offering the highest resolution and smallest mirror pitch in TI’s Digital Light Processing (DLP) portfolio, the DLP991UUV provides precise light control for industrial, medical, and consumer applications. It steers UV wavelengths from 343 nm to 410 nm and delivers up to 22.5 W/cm² at 405 nm.
Preproduction quantities of the DLP991UUV are available now on TI.com.
Co-packaged optics enables AI data center scale-up

AIchip Technologies and Ayar Labs unveiled a co-packaged optics (CPO) solution for multi-rack AI clusters, providing extended reach, low latency, and high radix. The joint development tackles AI infrastructure data-movement bottlenecks by replacing copper interconnects with CPO in large-scale accelerator deployments.

The offering integrates Ayar’s TeraPHY optical engines with AIchip’s advanced packaging on a common substrate, bringing optical I/O directly to the AI accelerator interface. This enables over 100 Tbps of scale-up bandwidth per accelerator and supports more than 256 optical scale-up ports per device. TeraPHY is also protocol agnostic, allowing flexible integration with customer-designed chiplets and fabrics.
The co-packaged solution scales multi-rack networks without the power and latency penalties of pluggable optics by shortening electrical traces and placing optical I/O close to the compute core. With UCIe support and flexible protocol endpoints at the package boundary, it integrates alongside compute tiles, memory, and accelerators while maintaining performance, signal integrity, and thermal requirements.
Both companies are working with select customers to integrate co-packaged optics into next-generation AI accelerators and scale-up switches. They will provide collateral, reference architectures, and build options to qualified design teams.
Platform speeds AI from prototype to production

Purpose-built for Lantronix Open-Q system-on-modules (SOMs), EdgeFabric.ai is a no-code development platform for designing and deploying edge AI applications. According to Lantronix, it helps customers move AI from prototype to production in minutes instead of months, without needing a team of AI experts.

The visual orchestration platform integrates with Open-Q hardware and leading AI model ecosystems, automatically configuring performance across Qualcomm GPUs, DSPs, and NPUs. It streamlines data pipelines with drag-and-drop workflows for AI, video, and sensors, while delivering real-time visualization. Prebuilt templates support common use cases such as surveillance, anomaly detection, and safety monitoring.
EdgeFabric.ai auto-generates production-ready code in Python and C++, making it easy to build and adjust pipelines, fine-tune parameters, and adapt workflows quickly.
Learn more about the EdgeFabric.ai platform here. For details on Open-Q SOMs, visit SOM solutions. Lantronix also offers engineering services for development support.
Dual-core MCUs drive motor-control efficiency

RA8T2 MCUs from Renesas integrate dual processors for real-time motor control in advanced factory automation and robotics. They pair a 1-GHz Arm Cortex-M85 core with an optional 250-MHz Cortex-M33 core, combining high-speed operation, large memory, timers, and analog functions on a single chip.

The Cortex-M85 with Helium technology accelerates DSP and machine-learning workloads, enabling AI functions that predict motor maintenance needs. In dual-core variants, the embedded Cortex-M33 separates real-time control from general-purpose tasks to further enhance system performance.
RA8T2 devices integrate up to 1 MB of MRAM and 2 MB of SRAM, including 256 KB of TCM for the Cortex-M85 and 128 KB of TCM for the Cortex-M33. For high-speed networking in factory automation, they offer multiple interfaces, such as two Gigabit Ethernet MACs with DMA and a two-port EtherCAT slave. A 32-bit, 14-channel timer delivers PWM functionality up to 300 MHz.
The RA8T2 series of MCUs is available now through Renesas and its distributors.
Image sensor provides ultra-high dynamic range

Omnivision’s OV50R40 50-Mpixel CMOS image sensor delivers single-exposure HDR up to 110 dB with second-generation TheiaCel technology. It also reduces power consumption by ~20% compared with the previous-generation OV50K40, enabling longer HDR video capture.

Aimed at high-end smartphones and action cameras, the OV50R40 achieves ultra-high dynamic range in any lighting. Built on PureCel Plus‑S stacked die technology, the color sensor supports 100% coverage quad phase detection for improved autofocus. It features an active array of 8192×6144 with 1.2‑µm pixels in a 1/1.3‑in. format and supports premium 8K video with dual analog gain (DAG) HDR and on-sensor crop zoom.
The sensor also supports 4-cell binning, producing 12.5‑Mpixel resolution at 120 fps. For 4K video at 60 fps, it provides 3-channel HDR with 4× sensitivity, ensuring enhanced low-light performance.
The OV50R40 is now sampling, with mass production planned for Q1 2026.
Thermally enhanced packages—hot or not?

The relentless pursuit of performance in sectors such as AI, cloud computing, and autonomous driving is creating a heat crisis. As the next generation of processors demands more power in smaller spaces, the switched-mode power supply (SMPS) is being pushed to its thermal limit. SMPS integrated circuit (IC) packages have traditionally used a large thermal pad on the bottom side of the package, known as a die attach paddle (DAP), to dissipate the majority of the heat through the printed circuit board (PCB). But as power density increases, relying on only one side of the package to dissipate heat quickly becomes a serious constraint.
A thermally enhanced package is a type of IC package designed to dissipate heat from both the top and bottom surfaces. In this article, we’ll explore the standard thermal metrics of IC packages, along with the composition, top-side cooling methods, and thermal benefits of a thermally enhanced package.
Thermal metrics of IC packages
In order to understand what a thermally enhanced package is and why it is beneficial, it’s important to first understand the terminology for describing the thermal performance of an IC package. Three foundational metrics of thermal resistance are the junction-to-ambient thermal resistance (RθJA), the junction-to-case (top) thermal resistance (RθJC(top)), and the junction-to-board thermal resistance (RθJB).
Thermal resistance measures the opposition to the flow of heat in a medium. In IC packages, thermal resistance is usually measured in Celsius rise per watt dissipated (°C/W), or how much the temperature rises when the IC dissipates a certain amount of power.
RθJA measures the thermal resistance between the junction (J) (the silicon die itself), and the ambient air (A) around the IC. RθJC(top) measures the thermal resistance specifically between (J) and the top (t) of the case (C) or package mold. RθJB measures the thermal resistance specifically between (J) and the PCB on which the package is mounted.
RθJA significantly depends on its subcomponents—both RθJC(top) and RθJB. The lower the RθJA, the better, because it clearly indicates that there will be a lower temperature rise per unit of power dissipated. Power IC designers spend a lot of time and resources to come up with new ways to lower RθJA. A thermally enhanced package is one such way.
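These metrics feed the first-order estimate most designers start with: TJ ≈ TA + PD × RθJA. The short C sketch below runs that arithmetic using the RθJA figures from Table 1 later in this article; the 2.5 W of dissipation and 85°C ambient are assumed operating points chosen purely for illustration.

```c
#include <stdio.h>

/* First-order junction-temperature estimate: Tj = Ta + Pdiss * Rja.
 * The Rja values are the data-sheet figures from Table 1; the dissipation
 * and ambient temperature are illustrative assumptions only. */
static double junction_temp(double t_ambient_c, double p_diss_w, double r_ja_c_per_w)
{
    return t_ambient_c + p_diss_w * r_ja_c_per_w;
}

int main(void)
{
    const double ta = 85.0, pdiss = 2.5;   /* degC, W (assumed operating point) */

    printf("Standard QFN (Rja = 21.6 C/W):                Tj = %.1f C\n",
           junction_temp(ta, pdiss, 21.6));
    printf("Thermally enhanced QFN (Rja = 21.0 C/W), bare: Tj = %.1f C\n",
           junction_temp(ta, pdiss, 21.0));
    return 0;
}
```

Both results land well above a typical 125°C operating limit, which is exactly the kind of quick check that tells you a hotter ambient or higher dissipation will need a lower effective RθJA, for example via top-side heat sinking.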
Thermally enhanced package composition
A thermally enhanced package is a quad flat no-lead (QFN) package that has both a bottom-side DAP and a top-side cutout of the molding to directly expose the back of the silicon die to the environment. Figure 1 shows the gray backside of the die for the Texas Instruments (TI) LM61495T-Q1 buck converter.
Figure 1 The LM61495T-Q1 buck converter in a thermally enhanced package. Source: Texas Instruments
Exposing the die on the top side of the package does two things: it lowers the RθJC(top) compared to an IC package that completely molds over the die, and enables a direct connection between the die and an external heat sink, which can significantly reduce RθJA.
RθJC(top) in a thermally enhanced package
The low RθJC(top) allows heat to escape more effectively from the top of the device. Typically, heat escapes through the package mold and then to the air, but in a thermally enhanced package, it escapes directly to the air. This helps reduce the device temperature and reduces the risk of thermal shutdown and long-term heat-stress issues. The thermally enhanced package also has a lower RθJA, which makes it possible for a converter to handle more current and operate in hotter environments.
Figure 2 shows a series of IC junction temperature measurements taken across output current for both the LM61495T-Q1 in the thermally enhanced package and TI’s LM61495-Q1 buck converter in the standard QFN package under two common operating conditions.

Test conditions: VOUT = 5 V, FSW = 400 kHz, TA = 25°C
Figure 2 Output current vs. junction temperature for the LM61495-Q1 and LM61495T-Q1 with no heat sink. Source: Texas Instruments
Even with no heat sink attached, the thermally enhanced package runs slightly cooler, simply because more heat dissipates out of the top of the package and into the air. The RθJA for the thermally enhanced package is slightly lower, showing that, even if only marginally, this package type provides better thermals than the standard QFN with top-side molding, even without any additional thermal management techniques. Table 1 lists the official thermal metrics found in both devices’ data sheets.
| Part number | Package type | RθJA (evaluation module) (°C/W) | RθJC(top) (°C/W) | RθJB (°C/W) |
|---|---|---|---|---|
| LM61495-Q1 | Standard QFN | 21.6 | 19.2 | 12.2 |
| LM61495T-Q1 | Thermally enhanced package QFN | 21 | 0.64 | 11.5 |
Table 1 Comparing data sheet-derived thermal metrics for the LM61495-Q1 and LM61495T-Q1. Source: Texas Instruments
Top-side cooling vs QFN
Combining its near-zero RθJC(top) top side with an effective heat sink significantly reduces the RθJA of an IC in a thermally enhanced package. There are three significant improvements when compared to the same IC in a standard QFN package under otherwise similar operating conditions:
- Higher switching-frequency operation.
- Higher output-current capability.
- Operation at higher ambient temperatures.
For any SMPS under a given input voltage (VIN), output voltage (VOUT) condition and supplying a given output current, the maximum switching frequency will be thermally limited. Within every switching period, there are switching losses and conduction losses that dissipate as heat. Switching more frequently dissipates more power in the IC, leading to an increased IC junction temperature. This can be frustrating for engineers because switching at higher frequencies enables the use of a smaller buck inductor, and therefore a smaller overall solution size and lower cost.
Under the same operating conditions, using the thermally enhanced package and a heat sink, the heat dissipated in each switching period is now more easily channeled out of the IC, leading to a lower junction temperature and enabling a higher switching frequency without hitting the IC’s junction temperature limit. Just don’t exceed the maximum switching frequency recommendation of the device as outlined in the data sheet.
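As a rough sanity check, the trade-off can be modeled as a frequency-independent conduction term plus a switching term that scales linearly with fSW. The C sketch below uses that simplified model; the loss coefficients are illustrative assumptions, not data-sheet values, so for a real design use the device’s loss equations or the vendor’s design tools.

```c
#include <stdio.h>

/* Simplified buck-converter loss model:
 *   P_loss ~= P_conduction + E_sw_per_cycle * f_sw
 * Both coefficients below are illustrative assumptions, not data-sheet values. */
#define P_CONDUCTION_W    0.8      /* I^2*R and gate-drive losses, assumed      */
#define E_SW_PER_CYCLE_J  0.9e-6   /* energy lost per switching event, assumed  */

static double converter_loss_w(double f_sw_hz)
{
    return P_CONDUCTION_W + E_SW_PER_CYCLE_J * f_sw_hz;
}

int main(void)
{
    const double freqs_hz[] = { 400e3, 1.0e6, 2.2e6 };

    for (int i = 0; i < 3; i++)
        printf("fsw = %4.1f MHz -> estimated dissipation %.2f W\n",
               freqs_hz[i] / 1e6, converter_loss_w(freqs_hz[i]));
    return 0;
}
```

Even with made-up coefficients, the trend is the point: moving from 400 kHz to 2.2 MHz multiplies the switching contribution, and that extra dissipation is what the lower effective RθJA has to absorb.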
The benefits of using a smaller inductor are especially pronounced in higher-current multiphase designs that require an inductor for every phase. Figure 3 shows a simplified four-phase design capable of supplying 24 A at 3.3 VOUT at 2.2 MHz using the TI LM644A2-Q1 step-down converter. If the design were to overheat and the switching frequency had to be reduced to 400 kHz, you would have to replace all four inductors with larger inductors (in terms of both inductance and size), inflating the overall solution cost and size substantially.

Figure 3 Simplified schematic of a single-output, four-phase step-down converter design using the LM644A2-Q1 step-down converter in the thermally enhanced package. Source: Texas Instruments
Conversely, for any SMPS under a given VIN, VOUT condition, and operating at a specific switching frequency, the maximum output current will be thermally limited. When discussing the current limit of an IC, it’s important to clarify that for all high-side FET integrated SMPSs, there is a data sheet-specified high-side current limit that bounds the possible output current.
Upon reaching the current-limit setpoint, the high-side FET turns off, and the IC may enter a hiccup interval to reduce the operating temperature until the overcurrent condition goes away. But even before reaching the current limit, it is very possible for an IC to overheat from a high output-current requirement. This is especially true, again, at higher frequencies. As long as you don’t exceed the high-side current limit, using an IC in the thermally enhanced package with a heat sink can extend the maximum possible output current to a level at which the standard QFN IC alone would overheat.
There is another constant to make the thermally enhanced package versus the standard QFN package comparison valid, and that is the ambient temperature (TA). TA is a significant factor when considering how much power an SMPS can deliver before it starts to overheat.
For example, a buck converter may be able to easily do a 12VIN-to-5VOUT conversion and support a continuous 6 A of current while switching at 2.2 MHz when the TA is 25°C, but not at 105°C. So, there is yet a third way to look at the benefit that a thermally enhanced package can provide. Assuming the VIN, VOUT, output current, and maximum switching frequency are constant, a thermally enhanced package used with a heat sink can enable an SMPS to operate at a meaningfully higher TA compared to a standard QFN package with no heat sink.
Figure 4 uses a current derating curve to demonstrate both the higher output current capability and operation at a higher TA. In an experiment using the LM61495-Q1 and LM61495T-Q1 buck converters, we measured the output current against the TA in a standard QFN package without a heat sink and in a thermally enhanced package QFN connected to an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Other than the package and the heat sink, all other conditions are constant: operating conditions, PCB, and measurement instrumentation.

Test conditions: VIN = 12 V, VOUT = 3.3 V, FSW = 2.2 MHz
Figure 4 Output current vs. ambient temperature of the LM61495-Q1 with no heat sink and the LM61495T-Q1 with an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Source: Texas Instruments
When TA reaches about 83˚C, the standard QFN package hits its thermal shutdown threshold, and the output current begins to collapse. As TA increases further, the device cycles into and out of thermal shutdown, and the maximum achievable output current that the device can deliver is necessarily reduced until TA reaches a steady 125˚C. At this point, the converter may not be able to sustain even 5 A without overheating.
Compare this to the thermally enhanced package QFN connected to a heat sink. The first instance of thermal shutdown now doesn’t occur until about 117˚C. That’s an increase in TA before hitting a thermal shutdown of 34˚C, or 40%. The LM61495-Q1 is a 10-A buck converter, meaning that its recommended maximum output current is 10 A. But in this case, with a thermally enhanced package and effective heat sinking, a continuous 11 A output was clearly achievable up to 117˚C – in other words, a 10% increase in maximum continuous output current even at a high TA.
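Another way to read a derating curve is to ask what ambient temperature a given dissipation allows before the junction reaches the shutdown threshold: TA(max) ≈ TJ(shutdown) − PD × RθJA(effective). The C sketch below runs that arithmetic; the shutdown threshold, dissipation, and heat-sinked RθJA are assumptions chosen only to illustrate the shape of curves like Figure 4, not measured or data-sheet values.

```c
#include <stdio.h>

/* Maximum ambient temperature before thermal shutdown:
 *   Ta_max = Tj_shutdown - Pdiss * Rja_effective
 * All inputs below are illustrative assumptions. */
#define TJ_SHUTDOWN_C  165.0   /* assumed thermal-shutdown threshold         */
#define P_DISS_W         3.8   /* assumed converter dissipation at full load */

static double ta_max(double r_ja_c_per_w)
{
    return TJ_SHUTDOWN_C - P_DISS_W * r_ja_c_per_w;
}

int main(void)
{
    printf("Standard QFN, no heat sink (Rja ~ 21.6 C/W):          Ta_max ~ %.0f C\n",
           ta_max(21.6));
    printf("Enhanced QFN + heat sink (Rja ~ 12 C/W, placeholder): Ta_max ~ %.0f C\n",
           ta_max(12.0));
    return 0;
}
```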
Methods of top-side cooling
Figure 5, Figure 6, and Figure 7 show some of the most common methods of top-side cooling. Stand-alone heat sinks are simple and readily available in many different forms, materials, and sizes, but are sometimes impractical in small-form-factor designs.
Figure 5 A stand-alone fin-type heat sink; these are simple and readily available but sometimes impractical in small-form-factor designs. Source: Texas Instruments
Cold plates are very effective in dissipating heat but are more complex and costlier to implement (Figure 6).

Figure 6 A cold plate-type heat sink; these are very effective at dissipating heat but are more complex and costlier to implement. Source: Texas Instruments
Using the metal housing containing the power supply and the surrounding electronics as a heat sink is compact, effective, and relatively inexpensive if the housing already exists. As shown in Figure 7, this is done by creating a pillar or dimple that connects the IC to the housing to enable efficient heat transfer. For power supplies powering processors, it’s likely that this method is already helping dissipate heat on the processor. Adding an additional dimple or pillar that now gives heat-sink access to the power supply is often a simple change, making it a very popular method, especially for processor power.

Figure 7 Contact-with-housing heat sink where a pillar or dimple connects the IC to the housing to enable efficient heat transfer. Source: Texas Instruments
There are many ways to implement heat sinking, but that doesn’t mean that they are all equally effective. The size, material, and form of the heat sink matter. The type and amount of thermal interface material used between the IC and the heat sink matter, as does its placement. It is important to optimize all of these factors for the design at hand.
Comparing heat sinks
Figure 8 shows another current derating curve. It compares two different types of heat sinks, each mounted on the LM61495T-Q1. For reference, the figure includes the performance of the standard QFN package with no heat sink.

VIN = 24 V | VOUT = 3.3 V | FSW = 2.2 MHz
Figure 8 Output current versus the ambient temperature of the LM61495-Q1 with no heat sink, the LM61495T-Q1 with an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink, and with an aluminum plate heat sink. Source: Texas Instruments
For a visualization of these heat sinks, see Figure 9 and Figure 10, which show a top-down view of the PCB and a clear view of how the heat sinks are mounted to the IC and PCB. The heat sink shown in Figure 9 is a commercially available, off-the-shelf product. To reiterate, it is a 45 mm by 45 mm aluminum alloy heat sink with a base that is 3 mm thick and pin-type fins that extend the surface area and allow omnidirectional airflow.
Figure 9 The LM61495T-Q1 evaluation board with the off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Source: Texas Instruments
Figure 10 shows a custom heat sink that is essentially just a 50 mm by 50 mm aluminum plate with a 2 mm thickness and a small pillar that directly touches the IC. This heat sink was designed to mimic the contact-with-housing method, as it is very similar in size and material to the types of housing seen in real applications.

Figure 10 The LM61495T-Q1 evaluation board with a custom aluminum plate heat sink to mimic the contact-with-housing method. Source: Texas Instruments
Under the same conditions, the stand-alone heat sink provides a major benefit compared to the standard QFN package with no heat sink. The standard QFN package hits thermal shutdown around 67°C TA. For the stand-alone heat-sink setup, thermal shutdown isn’t triggered until the TA reaches about 111°C, a major improvement. The aluminum plate heat-sink setup, however, doesn’t hit thermal shutdown at all. With the aluminum plate, the converter is still able to supply a continuous 10-A current at the highest TA tested (125˚C), demonstrating both the importance of choosing the correct heat sink for the system requirements and why the contact-with-housing method is so popular.
Addressing modern thermal challenges
Power supply designers increasingly deal with thermal challenges as modern applications demand more power and smaller form factors in hotter spaces. Standard QFN packaging has long relied on dissipating the majority of generated heat through the bottom side of the package to the PCB. A thermally enhanced package QFN uses both the top and bottom sides of the package to improve heat flow out of the IC, essentially paralleling the thermal impedance paths and reducing the effective thermal impedance.
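The paralleling effect is easy to quantify with two assumed thermal resistances; the numbers below are placeholders for illustration, not characterized values for any specific package or board.

```python
# Two assumed junction-to-ambient paths: down through the PCB, and up through the
# exposed package top to a heat sink. Paralleling them lowers the effective R_theta_JA.
def parallel(r_a, r_b):
    return r_a * r_b / (r_a + r_b)

r_bottom = 25.0   # degC/W, junction -> PCB -> ambient (assumed)
r_top = 15.0      # degC/W, junction -> case top -> heat sink -> ambient (assumed)
p_loss = 3.0      # W dissipated in the converter (assumed)

for label, r in (("bottom path only", r_bottom),
                 ("top + bottom in parallel", parallel(r_top, r_bottom))):
    print(f"{label:>24}: R_theta_JA = {r:5.2f} degC/W -> Tj rise = {p_loss * r:5.1f} degC")
```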
Combining a thermally enhanced package with effective heat sinking results in significant thermal benefits and enables higher-power-density designs. Because these benefits are derived from reducing the effective RθJA, designers can realize just one or all of them in varying degrees: increase the maximum switching frequency to reduce solution size and cost, enable a higher maximum output current for higher-power conversion, or enable operation at a higher TA.

Jonathan Riley is a Senior Product Marketing Engineer for Texas Instruments’ Switching Regulators organization. He holds a BS in Electrical Engineering from the University of California Santa Cruz. At TI, Jonathan works in the crossroads of marketing and engineering to ensure TI’s Switching Regulator product line continues to evolve ahead of the market and enable customers to power the technologies of tomorrow.
Related Content
- Power Tips #101: Use a thermal camera to assess temperatures in automotive environments
- IC packages and thermal design
- Keeping space chips cool and reliable
- QFN? QFP? QFWHAT?
Additional resources
- Read the TI application note, “Semiconductor and IC Package Thermal Metrics.”
- See the TI application brief, “PowerPAD Made Easy.”
- Watch the TI video resource, “Improve thermal performance using thermally enhanced packaging (TEP).”
The post Thermally enhanced packages—hot or not? appeared first on EDN.
Past, present, and future of hard disk drives (HDDs)

Where do HDDs stand after the advent of SSDs? Are they a thing of the past now, or do they still have a life? While HDDs store digital data, what’s their relation to analog technology? Here is a fascinating look at HDDs’ past, present, and future, accompanied by data from the industry. The author also raises a very valid point: while their trajectory is very similar to the world of semiconductors, why don’t HDDs have their own version of Moore’s Law?
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- When big data crashes against small problems
- The Hottest Data-Storage Medium Is…Magnetic Tape?
- Audio cassette tapes are coming back, this time for mass storage
The post Past, present, and future of hard disk drives (HDDs) appeared first on EDN.
Improve PWM controller-induced ripple in voltage regulators

Simple linear and switching voltage regulators with feedback networks of the type shown in Figure 1 are legion. Their output voltages are the reference voltage at the feedback (FB) pin multiplied by 1 + Rf / Rg. Recommended values of Cf from 100 pF to 10 nF increase the amount of feedback at higher frequencies, or at least ensure it is not reduced by stray capacitances at the feedback pin.
Figure 1 The configurations of common regulators and their feedback networks. A linear regulator is shown on the left and a switcher on the right.
Modifying this structure to incorporate PWM control of the output voltage requires some thought, and both Stephen Woodward and I have presented several Design Ideas (DIs) that address this.
Wow the engineering world with your unique design: Design Ideas Submission Guide
I’ve suggested disconnecting Rg from ground and driving it from a heavily filtered (op-amp-based) PWM signal supplied by a 74xx04-type logic inverter. Although this can result in excellent ripple suppression, it has a disadvantage: the need for an inverter power supply that does not degrade the accuracy of the regulator’s 1%-or-better reference voltage.
Stephen has proposed switching the disconnected Rg leg between ground and open with a MOSFET. The beauty of this is that no new reference is needed. Although the output voltage is no longer a linear function of the PWM duty cycle, a simple software-based lookup table renders this a mere inconvenience. (Yup, “we can fix it in software!”)
A general scheme to mitigate PWM controller-induced ripple should be flexible enough to accommodate different regulators, regulator reference voltages, output voltage ranges, and PWM frequencies. In selecting one, here are some possible traps to be aware of:
- Nulling by adding an out-of-phase version of the ripple signal is at the mercy of component tolerances.
- Cheap ceramics, such as the ubiquitous X7R, have DC voltage and temperature-sensitive capacitances. If used, the circuit must tolerate these undesirable traits.
- Schemes which connect capacitors between ground and the feedback pin will reduce loop feedback at higher frequencies. The result could be degradation of line and load transient responses.
With this in mind, consider the circuit of Figure 2, capable of operation from 0.8 V to a little more than 5 V.

Figure 2 A specific instance of a PWM-controlled regulator with ripple suppression. Only a linear regulator is shown, but the adaptation for switcher operation entails only the addition of an inductor and a filter capacitor.
The low capacitance MOSFET has a maximum on-resistance of under 2 Ω at a VGS of 2.5 V or more. Cg1 and Cg2 see maximum DC voltages of 0.8 V (up to 1.25 V in some regulators). Their capacitive accuracies are not critical, and at these low voltages, they barely budge when 10-V or higher-rated X7R capacitors are employed.
Cf can see a significant DC voltage, however. Here, you might get away with an X7R, but a 10-nF (voltage-insensitive) C0G is cheap. The value of Cf was chosen to aid in ripple management. If it were not present, the ripple would be larger and proportional to the value of Rf. With a 10-nF Cf, larger values of Rf for higher output voltages would have no effect on the PWM-induced ripple; smaller ones could only reduce it. The largest peak-to-peak ripple occurs at duty cycles from 30 to 40%.
The filtering supplied by the three capacitors produces a sinusoidal ripple waveform of amplitude 5.7 µV peak-to-peak. For a 16-bit ADC with a full scale of 5 V, the peak-to-peak amplitude is less than 1 LSbit.
Flexibility
You might have a requirement for a wider or narrower range of output voltages. Feel free to modify Rf accordingly without a penalty in ripple amplitude.
Ripple amplitude will scale in proportion to the regulator’s reference voltage. The design assumes a regulator whose optimum FB-to-ground resistance is 10 kΩ. If it’s necessary to change this for the regulator of your choice, scale the three Rg resistors by the same factor Z. Because the resistors and three capacitors implement a 3rd-order filter, the ripple will scale in accordance with Z⁻³. To keep the same ripple amplitude, scale the three capacitors by 1/Z. You might want to scale the capacitors’ values for some other reason, even if the resistors are unchanged.
Changing the PWM frequency by a factor F will change the ripple amplitude by a factor of F⁻³. But too high a frequency could encounter accuracy problems due to the parasitic capacitances and unequal turn-on/turn-off times of the MOSFET.
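Here is a short numeric sketch of the two scaling rules above; the 5.7-µV baseline and the 16-bit, 5-V full scale are the figures quoted earlier in this DI, and the rest is straightforward arithmetic.

```python
# 3rd-order filtering: residual PWM ripple scales as Z**-3 when the three Rg
# resistors are scaled by Z (capacitors unchanged), and as F**-3 when the PWM
# frequency is scaled by F.
BASELINE_RIPPLE_UV = 5.7            # uV peak-to-peak, from the Figure 2 design
LSB_UV = 5.0 / 2**16 * 1e6          # ~76 uV for a 16-bit, 5-V full scale

def scaled_ripple_uv(z=1.0, f=1.0):
    return BASELINE_RIPPLE_UV * z**-3 * f**-3

print(f"1 LSB                   = {LSB_UV:5.1f} uV")
print(f"baseline ripple         = {scaled_ripple_uv():5.2f} uVpp")
print(f"PWM frequency halved    = {scaled_ripple_uv(f=0.5):5.1f} uVpp")
print(f"Rg network scaled by 2  = {scaled_ripple_uv(z=2.0):5.2f} uVpp")
```

Even with the PWM frequency halved, the estimated ripple (about 46 µV peak-to-peak) stays below one 16-bit LSB in this example.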
Some regulators might not tolerate a Cf of a value large enough to aid in ripple suppression. Usually, these will tolerate a resistor Rcf in series with Cf. In such cases, ripple will be increased by a factor K equal to the square root of ( 1 + Rcf · 2π · fPWM · Cf ), and the waveform might no longer be sinusoidal. But increasing Cg1 and Cg2 by the square root of K will compensate to yield approximately the same suppression as offered by the design with Rcf equal to 0. If all else fails, there is always the possibility of adding an Rg4 and a Cg3 to provide another stage of filtering.
Tying it all together
A flexible approach has been introduced for the suppression of PWM control-induced ripple in linear and switching regulators. Simple rules have been presented for the use and modification of the Figure 2 circuit for operation over different output voltage ranges, PWM frequencies, preferred resistances between ground and the regulator’s feedback pin, and tolerances for moderately large capacitances between the FB pins and the output.
The limitations of capacitors with sensitivities to DC voltages are recognized. These components are used appropriately and judiciously. Dependency on component matching is avoided. Standard feedback network structures are maintained or, at worst, subjected to minor modifications only; specifically, feedback at higher frequencies is not reduced from that recommended by the regulator manufacturer. This maintains the specified line and load transient responses.
Addendum
Once again, the Comments section of DIs has shown its worth. And it’s déjà vu all over again; value was provided by the redoubtable Stephen Woodward. In an earlier DI, he pointed out that regulators generally do not tolerate negative voltages at their feedback pins. But if there is a capacitor Cf of more than a few hundred picofarads connected from the output to this pin, as I have recommended in this DI, and the output is shorted or rapidly discharged, this capacitor could couple a negative voltage to that pin and damage the part. To protect against this, add the components shown in the following figure.

Figure 3 Add these components to protect the FB pin from rapid negative output voltage changes.
In normal operation and during startup, the CUS10S30 Schottky diode looks like an open circuit, and it, Cc, and the 1-MΩ resistor have a negligible effect on circuit operation. Cc prevents the flow of diode reverse current, which could otherwise produce output voltage errors. If Vout transitions to ground rapidly, Cc and the diode prevent any negative voltage from appearing at the junction of the capacitors. Rc provides a cheap “just in case” limit on the current into the FB pin should that pin somehow see a negative voltage during the transient. (Check the maximum FB pin current to ensure that no significant error-inducing voltages develop across Rc.) When the circuit has settled, the voltage across Cc is discharged, and the circuit is ready to restart normally.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Brute force mitigation of PWM Vdd and ground “saturation” errors
- A transistor thermostat for DAC voltage references
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- PWM buck regulator interface generalized design equations
The post Improve PWM controller-induced ripple in voltage regulators appeared first on EDN.
A transistor thermostat for DAC voltage references

Frequent contributor Christopher Paul recently provided us with a painstakingly conservatively error-budget-analyzed Design Idea (DI) for a state-of-the-art pursuit of a 16-bit-perfection PWM DAC.
The DI presented below, while shamelessly kibitzing on Chris’ excellent design process and product, should in no way be construed as criticism or even a suggested modification. It is neither. It’s just a voyage into the strange land of ultimate precision.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In his pursuit of perfect precision, Christopher creatively coped with the limitations of the “art.” Perhaps the most intractable of these limitations in the context of his design was the temperature coefficient of available moderately priced precision voltage references. His choice of the excellent REF35xx family of references, for example, exhibits a temperature coefficient (tempco) of 12 ppm/°C = 0.8 lsb/°C = 55 lsb over 0 to 70°C, reducing this element of conversion precision to only an effective 10.2 bits.
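A quick check of those figures, assuming the 16-bit scale of the DAC under discussion:

```python
import math

# Reference drift expressed in 16-bit LSBs, using the tempco quoted above.
tempco_ppm_per_degC = 12.0
bits, temp_span_degC = 16, 70.0

lsb_per_degC = tempco_ppm_per_degC * 1e-6 * 2**bits   # ~0.8 lsb/degC
drift_lsb = lsb_per_degC * temp_span_degC             # ~55 lsb over 0 to 70 degC
effective_bits = bits - math.log2(drift_lsb)          # ~10.2 bits
print(f"{lsb_per_degC:.2f} lsb/degC, {drift_lsb:.0f} lsb total, ~{effective_bits:.1f} effective bits")
```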
Since that was more than an order of magnitude worse than other error factors (e.g., DNL, INL, ripple) in Christopher’s simple and elegant (and nice!) design, it got me musing about what possibilities might exist to mediate it.
Let me candidly admit upfront that my musing was unconstrained by a concern for the practical damage such possibilities might imply towards the simplicity and elegance of the design. This included damage, such as doubling the parts count and vastly increasing the power consumption.
But with those caveats out of the way, here we go.
The obvious possibility that came to mind, of course, was what if we reduced the importance of thermal instability of the reference by the simple (and brute-force) tactic of putting it in a thermostat? Over the years, we’ve seen lots of DIs for using transistors as sensors and heaters (sometimes combining both functions in the same device) for controlling the temperature of single components. Figure 1 illustrates the thermo-mechanics of such a scheme for this application.
Figure 1 Thermally coupling the transistor sensor/heater to the DAC voltage reference to stabilize its temperature.
A nylon machine screw clamps the heatsink hotspot of a TO-220-packaged transistor (TIP31G) in a cantilever fashion onto the surface of the reference. A foam O-ring provides a modicum of thermal insulation. A dab of thermal grease on the mating surfaces will improve thermal coupling.
Figure 2 shows the electronics of the thermostat. Here’s how that works.

Figure 2 Q1 is a combo heater/sensor for a ±1°C thermostat, nominal setpoint ~70°C. R3 = 37500/(Vref – 0.375).
Q1 is the core of the thermostat. Under the control of gated multivibrator U1, it alternates between a temperature measurement when U1’s “Out” pin is low, and heating when U1’s “Out” pin goes high. Setpoint corresponds to Q1 Vbe = 375 mV as generated by the voltage divider R3/R4, detected by comparator A1, and timed by U1.
I drew Figure 2 with the R3/R4 divider connected to +5 V, but in practice, this might not be the ideal choice. The thermostat setpoint will change by ~1.6°C per 1% change in Vref, so sub-percentage-point Vref stability is crucial to achieve optimal 16-bit DAC performance. The +5-V supply rail may therefore not be stable enough, and using the thermostatted DAC reference itself would be (much) better.
Any Vref of adequate stability and greater than 375 mV may be used by simply setting R3 = 37500/(Vref – 0.375). For the same reason, R3 and R4 should be 1% or better metal film types. The point isn’t setpoint accuracy, which matters little, but stability, which matters much.
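Here is a minimal sketch of that resistor choice and of why Vref stability matters. The R3 formula is the one given above; the roughly -2.2 mV/°C Vbe tempco used for the sensitivity estimate is a typical assumed value, not a measured one.

```python
# R3 per the formula above, plus the setpoint sensitivity to a Vref error.
VBE_TEMPCO_MV_PER_DEGC = 2.2   # magnitude of a typical Vbe tempco (assumed)

def r3_ohms(vref_volts):
    return 37500.0 / (vref_volts - 0.375)

for vref in (5.0, 2.5, 1.25):
    print(f"Vref = {vref:4.2f} V -> R3 ~ {r3_ohms(vref) / 1e3:5.2f} kOhm")

# A 1% Vref error shifts the 375-mV setpoint by 3.75 mV:
shift_degC = 0.01 * 375.0 / VBE_TEMPCO_MV_PER_DEGC
print(f"1% Vref error -> ~{shift_degC:.1f} degC setpoint shift")
```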
Vbe > 375 mV indicates Q1 junction temp < setpoint, which gates U1 on. This allows U1 “Out” to transition to +5 V. This turns on driver transistor Q3, supplying ~20 mA to the Q1, Q2 pair. Q2 functions as a basic current regulator, limiting Q1’s heating current to ~0.7 V/1.5 Ω = 470 mA and therefore heating power to 2 W.
The feedback loop thus established, Q1 Vbe to A1 to U1 to Q3 to Q1, adjusts the U1 duty cycle from 0 to 95%, and thereby tweaks the heating power to maintain thermostasis. Note that I omitted pinout numbers on A1 to accommodate the possibility that it might be contained in a multifunction chip (e.g., a quad) used elsewhere in the DAC.
Q.E.D. But wait! What are C2 and R2 for? Their reason for being, in general terms, is to be found in “Fixing a fundamental flaw of self-sensing transistor thermostats.”
As “Fixing…” explains, a fundamental limitation on the accuracy of thermostats like that of Figure 1 is as follows: the junction temperature (Tj) that we can actually measure is only an imperfect approximation of what we’re really interested in controlling, the package temperature (Tc). Figure 3 shows why.

Figure 3 The fatal flaw of Figure 1: the junction temperature is an imperfect approximation of the package temperature.
Because of the nonzero thermal impedance (Rjc) between the transistor junction and the surface of its case, an error term is introduced that’s proportional to that impedance and the heating power:
Terr = Tj – Tc = Rjc × Pj
In the TIP31 datasheet, Rjc is specified in the “Thermal Characteristics” section as 3.125 °C/W. Therefore, as Pj goes from 0 to 2 W, Terr would go from 0 to 6.25 °C. Recalling that the REF35 has a 12 ppm/°C tempco, that would leave us with 12 x 6.25 = 75 ppm = 5 lsb DAC drift.
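A quick check of that error budget, using the Rjc and heating-power numbers quoted above:

```python
# Worst-case junction-vs-case error, and the resulting reference drift in 16-bit LSBs.
r_jc_degC_per_W = 3.125   # TIP31 junction-to-case thermal resistance (quoted above)
p_j_max_W = 2.0           # maximum heating power in this design

t_err_degC = r_jc_degC_per_W * p_j_max_W    # 6.25 degC
drift_ppm = 12.0 * t_err_degC               # 75 ppm at 12 ppm/degC
drift_lsb = drift_ppm * 1e-6 * 2**16        # ~5 lsb
print(f"Terr = {t_err_degC:.2f} degC -> {drift_ppm:.0f} ppm = {drift_lsb:.1f} lsb")
```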
That’s 11x better than the 55-lsb tempco error we started with, but it’s still quite a way from true 16-bit accuracy. Can we do even better?
R2 and C2 here do the same job as the R11, R12, C2 network in Figure 2 of “Fixing a fundamental flaw of self-sensing transistor thermostats”: they add a Pj-proportional Terr correction to the thermostat setpoint. C2 accumulates a ~23-ms average of the 0 to 100% heating duty cycle (0 to 700 mV) and adds, through R2, a proportional 0 to 14 mV = 0 to 6.25°C Terr correction to the setpoint, for net ±1°C stable thermostasis and < 1 lsb reference instability.
Now Q.E.D!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Fixing a fundamental flaw of self-sensing transistor thermostats
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Double up on and ease the filtering requirements for PWMs
- Inherently DC accurate 16-bit PWM TBH DAC
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- 1kHz per Kelvin temperature sensor
- Measure junction temperature using the MOSFET body diode on a PG pin
The post A transistor thermostat for DAC voltage references appeared first on EDN.
An off-line power supply

One of my electronics interests is building radios, particularly those featured in older UK electronics magazines such as Practical Wireless, Everyday Electronics, Radio Constructor, and The Maplin Magazine. Most of those radios are designed to run on a 9-V disposable PP3 battery.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Using 9 V instead of the 3 V found in many domestic radios allows the transistors in these often-simple circuits to operate with a higher gain. PP3 batteries are, at a minimum, expensive in circuits consuming tens of mA and are—I suspect—hard to recycle. A more environmentally friendly solution was needed.
In the past, I’ve used single 3.6-V lithium-ion (Li-ion) cells from discarded e-cigarettes [1] with cheap combined charger and DC-DC converter modules found on eBay. They provide a nice, neat solution when housed in a small plastic box, but unfortunately generate a lot of electromagnetic interference (EMI), which falls within the shortwave band of frequencies (3 to 30 MHz) where a lot of the radios I build operate. I needed another solution that was EMI-free and environmentally friendly.
Solution
One solution is to eliminate the DC-DC converter and string together three or more Li-ion cells in a battery pack (B1) with a variable linear regulator (IC1) to generate the required 9 V (V1) as shown in Figure 1. Li-ion cells, like all electronic components, have tolerances. The two most important parameters are cell capacity and open circuit voltage. Differences in these parameters between cells in series lead to uneven charging and ultimately stressing of some cells, leading to their eventual degradation [2]. To even out these differences, Li-ion battery packs often contain a battery management system (BMS) to ensure that cells charge evenly.
Figure 1 Li-ion battery pack, with 3 or more Li-ion cells, and a variable linear regulator to generate the required 9 V.
As luck would have it, on the local buy-nothing group in Ottawa, Canada, where I live, someone was giving away a Mastercraft 18-V Li-ion battery with charger as shown in Figure 2. The person offering it had misplaced the drill, so there was little expense for me. Upon opening the battery pack, it was indeed found to contain a battery management system (BMS). This seemed like an ideal solution.

Figure 2 The Mastercraft 18-V Li-ion battery and charger obtained locally.
Circuit
The next step was to make a linear voltage regulator to drop 18 V to 9 V. This, in itself, is not particularly environmentally friendly, as it is only 50% efficient, and any dropped battery voltage will be dissipated as heat. However, assuming renewable power generation is used as the source, this would still prove a more environmentally friendly solution than using disposable batteries.
In one of my boxes of old projects, I found a constant current nickel-cadmium (NiCad) battery charger. It was based around an LM317 linear voltage regulator in a nice black plastic enclosure sold by Maplin Electronics as a “power supply” box. The NiCad battery hadn’t been used for over 20 years, so this project would be a repurpose. A schematic of the rewired power supply is shown in Figure 3.

Figure 3 The power supply schematic with four selectable output voltages—6, 9, 12, and 13.8 V.
In Figure 3, switch S1 functions as both the power switch and the output voltage selector. Four different output voltages are selectable based on current needs: 6 V, 9 V, 12 V, and 13.8 V, chosen by adjusting the ratio of R2 to R3-R6 as shown in the LM317 datasheet [3]. R2 is usually 220 Ω and develops 1.23 V across it; the remaining output voltage is developed across R3-R6. To get the exact values, parallel combinations are used, as shown in Table 1 (a quick numeric check of these combinations follows the table).
| Resistor # | Resistors (Ω) | Combined Value (Ω) |
| 3 | 910, 18k, 15k | 819 |
| 4 | 1.5k, 22k, 33k | 1.35k |
| 5 | 2.2k, 15k | 1.92k |
| 6 | 2.2k | 2.2k |
Table 1 Different values of paralleled R3 to R6 resistors and their combined value.
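As a quick sanity check of Table 1, here is a minimal sketch using the standard LM317 output-voltage relationship. The 1.25-V reference and roughly 50-µA adjust-pin current are typical datasheet values (the text above quotes 1.23 V), so treat the results as estimates rather than the measured values reported below.

```python
# Vout ~ Vref * (1 + R_low / R2) + Iadj * R_low, with R2 = 220 ohms as in Figure 3.
VREF_V, IADJ_A, R2_OHMS = 1.25, 50e-6, 220.0

def parallel(*resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

settings = {
    "6 V":    parallel(910, 18e3, 15e3),    # R3
    "9 V":    parallel(1.5e3, 22e3, 33e3),  # R4
    "12 V":   parallel(2.2e3, 15e3),        # R5
    "13.8 V": 2.2e3,                        # R6
}
for name, r_low in settings.items():
    vout = VREF_V * (1 + r_low / R2_OHMS) + IADJ_A * r_low
    print(f"{name:>6}: R = {r_low:7.0f} ohms -> Vout ~ {vout:5.2f} V")
```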
A photograph of the finished power supply with a Li-ion battery attached is shown in Figure 4.

Figure 4 A photograph of the finished power supply with four selectable output voltages that can be adjusted via a knob.
Results
Crimp-type spade connectors were fitted to the two input wires, which mated well with the terminals of the Li-ion battery. Maybe at some point, I will 3D-print a full connector for the battery. With the resistor values shown in Figure 3, the actual output voltages produced are 5.96 V, 9.03 V, 12.15 V, and 13.8 V. These deviate slightly from the designed values because preferred resistor values were used, but that is of little consequence: the output voltage of disposable batteries varies over their operating life anyway, and there is of course also a voltage drop due to cables. With this power supply, though, the output remains constant even as the Li-ion battery’s voltage drops during discharge.
Portable power
Although the power supply was intended for powering radio projects, it has other uses where portable power is needed and a DC-DC converter is too noisy, such as sensitive instrumentation or an audiophile preamplifier [4].
Gavin Watkins is the founder of GapRF, a producer of online EDA tools focusing on the RF supply chain. When not doing that, he is happiest noodling around in his lab, working on audio electronics and RF projects, and restoring vintage equipment.
Related Content
- Drive any electronic clock with a high-precision 10-MHz reference
- Analogue charge pump produces high-frequency, high-voltage pulses
- Investigating a vape device
- Double Lithium-Ion/Lithium-Polymer USB Type-C Charger
- Low Cost Universal Battery Charger Schematic
References
- Reusing e-cigarette batteries in an e-bike, https://globalnews.ca/news/10883760/powering-e-bike-disposable-vapes/
- BU-808: How to Prolong Lithium-based Batteries, https://batteryuniversity.com/article/bu-808-how-to-prolong-lithium-based-batteries
- LM317 regulator datasheet, https://www.ti.com/lit/ds/symlink/lm317.pdf
- Battery powered hifi preamp, https://10audio.com/dodd_battery_pre/
The post An off-line power supply appeared first on EDN.
(Dis)assembling the bill-of-materials list for measuring blood pressure on the wrist

More than a decade ago, I visited my local doctor’s office, suffering from either kidney stone or back-spasm pain (I don’t recall which; at the time, it could have been either, or both, for that matter). As usual, the assistant logged my height and weight on the hallway scale, then my blood pressure in the examination room. I recall her measuring the latter, then re-measuring it, then hurriedly leaving the room with a worried look on her face and an “I’ll be back in a minute” comment. Turns out, my systolic blood pressure reading was near 200; she and the doctor had been conferring on whether to rush me to the nearest hospital in an ambulance.
Fortunately, a painkiller dropped my blood pressure below the danger point (spikes are a common body response to transient acute pain) in a timely manner, but the situation more broadly revealed that my pain-free ongoing blood pressure was still at the stage 2 hypertension level. My response was three-fold:
- Dietary changes, specifically to reduce sodium intake (my cholesterol levels were fine)
- Medication, specifically ongoing daily losartan potassium
- And regular blood pressure measurement using at-home equipment
Before continuing, here’s a quick definition of the two data points involved in blood pressure:
- Systolic blood pressure is the first (top/upper) number. It measures the pressure your blood is pushing against the walls of your arteries when the heart beats.
- Diastolic blood pressure is the second (bottom/lower) number. It measures the pressure your blood is pushing against your artery walls while the heart muscle rests between beats.
How is blood pressure traditionally measured at the doctor’s office or a hospital, specifically via a device called a sphygmomanometer in conjunction with a stethoscope? Thanks for asking:
Your doctor will typically use the following instruments in combination to measure your blood pressure:
- a cuff that can be inflated with air,
- a pressure meter (manometer) for measuring the air pressure inside the cuff, and
- a stethoscope for listening to the sound the blood makes as it flows through the brachial artery (the major artery found in your upper arm).
To measure blood pressure, the cuff is placed around the bare and extended upper arm, and inflated until no blood can flow through the brachial artery. Then the air is slowly let out of the cuff. As soon as blood starts flowing into the arm, it can be heard as a pounding sound through the stethoscope. The sound is produced by the rushing of the blood and the vibration of the vessel walls. The systolic pressure can be read from the meter once the first sounds are heard. The diastolic blood pressure is read once the pounding sound stops.
Home monitoring devices
What about at home? Here, there’s no separate stethoscope—or another person trained in listening to it and discerning what’s heard, for that matter—involved. And no, there isn’t a microphone integrated in the cuff to listen to the brachial artery, coupled with digital signal processing to analyze the microphone outputs, either (admittedly, that was Mr. Engineer here’s initial theory, until a realization of the bill-of-materials cost involved to implement the concept compelled me to do research on alternative approaches). This Reddit thread, specifically the following post within it, was notably helpful:
Pressure transducer within the machine. The pressure transducer can feel the pressure within the cuff. The air pressure in the cuff is the same at the end of the line in the machine.
So, like a manual BP cuff, the computer pumps air into the cuff until it feels a pulse. The pressure transducer actually senses the change in cuff pressure as the heartbeat.
That pulse is only looked at a little, get a relative beats per minute from the cuff. Now that the cuff can sense the pulse, keep pumping air until the pulse stops being sensed. That’s systolic. Now slowly and gently release air until you feel the pulse again. Check it against the rate number you had earlier. If it’s close, keep releasing air until you lose the sense. The last pressure that you had the pulse is the diastolic.
It grabs the two numbers very similarly to how you do it with your ears and a stethoscope. But, it is able to measure the pressure directly and look at the pressure many times per second, instead of your eyes and ears listening to the pulse and watching the gauge.
That’s where the specific algorithm inside the computer takes over. They’re all black magic as to exactly how they interpret pulse. Peaks from baseline, rise and fall, rising wave, falling wave, lots of ways to count pulses on a line. But all of them can give you a heart rate from just a blood pressure cuff.
Another Redditor explained the process a bit differently in that same thread, specifically in terms of exactly when the systolic value is ascertained:
OK, imagine your arm is a like a balloon and your heartbeat is a drummer inside. The cuff squeezes the balloon tight, no drumming gets out. As it slowly lets air out, the first quiet drumbeat you “hear” is your systolic. When the drumming gets too lazy to rattle the balloon, that’s your diastolic. The machine just listens for those drum‑beats via pressure wobbles in the cuff, no extra pulse sensor needed!
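For those who prefer pseudocode to drum metaphors, here is a minimal sketch of the classic oscillometric amplitude-ratio method, assuming cuff-pressure samples captured during a slow deflation. The 0.5 and 0.8 characteristic ratios are commonly cited textbook rules of thumb, and, as the first post notes, real devices use proprietary variations, so this is not a description of this particular monitor’s firmware.

```python
import numpy as np

# Find the small pulse oscillations riding on the slowly falling cuff pressure,
# then read systolic/diastolic estimates off their amplitude envelope.
def estimate_bp(cuff_pressure_mmHg, fs_hz=100.0):
    p = np.asarray(cuff_pressure_mmHg, dtype=float)
    win = int(fs_hz)                                            # ~1-s moving average
    baseline = np.convolve(p, np.ones(win) / win, mode="same")  # deflation ramp
    osc = p - baseline                                          # pulse oscillations

    # Envelope: one amplitude point per detected pulse peak (in time order).
    idx = [i for i in range(1, len(osc) - 1)
           if osc[i] > 0 and osc[i] >= osc[i - 1] and osc[i] > osc[i + 1]]
    amps, cuff = osc[idx], baseline[idx]

    k_map = int(np.argmax(amps))   # largest oscillation ~ mean arterial pressure
    sys_idx = next(i for i in range(k_map + 1) if amps[i] >= 0.5 * amps[k_map])
    dia_idx = next(i for i in range(len(amps) - 1, k_map - 1, -1)
                   if amps[i] >= 0.8 * amps[k_map])
    return cuff[sys_idx], cuff[k_map], cuff[dia_idx]   # systolic, MAP, diastolic
```

An inflation-phase device like the one discussed below builds the same sort of envelope while pumping up rather than bleeding down.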
I came across a couple of nuances in a teardown of a different machine than the one we’ll be looking at today. First off, particularly note the following bolded-by-me emphasis phrase:
The system seems to be quite simple – a DC motor drives a pump (PUMP-924A) to inflate the cuff. The port to the cuff is actually a tee, with the other port heading towards a solenoid valve that is venting to atmosphere by default. When the unit starts, it does a bit of a leak-check which inflates the cuff to a small value (20mmHg) and sits there for a bit to also ensure that the user isn’t moving about, and detect if the cuff is too tight or too loose. From there, it seems to inflate at a controlled pressure rate, which requires running the motor at variable speed depending on the tightness of the cuff and the pressure in the cuff.
Note, too, the following functional deviation of the device showcased at “Dr. Gough’s Tech Zone” (by Dr. Gough Lui, with the most excellent tagline “Reversing the mindless enslavement of humans by technology”) from the previous definition I’d quoted, which had described measuring systolic and diastolic pressure on the cuff-deflation phase of the entire process:
As a system that measures on the inflation stroke, it’s quicker but I do have my hesitations about its accuracy.
Wrist cuff-monitoring pros and cons
When I decided to start regularly measuring my own blood pressure at home, I initially grabbed a wrist-located cuff-based monitor I’d had sitting around for a while, through multiple residence transitions (therefore explaining—versus frequent usage, which admittedly would have been a deception if I’d tried to convince you of it—the condition of the packaging), Samsung’s BW-325S (the republished version of the press release I found online includes a 2006 copyright date):






I quickly discovered, however, that its results’ consistency (when consecutive readings were taken experimentally only a few minutes apart, to clarify; day-to-day deviations would have been expected) was lacking. Some of this was likely due to imperfect arm-and-hand positioning on my part. And, since I was single at the time, I didn’t have a partner around to help me put it on; an upper-arm cuff-based device, conversely, left both hands free for placement purposes. That said, my research also suggests that upper-arm cuff-located devices are inherently more reliable than wrist cuff alternatives (or alternative approaches that measure pulse rate via photoplethysmography, computer vision facial analysis, or other techniques, for that matter).
I’ve now transitioned to using an Omron BP786N upper-arm cuff device, which also includes Bluetooth connectivity for smartphone data-logging and -archiving purposes.

Having retired my wrist cuff device, I’ll be tearing it down today to satisfy my own curiosity (and hopefully at least some of yours as well). Afterwards, assuming I’m able to reassemble it in a fully functional condition, I’ll probably go ahead and donate it, in the spirit of “ballpark accuracy is better than nothing at all.” That said, I’ll include a note for the recipient suggesting periodic redundant checks with another device, whether at home, at a pharmacy, or at a medical clinic.
Opening and emptying the box reveals some literature:

along with our patient, initially housed within a rugged plastic case convenient for travel (and as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes).



I briefly popped in a couple of AAA batteries to show you what the display looks like near-fully digit-populated on measurement startup:

More generally, here are some perspectives of the device from various vantage points, and with the cuff both coiled and extended:






There are two screw heads visible on each side. Here’s the right side, whose sticker is also info-rich:



And the left, specifically inside the hard-to-access battery compartment (another admitted reason why I decided to retire the device):



You know what comes next, right?

Easy peasy:
Complete with a focus shift:

The inside of the top half of the case is comparatively unmemorable, unless you’re into the undersides of front-panel buttons:

That’s more like it:

Look closely (lower left corner, specifically) and you’ll see what looks like evidence that one of the screws that supposedly holds the PCB in place has been missing since the device left the factory:

Turns out, however, that this particular “hole” doesn’t go all the way through; it’s just a raised disc formed in the plastic, to fit inside the PCB hole (thereby holding the PCB in place, horizontally at least). Why, versus a proper hole and associated screw? I dunno (BOM cost reduction?). Nevertheless, let’s remove the other (more accurately: only) screw:


Now we can flip the assembly over:

And rotate it 90° to expose the innards to full view.

The pump, valve, and associated tubing are located underneath the PCB:



Directly below the battery compartment is another (white-color) hole, into which fits the pressure transducer attached to the PCB underside:


“Dr. Gough” notes in the teardown of his unit that “The pressure sensor appears to be a differential part with the other side facing inside the case for atmospheric pressure perhaps.”
Speaking of “the other side,” there’s an entire other side of the PCB that we haven’t seen yet. Doing so requires first carefully peeling the adhesive-attached display away:


Revealing, along with some passives, the main control/processing/display IC marked as follows:
86CX23
HL8890
076SATC22 [followed by an unrecognized company logo]
Its supplier, identity, and details remain (definitively, at least) unknown to me, unfortunately, despite plenty of online research (and for what it’s worth, others are baffled as well). Some distributor-published references indicate that the original developer is Sonix, but although that company is involved in semiconductors, its website suggests that it focuses exclusively on fabrication, packaging, and test technologies and equipment. Others have found this same chip in blood pressure monitoring devices from a Taiwan-based personal medical equipment company called Health & Life (referencing the HL in the product code). That makes me wonder whether Samsung simply relabeled and sold a blood pressure monitor originally designed and built by Health & Life (to wit, in retrospect, note the “Healthy Living” branding all over the device and its packaging), or whether Samsung just bought up Health & Life’s excess IC inventory. Insights, readers?
The identity of the other IC in this photo (to the right of the 86CX23-HL) was thankfully easier to ascertain and matched my in-advance suspicion of its function. After cleaning away the glue with isopropyl alcohol and my fingernail, I faintly discerned the following three-line marking:
ATMEL716
24C08AN
C277 D
It’s an Atmel (now Microchip Technology) 24C08 8 Kbit I²C-compatible 2-wire serial EEPROM, presumably used to store logged user data in a nonvolatile fashion that survives system battery expiration, removal, and replacement steps.
All that’s left is to reverse my steps and put everything back together carefully. Reinsert a couple of batteries, press the front panel switch, and…

Huzzah! It lives to measure another person another day! Conceptually, at least…and worry not, dear readers: that 180 millimeters of mercury (mmHg) systolic measurement is not accurate. Wrapping up at this point, I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- Avoiding blood pressure measurement errors
- COVID-19: The long-term implications
- Blood Pressure Monitor Design Considerations
The post (Dis)assembling the bill-of-materials list for measuring blood pressure on the wrist appeared first on EDN.
Hybrid system resolves edge AI’s on-chip memory conundrum

Edge AI—enabling autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives—can now adapt learning models on the fly while keeping energy consumption and hardware wear under tight control.
It’s made possible by a hybrid memory system that combines the best traits of two previously incompatible technologies—ferroelectric capacitors and memristors—into a single, CMOS-compatible memory stack. This novel architecture has been developed by scientists at CEA-Leti in collaboration with other French microelectronics research centers.
Their work has been published in a paper titled “A Ferroelectric-Memristor Memory for Both Training and Inference” in Nature Electronics. It explains how it’s possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems.
The on-chip memory conundrum
Edge AI requires both inference (reading data to make decisions) and learning, a.k.a. training (updating models based on new data), on a chip, without burning through energy budgets or exceeding hardware constraints. However, for on-chip memory, memristors are considered suitable for inference, while ferroelectric capacitors (FeCAPs) are better suited to learning tasks.
Resistive random-access memories or memristors excel at inference because they can store analog weights. Moreover, they are energy-efficient during read operations and better support in-memory computing. However, while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.
On the other hand, ferroelectric capacitors allow rapid, low-energy updates, but their read operations are destructive, making them unsuitable for inference. Consequently, design engineers face the choice of either favoring inference and outsourcing training to the cloud or carrying out training with high costs and limited endurance.
This led French scientists to adopt a hybrid approach in which forward and backward passes use low-precision weights stored in analog form in memristors, while updates are achieved using higher-precision FeCAPs. “Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper on this new hybrid memory system.
How the hybrid approach works
The CEA-Leti team developed this hybrid system by engineering a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode memory device can operate as a FeCAP or a memristor, depending on its electrical formation.
In other words, the same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state. Here, a digital-to-analog transfer method, requiring no formal DAC, converts hidden weights in FeCAPs into conductance levels in memristors.
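Conceptually, the training loop resembles quantization-aware schemes that keep higher-precision hidden weights and periodically quantize them for inference. The sketch below is a loose illustration of that idea under assumed names and bit-widths; it is not CEA-Leti's implementation.

```python
import numpy as np

# Hidden higher-precision weights play the FeCAP role; coarsely quantized copies
# play the memristor role used by the forward/backward passes.
rng = np.random.default_rng(0)
HIDDEN_LEVELS, ANALOG_LEVELS = 256, 16           # assumed: 8-bit hidden, 4-bit analog

def to_analog(hidden):
    step = HIDDEN_LEVELS // ANALOG_LEVELS        # keep only the most-significant bits
    return np.round(hidden / step) * step

hidden_w = rng.uniform(-128, 127, size=(4, 4))   # precise storage (FeCAP role)
analog_w = to_analog(hidden_w)                   # low-precision storage (memristor role)

for step_count in range(100):
    grad = rng.normal(size=hidden_w.shape)       # stand-in for a real backprop gradient
    hidden_w = np.clip(hidden_w - 0.5 * grad, -128, 127)   # small, precise updates
    if step_count % 10 == 9:                     # periodic reprogramming of the analog array
        analog_w = to_analog(hidden_w)
```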
The hardware for this hybrid system was fabricated and tested on an 18,432-device array using standard 130-nm CMOS technology, integrating both memory types and their periphery circuits on a single chip.
CEA-Leti has acknowledged funding support for this design undertaking from the European Research Council and the French Government’s France 2030 grant.
Related Content
- Speak Up to Shape Next-Gen Edge AI
- AI at the edge: It’s just getting started
- Will Memory Constraints Limit Edge AI in Logistics?
- Two new runtime tools to accelerate edge AI deployment
- For Leti and ST, the Fastest Way to Edge AI Is Through the Memory Wall
The post Hybrid system resolves edge AI’s on-chip memory conundrum appeared first on EDN.
DC series motor caution
There are various ways to construct a motor, and the properties of that motor will depend on the construction choice. The series motor configuration has some desirable properties, but it can become quite dangerous to use if proper safety precautions are overlooked.
“Motors,” per se, is a complex subject. Variations in motor designs abound and lie well outside the scope of this essay. Rather, the goal here is to focus on just one aspect of one particular type of motor. To pay proper homage, Figure 1 shows three basic motor designs.
Figure 1 The three basic DC motor types; this article focuses on the DC series motor.
Readers may study the first two at their leisure, but we will focus on the DC series motor highlighted in green and begin with an examination of its basic structure.
The DC series motor
A magnetic field is required. That field is provided by current-carrying coils that are wound over steel structures called “poles”. The number of poles may vary from design to design. Simple-mindedly, Figure 2 shows three examples of pole design: two poles, four poles, and six poles. Note the alternation of north (N) and south (S) magnetic polarities.
The armature (Figure 2) is shown as a setup of four paralleled paths of wires that are insulated from each other but tied together at their ends. In the example shown, there are twenty-four armature conductors arranged in six groups of four paralleled conductors each.

Figure 2 The DC series motor structure showing two, four, and six poles with alternating N and S polarities.
It is conventional to use the letter “Z” to represent the number of armature conductors (twenty-four as shown) and the letter “A” to represent the number of paralleled conductors (four as shown) in each path. Please do not be confused by the fact that this “Z” does NOT refer to an impedance and that this “A” does NOT refer to an area.
As shown in Figure 3, we now look at the circuit of this structure.
The field coils, wrapped around each pole, are connected in series to form the field winding.
The armature conductor groups are wired in series, with their returns made through the center of the armature, where their wire movement is slowest. By contrast, the outermost sections of the armature conductor groups move quite rapidly as they cross the magnetic flux lines of the poles, and since they are all connected in series, they generate a summation voltage called the “back electromotive force,” or the “back EMF”.

Figure 3 The DC series motor equivalent circuit; the series connection of the outermost sections of the armature conductors generates the back EMF.
The current flowing in the field coil and the current flowing in the armature are the same current. There is no other place for the current to flow. The available torque of a DC series motor is therefore proportional to the square of that current. By using really heavy and large conductors for both, that current can be made very large, and the available torque can be made very high. Such motors are used in high-torque applications such as engine starters, heavily loaded and slow-moving lifting cranes, commuter railroad cars, and other such applications.
The governing equation for the back EMF is shown in Figure 4.

Figure 4 The governing equation for back EMF, where the back EMF equals the total magnetic flux multiplied by the rotational speed multiplied by the number of series-connected armature groups.
The total magnetic flux equals the flux per pole times the number of poles. The back EMF equals the total magnetic flux multiplied by the rotational speed multiplied by the number of series-connected armature groups, which, for our present example, will be six for our six-pole magnetic structure.
Connect the load!
Now comes the crucial point to remember about DC series motors.
For safety’s sake, no DC series motor should ever be operated without a mechanical load. A DC shunt motor or a DC compound motor can be safely operated without a mechanical load (separate discussions), but a DC series motor CANNOT be safely operated that way.
When the DC series motor is operating, there will be some back EMF generated in the armature as shown in Figure 4. That back EMF will act in opposition to the input voltage in determining the field and armature current, as shown in Figure 3 and as follows:
Field and armature current = (Input voltage – Back EMF) / (Field resistance + Armature resistance)
However, suppose a DC series motor is allowed to run without a mechanical load as it undergoes rotary acceleration and starts to gain rotational velocity. In that case, a current flows, that current produces some measure of torque, and that torque produces some measure of angular acceleration. With no mechanical load, the rotor will keep accelerating and gaining rotational velocity because there is no load to take rotational energy away from the rotating armature.
As the armature accelerates, the back EMF tends to rise, which lowers the current flow, which lowers the magnetic flux, which lowers the torque; but the flux and the torque do not go to zero, so the rotational velocity continues to rise. The rising velocity further raises the back EMF, which further reduces the current and the magnetic flux, and so on, in a vicious cycle of rotary speed-up that constitutes a runaway condition. If there is no mechanical load on the armature, there will be no upper limit on the armature’s speed of rotation, and the DC series motor can and will destroy itself.
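To see the runaway numerically, here is a toy lumped-parameter sketch; every constant in it is an arbitrary assumption chosen only to show the trend, not a model of any real machine.

```python
# Unloaded vs. lightly loaded series motor: flux ~ current, torque ~ current**2,
# back EMF ~ current * speed. All constants are arbitrary illustrative values.
V, R, C, J = 24.0, 0.5, 0.05, 0.01   # supply (V), series resistance (ohm), flux constant, inertia

def speed_after(t_end_s, load_coeff, dt=1e-3):
    w = 0.0                                   # rotational speed, rad/s
    for _ in range(int(t_end_s / dt)):
        i = V / (R + C * w)                   # V = i*R + back EMF, with back EMF = C*i*w
        torque = C * i * i
        w += dt * (torque - load_coeff * w) / J
    return w

print(f"loaded, 5 s:    {speed_after(5, load_coeff=0.02):7.1f} rad/s (settled)")
print(f"unloaded, 5 s:  {speed_after(5, load_coeff=0.0):7.1f} rad/s and still climbing")
print(f"unloaded, 60 s: {speed_after(60, load_coeff=0.0):7.1f} rad/s, with no upper limit")
```

With even a small speed-proportional load torque, the speed settles; with none, it simply keeps rising until something breaks.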
A story
It is stridently recommended that any mechanical load being driven by a DC series motor be coupled to that motor by a gear mechanism and never by a belt because a belt can break. If such a break occurs, the DC series motor will have no mechanical load, and as described, it will run away with itself.
This issue was taught to my class by my instructor, Dr. Sigfried Meyers, when I was at Brooklyn Technical High School in Brooklyn, NY. There was a motor lab area. Dr. Meyers told us of one day when, with no faculty supervision at hand, several students snuck into that lab and decided to hook up a lab motor in a series-motor configuration with no mechanical load. When they applied power, the motor did exactly as Dr. Meyers had warned it would, and the motor was destroyed.
As Mr. Spock would put it on Star Trek, that was “an undesirable outcome”.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Brushless DC Motors – Part I: Construction and Operating Principles
- DC Motor Drive Basics – Part 1: Thyristor Drive Overview
- Electric motor: types, operation modes
- Speed Control Unit Designed for a DC Motor
The post DC series motor caution appeared first on EDN.
R&S expands VNA lineup to 54 GHz

With the addition of 32-GHz, 43.5-GHz, and 54-GHz models, the R&S ZNB3000 series of vector network analyzers (VNAs) now covers a wider range of applications. The midrange family combines precision and speed in a scalable platform, extending RF component testing to satellite Ka and V bands and high-speed interconnects for AI data centers.

Beyond satellite and data center applications, the ZNB3000 also enables RF testing for 5G, 6G, and Wi-Fi. This makes it well-suited for both production environments and research labs working on next-generation technologies.
The ZNB3000 offers strong RF performance with up to 150-dB dynamic range and less than 0.0015-dB RMS trace noise. It also provides fast sweep cycle times of 11.8 ms (1601 points, 1 MHz to 26.5 GHz) and high output power of 11 dBm at 26.5 GHz. A 9-kHz start frequency enables precise time-domain analysis for signal integrity and high-speed testing.
Flexible frequency upgrades allow customers to start with a base unit and expand the maximum frequency later. ZNB3000 VNAs operating at the new frequencies will be available by the end of 2025.
The post R&S expands VNA lineup to 54 GHz appeared first on EDN.
2-in-1 SiC module raises power density

Rohm has introduced the DOT-247, a 2-in-1 SiC molded module that combines two TO-247 devices to deliver higher power density. The dual structure accommodates larger chips, while the optimized internal design lowers on-resistance. Package enhancements cut thermal resistance by roughly 15% and reduce inductance by about 50% compared with standard TO-247 devices. Rohm reports a 2.3× increase in power density in a half-bridge configuration, enabling the same conversion capability in nearly half the volume.

The 750-V and 1200-V devices target industrial power systems such as PV inverters, UPS units, and semiconductor relays, and are offered in half-bridge and common-source configurations. While two-level inverters remain standard, demand is growing for multi-level circuits—including three-level NPC, three-level T-NPC, and five-level ANPC—to support higher voltages. These advanced topologies often require custom designs with standard SiC packages due to the complexity of combining half-bridge and common-source configurations.
Rohm addresses this challenge with standardized 2-in-1 modules supporting both topologies, providing greater flexibility for NPC circuits and DC/DC converters. This approach reduces component count and board space, enabling more compact designs compared with discrete solutions.
Devices in the 750-V SC740xxDT series and 1200-V SCZ40xxKTx series are available now in OEM quantities. Sampling of AEC-Q101 qualified products is scheduled to begin in October 2025.
The post 2-in-1 SiC module raises power density appeared first on EDN.
Redriver strengthens USB4v2 and DP 2.1a signals

Parade Technologies’ PS8780 four-lane bidirectional linear redriver restores high-speed signals for active cables, laptops, and PCs. It supports USB4v2 Gen 4, Thunderbolt 5, and DisplayPort 2.1 Alt Mode, and is pin-compatible with the PS8778 Gen 3 redriver.

The redriver delivers USB4v2 at up to 2×40 Gbps symmetric or 120 Gbps asymmetric, TBT5 at 2×41.25 Gbps, and DP 2.1 UHBR20. It provides full USB4, USB 3.2, and DP 2.1a power management, including Advanced Link Power Management (ALPM). Its low-power design and Modern Standby support extend battery life in mobile devices and reduce energy use in active cables.
The PS8780 extends USB4v2 signals beyond the typical 1-m (3.3-ft) passive cable limit while maintaining full performance. When paired with a USB4v2 retimer between the SoC (USB4v2 router) and the USB-C/USB4 connector, it also lengthens system PCB traces. Operating from a 1.8 V supply, the device consumes 297 mW at 40 Gbps and just 0.5 mW in standby. Its compact 28-pin, 2.8×4.4 mm QFN package suits space-constrained designs.
The PS8780 redriver is now sampling.
The post Redriver strengthens USB4v2 and DP 2.1a signals appeared first on EDN.