EDN Network

Subscribe to EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 2 hours 55 min ago

Handheld enclosures add integrated cable glands

Thu, 12/04/2025 - 20:33

OKW now offers CONNECT fast-assembly handheld plastic enclosures with optional integrated cable glands, making it easier to install power and data cables.

Cost-effective CONNECT is ideal for network technology, building services, safety engineering, IoT/IIoT, medical devices, analytical instruments, data loggers, detectors, sensors, test and measurement.

OKW's CONNECT handheld enclosures with integrated cable glands. (Source: OKW Enclosures Inc.)

CONNECT’s two case shells snap together for fast and easy assembly: no screws are required. This offers the choice of two ‘fronts’: one shell is convex – perfect for LEDs – while the other is flat and recessed for a compact display or membrane keypad. Inside the flat shell there are mounting pillars for PCBs and components.

CONNECT enclosures feature open apertures at each end. For these, design engineers can specify a combination of ASA+PC blank end panels and soft-touch TPE cable glands with integrated strain relief. Cable diameters from 0.134″ to 0.232″ are accommodated. The two long sides provide ample space for USB connectors.

These UV-stable ASA+PC (UL 94 V-0) enclosures are available in six sizes from 2.36″ x 1.65″ x 0.87″ to 6.14″ x 2.13″ x 0.87″. The standard colors are off-white (RAL 9002) and black (RAL 9005). Custom colors are also available.

The cable glands come in volcano (gray) and black (RAL 9005). The end parts are off-white (RAL 9002) and black (RAL 9005). Other accessories include wall holders, rail holding clamps for round tubes up to ø 1.26″, and self-tapping screws.

OKW can supply CONNECT fully customized. Services include machining, lacquering, printing, laser marking, decor foils, RFI/EMI shielding, and installation and assembly of accessories.

For more information, view the OKW website: https://www.okwenclosures.com/en/Plastic-enclosures/Connect.htm

The post Handheld enclosures add integrated cable glands appeared first on EDN.

Through-hole connector resolves surface-mount dilemma

Thu, 12/04/2025 - 16:19

Manufacturing of a modern component-laden printed circuit board (PCB) is an amazing fusion and coordination of diverse technologies. There’s the board itself as substrate, the stencils and masks that enable precise placement of solder paste, and the pick-and-place mechanical system that places components (both ICs and passive ones) on the appropriate lands with pinpoint precision and repeatability, all culminating in most cases in a sophisticated reflow-soldering process.

Most of the loaded components use surface mount technology (SMT) and tiny contacts to their respective lands on the PCB. However, it wasn’t always an SMT world. In the early days of PCBs, the situation was somewhat different. Most of the components were dual inline package (DIP) ICs and passives with tangible wire leads, where their connections went through holes in the board (Figure 1).

Figure 1 Dual-inline package (DIP) was dominant in the early days of ICs and is still favored by makers and DIY enthusiasts; but most devices are no longer offered this way, nor can they be. Source: Wikipedia

Not only did this require costly drilling of hundreds and thousands of space-consuming holes, but component installation was a challenge. The loaded board—with these through-hole components mounted on one side only—went through a wave-soldering process which soldered the leads to the tracks on the bottom of the board.

The advent of SMT

The use of surface-mount technology began in the 1960s, when it was originally called “planar mounting”. However, surface-mount technology didn’t become popular until the mid-1980s; even as recently as 1986, surface-mount components represented only around 10% of the total market. The technique took off in the late 1980s, and most high-tech electronic PCBs were using surface-mount devices by the late 1990s.

SMT enables smaller components, higher board densities, use of top and bottom sides of the board for components, and a reflow soldering process. Today, active and passive components are offered in SMT packages whenever possible, with through-hole packages being the exception. SMT devices can be placed using an automated arrangement, while many larger through-hole ones require manual insertion and soldering. Obviously, this is costly and disruptive to the high-volume production process.

The demand for SMT versions is so overwhelming that many products are available only in that package type. SMT makes possible many super-tiny components we now count on; some are just a millimeter square or smaller.

Due to the popularity of SMT, vendors often announce when they have managed to make a former through-hole component into an SMT one. In many cases, doing so is not easy for ICs, as there are die-layout, thermal, packaging, and reliability issues.

There are also transitions for passives. For example, Vishay Intertechnology recently announced that it has transformed one of its families of axial-leaded safety resistors into surface-mount versions using a clever twisting of the leads in conjunction with a T-shaped PCB land pattern (Figure 2). This is not a trivial twist, because these resistors must also meet various safety and regulatory mandates for performance under normal and fault conditions while being compatible with automated handling.

Figure 2 Transforming this leaded safety resistor from a through-hole to SMT device involved much more than a clever design as the SMT version must meet a long list of stringent safety-related requirements and tests. Source: Vishay

In other cases, vendors of leaded discrete devices such as mid-power MOSFETs have announced with fanfare that they have managed to engineer a version with the same ratings in an SMT package. No question about it; it’s a big deal in terms of attractiveness to the customer.

What about the SMT holdouts?

Despite the prevalence of, and desire for, SMT devices, some components are not easily transformed into SMT-friendly packaging that is also compatible with reflow soldering. Larger connectors for attaching discrete terminated wires to wiring blocks are a good example. If they were SMT devices, the stress they endure would flex the board and weaken their soldered connections, as well as affect the integrity of adjacent components. Their relatively large size also makes SMT handling a challenge.

But that dilemma is seeing some resolution. Connector vendor Weidmüller Group has developed what it calls through-hole reflow (THR) technology. These are terminal-block connectors for discrete wires that do require PCB holes and through-hole mounting for mechanical integrity, yet they can then be soldered using the standard reflow process along with the other SMT devices on the board.

One of the vendor’s families with this capability was developed for Profinet applications and supports Ethernet-compliant data transmission up to 100 Mbps (Figure 3).

Figure 3 One of the available families of THR connector blocks is for Profibus installations. Source: Weidmüller

These connector blocks use glass-fiber-reinforced liquid crystal polymer (LCP) bodies to guarantee a high level of shape stability. The favorable temperature properties of the material (melting point of over 300°C) and the built-in stand-off of at least 0.3 mm are well-suited to the solder-paste process. They come in a choice of two pin lengths, 1.5 mm and 3.2 mm, to precisely match board thickness, all with very tight tolerances on dimensional stability and pin centering (Figure 4).

Figure 4 The connector pin must have the right length and precise centering for reliable contact. Source: Weidmüller

The reflow soldering profile is like the ones required for other SMT components, so the entire board can be soldered in one pass (Figure 5).

Figure 5 The recommended reflow soldering profile for these THR connectors matches the profile of other SMT devices. Source: Weidmüller

Another connector family supports various USB connections (Figure 6).

Figure 6 A range of THR USB connectors is also available. Source: Weidmüller

With these THR connectors, you get the mechanical integrity of through-hole devices alongside the manufacturing benefit of automatic insertion (Figure 7) and reflow soldering. There is no need to manually insert the connector and run a separate soldering step. You can also use them for through-hole wave soldering, if you prefer.

Figure 7 Even the larger-block THR connectors can be automatically inserted using SMT pick-and-place systems. Source: Weidmüller

Connectors such as these will undoubtedly lower manufacturing costs while not compromising performance. Once again, it’s a reminder of the vital contribution of mechanical know-how and material-science expertise to less-visible, low-glamour yet important advances in our “electronics” industry.

Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.


The post Through-hole connector resolves surface-mount dilemma appeared first on EDN.

The Oura Ring 4: Does “one more” deliver much (if any) more?

Thu, 12/04/2025 - 15:00

The most surprising thing to me about the Oura Ring 4, compared to its Gen3 predecessor, is how similar the two products are in terms of elemental usage perception. Granted, the precursor’s three internal finger-orientation bumps are now effectively gone, and there are also multiple internal implementation differences between the two generations, some of which I’ll touch on in the paragraphs that follow. But they both use the same Android and iOS apps, generate the same data, and run for roughly the same ~1 week between charges.

One key qualifier on that last point: I bought them both used on eBay. The Ring 4, which claims 8 days of operating life when new, may have already accumulated more cycles from prior-owner usage than was the case with the Gen3 forebear, which touts 7 days’ operating life when new.

Smart ring “kissing cousins”

They look similar, too: the Gen3 in “Brushed Titanium” is the lower of the two rings on my left index finger in the following photos, with the Ring 4 in “Brushed Silver” above it:

And here’s the Ring 4 standalone, alongside my wedding band:

A smart ring enthusiast’s detailed analysis of the two product generations, complete with an abundance of comparative captured-data results, is below for those of you interested in more of an on-finger relative appraisal than I was able (and, admittedly, willing) to muster:

Sensing enhancements

Perhaps the biggest claimed innovation with the newer Ring 4 is Smart Sensing:

Smart Sensing is powered by an algorithm that works alongside the research-grade sensors within Oura Ring 4 to respond to each member’s unique finger physiology, including the structure and distinct features of your finger (i.e. skin tone, BMI, and age).

 The multiple sensors form an 18-path multi-wavelength photoplethysmography (PPG) subsystem, which adjusts dynamically to your lifestyle throughout the day and night.

As the functional representation in this conceptual video suggests, there are two multi-LED clusters, each supporting three separate light wavelengths (red, green, and infrared), with corresponding reception photodiodes in the rectangular structures to either side of each cluster (three structures total):

To complete the picture, here’s the inner top half of my Ring 4:

Six total LEDs, outputting to three total photodiodes, translates to 18 total possible light-path options (which is presumably how Oura came up with the number I quoted earlier), with the optimal paths initially determined as part of the first-time ring setup. Further fine-tuning is done dynamically while the ring is being worn, including compensating for non-optimum repositioning on the finger per the earlier-mentioned lack of distinct orientation bumps in this latest product generation.
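As a quick illustration of where that 18 comes from: every LED in the two clusters can, in principle, pair with every photodiode. The labels below are mine, purely illustrative, not Oura's:

```python
from itertools import product

# Two clusters, each with red, green, and infrared LEDs; three
# photodiode structures in total (labels are illustrative, not Oura's).
leds = [f"{cluster}-{color}"
        for cluster in ("cluster-A", "cluster-B")
        for color in ("red", "green", "ir")]
photodiodes = ["pd-1", "pd-2", "pd-3"]

# Every LED paired with every photodiode gives the possible light paths
paths = list(product(leds, photodiodes))
print(len(paths))  # 6 LEDs x 3 photodiodes = 18
```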

What are the various-wavelength LEDs used for? Generally speaking, the infrared ones are capable of penetrating further into the finger tissue than are their visible-light counterparts, at some presumed tradeoff (accuracy, perhaps?). And specifically:

  • Red and infrared LEDs measure blood oxygen levels (SpO2) while you sleep.
  • Green and infrared LEDs track heart rate (HR) and heart rate variability (HRV) 24/7, as well as respiration rate during sleep.

All three LED types were also present in the Gen3 ring, albeit in a different multi-location configuration than the Ring 4 (one common to both the Heritage and Horizon Gen3 styles):

The labeling in the following Ring 4 “stock” image, by the way, isn’t locationally or otherwise accurate, as far as I can tell; the area labeled “accelerometer” is actually a multi-LED cluster, for example, and in contrast to the distinct “Red And Infrared…” and “Green And Infrared…” labels in the stock image, both of the clusters actually contain both green and red (plus infrared) LEDs:

Also embedded within the ring is a 3D accelerometer, which I’ve just learned, thanks to a Texas Instruments technical article I came across while researching this writeup, is useful not only for counting steps (along with, alas, keystrokes and other finger motions mimicking steps) but also “used in combination with the light signals as inputs into PPG algorithms.”

And there’s also a digital temperature sensor, although it doesn’t leverage direct skin contact for measurement purposes. Instead, it’s a negative temperature coefficient (NTC) thermistor whose (quoting from Wikipedia) “resistance decreases as temperature rises; usually because electrons are bumped up by thermal agitation from the valence band to the conduction band”.

Battery life optimizations

As noted in the public summary of a recent Ring 4 teardown by TechInsights, the newer smart ring has a higher capacity battery (26 mAh) than its Gen3 predecessor, which is likely a key factor in its day-longer specified operation between recharges. Additionally, the Ring 4’s Smart Sensing algorithms further optimize battery life as follows:

In order to optimize signal quality and power efficiency, Oura Ring 4 selects the optimal LED for each situation, instead of burning several LEDs simultaneously.

and

Smart Sensing also helps maximize the battery life of Oura Ring 4 by dynamically adjusting the brightness of the LEDs, using the dimmest possible setting to achieve the desired signal quality. This allows the battery life of Oura Ring 4 to extend up to eight days.

Here, for example, is a dim-light photo of both green LEDs in action, one in each cluster:

Generally speaking, the LEDs are active only briefly (when they’re illuminated at all, that is) and I haven’t yet succeeded in grabbing my smartphone and activating its camera in time to capture photos of any of the other combinations I’ve observed and note below. They include:

  • Single green LED (either cluster)
  • Concurrent single green and single red LEDs (one from each cluster), and
  • Both single (either cluster) and dual concurrent (both clusters) red LED(s)

I’ve also witnessed transitions from bright to dim output illumination, prior to turnoff, for both one and two concurrent green LEDs, but not (yet, at least) for either one or both red LED(s). And perhaps obviously, the narrow-spectrum eyes-and-brain visual sensing and processing subsystem in my noggin isn’t capable of discerning infrared (or even near-IR) emissions, so…

Third-party functional insights

Operating life between integrated battery recharges, which I’ve already covered, is key to wearer satisfaction with the product, of course, as is recharge speed to “full” for the next multi-day (hopefully) wearing period.

But for long-term satisfaction, a sufficiently high number of supported recharge cycles prior to effective battery expiration (and subsequent landfill donation) is also necessary. To wit, I’ll close with some interesting (at least to me) information that I indirectly (and surprisingly, happily) stumbled across.

First off, here’s what the Ring 4 looks like in the process of charging on its inductive dock:

In last month’s Oura Gen3 write-up, I shared a photo of the portable charging case (including an integrated battery) that I’d acquired from Doohoeek via Amazon, with the dock mounted inside. Behind it was the Doohoeek charging case for the Oura Ring 4. They look the same, don’t they?

That’s because, it turns out, they are the same, at least from a hardware standpoint. Requoting what I first mentioned last month, the “development story (which I got straight from the manufacturer) was not only fascinating in its own right but also gave me insider insight into how Oura has evolved its smart ring charging scheme over time.” More about that soon, likely next month.

Here’s the Ring 4 and dock inside the second-generation Doohoeek case (which, by the way, is also backwards-compatible with the Gen3 ring and dock):

And as promised, here’s the full back-and-forth between myself (in bold) and the manufacturer (in italics) over Amazon’s messaging system:

As I believe you already realize, while Doohoeek’s first-generation battery case that I’d bought from you through Amazon works fine with the Oura Gen3, it doesn’t (any longer, at least) work with the Ring 4. For that, one of Doohoeek’s second-generation battery cases is necessary. Can you comment on what the incompatibility was that precluded ongoing reliable operation of the original battery case with the Ring 4 charging dock (although it still works fine for the Gen3)? A USB-PD handshaking issue between your battery and the charging dock? Or was it something specific to the ring itself?

Hi Brian,

thank you for your question! Here’s a brief technical explanation of the Ring 4 compatibility issue with our original charging case:

Our first-gen charging case used a smart current-detection algorithm to determine charging status. Under normal conditions, when the ring reached full charge, the current would drop and remain consistently low—triggering our case to stop charging. This worked flawlessly with Oura Gen3 and initially with the Ring 4.

However, after a recent Oura firmware update, the Ring 4 began exhibiting unstable current draw patterns during charging—specifically, prolonged periods of low current followed by unexpected current spikes, even when the ring was not fully charged. This behavior caused our case to misinterpret the ring as “fully charged” and prematurely terminate charging.

To resolve this, we redesigned our charging logic in the updated version to implement a more robust timing-based backup protocol.

We appreciate your interest and hope this clarifies the engineering challenge we addressed!

Best,

Doohoeek Support Team

This is perfect! It was obvious to me that whatever it was, it was something that a firmware update couldn’t resolve, and I’d wondered if ring-generated current draw variances were to blame. I suspect the Ring 4 is doing this to maximize battery life over extended charge cycle counts. Thanks again!

p.s…I also wonder why you didn’t change the product naming, box labeling, etc. so potential buyers could have reassurance as to which version they’d be getting?

Hi Brian,

Thank you for your insightful feedback — you’ve clearly thought deeply about how these systems interact, and we really appreciate that.

Yes, the current behavior on the Ring 4 appears optimized for long-term battery longevity 🙂

Regarding your question about naming and packaging:

We actually had already mass-produced the outer shells and packaging for old version when Oura pushed the update that changed the charging behavior. Rather than discard those components (and create unnecessary waste), we decided to prioritize a firmware-level fix and use the same exterior.

That’s why the outside looks identical, but the internal charging behavior is now completely updated.

If you’d like to confirm whether your unit is the latest version, you can check the FNSKU barcode on the package:

Old version (no longer in production) ONLY used: X004HYCA09

New version (may change in future production) currently used: X004Q62DV9

Customers can also contact us with a photo of the label, and we’d be happy to verify it for them personally.

Thanks again for your support and sharp eyes.

Best,

Doohoeek Support Team

Very interesting! So it IS possible to firmware-retrofit existing units. Would that require a unit shipment back to the factory for the update, or did you consider developing a Windows-based (for example) update utility for customer upgrade purposes (by tethering the battery case’s USB-C input to a computer)?

Hi Brian,

Great question.

Unfortunately, a firmware update is not possible for units that have already been shipped. The hardware design does not support customer-side or even a cost-effective return-to-factory update process.

The only practical solution we could implement was to correct the firmware in all newly produced units moving forward, which is what you have received.

We appreciate your understanding!

Best,

Doohoeek Support Team

And with that, having recently passed through 2,000 words, I’ll wrap up for today. Stay tuned for the aforementioned teardown-to-come (on a different Ring 4; I plan to keep using this one!), and until then, I as-always welcome your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post The Oura Ring 4: Does “one more” deliver much (if any) more? appeared first on EDN.

A digital filter system (DFS), Part 1

Wed, 12/03/2025 - 15:00

Editor’s note: In this Design Idea (DI), contributor Damian Bonicatto designs a digital filter system (DFS), a benchtop filtering system that can apply various filter types to an incoming signal. Filtering range is up to 120 kHz.

In Part 1 of this DI, the DFS’s function and hardware implementation are discussed.

In Part 2 of this DI, the DFS’s firmware and performance are discussed.

Selectable/adjustable bench filter

Over the years, I have been able to obtain a lot of equipment needed for designing, testing, and diagnosing electronic equipment. I have accumulated power supplies, scopes, digital voltmeters (DVMs), spectrum analyzers, signal generators, vector network analyzers (VNAs), LCR meters, etc., etc.

One piece of equipment I never found is a reasonably priced lab bench filter—something that would take in a signal and filter it with a filter whose parameters could be set on the front panel.

There are some tools that run on a PC’s sound card, but I don’t like to connect my electronic tests to my PC for fear that I’ll damage the PC. The other issue is that I am looking for something that can go up to 100 kHz or so, which is beyond the range of many sound cards. So, it was time to try to design one.

Wow the engineering world with your unique design: Design Ideas Submission Guide

What I came up with is a small bench-top device with one BNC input for the signal you want filtered and one BNC output for the resulting filtered signal (Figure 1). It has a touchscreen LCD to select a filter type and the cutoff/center frequency. So, what can it do?

Figure 1 The finished digital filter system that allows you to select a low-pass, high-pass, band-pass, or band-stop filter type.

You can select a low-pass, high-pass, band-pass, or band-stop filter type. The filter can also be either a two-pole or a four-pole Butterworth.
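The firmware is covered in Part 2, but for a flavor of what a two-pole Butterworth section looks like in digital form, here is a generic biquad design using the standard bilinear-transform (audio-EQ-cookbook) formulas. This is my own sketch, not the author's actual code; a four-pole filter would simply cascade two such sections:

```python
import math

def butter2_lowpass(fc, fs):
    """Biquad coefficients for a 2-pole Butterworth low-pass filter
    (bilinear-transform form, per the RBJ audio-EQ cookbook).
    Returns (b0, b1, b2, a1, a2), normalized so that a0 = 1."""
    w0 = 2 * math.pi * fc / fs
    q = 1 / math.sqrt(2)               # Butterworth Q
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    a0 = 1 + alpha
    b0 = (1 - cosw) / 2 / a0
    b1 = (1 - cosw) / a0
    b2 = b0
    a1 = -2 * cosw / a0
    a2 = (1 - alpha) / a0
    return b0, b1, b2, a1, a2

# Example: a 10 kHz cutoff at an assumed 500 kHz sample rate
b0, b1, b2, a1, a2 = butter2_lowpass(10e3, 500e3)

# Evaluate H(z) at z = 1 (DC): a low-pass section should pass DC at unity
dc_gain = (b0 + b1 + b2) / (1 + a1 + a2)
print(round(dc_gain, 6))  # 1.0
```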

For the frequency, you can select anywhere from a few Hz to 120 kHz. There are also three gain controls (an analog input gain knob, an analog output gain knob, and an internal digital gain).

The cost to build the filter is around $75, plus some odds and ends you probably already have around.

I also included a download for a 3D printable enclosure. Let’s take a deeper look at this design.

The circuit

The design is centered around a digital filter executed in a Cortex M4 microcontroller (MCU). The first of the system’s three main blocks is an analog front end (AFE), composed of four op-amps providing input gain adjustment and antialiasing filtering.

Next is a single board computer (SBC) powered by a Cortex M4. This provides an input for the ADC, controls the LCD and touchscreen, executes the digital filters, and controls the output DAC.

The last block is the analog back end (ABE), which again consists of four op-amps that make up the analog gain circuit and the analog output reconstruction filter.

Let’s take a look at the schematic to see more detail (Figure 2).

Figure 2 The DFS schematic showing the AFE, the ABE, and SBC that provides an input for the ADC, controls the TFT display, executes the digital filters, and controls the output DAC.

Here you can see the blocks we just talked about and a few other minor pieces. Let’s dive a little deeper.

The AFE

The AFE starts by AC-coupling the external signal you want to filter. Then, the first op-amp, after the protection diodes, provides an adjustable gain for the input. This uses a simple single-supply inverting op-amp circuit. RV1 is a potentiometer on the front panel (see Figure 1 above) that allows for a gain of the input from 1x to 5x.

Again, looking at the schematics, we next see a single-pole low-pass filter, which is tuned to 120 kHz. Next are a pair of 2-pole Sallen-Key low-pass filters with components selected to create a Butterworth filter set to 120 kHz.

So now our input signal has been filtered at a frequency that will allow the MCU’s ADC to sample without aliasing. I designed this filter and the ABE filter using TI’s WEBENCH Circuit Designer.

So, we have a 5-pole low-pass filter frontend that will give us a roll-off of 30 dB per octave, or 100 dB per decade.
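Those roll-off numbers follow from the ideal n-pole Butterworth magnitude response, |H(f)| = 1/sqrt(1 + (f/fc)^(2n)). A quick numeric check (my sketch, assuming all five poles share the 120-kHz corner):

```python
import math

def butter_atten_db(f, fc, n):
    """Attenuation of an ideal n-pole Butterworth low-pass, in dB:
    20*log10|H| where |H| = 1 / sqrt(1 + (f/fc)^(2n))."""
    return -10 * math.log10(1 + (f / fc) ** (2 * n))

fc = 120e3  # front-end corner frequency, per the article
print(f"1 octave above fc: {butter_atten_db(2 * fc, fc, 5):.1f} dB")
print(f"1 decade above fc: {butter_atten_db(10 * fc, fc, 5):.1f} dB")
# one octave above fc: ~ -30 dB; one decade above: ~ -100 dB,
# matching the 30 dB/octave and 100 dB/decade figures
```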

The flywheel RC circuit is next. As explained in a previous article, the capacitor in this RC circuit provides a charge to hold up the voltage level when the ADC samples the input. More on this can be found at: ADC Driver Ref Design Optimizing THD, Noise, and SNR for High Dynamic Range

The ABE

We’ll skip the MCU for now and jump to the right side of the schematic. Here we see a circuit very similar to the AFE, but this is used as a reconstruction filter that removes artifacts created by the discrete steps used in the MCU’s DAC.

So, starting from the DAC output from the SBC, we see an adjustable gain stage which allows the user, via the output potentiometer, to increase the output level, if desired. This output gain can be adjusted from 1x to 5x.

Next in the schematic, you’ll see two stages of two-pole Sallen-Key low-pass filters configured exactly like the pair in the AFE. So again, they are configured as a 120 kHz Butterworth filter. 

The last op-amp circuit in the ABE is a 2x gain stage and buffer. Why a 2x gain stage? I’ll explain more later, but the gist is that the DAC has a limited slew rate compared to the sample rate I used. So, I reduced the value in the DAC by 2 and then compensated for it in this gain stage.

A note about the op-amps used in this design: The design calls for something that can handle 120 kHz passing through a gain of up to 5 and also dealing with the Sallen-Key filters (the TI WEBENCH shows a gain-bandwidth requirement of at least 6 MHz). I also needed a slew rate that could deal with a 120 kHz signal with a level of 3.3 Vpp. The STMicroelectronics TSV782 fit the bill nicely.
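Those requirements are easy to sanity-check. For an undistorted sine, the needed slew rate is 2·pi·f·Vpeak, and a gain-of-5 stage at 120 kHz sets a naive gain-bandwidth floor (the Sallen-Key stages push the real requirement up toward WEBENCH's 6-MHz figure):

```python
import math

# Back-of-envelope op-amp requirements, assuming a full-scale
# 3.3 Vpp sine at the 120 kHz corner frequency (per the article)
f = 120e3
vpk = 3.3 / 2  # peak voltage of a 3.3 Vpp signal

# Minimum slew rate for an undistorted sine: SR = 2*pi*f*Vpeak
sr_v_per_us = 2 * math.pi * f * vpk / 1e6
print(f"min slew rate: {sr_v_per_us:.2f} V/us")   # ~1.24 V/us

# Crude GBW floor for a gain-of-5 stage at 120 kHz; filter stages
# and margin raise the practical requirement well above this
gbw_floor = 5 * f
print(f"naive GBW floor: {gbw_floor / 1e6:.1f} MHz")
```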

The last two components are the resistor and the capacitor before the output BNC connector. The resistor is used to stabilize the op-amp circuit if the output is connected to a large capacitive load. The 1-µF capacitor provides AC coupling to the output BNC.

The MCU

The brains of this design is a Feather M4 Express SBC, which contains a Microchip Technology ATSAMD51 with a Cortex M4 core. It is primarily powered by a USB connection (or a battery we will discuss in Part 2).

This ATSAMD51 has a few ADCs and DACs, and we use one of each in this design. It also has plenty of memory (512 kB of program memory and 192 kB of SRAM).

It runs at a usable 120 MHz and is enhanced with a floating-point processor. All this works nicely for the digital filtering we will explain in Part 2. Other features I used include a number of digital I/O ports, an SPI port, and a few other ADC inputs.

One feature I found very nice on the SBC was a 3.3 VDC linear regulator that not only powers the MCU, but has sufficient output to power all other devices in the design.

On the schematic (Figure 2), you can see that the AFE connects to an ADC input on the SBC, and an SBC DAC connects to the ABE circuit. Another major component is the TFT LCD and touchscreen, powered by the 3.3 VDC coming from the SBC.

Miscellaneous schematic items

That leaves a few extra items on the schematic.

Voltage reference

There are two simple ½ voltage dividers to generate 1.65 VDC from the 3.3 VDC supply. One is used in the AFE to get a mid-supply reference for the single-supply op-amp design. This reference is simply two equal series-connected resistors, with the reference taken from their center point and a capacitor from that point to ground.

A second reference was created for the ABE circuit. I used two references as I was laying this out on a protoboard, and the circuits were separated by a significant distance (without a ground plane).

LED indicator

There is also an LED used to indicate that the ADC is clipping the signal because the input is too large or too small. Another LED indicates the DAC is clipping for the same reasons. There will be more discussion on this in the firmware section in Part 2.

Floating ground

An interesting feature of the SBC is that it contains the charging circuit for a lithium polymer 3.7-V battery. This is optional in the design, but it does allow you to operate the DFS with a floating ground and a quiet voltage supply, which may help in your testing.

Enable

A somewhat unique feature, which turns out to be helpful, is an enable line that turns off the system when pulled to ground.

If you use a battery, along with the USB, and want to use a typical power on/off switch, you would need to break the incoming USB line and the battery line, which makes it a 2-pole switch.

So, to get the DFS to power down, I pull the enable line to ground using a 3-terminal SPDT switch, which I found has the typical “O/I” on/off markings. You can use an SPST switch instead, but note that it will then have to be switched to “I” to shut the DFS down and “O” to turn it on.

USB voltage display

A ½ voltage divider, with a filter capacitor, is connected to the USB input and used as an input to one of the ADCs, so we can display the connected USB voltage.
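In firmware, turning that halved reading back into a displayed USB voltage is a one-line scaling. A sketch, assuming a 12-bit ADC with a 3.3-V reference (plausible ATSAMD51 settings, but my assumption rather than the article's):

```python
def usb_volts(raw, adc_bits=12, vref=3.3, divider=2.0):
    """Convert a raw ADC reading of the halved USB rail back to volts.
    The 12-bit width and 3.3 V reference are assumptions, not
    figures from the article."""
    return raw / ((1 << adc_bits) - 1) * vref * divider

# A nominal 5 V USB supply reads about 2.5 V at the ADC pin,
# i.e., a raw code near 3103 out of 4095
print(round(usb_volts(3103), 2))  # ~5.0
```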

Optional reset

The last item is an optional reset. I did not provide a hole to mount a pushbutton, but you can drill a hole in the back of the enclosure for a normally-open pushbutton.

More information

This device is fairly easy to build. I built the circuit on a protoboard with SMT parts (through-hole would have been easier). Maybe someone would like to lay out a PCB and share the design. I think you’ll find this DFS has a number of uses in your lab/shop.

The schematic, code, 3D print files, links to various parts, and more information and notes on the design and construction can be downloaded at: https://makerworld.com/en/@user_1242957023/upload

Editor’s Note: Stay tuned for Part 2 to learn more about the device’s firmware.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

The post A digital filter system (DFS), Part 1 appeared first on EDN.

Silly simple precision 0/20mA to 4/20mA converter

Wed, 12/03/2025 - 15:00

This Design Idea (DI) offers an alternative solution for an application borrowed from frequent DI contributor R. Jayapal, presented in: “A 0-20mA source current to 4-20mA loop current converter.” 

It converts a 0/20mA current mode input, such as produced by some process control instrumentation, into a standard industrial 4/20mA current loop output.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the circuit. It’s based on a (very) old friend—the LM337 three-legged regulator. Here’s how it works.

Figure 1 U1 plus R1 through R5 current steering networks convert 0/20mA input to 4/20mA output.

The fixed resistance of the R1 + R2 + R3 series network, working in parallel with the adjustable R4 + R5 pair, presents a combined load of 312 Ω to the 1.25-V output of U1. That causes a zero-input current draw of 1.25 V / 312 Ω = 4 mA, trimmed by R5 (see the calibration sequence detailed later).

Summed with this is a 0 to 16 mA current derived from the 0 to 20 mA input, controlled by the 4:1 ratio current split provided by the R1/R2/R3 current divider and fine trimmed by R2 (ditto). 
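A quick numeric check of the two current contributions described above, working directly from the article's stated figures (the individual resistor values are not restated here):

```python
VREF = 1.25          # LM337 reference output, volts
R_COMBINED = 312.0   # R1+R2+R3 in parallel with R4+R5, ohms (per the text)

i_zero = VREF / R_COMBINED      # live-zero offset current, amps (~4 mA)
i_span = 20e-3 * (4 / 5)        # 4:1 current split of the 20 mA input: 16 mA
i_full_scale = i_zero + i_span  # summed loop current at full-scale input
```

At a 20-mA input, the 4-mA offset plus the 16-mA steered span lands on the 20-mA top of the output range, as the text claims.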

Note that 4 mA is below the guaranteed minimum regulation current specification for the LM337. In fact, most will work happily with half that much, but you might get a greedy one. So just be aware.

The result is a precision conversion of the 0 to 20mA input to an accurate 4 to 20mA loop current. Conversion precision and stability are insensitive to R2 trimmer wiper resistance due to the somewhat unusual input topology in play.

Calibration proceeds in a four-step linear (iteration-free one-pass) sequence consisting of:

  1. Set input = 0.0 mA.
  2. Adjust R5 for 4.00 mA loop current.
  3. Set input = 20.00 mA.
  4. Adjust R2 for 20.00 mA loop current.

Done.

The input voltage burden is a negative 1.0 V. The output loop voltage drop is 4 V minimum to 40 V maximum. The maximum ambient temperature (with no U1 heatsink) is 100°C. Resistors should be precision types, and the trimmer pots should be multiturn cermet or similar.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

The post Silly simple precision 0/20mA to 4/20mA converter appeared first on EDN.

Transitioning from Industry 4.0 to 5.0: It’s not simple

Tue, 12/02/2025 - 18:35

The shift from Industry 4.0 to 5.0 is not an easy task. Industry 5.0 implementation will be complex, with connected devices and systems sharing data in real time at the edge. It encompasses a host of technologies and systems, including a high-speed network infrastructure, edge computing, control systems, IoT devices, smart sensors, AI-enabled robotics, and digital twins, all designed to work together seamlessly to improve productivity, lower energy consumption, improve worker safety, and meet sustainability goals.

Industry 4.0 to Industry 5.0 (Source: Adobe Stock)

In the November/December issue, we take a look at evolving Industry 4.0 trends and the shift to the next industrial evolution: 5.0, building on existing AI, automation, and IoT technologies with a collaboration between humans and cobots.

Technology innovations are central to future industrial automation, and the next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making, according to Jack Howley, senior technology analyst at IDTechEx. He believes the global industry will be defined by the integration of AI with robotics and IoT technologies, transforming manufacturing and logistics across industries.

As factories become smarter, more connected, and increasingly autonomous, MES, digital twins, and AI-enabled robotics are redefining smart manufacturing, according to Leonor Marques, architecture and advocacy director of Critical Manufacturing. These innovations can be better-interconnected, contributing to smarter factories and delivering meaningful, contextualized, and structured information, she said.

One of those key enabling technologies for Industry 4.0 is sensors. TDK SensEI defines Industry 4.0 by convergence, the merging of physical assets with digital intelligence. AI-enabled predictive maintenance systems will be critical for achieving the speed, autonomy, and adaptability that smart factories require, the company said.

Edge AI addresses the volume of industrial data by embedding trained ML models directly into sensors and devices, said Vincent Broyles, senior director of global sales engineering at TDK SensEI. Instead of sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated, reducing latency and bandwidth use, he said.

Robert Otręba, CEO of Grinn Global, agrees that industrial AI belongs at the edge. It delivers three key advantages: low latency and real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs, he said.

Otręba thinks edge AI will power the next wave of industrial intelligence. “Instead of sending vast streams of data off-site, intelligence is brought closer to where data is created, within or around the machine, gateway, or local controller itself.”

AI is no longer an optional enhancement, and this shift is driven by the need for real-time, contextually aware intelligence with systems that can analyze sensor data instantly, he said.

Lisa Trollo, MEMS marketing manager at STMicroelectronics, calls sensors the silent leaders driving the industrial market’s transformation, serving as the “eyes and ears” of smart factories by continuously sensing pressure, temperature, position, vibration, and more. “In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries,” she said.

Energy efficiency also plays a big role in industrial systems. Power management ICs (PMICs) are leading the way by enabling higher efficiency. In industrial and industrial IoT applications, PMICs address key power challenges, according to contributing writer Stefano Lovati. He said the use of AI techniques is being investigated to further improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.

Don’t miss the top 10 AC/DC power supplies introduced over the past year. These power supplies focus on improving efficiency and power density for industrial and medical applications. Motor drivers are also a critical component in industrial design applications as well as automotive systems. The latest motor drivers and development tools add advanced features to improve performance and reduce design complexity.

The post Transitioning from Industry 4.0 to 5.0: It’s not simple appeared first on EDN.

Expanding power delivery in systems with USB PD 3.1

Tue, 12/02/2025 - 18:00

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now, it can deliver up to 240 W over USB Type-C cables and connectors, carrying power, data, and video. This revision is known as Extended Power Range (EPR), defined in the USB Power Delivery Specification 3.1 (USB PD 3.1) introduced by the USB Implementers Forum. EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.
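The EPR power levels follow directly from multiplying each fixed voltage by the 5-A cable current limit:

```python
# EPR fixed-voltage levels at the 5 A USB Type-C cable limit.
I_MAX = 5  # amps (requires an EPR-rated, 5 A-capable cable)
powers_w = {v: v * I_MAX for v in (28, 36, 48)}
# powers_w maps each EPR voltage to its wattage: 140, 180, and 240 W
```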

USB PD 3.1 has an adjustable voltage supply mode, allowing for intermediate voltages between 9 V and the highest fixed voltage of the charger. This allows for greater flexibility by meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions including legacy at 15 W (5 V/3 A) and the standard power range mode of below 100 W (20 V/5 A).

The ability to negotiate power for each device is an important strength of this specification. For example, a device consumes only the power it needs, which varies depending on the application. This applies to peripherals, where a power management process allows each device to take only the power it requires.

The USB PD 3.1 specification found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A USB breakout board) with the GUI allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC running Microsoft Windows 7 through 11 with a USB 2.0 port. The GUI displays parameters, board status, and faults, and enables user configuration.

DCP board components

As a dual-port board, it provides two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)
Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip, or USB hub. The MCP22301 is an integrated PD device that combines the functionality of the SAMD20 microcontroller, a low-power, 32-bit Arm Cortex-M0+, with an added MCP22350 PD media access control and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding against faults while also protecting the integrity of the charging process and the data transfer (Figure 4).

Traditionally, a USB Type-C connector embeds the D+/D– data lines (USB 2.0), Rx/Tx lines for USB 3.x or USB4, configuration channel (CC) lines for charge mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery faults. It also offers battery short-to-GND (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator providing a 5-V voltage and an MCP1825 LDO linear regulator providing a 3.3-V auxiliary voltage.

Board operation

The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology for the purpose of supplying USB loads and charging them with their required voltage within a permitted range, regardless of the input voltage value. It is configured to independently regulate the amount of output voltage and current for each USB channel (their individual charging profile) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to USB-C Port 1 (on the right side of the PCB; the upper of the two connectors), USB communication starts and the MCP19061 DCP board displays the charging profiles under the Port 1 window.

If another USB PD load is connected to the USB-C Port 2, the Port 2 window gets populated the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The operation of the MCP19061 enables efficient power conversion with the capability to operate in buck (step-down), boost (step-up), and buck-boost topologies for various voltage levels that are lower, higher, or the same as the input voltage. It provides excellent precision and efficiency in power conversions for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
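For a feel of what the four-switch stage regulates, the ideal (lossless) steady-state duty-cycle relations for its buck and boost regions can be sketched as follows. This is textbook converter math under idealized assumptions, not the MCP19061's actual control law:

```python
def ideal_duty_cycle(v_in: float, v_out: float) -> tuple[str, float]:
    """Return the operating region and ideal PWM duty cycle for a
    four-switch buck-boost stage, ignoring losses and dead time."""
    if v_out <= v_in:
        return ("buck", v_out / v_in)        # buck region: Vout = D * Vin
    return ("boost", 1.0 - v_in / v_out)     # boost region: Vout = Vin / (1 - D)

# With the board's recommended 12 V input:
#   a 5 V USB load lands in the buck region (~42% duty)
#   a 20 V USB PD load lands in the boost region (40% duty)
```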

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate and additional component costs. A 1-MHz I2C serial bus enables the communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.

The post Expanding power delivery in systems with USB PD 3.1 appeared first on EDN.

Simple state variable active filter

Tue, 12/02/2025 - 15:00

The state variable active filter (SVAF) is an active filter you don’t see mentioned much today; however, it’s been a valuable asset for us old analog types in the past. This became especially true when cheap dual and quad op-amps became commonplace, as one can “roll their own” SVAF with just one IC package and still have an op-amp left over for other tasks!

Wow the engineering world with your unique design: Design Ideas Submission Guide

The unique features of this filter are simultaneously available low-pass (LP), high-pass (HP), and band-pass (BP) outputs, low component sensitivity, and an independently settable filter “Q”, all while implementing a quadratic 2nd-order filter function with 40-dB/decade slopes. The main drawback is requiring three op-amps and a few more resistors than other active filter types.

The SVAF employs dual series-connected and scaled op-amp integrators with dual independent feedback paths, which creates a highly flexible filter architecture with the mentioned “extra” components as the downside.

With the three available LP, HP, and BP outputs, this filter seemed like a nice candidate for investigating with the Bode function available in modern DSOs. This is especially so for the newer Siglent DSO implementations that can plot three independent channels, which allows a single Bode plot with three independent plot variables: LP, HP, and BP.

Creating an SVAF with a couple of LM358 duals (I didn’t have any DIP-type quad op-amps like the LM324 directly available, which reminds me, I need to order some soon!), a couple of 0.01-µF Mylar capacitors, and a few 10-kΩ and 1-kΩ resistors seemed like a fun project.

The SVAF natural corner frequency is simply ω₀ = 1/RC, which works out to f₀ = 1/(2πRC) ≈ 1.59 kHz with the mentioned component values, as shown in the notebook image in Figure 1. The filter’s “Q” was set by changing R4 and R5.
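A quick numerical check of that corner using the Figure 1 component values (1/RC gives radians per second; dividing by 2π converts to Hz):

```python
import math

R = 10e3      # ohms, the 10 kΩ integrator resistors
C = 0.01e-6   # farads, the 0.01 µF Mylar capacitors

omega0 = 1 / (R * C)         # natural frequency in rad/s
f0 = omega0 / (2 * math.pi)  # ~1.59 kHz, matching the Bode-plot crossing
```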

Figure 1 The author’s hand-drawn schematic with R1=R2, R3=R6, and C1=C2; resistor values are 1 kΩ and 10 kΩ, and capacitors are 0.01 µF.

This produced plots for a Q of 1, 2, and 4, shown in Figure 2, Figure 3, and Figure 4, respectively, along with supporting LTspice simulations.

The DSO Bode function was set up with DSO CH1 as the input, CH2 (red) as the HP, CH3 (cyan) as the LP, and CH4 (green) as the BP. The phase responses can also be seen as the dashed color lines that correspond to the colors of the HP, LP, and BP amplitude responses.

While it is possible to include all the DSO channel phase responses, this clutters up the display too much, so on the right-hand side of each image, the only phase response I show is the BP phase (magenta) in the DSO plots.

Figure 2 The left side shows the Q =1 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =1 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 3 The left side shows the Q =2 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =2 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 4 The left side shows the Q =4 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =4 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

The Bode frequency was swept with 33 pts/dec from 10 Hz to 100 kHz using a 1-Vpp input stimulus from a LAN-enabled arbitrary waveform generator (AWG). Note how the three responses all cross at ~1.59 kHz, and the BP phase, or the magenta line for the images on the right side, crosses zero degrees here.

If we extend the frequency of the Bode sweep out to 1 MHz, as shown in Figure 5, we are well beyond where you would consider utilizing an LM358. Even so, the simulation and DSO Bode measurements agree well at this range. Note how the simulation depicts the LP LM358 op-amp output resonance at ~100 kHz (cyan) and the BP phase (magenta) response.

Figure 5 The left side shows the Q =7 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =7 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

I’m honestly surprised the simulation agrees this well, considering the filter was crudely assembled on a plug-in protoboard and using the LM358 op-amps. This is likely due to the inverting configuration of the SVAF structure, as our experience has shown that inverting structures tend to behave better with regard to components, breadboard, and prototyping, with all the unknown parasitics at play!

Anyway, the SVAF is an interesting active filter capable of producing simultaneous LP, HP, and BP results. It is even capable of producing an active notch filter with an additional op-amp and a couple of resistors (requires 4 total, but with the LM324, a single package), which the interested reader can discover.

Michael A Wyatt is an IEEE Life Member and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat, and (semi-)retirement with Wyatt Labs. During his career, he accumulated 32 US patents and published a few EDN articles, including the Best Idea of the Year in 1989.

The post Simple state variable active filter appeared first on EDN.

A budget battery charger that also elevates blood pressure

Mon, 12/01/2025 - 16:55

At the tail end of my September 1 teardown of EBL’s first-generation 8-bay battery charger:

I tacked on a one-paragraph confession, with an accompanying photo that, as usual, included a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

I’ll wrap up with a teaser photo of another, smaller, but no less finicky battery charger that I’ve also taken apart, but, due to this piece as-is ending up longer-than-expected (what else is new?), I have decided to instead save for another dedicated teardown writeup for another day:

An uncertain lineage

That day is today. And by “finicky”, as was the case with its predecessor, I was referring to its penchant for “rejecting batteries that other chargers accepted complaint-free.”

Truth be told, I can’t recall how it came into my possession in the first place, nor how long I’ve owned it (aside from a nebulous “really long time”). Whatever semblance of an owner’s manual originally came with the charger is also long gone; tedious searches of both my file cabinet and online resources were fruitless. There’s not even a company name or product code to be found anywhere on the outer device labeling, just a vague “Smart Timer Charger” moniker:

The best I’ve been able to do, thanks to Google Image Search, is come across similar-looking device matches from a company called “Vidpro Power2000” (with the second word variously rendered as “Power 2000”) listed on Amazon under multiple different product names, such as the XP-333 when bundled with four 2900-mAh AA NiMH batteries:

and the XP-350 with four accompanying 1000-mAh AAA batteries, again NiMH-based:

My guess is that neither “Vidpro Power2000” nor whatever retail brand name was associated with this particular charger was actually the original manufacturer. And by the way, those three plastic “bumps” toward the top of the front panel, above the battery compartment and below the “Power2000” mark, aren’t functional, only cosmetic. The only two active LEDs are the rectangular ones at the front panel’s bottom edge, seen in action in an earlier photo.

Anyhoo, after some preparatory top, bottom, and side chassis views as supplements to the already shared front and back perspectives:

A few screws loose

Let’s work our way inside, beginning (and ending?) with the visible screw head in between the two foldable AC plug prongs:

Nope, that wasn’t enough:

Wonder what, if anything, is under the back panel sticker? A-ha:

There we are:

“Nice” unsightly blob of dried glue in the upper left corner there, eh?

No more screws, clips, or other retainers left; the PCB lifts away from the remainder of the plastic chassis straightaway:

As I noted earlier, those “three bumps” are completely cosmetic, with no functional purpose:

Dual-tone and contract manufacturer-grown

And speaking of cosmetics, the two-tone two-sided PCB is an unexpected aesthetic bonus:

As you may have already noticed from the earlier glimpse of the PCB’s backside, the trace regions are sizeable, befitting their hefty AC and DC power routing purposes and akin to those seen last time (where, come to think of it, the PCB was also two-tone for the two sides). But the PCB itself is elementary, seemingly with no embedded trace layers, therein explaining the between-regions routing jumpers that through-hole feed to the other side:

We’ve also finally found a product name: the “TL2000S” from “Samyatech”. My Google search results on the product code were fruitless; let me know in the comments if you had any better luck (I’m particularly interested in finding a PDF’d user manual). My research on the company was more fruitful, but only barely so. There are (or perhaps more accurately in this case, were) two companies that use(d) the “Samyatech” abbreviation, both named “Samya Technology” in full. One is based in Taiwan, the other is in South Korea. The former, I’m guessing, is our candidate:

Samya Technology is a manufacturer of charging solutions for consumer products. The company manufactures power banks, emergency chargers, mobile phone battery chargers, USB charging products, Solar based chargers, Secondary NiMH Batteries, Multifunction chargers, etc. The company has two production bases, one in Taiwan and the other in China.

The website associated with the main company URL, www.samyatech.com, is currently timing out for me. Internet Archive Wayback Machine snapshots suggest two more information bits:

  • The main URL used to redirect to samyatech.com.tw, which is also timing out, and
  • More generally, although I can’t read Chinese, so don’t take what I’m saying as “gospel”, it seems the company shut down at the start of the COVID-19 lockdown and didn’t reopen.

Up top is the AC-to-DC conversion circuitry, along with other passives:

And at the bottom are the aforementioned LEDs and their attached light pipes:

Back to the PCB backside, this time freed of its previous surrounding-chassis encumbrance:

That blotch of dried glue sure is ugly (not to mention, unlike its same-color counterparts on the other side that keep various components in place, of no obvious functional value), isn’t it?

Algorithmic (over)simplicity

The IC nexus of the design was a surprise (at least to me, perhaps less so to others who are already more immersed in the details of such designs):

At left is the AZ324M, a quad low-power op amp device from (judging by the company logo mark) Advanced Analog Circuits, part of BCD Semiconductor Manufacturing Limited, and subsequently acquired by Diodes Incorporated.

And at right? When I first saw the distinctive STMicroelectronics mark on one end of the package topside, I assumed I was dealing with a low-end firmware-fueled microcontroller. But I was wrong. It’s the HCF4060, a 14-stage ripple carry binary counter/divider and oscillator. As the Build Electronics Circuits website notes, “It can be used to produce selectable time delays or to create signals of different frequencies.”

All of this ties into how basic battery chargers like this one work in the first place (along with why they tend to be so fickle), as best I’ve been able to gather from my admittedly limited knowledge and research. Perhaps obviously, it’s important upfront for such a charger to be able to discern whether the batteries installed in it are actually the intended rechargeable NiMH formulation.

So, it first subjects the cells to a short-duration, relatively high current pulse (referencing the HCF4060’s time delay function), then reads back their voltages. If it discerns that a cell has a higher-than-expected resistance, it assumes that this battery’s not rechargeable or is instead based on an alternative chemistry such as alkaline or NiCd…and terminates the charge cycle.
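In code, that go/no-go decision might look like the sketch below. The pulse current and resistance threshold are invented for illustration; this teardown doesn't reveal the charger's actual limits:

```python
def cell_accepted(v_rest: float, v_during_pulse: float,
                  i_pulse_a: float = 1.0, r_max_ohms: float = 0.5) -> bool:
    """Pulse the cell, measure the voltage rise, estimate internal
    resistance as delta-V over pulse current, and reject cells whose
    resistance is too high. All numeric limits here are hypothetical."""
    r_internal = (v_during_pulse - v_rest) / i_pulse_a
    return r_internal < r_max_ohms

# A healthy NiMH cell rises only slightly under a charge pulse:
#   cell_accepted(1.30, 1.40) -> accepted (0.1 ohm estimated)
# A high-resistance (aged, over-discharged, or non-NiMH) cell rises far more:
#   cell_accepted(1.50, 2.20) -> rejected (0.7 ohm estimated)
```

This also hints at why such chargers reject aging but still-usable cells: the fixed threshold can't distinguish a worn NiMH cell from the wrong chemistry.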

That said, rechargeable NiMH cells’ internal resistance also tends to increase with use and incremental recharge cycles. And batteries that are in an over-discharge state, whether from sitting around unused (a particular problem with early cells that weren’t based on low self-discharge architectures) or from being excessively drained by whatever device they were installed in, tend to be intolerant of elementary recharging algorithms, too.

That said, I’ve conversely sometimes been able to convince this charger to accept a cell that it initially rejected, even one that was already “full” (if I’ve lost premises power and the charger acts flaky when the electricity subsequently starts flowing again, for example), by popping it into an illuminated flashlight for a few minutes to drain off some of the stored electrons.

So…🤷‍♂️ And again, as I mentioned back in September, a more “intelligent” (albeit also more expensive) charger such as my La Crosse Technology BC-9009 AlphaPower is commonly much more copacetic with (including being capable of resurrecting) cells that simplistic chargers comparatively reject:

Some side-view shots in closing, including closeups:

And with that, I’ll turn it over to you for your thoughts in the comments. A reminder that I’m only nominally cognizant of analog and power topics (and truth be told, I’m probably being overly generous-of-self in even claiming that), dear readers—I’m much more of a “digital guy”—so tact in your responses is as-always appreciated! I’m also curious to poll your opinions as to whether I should bother putting the charger back together and donating it to another, as I normally do with devices I non-destructively tear down, or if it’d be better in this case to save potential recipients the hassle and instead destine it for the landfill. Let me know!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

 Related Content

The post A budget battery charger that also elevates blood pressure appeared first on EDN.

Delta-sigma demystified: Basics behind high-precision conversion

Mon, 12/01/2025 - 07:57

Delta-sigma (ΔΣ) converters may sound complex, but at their core, they are all about precision. In this post, we will peel back the layers and uncover the fundamentals behind their elegant design.

At the heart of many precision measurement systems lies the delta-sigma converter, an architecture engineered for accuracy. By trading speed for resolution, it excels in low-frequency applications where precision matters most, including instrumentation, audio, and industrial sensing. And it’s worth noting that delta-sigma and sigma-delta are interchangeable terms for the same signal conversion architecture.

Sigma-delta classic: The enduring AD7701

Let us begin with a nod to the venerable AD7701, a 16-bit sigma-delta ADC that sets a high bar for precision conversion. At its core, the device employs a continuous-time analog modulator whose average output duty cycle tracks the input signal. This modulated stream feeds a six-pole Gaussian digital filter, delivering 16-bit updates to the output register at rates up to 4 kHz.

Timing parameters—including sampling rate, filter corner, and output word rate—are governed by a master clock, sourced either externally or via an on-chip crystal oscillator. The converter’s linearity is inherently robust, and its self-calibration engine ensures endpoint accuracy by adjusting zero and full-scale references on demand. This calibration can also be extended to compensate for system-level offset and gain errors.

Data access is handled through a flexible serial interface supporting asynchronous UART-compatible mode and two synchronous modes for seamless integration with shift registers or standard microcontroller serial ports.

Introduced in the early 1990s, Analog Devices’ AD7701 helped pioneer low-power, high-resolution sigma-delta conversion for instrumentation and industrial sensing. While newer ADCs have since expanded on its capabilities, the AD7701 remains in production and continues to serve in legacy systems and precision applications where its simplicity and reliability still resonate.

The following figure illustrates the functional block diagram of this enduring 16-bit sigma-delta ADC.

Figure 1 Functional block diagram of AD7701 showcases its key architectural elements. Source: Analog Devices Inc.

Delta-sigma ADCs and DACs

Delta-sigma converters—both analog-to-digital converters (ADCs) and digital-to-analog converters (DACs)—leverage oversampling and noise shaping to achieve high-resolution signal conversion with relatively simple analog circuitry.

In a delta-sigma ADC, the input signal is sampled at a much higher rate than the Nyquist frequency and passed through a modulator that emphasizes quantization noise at higher frequencies. A digital filter then removes this noise and decimates the signal to the desired resolution.

Conversely, delta-sigma DACs take high-resolution digital data, shape the noise spectrum, and output a high-rate bitstream that is smoothed by an analog low-pass filter. This architecture excels in audio and precision measurement applications due to its ability to deliver robust linearity and dynamic range with minimal analog complexity.

Note that from here onward, the focus is exclusively on delta-sigma ADCs. While DACs share similar architectural elements, their operational context and signal flow differ significantly. To maintain clarity and relevance, DACs are omitted from this discussion—perhaps a topic for a future segment.

Inside the delta-sigma ADC

A delta-sigma ADC typically consists of two core elements: a delta-sigma modulator, which generates a high-speed bitstream, and a low-pass filter that extracts the usable signal. The modulator outputs a one-bit serial stream at a rate far exceeding the converter’s data rate.

To recover the average signal level encoded in this stream, a low-pass filter is essential; it suppresses high-frequency quantization noise and reveals the underlying low-frequency content. At the heart of every delta-sigma ADC lies the modulator itself; its output bitstream represents the input signal’s amplitude through its average value.

A block diagram of a simple analog first-order delta-sigma modulator is shown below.

Figure 2 The block diagram of a simple analog first-order delta-sigma modulator illustrates its core components. Source: Author

This modulator operates through a negative feedback loop composed of an integrator, a comparator, and a 1-bit DAC. The integrator accumulates the difference between the input signal and the DAC’s output. The comparator then evaluates this integrated signal against a reference voltage, producing a 1-bit data stream. This stream is fed back through the DAC, closing the loop and enabling continuous refinement of the output.

Following the delta-sigma modulator, the 1-bit data stream undergoes decimation via a digital filter (decimation filter). This process involves data averaging and sample rate reduction, yielding a multi-bit digital output. Decimation concentrates the signal’s relevant information into a narrower bandwidth, enhancing resolution while suppressing quantization noise within the band of interest.
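To make the modulator loop and the decimation step concrete, here is a minimal Python sketch of a first-order modulator feeding a simple boxcar-averaging decimator. It is an illustrative model only; the function names are mine, and real converters use sinc or multi-stage decimation filters rather than a plain block average:

```python
def modulate(x, n):
    """First-order delta-sigma modulator: integrator, comparator, 1-bit DAC.
    Returns an n-sample +/-1 bitstream whose average tracks the DC input x
    (valid for |x| < 1)."""
    integ = 0.0
    fb = 0.0   # 1-bit DAC output fed back to the summing node
    bits = []
    for _ in range(n):
        integ += x - fb                    # delta (difference), then sigma (integrate)
        fb = 1.0 if integ >= 0 else -1.0   # comparator acts as the 1-bit quantizer
        bits.append(fb)
    return bits

def decimate(bits, osr):
    """Crude decimation filter: average each block of `osr` samples."""
    return [sum(bits[i:i + osr]) / osr
            for i in range(0, len(bits) - osr + 1, osr)]

stream = modulate(0.3, 4096)
print(round(sum(stream) / len(stream), 3))   # ≈ 0.3
```

Raising the oversampling ratio tightens each decimated sample around the true input, which is exactly the speed-for-resolution trade described above.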

It’s no secret to most engineers that second-order delta-sigma ADCs push noise shaping further by using two integrators in the modulator loop. This deeper shaping shifts quantization noise farther into high frequencies, improving in-band resolution at a given oversampling ratio.

While the design adds complexity, it enhances signal fidelity and eases post-filtering demands. Second-order modulators are common in precision applications like audio and instrumentation, though stability and loop tuning become more critical as order increases.
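A second-order version adds only one more integrator to the loop. The sketch below is a simplified MOD2-style topology with unity coefficients, chosen for readability rather than fidelity to any shipping part; the DC average of the bitstream still equals the input, but quantization noise is pushed harder toward high frequencies:

```python
def modulate_2nd(x, n):
    """Second-order delta-sigma modulator sketch: two cascaded integrators,
    one comparator, with the 1-bit feedback applied to both summing nodes."""
    i1 = i2 = 0.0
    fb = 0.0
    bits = []
    for _ in range(n):
        i1 += x - fb                      # first integrator
        i2 += i1 - fb                     # second integrator
        fb = 1.0 if i2 >= 0 else -1.0     # 1-bit quantizer closes both loops
        bits.append(fb)
    return bits

stream = modulate_2nd(0.3, 8192)
print(round(sum(stream) / len(stream), 2))   # ≈ 0.3
```

Note the stability caveat from the text: with more integrators, loop coefficients matter, and practical designs scale the integrator gains to guarantee stable operation near full scale.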

Well, at its core, the delta-sigma ADC represents a seamless integration of analog and digital processing. Its ability to achieve high-resolution conversion stems from the coordinated use of oversampling, noise shaping, and decimation—striking a delicate balance between speed and precision.

Delta-sigma ADCs made approachable

Although delta-sigma conversion is a complex process, several prewired ADC modules—built around popular, low-cost ICs like the HX711, ADS1232/34, and CS1237/38—make experimentation remarkably accessible. These chips offer high-resolution conversion with minimal external components, ideal for precision sensing and weighing applications.

Figure 3 A few widely used modules simplify delta-sigma ADC practice, even for those just starting out. Source: Author

Delta-sigma vs. flash ADCs vs. SAR

Most of you already know this, but flash ADCs are the speed demons of the converter world—using parallel comparators to achieve ultra-fast conversion, typically at the expense of resolution.

Flash ADCs and delta-sigma architectures serve distinct roles, with conversion rates differing by up to two orders of magnitude. Delta-sigma ADCs are ideal for low-bandwidth applications—typically below 1 MHz—where high resolution (12 to 24 bits) is required. Their oversampling approach trades speed for precision, followed by filtering to suppress quantization noise. This also simplifies anti-aliasing requirements.

While delta-sigma ADCs excel in resolution, they are less efficient for multichannel systems. The architecture may use sampled-data modulators or continuous-time filters. The latter shows promise for higher conversion rates—potentially reaching hundreds of Msps—but with lower resolution (6 to 8 bits). Still in early R&D, continuous-time delta-sigma designs may challenge flash ADCs in mid-speed applications.

Interestingly, flash ADCs can also serve as internal building blocks within delta-sigma circuits to boost conversion rates.

Also, successive approximation register (SAR) ADCs sit comfortably between flash and delta-sigma designs, offering a practical blend of speed, resolution, and efficiency. Unlike flash ADCs, which prioritize raw speed using parallel comparators, SAR converters use a binary search approach that is slower but far more power-efficient.

Compared to delta-sigma ADCs, SAR designs avoid oversampling and complex filtering, making them ideal for moderate-resolution, real-time applications. Each architecture has its sweet spot: flash for ultra-fast, low-resolution tasks; delta-sigma for high-precision, low-bandwidth needs; and SAR for balanced performance across a wide range of embedded systems.

Delta-sigma converters elegantly bridge the analog and digital worlds, offering high-resolution performance through clever noise shaping and oversampling. Whether you are designing precision instrumentation or exploring audio fidelity, understanding their principles unlocks a deeper appreciation for modern signal processing.

Curious how these concepts translate into real-world design choices? Join the conversation—share your favorite delta-sigma use case or challenge in the comments. Let us map the noise floor together and surface the insights that matter.

T.K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Delta-sigma demystified: Basics behind high-precision conversion appeared first on EDN.

Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback

Fri, 11/28/2025 - 15:00

Efficient battery management becomes increasingly important as demand for portable power continues to rise, especially since balanced cells help ensure safety, high performance, and a longer battery life. When cells are mismatched, the battery pack’s total capacity decreases, leading to the overcharging of some cells and undercharging of others—conditions that accelerate degradation and reduce overall efficiency. The challenge is how to maintain an equal voltage and charge among the individual cells.

Typically, it’s possible to achieve cell balancing through either passive or active methods. Passive balancing, the more common approach because of its simplicity and low cost, equalizes cell voltages by dissipating excess energy from higher-voltage cells through resistor or FET networks. While effective, this process wastes energy as heat.

In contrast, active cell balancing redistributes excess energy from higher-voltage cells to lower-voltage ones, improving efficiency and extending battery life. Implementing active cell balancing involves an isolated, bidirectional power converter capable of both charging and discharging individual cells.
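A rough numeric comparison shows why the active approach pays off. The figures below are hypothetical (a 200-mAh imbalance on a 3.7-V cell, and an 80% one-way converter efficiency in line with the design discussed later); the point is the ratio, not the absolute numbers:

```python
# Hypothetical imbalance: how much energy each balancing method wastes.
cell_v = 3.7                                # nominal cell voltage, V
excess_mah = 200                            # charge imbalance to correct, mAh
excess_wh = cell_v * excess_mah / 1000.0    # energy tied up in the imbalance

# Passive: the entire excess is burned in a bleed resistor.
passive_loss_wh = excess_wh

# Active: the excess is moved to a low cell; only conversion loss is wasted.
eta = 0.80                                  # one-way converter efficiency
active_loss_wh = excess_wh * (1 - eta)

print(f"passive loss: {passive_loss_wh:.2f} Wh")   # passive loss: 0.74 Wh
print(f"active  loss: {active_loss_wh:.2f} Wh")    # active  loss: 0.15 Wh
```

A bus-to-cell round trip through two conversions would lose roughly 1 − η² (about 36% here), still well short of the 100% dissipated passively.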

This Power Tip presents an active cell-balancing design based on a bidirectional flyback topology and outlines the control circuitry required to achieve a reliable, high-performance solution.

System architecture

In a modular battery system, each module contains multiple cells and a corresponding bidirectional converter (the left side of Figure 1). This arrangement enables any cell within Module 1 to charge or discharge any cell in another module, and vice versa. Each cell connects to an array of switches and control circuits that regulate individual charge and discharge cycles.

Figure 1 A modular battery system block diagram with multiple cells and a bidirectional converter per module, where any cell within Module 1 can charge/discharge any cell in another module. Each cell connects to an array of switches and control circuits that regulate individual charge/discharge cycles. Source: Texas Instruments

Bidirectional flyback reference design

The block diagram in Figure 2 illustrates the design of a bidirectional flyback converter for active cell balancing. One side of the converter connects to the bus voltage (18 V to 36 V), which could be the top of the battery cell stack, while the other side connects to a single battery cell (3.0 V to 4.2 V). Both the primary and secondary sides employ flyback controllers, allowing the circuit to operate bidirectionally, charging or discharging the cell as required.

Figure 2 A bidirectional flyback for active cell balancing reference design. Source: Texas Instruments

A single control signal defines the power-flow direction, ensuring that both flyback integrated circuits (ICs) never operate simultaneously. The design delivers up to 5 A of charge or discharge current, protecting the cell while maintaining efficiency above 80% in both directions (Figure 3).

Figure 3 Efficiency data for charging (left) and discharging (right). Source: Texas Instruments

Charge mode (power from Vbus to Vcell)

In charge mode, the control signal enables the charge controller, allowing Q1 to act as the primary FET. D1 is unused. On the secondary side, the discharge controller is disabled, and Q2 is unused. D2 serves as the output diode providing power to the cell. The secondary side implements constant-current and constant-voltage loops to charge the cell at 5 A until reaching the programmed voltage (3.0 V to 4.2 V).

Discharge mode (power from Vcell to Vbus)

Just the opposite happens in discharge mode; the control signal enables the discharge controller and disables the charge controller. Q2 is now the primary FET, and D2 is inactive. D1 serves as the output diode while Q1 is unused. The cell side enforces an input current limit to prevent discharge of the cell above 5 A. The Vbus side features a constant-voltage loop to ensure that the Vbus remains within its setpoint.

Auxiliary power and bias circuits

The design also integrates two auxiliary DC/DC converters to maintain control functionality under all operating conditions. On the bus side, a buck regulator generates 10 V to bias the flyback IC and the discrete control logic that determines the charge and discharge direction. On the cell side, a boost regulator steps the cell voltage up to 10 V to power its controller and ensure that the control circuit is operational even at low cell voltages.

Multimodule operation

Figure 4 illustrates how multiple battery modules interconnect through the reference design’s units. The architecture allows an overcharged cell from a higher-voltage module, shown at the top of the figure, to transfer energy to an undercharged cell in any other module. The modules do not need to be connected adjacently. Energy can flow between any combination of cells across the pack.

Figure 4 Interconnection of battery modules using TI’s reference design for bidirectional balancing. Source: Texas Instruments

Future improvements

For higher-power systems (20 W to 100 W), adopting synchronous rectification on the secondary and an active-clamp circuit on the primary will reduce losses and improve efficiency, thus enhancing performance.

For systems exceeding 100 W, consider alternative topologies such as forward or inductor-inductor-capacitor (LLC) converters. Regardless of topology, you must ensure stability across the wide-input and cell-voltage ranges characteristic of large battery systems.

Modern multicell battery systems

The bidirectional flyback-based active cell balancing approach offers a compact, efficient, and scalable solution for modern multicell battery systems. By recycling energy between cells rather than dissipating this energy as heat, the design improves both energy efficiency and battery longevity. Through careful control-loop optimization and modular scalability, this architecture enables high-performance balancing in portable, automotive, and renewable energy applications.

Sarmad Abedin is currently a systems engineer on the power design services (PDS) team at Texas Instruments, where he works on both automotive and industrial power supplies. He has been designing power supplies for the past 14 years and has experience in both isolated and non-isolated topologies. He graduated from Rochester Institute of Technology in 2011 with his bachelor’s degree.

 

Related Content

The post Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback appeared first on EDN.

Does (wearing) an Oura (smart ring) a day keep the doctor away?

Thu, 11/27/2025 - 15:00

Before diving into my on-finger impressions of Oura’s Gen3 smart ring, as I’d promised I’d do back in early September, I thought I’d start off by revisiting some of the business-related topics I mentioned in that initial post in the series. First off, I mentioned at the end of that post that Oura had just obtained a favorable final judgment from the United States International Trade Commission (ITC) that both China-based RingConn and India-based Ultrahuman had infringed on its patent portfolio. In the absence of licensing agreements or other compromises, both Oura competitors would be banned from further product shipments to and sales of their products in the US after a final 60-day review period ended on October 21, although retailer partners could continue to sell their existing inventory until it was depleted.

Product evolutions and competition developments

I’m writing these words 10 days later, on Halloween, and there’ve been some interesting developments. I’d intentionally waited until after October 21 in order to see how both RingConn and Ultrahuman would react, as well as to assess whether patent challenges would pan out. As for Ultrahuman, a blog post published shortly before the deadline (and updated the day after) made it clear that the company wasn’t planning on caving:

  • A new ring design is already in development and will launch in the U.S. as soon as possible.
  • We’re actively seeking clarity on U.S. manufacturing from our Texas facility, which could enable a “Made in USA” Ring AIR in the near future.
  • We also eagerly await the U.S. Patent and Trademark Office’s review of the validity of Oura’s ‘178 patent, which it acquired in 2023, and is central to the ITC ruling. A decision is expected in December.

To wit, per a screenshot I captured the day after the deadline, Wednesday, October 22, sales through the manufacturer’s website to US customers had ceased.

And surprisingly, inventory wasn’t listed as available for sale on Amazon’s website, either.

RingConn took a different tack. When I checked, again on October 22, the company was still selling its products to US customers both from its own website and Amazon’s:

This situation baffled me until I hit up the company subreddit and saw the following:

Dear RingConn Family,

We’d like to share some positive news with you: RingConn, a leading smart ring innovator, has reached a settlement with ŌURA regarding a patent dispute. Under the terms of the agreement, RingConn’s software and hardware products will remain available in the U.S. market, without affecting its market presence.

See the company’s Reddit post for the rest of the message. And here’s the official press release.

Secondly, as I’d noted in my initial coverage:

One final factor to consider, which I continue to find both surprising and baffling, is the fact that none of the three manufacturers I’ve mentioned here seems to support having more than one ring actively associated with an account (and therefore cloud-logging and archiving data) at the same time. To press a second ring into service, you need to manually delete the first one from your account. The lack of multi-ring support is a frequent cause of complaints on Reddit and elsewhere, from folks who want to accessorize with multiple smart rings just as they do with normal rings, varying color and style to match outfits and occasions. And the fiscal benefit to the manufacturers of such support is intuitively obvious, yes?

It turns out I just needed to wait a few weeks. On October 1, Oura announced that multiple Oura Ring 4 styles would soon be supported under a single account. Quoting the press release, “Pairing and switching among multiple Oura Ring 4 devices on a single account will be available on iOS starting Oct. 1, 2025, and on Android starting Oct. 20, 2025.” That said, a crescendo of complaints on Reddit and elsewhere suggests an implementation delay; I’m 11 days past October 20 at this point and haven’t seen the promised Android app update yet, and at least some iOS users have waited a month at this point. Oura PR told me that I should be up and running by November 5; I’ll follow up in the comments as to whether this actually happened.

Charging options

That same day, by the way, Oura also announced its own branded battery-inclusive charger case, an omission that I’d earlier noted versus competitor RingConn:

 

That said, again quoting from the October 1 press release (with bolded emphasis mine), the “Oura Ring 4 Charging Case is $99 USD and will be available to order in the coming months.” For what it’s worth, the $28.99 (as I write these words) Doohoeek charging case for my Gen3 Horizon:

is working like a charm:

Behind it, by the way, is the upgraded Doohoeek $33.29 charging case for my Oura Ring 4, whose development story (which I got straight from the manufacturer) was not only fascinating in its own right but also gave me insider insight into how Oura has evolved its smart ring charging scheme over time. More about that soon, likely next month.

 

And here’s my Gen3 on the factory-supplied, USB-C-fed standard charger, again with its Ring 4 sibling behind it:

General impressions

As for the ring itself, here’s what it looks like on my left index finger, with my wedding band two digits over from it on the same hand:

And here again are all three rings I’ve covered in in-depth writeups to date: the Oura Gen3 Horizon at left, Ultrahuman Ring AIR in the middle and RingConn Gen 2 at right:

Like RingConn’s product:

both the Heritage:

and my Horizon variant of the Oura Gen3:

 

include physical prompting to achieve and maintain proper placement: sensor-inclusive “bump” guides on both sides of the inside back, which the Oura Ring 4 notably dispenses with:

 

I’ve already shown you what the red glow of the Gen3 intermediary SpO2 (oxygen saturation) sensor looks like when in operation, specifically when I’m able to snap a photo of it soon enough after waking to catch it still in action before it discerns that I’ve stirred and turns off:

And here’s what the two green-color pulse rate sensors, one on either side of their SpO2 sibling:

look like in action:

Generally speaking, the Oura Gen3 feels a lot like the Ultrahuman Ring AIR; both drop 15-20% of battery charge every 24 hours, leading to a sub-week operating life between recharges. That said, I will give Oura well-deserved kudos for its software user interface, which is notably more informative, more intuitive, and broadly easier to use than its RingConn and Ultrahuman counterparts. Then again, Oura’s been around the longest and has the largest user base, so it’s had more time (and more feedback) to fine-tune things. And cynically speaking, given Oura’s $5.99/month or $69.99/year subscription fee, versus its competitors’ free apps, it’d better be better!

Software insights

In closing, and in fairness, regarding that subscription, it’s not strictly required to use an Oura smart ring. That said, the information supplied without it:

is a pale subset of the norm:

What I’m showing in the overview screen images is a fraction of the total information captured and reported, but it’s all well-organized and intuitive. And as you can see on that last one, the Oura smart ring is adept at sensing even brief catnaps 😀

With that, and as I’ve already alluded, I now have an Oura Ring 4 on-finger—two of them, in fact, one of which I’ll eventually be tearing down—which I aspire to write up shortly, sharing my impressions both versus its Gen3 predecessor and its competitors. Until then, I as-always welcome your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Does (wearing) an Oura (smart ring) a day keep the doctor away? appeared first on EDN.

Inside the battery: A quick look at internal resistance

Thu, 11/27/2025 - 11:14

Ever wondered why a battery that reads full voltage still struggles to power your device? The answer often lies in its internal resistance. This hidden factor affects how efficiently a battery delivers current, especially under load.

In this post, we will briefly examine the basics of internal resistance—and why it’s a critical factor in real-world performance, from handheld flashlights to high-power EV drivetrains.

What’s internal resistance and why it matters

Every battery has some resistance to the flow of current within itself—this is called internal resistance. It’s not a design flaw, but a natural consequence of the materials and construction. The electrolyte, electrodes, and even the connectors all contribute to it.

Internal resistance causes voltage to drop when the battery delivers current. The higher the current draw, the more noticeable the drop. That is why a battery might read 1.5 V at rest but dip below 1.2 V under load—and why devices sometimes shut off even when the battery seems “full.”

Here is what affects it:

  • Battery type: Alkaline, lithium-ion, and NiMH cells all have different internal resistances.
  • Age and usage: Resistance increases as the battery wears out.
  • Temperature: Cold conditions raise resistance, reducing performance.
  • State of charge: A nearly empty battery often shows higher resistance.

Building on that, internal resistance gradually increases as batteries age. This rise is driven by chemical wear, electrode degradation, and the buildup of reaction byproducts. As resistance climbs, the battery becomes less efficient, delivers less current, and shows more voltage drop under load—even when the resting voltage still looks healthy.

Digging a little deeper—focusing on functional behavior under load—internal resistance is not just a single value; it’s often split into two components. Ohmic resistance comes from the physical parts of the battery, like the electrodes and electrolyte, and tends to stay relatively stable.

Polarization resistance, on the other hand, reflects how the battery’s chemical reactions respond to current flow. It’s more dynamic, shifting with temperature, charge level, and discharge rate. Together, these resistances shape how a battery performs under load, which is why two batteries with identical voltage readings might behave very differently in real-world use.
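A common way to capture this two-component behavior is a first-order equivalent circuit: a series ohmic resistance plus a polarization branch modeled as a resistor and capacitor in parallel. The Python sketch below uses illustrative values (R0, Rp, and Cp are invented for this example) to show the instantaneous ohmic drop followed by the slower polarization droop after a load step:

```python
import math

# Hypothetical first-order cell model: series R0 plus Rp parallel Cp.
R0 = 0.050    # ohmic resistance, ohms
Rp = 0.030    # polarization resistance, ohms
Cp = 200.0    # polarization capacitance, farads
OCV = 3.70    # open-circuit voltage, V
I_LOAD = 2.0  # load current, A

def terminal_voltage(t):
    """Terminal voltage t seconds after a constant-current load is applied."""
    v_ohmic = I_LOAD * R0                                   # appears instantly
    v_polar = I_LOAD * Rp * (1 - math.exp(-t / (Rp * Cp)))  # tau = Rp*Cp = 6 s
    return OCV - v_ohmic - v_polar

print(f"{terminal_voltage(0.0):.2f} V")    # 3.60 V: ohmic drop only
print(f"{terminal_voltage(60.0):.2f} V")   # 3.54 V: polarization fully developed
```

This is why two cells with identical instantaneous readings can still sag differently under a sustained load: the polarization term takes seconds to develop.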

Internal resistance in practice

Internal resistance is a key factor in determining how much current a battery can deliver. When internal resistance is low, the battery can supply a large current. But if the resistance is high, the current it can provide drops significantly. Also, the higher the internal resistance, the greater the energy loss—this loss manifests as heat. That heat not only wastes energy but also accelerates the battery’s degradation over time.
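As a quick worked example (illustrative numbers, not tied to any particular cell), internal resistance both caps the deliverable current and sets the heat dissipated inside the cell:

```python
# A 3.7-V cell with 50-milliohm internal resistance driving a 0.5-ohm load.
v_cell, r_int, r_load = 3.7, 0.05, 0.5

i = v_cell / (r_int + r_load)   # Ohm's law over the whole loop
p_heat = i * i * r_int          # I^2*R loss dissipated inside the cell

print(f"I = {i:.2f} A, internal heat = {p_heat:.2f} W")  # I = 6.73 A, internal heat = 2.26 W
```

Doubling the internal resistance in this model visibly cuts the deliverable current while shifting a larger share of the loop's dissipation into the cell itself.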

The figure below illustrates a simplified electrical model of a battery. Ideally, internal resistance would be zero, enabling maximum current flow without energy loss. In practice, however, internal resistance is always present and affects performance.

Figure 1 Illustration of a battery’s internal configuration highlights the presence of internal resistance. Source: Author

Here is a quick side note regarding resistance breakdown. Focusing on material-level transport mechanisms, battery internal resistance comprises two primary contributors: electronic resistance, driven by electron flow through conductive paths, and ionic resistance, governed by ion transport within the electrolyte.

The total effective resistance reflects their combined influence, along with interfacial and contact resistances. Understanding this layered structure is key to diagnosing performance losses and carrying out design improvements.

As observed nowadays, elevated internal resistance in EV batteries hampers performance by increasing heat generation during acceleration and fast charging, ultimately reducing driving range and accelerating cell degradation.

Fortunately, several techniques are available for measuring a battery’s internal resistance, each suited to different use cases and levels of diagnostic depth. Common methods include direct current internal resistance (DCIR), alternating current internal resistance (ACIR), and electrochemical impedance spectroscopy (EIS).

And there is a two-tier variation of the standard DCIR technique, which applies two sequential discharge loads with distinct current levels and durations. The battery is first discharged at a low current for several seconds, followed by a higher current for a shorter interval. Resistance values are calculated using Ohm’s law, based on the voltage drops observed during each load phase.

Analyzing the voltage response under these conditions can reveal more nuanced resistive behavior, particularly under dynamic loads. However, the results remain strictly ohmic and do not provide direct information about the battery’s state of charge (SoC) or capacity.
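The arithmetic behind the two-tier method reduces to Ohm's law on the delta between the two load points. A minimal sketch, with readings invented for illustration:

```python
def dcir_two_tier(v_at_low, i_low, v_at_high, i_high):
    """DC internal resistance from two sequential discharge loads:
    the extra voltage sag divided by the extra current."""
    return (v_at_low - v_at_high) / (i_high - i_low)

# Hypothetical readings: 3.95 V settled at 0.2 A, then 3.87 V at 1.0 A.
r = dcir_two_tier(3.95, 0.2, 3.87, 1.0)
print(f"{r * 1000:.0f} mΩ")   # 100 mΩ
```

Note that the open-circuit voltage cancels out of the subtraction, so a fully rested cell isn't required for this variant.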

Many branded battery testers, such as some product series from Hioki, apply a constant AC current at a measurement frequency of 1 kHz and determine the battery’s internal resistance by measuring the resulting voltage with an AC voltmeter (AC four-terminal method).

Figure 2 The Hioki BT3554-50 employs AC-IR method to achieve high-precision internal resistance measurement. Source: Hioki

The 1,000-hertz (1 kHz) ohm test is a widely used method for measuring internal resistance. In this approach, a small 1-kHz AC signal is applied to the battery, and resistance is calculated using Ohm’s law based on the resulting voltage-to-current ratio.

It’s important to note that AC and DC methods often yield different resistance values due to the battery’s reactive components. Both readings are valid—AC impedance primarily reflects the instantaneous ohmic resistance, while DC measurements capture additional effects such as charge transfer and diffusion.
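The difference between AC and DC readings falls out of a simple equivalent-circuit calculation. Using a first-order model (series ohmic resistance plus a polarization RC branch, with made-up values), the 1-kHz measurement effectively shorts the polarization branch through its capacitance, while a near-DC measurement sees both terms:

```python
import math

# Illustrative model values: series R0 plus polarization branch Rp || Cp.
R0, Rp, Cp = 0.050, 0.030, 1.0   # ohms, ohms, farads

def impedance(f_hz):
    """Complex impedance of R0 in series with (Rp parallel Cp)."""
    w = 2 * math.pi * f_hz
    z_rc = Rp / (1 + 1j * w * Rp * Cp)
    return R0 + z_rc

print(f"{abs(impedance(1000.0)):.3f} ohm")  # 0.050 ohm: 1 kHz sees mostly R0
print(f"{abs(impedance(1e-3)):.3f} ohm")    # 0.080 ohm: near DC sees R0 + Rp
```

Both numbers are "correct"; they are simply probing different parts of the same equivalent circuit.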

Notably, the DC load method remains one of the most enduring—and nostalgically favored—approaches for measuring a battery’s internal resistance. Despite the rise of impedance spectroscopy and other advanced techniques, its simplicity and hands-on familiarity continue to resonate with seasoned engineers.

It involves briefly applying a load—typically for a second or longer—while measuring the voltage drop between the open-circuit voltage and the loaded voltage. The internal resistance is then calculated using Ohm’s law by dividing the voltage drop by the applied current.

A quick calculation: To estimate a battery’s internal resistance, you can use a simple voltage-drop method when the open-circuit voltage, loaded voltage, and current draw are known. For example, if a battery reads 9.6 V with no load and drops to 9.4 V under a 100-mA load:

Internal resistance = (9.6 V − 9.4 V) / 0.1 A = 2 Ω

This method is especially useful in field diagnostics, where direct resistance measurements may not be practical, but voltage readings are easily obtained.
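The calculation above is easy to wrap in a helper for field use. A minimal sketch (the function name is mine):

```python
def internal_resistance(v_open, v_loaded, i_load):
    """Voltage-drop estimate of internal resistance via Ohm's law."""
    return (v_open - v_loaded) / i_load

# The worked example from the text: 9.6 V open-circuit, 9.4 V at 100 mA.
print(f"{internal_resistance(9.6, 9.4, 0.1):.1f} ohms")   # 2.0 ohms
```

Keep the caveat from the text in mind: the result depends on the load level and duration, so treat it as a diagnostic indicator rather than a precise constant.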

In simplified terms, internal resistance can be estimated using several proven techniques. However, the results are influenced by the test method, measurement parameters, and environmental conditions. Therefore, internal resistance should be viewed as a general diagnostic indicator—not a precise predictor of voltage drop in any specific application.

Bonus blueprint: A closing hardware pointer

For internal resistance testing, consider the adaptable e-load concept shown below. It forms a simple, reliable current sink for controlled battery discharge, offering a practical starting point for further refinement. As you know, the DC load test method allows an electronic load to estimate a battery’s internal resistance by observing the voltage drop during a controlled current draw.

Figure 3 The blueprint presents an electronic load concept tailored for internal resistance measurement, pairing a low-RDS(on) MOSFET with a precision load resistor to form a controlled current sink. Source: Author

Now it’s your turn to build, tweak, and test. If you have refinements, field results, or alternate load strategies, share them in the comments. Let’s keep the circuit conversation flowing.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Inside the battery: A quick look at internal resistance appeared first on EDN.

NB-IoT module adds built-in geolocation capabilities

Wed, 11/26/2025 - 16:21

The ST87M01-1301 NB-IoT wireless module from ST provides narrowband cellular connectivity along with both GNSS and Wi-Fi–based positioning for outdoor and indoor geolocation. Its integrated GNSS receiver enables precise location tracking using GPS constellations, while the Wi-Fi positioning engine delivers fast, low-power indoor location services by scanning nearby 802.11b access points and leveraging third-party geocoding providers.

As the latest member of the ST87M01 series of NB-IoT (LTE Cat NB2) industrial modules, this variant supports multi-frequency bands with extended multi-regional coverage. Its compact, low-power design makes it well suited for smart IoT applications such as asset tracking, environmental monitoring, smart metering, and remote healthcare. A 10.6×12.8-mm, 51-pin LGA package further enables miniaturization in space-constrained designs.

ST provides an evaluation kit that includes a ready-to-use Conexa IoT SIM card and two SMA antennas, helping developers quickly prototype and validate NB-IoT connectivity in real-world conditions. This is supported by an expanding ecosystem featuring the Easy-Connect software library and design examples.

ST87M01 series product page

STMicroelectronics

The post NB-IoT module adds built-in geolocation capabilities appeared first on EDN.

Boost controller powers brighter automotive displays

Wed, 11/26/2025 - 16:21

A 60-V boost controller from Diodes, the AL3069Q packs four 80-V current-sink channels for driving LED backlights in automotive displays. Its adaptive boost-voltage control allows operation from a 4.5-V to 60-V input range—covering common automotive power rails at 12 V, 24 V, and 48 V—and its switching frequency is adjustable from 100 kHz to 1 MHz.

The AL3069Q’s four current-sink channels are set using an external resistor, providing typical ±0.5% current matching between channels and devices to ensure uniform brightness across the display. Each channel delivers 250 mA continuous or up to 400 mA pulsed, enabling support for a range of display sizes and LED panels up to 32-inch diagonal, such as those used in infotainment systems, instrument clusters, and head-up displays. PWM-to-analog dimming, with a minimum duty cycle of 1/5000 at 100 Hz, improves brightness control while minimizing LED color shift.

Diodes’ AL3069Q offers robust protection and fault diagnostics, including cycle-by-cycle current limit, soft-start, UVLO, programmable OVP, OTP, and LED-open/-short detection. Additional safeguards cover sense resistor, Schottky diode, inductor, and VOUT faults, with a dedicated pin to signal any fault condition.

The automotive-compliant controller costs $0.54 each in 1000-unit quantities.

AL3069Q product page 

Diodes

The post Boost controller powers brighter automotive displays appeared first on EDN.

Hybrid device elevates high-energy surge protection

Wed, 11/26/2025 - 16:21

TDK’s G series integrates a metal oxide varistor and a gas discharge tube into a single device to provide enhanced surge protection. The two elements are connected in series, combining the strengths of both technologies to deliver greater protection than either component can offer on its own. This hybrid configuration also reduces leakage current to virtually zero, helping extend the overall lifetime of the device.

The G series comprises two leaded variants—the G14 and G20—with disk diameters of 14 mm and 20 mm, respectively. G14 models support AC operating voltages from 50 V to 680 V, while G20 versions extend this range to 750 V. They can handle maximum surge currents of 6,000 A (G14) and 10,000 A (G20) for a single 8/20-µs pulse, and absorb up to 200 J (G14) or 490 J (G20) of energy.

Operating over a temperature range of –40 °C to +105 °C, the G series is suitable for use in power supplies, chargers, appliances, smart metering, communication systems, and surge protection devices. Integrating both protection elements into a single, epoxy-coated 2-pin package simplifies design and reduces board space compared to using discrete components.

To access the datasheets for the G14 series (ordering code B72214G) and the G20 series (B72220G), click here.

TDK Electronics 

The post Hybrid device elevates high-energy surge protection appeared first on EDN.

Power supplies enable precise DC testing

Wed, 11/26/2025 - 16:20

R&S has launched the NGT3600 series of DC power supplies, delivering up to 3.6 kW for a wide range of test and measurement applications. This versatile line provides clean, stable power with low voltage and current ripple and noise. With a resolution of 100 µA for current and 1 mV for voltage, as well as adjustable output voltages up to 80 V, the supplies offer both precision and flexibility.

The dual-channel NGT3622 combines two fully independent 1800-W outputs in a single compact instrument. Its channels can be connected in series or parallel, allowing either the voltage or the current to be doubled. For applications requiring even more power, up to three units can be linked to provide as much as 480 V or 300 A across six channels. The NGT3622 supports current and voltage testing under load, efficiency measurements, and thermal characterization of components such as DC/DC converters, power supplies, motors, and semiconductors.

Engineers can use the NGT3600 series to test high-current prototypes such as base stations, validate MPPT algorithms for solar inverters, and evaluate charging-station designs. In the automotive sector, the series supports the transition to 48-V on-board networks by simulating these networks and powering communication systems, sensors, and control units during testing.

All models in the NGT3600 series are directly rack-mountable with no adapter required. They will be available beginning January 13, 2026, from R&S and selected distribution partners. For more information, click here.

Rohde & Schwarz 

The post Power supplies enable precise DC testing appeared first on EDN.

Space-ready Ethernet PHYs achieve QML Class P

Wed, 11/26/2025 - 16:20

Microchip’s two radiation-tolerant Ethernet PHY transceivers are the company’s first devices to earn QML Class P/ESCC 9000P qualification. The single-port VSC8541RT and quad-port VSC8574RT support data rates up to 1 Gbps, enabling dependable data links in mission-critical space applications.

Achieving QML Class P/ESCC 9000P certification involves rigorous testing—such as Total Ionizing Dose (TID) and Single Event Effects (SEE) assessments—to verify that devices tolerate the harsh radiation conditions of space. The certification also ensures long-term availability, traceability, and consistent performance.

The VSC8541RT and VSC8574RT withstand 100 krad(Si) TID and show no single-event latch-up at LET levels below 78 MeV·cm²/mg at 125 °C. The VSC8541RT integrates a single Ethernet copper port supporting MII, RMII, RGMII, and GMII MAC interfaces, while the VSC8574RT includes four dual-media copper/fiber ports with SGMII and QSGMII MAC interfaces. Their low power consumption and wide operating temperature ranges make them well-suited for missions where thermal constraints and power efficiency are key design considerations.

VSC8541RT product page  

VSC8574RT product page 

Microchip Technology 

The post Space-ready Ethernet PHYs achieve QML Class P appeared first on EDN.

Active current mirror

Wed, 11/26/2025 - 15:00

Current mirrors are a commonly useful circuit function, and sometimes high precision is essential. The challenge of getting current mirrors to be precise has created a long list of tricks and techniques. The list includes matched transistors, monolithic transistor multiples, emitter degeneration, fancy topologies with extra transistors, e.g., Wilson, cascode, etc.

But when all else fails and precision just can’t suffer any compromise, Figure 1 shows the nuclear option. Just add a rail-to-rail I/O (RRIO) op-amp!

Figure 1 An active current sink mirror. Assuming resistor equality and negligible A1 offset error, A1 feedback forces Q1 to maintain accurate current sink I/O equality I2 = I1.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The theory of operation of the active current mirror (ACM) couldn’t be more straightforward. Vr, which equals I1*R, is applied to A1’s noninverting input, forcing A1 to drive Q1 to conduct I2 such that I2R = I1R.

Therefore, provided the resistors are equal, A1’s accuracy-limiting parameters (offset voltage, gain-bandwidth, bias and offset currents, etc.) are adequate, and Q1 doesn’t saturate, I2 can match I1 as precisely as you like.

Obviously, Vr must be >> Voffset, and A1’s output span must be >> Q1’s threshold even after subtracting Vr.
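The Vr >> Voffset constraint can be quantified with a quick error budget. Under the standard ideal-op-amp loop equation, A1 forces I2·R2 = I1·R1 + Vos, so the fractional mirror error splits into a resistor-mismatch term and an offset term. A minimal sketch (function name is illustrative; bias currents and finite gain are neglected):

```python
def mirror_error(i1: float, r1: float, r2: float, v_os: float) -> float:
    """Fractional error (I2/I1 - 1) of the active current mirror.

    The feedback loop settles at I2*R2 = I1*R1 + Vos, so
    I2/I1 - 1 = (R1/R2 - 1) + Vos/(I1*R2): resistor mismatch
    plus an offset term that shrinks as Vr = I1*R2 grows.
    """
    i2 = (i1 * r1 + v_os) / r2
    return i2 / i1 - 1.0

# 10 mA mirrored through matched 100-ohm resistors with a 100-uV offset:
# Vr = 1 V >> Vos, so the residual error is just Vos/Vr = 0.01%
err = mirror_error(10e-3, 100.0, 100.0, 100e-6)
print(f"{err * 100:.4f} %")  # → 0.0100 %
```

Rerunning with a smaller Vr (say, 1 mA through the same resistors) shows the offset term grow tenfold, which is exactly why Vr must dominate Voffset.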

Substitute a PFET for Figure 1’s NFET, and a current-sourcing mirror results, as shown in Figure 2.

Figure 2 An active current source mirror. This is identical to Figure 1, except this Q1 is a PFET and the polarities are swapped.

Active current mirror precision can exceed the tolerance of readily available sense resistors. So, a bit of post-assembly trimming, as illustrated in Figure 3, might be useful.

Figure 3 If adequately accurate resistors aren’t handy, a trimmer pot might be useful for post-assembly trimming.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Active current mirror appeared first on EDN.

Charting the course for a truly multi-modal device edge

Wed, 11/26/2025 - 13:55

The world is witnessing an artificial intelligence (AI) tsunami. While the initial waves of this technological shift focused heavily on the cloud, a powerful new surge is now building at the edge. This rapid infusion of AI is set to redefine Internet of Things (IoT) devices and applications, from sophisticated smart homes to highly efficient industrial environments.

This evolution, however, has created significant fragmentation in the market. Many existing silicon providers have adopted a strategy of bolting on AI capabilities to legacy hardware originally designed for their primary end markets. This piecemeal approach has resulted in inconsistent performance, incompatible toolchains, and a confusing landscape for developers trying to deploy edge AI solutions.

To unlock the transformative potential of edge AI, the industry must pivot. We must move beyond retrofitted solutions and embrace a purpose-built, AI-native approach that integrates hardware and software right from the foundational design.

 

The AI-native mandate

“AI-native” is more than a marketing term; it’s a fundamental architectural commitment where AI is the central consideration, not an afterthought. Here’s what it looks like.

  • The hardware foundation: Purpose-built silicon

As IoT workloads evolve to handle data across multiple modalities, from vision and voice to audio and time series, the underlying silicon must present itself as a flexible, secure platform capable of efficient processing. Core to such designs are NPU architectures that can scale, supported by highly integrated vision, voice, video, and display pipelines.

  • The software ecosystem: Openness and portability

To accelerate innovation and combat fragmentation for IoT AI, the industry needs to embrace open standards. While the ‘language’ of model formats and frameworks is becoming more industry-standard, the ecosystem of edge AI compilers is largely being built from vendor-specific and proprietary offerings. Efficient execution of AI workloads is heavily dependent on optimized data movement and processing across scalar, vector, and matrix accelerator domains.

By open-sourcing compilers, companies encourage faster innovation through broader community adoption, providing flexibility to developers and ultimately facilitating more robust device-to-cloud developer journeys. Synaptics is encouraging broader adoption from the community by open-sourcing edge AI tooling and software, including Synaptics’ Torq edge AI platform, developed in partnership with Google Research.

  • The dawn of a new device landscape

AI-native silicon will fuel the creation of entirely new device categories. We are currently seeing the emergence of a new class of devices truly geared around AI, such as wearables—smart glasses, smartwatches, and wristbands. Crucially, many of these devices are designed to operate without being constantly tethered to a smartphone.

Instead, they soon might connect to a small, dedicated computing element, perhaps carried in a pocket like a puck, providing intelligence and outcomes without requiring the user to look at a traditional phone display. This marks the beginning of a more distributed intelligence ecosystem.

The need for integrated solutions

This evolving landscape is complex, demanding a holistic approach. Intelligent processing capabilities must be tightly coupled with secure, reliable connectivity to deliver a seamless end-user experience. Connected IoT devices need to leverage a broad range of technologies from the latest Wi-Fi and Bluetooth standards to Thread and ZigBee.

Chip, device and system-level security are also vital, especially considering multi-tenant deployments of sensitive AI models. For intelligent IoT devices, particularly those that are battery-powered or wearable, security must be maintained consistently as the device transitions in and out of different power states. The combination of processing, security, and power must all work together effectively.

Navigating this new era of the AI edge requires a fundamental shift in mindset, a change from retrofitting existing technology to building products with a clear, AI-first mission. Take the case of the Synaptics SL2610 processor, one of the industry’s first AI-native, transformer-capable processors designed specifically for the edge. It embodies the core hardware and software principles needed for the future of intelligent devices, running on a Linux platform.

By embracing purpose-built hardware, rallying around open software frameworks, and maintaining a strategy of self-reliance and strategic partnerships, the industry can move past the current market noise and begin building the next generation of truly intelligent, powerful, and secure devices.

Mehul Mehta is a Senior Director of Product Marketing at Synaptics Inc., where he is responsible for defining the Edge AI IoT SoC roadmap and collaborating with lead customers. Before joining Synaptics, Mehul held leadership roles at DSP Group spanning product marketing, software development, and worldwide customer support.

Related Content

The post Charting the course for a truly multi-modal device edge appeared first on EDN.
