EDN Network

Voice of the Engineer

Researchers shrink ferroelectric memory stacks

Fri, 01/02/2026 - 17:32

Researchers in Japan have developed ultrathin ferroelectric capacitors that maintain strong polarization at a stack thickness of just 30 nm, including top and bottom electrodes. Using scandium-doped aluminum nitride films sandwiched between platinum electrodes, the team achieved high remanent polarization, demonstrating the potential for high-density, energy-efficient memory in compact electronic devices.

The work, led by Professor Hiroshi Funakubo of Science Tokyo in collaboration with Canon ANELVA, marks a departure from previous approaches that only thinned the ferroelectric layer. By optimizing the full capacitor stack—5-nm platinum bottom electrode, 20-nm (Al0.9Sc0.1)N ferroelectric layer, and 5-nm platinum top electrode—the researchers maintained robust ferroelectric performance while drastically reducing device size.

Key to the success was a post-heat treatment of the bottom platinum electrode at 840°C, which improved its crystal orientation and enhanced polarization switching in the ultrathin films. This process ensures that the scaled-down capacitors remain compatible with semiconductor integration, enabling on-chip embedding alongside logic circuits.

The breakthrough lays the groundwork for compact ferroelectric memories, such as FeRAM and ferroelectric tunnel junctions, for future IoT and mobile electronics. By further exploring alternative electrode materials and processing techniques, the team aims to create even more durable, energy-efficient, and miniaturized on-chip memory devices.

Full details on the research are available here.

Institute of Science Tokyo 

The post Researchers shrink ferroelectric memory stacks appeared first on EDN.

Inturai launches quantum-safe ESP32 security

Fri, 01/02/2026 - 17:32

Inturai Ventures, in partnership with cybersecurity firm PQStation, has unveiled quantum-safe encryption for connected devices across the defense, aged care, and home security sectors. Under the agreement, Inturai holds exclusive rights to deploy PQStation’s technology in these markets. The collaboration focused on securing MQTT traffic using post-quantum cryptography (PQC) on the ESP32 platform. Billions of devices worldwide run on the ESP32, a dual-core microcontroller SoC with integrated Wi-Fi and Bluetooth.

An example ESP32 device that can now run post-quantum-secure encryption. (CNW Group/Inturai Ventures Corp.)

The encryption was tested in two configurations: one using only post-quantum cryptography and another combining PQC with conventional security. Both approaches maintained strong performance, with low latency and minimal power impact, demonstrating that even small, low-power devices can operate securely against future quantum threats.

Governments across the United States, Canada, Australia, and the European Union are requiring post-quantum security upgrades to begin by 2026. In some jurisdictions, including Australia and the EU, critical sectors such as defense and healthcare must complete the transition as early as 2028.

This joint development with PQStation is central to Inturai’s mission to protect critical data in real-time sensor networks and positions the company to deploy quantum-safe protocols across critical sectors worldwide. Inturai expects significant benefits across its healthcare, drone, and military pipeline from this breakthrough, as the global ESP32 module market is projected to reach $4.6 billion by 2032 (Dataintelo).

Inturai Ventures  

PQStation

The post Inturai launches quantum-safe ESP32 security appeared first on EDN.

OWC rolls out 2-meter Thunderbolt 5 cable

Fri, 01/02/2026 - 17:32

Other World Computing (OWC) offers a fully certified 2-meter Thunderbolt 5 (USB-C) cable for both Macs and PCs. Engineered with signal amplification, precision shielding, and end-to-end signal integrity, the cable delivers a long-length solution for workflows that require maximum speed, display performance, and power delivery—along with the full capabilities of Thunderbolt 5.

This extended-length cable joins the company’s lineup of 0.3-meter, 0.8-meter, and 1-meter Thunderbolt 5 cables. It is Thunderbolt-certified and validated by multiple independent testing labs to meet the complete Thunderbolt 5 specification, including:

  • Up to 80-Gbps bidirectional data throughput
  • Up to 120-Gbps video bandwidth for multi-display, high-performance workflows
  • Up to 240-W power delivery
  • Support for up to three 8K displays
  • Full compatibility with Thunderbolt 5, 4, and 3, as well as USB4 and USB-C devices—universal for virtually any USB-C host or power/charging connection

The 2-meter Thunderbolt 5 cable costs $79.99 and is now available for pre-order, with delivery expected in early January 2026.

Other World Computing 

The post OWC rolls out 2-meter Thunderbolt 5 cable appeared first on EDN.

PicoScope 7.2 enables smarter waveform analysis

Fri, 01/02/2026 - 17:31

Pico Technology has released a major upgrade to its PicoScope software, improving waveform capture, analysis, and measurement. Version 7.2 adds built-in features like waveform overlays and advanced serial filtering, enabling faster, clearer, and more efficient control of PicoScope PC-based instruments.

Waveform Overlays is a visualization tool that displays multiple waveform captures stacked in a single view. This feature makes it easier to spot intermittent glitches, jitter, and anomalies often missed in single-shot captures.

New serial decoding filters make it easy to pinpoint specific packets, data types, or date ranges without combing through long serial captures. These advanced filters work seamlessly across all 40 serial protocols supported by PicoScope 7.

To learn more about what’s new in PicoScope 7.2, click here. It is available as a free update for all existing and new PicoScope users on Windows, Mac, and Linux operating systems.

Pico Technology

The post PicoScope 7.2 enables smarter waveform analysis appeared first on EDN.

Compute modules are built for industrial AI

Fri, 01/02/2026 - 17:31

Based on Qualcomm’s Dragonwing IQ-X platform, Advantech’s three edge AI compute boards deliver up to 45 TOPS of AI acceleration for industrial applications. The AOM-6731 AI module, AIMB-293 mini-ITX motherboard, and SOM-6820 COM Express Type 6 module offer powerful processing alongside robust 5G and Wi-Fi 7 connectivity.

Leveraging Oryon CPUs with up to 12 cores running as fast as 3.4 GHz, Dragonwing IQ-X enables rapid data handling and seamless multitasking while consuming as little as one-third the power of competing solutions. Single- and multithreaded compute performance is further enhanced by on-device Hexagon NPUs, bolstering AI capabilities. Integrated Adreno VPUs and GPUs support multimedia-intensive applications.

Onboard LPDDR5x memory achieves a 1.3× speed boost—from 6,400 MT/s to 8,533 MT/s—while reducing power consumption by 20% versus standard LPDDR5. UFS 3.1 Gear 4 storage increases data transfer speeds from 1,000 Mbps (PCIe Gen3 NVMe) to 16,000 Mbps. UFS 4.0 is also available for optimal performance in harsh industrial environments. 

Samples of the AOM-6731 AI module and SOM-6820 COM Express module are now available, while the AIMB-293 motherboard will be offered for engineering evaluations starting March 2026.

Advantech

The post Compute modules are built for industrial AI appeared first on EDN.

An intimidating vacuum tube

Fri, 01/02/2026 - 15:00

Older table-top AC-DC radios used a classic line-up of tubes. Think 12SA7, 12SK7, 12SQ7, 35Z5GT, and 50L6GT. As I grew into my teens, I got interested in how these radios worked and soon discovered that their vacuum tubes could get very hot, especially the last two: the half-wave rectifier (35Z5GT) and the beam power tetrode audio output stage (50L6GT).

One day, I carelessly allowed a window curtain to brush against a hot 50L6GT, and the fabric of that curtain actually melted. Mom was not thrilled.

With that history still fresh in mind, I later came across another vacuum tube called the 117L7/M7GT whose data sheet looked much like this:

Figure 1 A datasheet for the 117L7/M7GT with the two hottest tube functions from previously studied radios in a single unit. 

This thing was scary!

The two hottest tube functions from the radios I’d been studying were combined into one device. Both functions were placed within a single glass envelope vacuum tube.

Take a look at these guys:

Figure 2 Two 117L7/M7GT tubes combining the heat of the beam power tube and the rectifier tube within a single glass envelope.

Imagine the combined heat of the beam power tube and the rectifier tube within a single glass envelope. If the one tube that damaged Mom’s window curtain was thermally dangerous, I cringe to think how hot these tubes could get and what damage they might be capable of causing.

I still shudder at the thought.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post An intimidating vacuum tube appeared first on EDN.

2025: A year in which chaos seemingly thrived

Thu, 01/01/2026 - 15:00

A year back, this engineer titled his 2024 retrospective “interconnected themes galore”. That said, both new and expanded connections can sometimes lead to chaotic results, yes?

As any of you who’ve already seen my precursor “2026 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of my two end-of-year writeups once again this year. This time, I’ll be looking back over 2025: for historical perspective, here are my prior retrospectives for 2019, 2021, 2022, 2023, and 2024 (we skipped 2020).

As I’ve done in past years, I thought I’d start by scoring the key topics I wrote about a year ago in forecasting the year to come:

  • The 2024 United States election (outcome, that is)
  • Ongoing unpredictable geopolitical tensions, and
  • AI: Will transformation counteract diminishing ROI?

Maybe I’m just biased, but in retrospect, I think I nailed ‘em all as being particularly impactful. In the sections that follow, I’m going to elaborate on several of the above themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly notable (IMHO, of course).

Tariffs, constrained shipments, and government investments

A significant portion of the initial “2024 United States election outcome” section in my year-back look-ahead piece was devoted to the likely potential for rapidly-announced significant tariffs by the new U.S. administration against various other countries, both import- and export-based in nature, and both “blanket” and product-specific, as well as for predictable reactive tariffs and shipment constraints by those other countries in response.

And indeed this all came to pass, most notably with the “Liberation Day” Executive Order-packaged suite of import duties issued on April 2, 2025, many of which were subsequently amended (multiple times in a number of cases) in the following months in response to other countries’ tit-for-tat reactions, trade agreements, and other détente cooling-off measures.

My point in bringing this all up, echoing what I wrote a year back (as well as both the month and the year before that), is not to be political. As I’ve written several times before:

I have not (and will not) reveal personal opinions on any of this.

and I will again “stay the course” this time. Whether or not tariffs are wise or, for that matter, were even legally issued as-is are decisions for the Supreme Court (near term) and the voters (eventually) to decide. So then why do I mention it at all? Another requote:

Americans are accused of inappropriately acting as if their country and its citizens are the “center of the world”. That said, the United States’ policies, economy, events, and trends inarguably do notably affect those of its allies, foes and other countries and entities, as well as the world at large, which is why I’m including this particular entry in my list.

This time, I’m going to focus on a couple of different angles on the topic. Maybe your company sells its products and/or services only within the country in which it’s headquartered. Or maybe, on the opposite end of the spectrum, it’s a multinational corporation with divisions scattered around the world. Or any point in between these spectrum extremes.

Regardless (and regardless too of whether or not it’s a U.S.-headquartered company), both the tariff and shipment-restriction policies of the U.S. and other countries will undoubtedly and notably affect your business strategies.

Unfortunately, though, while such tariff and restriction policies can be issued, amended, and rescinded “on a dime”, your company’s strategies inherently can’t be even close to as nimble, no matter how you aspire to both proactively and reactively structure your organization and its associated supply chains.

As I write these words I’m reminded, for example, of a segment I saw in a PBS NewsHour episode last weekend that discussed (among other things) Christmas goods suppliers’ financial results impacts of tariffs, along with the just-in-case speculative stockpiling they began doing a year ago in preparation (conceptually echoing my own “Chi-Fi” pre-tariff purchases at the beginning of 2025):

The other angle on the issue that I’d like to highlight involves the increasingly prevalent direct government involvement in companies’ financial fortunes.

Back in August, for example, just two weeks after initially demanding that Intel’s new CEO resign due to the perception of improper conflicts involving Chinese companies, the Trump administration announced that it was instead converting prior approved CHIPS Act funding for Intel into stock purchases, effectively transforming the U.S. into a ~10% Intel shareholder.

More recently, NVIDIA was once again approved to ship its prior-generation H200 AI accelerators into China…in exchange for the U.S. getting a 25% share of the resultant sales revenue, and following up on broader 15%-revenue-share agreements made by both AMD and NVIDIA back in August in exchange for securing China-export licenses.

And President Trump has already publicly stated that such equity and revenue-sharing arrangements, potentially broadening to also include other U.S. companies, will increasingly be the norm versus the exception in the future. Again, wise or not? I’ll keep my own opinions to myself and rely on time to answer that one. For now, I’ll just say…different.

Robotaxis

Waymo is on a roll. The Google-sibling Alphabet subsidiary now blankets not only San Francisco, California (where its usage by customers is increasingly the norm versus a novelty exception) but large chunks of the broader Silicon Valley region, now including freeways and airports.

It’s also currently offering full service in Los Angeles, Phoenix (AZ), and Austin (TX) as I write these words in late December 2025, with active testing underway in roughly a dozen more U.S. municipalities, plus Japan and the UK, and with already-announced near-term service plans in around a dozen more. As Wikipedia notes:

As of November 2025, Waymo has 2,500 robotaxis in service. As of December 2025, Waymo is offering 450,000 paid rides per week. By the end of 2026, Waymo aims towards increasing this to 1 million taxi rides a week and are laying the groundwork to expand to over 20 cities, including London and Tokyo, up from the current six.

And this is key: these are fully autonomous vehicles, with no human operators inside (albeit still with remote human monitors who can, as needed, take over manual control):

Problem-free? Not exactly. Just in the few weeks prior to my writing these words, several animals have been hit, a Waymo car has wandered into an active police-presence scene, and they more generally haven’t seemingly figured out yet how to appropriately respond to school buses signaling they’re in the process of actively picking up and/or dropping off passengers.

So not perfect: those are the absolute statistics. But what about relative metrics?

Again and again, in data published both by Waymo (therefore understandably suspect) and independent observers and agencies, autonomous vehicles are seen as notably safer, both for occupants and the environment around them, than those piloted by humans…and the disparity is only growing in self-driving vehicles’ favor over time. And in China, for example, the robotaxi programs are, if anything, even more aggressive from both testing and active deployment standpoints.

To that last point, I’ll conclude this section with another note on this topic. In fairness, I feel compelled to give Tesla rare but justified kudos for finally kicking off the rollout of its own robotaxi service mid-year in Austin, after multiple yearly iterations of promises followed by delays.

Just a few days ago, as I write this, in fact, the company began testing without human monitors in the front seats (not that they were effective anyway, in at least one instance).

Agentic AI

In the subhead for my late-May Microsoft Build 2025 conference coverage, I sarcastically noted:

What is “agentic AI”? This engineer says: “I dunno, either.”

Snark aside, I truthfully already had at least some idea of what the “agentic web”, noted in the body text of that same writeup as an example of the trendy lingo that our industry is prone to exuberantly (albeit only impermanently) spew, meant. And I’ve certainly learned much more about it in the intervening months. Here’s what Wikipedia says about AI agents in its topic intro:

In the context of generative artificial intelligence, AI agents (also referred to as compound AI systems or agentic AI) are a class of intelligent agents distinguished by their ability to operate autonomously in complex environments. Agentic AI tools prioritize decision-making over content creation and do not require human prompts or continuous oversight.

And what about the aforementioned broader category of intelligent agents, of which AI agents are a subset? Glad you asked:

In artificial intelligence, an intelligent agent is an entity that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through machine learning or by acquiring knowledge. AI textbooks define artificial intelligence as the “study and design of intelligent agents,” emphasizing that goal-directed behavior is central to intelligence. A specialized subset of intelligent agents, agentic AI (also known as an AI agent or simply agent), expands this concept by proactively pursuing goals, making decisions, and taking actions over extended periods.

A recent post on Google’s Cloud Blog included, I thought, a concise summary of the aspiration:

“Agentic workflows” represent the next logical step in AI, where models don’t just respond to a single prompt but execute complex, multi-step tasks. An AI agent might be asked to “plan a trip to Paris,” requiring it to perform dozens of interconnected operations: browsing for flights, checking hotel availability, comparing reviews, and mapping locations. Each of these steps is an inference operation, creating a cascade of requests that must be orchestrated across different systems.

Key to the “interconnected operations” that are “orchestrated across different systems” is MCP, the open-source Model Context Protocol, which I highlighted in my late-May coverage. Originally created by two developers at Anthropic and subsequently announced by the company in late 2024, it’s now regularly referred to as “USB-C for AI” and has been broadly embraced and adopted by numerous organizations and their technologies and products.

Long-term trend aside, my decision to include agentic AI in my year-end list was notably influenced by the fact that agents (specifically) and AI chatbots (more generally) are already being widely implemented by developers as well as, notably, adopted by the masses. OpenAI recently added an AI holiday shopping research feature to its ChatGPT chatbot, for example, hot on the heels of competitor Google’s own encouragement to “Let AI do the hard parts of your holiday shopping”. And what of Amazon’s own Rufus AI service? Here’s TechCrunch’s beginning-of-December take on Amazon’s just-announced results:

On Black Friday, Amazon sessions that resulted in a sale were up 100% in the U.S. when the AI chatbot Rufus was used. They only increased by 20% when Rufus wasn’t used. 

Trust a hallucination- and bias-prone deep learning model to pick out presents for myself and others? Not me. But I’m guessing that both to some degree now, and increasingly in the future, I’ll be in the minority.

Humanoid Robots

By now, I’m sure that many of you have already auditioned at least one (and if you’re like me, countless examples) of the entertaining and awe-inspiring videos published by Boston Dynamics over the years (and by the way, if you’ve ever wondered why the company was subsequently acquired by Hyundai, this excellent recent IEEE Spectrum coverage of the company’s increasingly robotics-dominated vehicle manufacturing plant in Georgia is a highly recommended read). While early showcased examples such as Spot were, as its name reflects, reminiscent of dogs and other animals (assuming they had structural relevance to anything at all, that is…hold that thought), the company’s newer Atlas, along with examples from a growing list of other companies, is distinctly humanoid-reminiscent. Quoting from Wikipedia:

A humanoid robot is a robot resembling the human body in shape. The design may be for functional purposes, such as interacting with human tools and environments and working alongside humans, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some humanoid robots may replicate only part of the body. Androids are humanoid robots built to more closely resemble the human physique. (The term Gynoid is sometimes used for those that resemble women.)

As Wikipedia notes, part of the motivation for this trend is the fact that the modern world has been constructed with the human body in mind, and it’s therefore more straightforward from a robotics-inclusion standpoint to create automatons that mimic their human creators (and forebears?) than to adapt the environment to more optimally suit other robot form factors. Plus, I’m sure that at least some developers are rationalizing that robots that resemble humans are more likely to be accepted alongside humans, both in the workplace and in the home.

Still, I wonder how much sub-optimization of the overall robotic implementation potential is occurring in pursuit of this seemingly single-minded human-mimicking aspiration. I wonder, too, how much influence early robot examples in entertainment, such as Rosie (or Rosey) from The Jetsons or Gort from The Day the Earth Stood Still, have had in shaping the early thinking of children destined to be engineers when they grew up. And from a practical financial standpoint, given the large number of humanoid robot examples coming from China alone, I can’t help but wonder just how many “androids” (the robot, not the operating system) the world really needs, and how massive the looming corporate weeding-out may be as a result.

Unforeseen acquisitions

This last one might not have been seismically impactful from a broad industry standpoint…or then again, it may end up being so, both for Qualcomm and its competitors. Regardless, I’m including it because it personally rocked me back on my heels when I heard the news. In early October, Qualcomm announced its intention to acquire Arduino. For those of you not already familiar with Arduino, here’s Wikipedia’s intro:

Arduino is an Italian open-source hardware and software company…that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices. Its hardware products are licensed under a CC BY-SA license, while the software is licensed under the GNU Lesser General Public License (LGPL) or the GNU General Public License (GPL), permitting the manufacture of Arduino boards and software distribution by anyone.

First fruits of the merger are the UNO Q, a “next-generation single board computer featuring a “dual brain” architecture—a Linux Debian-capable microprocessor and a real-time microcontroller—to bridge high-performance computing with real-time control” and “powered by the Qualcomm Dragonwing QRB2210 processor running a full Linux environment”, and the Arduino App Lab, an “integrated development environment built to unify the Arduino development journey across Real-time OS, Linux, Python and AI flows.”

So, what’s the background to my surprise? This excerpt from IEEE Spectrum’s as-usual thorough coverage sums it up nicely: “Even so, the acquisition seems odd at first glance. Qualcomm sells expensive, high-performance SoC designs meant for flagship smartphones and PCs. Arduino sells microcontroller boards that often cost less than a large cheese pizza.”

Not to mention that Qualcomm’s historical customer base is comparatively small in number, large in per-customer volume, and rapid in each customer’s generational-uptake silicon churn, the exact opposite of Arduino’s typical customer profile (or that of Raspberry Pi, for that matter, which is undoubtedly also “curious” about the acquisition and its outcome).

Auld Lang Syne (again)

I’m writing this in late December 2025. You’ll presumably be reading it sometime in January 2026, given that I’m targeting New Year’s Day publication for it. I’ll split the difference and, as I did last year, wrap up by first wishing you all a Happy New Year! 😉

As usual, I originally planned to cover a number of additional topics in this piece. But (also) as usual, I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to do so. Having just passed through 2,700 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts in the coming year(s). Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post 2025: A year in which chaos seemingly thrived appeared first on EDN.

SCR topology transmogrifies into BJT two-wire precision current source

Wed, 12/31/2025 - 15:00

Recently, frequent Design Idea (DI) author Christopher Paul showcased an innovative, high-performance, true-two-wire current source using a depletion-mode MOSFET as the pass device in “A precision, voltage-compliant current source.”

In subsequent comments, the question arose of whether similar performance is possible using a bipolar junction transistor instead of Christopher’s FET in a similar(-looking) topology.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It posed an intriguing design problem for which I offer here a possible (if implausible) solution. Bizarrely, it’s (roughly) based on the classic discrete transistor model of an SCR, shown in Figure 1.

Figure 1 SCR positive feedback loop suggests an unlikely basis for a BJT current source.

Figure 2 shows the nonlinear positive feedback loop of the thyristor morphing into a linear current source.

Figure 2 The Q1 and Q3 current mirror, shunt regulator Z1, and pass BJT Q2 comprise the precision 2-wire current source. The source current is 1.05 * 1.24/R1, or 1.30/R1. * = 0.1% precision resistor

Shunt regulator Z1 and pass transistor Q2 form a very familiar precision current source circuit. In fact, it looks a lot like the one Christopher Paul uses in his MOSFET-based design. Negative feedback from current sense resistor R1 makes shunt regulator Z1 force Q2 to maintain a constant emitter current of 1.24 V/R1.

Also, similar (looking) to Christopher Paul’s topology, bias for Z1 and Q2 is provided by a PNP current mirror. However, unlike the symmetrical mirror in Christopher Paul’s design, this one is made asymmetrical to accommodate Z1’s max recommended current rating.

Significant emitter degeneration (~2.5 volts) is employed to encourage accurate current ratios and keep positive feedback loop gain manageable so Z1 can ride herd on it.

Startup resistor R3 is needed because the bias for the transistors and regulator is provided by the SCR-ish regenerative positive feedback loop. R3 provides a trickle of current, a few hundred nanoamps, sufficient to jumpstart (trigger?) the loop when power is first applied.

To program the source for a chosen output current (Io):

 If Io > 5 mA, then:
R1 = 1.30/Io
R2 = 49.9/Io
R4 = 2.40/Io

 If Io < 5 mA, then:
R1 = 1.55/Io
R2 = 8/Io
R4 = 2/Io

Minimum accurate Io = 500 µA.  Maximum = 200 mA.
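
As a convenience, these formulas drop straight into a few lines of code. The sketch below is merely illustrative (the function name and struct are mine, not the author’s); it simply encodes the two published cases and is meaningful only over the stated 500-µA to 200-mA range:

    #include <iostream>

    // Resistor values (ohms) programming the Figure 2 source for Io (amps),
    // per the published formulas; valid for 500 uA <= Io <= 200 mA.
    struct SourceResistors { double r1, r2, r4; };

    SourceResistors programSource(double io)
    {
        if (io > 5e-3)                     // Io > 5 mA
            return { 1.30 / io, 49.9 / io, 2.40 / io };
        else                               // Io < 5 mA
            return { 1.55 / io, 8.0 / io, 2.0 / io };
    }

    int main()
    {
        // Example: Io = 10 mA gives R1 = 130, R2 = 4990, R4 = 240 ohms.
        SourceResistors r = programSource(10e-3);
        std::cout << "R1=" << r.r1 << " R2=" << r.r2 << " R4=" << r.r4 << "\n";
    }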

And for a finishing touch, frequent commentator Ashutosh points out that it’s good practice to protect loads against erroneous and possibly destructive fault currents. Figure 3 suggests a flexible and highly reliable insurance policy. Wire one of these gems in series with Figure 2 and fault current concerns will vanish.

Figure 3 An accurate, robust, fast-acting, self-resetting fault-current limiter where Ilimit = 1.25/R1.

In closing, I leave it to you, the reader, to decide whether Figure 2’s resemblance to Christopher Paul’s design is merely superficial, truly meaningful, outright plagiaristic, or just weird.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post SCR topology transmogrifies into BJT two-wire precision current source appeared first on EDN.

Power Tips #148: A simple software method to increase the duty-cycle resolution in DPWM

Wed, 12/31/2025 - 15:00

Have you ever had a duty-cycle resolution issue in your digitally controlled power supply?

In a digital pulse width modulation (DPWM)-controlled power supply, the duty-cycle adjustment is not continuous, but has a minimum step. This is one significant difference between digital control and analog control.

In order to really understand the resolution issue, let’s look at the exaggerated DPWM waveform in Figure 1.

Figure 1 An exaggerated DPWM waveform; the DPWM output is generated by comparing a clock counter with a preset comparison value. Source: Texas Instruments

The DPWM generates its output by comparing a clock counter with a preset comparison value; when the counter equals the comparison value, it generates a trigger signal and flips the PWM output. Adjusting the comparison value moves the switching edge earlier or later. Because the counter can take only integer values, the minimum adjustment step of the duty cycle is expressed by Equation 1:

Duty-cycle step = 1/Period = fSW/fPWM (Equation 1)

where Period is the period value in counts, fSW is the switching frequency, and fPWM is the DPWM clock frequency.

Oscillation caused by low duty-cycle resolution

The duty-cycle resolution of DPWM brings a disturbance to power-supply control. If the duty-cycle resolution is too low, it may bring limit cycle oscillations (LCOs) to the control loop and cause output voltage ripple. This problem is more serious in high-switching-frequency systems.

Let’s take a 48-V to 5-V synchronous buck converter as an example, as shown in Figure 2.

Figure 2 A 48-V to 5-V synchronous buck converter example. Source: Texas Instruments

Assuming a 500-kHz switching frequency with a 120-MHz DPWM clock and recalling Equation 1, the period value is 240 counts, so the minimum duty-cycle step is 1/240 ≈ 0.42%. With a 48-V input, the minimum duty-cycle adjustment brings a voltage difference of 48 V/240 = 0.2 V, which means 4% voltage ripple at the 5-V output, shown in Figure 3. This is obviously unacceptable.

Figure 3 A low-resolution duty cycle causes output voltage ripple. Source: Texas Instruments

Increase duty-cycle resolution

The most direct way to resolve this duty-cycle resolution issue is to use high-resolution PWM (HRPWM). HRPWM is a powerful peripheral that can reduce the adjustment step significantly—to the 10-ps level—but it is typically only available in high-performance MCUs, which may be too powerful or expensive for the design.

Is there a simple method to resolve the duty-cycle resolution issue without extra cost? Can you increase the duty-cycle resolution by using software, or an algorithm?

Looking again at the DPWM waveform, the duty cycle is generated by two variables, the comparison value and the period value, which Equation 2 relates as:

D = Comparison/Period (Equation 2)

The common method of adjusting the duty cycle is to change the comparison value while keeping the period value constant; in other words, the buck converter operates at a fixed switching frequency. What happens if you instead adjust the duty cycle by varying the switching frequency? In most cases, a small variation of the switching frequency is not harmful but helpful to power converters: it reduces electromagnetic interference and helps to pass EMI regulations.

If you keep the comparison value unchanged but adjust the period value by one count, how much does the duty cycle vary? Is it larger or smaller than adjusting the comparison value? Consider Equation 3:

ΔDperiod = Comparison/Period – Comparison/(Period + 1) = D/(Period + 1) (Equation 3)

Keeping in mind that the duty-cycle variation from adjusting the comparison value is ΔDcomparison = 1/Period, that D is always smaller than 1, and that Period + 1 is nearly equal to Period, you can see that ΔDperiod will always be smaller than ΔDcomparison.

This means that adjusting the period value generates a smaller duty-cycle variation than adjusting the comparison value. The improvement is more significant when the duty cycle is much smaller than 1. If you plot the duty-cycle values on a number line while varying the period value, you will clearly see that increasing the period value with a fixed comparison value reduces the duty cycle in smaller steps, as shown in Figure 4.

Figure 4 Duty-cycle values when varying both period and comparison. Source: Texas Instruments

Varying the frequency

Based on the analysis above, it is possible to achieve higher resolution by adjusting the period value. But in a power converter, the switching frequency generally can’t vary much; otherwise, the magnetic component design becomes very challenging. So the next question is: how do you generate the expected duty cycle from the combination of these two variables?

The method is to first decide the comparison value using a preset period value, and then fine-tune the period value to reach the closest achievable duty cycle. The fine-tuning can either increase the period value (paired with the larger comparison value) or decrease the period value (paired with the smaller comparison value). Figure 5 shows the flowchart of the software that increases the period value with the larger comparison value; the decreasing method is similar, with the calculation direction reversed.

Figure 5 Software flowchart for adjusting both the comparison and period values simultaneously. Source: Texas Instruments
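
To make the flowchart concrete, here is a minimal C++ sketch of the two-step idea, under stated assumptions: the names, the floating-point math, and the ±span search window are mine, not TI driver code, and a production implementation would use integer arithmetic inside the control loop:

    #include <cmath>
    #include <cstdint>

    struct DpwmSetting { uint16_t cmp; uint16_t period; };

    // Two-step tuning per Figure 5: fix the comparison value at the preset
    // period, then nudge the period a few counts to land closer to the
    // requested duty cycle.
    DpwmSetting dpwmTune(double dutyTarget, uint16_t periodNom, uint16_t span)
    {
        // Step 1: coarse comparison value at the preset period.
        uint16_t cmp = (uint16_t)std::lround(dutyTarget * periodNom);

        DpwmSetting best = { cmp, periodNom };
        double bestErr = std::fabs((double)cmp / periodNom - dutyTarget);

        // Step 2: fine-tune the period within the allowed frequency drift.
        for (int32_t p = (int32_t)periodNom - span; p <= periodNom + span; ++p) {
            if (p < 1) continue;                      // guard against underflow
            double err = std::fabs((double)cmp / p - dutyTarget);
            if (err < bestErr) { bestErr = err; best.period = (uint16_t)p; }
        }
        return best;
    }

For the 48-V to 5-V example (a preset period of 240 counts), the comparison value lands at 25 counts, and a one-count period change then moves the duty cycle by only about D/241 ≈ 0.043%, roughly ten times finer than the 0.42% step obtained by changing the comparison value.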

Finally, it should be pointed out that this software method is in principle independent of HRPWM hardware technology, such as a micro-edge positioner, so it is equally applicable to digital control loops that do include HRPWM peripherals.

Improvement results

Let’s return to the example of the 48-V to 5-V synchronous buck converter in Figure 2. After adopting this software method, the duty-cycle step shrinks considerably; the output voltage ripple drops to less than 40 mV, as shown in Figure 6. This is acceptable for most electrical appliances.

Figure 6 Improved output voltage ripple using the software method. Source: Texas Instruments

This method doesn’t need HRPWM to solve the duty-cycle resolution problem; increasing the duty-cycle resolution with a simple software algorithm can make your product more competitive by enabling the use of a low-end MCU.

Furthermore, this method is a purely mathematical algorithm; in other words, it is not limited to low-resolution PWM but also works with HRPWM. So it can be used in extremely demanding conditions to further increase the duty-cycle resolution on top of HRPWM.

Desheng Guo is a system engineer at Texas Instruments, where he is responsible for developing power solutions as part of the power delivery industrial segment. He created multiple reference designs and is familiar with AC-DC power supply, digital control, and GaN products. He received a master’s degree from the Harbin Institute of Technology in power electronics in 2007, and previously worked for Huawei Technology and Delta Electronics.

Related Content

The post Power Tips #148: A simple software method to increase the duty-cycle resolution in DPWM appeared first on EDN.

Magnetometers: Sensing the invisible fields

Wed, 12/31/2025 - 13:38

From ancient compasses to modern smartphones, magnetometers have quietly shaped how we sense and navigate the world. Let us explore the fundamentals behind these field-detecting devices.

Magnetic fields are all around us, yet invisible to the eye. Magnetometers turn those hidden forces into measurable signals, guiding everything from navigation systems to consumer electronics. Well, let us dive into the principles that allow a simple sensor to translate invisible forces into actionable data.

A magnetometer is a device that measures magnetism: the direction, strength, or relative change of a magnetic field at a given location. Measuring the magnetization of a magnetic material, such as a ferromagnet, is one example. A compass is a simple magnetometer: it detects the direction of the ambient magnetic field, in this case the Earth’s.

The Earth’s magnetic field can be approximated as a dipole, offset by about 440 kilometers from the planet’s center and inclined roughly 11 degrees to its rotational axis. At the surface, its strength averages around 0.4 to 0.5 gauss, about 40–50 microtesla, which is quite small compared to laboratory magnetic fields.

Only a few types of magnetometers are sensitive enough to detect such weak fields, including mechanical compasses, fluxgate sensors, Hall-effect devices, magnetoelastic instruments, and magnetoresistive sensors.

One of the landmark magnetoresistive sensors from the 1990s was KMZ51 from Philips. Released in 1996, it offered high sensitivity by exploiting the magnetoresistive effect of thin-film permalloy. At its core, the device integrated a Wheatstone bridge structure, which converted changes in magnetic resistance into measurable signals.

To enhance stability and usability, Philips added built-in compensation and set/reset coils: the compensation coil provided feedback to counter drift, while the set/reset coil re-aligned the sensor’s magnetic domains to maintain accuracy. These design features made KMZ51 particularly effective for electronic compasses, current sensing, and detecting the Earth’s weak magnetic field—applications where precision and reliability were essential. KMZ51 remains a classic example of how clever sensor design can make the invisible measurable.

Figure 1 Simplified circuit diagram of KMZ51 illustrates its Wheatstone bridge and integrated compensation and set/reset coils. Source: Philips

On a related side note, deflection, compass, and fluxgate magnetometers represent three distinct stages in the evolution of magnetic sensing. The deflection magnetometer, essentially a large compass box with a pivoted needle, measures the Earth’s horizontal field by observing how an external magnet deflects the needle under the tangent law. The familiar compass magnetometer, in its simplest form, aligns a magnetic needle with the ambient field to indicate direction, a principle that has been carried forward into modern electronic compasses.

Fluxgate magnetometers, by contrast, employ a soft magnetic core driven into alternating saturation; the resulting signal in a sense coil reveals both the magnitude and direction of the external field with far greater sensitivity. Together, these instruments illustrate the progression from basic mechanical deflection to precise electronic detection, each expanding the engineer’s ability to measure and interpret the invisible lines of magnetism.

Tangent law and Tan B position in compass deflection magnetometers

In the Tan B position, the bar magnet is oriented so that the magnetic field along its equatorial line is perpendicular to the Earth’s horizontal magnetic field component. Under this arrangement, the suspended magnetic needle deflects through an angle β, and the tangent law applies:

tan β = B/BH

B is the magnetic field produced at the location of the needle by the bar magnet.

BH is the horizontal component of the Earth’s magnetic field, which tends to align the needle along the geographic north–south direction.

This relationship shows that the deflection angle β depends on the ratio of the magnet’s equatorial field to the Earth’s horizontal field. This simple geometric relationship makes the Tan B position a fundamental method for determining unknown magnetic field strengths, bridging classroom demonstrations with practical magnetic measurements.
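
As a quick worked example (assuming a typical mid-latitude horizontal field of BH ≈ 40 µT): a deflection of β = 45° means the magnet’s equatorial field at the needle equals BH exactly, since B = BH tan 45° = 40 µT, while β = 60° gives B = 40 µT × 1.73 ≈ 69 µT.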

Figure 2 The image illustrates magnetometer architectures—from pivoted needle to fluxgate core—across design generations. Source: Author

Quick take: Magnetometers on the workbench

Magnetometers range from fluxgate arrays orbiting in satellites to quantum sensors probing in research labs—but this post is just a quick take. The spotlight here leans toward today’s DIY enthusiasts and benchtop builders, where Hall-effect sensors and MEMS modules serve as practical entry points. Think of it as a wake-up call, sprinkled with a few lively detours, all pointing toward the components that make magnetometers accessible for everyday projects.

Hall-effect sensors remain the most approachable entry point, translating magnetic fields into voltage shifts that DIY-ers can easily measure with a scope or microcontroller. MEMS magnetometers push things further, offering compact three-axis sensing in modules that drop straight into maker projects or wearables.

These devices not only simplify experimentation but also highlight how magnetic sensing has become democratized—no longer confined to aerospace or geophysics labs, but available in breakout boards and low-cost modules.

For the benchtop builder, this means magnetometers can be explored alongside other familiar sensors, integrated into Arduino or Raspberry Pi projects, or used to probe the invisible magnetic environment around everyday circuits. In short, the practical face of magnetometers today is accessible, modular, and ready to be wired into experiments without demanding a physics lab.

Getting started with magnetometers is straightforward, thanks to readily available pre-wired modules. Popular options often incorporate ICs such as the HMC5883L, LIS3MDL, and TLV493D, among others.
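
As a minimal illustration of how such a module is read, here is an Arduino-style sketch for an HMC5883L breakout over I2C. The register map used below (bus address 0x1E, mode register 0x02, data starting at 0x03) follows the HMC5883L datasheet as I recall it; verify it, and the X-Z-Y data ordering, against your particular board before trusting the numbers:

    #include <Wire.h>

    const uint8_t ADDR = 0x1E;          // 7-bit I2C address of the HMC5883L

    void setup() {
      Serial.begin(115200);
      Wire.begin();
      Wire.beginTransmission(ADDR);
      Wire.write(0x02);                 // mode register
      Wire.write(0x00);                 // continuous-measurement mode
      Wire.endTransmission();
    }

    void loop() {
      Wire.beginTransmission(ADDR);
      Wire.write(0x03);                 // point at the first data register
      Wire.endTransmission();
      Wire.requestFrom(ADDR, (uint8_t)6);
      // Registers read out in X, Z, Y order, each a signed 16-bit big-endian value
      int16_t x = (Wire.read() << 8) | Wire.read();
      int16_t z = (Wire.read() << 8) | Wire.read();
      int16_t y = (Wire.read() << 8) | Wire.read();
      Serial.print(x); Serial.print(' ');
      Serial.print(y); Serial.print(' ');
      Serial.println(z);                // raw counts; default gain is ~1090 LSB/gauss
      delay(200);
    }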

Although not for the faint-hearted, it’s indeed possible to build fluxgate magnetometers from scratch. The process, however, demands precision winding of coils, careful core selection, stable drive electronics, and meticulous calibration—all of which can be daunting for DIY enthusiasts. These difficulties often make home-built designs prone to noise, drift, and inconsistent sensitivity.

For those who want reliable results without the engineering overhead, ready-made fluxgate magnetometer modules are a practical choice, offering calibrated performance and ease of integration straight out of the box. A good example is the FG-3+ fluxgate magnetic field sensor from FG Sensors, which provides compact and sensitive measurement capabilities for hobbyist and applied projects.

FG-3+ is a high-sensitivity fluxgate magnetic field sensor capable of measuring Earth’s magnetic field with up to 1,000-fold greater precision than conventional integrated IC solutions. Its output is a stable 5-volt rectangular pulse, with the pulse period directly proportional to the magnetic field strength.

Figure 3 The FG-3+ fluxgate magnetic field sensor integrates seamlessly into both experimental and applied projects. Source: FG Sensors

Closing thoughts

This marks the end of this quick-take post on magnetometers, presented in a deliberately unconventional style. We have only scratched the surface; the field is rich with subtleties and deflections that deserve deeper exploration. If this overview piqued your interest, I encourage you to experiment with sensor modules, study fluxgate designs, and share your findings with the engineering community.

And while magnetometers probably will not help you track UFOs, at least not yet, they remain a fascinating gateway into sensing the invisible forces all around us. The more we build, test, and exchange ideas, the stronger our collective understanding becomes. Onward to the next signal.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Magnetometers: Sensing the invisible fields appeared first on EDN.

Where co-packaged optics (CPO) technology stands in 2026

Tue, 12/30/2025 - 15:21

Co-packaged optics (CPO) technology, a key enabler for next-generation data center architectures, promises unprecedented bandwidth density and power efficiency by tightly integrating optical engines with switch silicon. But after nearly a decade of existence, where does this next-generation optical interconnect technology stand in terms of broad commercial realization?

Before we delve into CPO’s technology roadmap and its future deployment prospects, here is a brief introduction to this silicon photonics architecture and how it empowers artificial intelligence (AI), high-performance computing (HPC), and high-speed networking applications where electrical signaling over copper wires is reaching its limits.

Figure 1 CPO integrates optical transceivers directly with switch ASICs or processors to enable low-power, high-bandwidth links. Source: Broadcom

CPO, which integrates optical components directly into a single package, minimizes the electrical path length, significantly reducing signal loss, enhancing high-speed signal integrity, and containing latency. In other words, CPO enhances data throughput by leveraging high-bandwidth optical engines that deliver higher data transfer rates and are less susceptible to electromagnetic interference (EMI) than traditional copper connections.

Moreover, this silicon-photonics integration improves power efficiency by reducing the need for high-power electrical drivers, repeaters, and retimers. Case in point: by shortening the copper trace, CPO could improve the link budget enough to remove digital signal processor (DSP) or retimer functionality. That significantly reduces the overall power per bit, a key metric in AI data center management.

Below is a sneak peek at major CPO activities during 2025; it offers a glimpse of product launches and the actual readiness of CPO’s basic building blocks.

CPO’s 2025 progress report

In January 2025, Marvell announced advances in its custom XPU architecture integrated with CPO technology. The company showcased how its custom AI accelerator architecture combines XPU compute silicon, HBM, and other chiplets with its 3D SiPho engines on the same substrate using high-speed SerDes, die-to-die interfaces, and advanced packaging technologies.

That eliminates the need for electrical signals to leave the XPU package into copper cables or across a PCB. Furthermore, connections between XPUs can achieve faster data transfer rates and distances that are 100x longer than electrical cabling. Marvell’s 3D SiPho engine supports 200 Gbps electrical and optical interfaces.

Figure 2 XPU with integrated CPO enhances AI server performance by increasing XPU density from tens within a rack to hundreds across multiple racks. Source: Marvell

“AI scale-up servers require connectivity with higher signaling speeds and longer distances to support unprecedented XPU cluster sizes,” said Nick Kucharewski, senior VP and GM of the Network Switching Business Unit at Marvell. “Integrating co-packaged optics into custom XPUs is the logical next step to scale performance with higher interconnect bandwidths and longer reach.”

Four months later, in May 2025, Broadcom offered a glimpse of its third-generation 200G per lane CPO technology. The company’s CPO journey began in 2021 with the Tomahawk 4-Humboldt chipset, and the second-generation Tomahawk 5-Bailly chipset became the industry’s first volume-production CPO solution.

“Broadcom has spent years perfecting our CPO platform solutions, as evidenced by the maturity of our second-generation 100G/lane products and the ecosystem readiness,” said Near Margalit, VP and GM of the Optical Systems Division at Broadcom. The company also claims that, in addition to edge switch ASICs and optical-engine technology, it offers a comprehensive ecosystem of passive optical components, interconnects, and system solutions partners.

Figure 3 CPO offers a sustainable path forward by addressing the power constraints and physical limitations of traditional pluggable optics. Source: Broadcom

In October 2025, Broadcom claimed that Meta has tested its CPO solutions for one million link hours without a single link flap in a high-temperature lab characterization environment. A link flap is a brief connectivity disruption; it’s a critical reliability metric in high-performance data center networks.

Besides CPO heavyweights like Broadcom and Marvell, there are notable startups in the silicon photonics realm, striving to overcome electrical I/O bottlenecks. For instance, Ayar Labs, a supplier of optical interconnect solutions, has incorporated its TeraPHY optical engines into ASIC design services of Global Unichip Corp. (GUC), a Hsinchu, Taiwan-based chip developer.

In November 2025, Ayar Labs announced that it has integrated its optical engines into GUC’s advanced packaging and ASIC workflow, a critical step toward future CPO deployment. The joint design effort helps address key challenges of CPO integration: architectural, power and signal integrity, mechanical, and thermal.

Figure 4 In this CPO, two TeraPHY optical engine chiplets (left) are shown with a customer FPGA (center) within the same SoC package. Source: Ayar Labs

“The future of AI and data center scale-up will not be possible without optics to overcome the electrical I/O bottleneck,” said Vladimir Stojanovic, CTO and co-founder of Ayar Labs. “Working with GUC on advanced packaging and silicon technologies is an important step in demonstrating how our optical engines can accelerate the implementation of co-packaged optics for hyperscalers and AI scale-up.”

CPO in 2026 and beyond

While CPO proponents are eager to claim that the CPO revolution is at our doorstep, industry watchers like Yole Group see large-scale deployments between 2028 and 2030. Meanwhile, pluggable modules—inserted into the front panel of a switch sitting at the edge of the PCB—will remain competitive.

Market research firm LightCounting also predicts that optical modules will continue to account for the majority of optical links in data centers throughout the decade. At the same time, however, optical transceiver technology will continue to steadily shift toward placing the optics closer to the ASIC.

That’s because traditional pluggable optical modules are increasingly constrained by signal loss, power consumption, and latency due to long electrical traces between the switch ASIC and the optical engine. CPO overcomes these limitations by placing the optical engine much closer to the switching silicon.

The migration of the optical engine closer to the switch ASIC shortens the length of copper trace used for electrical signaling, thereby improving electrical performance. However, the seamless attachment of optical engines to switch ASICs or XPUs requires a range of packaging approaches, including 2.5D interposers, through-silicon vias (TSVs), fan-out wafer-level packaging, and 3D integration enabled by hybrid bonding.

These advanced packaging technologies are steadily evolving, and so is CPO deployment. IDTechEx projects that the CPO market will exceed $20 billion by 2036, growing at a robust CAGR of 37% from 2026 to 2036.

Related Content

The post Where co-packaged optics (CPO) technology stands in 2026 appeared first on EDN.

Guard circuit provides impedance matching

Tue, 12/30/2025 - 15:00

The first hits from a Google search of the term “guard circuit” produce a series of references to the National Guard or some security circuit. Deep in the list is a printed circuit board company that touts that it designs guard rings around critical circuits. So just what are they?

Wow the engineering world with your unique design: Design Ideas Submission Guide

Guard circuit

Analog Devices references guard shields around its op amps as well as around printed circuit traces [1]. These traces are called guard rings; they circle and shield critical circuits. Another well-known reference on electromagnetic interference (EMI) discusses guard shields in its early edition [2]. It describes the use of op-amp shields together with shielded pairs, grounded so as to eliminate differential input noise; this is accomplished by connecting the cable shield to the op-amp shield. Another section discusses guarded meters.

In this example, the recommended connection should be made so as not to cause current flow through any measuring leads. The term “guard shield” is missing from the author’s subsequent book on the same topic [3].

High-power active devices can use guard shields, in the form of a thin conductive strip placed between two electrical insulating yet thermal conductive gaskets, used to mount the device to a heat sink [4]. The guard shield is returned to the circuit common. This results in lower leakage capacitance between the device case and the heat sink, and lower parasitic currents.

Active circuit guard wiring techniques

Guarding can be done using active circuit devices such as an operational amplifier, as shown in Figure 1. The amplifier is wired as a coupler or isolator: the signal enters the high-impedance noninverting input, and the feedback is between the output and the inverting input, forming a unity-gain follower. The coaxial shield is connected to that output, which is the active shield, a low-impedance source equal to the input voltage. A large leakage resistor is shown to complete the Spice simulation. The center wire is connected to the device or circuit being measured.

Figure 1 Active circuit guarding with an op amp wired as a coupler or isolator; the feedback is between the output and the inverting input, and the output drives the cable shield.

Guard circuit applications

Another possible application for the guard technique is interfacing a pulse signal. A pulse signal’s Fourier transform has a fundamental and odd harmonics. For high-frequency signal transmission, twisted pairs such as Cat 5 are frequently used. The source and load impedance should be equal to prevent reflections. But what if this is not the case? If a guarded circuit is used, the source is connected to the operational amplifier input, which has a high input impedance, and the wire is guarded from the return path.

An example where this circuit could be employed is interfacing industrial or process fluid flow meters. A variety of meters can be interfaced, such as positive-displacement meters, which use oval gears and a pickup circuit to count revolutions, and turbine meters, which have blades internal to the meter that rotate proportionally to the flow rate.

The vortex flow meter is based on the Von Karman effect. As the fluid flows around a fixed body or blunt object, vorticity is shed alternately. The frequency of this vortex shedding is proportional to the fluid velocity. This signal can be sensed in several ways and is a pulse signal.

The Coriolis mass flow meters make use of two vibrating tubes. Flow through the tubes causes Coriolis forces to twist the tubes, resulting in a phase shift. The time difference between the waves is measured and is directly proportional to the mass flow rate.

All these meters have a calibration factor, K, a constant relating pulse count to fluid volume; for example, K = 800 pulses per gallon. The pulses, electrical circuits, and internal resistances can vary depending on the meter. There are a variety of signal levels as well as input and output resistances between these meters and the input circuit cards.

A frequent application for these meters is charging a known fluid volume into a tank. The most accurate method is to count the pulses, up or down, in an industrial controller. The alternative, converting the pulses to an analog flow rate signal and integrating it, is subject to circuit inaccuracies and, assuming the operation is done in an industrial controller, to scan sampling errors.
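As a minimal sketch of that pulse-counting arithmetic (the K factor of 800 pulses per gallon is the example value above; the batch size, loop structure, and stubbed pulse source are hypothetical, not taken from any particular controller):

  /* Batch charging by pulse counting: a minimal sketch.
     K = 800 pulses per gallon (example value from the text);
     the batch size and pulse source are hypothetical. */
  #include <stdio.h>

  #define K_PULSES_PER_GALLON 800UL

  int main(void)
  {
      unsigned long target_gallons = 50;  /* hypothetical batch size */
      unsigned long remaining = target_gallons * K_PULSES_PER_GALLON;

      while (remaining > 0) {
          /* On real hardware, block here until the guarded
             pulse input registers one meter pulse. */
          remaining--;                    /* one pulse = 1/K gallon */
      }
      /* close_valve(); -- hypothetical actuator call */
      printf("Batch complete: %lu gallons delivered\n", target_gallons);
      return 0;
  }

Counting whole pulses directly in this fashion sidesteps the analog-conversion and scan-sampling errors noted above.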

Figure 2 Active circuit guarding: a pulse interface circuit based on 200 feet of RG-58 coax cable with distributed capacitance and resistance.

Test circuit

This proposed circuit was tested with a pulse waveform typical of the meters discussed. The assumed pulse is 1-ms wide with a 3-ms period, generated by an LMC555 wired for astable operation with a 1-kΩ pull-up load to a 5-V supply.

The isolation operational amplifier is one-quarter of an LM324, wired as a noninverting unity-gain amplifier. The guard circuit is a 40-foot RG-58 coaxial cable. The amplifier is powered by its own 9-V battery. The only connection between the two supplies is the single conductor wire run parallel to the coax.

The results are shown in Figure 3. The circuit was able to provide an output the same as the input and to interface with any input impedance.

Figure 3 Pulse waveforms where yellow is the output and green is the input.

These waveforms agreed with the Spice simulation. The output closely followed the input.

Note the output waveform on an expanded time scale during the rising edge. The rapid initial rise followed by a ramp to the steady state occurs because the op amp has very high gain and initially charges the cable toward its supply voltage. While the outer coax remains charged below the steady-state output, the equivalent RC circuit keeps charging as though its final value were the supply voltage; once the input difference reaches zero, the ramp ceases.

Figure 4 The pulse waveforms where yellow is the output and green is the input. The time scale is 1/100th that of the previous figure (Figure 3).
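For perspective on the time constants involved, consider the RC product that the raw cable capacitance would present to an unguarded source; driving the shield actively largely removes this loading. A quick sketch, assuming a typical RG-58 capacitance of roughly 30 pF per foot (a datasheet-typical figure, not stated in this article) and the test circuit’s 1-kΩ pull-up as the source resistance:

  /* Unguarded-cable RC estimate: a sketch, not the author's analysis.
     The ~30 pF/ft figure is a typical RG-58 datasheet value; the
     1-kOhm source resistance matches the LMC555's pull-up load. */
  #include <stdio.h>

  int main(void)
  {
      double c_per_ft  = 30e-12;            /* F/ft, typical RG-58 */
      double r_source  = 1e3;               /* ohms, 1-k pull-up */
      double lengths[] = { 40.0, 200.0 };   /* test and simulated cable runs */

      for (int i = 0; i < 2; i++) {
          double c_total = c_per_ft * lengths[i];
          double tau = r_source * c_total;  /* RC time constant */
          printf("%.0f ft: C = %.1f nF, tau = %.1f us\n",
                 lengths[i], c_total * 1e9, tau * 1e6);
      }
      return 0;
  }

For the 1-ms pulses used here, the unguarded time constants are already small; the guard’s real payoff is preserving edge speed and signal integrity when the source or load impedances are high or unknown.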

Because almost all of these flow signal transmitters have isolated electronics, the third wire, signal common, may be the same wire as the power supply return. This power is typically supplied from the pulse-sensing electronics.

If so, that conductive path or reference is already available, usually in the same pair as the supply wire, in the form of a twisted, shielded cable. This provides protection against both magnetic- and electric-field EMI. The user only needs to provide the coaxial cable to the flow meter.

More than a shield

A guard shield is more than just a shield, whether a solid conductive surface or a braided cylinder; it works in concert with thoughtful wiring techniques applied to both active and passive components to mitigate EMI.

Related Content

References

  1. Sheingold, Daniel H., Transducer Interfacing Handbook, Analog Devices, Inc., Norwood, MA., 1980.
  2. Ott, H. W., Noise Reduction Techniques in Electronic Systems, John Wiley & Sons, New York, New York, 1988.
  3. Ott, H. W., Electromagnetic Compatibility Engineering, John Wiley & Sons, New York, New York, 2009.
  4. Morrison, R., Grounding and Shielding: Circuits and Interference, fifth edition, IEEE Press, John Wiley & Sons, New York, New York, 2007.

Bob Heider worked as an electrical and controls engineer for a large chemical company for over 30 years. This was followed by several years in academic and research roles with Washington University, St. Louis, MO. He is continuing to work part-time as well as mentor some student groups.

The post Guard circuit provides impedance matching appeared first on EDN.

2026: A technology forecast for AI’s ever-evolving bag of tricks

Mon, 12/29/2025 - 17:45

Read on for our intrepid engineer’s latest set of predictions for the year(s) to come.

As has been the case the last couple of years, we’re once again flip-flopping what might otherwise seemingly be the logical ordering of this and its companion 2025 look-back piece. I’m writing this 2026 look-ahead for December publication, with the 2025 revisit to follow, targeting a January 2026 EDN unveil. While a lot can happen between now and the end of 2025, potentially affecting my 2026 forecasting in the process, this reordering also means that my 2025 retrospective will be more comprehensive than might otherwise be the case.

Without any further ado, and as usual, ordered solely in the cadence in which they initially came out of my cranium…

AI-based engineering

Likely unsurprisingly, as will also be the case with the subsequent 2025 retrospective-to-come, AI-related topics dominate my forecast of the year(s) to come. Take “vibe coding”, which entered the engineering and broader public vernacular only in February and quickly caught fire. Here’s Wikipedia’s introduction to the associated article on the subject:

Vibe coding is an artificial intelligence-assisted software development technique popularized by Andrej Karpathy in February 2025. The term was listed on the Merriam-Webster website the following month as a “slang & trending” term. It was named Collins Dictionary‘s Word of the Year for 2025.

Vibe coding describes a chatbot-based approach to creating software where the developer describes a project or task to a large language model (LLM), which generates code based on the prompt. The developer does not review or edit the code, but solely uses tools and execution results to evaluate it and asks the LLM for improvements. Unlike traditional AI-assisted coding or pair programming, the human developer avoids examination of the code, accepts AI-suggested completions without human review, and focuses more on iterative experimentation than code correctness or structure.

Sounds great, at least in theory, right? Just tell the vibe coding service and underlying AI model what you need your software project to do; it’ll pull together the necessary code snippets, as needed, from both open-source and company-proprietary repositories all by itself. If you’re already a software engineer, it enables you to crank out more code even more quickly and easily than before.

And if you’re a software or higher-level corporate manager, you might even be able to lay off (or at least downgrade the pay of) some of those engineers in the process. Therein lies the explanation for the rapid rollout of vibe coding capabilities from both startups and established AI companies, along with evaluations and initial deployments that’ll undoubtedly expand dramatically in the coming year (and beyond). What could go wrong? Well…

Advocates of vibe coding say that it allows even amateur programmers to produce software without the extensive training and skills required for software engineering. Critics point out a lack of accountability, maintainability, and the increased risk of introducing security vulnerabilities in the resulting software.

Specifically, a growing number of companies are reportedly discovering that any upfront time-to-results benefits delivered by AI-generated code end up being counterbalanced by the need to then reactively weed out the resulting bugs, such as those generated by hallucinated routines when the vibe coding service can’t find relevant pre-existing examples (assuming the platform hasn’t just flat-out deleted its work, that is).

To that point, I’ll note that vibe coding, wherein not reviewing the resultant software line-by-line is celebrated, is an extreme variant of the more general AI-assisted programming technology category.

But even if a human being combs through the resultant code instead of just compiling and running it to see what comes out the other end, there’s still no guarantee that the coding-assistance service won’t have tapped into buggy, out-of-date software repositories, for example. And there’s always also the inevitable edge and corner cases that won’t be comprehended upfront by programmers relying on AI engines instead of their own noggins.

That all said, AI-based programming is already having a negative impact on both the job prospects for university students in the computer science curriculum and the degree-selection and pursuit aspirations of those preparing to go to college, not to mention (as already alluded to) the ongoing employment fortunes of programmers already in the job market.

And for those of you who are instead focused on hardware, whether that be chip- or board-level design, don’t be smug. There’s a fundamental reason, after all, why a few hours before I started writing this section, NVIDIA announced a $2B investment in EDA toolset and IP provider Synopsys.

Leveraging AI to generate optimized routing layouts for the chips on a PCB or the functional blocks on an IC is one thing; conventional algorithms have already been handling this for a long time. But relying on AI to do the whole design? Call me cynical…but only cautiously so.

Memory (and associated system) supply and prices

Speaking of timely announcements, within minutes prior to starting to write this section (which, to be clear, was also already planned), I saw news that Micron Technology was phasing out its nearly 30-year-old Crucial consumer memory brand so that it could redirect its not-unlimited fabrication capacity toward more lucrative HBM (high bandwidth memory) devices for “cloud” AI applications.

And just yesterday (again, as I’m writing these words), a piece at Gizmodo recommended to readers: “Don’t Build a PC Right Now. Just Don’t”. What’s going on?

Capacity constraints, that’s what. Remember a few years back, when the world went into a COVID-19 lockdown, and everyone suddenly needed to equip a home office, not to mention play computer games during off-hours?

Device sales, with many of them based on DRAM, mass storage (HDDs and/or SSDs), and GPUs, shot through the roof, and these system building blocks also then went into supply constraints, all of which led to high prices and availability limits.

Well, here we go again. Only this time, the root cause isn’t a pandemic; it’s AI. In the last few years’ worth of coverage on Apple, Google, and others’ device announcements, I’ve intentionally highlighted how much DRAM each smartphone, tablet, and computer contains, because it’s a key determinant of whether (and if so, how well) it can run on-device inference. 

Now translate that analogy to a cloud server (the more common inference nexus) and multiply both the required amount and performance of memory by multiple orders of magnitude to estimate the demand here. See the issue? And see why, given the choice to prioritize either edge or datacenter customers, memory suppliers will understandably choose the latter due to the much higher revenues and profits for a given capacity of HBM versus conventional-interface DRAM?

Likely unsurprising to my readers, nonvolatile memory demand increases are pacing those of their volatile memory counterparts. Here again, speed is key, so flash memory is preferable, although to the degree that the average mass storage access profile can be organized as sequential versus random, the performance differential between SSDs and lower cost-per-bit HDDs (which, mind you, are also increasingly supply-constrained by ramping demand) can be minimized.

Another traditional workaround involves beefing up the amount of DRAM—acting as a fast cache—between the mass storage and processing subsystems, although per the prior paragraph it’s a particularly unappealing option this time around.

I’ve still got spare DRAM DIMMs and M.2 SSD modules, along with motherboards, cases, PSUs, CPUs, graphics cards, and the like sitting around, left over from my last PC-build binge.

Beginning over the upcoming holidays, I plan to fire up my iFixit toolkits and start assembling ‘em again, because the various local charities I regularly work with are clearly going to be even more desperate than usual for hardware donations.

The same goes for smartphones and the like, and not just for fiscally downtrodden folks…brace yourselves to stick with the devices you’ve already got for the next few years. I suspect this particular constraint portion of the long-standing semiconductor boom-and-bust cadence will be with us even longer than usual.

Electricity rates and environmental impacts

Not a day seemingly goes by without me hearing about at least one (and usually multiple) new planned datacenter(s) for one of the big names in tech, either being built directly by that company or in partnership with others, and financed at least in part by tax breaks and other incentives from the municipalities in which they’ll be located (here’s one recent example).

And inevitably that very same day, I’ll also see public statements of worry coming from various local, state, and national government groups, along with public advocacy organizations, all concerned about the environmental and other degrading impacts of the substantial power and water needs demanded by this and other planned “cloud” facilities (ditto, ditto, and ditto).

Truth be told, I don’t entirely “get” the municipal appeal of having a massive AI server farm in one’s own back yard (and I’m not alone). Granted, there may be a short-duration uptick in local employment from construction activity.

The long-term increase in tax revenues coming from large, wealthy tech corporations is an equally enticing Siren’s Song (albeit counterbalanced by the aforementioned subsidies). And what politician can resist proudly touting the outcome of his or her efforts to bring Alphabet (Google)/Amazon/Apple/Meta/Microsoft/[insert your favorite buzzy company name here] to his or her district?

Regarding environmental impacts, however, I’ll “showcase” (for lack of a better word) one particularly egregious example: Elon Musk’s xAI Colossus 1 and 2 data centers in Memphis, Tennessee.

The former, a repurposed Electrolux facility, went online in September 2024, only 122 days after construction began. The latter, for which construction started this March, is forecasted, when fully equipped, to be the “First Gigawatt Datacenter In The World”. Sounds impressive, right? Well, there’s also this, quoting from Wikipedia:

At the site of Colossus in South Memphis, the grid connection was only 8 MW, so xAI applied to temporarily set up more than a dozen gas turbines (Voltagrid’s 2.5 MW units and Solar Turbines’ 16 MW SMT-130s) which would steadily burn methane gas from a 16-inch natural gas main. However, according to advocacy groups, aerial imagery in April 2025 showed 35 gas turbines had been set up at a combined 422 MW. These turbines have been estimated to generate about “72 megawatts, which is approximately 3% of the (TVA) power grid”. According to the Southern Environmental Law Center (SELC), the higher number of gas turbines and the subsequent emissions requires xAI to have a ‘major source permit’, however, the emissions from the turbines are similar to the nearby large gas-powered utility plants.

In Memphis, xAI was able to sidestep some environmental rules in the construction of Colossus, such as operating without permits for the on-site methane gas turbines because they are “portable”. The Shelby County Health department told NPR that “it only regulates gas-burning generators if they’re in the same location for more than 364 days.” In the neighborhood of South Memphis, poor air quality has given residents elevated asthma rates and lower life expectancy. A ProPublica report found that those living in this area already have four times the cancer risk that the Environmental Protection Agency (EPA) considers to be acceptable. In November 2024, the grid connection was upgraded to 150 MW, and some turbines were removed.

Along with high electricity needs, the expected water demand is over five million gallons of water per day in “… an area where arsenic pollution threatens the drinking water supply.” This is reported by the non-profit Protect Our Aquifer, a community organization founded to protect the drinking water in Memphis. While xAI has stated they plan to work with MLGW on a wastewater treatment facility and the installation of 50 megawatts of large battery storage facilities, there are currently no concrete plans in place aside from a one-page factsheet shared by MLGW.

Geothermal power

Speaking of the environment, the other night I watched a reality-calibrating episode of The Daily Show, wherein Jon Stewart interviewed Elizabeth Kolbert, Pulitzer Prize-winning author and staff writer at The New Yorker:

I say “calibrating” because it forced me to confront some uncomfortable realities regarding global warming. As regular readers may already realize, either to their encouragement or chagrin, I’m an unabashed believer in the following:

  1. Global warming is real, already here, and further worsening over time
  2. Its presence and trends are directly connected to human activity, and
  3. Those trends won’t automatically (or even quickly) stop, much less reverse course, even if that causative human activity ceases.

What I was compelled to accept after watching Stewart and Kolbert’s conversation, augmenting my existing opinion that human beings are notoriously short-sighted in their perspectives, frequently to their detriment (both near- and long-term), were conclusions such as the following:

  1. Expecting humans to willingly lower (or even flatline, versus constantly striving to upgrade) their existing standards of living for the long-term good of their species and the planet they inhabit is fruitless
  2. And given that the United States (where I live, therefore the innate perspective) is currently the world’s largest supplier of fossil fuel—specifically, petroleum and natural gas—energy sources, powerful lobbyists and other political forces will preclude serious consideration of and responses to global warming concerns, at least in the near term.

In one sense, those in the U.S. are not alone with their heads-in-the-sand stance. Ironically, albeit intentionally, the photo I included at the beginning of the prior section was of a coal-burning power plant in China.

That said, at the same time, China is also a renewable energy leader, rapidly becoming the world’s largest implementer of both wind and solar cell technology, both of which are now cheaper than fossil fuels for new power plant builds, even after factoring out subsidies. China also manufactures the bulk of the world’s lithium-based batteries, which enable energy storage for later use whenever the sun’s not shining and the wind’s not blowing.

To that latter point, though, while solar, wind, and many other renewable energy sources, such as tidal power, have various “green” attributes both in an absolute sense and versus carbon-based alternatives, they’re inconsistent in output over time. But there’s another renewable option, geothermal power, that doesn’t suffer from this impermanence, especially in its emerging “enhanced” variety. Traditional geothermal techniques were relevant only in limited locations, with consequent challenges for broader transmission of any power generated, as Wikipedia explains:

The Earth’s heat content is about 1×10¹⁹ TJ (2.8×10¹⁵ TWh). This heat naturally flows to the surface by conduction at a rate of 44.2 TW and is replenished by radioactive decay at a rate of 30 TW. These power rates are more than double humanity’s current energy consumption from primary sources, but most of this power is too diffuse (approximately 0.1 W/m² on average) to be recoverable. The Earth’s crust effectively acts as a thick insulating blanket which must be pierced by fluid conduits (of magma, water or other) to release the heat underneath.

Electricity generation requires high-temperature resources that can only come from deep underground. The heat must be carried to the surface by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation, oil wells, drilled water wells, or a combination of these. This circulation sometimes exists naturally where the crust is thin: magma conduits bring heat close to the surface, and hot springs bring the heat to the surface.

To bolster the identification of such naturally geothermal-friendly locations (the photo at the beginning of this section was taken in Iceland, for example), companies such as Zanskar are (cue irony) using AI to locate previously unknown hidden sources. I’m admittedly also pleasantly surprised that the U.S. Department of Energy just announced geothermal development funding.

And, to even more broadly deploy the technology, other startups like Fervo Energy and Quaise Energy are prototyping ultra-deep drilling techniques first pioneered with (again, cue irony) fracking to pierce the crust and get to the constant-temperature, effectively unlimited energy below it, versus relying on the aforementioned natural conduit fractures. That it can be done doesn’t necessarily mean that it can be done cost-effectively, mind you, but I for one won’t ever underestimate the power of human ingenuity.

World models (and other LLM successors)

While the prior section focused on accepting the reality of ongoing AI technology adoption and evolution, suggesting one option (of several; don’t forget about nuclear fusion) for powering it in an efficient and environmentally responsible manner, this concluding chapter is in some sense a counterpoint. Each significant breakthrough to date in deep learning implementations, while on the one hand making notable improvements in accuracy and broader capabilities, has also demanded ever-beefier compute, memory, and other system resources to accomplish its objectives…all of which require more energy to power them, along with more water to remove the heat byproduct of this energy consumption. The AI breakthrough introduced in this section is no exception.

Yann LeCun, one of the “godfathers” of AI whom I’ve mentioned here at EDN numerous times before (including just one year ago), has publicly for several years now been highly critical of what he sees as the inherent AGI (artificial general intelligence) and other limitations of LLMs (large language models) and their transformer network foundations.

A recent interview with LeCun published in the Wall Street Journal echoed many of these longstanding criticisms, adding a specific call-out for world models as their likely successor. Here’s how NVIDIA defines world models, building on my earlier description of multimodal AI:

World models are neural networks that understand the dynamics of the real world, including physics and spatial properties. They can use input data, including text, image, video, and movement, to generate videos that simulate realistic physical environments. Physical AI developers use world models to generate custom synthetic data or downstream AI models for training robots and autonomous vehicles.

Granted, LeCun has no shortage of detractors, although much of the criticism I’ve seen is directed not at his ideas in and of themselves but at his claimed tendency to overemphasize his role in coming up with and developing them at the expense of other colleagues’ contributions.

And granted, too, he’s planning on departing Meta, where he’s managed Facebook’s Artificial Intelligence Research (FAIR) unit for more than a decade, for a world model-focused startup. That said, I’ll forever remember witnessing his live demonstration, more than a decade ago, of early CNN (convolutional neural network)-based object recognition running on his presentation laptop and accelerated by a now-archaic NVIDIA graphics subsystem:

He was right then. And I’m personally betting on him again.

Happy holidays to all, and to all a good night

I wrote the following words a couple of years ago and, as was also the case last year, couldn’t think of anything better (or even different) to say this year, given my apparent constancy of emotion, thought, and resultant output. So, once again, with upfront apologies for the repetition, a reflection of my ongoing sentiment, not laziness:

I’ll close with a thank-you to all of you for your encouragement, candid feedback and other manifestations of support again this year, which have enabled me to once again derive an honest income from one of the most enjoyable hobbies I could imagine: playing with and writing about various tech “toys” and the foundation technologies on which they’re based. I hope that the end of 2025 finds you and yours in good health and happiness, and I wish you even more abundance in all its myriad forms in the year to come. Let there be Peace on Earth.

p.s…let me (and your fellow readers) know in the comments not only what you think of my prognostications but also what you expect to see in 2026 and beyond!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post 2026: A technology forecast for AI’s ever-evolving bag of tricks appeared first on EDN.

Gray codes: Fundamentals and practical insights

Mon, 12/29/2025 - 14:55

Gray codes, also known as reflected binary codes, offer a clever way to minimize errors when digital signals transition between states. By ensuring that only one bit changes at a time, they simplify hardware design and reduce ambiguity in applications ranging from rotary encoders to error correction.

This article quickly revisits the fundamentals of Gray codes and highlights a few practical hints engineers can apply when working with them in real-world circuits.

Understanding reflected binary (Gray) code

The reflected binary code (RBC), more commonly known as Gray code after its inventor Frank Gray, is a systematic ordering of binary numbers designed in a way that each successive value differs from the previous one in only a single bit. This property makes Gray code distinct from conventional binary sequences, where multiple bits may flip simultaneously during transitions.

To illustrate, consider the decimal values 1 and 2. In standard binary, they are represented as 001 and 010, requiring two bits to change when moving from one to the other. In Gray code, however, the same values are expressed as 001 and 011, ensuring that only one bit changes during the increment. This seemingly small adjustment has significant practical implications: it reduces ambiguity and minimizes the risk of misinterpretation during state changes.

Gray codes have long been valued in engineering practice. They help suppress spurious outputs in electromechanical switches, where simultaneous bit changes could otherwise produce transient errors. In modern applications, Gray coding also supports error reduction in digital communication systems. By simplifying logic operations and constraining transitions to a single bit, Gray codes provide a robust foundation for reliable digital design.

Gray code vs. natural binary

Unlike standard binary encoding, where multiple bits may change during a numerical increment, Gray code ensures that only a single bit flips between successive values. This one-bit transition minimizes ambiguity during state changes and enables simple error detection: if more than one bit changes unexpectedly, the system can flag the data as invalid.

This property is especially useful in position encoders and digital control systems, where transient errors from simultaneous bit changes can lead to misinterpretation. The figure below compares the progression of values in natural binary and Gray code, highlighting how Gray code preserves single-bit transitions across the sequence.

Figure 1 Table compares Gray code and natural binary sequences, highlighting single-bit transitions between increments. Source: Author

When it comes to reliability in absolute encoder outputs, Gray code is the preferred choice because it prevents data errors that can arise with natural binary during state transitions. In natural binary, a sluggish system response may momentarily misinterpret a change; for instance, the transition from 0011 to 0100 could briefly appear as 0111 if multiple bits switch simultaneously. Gray code avoids this issue by ensuring that only one bit changes at a time, making the output stream inherently more reliable and easier for controllers to validate in practice.
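That single-bit property also yields a cheap software sanity check: XOR consecutive samples and confirm that at most one bit differs. A minimal sketch (the function name is mine, not from any standard library):

  /* Validity check exploiting Gray code's single-bit-change property.
     A sketch: reject any sample that differs from the previous one
     in more than one bit position. */
  #include <stdio.h>

  static int is_valid_transition(unsigned prev, unsigned curr)
  {
      unsigned diff = prev ^ curr;
      /* true when zero bits (no movement) or exactly one bit differ */
      return (diff & (diff - 1)) == 0;
  }

  int main(void)
  {
      printf("%d\n", is_valid_transition(0x3, 0x2)); /* 0011 -> 0010: 1 (valid) */
      printf("%d\n", is_valid_transition(0x3, 0x4)); /* 0011 -> 0100: 0 (invalid) */
      return 0;
  }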

Furthermore, converting Gray code back to natural binary is straightforward and can be done quickly on paper. Begin by writing down the Gray code sequence and copying the leftmost bit directly beneath it. Then, add this copied bit to the next Gray code bit to the right, ignoring any carries, and place the result beside the first copied digit. Continue this process step by step across the sequence until all bits have been converted. The final row represents the equivalent natural binary value.

For example, consider the Gray code 1011.

  • Copy the leftmost bit → binary begins as 1.
  • Next, add (XOR) the copied bit with the next Gray bit: 1 XOR 0 = 1 → binary becomes 11.
  • Continue: 1 XOR 1 = 0 → binary becomes 110.
  • Finally: 0 XOR 1 = 1 → binary becomes 1101.

Thus, the Gray code 1011 corresponds to the natural binary value 1101.

Gray code 1011 is not a standard weighted code, yet in a 4‑bit sequence, it corresponds to the decimal value 13. Its natural binary equivalent, 1101, also evaluates to 13, as shown below:

(1×2³) + (1×2²) + (0×2¹) + (1×2⁰) = 8 + 4 + 0 + 1 = 13

Since both representations yield the same decimal result, the conversion is verified as correct.

Figure 2 Table demonstrates step‑by‑step conversion of Gray code 1011 into its natural binary equivalent 1101. Source: Author
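The same XOR procedure maps directly into code. Here is a minimal sketch of both conversion directions (the function names are illustrative, not from a standard library):

  /* Gray-code conversions: a sketch of the XOR method described above. */
  #include <stdio.h>

  /* Natural binary to Gray: XOR each bit with its higher neighbor. */
  unsigned bin_to_gray(unsigned b)
  {
      return b ^ (b >> 1);
  }

  /* Gray to natural binary: propagate the XOR down from the MSB,
     mirroring the paper-and-pencil method shown in Figure 2. */
  unsigned gray_to_bin(unsigned g)
  {
      unsigned b = 0;
      for (; g; g >>= 1)
          b ^= g;
      return b;
  }

  int main(void)
  {
      unsigned bin = gray_to_bin(0xB);                 /* Gray 1011 */
      printf("Gray 1011 -> binary %u (1101)\n", bin);  /* prints 13 */
      printf("Binary 1101 -> Gray 0x%X (1011)\n", bin_to_gray(0xD));
      return 0;
  }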

Gray code output in rotary encoders

The real difference in a Gray code encoder lies in its output. Instead of returning a binary number that directly reflects the rotor’s position, the encoder produces a Gray code value.

As discussed earlier, Gray code differs fundamentally from binary: it’s not a weighted number system, where each digit consistently contributes to the overall value. Rather, it is an unweighted code designed so that only one bit changes between successive states.

In contrast, natural binary often requires multiple bits to flip simultaneously when incrementing or decrementing, which can introduce ambiguity during transitions. In extreme cases, this ambiguity may cause a controller to misinterpret the encoder’s position, leading to errors in system response. By limiting changes to a single bit, Gray code minimizes these risks and ensures more reliable state detection.

Shown below is a 4-bit Gray code output rotary encoder. As can be seen, it has four output terminals labeled 1, 2, 4, and 8, along with a Common terminal. The numbers 1-2-4-8 represent the bit positions in a 4-bit code, with each terminal corresponding to one output line of the encoder. As the rotor turns, each line switches between high and low according to the Gray code sequence, producing a unique 4-bit pattern for every position.

Figure 3 Datasheet snippet shows the terminal ID of the 25LB22-G-Z 4-bit Gray code encoder. Source: Grayhill

The Common terminal serves as the reference connection—typically ground or supply return—against which the four output signals are measured. Together, these terminals provide the complete Gray code output that can be read by a controller or logic circuit to determine the encoder’s angular position.

Sidenote: Hexadecimal output encoders

While many rotary encoders provide Gray code or binary outputs, some devices are designed to deliver signals in hexadecimal format. In these encoders, the rotor position is represented by a 4-bit binary word that maps directly to hexadecimal digits (0–F). This approach simplifies integration with digital systems that already process data in hex, such as microcontrollers or diagnostic interfaces.

Unlike Gray code, hexadecimal outputs do not guarantee single-bit transitions between states, so they are more prone to ambiguity during mechanical switching. However, they remain useful in applications where compact representation and straightforward decoding outweigh the need for transition reliability.

Microcontroller decoding of Gray code

There are several ways to decode Gray codes, but the most common way today is to feed the output bits into a microcontroller and let software handle the counting. In practice, the program reads the signals from the rotary encoder through I/O ports and first converts the Gray code into a binary value. That binary value is then translated into binary-coded decimal (BCD), which provides a convenient format for driving digital displays.

From there, the microcontroller can update a seven-segment display, LCD, or other interface to present the rotor’s position in a clear decimal form. This software-based approach not only simplifies hardware design but also offers flexibility to scale with higher-resolution encoders or integrate additional processing features.
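As a sketch of that software flow (the port-read stub and masks are hypothetical placeholders for whatever I/O API your microcontroller provides; the Gray-to-binary step is the same XOR method shown earlier):

  /* Microcontroller-style decode path: a sketch only. */
  #include <stdio.h>

  static unsigned read_encoder_port(void)
  {
      /* Placeholder: real hardware would sample four input pins
         wired to the encoder's 1-2-4-8 terminals. */
      return 0xB;   /* pretend the shaft sits at Gray 1011 */
  }

  int main(void)
  {
      unsigned gray = read_encoder_port() & 0xF;      /* 4-bit Gray sample */

      unsigned bin = 0;                               /* Gray -> binary */
      for (unsigned g = gray; g; g >>= 1)
          bin ^= g;

      unsigned bcd = ((bin / 10) << 4) | (bin % 10);  /* binary -> packed BCD */

      printf("Gray 0x%X -> decimal %u -> BCD 0x%02X\n", gray, bin, bcd);
      /* The BCD nibbles can then drive a two-digit seven-segment display. */
      return 0;
  }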

On a personal note, my first experiment with a Gray code encoder involved decoding the outputs using hardware logic circuits rather than software.

This post has aimed to peel back a few layers of Gray codes, offering both context and clarity. Of course, there is much more to explore—and those deeper dives will come in time. For now, I invite you to share your own experiences, insights, or questions about Gray codes in the comments. Your perspectives can spark the next layer of discussion and help shape future explorations.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Gray codes: Fundamentals and practical insights appeared first on EDN.

Crowbar circuits: Revisiting the classic protector

Fri, 12/26/2025 - 16:00

Crowbar circuits have long been the go-to safeguard against overvoltage conditions, prized for their simplicity and reliability. Though often overshadowed by newer protection schemes, the crowbar remains a classic protector worth revisiting.

In this quick look, we will refresh the fundamentals, highlight where they still shine, and consider how their enduring design continues to influence modern power systems.

Why “crowbar”?

The name comes from the vivid image of dropping a metal crowbar across live terminals to force an immediate short. That is exactly what the circuit does—when an overvoltage is detected, it slams the supply into a low-resistance state, tripping a fuse or breaker and protecting downstream electronics. The metaphor stuck because it captures the brute-force simplicity and fail-safe nature of this classic protection scheme.

Figure 1 A crowbar protection circuit responds to overvoltage by actively shorting the power supply and disconnecting it to protect the load from damage. Source: Author

Crowbars in the CRT era: When fuses took the fall

In the era of bulky cathode-ray tube (CRT) televisions, power supply reliability was everything. Designers knew that a single regulator fault could unleash destructive voltages into the horizontal output stage or even the CRT itself. The solution was elegantly brutal: the crowbar circuit. Built around a thyristor or silicon-controlled rectifier (SCR), it sat quietly until the supply exceeded the preset threshold.

Then, like dropping a literal crowbar across the rails, it slammed the output into a dead short, blowing the fuse and halting operation in an instant. Unlike softer clamps such as Zener diodes or metal oxide varistors, the crowbar’s philosophy was binary—either safe operation or total shutdown.

For service engineers, this protection often meant the difference between replacing a fuse and replacing an entire deflection board. It was a design choice that reflected the pragmatic toughness of the CRT era: it’s better to sacrifice a fuse than a television.

Beyond CRT televisions, crowbar protection circuits find application in vintage computers, test and measurement instruments, and select consumer products.

Crowbar overvoltage protection

A crowbar circuit is essentially an overvoltage protection mechanism. It remains widely used today to safeguard sensitive electronic systems against transients or regulator failures. By sensing an overvoltage condition, the circuit rapidly “crowbars” the supply—shorting it to ground—thereby driving the source into current limiting or triggering a fuse or circuit breaker to open.

Unlike clamp-type protectors that merely limit voltage to a safe threshold, the crowbar approach provides a decisive shutdown. This makes it particularly effective in systems where even brief exposure to excessive voltage can damage semiconductors, memory devices, or precision analog circuitry. The simplicity of the design, often relying on a silicon-controlled rectifier or triac, ensures fast response and reliable action without adding significant cost or complexity.

For these reasons, crowbar protection continues to be a trusted safeguard in both legacy and modern designs—from consumer electronics to laboratory instruments—where resilience against unpredictable supply faults is critical.

Figure 2 Basic low-power DC crowbar illustrates circuit simplicity. Source: Author

As shown in Figure 2, an overvoltage across the buffer capacitor drives the Zener diode into conduction, triggering the thyristor. The capacitor is then shorted, producing a surge current that blows the local fuse. Once latched, the thyristor reduces the rail voltage to its on-state level, and the sustained current ensures safe disconnection.
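The trip point of such a circuit is set, to a first approximation, by the Zener voltage plus the thyristor’s gate trigger voltage. A back-of-envelope sketch (the component values here are hypothetical, not those of Figure 2):

  /* Crowbar trip-point estimate: V_trip ~ V_Z + V_GT.
     Example values are hypothetical, not taken from Figure 2. */
  #include <stdio.h>

  int main(void)
  {
      double v_zener = 6.2;   /* Zener breakdown voltage, volts */
      double v_gt    = 0.7;   /* typical SCR gate trigger voltage, volts */
      double v_trip  = v_zener + v_gt;

      printf("Estimated crowbar trip voltage: %.1f V\n", v_trip);
      /* A 5-V rail protected this way would crowbar near 6.9 V:
         above normal supply tolerance, below damage thresholds. */
      return 0;
  }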

Next is a simple practical example of a crowbar circuit designed for automotive use. It protects sensitive electronics if the vehicle’s power supply voltage rises above the safe setpoint, such as from a load dump or alternator regulation failure. The circuit monitors the supply rail, and when the voltage exceeds the preset threshold, it drives a dead short across the rails. The resulting surge current blows the local fuse, shutting down the supply before connected circuitry can be damaged.

Figure 3 Practical automotive crowbar circuit protects connected device via local fuse action. Source: Author

Crowbar protection: SCR or MOSFET?

Crowbar protection can be implemented with either an SCR or a MOSFET, each with distinct tradeoffs.

An SCR remains the classic choice: once triggered by a Zener reference, it latches into conduction and forces a hard short across the supply rail until the local fuse opens. This rugged simplicity is ideal for high-energy faults, though it lacks automatic reset capability.

A MOSFET-based crowbar, by contrast, can be actively controlled to clamp or disconnect the rail when overvoltage is detected. It offers faster response and lower on-state voltage, which is valuable for modern low-voltage digital rails, but requires more complex drive circuitry and may be less tolerant of large surge currents.

I remember working with the LTM4641 μModule regulator, notable for its built-in N-channel overvoltage crowbar MOSFET driver that safeguards the load.

GTO thyristors and active crowbar protection

On a related note, gate turn-off (GTO) thyristors have also been applied in crowbar protection, particularly in high-power systems. Unlike a conventional SCR that latches until the fuse opens or power is removed, a GTO can be actively turned off through its gate, allowing controlled reset after an overvoltage event. This capability makes GTO-based crowbars attractive in industrial and traction applications where sustained shorts are undesirable.

Importantly, GTO thyristors enable “active” crowbars, in contrast to conventional SCRs that latch until power is removed. That is, an active crowbar momentarily shorts the supply during a transient, and gate-controlled turn-off then restores normal operation without intervention. In practice, asymmetric GTO (A-GTO) thyristors are preferred in crowbar protection, while symmetric (S-GTO) types see limited use due to higher losses.

However, their demanding gate-drive requirements and limited surge tolerance have restricted their use in low-voltage supplies, where SCRs remain dominant and MOSFETs or IGBTs now provide more practical and controllable alternatives.

Figure 4 A fast asymmetric GTO thyristor exemplifies speed and strength for demanding power applications. Source: ABB

A wrap-up note

Crowbar circuits may be rooted in classic design, but their relevance has not dimmed. From safeguarding power supplies in the early days of solid-state electronics to standing guard in today’s high-density systems, they remain a simple yet decisive protector. Revisiting them reminds us that not every solution needs to be complex—sometimes, the most enduring designs are those that do one job exceptionally well.

As engineers, we often chase innovation, but it’s worth pausing to appreciate these timeless building blocks. Crowbars embody the principle that reliability and clarity of purpose can outlast trends. Whether you are designing legacy equipment or modern platforms, the lesson is the same: protection is not an afterthought, it’s a foundation.

I will close for now, but there is more to explore in the enduring story of circuit protection. Stay tuned for future posts where we will continue connecting classic designs with modern challenges.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Crowbar circuits: Revisiting the classic protector appeared first on EDN.

Does the cold of deep space offer a viable energy-harvesting solution?

Fri, 12/26/2025 - 15:00

I’ve always been intrigued by “small-scale” energy harvesting where the mechanism is relatively simple while the useful output is modest. These designs, which may be low-cost but may also use sophisticated materials and implementations, often make creative use of what’s available, generating power on the order of about 50 milliwatts.

These harvesting schemes often have the first-level story of getting “a little something for almost nothing” until you look more deeply into the details. Among the harvestable sources are incidental wind, heat, vibration, incremental motion, and even sparks.

The most recent such harvesting arrangement I saw is another scheme to exploit the thermal differential between the cold night sky and Earth’s warmer surface. The principle is not new at all (see References)—it has been known since the mid-18th century—but it returns in new appearances.

This approach, from the University of California at Davis, uses a Stirling engine as the transducer between thermal energy and mechanical/electrical energy, Figure 1. It was mounted on a flat metal plate embedded into the Earth’s surface for good thermal contact while pointing at the sky.

Figure 1 Nighttime radiative cooling engine operation. (A) Schematic of engine operation at night. Top plate radiatively couples to the night sky and cools below ambient air temperature. Bottom plate is thermally coupled to the ground and remains warmer, as radiative access to the night sky is blocked by the aluminum top plate. This radiative imbalance creates the temperature differential that drives the engine. (B) Downwelling infrared radiation from the sky and solar irradiance are plotted throughout the evening and into the night on 14 August 2023. These power fluxes control the temperature of the emissive top plate. The fluctuations in the downwelling infrared are caused by passing clouds, which emit strongly in the infrared due to high water content. (C) Temperatures of the engine plates compared to ambient air throughout the run. The fluctuations in the top plate and air temperature match the fluctuations in the downwelling infrared. The average temperature decreases as downwelling power decreases. (D) Engine frequency and temperature differential remain approximately constant. Temporary increases in downwelling infrared, which decrease the engine temperature differential, are physically manifested in a slowing of the engine.

Unlike other thermodynamic cycles (such as Rankine, Brayton, Otto, or Diesel), which require phase changes, combustion, or pressurized systems, the Stirling engine can operate passively and continuously with modest temperature differences. This makes it especially suitable for demonstrating mechanical power generation using passive thermal heat from the surroundings and radiative cooling without the need for fuels or active control systems.

Most engines which use thermal differences first generate heat from some source to be used against the cooler ambient side. However, there’s nothing that says the warmer side can’t be at the ambient temperature while the other side is colder relative to the ambient one.

Their concept and execution are simple, which is always attractive. The Stirling engine (essentially a piston driving a flywheel) is put on a 30 × 30 centimeter flat-metal panel that acts as a heat-radiating antenna. The entire assembly sits on the ground outdoors at night; the ground acts as the warm side of the engine as the antenna channels the cold of space.

Under best-case operation, the system delivered about 400 milliwatts of electrical power per square meter and was used to drive a small motor. That is an efficiency of about 0.4% compared to the theoretical maximum. Depending on your requirements, that areal power density is somewhere between not useful and useful enough for small tasks such as charging a phone or powering a small fan to ventilate greenhouses, Figure 2.

Figure 2 Power conversion analysis and applications of radiative cooling engine. (A) Mechanical power plotted against temperature differential for various cold plate temperatures (TC). (Error bars show standard deviation.). Solid lines represent potential power corresponding to different quality engines denoted by F, the West number. (B) Voltage sweep across the attached DC motor shows maximum power point for extraction of mechanical to electrical power conversion at various engine temperature differentials (note: typical passive sign convention for electrical circuits is used). Solid red lines are quadratic fits of the measured data points (colored circles). Inset shows the dc motor mounted to the engine. (C) Bar graph denotes the remaining available mechanical power and the electrical power extracted (plus motor losses) when the DC motor is attached. (D) Axial fan blade attachment shown along with the hot-wire anemometer used to measure air speed. (E) Air speed in front of the fan is mapped for engine hot and cold plate temperatures of 29°C and 7°C, respectively. White circles indicate the measurement points. (F) Maximum air speed (black dots) and frequency (blue dots) as a function of engine temperature differential. Shaded gray regions show the range of air speeds necessary to circulate CO2 to promote plant growth inside greenhouses and the ASHRAE-recommended air speed for thermal comfort inside buildings.

Of course, there are other considerations, such as harvesting only at night (hmmm…maybe as a complement to solar cells?) and the need for a clear sky with dry air for maximum performance. Also, the assembly is, by definition, fully exposed to rain, sun, and wind, which will likely shorten its operating life.

The instrumentation they used was also interesting, as was the thermal-physics analysis they did as part of the graduate-level project. The flywheel of the engine was not only an attention-getter; its inherent “chopping” action also made it easy to count engine revolutions using a basic light-source and photosensor arrangement. The analysis based on the thermal cycle of the Stirling engine concluded that its Carnot-cycle efficiency was about 13%.
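That revolution-counting trick is easy to replicate: gate the photosensor pulses over a fixed interval and scale to RPM. A sketch (the pulse count is stubbed, and the gate time and chops-per-revolution values are assumptions, not figures from the paper):

  /* Flywheel speed from a photosensor 'chopper': a sketch.
     On real hardware the count would come from a photodetector
     interrupt or a timer-capture peripheral. */
  #include <stdio.h>

  int main(void)
  {
      unsigned pulses        = 42;    /* pulses counted in the window (stub) */
      double   gate_s        = 10.0;  /* gate time, seconds (assumed) */
      unsigned chops_per_rev = 1;     /* interruptions per turn (assumed) */

      double rpm = ((double)pulses / chops_per_rev) / gate_s * 60.0;
      printf("Flywheel speed: %.1f RPM\n", rpm);
      return 0;
  }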

This is all interesting, but where does it stand on the scale of viability and utility? On one side, it is a genuine source of mechanical and follow-up electrical energy at very low cost. But that is only under very limited conditions with real-world limitations.

I think this form of harvesting gets attention because, as I noted upfront, it offers some usable energy at little apparent cost. Further, it’s very understandable, requires no exotic materials or components, and comes with the dramatic visual of the Stirling engine and its flywheel. It tells a good story that gets coverage and likely those follow-on grants. They have also filed a provisional patent related to the work; I’d like to see the claims they make.

But when you look at its numbers closely and reality becomes clearer, some of that glamour fades. Perhaps it could be used for a one-time storyline in a “MacGyver-like” TV show script where the hero improvises such a unit, uses it to charge a dead phone, and is able to call for help. Screenwriters out there, are you paying attention?

Until then, you can read their full, readable technical paper “Mechanical power generation using Earth’s ambient radiation,” published in the prestigious journal Science Advances from the American Association for the Advancement of Science; it was even featured on the cover, Figure 3. The “free” aspects of this harvesting and its photo-friendly design really do get attention!

Figure 3 The harvesting innovation was considered sufficiently noteworthy to be featured as the cover and lead story of Science Advances.

What’s your view on the utility and viability of this approach? Do you see any strong, ongoing applications?

Related Content

References

The post Does the cold of deep space offer a viable energy-harvesting solution? appeared first on EDN.

Ignoring the regulator’s reference redux

Thu, 12/25/2025 - 15:00

Stephen Woodward’s “Ignoring the regulator’s reference” Design Idea (DI) (see Figure 1) is an excellent, working example of how to include a circuit in the feedback loop of an op amp to support the stabilization of the circuit’s operating point. This approach has also been seen previously in “Improve the accuracy of programmable LM317 and LM337-based power sources” and numerous other places [1][2][3]. I’ll refer to his DI as “the DI” in subsequent text.

Figure 1 The DI’s Figure 1 schematic has been redrawn to emphasize the positioning of the U1 regulator in the A1 op amp’s feedback loop. The Vdac signal controls U1 while ignoring its internal reference voltage.

Wow the engineering world with your unique design: Design Ideas Submission Guide

A few minor tweaks optimize this circuit’s dynamic performance and leave the design equations and comments in the DI unchanged. Let’s consider the case in which U1’s reference voltage is 0.6 V, Vdac varies from 0 to 3 V, and Vo varies from 5 to 0 V.

The DI tells us that in this case, R1a is not populated and that R1b is 150k. It also mentions driving Figure 1’s Vdac from the DACout signal of Figure 2, also found in “A nice, simple, and reasonably accurate PWM-driven 16-bit DAC.”

Figure 2 Each PWM input is an 8-bit DAC. VREF should be at least 3.0 V to support the SN74AC04 output resistances calculable from its datasheet. Ca and C1 – C3 are COG/NPO.

The Figure 2 PWMs could produce a large step change, causing DACout and therefore Vdac to quickly change from 0 to 3 V.

Figure 3 shows how Vo and the output of A1 react to this while driving a hypothetical U1, which is capable of producing an anomaly-free [4] 0-volt output.

Figure 3 Vo and A1’s output from Figure 1 react to a step change in Vdac.

Even though Vo eventually does what it is supposed to, there are several things not to like about these waveforms. Vo exhibits an overshoot and would manifest an undershoot if it didn’t clip at the negative rail (ground). The output of A1 also exhibits clipping and overshooting. Why are these things happening?

The answer is that the current flowing through R5 also flows through R3, causing an immediate change in the output voltage of A1. That change causes a proportional current to flow through R4. However, the presence of C2 prevents an immediate change in Vo and delays compensatory feedback from arriving at A1’s non-inverting input. How can this delay be avoided?

Shorting out R3 makes matters worse. The solution is to remove C2, speeding up the ameliorative feedback. Figure 4 shows the results.

Figure 4 With C2 eliminated, so are the clipping and the over- and undershoots. The A1 output moves only a few millivolts because of the large DC gain of the regulator, and because it is no longer necessary to charge C2 through R4 in response to an input change.

Vo now settles to ½ LSbit of a 16-bit source in 2.5 ms. Changing C3 to 510 pF (10% COG/NPO) reduces that time to 1.4 ms. Smaller values of C3 provide little further advantage.

The Vo-to-VSENSE feedback becomes mostly resistive above 0.159 / (R · C3) Hz, where:

R = R3 + R5 · R1a / (R5 + R1a)

In this case, that’s 1600 Hz, well below the unity gain frequency of pretty much any regulator, and so there should be no stability issues for the overall circuit. Note that A1’s output remains almost exactly equal to the regulator’s reference voltage. This, and the freedom to choose the R5/R1a and R2/R1b ratios, leaves open the option of using an op amp whose inputs and output needn’t approach its positive supply rail.
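To make that corner-frequency arithmetic concrete (the 0.159 factor is 1/2π), here is a sketch. The resistor values below are placeholders chosen for illustration, not the DI’s actual parts; an unpopulated R1a is treated as infinite:

  /* Feedback corner frequency: f = 1/(2*pi*R*C3),
     R = R3 + (R5 || R1a). Values are illustrative placeholders. */
  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      const double PI = 3.14159265358979;
      double r3 = 47e3, r5 = 150e3;   /* ohms, hypothetical values */
      double r1a = INFINITY;          /* R1a not populated */
      double c3 = 510e-12;            /* farads, per the text */

      double r_par = isinf(r1a) ? r5 : (r5 * r1a) / (r5 + r1a);
      double r = r3 + r_par;
      double f = 1.0 / (2.0 * PI * r * c3);

      printf("Feedback corner frequency: %.0f Hz\n", f);  /* ~1.6 kHz */
      return 0;
  }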

The (original) DI is a solid design that obtains some dynamic performance benefits from reducing the value of one capacitor and eliminating another.

Related Content

References

  1. https://en.wikipedia.org/wiki/Current_mirror#Feedback-assisted_current_mirror
  2. https://www.onsemi.com/pdf/datasheet/sa571-d.pdf see section on compandor
  3. https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/14/CircuitCookbook_2D00_OpAmps.pdf see triangle wave generator, page 90
  4. Enabling a variable output regulator to produce 0 volts? Caveat, designer!

The post Ignoring the regulator’s reference redux appeared first on EDN.

Active two-way current mirror

Wed, 12/24/2025 - 15:00

EDN Design Ideas (DI) published a design of mine in May of 2025 for a passive two-way current mirror topology that, in analogy to optical two-way mirrors, can reflect or transmit. 

That design comprises just two BJTs and one diode. But while its simplicity is nice, its symmetry might not be. That is to say, not precise enough for some applications.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Fortunately, as often happens when the precision of an analog circuit falls short, and the required performance can’t suffer compromise, a fix can consist of adding an RRIO op amp. Then, if we substitute two accurately matched current-sensing resistors and a single MOSFET for the BJTs, the result is the active two-way current mirror (ATWCM) as shown in Figure 1.

Figure 1 The active two-way current sink/source mirror. The input current source is mirrored as a sink current when D1 is forward biased, and transmitted as a source current when D1 is reverse biased.

Figure 2 shows how the ATWCM operates when D1 is forward-biased, placing it in mirror mode.

Figure 2 ATWCM in mirror mode, I1 sink current generates Vr, forcing A1 to coax Q1 to mirror I2 = I1.

The operation of the ATWCM in mirror mode couldn’t be more straightforward. Vr = I1R wired to A1’s noninverting input forces it to drive Q1 to conduct I2 such that I2R = I1R. 

Therefore, if the resistors are equal, A1’s accuracy-limiting parameters (offset voltage, gain-bandwidth, bias and offset currents, etc.) are adequately small, and Q1 does not saturate, I1 = I2 just as precisely as you like.

Okay, so I lied.  Actually, the operation of the ATWCM in transmission mode is even simpler, as Figure 3 shows.

Figure 3 ATWCM in transmission mode. A reverse-biased D1 means I1 has nowhere to go except through the resistors and (saturated and inverted) Q1, where it is transmitted back out as I2.

I1 flowing through the 2R net resistance forces A1 to rail positive, saturating Q1 and providing a path back to the I2 pin. Since Q1 is biased inverted, its body diode will close the circuit from I1 to I2 until A1 takes over. A1 has nothing to do but act as a comparator.

Flip D1 and substitute a PFET for Q1, and of course, a source/sink mirror results, as shown in Figure 4.

Figure 4 Source/sink two-way mirror with D1 flipped in the opposite direction and Q1 replaced with a PFET.

Figure 5 shows the circuit in Figure 4 running a symmetrical rail-to-rail tri-wave and square-wave output multivibrator.

Figure 5 Accurately symmetrical tri-wave and square-wave result from inherent A1Q2 two-way mirror symmetry.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Active two-way current mirror appeared first on EDN.

Aiding drone navigation with crystal sensing

Wed, 12/24/2025 - 14:31

Designers are looking to reduce the cost of drone systems for a wide range of applications but still need to provide accurate positioning data. This, however, is not as easy as it might appear.

There are several satellite positioning systems, from the U.S.-backed GPS and European Galileo to NavIC in India and BeiDou in China, providing data down to the meter. However, these need to be augmented by an inertial measurement unit (IMU) to provide the more accurate positioning data that is vital.

Figure 1 An IMU is vital for the precision control of the drone and peripherals like gimbal that keeps the camera steady. Source: Epson

An IMU is typically built around a sensor that can measure movement in six directions, along with an accelerometer that detects the amount of movement. The developer of an inertial measurement system (IMS) then combines this data, using custom algorithms that often incorporate machine learning, with the satellite data and other data from the drone system.
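For a flavor of what that fusion step can look like, below is a minimal complementary-filter sketch that blends gyroscope and accelerometer readings into a tilt estimate. This is a generic textbook technique, not any vendor's production algorithm, and the blend factor and sample rate are assumptions:

```python
import math

# Generic complementary filter: gyro integration tracks fast motion, while
# the accelerometer's gravity vector corrects the gyro's slow drift.

ALPHA = 0.98  # assumed blend factor: trust the gyro short-term, accel long-term
DT = 0.01     # assumed 100-Hz sample period, in seconds

def update_pitch(pitch_deg, gyro_rate_dps, accel_x_g, accel_z_g):
    """One fusion step for the pitch angle, in degrees."""
    gyro_pitch = pitch_deg + gyro_rate_dps * DT                   # integrate rate
    accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))  # gravity reference
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Example: one second of samples with a constant 5-deg/s pitch rate reported
# by the gyro while the accelerometer still reads level.
pitch = 0.0
for _ in range(100):
    pitch = update_pitch(pitch, gyro_rate_dps=5.0, accel_x_g=0.0, accel_z_g=1.0)
print(f"Estimated pitch after 1 s: {pitch:.2f} deg")
```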

The IMU is vital for the precision control of the drone and peripherals such as the gimbal that keeps the camera steady, providing accurate positioning data and compensating for the vibration of the drone. This stability can be implemented in a number of ways with a variety of sensors, but providing accurate information with low noise and high stability for as long as possible has often meant the sensor is expensive with high power consumption.

This is increasingly important for medium altitude long endurance (MALE) drones. These aircraft are designed for long flights at altitudes of between 10,000 and 30,000 feet, and can stay airborne for extended periods, sometimes over 24 hours. They are commonly used for military surveillance, intelligence gathering, and reconnaissance missions through wide coverage.

These MALE drones need a camera system that is reliable and stable in operation across a wide range of temperatures, providing accurate tagging of the position of any data captured.

One way to deliver a highly accurate IMU at lower cost is to use a piezoelectric quartz crystal. This is well-established technology in which an oscillating field is applied across the crystal and changes in motion are picked up with differential contacts across it.

For a highly stable IMU for a MALE drone, three crystals are used, one for each axis, stimulated at different frequencies in the kilohertz range to avoid crosstalk. The differential output cancels out noise in the crystal and the effect of vibrations.

Precision engineering of piezoelectric crystals for high-stability IMUs

Using a crystal method provides data with low noise, high stability, and low variability. The highly linear response of the piezoelectric crystal enables high-precision measurement of various kinds of movement over a wide range from slow to fast, allowing the IMU to be used in a broad array of applications.

An end-to-end development process allows the design of each crystal, along with its differential contacts, to be optimized for the frequencies used in the navigation application. These are optimized together with the packaging and assembly to provide the highly linear performance that remains stable over the lifetime of the sensor.

The process draws on 25 years of experience with wet-etch lithography for the sensors, covered by dozens of patents. That produces yields in the high nineties, with average bias variation down to 0.5% from unit to unit.

An initial cut angle on the quartz crystal achieves the frequency balance for the wafer; then the wet-etch lithography is applied to create a four-point suspended cantilever structure that is 2 mm long. Indentations are etched into the structure for the wire bonds to the outside world.

The four-point structure is a double tuning fork with detection tines and two larger drive tines in the center. The differential output cancels out spurious noise or other signals.
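A toy numerical model makes that cancellation concrete; the frequencies and amplitudes below are assumptions for illustration, not device parameters:

```python
import math

# Toy model of differential sensing: vibration common to both detection tines
# cancels in the difference, while the rotation signal is anti-phase and adds.

def tine_outputs(t, sig_amp, vib_amp, f_sig=32e3, f_vib=1e3):
    """Hypothetical tine signals: anti-phase rotation term plus common vibration."""
    sig = sig_amp * math.sin(2 * math.pi * f_sig * t)  # rotation-induced signal
    vib = vib_amp * math.sin(2 * math.pi * f_vib * t)  # common-mode vibration
    return sig + vib, -sig + vib                       # tine A, tine B

a, b = tine_outputs(t=1e-6, sig_amp=1.0, vib_amp=5.0)
print(f"Differential output: {a - b:.4f} (vibration gone, signal doubled)")
```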

This is simpler to make than micromachined MEMS structures and provides better long-term stability and less variability across devices.

The differential structure and low crosstalk allow three devices to be mounted closely together without interfering with each other, which helps to reduce the size of the IMU. A low pass filter helps to reduce any risk of crosstalk.

The six-axis crystal sensor is then combined with an accelerometer for the IMU. For the MALE drone gimbal applications, this accelerometer must have a high dynamic range to handle the speed and vibration effects of operation in the air. The linearity advantage of using a piezoelectric crystal provides accuracy for sensing the rotation of the sensor and does not degrade with higher speeds.

Figure 2 Piezoelectric crystals bolster precision and stability in IMUs. Source: Epson

This commercial accelerometer is optimized to provide the higher dynamic range and sits alongside a low-power microcontroller and temperature sensors, which are not common in the low-cost IMUs currently used by drone makers.

The microcontroller technology has been developed for industrial sensors over many years and reduces the power consumption of peripherals while maintaining high performance.

The microcontroller provides several types of compensation, including temperature and aging, and so delivers a simple, stable, and high-quality output for the IMU maker. Quartz also provides very predictable operation across a wide temperature range, from -40°C to +85°C, so the compensation on the microcontroller is sufficient; no further compensation is required in the IMU, reducing the compute requirements.
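As a sketch of what such compensation might look like in firmware, the snippet below applies a polynomial bias correction against temperature. The coefficients are hypothetical placeholders; real values would come from per-unit factory calibration over the -40°C to +85°C range:

```python
# Hypothetical polynomial temperature compensation, as an MCU might apply it.

BIAS_COEFFS = (0.02, -3.0e-4, 1.5e-6)  # assumed bias model: c0 + c1*T + c2*T^2, in deg/s

def compensate(raw_rate_dps: float, temp_c: float) -> float:
    """Subtract the modeled temperature-dependent bias from a raw gyro reading."""
    c0, c1, c2 = BIAS_COEFFS
    bias = c0 + c1 * temp_c + c2 * temp_c ** 2
    return raw_rate_dps - bias

print(f"{compensate(raw_rate_dps=0.025, temp_c=60.0):.4f} deg/s")
```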

All of this is also vital for the calibration procedure. Ensuring that the IMU can be easily calibrated is key to keeping the cost down and comes from the inherent stability of the crystal.

Calibration-safe mounting

The mounting technology is also key to the calibration and stability of the sensor. A part mounted to a board with surface mount technology (SMT) passes through a reflow oven, exposing it to high temperatures that can disrupt the calibration and alter the lifetime of the part in unexpected ways.

Instead, a module with a connector is used, so the 1-in (25 x 25 x 12 mm) part can be fitted to the printed circuit board (PCB) without the reflow assembly used for surface-mount devices, where the PCB passes through an oven that can upset the calibration of the sensor.

Space-grade IMU design

A higher performance variant of the IMU has been developed for space applications. Alongside the quartz crystal sensor, a higher performance accelerometer developed in-house is used in the IMU. The quartz sensor is inherently impervious to radiation in low and medium earth orbits and is coupled with a microcontroller that handles the temperature compensation, a key factor for operating in orbits that vary between the cold of the night and the heat of the sun.

The sensor is mounted in a hermetically sealed ceramic package that is backfilled with helium to provide higher levels of sensitivity and reliability than the earth-bound version. This makes the quartz-based sensor suitable for a wide range of space applications.

Next-generation IMU development

The next generation of etch technology, now being explored, promises a noise level 10 times lower than today's, along with improved temperature stability. These process improvements produce cleaner edges on the cantilever structure, enhancing the overall stability of the sensor.

Achieving precise and reliable drone positioning requires the integration of advanced IMUs with satellite data. The use of piezoelectric quartz crystals in IMUs for drone systems offers significant benefits, including low noise, high stability, and reduced costs, while commercial accelerometers and optimized microcontrollers further enhance performance and minimize power consumption.

Mounting and calibration procedures ensure long-term accuracy and reliability to provide stable and power-efficient control for a broad range of systems. All of this is possible through the end-to-end expertise in developing quartz crystals, and designing and implementing the sensor devices, from the etch technology to the mounting capabilities.

David Gaber is group product manager at Epson.

Related Content

The post Aiding drone navigation with crystal sensing appeared first on EDN.

Tuneful track-tracing

Tue, 12/23/2025 - 15:00

Another day, another dodgy device. This time, it was the continuity beeper on my second-best DMM. Being bored with just open/short indications, I pondered making something a little more informative.

Perhaps it could have an input stage to amplify the voltage, if any, across current-driven probes, followed by a voltage-controlled tone generator to indicate its magnitude, and thus the probed resistance. Easy! . . . or maybe not, if we want to do it right.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the (more or less) final result, which uses a carefully-tweaked amplifying stage feeding a pitch-linear VCO (PLVCO). It also senses when contact has been made, and so draws no power when inactive.

Most importantly, it produces a tone whose musical pitch is linearly related to the sensed resistance: you can hear the difference between fat power traces and long, thin signal ones while probing for continuity or shorts on a PCB without needing to look at a meter.

Figure 1 A power switch, an amplifying stage with some careful offsets, and a pitch-linear VCO driving an output transducer make a good continuity tester. The musical pitch of the tone produced is proportional to the resistance across the probe tips.

This is simpler than it initially looks, so let’s dismantle it. R1 feeds the test probes. If they are open-circuited, p-MOSFET Q1 will be held off, cutting the circuit’s power (ignoring <10 nA leakage).

Any current flowing through the probes will bring Q1.G low to turn it on, powering the main circuit. That also turns Q2 on to couple the probe voltage to A1a.IN+ via R2. Without Q2, A1a’s input protection diodes would draw current when power was switched off.

R1 is shown as 43k for an indication span of 0 to ~24 Ω, or 24 semitones. Other values will change the range, so, for example, 4k3 will indicate up to 2.4 Ω with 0.1-Ω semitones. Adding a switch gave both ranges. (The actual span is up to ~30 Ω—or 3.0 Ω—but accuracy suffers.) Any other values can be used for different scales; the probe current will, of course, change.

A1a amplifies the probe voltage by 1001-ish, determined by R3 and R4. We are working right down to 0 V, which can be tricky. R5 offsets A1a.IN- by ~5 mV, which is more than the MCP6002's quoted maximum input offset of 3.5 mV. R2 and R6–8 add a slightly greater bias to A1a.IN+, which both nulls out any offset and sets the operating point. This scheme may avert the need for a negative rail in other applications.
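The arithmetic shows why that care with offsets matters. A back-of-envelope sketch (the 3-V rail is an assumption for illustration; the actual supply isn't specified here):

```python
# Why A1a's offset must be nulled: at a gain of ~1001, the worst-case input
# offset alone would try to swing the output past the supply rail.

GAIN = 1001       # set by R3 and R4, per the text
VOS_MAX = 3.5e-3  # MCP6002 quoted maximum input offset voltage, volts
VCC = 3.0         # assumed single-supply rail, volts (illustrative)

offset_at_output = GAIN * VOS_MAX
print(f"Un-nulled offset at A1a's output: {offset_at_output:.2f} V "
      f"(vs. a {VCC:.1f}-V rail)")  # ~3.5 V: the stage would saturate
```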

Tuning the tones

The A1b section is yet another variant on my basic pitch-linear VCO, the reset pulse being generated by Q4/C3/R13. (For more informative details of the circuit’s general operation, see the original Design Idea.) The ’scope traces in Figure 2 should clarify matters.

Figure 2 Waveforms within the circuit showing its operation while probing different resistances.

This type of PLVCO works best with a control voltage centered between the supply rails and swinging by ±20% about that datum, giving a bipolar range of ~±1 octave. Here, we need unipolar operation, starting around that -20% lowest-frequency point.

Therefore, 0 Ω on the input must give ~0.3 Vcc to generate a ~250 Hz tone; 12 Ω, 0.5 Vcc (for ~500 Hz); and 24 Ω, ~0.7 Vcc (~1 kHz). Anything above ~0.8 Vcc will be out of range—and progressively less accurate—and must be ignored.

The output is now a tone whose pitch corresponds to the resistance across the probes, scaled as one semitone per ohm and spanning two octaves for a 24 Ω range (if R1 is 43k).
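That mapping is exponential in frequency but linear in musical pitch: each ohm adds one semitone above the ~250-Hz base. A small sketch of the scale, assuming the 43k R1 range:

```python
# Resistance-to-pitch mapping for R1 = 43k: one semitone per ohm from a
# ~250-Hz base, so 24 ohms spans two octaves up to ~1 kHz.

BASE_HZ = 250.0  # approximate 0-ohm tone, per the text

def tone_hz(r_ohms: float) -> float:
    """Equal-tempered pitch: each ohm raises the tone by one semitone."""
    return BASE_HZ * 2 ** (r_ohms / 12)

for r in (0, 1, 12, 24):
    print(f"{r:>2} ohm -> {tone_hz(r):6.1f} Hz")
# 0 -> 250.0 Hz, 1 -> ~264.9 Hz, 12 -> 500.0 Hz, 24 -> 1000.0 Hz
```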

The modified exponential ramp on C2 is now sliced by A2b, using a suitable fraction of the control voltage as a reference, to give a “square” wave at its output—truly square at one point only, but it sounds OK, and this approach keeps the circuit simple. A2a inverts A2b’s output, so they form a simple balanced (or bridge-tied load) driver for an earpiece. (There are problems here, but they can wait.)

R9 and R10 reduce A1a's output a little as high resistances at the input cause it to saturate, which would otherwise stop A1b's oscillation. This scheme means that out-of-range resistances still produce an audio output, maxed out at ~1.6 kHz, or ~30 Ω. Depending on Q1's threshold voltage, several tens of kΩ across the probes are enough to switch it on (a tad outside our indication range).

Loud is allowed

Now for that earpiece, and those potential problems. Figure 1’s circuit worked well enough with an old but sensitive ~250-Ω balanced-armature mic/’phone but was fairly hopeless when trying to drive (mostly ~32 Ω) earphones or speakers.

For decent volume, try Figure 3, which is beyond crude, but functional. Note the separate battery, whose use avoids excessive drain on the main one while isolating the main circuit from the speaker's highish currents.

Again, no power is drawn when the unit is inactive. (Reused batteries—strictly, cells—from disposed-of vapes are often still half-full, and great for this sort of thing! And free.) A2a is now spare . . .

Figure 3 A simple, if rather nasty, way of driving a loudspeaker.

Setting up is necessary because offsets are unpredictable, but it's simple: with a 12-Ω resistance across the probes, adjust R7 to give Vcc/2 at A1b.5. Done!

Comments on the components

The MCP6002 dual op-amp is cheap and adequate. (The ’6022 has a much lower offset but a far higher price, as well as drawing more current. “Zero-offset” devices are yet more expensive, and trimmer R7 would probably still be needed.)

Q3, and especially Q1, must have a low RDS(on) and VGS(th); my usual standby ZVP3306As failed on both counts, though ZVN3306As worked well for Q2/4/5. (You probably have your own favorite MOSFETs and low-voltage RRIO op-amps.) To alter the frequency range, change C2. Nothing else is critical.

As noted above, R1 sets the unit’s sensitivity and can be scaled to suit without affecting anything else. With 43k, the probe current is ~70 µA, which should avoid any possible damage to components on a board-under-test.

(Some ICs’ protection diodes are rated at a hopefully-conservative 100 µA, though most should handle at least 10 mA.) R2 helps guard against external voltage insults, as well as being part of the biasing network.
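That ~70-µA probe current is easy to sanity-check: the probed resistance (tens of ohms at most) is negligible against R1, so R1 sets the current. A quick sketch, assuming a ~3-V supply (an assumption; the actual rail isn't specified):

```python
# Probe-current sanity check. The probed resistance is negligible against R1,
# so the current is roughly V_SUPPLY / R1. The 3-V supply is an assumption.

V_SUPPLY = 3.0  # assumed battery voltage, volts

for r1 in (43e3, 51e3):
    print(f"R1 = {r1 / 1e3:.0f}k -> probe current ~ {V_SUPPLY / r1 * 1e6:.0f} uA")
# 43k gives ~70 uA, comfortably below even a conservative 100-uA diode rating
```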

And that newly-spare half of A2? We can use it to make an active clamp (thanks, Bob Dobkin) to limit the swing from A1a rather than just attenuating it. R1 must be increased—51k instead of 43k—because we no longer need extra gain.

Figure 4 shows the circuit. When A2a’s inverting input tries to rise higher than its non-inverting one—the reference point—D1 clamps it to that reference voltage.

Figure 4 An active clamp is a better way of limiting the maximum control voltage fed to the PLVCO.

The slight frequency changes with supply voltage can be ignored; a 20°C temperature rise gave an upward shift of about a semitone. Shame: with some careful tuning, this could otherwise also have done duty as a tuning fork.

“Pitch-perfect” would be an overstatement, but just like the original PLVCO, this can be used to play tunes! A length of suitable resistance wire stretched between a couple of drawing pins should be a good start . . . now, where’s that half-dead wire-wound pot? Trying to pick out a seasonal “Jingle Bells” could keep me amused for hours (and leave the neighbors enraged for weeks).

 Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content

The post Tuneful track-tracing appeared first on EDN.
