Sonic excellence: Music (and other audio sources) in the office, part 2

Last time, our engineer covered the audio equipment stacks on either side of his laptop. But what do they connect to, and what connects to them? Read on for the remaining details.
I wrapped up the initial entry in this two-part series with the following prose:
So far, we’ve covered the two stacks’ details. But what does each’s remaining S/PDIF DAC input connect to? And to what do they connect on the output end, and how? Stay tuned for part 2 to come next for the answers to these questions, along with other coverage topics.
“Next” is now. Here again is the unbalanced (i.e., single-ended) connection setup to the right of my laptop:

And here’s its higher-end balanced counterpart to the left:

As was the case last time in describing both stacks, I’m going to begin this explanation of the remainder of the audio playback chain at the end (with the speakers and power amplifiers), working my way from there back through the stacks to the beginning (the other audio sources). I’ll start by sharing another photo, of the right-channel speaker and associated hardware, that regular readers have already seen, first as a reference in the comment section and subsequently as an embedded image within the main writeup:

Here’s the relevant excerpt from the first post’s comments section:
I thought I’d share one initial photo from my ears-on testing of the Schiit Rekkr. The speakers are located 3.5 feet away from me and tilted slightly downward (for tweeter-positioning alignment with my ears) and toward the center listening position. As mentioned in the writeup, they’re Audioengine’s P4 Passives. And the stands are from Monoprice. As you’ll see, I’m currently running two Rekkrs, each one in monoblock mode.
Here’s a “stock” photo of the speakers:

Along with a “stock” photo of one of the stands:

At this point, you might be asking yourself a question along the lines of the following: “He’s got two audio equipment stacks…how does he drive a single set of speakers from both of them?” The answer, dear readers, is at the bottom of the left speaker, which you haven’t yet seen:

That’s another Schiit Sys passive switch, the same as the one in the earlier right-of-laptop stack, albeit in a different color, and this time underneath the Rekkr power amplifier at that location:


The rear-panel RCA outputs of the Schiit Vali 2++ (PDF) tube-based headphone amplifier at the top of the right-side stack:


and the 3.5 mm (“1/8 in.”) single-ended headphone mini-jack at the front of the Drop + THX AAA 789 amplifier at the top of the left-side stack:

Both route to it, and I use the Sys to switch between them as desired, with the Sys outputs then connected to the Rekkrs. Well…sorta. There’s one more link in the chain between the Sys and the Rekkrs that I haven’t yet mentioned.
Wireless connectivity
The Audioengine P4 Passives deliver great sound, especially considering their compact size, but their front-ported design can’t completely counterbalance the fact that the woofers are only 4” in diameter. That explains the other Audioengine speaker in the room: the company’s compact (albeit perfect for the office’s diminutive dimensions) P6 subwoofer, based on a 6″ long-throw front-firing woofer along with an integrated 140W RMS Class D amplifier:


And since I’d purchased the wireless variant of the P6, Audioengine had also bundled its W3 wireless transmitter and receiver kit with the subwoofer:


The Sys left and right outputs both get split, with each output then routing in parallel both to the relevant Rekkr and to the correct channel of the W3 transmitter’s input. The receiver is roughly 12 feet away, to my left at the end of the room, and connected to (and powered by) the back panel of the P6 subwoofer.
The transmitter and receiver aren’t even close to being line-of-sight aligned with each other, but the 2.4 GHz ISM band link between them still does a near-perfect job of managing connectivity. The only time I encounter dropouts, and then only briefly, is when a water-rich object (i.e., one of the humans, or our dog) moves in-between them. And although I was initially worried that the active W3 transmissions might destructively interfere with Bluetooth and/or Wi-Fi, I’m happy to report I haven’t experienced any degradation here, either.
Audio source diversity
That covers one end of the chain: now, what about the non-computer audio sources? There’s only one device, actually, shared between the two stacks, although its functional flexibility enables both native and connected content source diversity. And you’ve already heard about it, too; it’s the Bluesound NODE N130 that I initially mentioned at the beginning of 2023:
It integrates support for a diversity of streaming music services; although I could alternatively “tune in” to this same content via a connected computer, it’s nice not to have to bother booting one up if all I want to do is relax and audition some tunes. Integrated Bluetooth connectivity mates it easily with my Audio-Technica AT-LP60XBT turntable:
And a wiring harness mates it with the analog audio output of my ancient large-screen LCD computer monitor, acting as a pseudo TV in combination with my Xbox 360 (implementing Media Center Extender functionality) and Google Chromecast with Google TV.
The Bluesound NODE N130 has three audio output options, which conveniently operate concurrently: analog and both optical and coaxial (RCA) S/PDIF. The first goes to my Yamaha SR-C20A sound bar, the successor to the ill-fated Hisense unit I groused about back in mid-2023:
And the other two route to the Drop + Grace Design Standard DAC Balanced at the bottom of the left-side stack (optical S/PDIF):


and the Schiit Modi Multibit 1 DAC at the bottom of the right-side stack (coaxial RCA S/PDIF):

The multi-stack connectivity repetition is somewhat superfluous, but since it was a feasible possibility, I figured, why not? That said, I can always redirect one of the stack’s DACs to some other digitally tethered to-be-acquired widget in the future. And as mentioned in part 1 of this series, the Modi Multibit 1’s other (optical) S/PDIF input remains unpopulated right now, too.
That’s all (at least for now), folks
After a two-post series spanning 2,000+ words, there’s my combo home office and “man cave” audio setup in (much more than) a nutshell. Feedback is, as always, welcomed in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Sonic excellence: Music (and other audio sources) in the office, part 1
- Audio amplifiers: How much power (and at what tradeoffs) is really required?
- Audio Amplifiers from Class A, B, D to T
- Class D audio power amplifiers: Adding punch to your sound design
- Audio amplifier selection in hearable designs
- The Schiit Modi Multibit: A little wiggling ensures this DAC won’t quit
Extend the LM358 op-amp family’s output voltage range

The LM358 family of dual op amps is among those hoary industry workhorse devices that are inexpensive and still have their uses. These parts’ outputs can approach their negative supply rail voltage (and their inputs can even include it). Unfortunately, this is not the case for the positive supply rail. However, cascading the op amp with a few simple, inexpensive components can surmount this limitation of the outputs.
Figure 1 This simple rail-to-rail gain stage, consisting of Q1, Q2, R1, Rf, Rg, Rcomp, and Ccomp, is driven by the output of the LM258A op-amp. Feedback network Rf1 and Rg1 help to ensure that the inverting input feedback voltage is within the op-amp’s common-mode input range and to set a stable loop gain characteristic.
I had some LM258As on hand, which I had bought instead of the LM358As because of the slightly better input offset voltage and bias current ratings, which also spanned a wider set of temperatures. Interestingly, the input common-mode range for the non-A version of the part is characterized over temperature as Vcc – 2 V for Vcc between 5 and 30 V. But the A version is characterized at 30 V only. Go figure! As you’ll see, the tests I ran encountered no difficulties.
The parts’ AC characteristics are spec’d identically, suggesting that the even cheaper LM358 should encounter no stability issues. With the components shown in Figure 1, the loop gain above 100 kHz is about that of the LM258A configured as a voltage follower. Below 10 kHz, there’s approximately an extra 8 dB of gain. The following (Figure 2 through Figure 7) are some screen shots of ‘scope traces for various tests of the circuit at 1 kHz. The scales for all traces are the same: 1 V and 200 µs per large division.

Figure 2 Here, rail-to-rail swings of the circuit’s output are apparent.

Figure 3 The circuit recovers from clipping gracefully.

Figure 4 With a 0.1 µF load, slewing problems arise.

Figure 5 A 470-ohm load in parallel with 0.1 µF is stable and doesn’t exhibit slewing problems.

Figure 6 But with 0.1 µF as the sole load, the circuit is not stable.

Figure 7 Swapping the 470-ohm Rcomp for a 100-ohm part restores stability with 0.1 µF as the sole load.
In conclusion, a pair of cheap transistors, an inexpensive cap, and a few non-precision resistors provide a cost-effective way to turn the LM358 family of op amps into one with rail-to-rail output capabilities.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Simple PWM interface can program regulators for Vout < Vsense
- LM358 Datasheet and Pinout
- com: Experimenting with LM358 and OPA2182 ICs
- Tricky PWM Controller – An Analog Beauty!
- LED Lamp Dimmer Project Circuit
- Op amp one-shot produces supply-independent pulse timing
The AI design world in 2026: What you need to know

We live in an AI era, but behind the buzzword lies an intricate world of hardware and software building blocks. Like every other design, AI systems span multiple dimensions, ranging from processors and memory devices to interface design and EDA tools. So, EDN is publishing a special section that aims to untangle the AI labyrinth and thus provide engineers and engineering managers greater clarity from a design standpoint.
For instance, while AI is driving demand for advanced memory solutions, memory technology is taking a generational leap by resolving formidable engineering challenges. An article will examine the latest breakthroughs in memory technology and how they are shaping the rapidly evolving AI landscape. It will also provide a sneak peek at memory bottlenecks in generative AI, as well as thermal management and energy-efficiency constraints.

Figure 1 HBM offers higher bandwidth and better power efficiency in a similar DRAM footprint. Source: Rambus
Another article hits the “memory wall” currently haunting hyperscalers. What is it, and how can data center companies confront such memory bottlenecks? The article explains the role of high-bandwidth memory (HBM) in addressing this phenomenon and offers a peek into future memory needs.
Interconnect is another key building block in AI silicon. Here, automation is becoming a critical ingredient in generating and refining interconnect topologies to meet system-level performance goals. Then, there are physically aware algorithms that recognize layout constraints and minimize routing congestion. An article will show how these techniques work while also explaining how AI workloads have made existing chip interconnect design impractical.

Figure 2 The AI content in interconnect designs facilitates intelligent automation, which, in turn, enables a new class of AI chips. Source: Arteris
No design story is complete without EDA tools, and AI systems are no exception. An EDA industry veteran writes a piece for this special section to show how AI workloads are forcing a paradigm shift in chip development. He zeroes in on the energy efficiency of AI chips and explains how next-generation design tools can help design chips that maximize performance for every watt consumed.
On the applications front, edge AI finally came of age in 2025 and is likely to make further inroads during 2026. A guide on edge AI for industrial applications encompasses the key stages of the design value chain. That includes data collection and preprocessing, hardware-accelerated processing, model training, and model compression. It also explains deployment frameworks and tools, as well as design testing and validation.

Figure 3 Edge AI addresses the high-performance and low-latency requirements of industrial applications by embedding intelligence into devices. Source: Infineon
There will be more. For instance, semiconductor fabs are incorporating AI content to modernize their fabrication processes. Take the case of GlobalFoundries joining hands with Siemens EDA for fab automation; GF is deploying advanced AI-enabled software, sensors, and real-time control systems for fab automation and predictive maintenance.
Finally, and more importantly, this special section will take a closer look at the state of training and inference technologies. Nvidia’s recent acquisition of Groq is a stark reminder of how quickly inference technology is evolving. While training hardware has captured much of the limelight in 2025, 2026 could be a year of inference.
Stay tuned for more!
Related Content
- The network-on-chip interconnect is the SoC
- An edge AI processor’s pivot to the open-source world
- Edge AI powers the next wave of industrial intelligence
- Four tie-ups uncover the emerging AI chip design models
- HBM memory chips: The unsung hero of the AI revolution
5 octave linear(ish)-in-pitch power VCO

A few months back, frequent DI contributor Nick Cornford showed us some clever circuits using the TDA7052A audio amplifier as a power oscillator. His designs also demonstrate the utility of the 7052’s nifty DC antilog gain control input:
- Power amplifiers that oscillate—deliberately. Part 1: A simple start.
- Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion.
Eventually, the temptation to have a go at using this tricky chip in a (sort of) similar venue became irresistible. So here it is. See Figure 1.
Figure 1 A2 feedback and TDA7052A’s antilog Vc gain control create a ~300-mW, 5-octave linear-in-pitch VCO. More or less…
The 5-V square wave from comparator A2 is AC-coupled by C1 and integrated by R1C2 to produce an (approximate) triangular waveshape on U1 pin 2. This is boosted by A1 with a gain factor of 0 dB to 30 dB (1x to 32x), according to the Vcon gain control input, to become complementary speaker drive signals on pins 5 and 8.
A2 compares the speaker signals to its own 5-V square wave to complete the oscillation-driven feedback loop thusly. Its 5-V square wave is summed with the inverted -1.7-Vpp U1 pin 8 signal, divided by 2 by the R2R3 divider, then compared to the noninverted +1.7-Vpp U1 pin 5 signal. The result is to force A2 to toggle at the peaks of the tri-wave when the tri-wave’s amplitude just touches 1.7 Vpp. This causes the triangle to promptly reverse direction. The action is sketched in Figure 2.

Figure 2 The signal at the A2+ (red) and A2- (green) inputs.
This results in (fairly) accurate regulation of the tri-wave’s amplitude at a constant 1.7 Vpp. But how does that allow Vcon to control oscillation frequency?
Here’s how.
The slope of the tri-wave on A1’s input pin 2 is fixed at 2.5 V/(R1C2), or 340 V/s. Therefore, the slopes of the tri-waves on A1 output pins 5 and 8 equal ±A1gain × 340 V/s. This means the time required for those tri-waves to ramp through each 1.7-V half-cycle = 1.7 V/(A1gain × 340 V/s) = 5 ms/A1gain.
Thus, the full cycle time = 2 × (5 ms/A1gain) = 10 ms/A1gain, making Fosc = 100 Hz × A1gain.
A1 gain is controlled by the 0- to 2-V Vc input. The Vc input is internally biased to 1 V with a 14-kΩ equivalent impedance as illustrated in Figure 3.

Figure 3 R4 works with the 14 kΩ internal Vc bias to make a 5:1 voltage divider, converting 0 to 2 V into 1±0.2 V.
R4 works into this, making a 5:1 voltage division that converts the 0 to 2 V suggested Vc excursion to the 0.8 to 1.2 V range at pin 4. Figure 4 shows the 0 dB to 30 dB gain range this translates into.

Figure 4 Vc’s 0 to 2 V antilog gain control span programs A1 pin 4 from 0.8 V to 1.2 V for 1x to 32x gain and Fosc = 100 Hz × A1gain = 100 Hz × 5.66^Vc = 100 Hz to 3200 Hz
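To make the tuning law concrete, here’s a minimal C sketch (my own, not from the original Design Idea) that tabulates the nominal oscillation frequency against Vc using the idealized relationship above; real-chip gain tolerance and tempco, per the caveats below, will shift actual frequencies.

```c
#include <stdio.h>
#include <math.h>

/* Idealized tuning law derived above: Fosc = 100 Hz * 5.66^Vc,
   for Vc spanning 0 to 2 V (five octaves, 100 Hz to ~3.2 kHz) */
static double fosc_hz(double vc_volts)
{
    return 100.0 * pow(5.66, vc_volts);
}

int main(void)
{
    /* One tabulated point per half volt of control voltage */
    for (double vc = 0.0; vc <= 2.0; vc += 0.5)
        printf("Vc = %.1f V -> Fosc = %6.0f Hz\n", vc, fosc_hz(vc));
    return 0;
}
```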
The resulting balanced tri-wave output can make a satisfyingly loud ~300 mW warble into 8 Ω without sounding too obnoxiously raucous. A basic ~50-Ω rheostat in series with a speaker lead can, of course, make it more compatible with noise-sensitive environments. If you use this dodge, be sure to place the rheostat on the speaker side of the connections to A2.
Meanwhile, note (no pun) that the 7052 data sheet makes no promises about tempco compensation nor any other provision for precision gain programming. So neither do I. Figure 1’s utility in precision applications (e.g., music synthesis) is therefore definitely dubious.
Just in case anyone’s wondering, R5 was an afterthought intended to establish an inverting DC feedback loop from output to input to promote initial oscillation startup. This being much preferable to a deafening (and embarrassing!) silence.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Power amplifiers that oscillate—deliberately. Part 1: A simple start.
- Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion.
- A pitch-linear VCO, part 1: Getting it going
- A pitch-linear VCO, part 2: taking it further
- Seven-octave linear-in-pitch VCO
Fundamentals in motion: Accelerometers demystified

Accelerometers turn motion into measurable signals. From tilt and vibration to g-forces, they underpin countless designs. In this “Fun with Fundamentals” entry, we demystify their operation and take a quick look at the practical side of moving from datasheet to design.
From free fall to felt force: Accelerometer basics
An accelerometer is a device that measures the acceleration of an object relative to an observer in free fall. What it records is proper acceleration—the acceleration actually experienced—rather than coordinate acceleration, which is defined with respect to a chosen coordinate system that may itself be accelerating. Put simply, an accelerometer captures the acceleration felt by people and objects, the deviation from free fall that makes gravity and motion perceptible.
An accelerometer—also referred to as an accelerometer sensor or acceleration sensor—operates by sensing changes in motion through the displacement of an internal proof mass. At its core, it’s an electromechanical device that measures acceleration forces. These forces can be static, like the constant pull of gravity, or dynamic, caused by movement or vibrations.
When the device experiences acceleration, this mass shifts relative to its housing, and the movement is converted into electrical signals. These signals are measured along one, two, or three axes, enabling detection of direction, vibration, and orientation. Gravity also acts on the proof mass, allowing the sensor to register tilt and position.
The electrical output is then amplified, filtered, and processed by internal circuitry before reaching a control system or processor. Once conditioned, the signal provides electronic systems with accurate data to monitor motion, detect vibration, and respond to variations in speed or direction across real-world applications.
In a nutshell, a typical accelerometer uses an electromechanical sensor to detect acceleration by tracking the displacement of an internal proof mass. When the device experiences either static acceleration—such as the constant pull of gravity—or dynamic acceleration—such as vibration, shock, or sudden impact—the proof mass shifts relative to its housing.
This movement alters the sensor’s electrical characteristics, producing a signal that is then amplified, filtered, and processed. The conditioned output allows electronic systems to quantify motion, distinguish between steady forces and abrupt changes, and respond accurately to variations in speed, orientation, or vibration.

Figure 1 Pencil rendering illustrates the suspended proof mass—the core sensing element—inside an accelerometer. Source: Author
The provided illustration hopefully serves as a useful conceptual model for an inertial accelerometer. It demonstrates the fundamental principle of inertial sensing, specifically showing how a suspended proof mass shifts in response to gravitational vectors and external acceleration. This mechanical displacement is the foundation for the capacitive or piezoresistive sensing used in modern MEMS devices to calculate precise changes in motion and orientation.
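To quantify that core behavior, here’s the textbook spring-mass relation (a deliberate simplification; practical MEMS devices add damping and, in force-balance designs, servo feedback, as discussed later). At steady state, the spring’s restoring force balances the inertial force on the proof mass:

```latex
% Proof mass m on a spring of stiffness k under constant acceleration a:
% force balance gives k x = m a, so
x = \frac{m}{k}\,a
\qquad\Longrightarrow\qquad
a = \frac{k}{m}\,x
```

Sensing the displacement x (capacitively or piezoresistively) therefore yields the acceleration directly, and the m/k ratio sets the familiar trade-off between sensitivity and bandwidth.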
Accelerometer families and sensing principles
Moving to the common types of accelerometers, designs range from piezoelectric units that generate charge under mechanical stress—ideal for vibration and shock sensing but unable to register static acceleration—to piezoresistive devices that vary resistance with strain, enabling both static and low-frequency measurements.
Capacitive sensors detect proof-mass displacement through changing capacitance, a method that balances sensitivity with low power consumption and supports tilt and orientation detection. Triaxial versions extend these principles across three orthogonal axes, delivering full spatial motion data for navigation and vibration analysis.
MEMS accelerometers, meanwhile, miniaturize these mechanisms into silicon-based structures, integrating low-power circuitry with high precision, and now dominate both consumer electronics and industrial monitoring.
It’s worth noting that some advanced accelerometers depart from the classic proof-mass model, adopting optical or thermal sensing techniques instead. In thermal designs, a heated bubble of gas shifts within the sensor cavity under acceleration, and its displacement is tracked to infer orientation.
A representative example is the Memsic 2125 dual-axis accelerometer, which applies this thermal principle to deliver compact, low-power motion data. According to its datasheet, Memsic 2125 is a low-cost device capable of measuring tilt, collision, static and dynamic acceleration, rotation, and vibration, with a ±3 g range across two axes.
In practice, the core device—formally designated MXD2125 in Memsic datasheets and often referred to as Memsic 2125 in educational kits—employs a sealed gas chamber with a central heating element and four temperature sensors arranged around its perimeter. When the device is level, the heated gas pocket stabilizes at the chamber’s center, producing equal readings across all sensors.
Tilting or accelerating the device shifts the gas bubble toward specific sensors, creating measurable temperature differences. By comparing these values, the sensor resolves both static acceleration (gravity and tilt) and dynamic acceleration (motion such as vehicle travel). MXD2125 then translates the differential temperature data into pulse-duration signals, a format readily handled by microcontrollers for orientation and motion analysis.
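As an illustration of how a microcontroller might consume that pulse-duration format, here’s a hedged C sketch. It assumes the commonly cited MXD2125 scaling of 50% duty cycle at 0 g and 12.5% duty-cycle change per g (verify against the actual datasheet), and the two timer-capture helpers are hypothetical placeholders for whatever input-capture API your MCU provides.

```c
#include <stdint.h>

/* Hypothetical input-capture helpers: return the most recent high time
   and total period of one axis's output pulse train, in microseconds */
extern uint32_t pulse_high_us(int axis);
extern uint32_t pulse_period_us(int axis);

/* Convert one axis's pulse-duration output to acceleration in g,
   assuming 50% duty = 0 g and 12.5% duty change per g */
double mxd2125_read_g(int axis)
{
    double duty = (double)pulse_high_us(axis) / (double)pulse_period_us(axis);
    return (duty - 0.5) / 0.125;   /* e.g., 62.5% duty -> +1 g */
}
```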

Figure 2 Memsic 2125 module hosts the 2125 chip on a breakout PCB, exposing all I/O pins. Source: Parallax Inc.
A side note: the Memsic 2125 dual-axis thermal accelerometer is now obsolete, yet it remains a valuable reference point. Its distinctive thermal bubble principle—tracking the displacement of heated gas rather than a suspended proof mass—illustrates an alternative sensing approach that broadened the taxonomy of accelerometer designs.
The device’s simple pulse-duration output made it accessible in educational kits and embedded projects, ensuring its continued presence in documentation and hobbyist literature. I include it here because it underscores the historical branching of accelerometer technology prior to MEMS capacitive adoption.
Turning to the true mechanical force-balance accelerometer, recall that the classic mechanical accelerometer—often called a G-meter—embodies the elegance of direct inertial transduction. These instruments convert acceleration into deflection through mass-spring dynamics, a principle that long predates MEMS yet remains instructive.
The force-balance variant advances this idea by applying active servo feedback to restore the proof mass to equilibrium, delivering improved linearity, bandwidth, and stability across wide operating ranges. From cockpit gauges to rugged industrial monitors, such designs underscore that precision can be achieved through mechanical transduction refined by servo electronics—rather than relying solely on silicon MEMS.

Figure 3 The LTFB-160 true mechanical force-balance accelerometer achieves high dynamic range and stability by restoring its proof mass with servo feedback. Source: Lunitek
From sensitivity to power: Key specs in accelerometer selection
When selecting an accelerometer, makers and engineers must weigh a spectrum of performance parameters. Sensitivity and measurement range balance fine motion detection against tolerance for shock or dynamic loads. Output type (analog vs. digital) shapes interface and signal conditioning requirements, while resolution defines the smallest detectable change in acceleration.
Frequency response governs usable bandwidth, ensuring capture of low-frequency tilt or high-frequency vibration. Equally important are power demands, which dictate suitability for battery-operated devices versus mains-powered systems; low-power sensors extend portable lifetimes, while higher-draw devices may be justified in precision or high-speed contexts.
Supporting specifications—such as noise density, linearity, cross-axis sensitivity, and temperature stability—further determine fidelity in real-world environments. Taken together, these criteria guide selection, ensuring the chosen accelerometer aligns with both design intent and operational constraints.
Accelerometers in action: Translating fundamentals into real-world practice
Although they hide significant complexity, accelerometers are well within the reach of hobbyists and makers. Prewired, readily available accelerometer modules like the ADXL345, MPU6050, or LIS3DH simplify breadboard experiments and enable quick through-hole prototypes, while high-precision analog sensors like the ADXL1002 enable a leap into advanced industrial vibration analysis.
Now it’s your turn—take your next step from fundamentals to practical applications, starting with handhelds and wearables, moving to vehicles and machines, and extending further into robotics, drones, and predictive maintenance systems. Beyond engineering labs, accelerometers are already shaping households, medical devices, agriculture practices, security systems, and even structural monitoring, quietly embedding motion awareness into the fabric of everyday life.
So, pick up a module, wire it to your breadboard, and let motion sensing spark your next prototype—because accelerometers are waiting to translate your ideas into action.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- A Guide to Accelerometer Specifications
- NEMS tunes the ‘most sensitive’ accelerometer
- Designer’s guide to accelerometers: choices abound
- Optimizing high precision tilt/angle sensing: Accelerometer fundamentals
- One accelerometer interrupt pin for both wakeup and non-motion detection
A failed switch in a wall plate = A garbage disposal that no longer masticates

How do single-pole wall switches work, and how can they fail? Read on for all the details.
Speaking of misbehaving power toggles, a few weeks back (as I’m writing this in mid-December), the kitchen wall switch that controls power going to our garbage disposal started flaking out. Flipping it to the “on” position sometimes still worked, as had reliably been the case previously, but other times didn’t.
Over only a few days’ time, the percentage of garbage disposal power-on failures increased to near-100%, although I found I could still coax it to fire up if I then pressed down firmly on the center of the switch. Clearly, it was time to visit the local Home Depot and buy-then-install a replacement. And then, because I’d never taken a wall switch apart before, it was teardown education time for me, using the original failed unit as my dissection candidate!
Diagnosing in the dark
As background, our home was originally built in the mid-1980s. We’re the third owners; we’ve never tried to track down the folks who originally built it, and who may or may not still be alive, but the second owner is definitely deceased. So, there’s really nobody we can turn to for answers to any residential electrical, plumbing, or other questions we have; we’re on our own.
Some of the wall switches scattered throughout the house are the traditional “toggle” style:

But many of them are the more modern decorator “rocker” design:

For example, here’s a Leviton Decora (which the company started selling way back in 1973, I learned while researching this piece) dual single-pole switch cluster in one of the bathrooms:

It looks just like the two-switch cluster originally in the kitchen, although you’ll have to take my word on this as I didn’t think to snap a photo until after replacing the misbehaving switch there.
In the cabinet underneath the sink is a dual AC outlet set. The bottom outlet is always “hot” and powers the dishwasher to the left of the sink. The top outlet (the one we particularly care about today) connects to the garbage disposal’s power cord and is controlled by the aforementioned wall switch. I also learned when visiting the circuit breaker box prior to doing the switch swap that the garbage disposal has its own dedicated breaker and electricity feed (which, it turns out, is a recommended and common approach).
A beefier successor
Even prior to removing the wall plate and extracting the failed switch, I had a sneaking suspicion it was a standard ~15A model like the one next to it, which controls the light above the sink. I theorized that this power-handling spec shortcoming might explain its eventual failure, so I selected a heavier-duty 20A successor. Here’s the new switch’s packaging, beginning with the front panel (as usual, and as with successive photos, accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes). Note the claimed “Light Almond” color, which would seemingly match the two-switch cluster color you saw earlier. Hold that thought:

And here are the remainder of the box sides:





Installation instructions were printed on the inside of the box.
The only slight (and surprising) complication: while the line and load connections were still both on one side, with ground on the other, the connection sides were swapped versus the original switch. After a bit of colorful language, I managed. Voila:

The remaining original switch on the left, again controlling the above-sink light, is “Light Almond” (or at least something close to that tint). The new one on the right, however, is not “Light Almond” as claimed (and no, I didn’t think to take a full set of photos before installing it, either; this is all I’ve got). And yes, I twitch inside every time I notice the disparity. Eventually, I’ll yank it back out of the wall and return it for a correct-color replacement. But for now, it works, and I’d like to take a break from further colorful language (or worse), so I just grin and bear it.
Analyzing an antique
As for the original, now-malfunctioning right-side switch, on the other hand…plenty of photos of that. Let’s start with some overview shots:


As I’d suspected, this was a conventional 15A-spec’d switch (at first, I’d thought it said 5A but the leading “1” is there, just faintly stamped):
Backside next:

Those two screws originally mounted the switch to the box that surrounded it. The replacement switch came with a brand-new set that I used for re-installation purposes instead:

Another set of marking closeups:
And now for the right side:

I have no clue what the brown goo is that’s deposited at the top, nor do I want to know what it is or take any responsibility for it. Did I mention that we’re the third owners, and that this switch dated from the original construction 40+ years and two owners ago?

I’m guessing maybe this is what happens when you turn on the garbage disposal with hands still wet and sudsy from hand-washing dishes (or maybe those are food remnants)? Regardless, the goop seemingly didn’t seep down to the switch contacts, so although I originally suspected otherwise, I eventually concluded that it likely wasn’t the failure’s root cause.
The bottom’s thankfully more pristine:

Those upper and lower metal tabs, it turns out, are our pathway inside. Bend ‘em out:


And the rear black plastic piece pulls away straightaway:

Here’s a basic wall switch functional primer, as I’ve gathered from research on conceptually similar (albeit differing-implementation) Leviton Decora units dissected by others, along with my own potentially flawed hypothesizing (reader feedback is, as always, welcomed in the comments!):
The front spring-augmented assembly, with the spring there to hold it in place in one of two possible positions, fits into the grooves of the larger of the two metal pieces in the rear assembly. Line current routes from the screw attached to the larger, lower rear-assembly piece to the front assembly through that same spring-assisted metal-to-metal press-fit contact. And when the switch is in the “on” position, the current then further passes on to the smaller rear-assembly piece, and from there onward to the load via the other attached screw.

However, you’ve undoubtedly already noticed the significant degradation of the contact at the end of the front assembly, which you’ll see more clearly shortly. And if you peer inside the rear assembly, there’s similar degradation at the smaller “load” metal piece’s contact, too:
Let’s take a closer look; the two metal pieces pull right out of the black plastic surroundings:





Now for a couple of closeups of the smaller, degraded-contact piece (yes, that’s a piece of single-sided transparent adhesive tape holding the penny upright and in place!):

Next, let’s look at what it originally mated with when the toggle was in the “on” position:

Jeepers:
Another black plastic plate also thankfully detached absent any drama:




And where did all the scorched metal that got burned off both contacts end up? Coating the remainder of the assembly, that’s where, most of it toward the bottom (gravity, don’cha know):

Including all over the back of the switch plate itself, along with the surrounding frame:




Our garbage disposal is a 3/4 HP InSinkErator Badger 5XP, with a specified current draw of 9.5A. Note, however, that this is also documented as an “average load” rating; the surge current on motor turn-on, for example, is likely much higher, and isn’t moderated by any start capacitors inside the appliance, which would themselves be charging up in parallel in such a scenario (in contrast, by the way, the dishwasher next to it, a Kenmore 66513409N410, specs 8.1A of “total current”, again presumably average, 1.2A of which is pulled by the motor). So, given that this was only a 15A switch, I’m surprised it lasted as long as it did. Agree or disagree, readers? Share your thoughts on this and anything else that caught your attention in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- The Schiit Modi Multibit: A little wiggling ensures this DAC won’t quit
- Heavy Duty Limit Switch
- Top 10 electromechanical switches
- Product Roundup: Electromechanical switches
- Selecting a switch
Using a single MCU port pin to drive a multi-digit display
When we design a microcontroller (MCU) project, we normally leave a few port lines unused, so that last-minute requirements can be met. Invariably, even those lines also get utilized as the project progresses.
Imagine a situation where you have only one port line left, and you are suddenly required to add a four-digit display. (Normally, you need 16 output port lines to drive four-digit displays, or 8 port lines to drive multiplexed four-digit displays.) In such a critical situation, the Figure 1 circuit will come in handy.
Figure 1 A single MCU port pin outputs a reset pulse first, and then a number of count pulses equal to the number to be displayed.
Figure 1’s top left portion is a long pulse detector circuit, a Design Idea (DI) of mine published in October 2023. For the components selected, this circuit outputs a pulse only when its input pulse width is more than 1 millisecond (ms). For smaller pulses, its output is LOW.
Figure 1’s circuit can be made as an add-on module to your MCU project. When a display update is needed, the MCU should send a 2-ms ON and 2-ms OFF reset pulse once. This long pulse resets the counter/decoders.
Then, it sends 0.1-ms ON and 0.1-ms OFF count pulses, whose number equals the four-digit number to be displayed. For example, if the number 4950 is to be displayed, the MCU will send one reset pulse followed by 4950 count pulses. Then, the MCU can continue with its other functions.
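Here’s a minimal C sketch of the MCU side of that protocol; the GPIO and delay helpers (pin_high, pin_low, delay_us) are hypothetical stand-ins for whatever your MCU’s HAL provides, and only the pulse timings come from the description above.

```c
#include <stdint.h>

/* Hypothetical HAL hooks for the single output port pin */
extern void pin_high(void);
extern void pin_low(void);
extern void delay_us(uint32_t us);

/* One long pulse (2 ms ON, 2 ms OFF) trips the long pulse detector,
   resetting the counter/decoders */
static void send_reset_pulse(void)
{
    pin_high();
    delay_us(2000);
    pin_low();
    delay_us(2000);
}

/* Short 0.1-ms ON / 0.1-ms OFF pulses, one per displayed count */
static void send_count_pulses(uint16_t count)
{
    while (count--) {
        pin_high();
        delay_us(100);
        pin_low();
        delay_us(100);
    }
}

/* Full display update, e.g., display_update(4950); */
void display_update(uint16_t value)
{
    send_reset_pulse();
    send_count_pulses(value);
}
```

At 0.2 ms per count pulse, a full-scale 9999 update takes roughly 2 seconds, matching the estimate given below.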
The long pulse detector circuit with Q1, Q2, and U1A outputs a pulse for every input pulse, whose ON width is more than 1 ms. At the start, the MCU outputs a LOW. This turns Q1 OFF and allows Q2 to saturate, discharging C1.
When a 2-ms pulse comes, Q1 gets saturated, and Q2 turns OFF. During this period, C1 starts charging through R3, and its voltage rises to around 1.8 V. This is then sent to the positive input of the U1A comparator. Its negative input is kept at 1 V, as set by the R4/R5 divider. Hence, the U1A comparator outputs HIGH, which resets all the counters.
For smaller pulses, this output remains LOW. So, when the MCU sends one reset pulse, U1A outputs a HIGH, which resets the U2, U3, U4, and U5 counter/decoders.
Then, these counters count the number of count pulses sent next and display it. U2–U5 are counter/7-segment decoders to drive common-cathode LED seven-segment displays.
For a maximum count of 9999, the display update may take around 2 seconds. This time can be reduced by reducing the count pulse duration, depending upon the MCU and clock frequency selected.
I have used one resistor per display for brightness control (R7, R8, R9, and R10). This will not give equal brightness across all seven segments. Instead, you may use seven resistors per display, or a resistor network per display, to achieve equal brightness.
This idea can be extended to any number of displays driven by a single MCU port line. For more information, watch my video explaining this design.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- A long pulse detector circuit
- How to design LED signage and LED matrix displays, Part 1
- DIY LED display provides extra functions and PWM
- Implementing Adaptive Brightness Control to Seven Segment LED Displays
- An LED display adapted for DIY projects
The high-speed data link to Mars faces a unique timing challenge

Experienced network designers know that the achievable performance of a data link depends on many factors, including the quality and consistency of the inherently analog medium between the two endpoints. Whether it’s air, copper, fiber, or even vacuum, that link sets a basic operating constraint on the speed and bit error rate (BER) of the link.
Any short- or longer-term perturbation in the link—including external and internal noise, distortion, phase shifts, media shifts, and other imperfections—will result in a lower effective data rate and the need for more data encoding, error detection and correction, and re-transmissions.
A critical element in high-speed, low-BER data recovery is the advanced clock recovery and re-clocking for synchronization, accomplished using phase-locked loops (analog or digital) and other arrangements. The unspoken assumption is that the fundamental measurement of “time” is the same at both ends of the link. This can be established by use of atomic and laser-optical clocks of outstanding precision and performance, if crystal- or resonator-based clocks won’t suffice.
But that endpoint equivalence is not necessarily the case. If we want to establish a long-term robotic or even human presence on our neighbor Mars, and set up a robust high-speed data link, we need to know the answer to a basic question: What time is it on Mars?
It turns out that it’s not a trivial question to answer. As Einstein showed in his classic 1905 paper on special relativity “On the Electrodynamics of Moving Bodies,” and subsequent work on general relativity, clocks don’t tick at the same rate across the universe. They will run slightly faster or slower depending on the strength of gravity in their environment, as well as their relative velocity with respect to other clocks.
This time dilation is not a fanciful theory, as it has been measured and verified in many experiments. It even points to a correction factor that must be applied to satellites orbiting the Earth. Without those adjustments, GPS signal timing would be “off” and its accuracy seriously degraded. It’s a phenomenon that is often, and quite correctly, summarized simply as “moving clocks run slow.”
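For scale, here’s a quick back-of-envelope calculation using standard textbook GPS numbers (my addition, not from this article): the satellites’ clocks run a net ~38 µs/day fast once velocity and gravitational effects are combined, and uncorrected clock error converts to ranging error at the speed of light.

```c
#include <stdio.h>

int main(void)
{
    const double c_m_per_s   = 299792458.0; /* speed of light */
    const double drift_s_day = 38e-6;       /* net GPS clock gain per day */

    /* Uncorrected clock drift maps directly into pseudorange error */
    printf("ranging error: %.1f km per day\n",
           drift_s_day * c_m_per_s / 1000.0);
    return 0;
}
```

That works out to roughly 11 km of accumulated position error per day, which is why the relativistic correction is non-negotiable.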
The general problem of time dilation, objects in motion, and gravity’s effects has been known for many years, and it can affect non-orbiting space vehicles as well. To manage the problem, Barycentric Coordinate Time—known as TCB, from the French name—is a coordinate time standard defined in 1991 by the International Astronomical Union.
TCB is intended to be used as the independent variable of time for all calculations related to orbits of planets, asteroids, comets, and interplanetary spacecraft in the solar system, and defines time as experienced by a clock at rest in a coordinate frame co-moving with the barycenter (center of mass) of the solar system.
What does this have to do with Mars and data links? As shown in Figure 1, the magnitude of the dilation-induced time “slippage” between Earth and Mars is one factor that affects maintaining a high-speed link between these two planets.

Figure 1 In addition to “hard” data from landed rover and orbiting science packages, Mars—also known as “the red planet”—presents a complicated time-dilation scenario. Source: NIST
Now, a team of physicists at the National Institute of Standards and Technology (NIST) has calculated a fairly precise answer for the first time. The problem is complicated as there are four primary “players” to consider: Mars, Earth, Sun, and even our Moon (and the two small moons of Mars also have an effect, though much smaller).
Why the complication? It’s been known since the 1800s that the three-body problem has no general closed-form solution, and the four-body problem is worse. That means there is no explicit formula that can resolve the positions of the bodies in the dilation analysis. Consequently, number-crunching numerical calculations must be used, and it’s even more challenging with four and more bodies.
The researchers’ work is based not only on theory but also measurements from the various “rovers” that have landed on Mars as well as Mars orbiters. The team chose a point on the Martian surface as a reference, somewhat like sea level at the equator on Earth, and used years of data collected from Mars missions to estimate gravity on the surface of the planet, which is five times weaker than Earth’s.
I won’t even try to explain the mathematics of the analysis; all I will say is that it’s the most “intense” set of equations I have ever seen, even compared to solid-state physics.
They determined that on average, clocks on Mars will tick 477 microseconds faster than those on Earth per day (Figure 2). However, Mars’ eccentric orbit and the gravity from its celestial neighbors can increase or decrease this amount by as much as 226 microseconds a day over the course of the Martian year.

Figure 2 Plots of the clock-rate offsets between a clock on Mars compared to clocks on the Earth and the Moon for ∼40 years starting from modified Julian date (MJD) 52275 (January 1, 2003), using DE440 data. DE440 is a highly accurate planetary and lunar ephemeris (a table of positions) from NASA’s Jet Propulsion Laboratory, representing precise orbital data for the Sun, Moon, planets, and Pluto. Source: NIST
The clock is not only “squeezed” with respect to Earth, but the amount of squeeze varies in a non-periodic way. In contrast, they note that the Earth and Moon orbits are relatively constant; time on the Moon is consistently 56 microseconds faster than time on Earth.
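To get a feel for the magnitudes involved, here’s a rough C sketch that accumulates the reported mean Mars-vs-Earth offset over one Martian year. Treating the ±226-µs/day variation as a clean sinusoid is purely my illustrative assumption; as the researchers note, the real modulation follows Mars’ eccentric orbit and its neighbors’ gravity, and is not this tidy.

```c
#include <stdio.h>
#include <math.h>

#define MEAN_US_PER_DAY  477.0   /* Mars clock gains on Earth, on average */
#define SWING_US_PER_DAY 226.0   /* peak deviation over the Martian year */
#define MARS_YEAR_DAYS   687     /* one Martian year, in Earth days */
#define PI               3.14159265358979

int main(void)
{
    double total_us = 0.0;
    for (int day = 1; day <= MARS_YEAR_DAYS; day++) {
        /* assumed sinusoidal modulation, for illustration only */
        double rate_us = MEAN_US_PER_DAY
                       + SWING_US_PER_DAY * sin(2.0 * PI * day / MARS_YEAR_DAYS);
        total_us += rate_us;
        if (day % 100 == 0)
            printf("day %3d: Mars clock leads by %.2f ms\n",
                   day, total_us / 1000.0);
    }
    return 0;
}
```

Even this crude tally shows the offset passing a third of a second per Martian year, an eternity relative to the symbol periods of a high-speed link.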
If you want the details, check out their open-access paper “A Comparative Study of Time on Mars with Lunar and Terrestrial Clocks” published in The Astronomical Journal of the American Astronomical Society. Don’t worry: a readable summary and overview is also posted at the NIST site, “What Time Is It on Mars? NIST Physicists Have the Answer.”
How engineers will deal with these results is another story, but timing is an important piece of the data link signal chain. Perhaps they will have to build an equivalent of the tide-predicting machine designed by William Thomson (later known as Lord Kelvin) shown in Figure 3.

Figure 3 This analog all-mechanical computer design by William Thomson was designed to predict tides, which are determined by cyclic motion of the Earth, Moon, and many other factors. Source: Science Museum London via IEEE Spectrum
This analog mechanical computer on display at the Science Museum in London was designed for one purpose only: combining 10 cyclic oscillations linked to the periodic motions of the Earth, Sun, and Moon and other bodies to trace the tidal curve for a given location.
Have you ever had to synchronize a data link with a nominally accurate clock on each end, but with clocks that actually had significant differences as well as cyclic and unknown shifting of their frequencies?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- JPL & NASA’s latest clock: almost perfect
- “Digital” Sundial: Ancient Clock Gets Clever Upgrade
- Precision metrology redefines analog calibration strategy
Vision SoC powers 8K multi-stream AI

Ambarella’s CV7 SoC leverages the CVflow computer vision architecture to bring 8K image processing and advanced AI inference to the edge. It supports simultaneous processing of multiple video streams at up to 8K at 60 Hz, making it well suited for a wide range of consumer and industrial AI perception applications, as well as multi-stream automotive systems—particularly those running CNNs and transformer-based networks.

Built on a 4-nm process, the CV7 delivers low power consumption, reducing thermal management requirements and extending battery life. Compared to its predecessor, it consumes 20% less power while integrating a quad-core Arm Cortex-A73 CPU that doubles general-purpose processing performance. A 64-bit DRAM interface further improves memory bandwidth.
The highly integrated CV7 SoC includes a third-generation CVflow AI accelerator, delivering more than 2.5× the AI performance of the previous CV5 SoC. It also integrates an image signal processor and hardware-accelerated video encoding for H.264, H.265, and MJPEG formats.
CV7 SoC samples are now available, along with a CNN toolkit for porting neural networks developed using Caffe, TensorFlow, PyTorch, and ONNX frameworks.
Processors centralize vehicle intelligence

NXP has introduced the S32N7 super-integration processor series for centralized vehicle computing across propulsion, vehicle dynamics, body, gateway, and safety domains. The 5-nm series replaces distributed electronic control units with a single processing hub at the vehicle core, providing a foundation for software-defined vehicles.

By consolidating software and data, the S32N7 simplifies vehicle architectures and reduces system complexity, lowering total cost of ownership by up to 20% through fewer hardware modules and more efficient wiring, electronics, and software integration. The processors are designed to meet automotive safety, security, and real-time requirements.
With 32 compatible variants, the S32N7 series provides a scalable platform for AI-enabled vehicle functions. Its high-performance data backbone supports future AI upgrades without re-architecting the vehicle, enabling long-term software development and differentiation across vehicle platforms.
Bosch is the first company to deploy the S32N7 in its vehicle integration platform. Together, NXP and Bosch have co-developed reference designs, safety frameworks, hardware integration guidelines, and an expert enablement program for early adopters.
The S32N79, the superset of the series, is sampling now.
SoC unlocks 20-MHz Wi-Fi 7 for smart IoT

According to Infineon, the AIROC ACW741x family of tri-radio SoCs features the first 20-MHz Wi-Fi 7 device designed for IoT applications. The device also integrates Bluetooth LE 6.0 with channel sounding, IEEE 802.15.4 Thread connectivity, and support for the Matter ecosystem—all in a compact QFN package.

Wi-Fi 7’s support for 20-MHz channel widths represents a meaningful expansion beyond conventional high-speed applications, especially for IoT devices. This enables lower power consumption, smaller form factors, and more reliable connectivity across a wider range of devices.
“With the recent extension of Wi-Fi Certified 7 capabilities to 20 MHz-only devices, Wi-Fi Alliance will deliver the benefits of Wi-Fi 7 for new device categories, enabling the next wave of IoT innovation across smart home, industrial, and healthcare settings,” said Kevin Robinson, CEO, Wi-Fi Alliance. “The introduction of 20-MHz Wi-Fi 7 IoT solutions, such as those being introduced by Infineon, will unlock widespread Wi-Fi 7 adoption across the IoT market.”
The ACW741x supports Wi-Fi 7 multi-link operation (MLO), which enhances link reliability through adaptive band switching to reduce congestion and interference. By maintaining concurrent connections across 2.4-GHz, 5-GHz, and 6-GHz bands, Wi-Fi 7 multi-link for IoT provides a more consistent, always-connected experience for devices such as security cameras, video doorbells, alarm systems, medical equipment, and HVAC systems.
Integrated wireless sensing capabilities give smart IoT devices greater contextual awareness and allow them to share intelligence with other devices on the same network. Compared with other IoT Wi-Fi products, the ACW741x delivers up to 15× lower standby power consumption, extending battery life.
The ACW741x family is sampling now, along with hardware and software development kits.
Software proves AI behavior in high-risk systems

The Keysight AI Software Integrity Builder enables rigorous validation and lifecycle maintenance of AI-enabled systems to ensure trustworthiness. As AI development grows in complexity, the software delivers transparent, adaptable, and data-driven assurance tailored for safety-critical applications, including automotive systems.

The decision-making behavior of AI models, especially deep neural networks, is difficult to interpret, complicating the identification of dataset limitations and model failure modes. Regulatory frameworks, including ISO/PAS 8800 for automotive AI and the EU AI Act, require demonstrable explainability and validation without defining clear implementation methods.
AI Software Integrity Builder delivers a unified, lifecycle-based framework that provides regulatory evidence and supports continuous AI model improvement. Unlike fragmented toolchains, it integrates dataset analysis, model validation, real-world inference testing, and continuous monitoring. This enables validation of both learned behavior and operational performance for high-risk applications such as autonomous driving.
To learn more about the Keysight AI Software Integrity Builder (AX1000A) and request a quote, visit the product page linked below.
Transmissive sensors increase vertical headroom

Two transmissive sensors from Vishay—the single-channel VT171P and dual-channel VT172U—feature a dome height that is 42% greater than that of previous-generation industrial devices. Housed in a 5.5×4×5.7 mm surface-mount package, the sensors increase mechanical design flexibility and provide additional vertical headroom for large code wheels in turn-and-push configurations.

The VT171P integrates an infrared emitter and phototransistor detector for motion and speed sensing, while the VT172U adds a second phototransistor to also enable direction detection. Both sensors operate at a wavelength of 950 nm and deliver a typical output current of 1.5 mA, with typical rise and fall times of 14 µs and 21 µs, respectively. They feature a 3-mm gap width and 0.3-mm apertures.
With a moisture sensitivity level (MSL) of 1, the VT171P and VT172U offer unlimited floor life. The sensors are suited for latches, simple encoders, and switches in industrial, consumer, telecommunication, and healthcare applications.
Samples and production quantities are available now, with lead times of 10 weeks.
Sonic excellence: Music (and other audio sources) in the office, part 1

This engineer could have just stuck with the Gateway 2000-branded, Altec Lansing-designed powered speaker set long plugged into his laptop’s headphone jack. But where’s the fun in that?
Having editorially teased my recent home office audio system upgrade several times now, beginning back in mid-August and repeatedly accompanied by promises to share full details “soon”, I figured I’d better get to writing “now” before I ended up with a reader riot on my hands. Let’s start with the “stack” to the right of my laptop, a photo of which I’ve shared before:

At the bottom is a Schiit Modi Multibit 1 DAC, my teardown of which was published just last month:


Above it is Schiit’s first-generation Loki Mini four-band equalizer (versus the second-generation Loki Mini+ successor shown below, which looks identical from the outside save for altered verbiage on the back panel sticker). I decided to include it versus relying solely on software EQ since I intended to use the setup to listen to more than just computer-based audio sources.


Above it is a passive (unpowered) switch, the Schiit Sys, that enables me to select between two inputs prior to routing the audio to the Rekkr power amplifier set connected to the speakers:


And at the very top is a Schiit Vali 2++ (PDF) tube-based headphone amplifier, identical to the Vali 2+ precursor (introduced in 2020 and shown below) save for a supply constraint-compelled transition to a different tube family:


And the rack? It’s a stacked combo of two (to give me the necessary number of shelves) Topping Acrylic Racks, available both directly from the China-based manufacturer (mind the tariffs!) and from retailers such as Apos in the United States. A little pricey ($39 each), but it makes me smile every time I look at it, which is priceless…or at least that’s how I rationalized the purchase!

As you’ve likely already noticed, this setup uses mainstream unbalanced (i.e., single-ended) RCA cabling. To detail the inter-device connections, let’s start with the device at the end of the chain, the Sys switch. I didn’t initially include it in the stack but then realized I didn’t want to have to turn on the Vali 2++ each time I wanted to listen to music over the speakers (whenever the headphone jack isn’t in use, the Vali 2++ passes input audio directly through to its back panel outputs), given that tubes have limited operating life and replacements are challenging at best to source. As such, while one Sys input set comes from the Vali 2++, the other is directly sourced from the analog “headphone jack” audio output built into my docking station, which is tethered to the laptop (an Intel-based 2020 13” Apple MacBook Pro) over a Thunderbolt 3 connection:

Headphone outputs have passably comparable power specs to the line-level outputs that would normally connect to the Sys switch inputs (and from there to an audio power amplifier’s inputs), with two key qualifiers:
- They’re intended to drive comparatively low-impedance headphones, not high-impedance audio inputs, and
- Given that they integrate a modest audio amplifier circuit, you need to be restrained in your use of the volume setting controlling that audio amplifier to avoid overdriving whatever non-headphone input set they’re connected to in this alternative case.
The only other downside is that since the Sys is at the end of the chain, audio sourced from the docking station’s headphone jack also bypasses the Loki Mini’s hardware EQ facilities, although since it’s always computer-originated in this particular situation, software-based tone controls such as those built into Rogue Amoeba’s SoundSource utility for Macs or the open-source Equalizer APO for Windows systems can provide a passable substitute.
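By way of illustration, here's a minimal sketch of the kind of peaking-EQ biquad that such tone-control utilities implement under the hood, using the widely published Audio EQ Cookbook coefficient formulas. The sample rate, center frequency, Q, and gain are arbitrary illustrative values, and the random-noise signal is just a stand-in for real audio.

```python
# Peaking-EQ biquad (Audio EQ Cookbook coefficients) -- a sketch of the
# sort of filter behind software tone controls. Values are illustrative.
import numpy as np
from scipy.signal import lfilter

def peaking_eq_coeffs(fs, f0, q, gain_db):
    """Return normalized (b, a) coefficients for a peaking-EQ biquad."""
    a_lin = 10.0 ** (gain_db / 40.0)            # amplitude from dB
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]                   # normalize so a[0] = 1

fs = 48_000                                     # sample rate, Hz
b, a = peaking_eq_coeffs(fs, f0=100.0, q=0.7, gain_db=3.0)  # +3 dB bass bump
x = np.random.randn(fs)                         # one second of stand-in audio
y = lfilter(b, a, x)                            # equalized output
```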
Speaking of EQ, and working backwards in the chain, the Vali 2++ audio inputs are connected to the Loki Mini equalizer outputs, and the Loki Mini inputs are connected to the Modi Multibit 1 DAC outputs. And what of the DAC’s inputs? There are three available possibilities, one of which (optical S/PDIF) is currently unused.
It’s a shame that Apple phased out integrated optical S/PDIF output facilities after 2016; otherwise, I’d use them to tether the DAC to the 2018 Intel-based Apple Mac mini to the right of this stack. Unsurprisingly to you, likely, the USB input is also connected to the laptop, again via the Thunderbolt 3 docking station intermediary (albeit digitally this time). And what about the DAC’s coaxial (RCA) digital input? I’ll save that for part two next time.
The balanced alternative
Now, let’s look to the left of the laptop:

You’ve actually already seen one of the three members of this particular stack a couple of times before, albeit in a dustier and generally more disorganized fashion:

It’s now tidied up with an even pricier ($219) multi-shelf (and aluminum-based this time) rack, the Topping SR2 (here again are manufacturer and retail-partner links):

As before, the headphone amplifier is still the Drop + THX AAA 789:


But I’ve subsequently swapped out Topping’s D10 Balanced DAC:

for a Drop + Grace Design Standard DAC Balanced to assemble a Drop-branded duo:


The Topping D10 Balanced DAC is back in storage for now; I plan to eventually pair it with a S.M.S.L. SO200 THX AAA-888 Balanced Headphone Amplifier (yes, it really is slanted in shape):


And yes, I realize how abundantly blessed I am to have access to all this audio tech toy excess!
As you’ve likely already ascertained from the images (and if not that, the “Balanced” portion of the second product’s name), this particular setup instead leverages balanced interconnect, both XLR- and TRS-implemented. As such, I couldn’t merge another Schiit Loki Mini or Mini+ equalizer into the mix. Instead, I went with the balanced, six-band Schiit Lokius bigger sibling:


The Lokius EQ sits between the DAC and the headphone amplifier. The DAC’s USB input can connect to one of several nearby computers. On the one hand, this is convenient because the DAC is self-powered by that same USB connection. On the other, I’ve noticed that it sometimes picks up audible albeit low-level interference from the USB outputs of my Microsoft Surface Pro 7+ laptop (that said, no such similar issues exist with my Apple M2 Pro Mac Studio).
And what of the DAC’s optical S/PDIF input? Again, you’ll need to wait until next time for the reveal. Finally, in this case, the headphone amplifier doesn’t have pass-through outputs for direct connection to a stereo power amplifier (or, in this case, monoblock pair), so I’m instead (again, sparingly) leveraging its unbalanced headphone output.
The rest of the story
So far, we’ve covered the two stacks’ details. But what does each’s remaining S/PDIF DAC input connect to? And to what do they connect on the output end, and how? Stay tuned for part 2 to come next for the answers to these questions, along with other coverage topics. And until then, please share your so-far thoughts with your fellow readers and me in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Class D: Audio amplifier ascendancy
- A holiday shopping guide for engineers: 2025 edition
- The Schiit Modi Multibit: A little wiggling ensures this DAC won’t quit
- Audio amplifiers: How much power (and at what tradeoffs) is really required?
The post Sonic excellence: Music (and other audio sources) in the office, part 1 appeared first on EDN.
Why gold-plated tactile switches matter for reliability

In electronic product design, the smallest components often have the biggest impact on system reliability. Tactile switches—used in control panels, wearables, medical devices, instrumentation, and industrial automation—are a prime example. These compact electromechanical devices must deliver a precise tactile response, stable contact resistance, and long service life despite millions of actuations and a wide range of operating conditions.
For design engineers, one of the most critical choices influencing tactile switch reliability is contact plating. Among available materials, gold plating offers unmatched advantages in conductivity, corrosion resistance, and mechanical stability. While it costs more than silver plating—and tin, when used for terminal finishes—gold’s performance characteristics make it indispensable for mission-critical applications in which failure is not an option.
Understanding the role of plating in switch performance
The function of a tactile switch relies on momentary metal-to-metal contact closure. Over repeated actuation, environmental exposure and mechanical wear can increase contact resistance or even lead to intermittent operation. Plating serves as a barrier layer, protecting the base metal (often copper, brass, or stainless steel) from corrosion and wear while also influencing the switch’s electrical behavior.
Different plating materials exhibit markedly different behaviors:
- Tin (used only for terminal plating) offers low cost and good solderability but oxidizes quickly, raising contact resistance in low-current circuits.
- Silver provides excellent conductivity, but it tarnishes in the presence of sulfur or humidity, forming insulating silver sulfide films.
- Gold, though softer and more expensive, is chemically inert and does not oxidize or tarnish. It maintains stable, low contact resistance even under micro-ampere currents where other metals fail.
This property is crucial for tactile switches used in low-level signal applications, such as microcontroller input circuits, communication modules, or medical sensors, in which switching currents may be in the microamp to milliamp range. At such levels, even a thin oxide film can impede electron flow, creating unreliable or noisy signals.
The science behind gold’s stability
Gold’s chemical stability stems from its electronic configuration: Its filled d-orbitals make it resistant to oxidation and most chemical reactions. Its noble nature prevents formation of insulating oxides or sulfides, meaning the surface remains metallic and conductive throughout the switch’s service life.
From a materials engineering standpoint, plating thickness and uniformity are key. Gold layers used in tactile switches typically range from 0.1 to 1.0 µm, depending on required durability and environmental conditions. Thicker plating layers provide greater wear resistance but increase cost. Engineers should verify that the plating process, often electrolytic or autocatalytic, ensures full coverage on complex contact geometries to avoid thin spots that could expose the base metal.
Many switch manufacturers, such as C&K Switches, use gold-over-nickel systems. The nickel layer acts as a diffusion barrier, preventing copper migration into the gold and preserving long-term contact integrity. Without this barrier, copper atoms could diffuse to the surface over time, leading to porosity and surface discoloration that undermine conductivity.
When to specify gold plating
Selecting the right contact material for your tactile switch can make or break long-term reliability. Gold plating isn’t always necessary, but in the right applications, it’s indispensable.
Choose gold-plated tactile switches when reliability, environmental resistance, or low-current signal integrity outweighs incremental cost. In these cases, gold is not a luxury; it’s engineering insurance.
Gold plating’s reliability benefits become evident under extreme environmental or electrical conditions.
Medical devices and sterilization environments
Surgical and diagnostic instruments often undergo repeated steam autoclaving or chemical sterilization cycles. Moisture and elevated temperatures accelerate corrosion in conventional materials. Gold’s nonreactive surface resists degradation, ensuring consistent actuation force and electrical performance across hundreds of sterilization cycles. This reliability directly impacts patient safety and device regulatory compliance.
Outdoor telecommunications and IoT
Field-mounted communication hardware—base stations, gateways, or outdoor routers—encounters moisture, pollution, and temperature fluctuations. In such applications, tin or silver plating can oxidize within months, leading to noisy signals or switch failure. Gold-plated tactile switches preserve contact integrity, maintaining low and stable resistance even after prolonged environmental exposure.
Industrial automation and control
Industrial environments expose components to dust, vibration, and cleaning solvents. Gold’s smooth, ductile surface resists micro-pitting and fretting corrosion, while its low coefficient of friction contributes to predictable mechanical wear. As a result, switches maintain consistent tactile feedback over millions of actuations, a vital factor in HMI panels in which operator confidence depends on feel and repeatability.
Aerospace, defense, and safety-critical systems
In avionics and safety systems, even transient failures are unacceptable. Gold’s resistance to oxidation and its stable performance across −40°C to 125°C enable designers to meet MIL-spec and IPC reliability standards. The material’s immunity to metal whisker formation, common in tin coatings, eliminates one of the most insidious causes of short-circuits in mission-critical electronics.
Automation and robotics equipment benefit from gold-plated tactile switches that deliver long electrical life and immunity to oxidation in high-cycle production environments. (Source: Shutterstock)
Tackling common mechanical and electrical issues
Contact bounce reduction
Mechanical contacts inherently produce bounce, a rapid, undesired make-and-break sequence that occurs as the metal contacts settle. Bounce introduces signal noise and may require software or hardware debouncing. Gold’s micro-smooth surface reduces surface asperities, shortening bounce duration and producing cleaner signal transitions. This improves response time and may simplify firmware filtering or eliminate RC snubber circuits.
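For illustration, here's a minimal counter-style debounce of the sort such firmware filtering typically uses, written in plain Python for clarity; the sample count, the 2-ms polling assumption, and the names are illustrative rather than taken from any vendor library.

```python
# Counter-style software debounce: a new raw reading must persist for
# DEBOUNCE_SAMPLES consecutive polls before the reported state changes.
DEBOUNCE_SAMPLES = 5        # e.g., five 2-ms polls = a 10-ms window

class Debouncer:
    def __init__(self):
        self.stable_state = False   # debounced (reported) switch state
        self.counter = 0            # consecutive disagreeing samples seen

    def update(self, raw: bool) -> bool:
        """Call once per poll with the raw pin reading; returns debounced state."""
        if raw == self.stable_state:
            self.counter = 0                  # agreement: reset the counter
        else:
            self.counter += 1                 # candidate new state
            if self.counter >= DEBOUNCE_SAMPLES:
                self.stable_state = raw       # held long enough: accept it
                self.counter = 0
        return self.stable_state
```

The shorter the physical bounce—which gold's smoother surface promotes—the smaller that window can be made, trimming input latency.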
Metal whisker mitigation
Tin and zinc surfaces can spontaneously grow metallic whiskers under stress, causing shorts or leakage currents. Gold plating’s crystalline structure is stable and does not support whisker growth, a key reliability advantage in fine-pitch or high-density electronics.
Thermal and mechanical stability
Gold has a low coefficient of thermal expansion mismatch with typical nickel underplates, minimizing stress during thermal cycling. It does not harden or crack under high temperatures, allowing switches to function consistently from cold-storage conditions (−55°C) to high-heat appliance environments (>125°C surface temperature).
Electrical characteristics: low-level signal switching
Many engineers underestimate how contact material impacts performance in low-current circuits. When switching below approximately 100 mA, oxide film resistance dominates contact behavior. Non-noble metals can form surface barriers that block electron tunneling, leading to contact resistance in the tens or hundreds of ohms. Gold’s stable surface keeps contact resistance in the 10- to 50-mΩ range throughout the product’s life.
Additionally, gold’s low and stable contact resistance minimizes contact noise, which can be especially important in digital logic and analog sensing circuits. For instance, in a patient monitoring device using microvolt-level signals, a transient resistance increase of just a few ohms can cause erroneous readings or false triggers. Gold plating ensures clean signal transmission even at the lowest currents.
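To put numbers on that, a simple divider calculation shows why tens of milliohms are invisible at a logic input while a high-resistance oxide film is fatal; the 3.3-V rail, 10-kΩ pull-up, and contact resistances below are illustrative.

```python
# Voltage at an MCU input pin when a switch (with contact resistance
# r_contact) pulls the pin low against a pull-up resistor.
VCC = 3.3             # supply rail, volts
R_PULLUP = 10_000.0   # pull-up resistance, ohms

def v_low(r_contact):
    """Pin voltage with the switch closed: a simple resistive divider."""
    return VCC * r_contact / (r_contact + R_PULLUP)

for r in (0.05, 50.0, 10_000.0):   # gold contact / mild oxide / heavy film
    print(f"{r:>8.2f} ohm contact -> {v_low(r):.4f} V at the pin")
# ~17 uV and ~16 mV are solid logic lows; 1.65 V is indeterminate --
# the unreliable-signal failure mode described above.
```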
Balancing cost and performance
It’s true that gold plating adds material and process costs. However, lifecycle analysis often reveals a compelling return on investment. In applications in which switch replacement or failure results in downtime, service calls, or warranty claims, the incremental cost of gold plating is negligible compared with the total system value.
Manufacturers help designers manage cost by offering hybrid switch portfolios. For example, C&K’s KMR, KSC, and KSR tactile switch families include both silver-plated and gold-plated versions. This allows designers to standardize on a footprint while selecting the appropriate contact material for each function: gold for logic-level or safety-critical inputs, silver for higher-current or less demanding tasks.
KSC2 Series tactile switches, available with gold-plated contacts, combine long electrical life and stable actuation in compact footprints for HVAC, security, and home automation applications. (Source: C&K Switches)
Design considerations and best practices
When specifying gold-plated tactile switches, engineers should evaluate both electrical and environmental parameters to ensure the plating delivers full value:
- Current rating and load type: Gold excels in “dry circuit” switching below 100 mA. For higher currents (>200 mA), arcing can erode gold surfaces; mixed or dual plating (gold plus silver) may be more appropriate.
- Environmental sealing: Use sealed switch constructions (IP67 or higher) when exposure to fluids or contaminants is expected. This complements gold plating and extends operating life.
- Plating thickness: For harsh environments or long lifecycles (>1 million actuations), specify a thicker gold layer (≥0.5 µm). Thinner flash layers (0.1 µm) are adequate for indoor or low-stress use.
- Base metal compatibility: Always ensure the plating stack includes a nickel diffusion barrier to prevent copper migration.
- Mating surface design: Gold-to-gold contacts perform best. Avoid mixing gold with tin on the mating side, which can cause galvanic corrosion.
- Actuation force and feel: Gold’s lubricity affects tactile response slightly; designers should verify that chosen switches maintain the desired haptic feel across temperature and wear cycles.
By integrating these considerations early in the design process, engineers can prevent many reliability issues that otherwise surface late in validation or field deployment.
Lifecycle testing and qualification standards
High-reliability applications frequently require validation under standards such as:
- IEC 60512 (electromechanical component testing)
- MIL-DTL-83731F (for aerospace-grade switches)
- AEC-Q200 (automotive passive component qualification)
Gold-plated tactile switches often exceed these standards, maintaining consistent contact resistance after 10⁵ to 10⁶ mechanical actuations, temperature cycling, humidity exposure, and vibration. Some miniature switch series, such as the C&K KSC2 and KSC4 families, can endure as many as 5 million actuations, highlighting how material selection plays a critical role in overall system durability.
Practical benefits: From design efficiency to end-user experience
For engineers, specifying gold-plated tactile switches yields several tangible advantages:
- Reduced maintenance: Longer life and fewer field failures minimize warranty and service costs.
- Simplified circuit design: Low and stable contact resistance can eliminate the need for additional filtering or conditioning circuits.
- Enhanced system reliability: Predictable behavior across temperature, humidity, and lifecycle improves compliance with functional-safety standards such as ISO 26262 or IEC 60601.
- Improved user experience: Consistent tactile feel and reliable operation translate to higher perceived quality and brand reputation.
For the end user, these benefits manifest as confidence—buttons that always respond, equipment that lasts, and interfaces that feel precise even after years of use.
Designing for a connected, reliable future
As electronic systems become smarter, smaller, and more interconnected, tolerance for failure continues to shrink. A single faulty switch can disable a medical device, interrupt a network node, or halt an industrial process. Choosing gold-plated tactile switches is therefore not simply a materials decision; it’s a reliability strategy.
Gold’s unique combination of chemical inertness, electrical stability, and mechanical durability ensures consistent performance across millions of cycles and the harshest conditions. For design engineers striving to deliver long-lived, premium-quality products, gold plating provides both a technical safeguard and a competitive edge.
In the end, reliability begins at the contact surface—and when that surface is gold, the connection is built to last.
About the author
Michaela Schnelle is a senior associate product manager at Littelfuse, based in Bremen, Germany, covering the C&K tactile switches portfolio. She joined Littelfuse 16 years ago and works with customers and distributors worldwide to support design activities and new product introductions. She focuses on product positioning, training, and collaboration to help customers bring reliable designs to market.
The post Why gold-plated tactile switches matter for reliability appeared first on EDN.
CES 2026: Multi-link, 20-MHz IoT boost Wi-Fi 7 prospects

Wi-Fi 7 enters 2026 with a crucial announcement made at CES 2026 in Las Vegas, Nevada. The Wi-Fi Alliance is introducing the 20-MHz device category for Wi-Fi 7, aimed at addressing the needs of the broader Internet of Things (IoT) ecosystem. Add Wi-Fi 7’s multi-link IoT capability to this, and you have a more consistent, always-connected experience for applications such as security cameras, video doorbells, alarm systems, medical devices, and HVAC systems.
The 802.11be standard, widely known as Wi-Fi 7, was drafted in 2024, and the formal standard followed in 2025. From Wi-Fi 1 to Wi-Fi 5, the focus was on increasing the connection’s data rate. But then the industry realized that a mere increase in speed wasn’t enough by itself.
“The challenge shifted to managing traffic on the network as more devices were coming onto the network,” said Sivaram Trikutam, senior VP of wireless products at Infineon Technologies. “So, the focus in Wi-Fi 6 shifted toward increasing the efficiency of the network.”
The industry then took Wi-Fi 7 to the next level in terms of efficiency over the past two years, especially with the emergence of high-performance applications. The challenge shifted to how multiple devices on the network could share spectrum efficiently so they could all achieve a useful data rate.
The quest to support multiple devices, at the heart of Wi-Fi 7 design, eventually led to the Wi-Fi Alliance’s announcement that even a 20 MHz IoT device can now be certified as a Wi-Fi 7 device. The Wi-Fi 7 certification program, expanded to include 20-MHz IoT devices, could have a profound impact on this wireless technology’s future.

Figure 1 Wi-Fi 7 in access points and routers is expected to overtake Wi-Fi 6/6E in 2028. Source: Infineon
20-MHz IoT in Wi-Fi 7’s fold
Unlike notebooks and smartphones, 20-MHz devices don’t require a high data rate. IoT applications like door locks, thermostats, security cameras, and robotic vacuum cleaners need to be connected, but they don’t require gigabit data rates; they typically need 15 Mbps. What they demand is high-quality, reliable connectivity, as these devices sit at difficult locations from a wireless perspective.
At CES 2026, Infineon unveiled what it calls the industry’s first 20-MHz Wi-Fi 7 device for IoT applications. ACW741x, part of Infineon’s AIROC family of multi-protocol wireless chips, integrates a tri-radio encompassing Wi-Fi 7, Bluetooth LE 6.0 with channel sounding, and IEEE 802.15.4 Thread with Matter ecosystem support in a single device.

Figure 2 ACW741x integrates radios for Wi-Fi 7, Bluetooth LE 6.0, and IEEE 802.15.4 Thread in a single chip. Source: Infineon
The ACW741x tri-radio chip also integrates wireless sensing capabilities, adding contextual awareness to IoT devices and facilitating home automation and personalization applications. Here, Wi-Fi Channel State Information (CSI) based on the 802.11bf standard enables enhanced Wi-Fi sensing with intelligence sharing between same-network devices. Next, channel sounding delivers accurate, secure, and low-power ranging with centimeter-level accuracy.
ACW741x is optimized for a 20-MHz design to support battery-operated applications such as security cameras, door locks, and thermostats that require ultra-low Wi-Fi-connected standby power. It bolsters link reliability with adaptive band switching to mitigate congestion and interference.
Adaptive band switching without disconnecting from the network opens the door to Wi-Fi 7 multi-link for IoT devices while maintaining concurrent links across 2.4 GHz, 5 GHz, and 6 GHz frequency bands. ACW741x supports Wi-Fi 7 multi-link for IoT, enhancing robustness in congested environments.
Multi-link for IoT devices
Wi-Fi operates in three bands—2.4 GHz, 5 GHz, and 6 GHz—and when a device connects to an access point, it must choose a band. Once connected, it cannot change it, even if that band gets congested. That will change in Wi-Fi 7, which connects virtually to all three bands with a single RF chain at no extra system cost.
Wi-Fi 7 operates in the best frequency band, enhancing robustness in congestion in home networks and interference across neighboring networks. “Multi-link for IoT allows establishing connections at all bands, and a device can dynamically select which band to use at a given point via active band switching without disconnecting from the network,” said Trikutam. “And you can move from one band to another by disconnecting and reconnecting within 7 to 10 seconds.”
That’s crucial because the number of connected devices in a home is growing rapidly, from 10 to 15 devices after the pandemic to more than 50 devices in 2025 in a typical U.S. or European home. Add this to the introduction of 20-MHz IoT devices in Wi-Fi 7’s fold, and you have a rosy picture for this wireless technology’s future.

Figure 3 Multi-link for IoT enables wireless connections across all three frequency bands. Source: Infineon
According to the Wi-Fi Alliance, shipments of access points supporting the standard rose from 26.3 million in 2024 to a projected 66.5 million in 2025. And ABI Research projects that the transition to Wi-Fi 7 will accelerate further in 2026, with a forecast annual shipment number of Wi-Fi 7 access points at 117.9 million.
Related Content
- Broadcom delivers Wi-Fi 8 chips for AI
- CES 2026: Wi-Fi 8 silicon on the horizon with an AI touch
- Exploring the superior capabilities of Wi-Fi 7 over Wi-Fi 6
- Chipsets brings Wi-Fi 7 to a broad range of wireless applications
- Europe Focuses on 6GHz Regulation, While Wi-Fi 7 Looms Beyond
The post CES 2026: Multi-link, 20-MHz IoT boost Wi-Fi 7 prospects appeared first on EDN.
LiDAR’s power and size problem

Awareness of LiDAR and advanced laser technologies has grown significantly in recent years. This is in no small part due to their use in autonomous vehicles such as those from Waymo, Nuro, and Cruise, plus those from traditional brands such as Volvo, Mercedes, and Toyota. It’s also making its way into consumer applications; for example, the iPhone Pro (12 and up) includes a LiDAR scanner for time-of-flight (ToF) distance calculations.
The potential of LiDAR technologies extends beyond cars, including applications such as range-finding in golf and hunting sights. However, the nature of the technology used to power all these systems means that solutions currently on the market tend to be bulkier and more power-intensive than is ideal. Even within automotive, the cost, power consumption, and size of LiDAR modules continue to limit adoption.
Tesla, for example, has chosen to leave out LiDAR completely and rely primarily on vision cameras. Waymo does use LiDAR, but has reduced the number of sensors in its sixth-generation vehicles: from five to four.
Overcoming the known power and size limitations in LiDAR design is critical to enabling scalable, cost-effective adoption across markets. Doing so also creates the potential to develop new application sectors, such as bicycle traffic or blind-spot alerts.
In this article, we’ll examine the core technical challenges facing laser drivers that have tended to restrict wider use. We’ll also explore a new class of laser driver that is both smaller and significantly more power efficient, helping to address these issues.
Powering ToF laser drivers
The main power demand within a LiDAR module comes from the combination of the laser diode and its associated driver that together generate pulsed emissions in the visible or near-infrared spectrum. Depending on the application, the LiDAR may need to measure distances up to several hundred meters, which can require optical power of 100-200 W. Since the efficiency of the laser diodes is typically 20-30%, the peak driving power delivered to the laser must be around 1 kW.
On the other hand, the pulse duration must be short to ensure accuracy and adequate resolution, particularly for objects at close distances. In addition, since the peak optical power is high, limiting the pulse duration is critical to ensure the total energy conforms to health guidelines for eye safety. Fulfilling all these requirements typically calls for pulses of 5 ns or less.
Operating the laser thus requires the driver to switch a high current at extremely high speed. Standing in the designer’s way, the inductance associated with circuit connections, board parasitics, and bondwires of IC packages is enough to prevent the current from changing instantaneously.
These small parasitic inductances are intrinsic to the circuit and cannot be eliminated. However, by introducing a parallel capacitance, it is possible to create a resonant circuit that takes advantage of this inductance to achieve a short pulse duration. If the overall parasitic inductance is about 1 nH and the pulse duration is to be a few nanoseconds, the capacitance can be only a few nanofarads or less. With such a low value of capacitance, the applied voltage must be on the order of 100 V to achieve the desired peak power in the laser. This must be provided by boosting the available supply voltage.
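To make that sizing argument concrete, here's a back-of-envelope sketch. It assumes an ideal, lossless half-sine resonant discharge (pulse width ≈ π√(LC)) and ignores the laser's forward drop and circuit damping, so the results are order-of-magnitude only.

```python
# Back-of-envelope sizing for the resonant laser-driver stage: idealized,
# lossless half-sine discharge with t_pulse = pi * sqrt(L * C).
import math

L_par = 1e-9      # ~1 nH total parasitic inductance (per the text)
t_pulse = 5e-9    # target pulse width: 5 ns or less
V_cap = 100.0     # boosted voltage on the resonant capacitor, ~100 V

# Capacitance that resonates with the parasitic inductance at the target width:
C_res = (t_pulse / math.pi) ** 2 / L_par
print(f"C ~ {C_res * 1e9:.1f} nF")        # ~2.5 nF: 'a few nanofarads or less'

# Stored energy and the mean power delivered over the pulse:
E = 0.5 * C_res * V_cap ** 2              # ~13 uJ per pulse
print(f"P ~ {E / t_pulse / 1e3:.1f} kW")  # kW-class, the scale cited above
```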
Discrete laser driver
Figure 1 shows the circuit diagram for a resonant laser-diode driver, including the resonant capacitor (Csupply) and effective circuit inductance (Lbond). A boost regulator provides the high voltage needed to operate the resonant circuit.

Figure 1 Resonant gate driver and boost regulator, including the resonant capacitor (Csupply) and effective circuit inductance (Lbond). (Source: Silanna Semiconductor)
The circuit requires a boost voltage regulator, depicted as Boost voltage regulator (VR) in the diagram, to provide the high voltage needed at Csupply to deliver the required energy. The circuit as shown contains a discrete gate driver for the main switching transistor (FET), which must be controlled separately to generate the desired switching signals.
In addition, isolation resistance is needed between Cfilter and Csupply, shown in the diagram, to ensure the resonant circuit can operate properly. This is relatively inefficient, as no more than 50% of the energy is transferred from the filter side to Csupply.
Handheld equipment limitations
In smaller equipment types, such as handheld ranging devices and action cameras, the high voltage must be derived from a small battery of low nominal voltage—typically a 3-V CR2 or a 3.7-V (nominal voltage, up to 4.2 V) lithium battery—which is usually the main power source.
Figure 2 shows a comparable schematic for a laser-diode driver powered from a 3.7-V rechargeable lithium battery. Achieving the required voltage using a discrete boost VR and laser-diode driver is complex, and designers need to be very careful about efficiency.
Multiple step-up converters are often used, but efficiency drops rapidly. If two stages are used, each with an efficiency of 90%, the combined efficiency across the two stages is only 81%.

Figure 2 A laser driver operated from a rechargeable lithium battery; two stages are used, for a combined efficiency of 81%. (Source: Silanna Semiconductor)
In addition, there are stringent constraints on enclosure size, and the devices are often sealed to prevent dust or water ingress. On the other hand, sealing also prevents cooling airflow, thereby making thermal management more difficult. In addition, high overall efficiency is essential to maximize battery life while ensuring the high optical power needed for long range and high accuracy.
Circuit layout and size
The high speeds and slew rates involved in making the LiDAR transmitter work call for proper consideration of circuit layout and component selection. A gallium nitride (GaN) transistor is typically preferred for its ability to support fast switching at high voltage compared to an ordinary silicon MOSFET. Careful attention to ground connections is also required to prevent voltage overshoots and ground bounce from disrupting proper transistor switching and potentially damaging the transistor.
Also, a compact module design is difficult to achieve due to efficiency limitations and thermal management challenges. The inefficiencies in the discrete circuit implementation mean operating at high power produces high losses and increased self-heating that can cause the operating temperature to rise. However, while short pulses can reduce the average thermal load, current slew rates must be extremely high. If this cannot be maintained consistently, extra losses, more heat, and degraded performance can result.
A heatsink is the preferred thermal management solution, although a large heatsink can be needed, leading to a larger overall module size and increased bill of materials cost. In addition, ensuring eye safety calls for a fast shutdown in the event of a circuit fault.
Bringing the boost stage, isolation, GaN FET driver, and control logic into a single compact IC (see Figure 3) achieves greater functional integration and offers a route to higher efficiency, smaller form factors, and enhanced safety through nanosecond-level fault response.

Figure 3 An integrated driver designed for resonant capacitor charging combines short pulse width with high power and efficiency. This circuit was implemented with Silanna SL2001 dual-output driver. (Source: Silanna Semiconductor)
While leveraging resonant-capacitor charging to achieve short, tightly controlled pulse duration, this integration avoids the energy losses incurred in the capacitor-to-capacitor transfer circuitry. The fault sensing and reporting can be brought on-chip, alongside these timing and control features.
This approach is seen in LiDAR driver ICs like the Silanna FirePower family, which integrate all the functions needed for charging and firing edge-emitting laser (EEL) or vertical-cavity surface-emitting laser (VCSEL) resonant-mode laser diodes at sub-3-ns pulse width. Figure 4 shows how an experimental setup produced a 400-W pulse of 2.94 ns, operating with a capacitor voltage boosted to 120 V with a resonant capacitor value of 2.48 nF.

Figure 4 Test pulse produced using integrated driver and circuit configuration as in Figure 3. (Source: Silanna Semiconductor)
The driver maintains control of the resonant capacitor energy and eliminates any effects of input voltage fluctuations, while on-chip logic sets the output power and performs fault monitoring to ensure eye safety. The combined effects of advanced integration and accurate logic-based control can save 90% of charging power losses compared to a discrete implementation and realize an overall charging efficiency of 85%. The control logic and fault monitoring are configured through an I2C connection.
Of the two devices in this family, the SL2001 works with a supply voltage from 3 V to 24 V and provides a dual GaN/MOS drive that enables peak laser power greater than 1000 W with a pulse-repetition frequency up to several MHz. The second device, the SL2002, is a single-channel driver targeted for lower power applications and is optimized for low input voltage (3 V-6 V) operation. Working off a low supply voltage, this driver’s 80-V laser diode voltage and 1 MHz repetition rate are suited to handheld applications such as rangefinders and 3D mapping devices. Figure 5 shows how the SL2002 can simplify the driving circuit for a battery-operated ranging device powered from a 3.7 V lithium battery.

Figure 5 Simplified circuit diagram for low-voltage battery-operated ranging. (Source: Silanna Semiconductor)
Shrinking LiDAR modules
LiDAR has been a key component in the success of automated driving, working in conjunction with other sensors, including radar, cameras, and ultrasonic detectors, to complete the vehicle’s perception system. However, LiDAR modules must become smaller and more energy-efficient to earn their place in future vehicle generations and fulfill opportunities beyond the automotive sphere.
Focusing innovation on the laser-driving circuitry unlocks the path to next-generation LiDAR that is smaller, faster, and more energy-efficient than before. New, single-chip drivers that deliver high optical output power with tightly controlled, nanosecond pulse width enable LiDAR to address tomorrow’s cars as well as handheld devices such as rangefinders.
Ahsan Zaman is Director of Marketing at Silanna Semiconductor, Inc. for the FirePower™ laser driver line of products. He joined the company in 2018 through the acquisition of Appulse Power, a Toronto, Canada-based AC-DC power supply startup, where he was a co-founder and VP of Engineering. Prior to that, Ahsan received his B.A.Sc., M.A.Sc., and Ph.D. degrees in Electrical Engineering from the University of Toronto, Canada, in 2009, 2012, and 2015, respectively. He has more than a decade of experience in power converter architectures, mixed-signal IC design, low-volume and high-efficiency power management solutions for portable electronic devices, and advanced control methods for high-frequency switch-mode power supplies. Ahsan has collaborated with industry-leading semiconductor companies such as Qualcomm, TI, NXP, and Exar; co-authored more than 20 IEEE conference and journal publications; and holds several patents in this field.
Related Content
- What is automotive LIDAR and how does it work
- LiDAR’s second wind
- FMCW LiDAR sensor addresses accuracy and cost
The post LiDAR’s power and size problem appeared first on EDN.
CES 2026: Wi-Fi 8 silicon on the horizon with an AI touch

While Wi-Fi 7 adoption is accelerating among enterprises, Wi-Fi 8 routers and mesh systems could arrive as early as summer 2026. It’s important to note that the IEEE 802.11bn standard, widely known as Wi-Fi 8, is expected to be ratified in 2028. So, the gap between Wi-Fi 7’s launch and the potential availability of Wi-Fi 8 products in mid-2026 could shorten the typical cycle between Wi-Fi generations.
At CES 2026 in Las Vegas, Nevada, wireless chip vendors like Broadcom and MediaTek are unveiling their Wi-Fi silicon offerings. ASUS is also conducting real-world throughput tests of its Wi-Fi 8 concept routers at CES 2026.

Figure 1 Wi-Fi 8 aims to deliver a system-wide upgrade across speed, capacity, reach, and reliability. Source: Broadcom
Wi-Fi 8—aimed at boosting reliability and reducing latency in dense, interference-prone environments—marks a shift in Wi-Fi evolution. While Wi-Fi 8 maintains the same theoretical maximum data rate as Wi-Fi 7, it aims to improve effective throughput, reduce packet loss, and decrease latency for time-sensitive applications.
Another notable feature of Wi-Fi 8 designs is the incorporation of AI ingredients. Below is a short profile of an AI accelerator chip that claims to facilitate real-time agentic applications for residential consumers.
AI accelerator for Wi-Fi 8
Wi-Fi 8 proponents are quick to point out that it connects the wireless world with the AI future through highly reliable connectivity and low-latency responsiveness. Real-time, latency-sensitive applications are increasingly seeking to employ agentic AI, and for that, Wi-Fi 8 aims to prioritize consistent performance under challenging conditions.
Broadcom’s new accelerated processing unit (APU), unveiled at CES 2026, combines compute and networking ingredients with AI acceleration in a single silicon device. BCM4918—a system-on-chip (SoC) device blending compute acceleration, advanced networking, and security—aims to deliver high throughput, low latency, and intelligent optimization needed for the emerging AI-driven connected ecosystem.
The new AI accelerator for Wi-Fi 8 integrates a neural engine for on-device AI/ML inference and acceleration. It also incorporates networking engines to offload both wired and wireless data paths, enabling complete CPU bypass of all networking traffic. For built-in security, cryptographic protocol acceleration ensures end-to-end data protection without performance compromise.
“Our new BCM4918 APU, along with our full portfolio of Wi-Fi 8 chipsets, form the foundation of an AI-ready platform that not only enables immersive, intelligent user experiences but also does so with efficiency, security, and sustainability at its core,” said Mark Gonikberg, senior VP and GM of Broadcom’s Wireless and Broadband Communications Division.

Figure 2 When paired with BCM6714 and BCM6719 dual-band radios, BCM4918 APU allows designers to develop a unified compute-and-connectivity architecture. Source: Broadcom
AI compute plus connectivity
The BCM4918 APU is paired with two new dual-band Wi-Fi 8 radio devices: BCM6714 and BCM6719. While combining 2.4 GHz and 5 GHz operation into a single piece of silicon, these Wi-Fi 8 radios also feature on-chip 2.4-GHz power amplifiers, reducing external components and improving RF efficiency.
These dual-band radios, when paired with the BCM4918 APU, allow design engineers to quickly develop a unified compute-and-connectivity architecture that enables edge-AI processing, real-time optimization, and adaptive intelligence. The APU and dual-band radios for Wi-Fi 8 are now available to early access customers and partners.
Broadcom’s Gonikberg says that Wi-Fi 8 represents a turning point where broadband, connectivity, compute, and intelligence truly converge. Its ahead-of-schedule arrival, he adds, is a testament to those convergence merits: Wi-Fi 8 is more than a speed upgrade and could transform connection stability and responsiveness.
Related Content
- Broadcom delivers Wi-Fi 8 chips for AI
- Exploring the superior capabilities of Wi-Fi 7 over Wi-Fi 6
- Understanding the Differences Between Wi-Fi HaLow and Wi-Fi
- Chipsets brings Wi-Fi 7 to a broad range of wireless applications
- Europe Focuses on 6GHz Regulation, While Wi-Fi 7 Looms Beyond
The post CES 2026: Wi-Fi 8 silicon on the horizon with an AI touch appeared first on EDN.
Simple speedy single-slope ADC

Ages ago, humankind crawled out of the primordial analog ooze and began to do digital. They soon noticed and quantified a fundamental need to interconnect their new quantized numerical novelties with the classic continuum of the ancestral engineer’s world. Thus arose the ADC.
Of course, there were (and are) an abundance of ADC schemes and schematics. One of the earliest and simplest of these was the single-slope type.
Single slope ADCs come in two savory flavors. In one, a linear analog voltage ramp is generated and compared to the input signal. The time required for the ramp to rise from zero (or near) to equality with the input is proportional to the input’s amplitude and taken as its digital conversion.
We recently saw an example contributed by Dr. Jordan Dimitrov to our own friendly Design Idea (DI) corner in “Voltage-to-period converter offers high linearity and fast operation.”
In a different cultivar of the single sloper, a capacitor is charged to the input voltage, then linearly ramped down to zero. The time required to do that is proportional to Vin and counts (pun!) as the conversion result. An (extremely!) simple and cheap example of this type was published here about two and a half years ago in “A “free” ADC.”
Wow the engineering world with your unique design: Design Ideas Submission Guide
While simple and cheap are undeniably good things, too much of a good thing is sometimes not such a good thing. The circuit in Figure 1 adds a few refinements (and a bit more cost) to that basic design in pursuit of an order of magnitude (or two) better accuracy and perhaps a bit more speed.
Figure 1 Simple speedy single-slope (SSSS) ADC biphasic conversion cycle.
Here’s how it works:
- (CONVERT = 1) switch U1 charges C1 to Vin
- (CONVERT = 0) C1 is linearly discharged by 100 µA current sourced by Z1Q1
Note: Z1, C1, and R2 should be precision types.
Conversion occurs in two phases, selected by one GPIO bit configured for output (CONVERT/ACQUIRE).
During the ACQUIRE (1) interval SPDT switch U1 connects integrator capacitor C1 to the input source, charging it to Vin. The acquisition time constant of the charging is:
C1 × (Rs + Z1 + U1 Ron + Q2’s input impedance) ≈ 10 µs
To complete the charge to ½-lsb-precision at 12-bit resolution, this needs an ACQUIRE interval of:
10 µs × ln(2^(12+1)) ≈ 90 µs
The controlling microcontroller can then return CONVERT to zero, which switches the input side of C1 to ground, driving the base of the comparator transistor negative for a voltage step of –Vin, plus a “smidgen” (~12 mV).
This last is contributed by C2 to compensate for the zero offset that would otherwise accrue from Q2’s finite voltage gain and storage time.
Q2’s emergence from saturation drives INTEGRATE positive. Here it remains until the discharge of C1 is complete and Q2 turns back ON. This interval is:
Vin × C1 / 100 µA = 200 µs/V = 1 ms maximum
If the connected counter/peripheral runs at 20 MHz, then the max-count accumulation and conversion resolution will be 4000, or 11.97 bits.
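A short worked-numbers sketch of the conversion cycle follows. C1 isn't stated explicitly above, so it's inferred here from the 200-µs/V slope and the 100-µA ramp current; the last line expresses the count accumulation per volt of input.

```python
# Worked numbers for the SSSS ADC's two-phase conversion cycle.
import math

tau = 10e-6                 # ~10-us acquisition time constant (from the text)
n_bits = 12
# Settling to within 1/2 LSB takes ln(2^(n+1)) time constants:
t_acquire = tau * math.log(2 ** (n_bits + 1))
print(f"ACQUIRE: {t_acquire * 1e6:.0f} us")       # ~90 us, as above

I_ramp = 100e-6             # Z1/Q1 ramp current
slope = 200e-6              # discharge time per volt of Vin (200 us/V)
C1 = I_ramp * slope         # implied integrator capacitance: 20 nF
print(f"C1 = {C1 * 1e9:.0f} nF")

f_clk = 20e6                # counter clock
print(f"{f_clk * slope:.0f} counts per volt of Vin")  # 4000, ~12 bits
```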
This 1-ms, or ~12-bit, conversion cycle is sketched in Figure 2. Note that good integral nonlinearity (INL) and differential nonlinearity (DNL) are inherent.

Figure 2 The SSSS ADC waveshapes. The ACQUIRE duration (12 bits) is 90 µs. The INTEGRATE duration is 1ms max (Vin C1 / Iq1 = 200 µs/V). Amplitude is 5 Vpp.
Of course, not all signal sources will gracefully tolerate the loading imposed by this conversion sequence, and not all applications will find the tolerance of available LM4041 references and R1C1 adequately precise.
Figure 3 shows fixes for both of these limitations. A typical RRIO CMOS amplifier for A1 eliminates the input loading problem, and the R5 trim provides a convenient means for improving conversion calibration.

Figure 3 A1 input buffer unloads Vin, and R5 calibration trim improves accuracy.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Voltage-to-period converter offers high linearity and fast operation
- A “free” ADC
- Another weird 555 ADC
- 15-bit voltage-to-time ADC for “Proper Function” anemometer linearization
The post Simple speedy single-slope ADC appeared first on EDN.
Amazon’s Smart Plug: Getting inside requires more than just a tug

Amazon wisely doesn’t want naïve consumers poking around inside its high-voltage AC-switching devices. This engineer was also thwarted in his exploratory efforts…initially, at least.
Early last month, within a post detailing my forced-by-phaseout transition from Belkin’s Wemo smart plugs to TP-Link’s Kasa and Tapo devices, I mentioned that I’d originally considered a different successor:
Amazon was the first name that came to mind, but although its branded Smart Plug is highly rated, it’s only controllable via Alexa. I was looking for an ecosystem that, like Wemo, could be broadly managed, not only by the hardware supplier’s own app and cloud services but also by other smart home standards…

Even though I ended up going elsewhere, I still had a model #HD34BX Amazon Smart Plug sitting on my shelf. I’d bought it back in late November 2020 on sale for $4.99, 80% off the usual $24.99 price (and in response to, I’m guessing, per the purchase date, a Black Friday promotion). Regular readers already know what comes next: it’s teardown time!
Let’s start with some outer box shots, as usual (as with subsequent images), accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:




Note that, per my prior writeup’s “specific hardware requirement that needed to be addressed,” it supports (or at least claims to) up to 15A of current:
- Input: 100-120V, 60 Hz, 15A Max
- Output:
- 120V, 60 Hz, 15A, resistive load
- 120V, 60 Hz, 10A, inductive load
- 120V, 60 Hz, 1/2 HP, motor load
- 120V, 60 Hz, TV-5, incandescent
- Operating Temperature: 0-35°C
- IP Rating: IP30
thereby being capable of power-controlling not only low-wattage lamps but also coffee makers, curling irons, and the like:


See that translucent strip of tape at the upper right?

Wave buh-bye to it; it’s time to look inside:


Nifty cardboard-based device-retention mechanism left over at the bottom:

The bottom left literature snippet is the usual warranty, regulatory and other gobbledygook:

The one at right is a wisp of a quick-start guide:


But neither of them, trust me I already realize, is the fundamental motivation for why you’re here today. Instead, it’s our dissection subject (why was I having flashbacks to the recently viewed and greatly enjoyed 2025 version of Frankenstein as I wrote those prior words?):


Underneath the hole at far left is an activity-and-status LED. And rotating the smart plug 90°:

there’s the companion switch, which not only allows for manual power control of whatever’s plugged into it but also initiates a factory reset when pressed and held for an extended period.
Around back are specs-and-such, including the always-insightful FCC ID (2ALBG-2017), along with the line (“hot”) and neutral source blades and ground pin (Type B NEMA 5-15 in this case):
In contrast to its left-side sibling, the right side is comparatively bland (i.e., to clarify, there’s nothing under the penny):

as are the bottom:

and the top, for that matter, unless you’re into faintly embossed Amazon logos:

My first (few…seeming few dozen…) attempts to get inside via the visible seam around the backside edges, trying out various implements of destruction in the process, were for naught:

Though the efforts weren’t completely wasted, as they motivated me to finally break out the Dremel set that had been sitting around unused and collecting dust since…yikes…mid-2005, my Amazon order history just informed me:

and which delivered ugly but effective results (albeit leaving the smart plug headed for nowhere but the landfill afterwards):


First step: unscrew and disconnect the wire going from the front panel socket’s load (“hot”) slot to the PCB (where it’s soldered):
Like I said before…ugly but effective:

At the top (in this photo, to the left when originally assembled) are the light pipe that routes the LED (yet to be seen but presumably on the PCB) output to the front panel, along with the mechanical assembly for the left-side switch:

You’ve already seen one top view of the insides, three photos ago. Here’s another, this time standalone and rotated:
And here are four of the five other perspectives; the back view will come later. Front:
Left side, showing the PCB-mounted portion of the switch assembly:
Right behind the switch is the outward-pointing LED whose location I’d just prognosticated:
Right side:
And bottom:
Electron routing and switching
Onward. The ground pin from the back panel routes directly to the front panel socket’s ground slot, not interacting with any intermediary circuitry en route:

You’ve probably already noticed that the “PCB” is actually a three-PCB assembly: smaller ones at top and bottom, both 90°-connected to the main one at the back. To detach the latter from the back chassis panel requires removal of another screw:

Houston, we have liftoff:

This is interesting, at least to me. The neutral wire is attached to its corresponding back-panel blade with a screw, albeit also to the PCB at the other end with solder:

but the line (“hot”) wire is soldered at both ends:
This seemingly inconsistent approach likely makes complete sense to those of you more versed in power electronics than me; please share your thoughts in the comments. For now…snip:

Assuming, per my earlier comments, that you’ve already noticed the three-PCB assembly, you might have also noticed some white tape on both sides of the mini-PCB located at the bottom. Wondering what’s underneath it? Me too:
The answer: not much of anything!
What’s the frequency, Kenneth?
(At least) one more mystery to go. We’ve already seen plenty of predictable AC switching and AC-to-DC conversion circuitry, but where’s all the digital and RF stuff that controls the AC switching, along with wirelessly communicating with the outside world? For the answer, I’ll direct your attention to the mini-PCB at the top, which you may recall initially glimpsing earlier:
What you’re looking at on the other side is the WCBN4520R, a Wi-Fi-plus-Bluetooth Low Energy module discussed in-depth in an informative Home Assistant forum thread I found.
Forum participants had identified the PCB containing the module as the WN4520L from LITE-ON Technology, with Realtek’s RTL8821CSH single-chip wireless controller and Rockchip Electronics’ RKNanoD dual Arm Cortex-M3 microcontroller supposedly inside the module. But a different teardown I found right before finalizing this piece instead shows MediaTek’s MT7697N:
A highly integrated single chip offering an application processor, low power 1T1R 802.11 b/g/n Wi‑Fi, Bluetooth subsystem and power management unit. The application processor subsystem contains an ARM Cortex‑M4 with floating point unit. It also supports a range of interfaces including UART, I2C, SPI, I2S, PWM, IrDA, and auxiliary ADC. Plus, it includes embedded SRAM/ROM.
as the main IC inside the module, accompanied by a Macronix 25L3233F (PDF) 32 Mbit serial flash memory. I’m going with the latter chip inventory take. Regardless, to the left of the module is a visible silhouette of the PCB-embedded antenna, and there’s also an SMA connector on the board for tethering to an optional external antenna, not used in this particular design.
And there you have it! As always, sound off with your thoughts in the comments, please!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Teardown: Smart plug adds energy consumption monitoring
- This smart plug automatically resets your router when your Internet goes out
- Limping Into the 21st Century with Smart Technology
- Teardown: A Wi-Fi smart plug for home automation
The post Amazon’s Smart Plug: Getting inside requires more than just a tug appeared first on EDN.