Real-time AI fuels faster, smarter defect detection

TDK SensEI’s edgeRX Vision system, powered by advanced AI, accurately detects defects in components as small as 1.0×0.5 mm in real time. Operating at speeds up to 2000 parts per minute, it reduces false positives and enhances efficiency in high-throughput manufacturing.
AI-driven vision systems now offer real-time processing, improved label efficiency, and multi-modal interaction through integration with language models. With transformer-based models like DINOv2 and SAM enabling versatile vision tasks without retraining, edge-based solutions are more scalable and cost-effective than ever—making this a timely entry point for edgeRX Vision in high-volume manufacturing.
edgeRX Vision integrates with the company’s edgeRX sensors and industrial machine health monitoring platform. By enhancing existing hardware infrastructure, it helps minimize unnecessary machine stoppages. Together, they offer manufacturers a smart, integrated approach to demanding production challenges.
Request a demonstration of the edgeRX Vision defect detection system via the product page link below.
The post Real-time AI fuels faster, smarter defect detection appeared first on EDN.
Open-source plugin streamlines edge AI deployment

Analog Devices and Antmicro have released AutoML for Embedded, a tool that simplifies AI deployment on edge devices. Part of Antmicro’s hardware-agnostic, open-source Kenning framework, it automates model selection and optimization for resource-constrained systems. The tool helps users deploy models more easily without deep expertise in AI or embedded development.
AutoML for Embedded is a Visual Studio Code plugin designed to integrate seamlessly into existing development workflows. It works with CodeFusion Studio and supports direct deployment to ADI’s MAX78002 AI accelerator MCU and MAX32690 ultra-low power MCU. The tool also enables rapid prototyping and testing through Renode-based simulation and Zephyr RTOS workflows. Its support for general-purpose, open-source tools allows flexible model optimization without locking developers into a specific platform.
With step-by-step tutorials, reproducible pipelines, and example datasets, users can move from raw data to edge AI deployment quickly without needing data science expertise. AutoML for Embedded is available now on the Visual Studio Code Marketplace and GitHub. Additional resources are available on the ADI developer portal.
AutoML for Embedded product page
The post Open-source plugin streamlines edge AI deployment appeared first on EDN.
Foundry PDK drives reliable automotive chip design

SK keyfoundry, in collaboration with Siemens EDA Korea, has introduced a 130-nm automotive process design kit (PDK) compatible with Calibre PERC software. The PDK supports both schematic and layout verification, including interconnect reliability checks. With this PDK, fabless companies in Korea and abroad can optimize automotive power semiconductor designs while performing detailed reliability verification.
According to Siemens, while the 130-nm process has been a reliable choice for analog and power semiconductor designs, growing design complexity has made it harder to meet performance targets. The new PDK from SK keyfoundry enables designers to use Siemens’ Calibre PERC with the foundry’s process technology, supporting layout-level verification that accounts for manufacturing constraints.
SK keyfoundry aims to deepen collaboration with Siemens through optimized design solutions, enhanced manufacturing reliability, and a stronger foundry market position.
To learn more about Siemens’ Calibre PERC reliability verification software, click here.
Siemens Digital Industries Software
The post Foundry PDK drives reliable automotive chip design appeared first on EDN.
SiC diodes maintain stable, efficient switching

Nexperia’s 1200-V, 20-A SiC Schottky diodes contribute to high-efficiency power conversion in AI server infrastructure and solar inverters. The PSC20120J comes in a D2PAK Real-2-Pin (TO-263-2) surface-mount package, while the PSC20120L uses a TO-247 Real-2-Pin (TO-247-2) through-hole package. Both thermally stable plastic packages ensure reliable operation up to +175°C.
These Schottky diodes offer temperature-independent capacitive switching and virtually zero reverse recovery, resulting in a low figure of merit (QC×VF). Their switching performance remains consistent across varying current levels and switching speeds.
Built on a merged PiN Schottky (MPS) structure, the diodes also provide strong surge current handling, as shown by their high peak forward current (IFSM). This robustness reduces the need for external protection circuitry, helping engineers simplify designs, improve efficiency, and shrink system size in high-voltage, harsh-environment applications.
Use the product page links below to view datasheets and check availability for the PSC20120J and PSC20120L SiC Schottky diodes.
The post SiC diodes maintain stable, efficient switching appeared first on EDN.
Electronic water softener design ideas to transform hard water

If you are tired of scale buildup, scratchy laundry, or cloudy glassware, it’s probably time to take hard water into your own hands, literally. This blog delves into inventive, affordable, and unexpectedly easy design concepts for building your own electronic water softener.
Whether you are an engineer armed with blueprints or a hands-on do-it-yourself enthusiast ready to roll up your sleeves, the pointers shared here will help you transform a persistent plumbing issue into a smooth-flowing success.
So, what’s an electronic water softener (descaler)? It’s a simple oscillator circuit tailored to create a magnetic field around a water pipe to reduce the chances of smaller deposits sticking to the inside of the pipes.
The concept of water conditioning is not new; it dates back to the 1930s. Hard water has a high concentration of dissolved minerals, the most abundant of which is calcium. This mineral content is what gives hard water its name, and it reduces the effectiveness of soaps and detergents. Over time, these tiny deposits can stick to the inside of pipes, clog filters, faucets, and shower heads, and leave residue on kettles.
The idea behind the electronic/electromagnetic water softener is that a magnetic field around the water pipe causes calcium particles to clump together. Such a system consists of two coils wound around the water pipe with a gap between them.
The circuit driving them is often a high frequency oscillator that generates pulses of 15 kHz or so. As a result, large particles are formed, which pass through the water pipe and do not cling to the inside.
Thus, the electronic water softener operates by wrapping coils of wire around the incoming water main to pass a magnetic field through the water. This encourages the calcium to stay in solution, keeping it from clinging to taps and kettles. The claim is that the electromagnetic flux also makes the water physically “softer” by breaking up the mineral clusters.
Below is a visual summary of the process.
Figure 1 The original image was sourced from Google Images and has been retouched by the author for visual clarity.
Most electronic descalers operate with two coils to increase the time for which the water is exposed to the electromagnetic waveform, but a few use only one coil.
Figure 2 Here is how electronic descalers operate with two coils or one coil. Source: Author
A quick inspection of the most common water softener circuits found on the web shows that the drive frequency is about 2 to 20 kHz at an amplitude of 5 to 15 V. The coils wound outside the pipe are simply 20- to 30-turn inductors made of 18 to 24 SWG insulated copper wire.
It has also been noted that neither the material of the water pipe (PVC or metal) nor its diameter has a significant effect on the descaler’s efficiency.
When I stumbled upon a blogpost from 2013, it felt like the perfect moment to explore the idea more deeply. This marks the beginning of a hands-on learning journey—less of a formal project and more of a series of small, practical experiments and functional blueprints.
The focus is not on making a polished product, but on picking up new skills and exploring where the process leads. So, after learning from several sources about how electronic water softeners work, I decided to give it a try.
The first step in my process involved developing a universal (and exploratory) driver circuit for the pipe coil(s). The outcome is shown below.
Figure 3 The schematic shows a driver circuit for the pipe coil. Source: Author
Below is the list of parts.
- C1 and C2: 470 µF/25 V
- C3: 1,000 µF/25 V
- D1: 1N4007
- L1: 470 µH/1 A
- IC1: MC34151
Note that the single-layer coil L2 on the 20-mm diameter PVC water pipe is made of around 60 turns of 18 AWG insulated wire. Measured with an LCR meter, this coil has an inductance of about 20 µH. The 470-µH drum-core inductor L1 (an empirically selected part) throttles the peak current through the pipe coil L2.
A single-channel MOSFET gate driver is adequate for IC1 in this setup; however, I opted for the MC34151 gate driver during prototyping as it was readily on hand. Next comes a slightly different blueprint for the pipe coil driver.
Figure 4 Arduino Uno was used to drive the pulse input of the pipe coil driver circuitry. Source: Author
To drive the pulse input of the pipe coil driver circuitry, an Arduino Uno was used (just for convenience) to generate a sweeping frequency between 500 Hz and 5 kHz (the adapted code is available upon request; a minimal sketch of the idea follows). Although this range was chosen empirically rather than from a specific technical justification, it has shown improved results in some targeted zones.
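For illustration only, here is a minimal Arduino sketch (not the author’s adapted code) showing one way to sweep a square-wave drive across roughly that band. The output pin, step size, and dwell time are arbitrary assumptions, not values from the article.

```cpp
// Minimal Arduino sketch (illustrative, not the author's code): sweep a square
// wave from 500 Hz to 5 kHz on a hypothetical pin feeding the coil-driver input.
const int PULSE_PIN = 9;          // assumed pulse-output pin

void setup() {
  pinMode(PULSE_PIN, OUTPUT);
}

void loop() {
  // Step the drive frequency from 500 Hz to 5 kHz in 100-Hz increments,
  // dwelling about 50 ms at each step, then start the sweep over.
  for (long freq = 500; freq <= 5000; freq += 100) {
    tone(PULSE_PIN, freq);        // 50% duty square wave at the requested frequency
    delay(50);
  }
  noTone(PULSE_PIN);
}
```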
At this stage, opting for a microcontroller-based oscillator or pulse generator is advisable to ensure scalability and facilitate future enhancements. That said, a solution using discrete components continues to be a valid choice (an adaptable textbook pointer is provided below).
Figure 5 An adaptable textbook pointer highlights the above solution. Source: Author
Nevertheless, the setup should be capable of delivering a pulsed current that generates time-varying magnetic fields within the water pipe, thereby inducing an internal electric field. For the best induction efficiency, a square-wave pulsed current is generally recommended.
The experiment is still ongoing, and I am drawing a tentative conclusion at this stage. But for now, it’s your chance to dive in, experiment, and truly make it your own.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
The post Electronic water softener design ideas to transform hard water appeared first on EDN.
What makes today’s design debugging so complex

Why does the task of circuit debugging keep getting more complex year by year? It’s no longer a matter of looking at the schematic and tracing the signal path from input to output. Here is a sneak peek at the factors driving the steady increase in the challenges of debugging electronic circuits, and at how the intermingled software/hardware approach has made prototyping electronic designs so complex.
Read the full blog on EDN’s sister publication, Planet Analog.
Related content
- Poker has lessons for circuit design, debug
- The double-fault is the debugging challenge
- Debugging: Skill, persistence, luck, and discipline
- Debugging Tactics: 4 Ways to Increase Integration
The post What makes today’s design debugging so complex appeared first on EDN.
Headlights In Massachusetts

From January 5, 2024, please see: “The dangers of light glare from high-brightness LEDs.”
I have just become aware that at least one state has wisely chosen to address the safety issue of automotive headlight glare. As to the remaining forty-nine states, I have not yet seen any indication(s) of similar statutes. Now please see the following screenshots and links:
One question at hand of course is how well the Massachusetts statute will be enforced. What may be on the books is one thing but what will happen on the road remains to be seen.
I am hopeful.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- The headlights and turn signal design blunder
- The dangers of light glare from high-brightness LEDs
- Headlights and turn signals, part two
- LED headlights: Thank goodness for the bright(nes)s
The post Headlights In Massachusetts appeared first on EDN.
Another weird 555 ADC

Integrating ADCs that provide accurate results without requiring a precision integrator capacitor have been around for a long time. A venerable example is that multimeter favorite, the dual-slope ADC. That classic topology uses just one integrator to alternately accumulate both the incoming signal and complementary voltage references with the same RC time constant. It thus automatically ratios out time-constant tolerance. Slick.
This Design Idea (DI) will describe a (possibly) new integrating converter that reaches a similar goal of accurate conversions without needing an accurate capacitor. But it gets there via a significantly different route. Along the route, it picks up some advantageous wrinkles.
Wow the engineering world with your unique design: Design Ideas Submission Guide
As Figure 1 shows, the design starts off with an old friend, the 555 analog timer.
Figure 1 Op-amp A1 continuously integrates the incoming Vin signal, thus minimizing noise. Conversion occurs in alternating phases, T- and T+. The T-/T+ phase duration ratio is independent of the RC time constant, is therefore insensitive to C1 tolerance, and contains both Vin magnitude and polarity information.
Incoming signal Vin is summed with the voltage at node X and accumulated by differential integrator A1. A conversion cycle begins when A1’s output (node Y) reaches 4.096 V and lifts timer U1’s threshold pin (Thr) through the R2/R3 divider to the 2.048-V reference supplied by voltage reference Z1. This switches on U1’s Dch pin, grounding A1’s noninverting input through the R4/R5 divider, outputs a zero to the GPIO bit (node Z), and begins the T- phase as A1’s output ramps down. The duration of this T- phase is given by:
T- = R1C1/(1 + Vin/Vfullscale)
Vfullscale = ±2.048 V × (R1/R6) = ±0.683 V
The T- phase ends when A1’s output reaches U1’s trigger (Trg) voltage set to 1.024 V by Z1 and U1’s internal 2:1 divider. See the LMC555 datasheet for the gritty details.
This starts the T+ conversion phase with an output of one on the GPIO bit and the release of Dch by U1, which drives A1’s noninverting input to 1.024 V, set by Z1 and the R4/R5 divider. The T+ positive-going ramp continues until A1’s output reaches the 4.096-V Thr threshold described above and initiates the next conversion cycle.
T+ phase duration is:
T+ = R1C1/(1 – Vin/Vfullscale)
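As a quick numeric illustration of these formulas (using the Figure 3 value of C1 = 0.001 µF and R1 = 1 MΩ, the value implied by the scaling formula given later, so R1C1 = 1 ms): a grounded input gives T- = T+ = 1 ms, while a Vin of half the +0.683-V full scale gives T- ≈ 0.67 ms and T+ = 2 ms. The absolute durations scale directly with C1, but, as the article goes on to show, their ratio does not.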
This frenetic frenzy of activity is summarized in Figure 2.
Figure 2 Various conversion signals found at circuit nodes X, Y, and Z.
Meanwhile, the GPIO pin is assumed to be connected to a suitable microcontroller counter/timer peripheral that accumulates the T- and T+ durations; a timing resolution somewhere between 1 µs and 100 ns should work, depending on the chosen resolution and conversion rate, for the subsequent Vin calculation. This brings up that claim of immunity to integrator capacitor tolerance you might be wondering about.
The durations of the T+ and T- ramps are proportional to C1, as shown in Figure 3.
Figure 3 Black = Vin, Red = T+ duration in ms, Blue = T- duration, C1 = 0.001 µF.
However, software arithmetic saves the day (and maybe even my reputation!) because recovery of Vin from the raw phase duration timeouts involves a bit of divide-and-conquer.
Vin = Vfullscale ((1 – (T-/T+))/(1 + (T-/T+)))
And, of course, when T- is divided by T+, the R1C1 terms conveniently disappear, taking sensitivity to C1 tolerance away with them!
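To make that divide-and-conquer concrete, here is a minimal C++ sketch (not the author’s firmware) of the host-side arithmetic, assuming tMinusCounts and tPlusCounts are the raw timer counts captured for the two phases; the function and variable names are illustrative.

```cpp
// Illustrative host-side arithmetic: recover Vin from the T- and T+ phase counts.
// Because only the ratio T-/T+ is used, the R1*C1 factor (and C1 tolerance) cancels.
#include <cstdint>
#include <cstdio>

double vinFromPhases(uint32_t tMinusCounts, uint32_t tPlusCounts, double vFullScale)
{
    const double r = static_cast<double>(tMinusCounts) / static_cast<double>(tPlusCounts);
    return vFullScale * (1.0 - r) / (1.0 + r);
}

int main()
{
    // Equal phase durations correspond to Vin = 0 V.
    std::printf("Vin = %.4f V\n", vinFromPhases(50000, 50000, 0.683));
    // T-/T+ = 0.5 corresponds to Vin = Vfullscale/3, about 0.2277 V here.
    std::printf("Vin = %.4f V\n", vinFromPhases(25000, 50000, 0.683));
    return 0;
}
```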
A final word about Vfullscale. The ±0.683 V figure derived above is a minimum value, but any larger span can be easily accommodated by adding one resistor (R8) and changing another (R1). Here’s the scale-changing arithmetic:
R1 = 1 MΩ × Vfullscale/0.683 V
R8 = 1/(1/1 MΩ – 1/R1)
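As a quick worked check of this arithmetic for the ±10-V case shown next:
R1 = 1 MΩ × 10/0.683 ≈ 14.6 MΩ (a 15-MΩ precision value in practice)
R8 = 1/(1/1 MΩ – 1/14.6 MΩ) ≈ 1.07 MΩ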
For example, ±10 V is illustrated in Figure 4.
Figure 4 A ±10-V Vin span is easily accommodated – if you can find a 15 MΩ precision resistor.
Note that R1 would probably need to be a series string to get to 15 MΩ using OTS resistors.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- 15-bit voltage-to-time ADC for “Proper Function” anemometer linearization
- Inductor-based astable 555 timer circuit
- Gated 555 astable hits the ground running
- EDN Access–05.08.97 Digital position encoder does away with ADC
- Adding one resistor improves anemometer analog linearity to better than +/-0.5%
- Voltage-to-period converter improves speed, cost, and linearity of A-D conversion
The post Another weird 555 ADC appeared first on EDN.
Circuits to help verify matched resistors

Analog designers often need matched resistors for their circuits [1]. The best solution is to buy integrated resistor networks [2], but what can you do if the parts vendors do not offer the desired values or matching grade?
Wow the engineering world with your unique design: Design Ideas Submission Guide
The circuit in Figure 1 can help. It is made of two voltage dividers (a Wheatstone bridge) followed by an instrumentation amplifier, IA, with a gain of 160. R3 is the reference resistor, and R4 is its match. The circuit subtracts the voltages coming out of the two dividers and amplifies the difference.
Figure 1 The intuitive solution is a circuit made of a Wheatstone bridge and an instrumentation amplifier.
Calculations show that the circuit provides a perfectly linear response between output voltage and resistor mismatch (see Figure 2). The slope of the line is 1 V per 1% of resistor mismatch; for example, a Vout of -1 V means -1% deviation between R3 and R4.
Figure 2 Circuit response is perfectly linear with a 1:1 ratio between output voltage and resistor mismatch.
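Where does the 1-V-per-1% slope come from? For a small fractional mismatch d = (R4 – R3)/R3, a Wheatstone bridge produces a differential output of roughly Vexc × d/4, where Vexc is the bridge excitation voltage (not stated in the article). A 1% mismatch therefore yields 0.0025 × Vexc, and the IA gain of 160 scales this to 0.4 × Vexc, so an excitation of about 2.5 V would account for the observed slope; this is consistent with the 2.5-V reference implied by the 100-Ω/25-mA example given later for the op-amp circuit. Treat this as a back-of-the-envelope check rather than the article’s own derivation.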
A possible drawback is the price: instrumentation amplifiers that run from supplies of ±5 V or more start at about $6.20. Figure 3 shows another circuit using a dual op-amp, which costs roughly 2.6 times less than the cheapest suitable instrumentation amplifier.
Figure 3 This circuit also provides a perfect 1:1 response, but at a lower cost.
The transfer function is given by Equation 1. Assuming the condition in Equation 2 holds, the transfer function takes the form of Equation 3. If the term within the brackets equals unity and R5 equals R6, the transfer function simplifies further: the output voltage equals the percentage deviation of R4 with respect to R3. This voltage can be positive, negative, or, in the case of a perfect match between R3 and R4, zero.
The circuit was tested with R3 = 10.001 kΩ and R4 = 10 kΩ ±1%. As Figure 4 shows, the transfer function is perfectly linear (the R² coefficient equals unity) and provides a one-to-one relation between output voltage and resistor mismatch. The slope of the line was adjusted to unity using potentiometer R2 and the two end values of R4. A minor offset is present due to the imperfect match between R5 and R6 and the offset voltage VIO of the op-amps.
Figure 4 The transfer function provides a convenient one-to-one reading.
A funny detail is that the circuit can be used to find a pair of matched resistors, R5 and R6, for itself. As mentioned before, it is better to buy a network of matched resistors. It may look expensive, but it is worth the money.
Equation 3 shows that circuit sensitivity can be increased by increasing R7 and/or VREF. For example, if R7 goes up to 402 kΩ, the slope of the response line will increase to 10 V per 1% of resistor mismatch. A mismatch of 0.01% will generate an output voltage of 100 mV, which can be measured with high confidence.
Watch the current capacity of VREF and op-amps when you deal with small resistors. A reference resistor of 100 Ω, for example, will draw 25 mA from VREF into the output of the first op-amp. Another 2.5 mA will flow through R5.
Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.
Related Content
- Design Notes: Matched Resistor Networks for Precision Amplifier Applications
- Peculiar precision full-wave rectifier needs no matched resistors
- The Effects of Resistor Matching on Common Mode Rejection
- Getting an audio signal with a THD < 0.0002% made easy
- RMS stands for: Remember, RMS measurements are slippery
References
- Bill Schweber. The why and how of matched resistors (a two-part series). https://www.powerelectronictips.com/the-why-and-how-of-matched-resistors-part-1/.
- Art Kay. Should you use discrete resistors or a resistor network? https://www.planetanalog.com/should-you-use-discrete-resistors-or-a-resistor-network/.
The post Circuits to help verify matched resistors appeared first on EDN.
Real-time motor control for robotics with neuromorphic chips

Robotic controls started with simplistic direct-current motors. Engineers could achieve only limited mobility because they had few feedback mechanisms. Now, neuromorphic chips are entering the field, mimicking the way the human brain functions. Their relevance to future robotic endeavors is unprecedented, especially as electronic design engineers move through and beyond Industry 4.0.
Here is how to explore real-time controllers and create better robots.
Robotics is a resource-intensive field, especially when it depends on antiquated hardware. As corporations aim for greater sustainability, neuromorphic technologies promise better energy efficiency. Studies are demonstrating the value of adjusting mapping algorithms to lower electrical demand.
Implementing these chips at scale could yield substantial power cuts, saving operations countless dollars in waste heat and energy. Some implementations are so efficient, thanks to their lightweight designs, that they reportedly cut energy use by 99% while requiring only 180 kilobytes of memory.
The real-time capabilities are also vital. The chips react to event-specific triggers; that’s crucial because facilities managing high demand with complex processes require responsive motor controls. Every interaction is a chance for the chip to learn and adapt to the next situation. This includes recognizing patterns, experiencing sensory stimuli, and altering range of motion.
How neuromorphic chips enable real-time motor control
Neuromorphic models change operations by encouraging greater trust from human operators. Because of their event-driven processing, they move from task to task with lower latency than conventional microcontrollers. Engineers could also potentially communicate with the technology through brain-computer interfaces to monitor activity or refine algorithms.
Parallelism is also an inherent aspect of these neural networks, allowing robots to interpret several information streams simultaneously. In production or testing settings, this ability to understand spatial and sensory cues makes neuromorphic chips attractive because their decision-making is more likely to produce humanlike outcomes.
Case studies of the SpiNNaker neural hardware demonstrated how a multicore neuromorphic platform can delegate tasks to different units such as synaptic processing. It validated how well these models achieve load balancing to optimize computational power and output.
Chips with robust parallelism are less likely to produce faulty results because the computations are delegated to separate parts, collating into a more reasonable action. Compared to traditional robotics, this also lowers the risk of system failure because the spiking neurons will not overload the equipment.
Design considerations for engineers
Neuromorphic chips are advantageous, but interoperability concerns may arise with existing motor drivers and sensors. Engineers can also encounter problems as they program the models and toolchains. They may not conventionally operate with spiking neural networks, commonly found in machinery replicating neuron activity. The chips could render some software or coding obsolete or fail to communicate signals effectively.
Experts will need to tinker with signal timing to ensure information processes promptly in response to specific events. They will also need to use tools and data to predict trends to stay ahead of the competition. Companies will be exploring the scalability of neuromorphic equipment and new applications rapidly, so determining various industries’ needs can inform an organization about the features to prioritize.
Some early applications that could expand include:
- Swarm robotics
- Autonomous vehicles
- Cobots
- Brain-computer interfaces
Engineers should feel inspired and encouraged to continue developing real-time motor controls with neuromorphic solutions. Doing so will produce self-driven, capable machinery that can change everything from construction sites to production lines. The potential applications are nearly endless, given how versatile a robot with humanlike processing can become.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- Neuromorphic Chips Mimic the Human Brain
- Can Analog Chips Pave the Way for Sustainable AI?
- Neuromorphic computing gives AI a real-time boost
- MCUs specialize in motor control and power conversion systems
- Field-oriented-control algorithm enhances motor control in EV designs
The post Real-time motor control for robotics with neuromorphic chips appeared first on EDN.
Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch

As the latest entry in my “electronics devices that died in the latest summer-2024 lightning storm” series, I present to you v3.22 (the company’s currently up to v8) of TP-Link’s TL-SG1005D five-port GbE switch, the diminutive alternative to the two eight-port switches I tore down last month. Here’s a box shot to start, taken from a cool hacking project on it that I came across (and will shortly further discuss) during my online research:
WikiDevi says that the TL-SG1005D v3.22 dates from 2009 (here’s the list of all TP-Link TL-SG series variants there), which sounds about right; my email archive indicates that I bought it from Newegg on December 14, 2010, on sale for $16.99 (along with two $19.99 Xbox Live 1600 point cards, then minus a $10 promo code, a discount which you can allocate among the three items as you wish). Nearly 15 years later, I feel comfortable in saying I got my money’s worth out of it!
Here’s what mine looks like, from various perspectives and as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the switch has approximate dimensions, per my tape measure, of 6.5”x4.25”x1.125”):
I didn’t need to bother taping over which specific port had gone bad this time, because the switch was completely dead!
along with a close-up of the underside label:
Speaking of the “Power Supply” notated on the label, here it is:
In contrast, before continuing, here’s what the latest-gen TP-Link TL-SG1005D v8 looks like:
Usually, from my experience, redesigns like this are prompted by IC-supplier phaseouts that compel PCB redesigns. Clearly, in this case, TP-Link has tinkered with the case cosmetics, too!
Before diving in, I confirmed that a dead wall wart wasn’t the root cause of the device’s demise (it’s happened before). Nope, still seems to be functional:
Granted, while its measured output voltage is as expected, its output current may be degraded (that’s also happened before). But I’m sticking with my theory that the switch itself is expired.
Time to get inside. Unlike other devices like this that I’ve dissected in the past, the screws aren’t under the four rubber “feet” shown in the earlier underside photo. Instead, you’ll find them within the holes that are in proximity to the upper two “feet”:
We have liftoff (snapping a couple of plastic retaining clips in the process, but this device is destined only for the landfill, so no huge loss):
Mission (so far, at least) accomplished:
And at this point, the PCB simply lifts away from the top-half remainder of the plastic shell:
No light guides in this design; the LEDs shine directly on the enclosure’s front panel:
Here’s a PCB backside closeup of the cluster of passives, presumably location-associated with a processor on the other side of the circuit board:
And turning the PCB around:
I’m guessing I’m right, and it’s hiding underneath that honkin’ big passive heatsink.
Let’s start with close-ups of the two labels stuck to this side of the PCB:
And here’s what I assume (due to plug proximity, if nothing else) is the power subsystem:
So, what caused this switch to irrevocably glitch? The brown blobs on the corners of both choke coils were the first thing that caught my eye:
but upon further reflection, I think they’re just adhesive, intended to hold the coils in place.
Next up for demise-source candidacy was the scorch mark atop the 25 MHz crystal oscillator:
Again, though, I bet this happened during initial assembly, not in reaction to the lightning EMP.
Nothing else obvious caught my eye. Last, but not least, then, was to pry off that heatsink:
It was glued stubbornly in place, but the combination of a hair dryer, a slotted screwdriver and some elbow grease (accompanied by colorful commentary) ultimately popped it off:
revealing the IC underneath, with plenty of marking-obscuring glue still stuck to the top of it:
You’re going to have to take my word (not to mention my belated realization that the info was also on WikiDevi, which concurred with my magnifying glass-augmented squinting) that it’s a Realtek RTL8366SB (here’s a datasheet). Note the long scorch mark on the right edge, toward the bottom. While it might result from extended exposure to my hair dryer’s heat, I’m instead betting that it’s smoking-gun (or is that smoking-glue?) evidence of the switch’s point of failure.
I’ll conclude the teardown analysis with a few PCB side views:
leaving me only a few related bits of editorial cleanup to tackle before I wrap up. First off, what’s with the “sibling-comparing” bit in this writeup’s title? While doing preparatory research, I came across a Reddit discussion thread that compared the TL-SG1005D to a notably less expensive TP-Link five-port GbE switch alternative, the TL-LS1005G. More generally, TP-Link’s five-port switch series for “home networking” currently encompasses five products, all supporting Gigabit Ethernet speeds. What’s the difference between them?
Two variations are obvious; four of the five ports in the TL-SG105MPE also support power-over-Ethernet (PoE), and both it and the TL-SG605 have metal cases, versus the plastic enclosures of the other three devices (reminiscent of last month’s metal-vs-plastic product differentiation).
But what about those other three? TP-Link’s website comparison facility fortunately came through…sorta. The low-end “LS” variant is, surprisingly, the only one that publicly documents its performance specs:
- Switching Capacity: 10 Gbps
- Packet Forwarding Rate: 7.4 Mpps
- MAC Address Table: 2K
- Packet Buffer Memory: 1.5 Mb
- Jumbo Frame: 16KB
This data is missing for the others, although I trust that they also support jumbo frame sizes of some sort, for example (the v3.22 TL-SG1005D jumbo frame size is apparently 4KB, by the way). That said, the LS1005G has nearly twice the power consumption of the TL-SF1005D; 3.7 W vs 1.9 W. And what about the latest v8 version of the TL-SG1005D? Its power draw—2.4 W—is in-between the other two. But it’s the only one of the three that supports (in a documented fashion, at least) 802.1p and DSCP QoS.
The ”support” is a bit deceptive, though. Like its siblings, it’s an unmanaged switch, versus a higher-end “smart” switch, so you can’t actually configure any of its port-and-protocol prioritization settings. But it will honor and pass along any QoS packet parameters that are already in place. And now, returning to my other bit of cleanup, per the aforementioned hacking project, it can actually transform into a “smart” switch in its own right:
On a hunch, I decided to crack open the switch and look at the internals. Hmm, seemed there was a RTL8366SB GBit switch IC in there. I managed to download the datasheet of the RTL8366, and whaddayaknow, it actually contains all the logic a managed switch has too! Vlan, port mirroring, you name it, and chances are the little critter can do it. It didn’t have a user-interface though; you have to send the config to it over I2C, as cryptic hexadecimal register settings…but that’s nothing an AVR can’t fix.
How friggin’ cool is that?
There’s one more bit of cleanup left, actually. If you’ve already read either last month’s teardown or my initial post in this particular series, you might have noticed that I mentioned the demise of two five-port GbE switches. Where’s the other one? Well, when I re-plugged it (a TRENDnet TEG-S50g v4.0R, whose $17.99 acquisition dated back to August 2014) in the other day prior to taking it apart, it fired right up. I reconnected it to the LAN and it’s working fine.
I guess not all glitches are irrevocable, eh? That’s all I’ve got for today. Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Lightning strikes…thrice???!!!
- A teardown tale of two not-so-different switches
- Devices fall victim to lightning strike, again
- Lightning strike becomes EMP weapon
The post Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch appeared first on EDN.
The design anatomy of a photodetector

A typical photodetector integrates a photodiode with a transimpedance amplifier (TIA). The photodiode converts light into an electrical current, which the transimpedance amplifier then converts into a voltage. So, while a light-sensing device such as a photodiode (or even an LED) alone produces a current output, a photodetector delivers a voltage output.
Read the full post at EDN’s sister publication, Planet Analog.
Related Content
- Photodiode Design Note
- LEDs Are Photodiodes Too
- The world’s first spin photo detector
- Photodetector Startup Raises €3M to Meet Wearable Demand
- The Evolution of Photonic Integrated Circuits and Silicon Photonics
The post The design anatomy of a photodetector appeared first on EDN.
Why the transceiver “arms race” is turning network engineers toward versatility

The last few years have seen a tremendous amount of change in the mobile data world. Both in the United States and around the globe, data consumption is growing faster than ever before.
The number of internet users continues to rise, from 5.35 billion users in 2024 to an estimated 7.9 billion users in 2029—a 47% increase in just five years, according to Forbes. This has created an explosion in global mobile data traffic set to exceed 403 exabytes per month by 2029, up from an estimated 130 exabytes monthly at the end of 2023, according to the Ericsson Mobility Report. For context, in 2014, that amount was a mere 2.5 exabytes per month (Figure 1).
Figure 1 While some of the skyrocketing demand for data is associated with video conferencing, the vast majority is related to the increased usage of large language models (LLMs) like ChatGPT. Source: Infinite Electronics
A variety of simultaneous technological changes are also helping to drive this rapid increase in data consumption. Video-chat technology that went into wider usage during the pandemic has become a mainstay in office life, while autonomous vehicles and IoT devices continue to grow in variety and prevalence. The biggest sea change, however, has been the rapid integration of generative AI into mainstream culture since the introduction of ChatGPT in late 2022. Combined with state-of-the-art technology like Nvidia’s recently announced AI chips, these new innovations are placing an enormous strain on networks to keep up and maintain efficient data transfer.
Transceivers in high-speed data transfer
In response, internet service providers and data centers are hurriedly seeking solutions that enable the most efficient data transfer possible. Transceivers, which essentially provide the bridge between compute/storage systems and the network infrastructure, serve a critical but sometimes overlooked role in enabling high-speed data transfer over fiber or copper cables.
Driven by the need for increased data-transfer capabilities, the window between transceiver data-rate upgrades continues to shorten. The 2023 introduction of 800G came roughly six years after its predecessor, 400G, and barely two years later, the latest iteration of optical transceivers, 1.6T, could arrive as soon as Q3 of this year. This so-called “arms race” of technology shifts and data growth creates various layers of concern for network engineers, including validating new technology, maintaining quality, ensuring interoperability, speeding up implementation to maximize ROI, and increasing network uptime. Network upgrades to boost speeds and bandwidth are crucial for staying ahead of competitors and driving new customer acquisition.
Data center power crunch
Unlike telecom sites, data centers face massive power demands that are a major consideration when evaluating upgrades. According to Goldman Sachs, the power demand from data centers is expected to grow 160% by 2030 due to the increased electricity needed to power AI usage. These demands are even motivating some operators to build their own electrical substation facilities.
This push for data center upgrades doesn’t just include transceivers and other components, but full rack-level equipment changes as well, and the cost of making these upgrades can be significant. Hyperscalers like Google, Amazon, Microsoft and Facebook are continuously investing in cutting-edge infrastructure to support cloud services, AI, advertising and digital platforms. Despite the high cost, these companies feel compelled to invest in cutting-edge technology to ensure strong user experiences and avoid falling behind competitors. Similarly, enterprise data centers like those run by Equifax or Bloomberg often run their own infrastructure to support specific business operations and invest heavily in technology upgrades.
But in smaller data centers not built by hyperscalers or large enterprises—such as colocation providers, regional service providers, universities, or mid-sized businesses—the cost of transceivers can account for a significant portion of total network hardware spending, sometimes in excess of 50%, according to Cisco. Because these organizations may not upgrade transceivers as frequently, often skipping a generation, each purchasing decision is made with the goal of balancing performance, longevity, and cost.
Additional factors like uptime, reliability, and time to market are also shifting network engineers’ priorities, with a heavy focus on quality products that offer operational flexibility. Some engineers are aligning with vendors that have a strong track record of quality, technical support teams that can be leveraged, and strong financials to ensure that the vendors will be capable of supporting warranties in the future and have parts in inventory to support urgent needs. Network engineers know that lowering the cost of network equipment is crucial for maintaining ROI for their businesses, but they also understand that quality and reliability are vital for business operations by eliminating failures and liabilities due to outages.
Transceiver procurement
These considerations are leading engineers toward the choice of purchasing transceivers from original equipment manufacturers (OEMs) or from third-party vendors. While each option offers its own benefits, as shown in Table 1, there are meaningful differences between the two.
Table 1 The major differences between OEM transceivers and third-party transceivers in key categories. Source: Infinite Electronics
Transceivers from reputable third-party vendors are built to the same MSA (multi-source agreement) standards followed by optics from OEMs, ensuring they have the same electrical and optical capabilities. However, OEM transceivers often carry much higher costs (frequently between 2x and 5x) than equivalent third-party optics. In a data center with thousands of ports, the difference in cost can be significant, reaching hundreds of thousands of dollars.
Transceivers from OEMs come hard-coded to run on one specific platform: Cisco, Ciena, IBM, or any of hundreds of others on the market. It’s common for a fiber-optic network to include multiple installations of different OEM equipment, but additional complexity can be created through the acquisition of a company that used transceivers from an entirely different set of vendors. This often forces organizations to maintain separate inventories of backup transceivers coded to each platform in current use. In addition, using optics from one OEM can tie an organization to it indefinitely, reducing its flexibility for future upgrades.
Vendor-agnostic functionality
Third-party vendors often offer a wider variety of form factors, connector types, and reach options than brand-name vendors. It’s also possible to get custom-programmed optics for multi-vendor environments where compatibility is an issue. Some vendors are able to code or recode transceivers out in the field in minutes, effectively allowing organizations to cover the same range of operations with less inventory.
Whereas OEM optics tend to have long procurement cycles due to internal processes, certifications, or global supply chain issues, third-party suppliers often offer the ability to ship same day or within days, which can be crucial given the time constraints on maintenance windows and rapid expansion plans.
With data demands forecasted to continue escalating to the end of the decade, data providers will have to make a substantial investment to manage the shifts in technology and keep up with customer needs. To maintain network uptime, it will be increasingly critical to partner with vendors that can provide technical support as well as competitive products that maintain high quality and reliable performance.
Third-party transceiver benefits
For hospitals, banking, retail, and other businesses with employees working from home, connectivity will be essential for executing even the simplest daily tasks. Maintaining a business’s reputation and customer loyalty depends on limiting liability, making it critical to maintain a robust network that is built on uptime.
By providing versatility through shorter lead times and broader compatibility, third-party transceiver solutions help ensure that infrastructure upgrades can keep up with the pace of business needs. In a landscape defined by rapid change, having access to reliable, standards-compliant alternatives can offer organizations a crucial strategic advantage.
For organizations navigating the challenges of scaling their networks while managing costs, third-party transceivers offer a practical path forward, helping ensure that networks remain both resilient and future-ready.
Jason Koshy is Infinite Electronics’ global VP of sales and business development, leading its outside sales team and installations. He brings to this position more than 28 years of experience covering all facets of the business. His previous roles include applications engineer, quality and manufacturing engineer, new acquisition evaluations, regional sales manager, director of sales for North America and, most recently, VP of sales for the Americas and ROW. Jason also participated in the integration of Integra, PolyPhaser and Transtector into the Infinite Electronics brand family. He holds a Bachelor of Science in electrical engineering from the University of South Florida.
Related Content
- Data center power meets rising energy demands amid AI boom
- Data center solutions take center stage at APEC 2025
- Data center power in 2019
- Power Tips #140: Designing a data center power architecture with supply and processor rail-monitoring solutions
- Cloud data center server power and optical transceivers: a dynamic duo
- Designing with 10GBase-T transceivers
The post Why the transceiver “arms race” is turning network engineers toward versatility appeared first on EDN.
MCUs power single-motor systems

With features optimized for motor control, Renesas’ RA2T1 MCUs drive fans, power tools, home appliances, and other single-motor systems. The MCUs integrate a 32-bit Arm Cortex-M23 processor running at 64 MHz and a 12-bit ADC with a 3-channel sample-and-hold function that simultaneously captures the 3-phase currents of BLDC motors for precise control.
A PWM timer supports automatic dead-time insertion and asymmetric PWM generation, features tailored for inverter drive and control algorithm implementation. Safety functions include PWM forced shutdown, SRAM parity check, ADC self-diagnosis, clock accuracy measurement, and unauthorized memory access detection.
Renesas’ Flexible Software Package (FSP) for the RA2T1 microcontroller streamlines development with middleware stacks for Azure RTOS and FreeRTOS, peripheral drivers, and connectivity, networking, and security components. It also provides reference software for AI, motor control, and cloud-based applications.
The RA2T1 series of MCUs is available now, along with the FSP software.
The post MCUs power single-motor systems appeared first on EDN.
Cadence debuts LPDDR6 IP for high-bandwidth AI

Cadence taped out an LPDDR6/5X memory IP system running at 14.4 Gbps—up to 50% faster than previous-generation LPDDR DRAM. The complete PHY and controller system optimizes power, performance, and area, while supporting both LPDDR6 and LPDDR5X protocols. Cadence expects the IP to help AI infrastructure meet the memory bandwidth and capacity demands of large language models (LLMs), agentic AI, and other compute-heavy workloads.
The memory system features a scalable, adaptable architecture that draws on Cadence’s DDR5 (12.8 Gbps), LPDDR5X (10.7 Gbps), and GDDR7 (36 Gbps) IP lines. As the first offering in the LPDDR6 IP portfolio, it supports native integration into monolithic SoCs and enables heterogeneous chiplet integration through the Cadence chiplet framework for multi-die system designs.
Customizable for various package and system topologies, the LPDDR6/5X PHY is offered as a drop-in hardened macro. The LPDDR6/5X controller, provided as a soft RTL macro, includes a full set of industry-standard and advanced memory interface features, such as support for the Arm AMBA AXI bus.
The LPDDR6/5X memory IP system is now available for customer engagements.
The post Cadence debuts LPDDR6 IP for high-bandwidth AI appeared first on EDN.
PMIC suits tiny wearables with low standby drain

The nPM1304 PMIC from Nordic offers precise fuel gauging for products with small rechargeable batteries and strict energy budgets. It charges single-cell Li-ion, Li-poly, and LiFePO4 batteries—intended for devices like smart rings and trackers—with charging currents as low as 4 mA and a programmable termination voltage from 3.5 V to 4.65 V.
Estimating battery state of charge, the nPM1304 applies an algorithm-based fuel gauging method that tracks voltage, current, and temperature alongside a mathematical battery model. This approach reportedly achieves accuracy comparable to dedicated fuel gauge ICs, without their added power consumption or error accumulation.
According to Nordic, dedicated fuel gauge ICs can draw up to 50 µA during active operation and 7 µA in sleep mode—significant figures for products averaging just 200 µA. In contrast, the nPM1304 consumes only 8 µA when active and zero in sleep, delivering accurate state-of-charge estimates without noticeably impacting battery life.
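To put those figures in perspective: for a product averaging 200 µA, a gauge drawing 50 µA represents a quarter of the total budget, while the nPM1304’s 8 µA amounts to about 4%.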
In addition to battery charging (4 mA to 100 mA) and fuel gauging, the nPM1304 provides two 200-mA buck regulators and two configurable 100-mA load switches or 50-mA LDOs.
Contact Nordic Sales to join the early sampling program.
The post PMIC suits tiny wearables with low standby drain appeared first on EDN.
Secure token controller targets USB and NFC

Infineon’s ID Key S USB integrates a security controller and USB bridge controller in one package, supporting a range of USB and USB/NFC token applications. Built on a high-assurance security architecture, the device supports authentication, cryptographic functions, access control, and crypto hardware wallets.
The ID Key S USB includes a 32-bit CPU running at 100 MHz and 24 kB of RAM, enabling fast and secure application execution. It has up to 800 kB of non-volatile memory for storing data, cryptographic keys, software, and multiple applications.
Certified to CC EAL 6+ and compliant with FIPS 140-3 hardware requirements, the device satisfies robust security needs and allows customers to pursue FIPS certification. Its compact VQFN28 package eases integration into space-constrained token devices.
The ID Key S USB secure token controller is now available for early access customers.
The post Secure token controller targets USB and NFC appeared first on EDN.
SiC diodes boost isolation and efficiency

Three Gen 3 SiC Schottky diodes from Vishay come in low-profile SlimSMA HV (DO-221AC) packages with a minimum creepage distance of 3.2 mm. The 1200-V/1-A VS-3C01EJ12-M3, 650-V/2-A VS-3C02EJ07-M3, and 1200-V/2-A VS-3C02EJ12-M3 use a merged PIN Schottky structure that supports high-speed operation. They combine low capacitive charge with temperature-stable switching behavior, helping improve efficiency in hard-switching power designs.
In high-voltage applications, the diodes’ extended creepage distance enhances electrical isolation. Their SlimSMA HV package uses a molding compound with a CTI of ≥600 for strong insulation. The package also has a low profile of just 0.95 mm—significantly thinner than the 2.3 mm height of standard SMA and SMB packages with a similar footprint.
All three diodes operate reliably up to +175°C and feature negligible reverse recovery, making them well-suited for bootstrap, anti-parallel, and PFC circuits in DC/DC and AC/DC converters used in server power supplies, energy systems, and industrial drives.
Samples and production quantities of the Gen 3 SiC diodes are available now, with lead times of 14 weeks.
The post SiC diodes boost isolation and efficiency appeared first on EDN.
Design digital input modules with parallel interface using industrial digital inputs

Industrial digital input chips provide serialized data by default. However, in systems that require real time, low latency, or higher speed, it may be preferable to provide level-translated, real-time logic signals for each industrial digital input channel.
So, some industrial digital inputs sample and serialize the state of eight 24-V current sinking inputs under SPI or pin-based (LATCH) timing control, allowing for readout of the eight states via SPI. A serial interface is used to minimize the number of logic signals requiring isolation, which is particularly beneficial in high channel count digital input modules.
Serialization relies on simultaneous sampling of the logic signals, so the signals become time-quantized. This means that real-time information content is lost, which can be a concern in applications where timing differences between switching signals matter, such as incremental encoders or counters.
These applications call either for high-speed sampling with high-speed serial readout or for non-serialized parallel data, as provided by the MAX22195, an industrial digital input with parallel outputs. Using the MAX22190/MAX22199 industrial digital input devices in the parallel fashion described here adds the benefits of diagnostics and configurability.
This article delves into the characteristics, limitations, and design considerations regarding techniques for generating parallel logic outputs with industrial digital inputs.
Design details
The technique is based on repurposing the eight LED outputs to function as logic signals. LEDs serve to provide a visual indication of the digital input’s state—useful for installation, maintenance, and in service. The characteristics and specifications of industrial inputs are clearly defined in the IEC 61131-2 standard, with the output state being binary in nature: either on or off.
The MAX22190/MAX22199 chips feature energyless LED drivers that power the LEDs from the sensor/switch in the field, not drawing current/power from a power supply in the digital input module. These devices limit the input current to a level settable by the REFDI resistor. This is done to achieve the lowest power dissipation in the module.
For the common Type 1/Type 3 digital inputs, the input current is typically set to a level of ~2.3 mA (typ) to be larger than the 2.0 mA minimum required by the IEC standard. The ICs channel most of the ~2.3 mA field input (IN) current to the LED output pins, and only ~160 µA are consumed by the chip.
With the LED drivers being current outputs, not voltage, the current needs to be converted to voltage for interfacing with other logic devices like digital isolators and microcontrollers. Resistors are the simplest trans-resistance element for this purpose, as shown in Figure 1.
Figure 1 LED pins are used as voltage-based logic outputs. Source: Analog Devices Inc.
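As a quick sizing sketch (these numbers are inferred from the figures in the article, not taken from the datasheet): with the field input current set to roughly 2.3 mA, a ground-connected resistor of about R = V/I = 3.3 V / 2.3 mA ≈ 1.4 kΩ yields approximately 3.3-V logic levels, which lines up with the 1.5-kΩ value used in the measurements later in this article; for 5-V logic, a value near 2.2 kΩ would be the corresponding choice.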
Using the LED output pins in this manner is not documented in the product datasheets. This article investigates the characteristics and possible limitations.
LED pin characteristics
When using ground-connected resistors on the LED pins to create voltage outputs, the following needs to be considered:
- What is the maximum voltage allowed on the LED pins?
- Is there interaction/feedback from the LED_ pin to the IN_ pin?
- Specifically, does voltage on the LED pins result in a change of the input current, as minimum current levels are mandated by the IEC standards?
- Do the LED output currents show undesired transient behavior, such as overshoots or slow rise/fall times?
- Are the LED outputs suitable for use as high-speed logic signals when the inputs switch at high rates?
- Are the LED outputs filtered (as programmable by SPI)?
The MAX22190/MAX22199 datasheets’ absolute maximum ratings specify the maximum allowed LED pin voltages as +6 V. This indicates that the LED pins are suitable for use as 5 V (and 3.3 V) logic outputs, with the caveat that the voltage may not be higher than 6 V.
The impact of the LED pin voltage on other critical characteristics needs to be evaluated. Of particular concern is the change of the input current with the presence of high LED pin voltages, as the current is specified by the standards. The critical case is with the field voltage close to the 11 V on-state threshold voltage, as defined for Type 3 digital inputs.
Figure 2 shows the measured field input current dependence on the LED pin voltage for three field input voltages close to the 11-V level: 9 V, 10 V, and 11 V. The 10-V and 9-V levels were chosen as these are within the transition region for Type 3 inputs, and their input currents have no defined minimum, while the minimum for the 11 V input case is 2 mA.
Figure 2 Field input current is dependent on the LED pin voltage. Source: Analog Devices Inc.
With the field voltage at the 11-V threshold, the blue curve shows that the input current starts decreasing when the LED voltage exceeds ~5.8 V; the decrease is only 0.6% at 6 V. For the 9-V and 10-V cases, which lie in the transition region where the currents are not defined, the measurements show that the input current stays above 2 mA for LED pin voltages up to 5.5 V.
In conclusion, this shows that the MAX22190/MAX22199 will produce 5-V LED logic outputs (as well as lower voltage logic like 3.3 V) and still be compatible with Type 3 digital inputs. For Type 1 digital inputs, the case is trivial since the on-threshold is much higher at 15 V, meaning that the LED pins will also provide 5-V logic levels without any impact on the field input current.
Parallel operation example
Figure 3 shows a 10-kHz field input (yellow curve) with the resulting LED output voltage in blue. A 1.5-kΩ resistor was used on the LED output, which provides a 3.3 V logic signal. Glitch filtering was disabled (default bypass mode).
Figure 3 In 10-kHz switching, Channel 1 has field input and Channel 2 has LED output. Source: Analog Devices Inc.
Regarding the transient behavior of the LED output current under these switching conditions, the scope shot illustrates that the LED outputs produce no overshoots or undershoots that could damage logic inputs, nor irregularities such as a varying on-state voltage. The rise and fall times are fast and do not lead to signal distortion.
Using the SPI interface
The MAX22190/MAX22199 devices feature SPI-programmable filters that enable per-channel glitch/noise filtering. Eight filter time constants up to 20 ms are available, as well as a filter bypass for high-speed applications. The selected filtering also applies to the LED outputs, keeping the visual indication consistent with the electrical signals.
Diagnostics are provided via SPI, such as low power-supply voltage alarms, overtemperature warnings, short-circuit detection on the REFDI and REFWB pins, and wire-break detection of the field inputs.
The power-up default state of the register bits is:
- All eight inputs are enabled
- All input filters are bypassed
- Wire-break detection is disabled
- Short-circuit detection of the REFDI and REFWB (only MAX22199) pins is disabled
Hence, the SPI interface does not need to be used in applications that require neither glitch filtering (for example, high-speed signals) nor diagnostics. Where per-channel glitch/noise filtering or diagnostic detection is wanted, SPI can be used.
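If the optional SPI path is used, only a small amount of host-side code is needed. The minimal Python sketch below uses the Linux spidev interface to write a per-channel filter setting; the register address, bit layout, and frame format shown are placeholders for illustration only, not the actual MAX22190/MAX22199 register map, which must be taken from the datasheets.

import spidev

# Placeholder register-map values -- NOT the real MAX22190/MAX22199 addresses or
# bit positions; substitute the definitions from the datasheets.
FLT1_ADDR = 0x06     # hypothetical per-channel filter register for IN1
DELAY_SEL = 0b011    # hypothetical code for one of the eight delay settings
FILTER_ON = 0        # hypothetical bypass bit: 0 = filter enabled, 1 = bypass

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip select 0
spi.max_speed_hz = 1_000_000
spi.mode = 0                   # set CPOL/CPHA per the datasheet

def write_reg(addr, data):
    """Write one 8-bit register; the [address + write bit, data] framing is assumed."""
    spi.xfer2([(addr << 1) | 0x01, data])

# Enable a glitch filter on channel 1 (illustrative only; SPI remains optional for
# parallel operation, since all filters default to bypass at power-up).
write_reg(FLT1_ADDR, (FILTER_ON << 3) | DELAY_SEL)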
Glitch filtering
The MAX22190 and MAX22199 devices provide per-channel selectable glitch filtering. The following examples demonstrate the effect of the glitch filters on the LED outputs using a 200-Hz switching signal with the filter time set to 800 µs. Defined glitch widths were emulated by changing the duty cycle, and both positive and negative glitches were investigated.
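For reference, at 200 Hz the switching period is 5 ms, so a 750-µs positive glitch corresponds to a 15% duty cycle and a 750-µs negative glitch to an 85% duty cycle.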
Figure 4 shows an example of 750-µs positive pulses being filtered out by the 800-µs glitch filter. Positive glitch filtering thus works for both the LED outputs and the SPI data.
Figure 4 A 750-µs positive glitch is removed by the 800-µs filter. Source: Analog Devices Inc.
Negative glitches are, however, not filtered out at the LED outputs, as shown in Figure 5, where a 750-µs falling pulse propagates to the LED output. This differs from using the SPI readout, for which both positive and negative glitches are successfully filtered.
Figure 5 A 750-µs negative glitch propagates to the LED output despite the 800-µs filter. Source: Analog Devices Inc.
Figure 6 shows the LED output signal with an 800-µs glitch filter enabled and the input switching with a 50% duty cycle. The rising edges are delayed by ~770 µs, while the falling edges show no delay. This illustrates that the filters act only on the rising edges of the LED outputs and therefore do not work properly for them.
Figure 6 With the 800-µs filter enabled, rising edges of the LED output are delayed by ~770 µs while falling edges are not. Source: Analog Devices Inc.
High frequency switching
For applications with high switching frequencies, or with low propagation delay or skew requirements, glitch filtering would be disabled. With the glitch filters bypassed and a 100-kHz input, the LED output produces the waveforms shown in Figure 7.
Figure 7 The 100-kHz input switching is shown with filter bypass. Source: Analog Devices Inc.
While the falling edges show a low propagation delay of ~60 ns, the rising edges have significant propagation delay as well as jitter. The rising-edge jitter is in the range of ±0.5 µs, with an average propagation delay of ~1 µs. The rising delay and jitter are due to the ~1-MHz sampling documented in the datasheet: the ~1-µs sample period means an asynchronous rising edge can wait up to a full period before being captured. Sampling does not occur on the falling edges, hence the fast response.
This means the LED outputs exhibit a rising-edge to falling-edge propagation delay skew of up to ~1.5 µs, plus jitter. Channel-to-channel skew is low on the falling edges but much higher on the rising edges, which could limit the use of the LED outputs in some applications.
Design considerations
This section discusses some considerations required when using the LED output pins as voltage outputs.
Ensure that the MAX22190/MAX22199 current-drive LED outputs are voltage-limited so they do not exceed the safe levels of the logic inputs they drive. While the REFDI resistor sets the field input current to a typical level, the actual input current has a tolerance of ±10.6%, as specified in the datasheets; the voltage across the resistor therefore varies by ±10.6% as well.
Logic inputs typically have tightly specified absolute maximum ratings, such as VL + 0.3 V, where VL is the logic supply voltage. When interfacing two logic devices, a common VL supply is often used to ensure matching, since standard push-pull or open-drain logic outputs have a maximum output voltage defined and limited by the logic supply VL.
One option is to make the LED pin’s typical output voltage lower to ensure that the absolute maximum rating of the driven input is not exceeded. Alternatively, one can rely on the fact that the LED pin’s ~2.3-mA output current will not damage a logic input, since logic inputs are commonly specified to tolerate much higher latch-up currents, in the 50-mA to 100-mA range; this must be verified for the device under consideration. The third, less attractive, option is to limit the voltage by clamping.
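A worst-case check along the lines of the sketch below helps confirm that the chosen pull-down resistor keeps the LED pin below the driven input’s absolute maximum rating. Only the ±10.6% current tolerance and the ~2.3-mA/~160-µA figures come from this article; the 1% resistor tolerance and the 3.3-V logic supply are illustrative assumptions.

# Worst-case LED pin voltage check (illustrative assumptions noted above).
I_LED_TYP = 2.14e-3    # A: ~2.3 mA field current minus the ~160 uA the chip consumes
I_TOL     = 0.106      # +/-10.6% field input current tolerance
R_LED     = 1.5e3      # ohms: pull-down value from the Figure 3 example
R_TOL     = 0.01       # assumed 1% resistor tolerance
VL        = 3.3        # V: logic supply of the driven input (assumed)
V_ABS_MAX = VL + 0.3   # V: typical logic-input absolute maximum rating

v_typ = I_LED_TYP * R_LED
v_max = I_LED_TYP * (1.0 + I_TOL) * R_LED * (1.0 + R_TOL)

print(f"Typical LED pin voltage   : {v_typ:.2f} V")
print(f"Worst-case LED pin voltage: {v_max:.2f} V (limit {V_ABS_MAX:.1f} V)")
print("Within rating" if v_max <= V_ABS_MAX else "Reduce R_LED or clamp")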
Standard logic outputs are push-pull and thus low impedance, providing high flexibility in driving logic inputs. In contrast, the LED outputs are open-drain outputs, where the pull-down resistor, together with parasitic capacitance, determines the switching speed.
Without additional capacitors, switching rates of 100 kHz and higher are feasible.
The MAX22190/MAX22199 industrial digital inputs can be used as an octal input with eight parallel outputs, even though they are documented for serialized data operation. For this purpose, the LED drivers, originally intended for visual state indication, are repurposed as voltage-based or current-based logic outputs. In this parallel mode, use of the SPI interface is optional; when used, it still provides all the diagnostics as well as device configurability, with some limitations.
Wei Shi is an applications engineer manager in the Industrial Automation business unit of Analog Devices based in San Jose, California. She joined Maxim Integrated (now part of Analog Devices) in 2012 as an applications engineer.
Reinhardt Wagner was a distinguished engineer with Analog Devices in Munich, Germany. His 21-year tenure primarily involved the product definition of new industrial chips in the areas of communication and input/output devices.
Editor’s Note
This article was written in cooperation with Chin Chia Leong, senior staff engineer for hardware at Rockwell Automation.
Related Content
- Analog outputs for industrial applications
- Isolating analog signals using a digital isolator
- Isolated PLC digital inputs for industrial control
- Audio chip moves machine learning from digital to analog
- Using Smart I/Os to build pin-level digital logic functionality and reduce CPU loading
The post Design digital input modules with parallel interface using industrial digital inputs appeared first on EDN.
Converting pulses to a sawtooth waveform

There are multiple means of generating analog sawtooth waveforms. Here’s a method that employs a single supply voltage rail and is not finicky about passive component values. Figure 1 shows a pair of circuits that use a single 3.3-V supply rail, one producing a ground-referenced sawtooth and the other a supply-voltage-referenced one.
Figure 1 The circuitry to the left of the 3.3-V supply implements a ground-referenced sawtooth labeled “LO”, while that to the right forms a 3.3-V-referenced one labeled “HI”.
Wow the engineering world with your unique design: Design Ideas Submission Guide
For the LO signal, R1 supplies adequate current to operate U1. This IC enforces a constant voltage Vref between its V+ and FB pins. Q1 is a high-beta NPN transistor that passes virtually all of R2’s current (Vref/R2) through its collector, charging C1 with a constant current and producing the linear ramp portion of this ground-referenced sawtooth. (U1’s FB current is typically less than 100 nA over temperature.) M1 is a MOSFET that is activated for 100 ns every T seconds to rapidly discharge C1 to ground. Its “on” resistance is less than 1 Ω, so the 100-ns pulse spans more than 10 RC time constants and fully resets C1.
The sawtooth’s peak amplitude A is Vref × T / (R2 × C1) volts, where Vref for U1 is 1.225 V. For a 3.3-V rail, A should be kept below an Amax of 2.1 V, which requires T to be less than a Tmax of R2 × C1 × 2.1 V / Vref. Since U1 is available with a 0.2% Vref tolerance and R2 with a 0.1% tolerance, the circuit’s overall amplitude tolerance is mostly limited by an at-best 1% C1 combined with the parasitic capacitance of M1.
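As a quick check of these relationships, the short Python sketch below computes the peak amplitude, the maximum period, and the reset behavior. Vref, Amax, the 34-µs period, and the M1 figures come from the article; R2 and C1 are purely illustrative assumptions, since the Figure 1 component values are not listed in the text.

# Quick design check for the LO sawtooth (R2 and C1 are assumed example values).
VREF  = 1.225     # V: U1 reference voltage
A_MAX = 2.1       # V: recommended peak amplitude limit for a 3.3-V rail
T     = 34e-6     # s: switching period used in Figure 2
R2    = 20e3      # ohms: assumed charging resistor
C1    = 1e-9      # F: assumed ramp capacitor
R_ON  = 1.0       # ohms: upper bound on M1 on-resistance
T_RST = 100e-9    # s: M1 gate pulse width

A     = VREF * T / (R2 * C1)      # peak amplitude
T_MAX = R2 * C1 * A_MAX / VREF    # longest period keeping A below A_MAX
N_TAU = T_RST / (R_ON * C1)       # reset pulse width in discharge time constants

print(f"A = {A:.2f} V (limit {A_MAX} V), T_max = {T_MAX*1e6:.1f} us")
print(f"Reset pulse spans {N_TAU:.0f} time constants; >10 fully discharges C1")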
M2, C2, Q2, R3, R4, and U2 work much like the circuit just described, except that they produce an “upside-down”, 3.3-V supply-referenced sawtooth; both waveforms can be seen in Figure 2. With the exception of U2, the tolerance contributions of these components are those previously mentioned for the “right-side-up” design. U2’s reference current is typically less than 250 nA over temperature, but its Vref of 1.24 V has at best a 1% tolerance.
Figure 2 The waveforms shown have peak values which are slightly less than the largest recommended. The period T is 34 µs.
These circuits do not require any precision or matched-value passive components, nor must their values be coordinated with any active component’s parameters or with the switching period T, as long as T is kept below Tmax. The only effect of the non-zero tolerances of the passive components and of certain active parameters is on the peak-to-peak amplitude of the sawtooth waveforms.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Converting square-waves into saw-teeth
- Squashed triangles: sines, but with teeth?
- DAC (PWM) Controlled Triangle/Sawtooth Generator
- Voltage-controlled triangle wave generator
The post Converting pulses to a sawtooth waveform appeared first on EDN.