“Flip ON Flop OFF” for 48-VDC systems with high-side switching

My Design Idea (DI), “Flip ON Flop OFF for 48-VDC systems,” was published earlier and referenced Stephen Woodward’s original “Flip ON Flop OFF” circuit. Other DIs published on this subject were for voltages below 15 V, the supply limit for CMOS ICs, while my DI was intended for higher DC voltages, typically 48 VDC. In that earlier DI, the ground line is switched, which means the input and output grounds are different. This is acceptable in many applications, since the voltage is small and does not require earthing.
However, some readers in the comments section wanted a scheme to switch the high side, keeping the ground the same. To satisfy such a requirement, I modified the circuit as shown in Figure 1, where input and output grounds are kept the same and switching is done on the positive line side.

Figure 1 VCC is around 5 V and should be connected to the VCC pins of ICs U1 and U2. The grounds of U1 and U2 should also be connected to ground (connection not shown in the circuit). Switching is done on the high side, and the ground is the same for the input and output. Note that it is necessary for U1 to have a heat sink.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In this circuit, the voltage divider formed by R5 and R7 sets the voltage at the emitter of Q2 (VCC) to around 5 V. This voltage powers ICs U1 and U2. A precise setting is not important, as these ICs can operate from 3 V to 15 V. R2 and C2 provide the power-ON reset for U1, and R1 and C1 debounce the push button (PB) switch.
When you momentarily push PB once, the Q1 output of the U1 counter (not the Q1 FET) goes HIGH, saturating transistor Q3. Hence, the gate of Q1 (P-channel MOSFET IRF9530N, VDSS = -100 V, ID = -14 A, RDS(on) = 0.2 Ω) is pulled to ground. Q1 then conducts, and its output rises to near 48 VDC.
Due to the 0.2-Ω RDS of Q1, there will be a small voltage drop depending on load current. When you push PB again, transistor Q3 turns OFF and Q1 stops conducting, and the voltage at the output becomes zero. Here, switching is done at the high side, and the ground is kept the same for the input and output sides.
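As a rough illustration, assume a hypothetical 2-A load (the 0.2-Ω figure is the RDS(on) quoted above): the drop is 2 A × 0.2 Ω = 0.4 V, and Q1 dissipates about 2² × 0.2 Ω = 0.8 W, which should be checked against the MOSFET’s thermal limits.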
If galvanic isolation is required (this may not always be the case), you may connect a mechanical ON/OFF switch ahead of the input. In this topology, on-load switching is handled by the PB-operated circuit, and the ON/OFF switch interrupts zero current, so it does not need to be bulky; you can simply select a switch rated for the required load current. When switching ON, first close the ON/OFF switch and then press PB to connect. When switching OFF, first press PB to disconnect and then open the ON/OFF switch.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- Flip ON Flop OFF
- To press ON or hold OFF? This does both for AC voltages
- Flip ON Flop OFF without a Flip/Flop
- Elaborations of yet another Flip-On Flop-Off circuit
- Another simple flip ON flop OFF circuit
- Flip ON Flop OFF for 48-VDC systems
The post “Flip ON Flop OFF” for 48-VDC systems with high-side switching appeared first on EDN.
A logically correct SoC design isn’t an optimized design

The shift from manual design to AI-driven, physically aware automation of network-on-chip (NoC) design can be compared to the evolution of navigation technology. Early GPS systems revolutionized road travel by automating route planning. These systems allowed users to specify a starting point and destination, aiming for the shortest travel time or distance, but they had a limited understanding of real-world conditions such as accidents, construction, or congestion.
The result was often a path that was correct, and minimized time or distance under ideal conditions, but not necessarily the most efficient in the real world. Similarly, early NoC design approaches automated connectivity, yet without awareness of physical floorplans or workloads as inputs for topology generation, they usually fell well short of delivering optimal performance.

Figure 1 The evolution of NoC design has many similarities with GPS navigation technology. Source: Arteris
Modern GPS platforms such as Waze or Google Maps go further by factoring in live traffic data, road closures, and other obstacles to guide travelers along faster, less costly routes. In much the same way, automation in system-on-chip (SoC) interconnects now applies algorithms that minimize wire length, manage pipeline insertion, and optimize switch placement based on a physical awareness of the SoC floorplan. This ensures that designs not only function correctly but are also efficient in terms of power, area, latency, and throughput.
The hidden cost of “logically correct”
As SoC complexity increases, the gap between correctness and optimization has become more pronounced. Designs that pass verification can still hide inefficiencies that consume power, increase area, and slow down performance. Just because a design is logically correct doesn’t mean it is optimized. While there are many tools to validate that a design is logically correct, both at the RTL and physical design stages, what tools are there to check for design optimization?
Traditional NoC implementations depend on experienced NoC design experts to manually determine switch locations and route the connections between the switches and all the IP blocks that the NoC needs to connect. Design verification (DV) tools can verify that these designs meet functional requirements, but subtle inefficiencies will remain undetected.
Wires may take unnecessarily long detours around blocks of IP, redundant switches may persist after design changes, and piecemeal edits often accumulate into suboptimal paths. None of these are logical errors that many of today’s EDA tools can detect. They are inefficiencies that impact area, power, and latency while remaining invisible to standard checks.
Manually designing an NoC is also both tedious and fragmented. A large design may take several days to complete. Expert designers must decide where to place switches, how to connect them, and when to insert pipeline stages to enable timing closure.
While they may succeed in producing a workable solution, the process is vulnerable to oversights. When engineers return to partially completed work, they may not recall every earlier decision, especially for work done by someone else on the team. As changes accumulate, inefficiencies mount.
The challenge compounds when SoC requirements shift. Adding or removing IP blocks is routine, yet in manual flows, such changes often force large-scale rework. Wires and switches tied to outdated connections often linger because edits rarely capture every dependency.
Correcting these issues requires yet more intervention, increasing both cost and time. Automating NoC topology generation eliminates these repetitive and error-prone tasks, ensuring that interconnects are optimized from the start.
Scaling with complexity
The need for automation grows as SoC architectures expand. Connecting 20 IP blocks is already challenging. At 50, the task becomes overwhelming. At 500, it’s practically impossible to optimize without advanced algorithms. Each block introduces new paths, bandwidth requirements, and physical constraints. Attempting this manually is no longer realistic.
Simplified diagrams of interconnects often give the impression of manageable scale. Reality is far more daunting, where a single logical connection may consist of 512, 1024, or even 2048 individual wires. Achieving optimized connectivity across hundreds of blocks requires careful balancing of wire length, congestion, and throughput all at once.
Another area where automation adds value is in regular topology generation. Different regions of a chip may benefit from different structures such as meshes, rings, or trees. Traditionally, designers had to decide these configurations in advance, relying on experience and intuition. This is much like selecting a fixed route on your GPS, without knowing how conditions may change.
Automation changes the approach. By analyzing workload and physical layout, the system can propose or directly implement the topology best suited for each region. Designers can choose to either guide these choices or leave the system to determine the optimal configuration. Over time, this flexibility may make rigid topologies less relevant, as interconnects evolve into hybrids tailored to the unique needs of each design.
In addition to initial optimization, adaptability during the design process is essential. As new requirements emerge, interconnects must be updated without requiring a complete rebuild. Incremental automation preserves earlier work while incorporating new elements efficiently and removing elements that are no longer required. This ability mirrors modern navigation systems, which reroute travelers seamlessly when conditions change rather than forcing them to re-plan the entire journey from scratch.
For SoC teams, the value is clear. Incremental optimization saves time, avoids unnecessary rework, and ensures consistency throughout the design cycle.

Figure 2 FlexGen smart NoC IP unlocks new performance and efficiency advantages. Source: Arteris
Closing the gap with smart interconnects
SoC development has benefited from decades of investment in design automation. Power analysis, functional safety, and workload profiling are well-established. However, until now, the complexity of manually designing and updating NoCs left teams vulnerable to inefficiencies that consumed resources and slowed progress. Interconnect designs were often logically correct, but rarely optimal.
Suboptimal wire length is one of the few classes of design problems that many EDA tools still cannot detect. NoC automation bridges this gap by eliminating excess wire length at the source, delivering wire lengths optimized to meet the throughput constraints of the design specification. By embedding intelligence into the interconnect backbone, design teams achieve solutions that are both correct and efficient, while reducing or even eliminating reliance on scarce engineering expertise.
NoCs have long been essential for connecting IP blocks in modern, complex SoC designs, and they are often the cause of schedule delays and throughput bottlenecks. Smart NoC automation now transforms interconnect design by reducing risk to both the project schedule and the chip’s ultimate performance.
At the forefront of this change is smart interconnect IP created to address precisely these challenges. By automating topology generation, minimizing wire lengths, and enabling incremental updates, a smart interconnect IP like FlexGen closes the gap between correctness and optimization. As a result, engineering groups under pressure to deliver complex designs quickly gain a path to higher performance with less effort.
There is a difference between finding a path and finding the best path. In SoC design, that difference determines competitiveness in performance, power, and time-to-market, and smart NoC automation is what makes it possible.
Rick Bye is Director of Product Management and Marketing at Arteris, overseeing the FlexNoC family of non-coherent NoC IP products. Previously, he was a senior product manager at Arm, responsible for a demonstration SoC and compression IP. Rick has extensive product management and marketing experience in semiconductors and embedded software.
Related Content
- SoC Interconnect: Don’t DIY!
- The network-on-chip interconnect is the SoC
- SoC interconnect architecture considerations
- SoCs Get a Helping Hand from AI Platform FlexGen
- Smarter SoC Design for Agile Teams and Tight Deadlines
The post A logically correct SoC design isn’t an optimized design appeared first on EDN.
AI Ethernet NIC drives trillion-parameter AI workloads

Broadcom Inc. introduces Thor Ultra, claiming the industry’s first 800G AI Ethernet network interface card (NIC). The Ethernet NIC, adopting the open Ultra Ethernet Consortium (UEC) specification, can interconnect hundreds of thousands of XPUs to drive trillion-parameter AI workloads.
The UEC modernized remote direct memory access (RDMA) for large AI clusters, which the Thor Ultra leverages, offering several RDMA innovations. These include packet-level multipathing for efficient load balancing, out-of-order packet delivery directly to XPU memory for maximizing fabric utilization, selective retransmission for efficient data transfer, and programmable receiver-based and sender-based congestion control algorithms.
By providing these advanced RDMA capabilities in an open ecosystem, it allows customers to connect to XPUs, optics, or switches and to reduce dependency on proprietary, vertically integrated solutions, Broadcom said.
(Source: Broadcom Inc.)
The Thor Ultra joins Broadcom’s Ethernet AI networking portfolio, including Tomahawk 6, Tomahawk 6-Davisson, Tomahawk Ultra, Jericho 4, and Scale-Up Ethernet (SUE), as part of an open ecosystem for large-scale, high-performance XPU deployments.
The Thor Ultra Ethernet NIC is available in standard PCIe CEM and OCP 3.0 form factors. It offers 200G or 100G PAM4 SerDes with support for long-reach passive copper, and claims the industry’s lowest bit error rate SerDes, reducing link flaps and accelerating job completion time.
Other features include a PCI Express Gen 6 ×16 host interface, programmable congestion control pipeline, secure boot with signed firmware and device attestation, and line-rate encryption and decryption with PSP offload, which relieves the host/XPU of compute-intensive tasks.
The Ethernet NIC also provides packet trimming and congestion signaling support with Tomahawk 5, Tomahawk 6, or any UEC compliant switch. Thor Ultra is now sampling.
The post AI Ethernet NIC drives trillion-parameter AI workloads appeared first on EDN.
Power design tools ease system development

Analog Devices, Inc. (ADI) launches its ADI Power Studio, a family of products that offers advanced modeling, component recommendations, and efficiency analysis with simulation to help streamline power management design and optimization. ADI also offers early versions of two new web-based tools as part of Power Studio.
The web-based ADI Power Studio Planner and ADI Power Studio Designer tools, together with the full ADI Power Studio portfolio, are designed to streamline the entire power system design process from initial concept through measurement and evaluation. The Power Studio portfolio also features ADI’s existing desktop and web-based power management tools, including LTspice, SIMPLIS, LTpowerCAD, LTpowerPlanner, EE-Sim, LTpowerPlay, and LTpowerAnalyzer.
(Source: Analog Devices Inc.)
The Power Studio tools address key challenges in designing electronic systems with dozens of power rails and interdependent voltage domains, which create greater design complexity. That complexity forces rework during architecture decisions, component selection, and validation, ADI said.
Power Studio addresses these challenges by providing a workflow that helps engineering teams make better decisions earlier by simulating real-world performance with accurate models and automating key outputs such as bill of materials and report generation, helping to reduce rework.
The ADI Power Studio Planner web-based tool targets system-level power tree planning. It provides an interactive view of the system architecture, making it easier to model power distribution, calculate power loss, and analyze system efficiency. Key features include intelligent parametric search and tradeoff comparisons.
The ADI Power Studio Designer is a web-based tool for IC-level power supply design. It provides optimized component recommendations, performance estimates, and tailored efficiency analysis. Built on the ADI power design architecture, Power Studio Designer offers guided workflows so engineers can set key parameters to build accurate models to simulate real-world performance, with support for both LTspice and SIMPLIS schematics, before moving to hardware.
Power Studio Planner and Power Studio Designer are available now as part of the ADI Power Studio. These tools are the first products released under ADI’s vision to deliver a fully connected power design workflow for customers. ADI plans to introduce ongoing updates and product announcements in the months ahead.
The post Power design tools ease system development appeared first on EDN.
Broadcom delivers Wi-Fi 8 chips for AI

Broadcom Inc. claims the industry’s first Wi-Fi 8 silicon solutions for the broadband wireless edge, including residential gateways, enterprise access points, and smart mobile clients. The company also announced the availability of its Wi-Fi 8 IP for license in IoT, automotive, and mobile device applications.
Designed for AI-era edge networks, the new Wi-Fi 8 chips include the BCM6718 for residential and operator access applications, the BCM43840 and BCM43820 for enterprise access applications, and the BCM43109 for edge wireless clients such as smartphones, laptops, tablets, and automotive applications. These new chips also include a hardware-accelerated telemetry engine, targeting AI-driven network optimization. This engine collects real-time data on network performance, device behavior, and environmental conditions.
(Source: Broadcom Inc.)
The engine is a critical input for AI models and can be used by customers to train and run inference on the edge or in the cloud for use cases such as measuring and optimizing quality of experience (QoE), strengthening Wi-Fi network security and anomaly detection, and lowering the total cost of ownership through predictive maintenance and automated optimization, Broadcom said.
Wi-Fi 8 silicon chips
The BCM6718 residential Wi-Fi access point chip features advanced eco modes for up to 30% greater energy efficiency and third-generation digital pre-distortion, which reduces peak power by 25%. Other features include a four-stream Wi-Fi 8 radio, receiver sensitivity enhancements enabling faster uploads, BroadStream wireless telemetry engine for AI training/inference, and BroadStream intelligent packet scheduler to maximize QoE. It also provides full compliance to IEEE 802.11bn and WFA Wi-Fi 8 specifications.
The BCM43840 (four-stream Wi-Fi 8 radio) and BCM43820 (two-stream scanning and analytics Wi-Fi 8 radio) enterprise Wi-Fi access point chips also feature advanced eco modes and third-generation digital pre-distortion, a BroadStream wireless telemetry engine for AI training/inference, and full compliance to IEEE 802.11bn and WFA Wi-Fi 8 specifications. They also provide an advanced location tracking capability.
The highly integrated BCM43109 dual-core Wi-Fi 8, high-bandwidth Bluetooth, and 802.15.4 combo chip is optimized for mobile handset applications. The combo chip offers non-primary channel access for latency reduction and improved low-density parity check coding to extend gigabit coverage. It also provides full compliance to IEEE 802.11bn and WFA Wi-Fi 8 specifications, along with 802.15.4 support (including Thread V1.4 and Zigbee Pro) and Bluetooth 6.0 with high-data-throughput and higher-bands support. Other key features include a two-stream Wi-Fi 8 radio with 320-MHz channel support, enhanced long-range Wi-Fi, and sensing and secure ranging.
The Wi-Fi 8 silicon is currently sampling to select partners. The Wi-Fi IP is currently available for licensing, manufacture, and use in edge client devices.
The post Broadcom delivers Wi-Fi 8 chips for AI appeared first on EDN.
Microchip launches PCIe Gen 6 switches

Microchip Technology Inc. expands its Switchtec PCIe family with its next-generation Switchtec Gen 6 PCIe fanout switches, supporting up to 160 lanes for high-density AI systems. Claiming the industry’s first PCIe Gen 6 switches manufactured using a 3-nm process, the Switchtec Gen 6 family features lower power consumption and advanced security features, including a hardware root of trust and secure boot with post-quantum-safe cryptography compliant with the Commercial National Security Algorithm Suite (CNSA) 2.0.
The PCIe 6.0 standard doubles the bandwidth of PCIe 5.0 to 64 GT/s per lane, making it suited for AI workloads and high-performance computing applications that need faster data transmission and lower latency. It also adds flow control unit (FLIT) mode, a lightweight forward-error-correction (FEC) system, and dynamic resource allocation, enabling more efficient and reliable data transfer, particularly for small packets in AI workloads.
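As a back-of-envelope figure, a x16 PCIe 6.0 link therefore moves roughly 64 GT/s × 16 lanes ÷ 8 bits/byte ≈ 128 GB/s of raw bandwidth in each direction, before FLIT and FEC overhead.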
As a high-performance interconnect, the Switchtec Gen 6 PCIe switches, Microchip’s third-generation PCIe switch, enable high-speed connectivity between CPUs, GPUs, SoCs, AI accelerators, and storage devices, reducing signal loss and maintaining the low latency required by AI fabrics, Microchip said.
Though there are no production CPUs with PCIe Gen 6 support on the market yet, Microchip wanted to make sure that all of the infrastructure components are in place ahead of PCIe Gen 6 servers.
“This breakthrough is monumental for Microchip, establishing us once again as a leader in data center connectivity and broad infrastructure solutions,” said Brian McCarson, corporate vice president of Microchip’s data center solutions business unit.
Offering full PCIe Gen 6 compliance, which includes FLIT, FEC, 64-GT/s PAM4 signaling, deferrable memory, and 14-bit tag, the Switchtec Gen 6 PCIe switches feature 160 lanes, 20 ports, and 10 stacks, with each port featuring hot- and surprise-plug controllers. Also available are 144-lane variants. These switches support non-transparent bridging to connect and isolate multiple host domains, and multicast for one-to-many data distribution within a single domain. They are suited for high-performance computing, cloud computing, and hyperscale data centers.
(Source: Microchip Technology Inc.)
Multicast support is a key feature of the next-generation switch. Not all switch providers have multicast capability, McCarson said.
“Without multicast, if a CPU needs to communicate to two drives because you want to have backup storage, it has to cast to one drive and then cast to the second drive,” McCarson said. “With multicast, you can send a signal once and have it cast to multiple drives.
“Or if the GPU and CPU have to communicate but you need to have all of your GPUs networked together, the CPU can communicate to an entire bank of GPUs or vice versa if you’re operating through a switch with multicast capability,” he added. “Think about the power savings from not having a GPU or CPU do the same thing multiple times day in, day out.”
McCarson said customers are interested in PCIe Gen 6 because they can double the data rate, but when they look at the benefits of multicast, it could be even bigger than doubling the data rates in terms of efficient utilization of their CPU and GPU assets.
Other features include advanced error containment, comprehensive diagnostics and debug capabilities, several I/O interfaces, an integrated MIPS processor, and bifurcation options at x8 and x16. Input and output reference clocks are based on PCIe stacks, with four input clocks per stack.
Higher performance
The Switchtec Gen 6 product delivers on performance in signal integrity, advanced security, and power consumption.
PCIe 6.0 uses PAM4 signaling, which enables the doubling of the data rate, but it can also reduce the signal-to-noise ratio, causing signal-integrity issues. “Signal integrity is one of the key factors when you’re running this higher data rate,” said Tam Do, technical engineer, product marketing for Microchip’s data center solutions business unit.
The signal-loss, or insertion-loss, budget set by the PCIe 6.0 spec is 32 dB. The new switch meets the spec thanks in part to its SerDes design and Microchip’s recommended pinout and package layout, according to Do.
In addition, Microchip added post-quantum cryptography to the new chip, which is not part of the PCIe standard, to meet customer requirements for a higher level of security, Do said.
The PCIe switch also offers lower power consumption, thanks to the 3-nm process, than competing PCIe Gen 6 devices built on older technology nodes.
Development tools include Microchip’s ChipLink diagnostic tools, which provide debug, diagnostics, configuration, and analysis through an intuitive graphical user interface. ChipLink connects via in-band PCIe or sideband signals such as UART, TWI, and EJTAG. Also available is the PM61160-KIT Switchtec Gen 6 PCIe switch evaluation kit with multiple interfaces.
Switchtec Gen 6 PCIe switches (x8 and x16 bifurcation) and an evaluation kit are available for sampling to qualified customers. A low-lane-count version with 64 and 48 lanes with x2, x4, x8, x16 bifurcation for storage and general enterprise use cases will also be available in the second quarter of 2026.
The post Microchip launches PCIe Gen 6 switches appeared first on EDN.
Amps x Volts = Watts
Analog topologies abound for converting current to voltage, voltage to current, voltage to frequency, and frequency to voltage, among other conversions.
Figure 1 joins the flock while singing a somewhat different tune. This current, voltage, and power (IVW) DC power converter multiplies current by voltage to sense wattage. Here’s how it gets off the ground.

Figure 1 The “I*V = W” converter comprises voltage-to-frequency conversion (U1ab & A1a) with frequency (F) of 2000 * Vload, followed by frequency-to-voltage conversion (U1c & A1b) with Vw = Iload * F / 20000 = (Iload * Vload) / 10 = Watts / 10 where Vload < 33 V and Iload < 1.5 A.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The basic topology of the IVW converter comprises a voltage-to-frequency converter (VFC) cascaded with a frequency-to-voltage converter (FVC). U1ab and A1a, combined with the surrounding discretes (Q1, Q2, Q3, etc.), make a VFC similar to the one described in this previous Design Idea, “Voltage inverter design idea transmogrifies into a 1MHz VFC.”
The U1ab, A1a, C2, etc., VFC forms an inverting charge pump feedback loop that actively balances the 1 µA/V current through R2. Each cycle of the VFC deposits a charge of 5 V * C2, or 500 picocoulombs (pC), onto integrator capacitor C3 to produce an F of 2 kHz * Vload (= 1 µA / 500 pC) for the control signal input of the FVC switch U1c.
The other input to the U1c FVC is the -100 mV/A current-sense signal from R1. This combo forces U1c to pump F * -0.1 V/amp * 500 pF = -2 kHz * Vload * 50 pC * Iload into the input of the A1b inverting integrator.
The melodious result is:
Vw = R1 * Iload * 2000 * Vload * R6 * C6
or,
Vw = Iload * Vload * 0.1 * 2000 * 1 MΩ * 500 pF = 100 mV/W.
The R6C5 = 100-ms integrator time constant provides >60 dB of ripple attenuation for Vload > 1 V and a low-noise 0- to 5-V output suitable for consumption by a typical 8- to 10-bit resolution ADC input. Diode D1 provides fire insurance for U1 in case Vload gets shorted to ground.
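As a quick sanity check of the overall scale factor, here is a minimal numeric sketch in Python (the gains and component values are those quoted above; the 24-W operating point is just an illustrative choice):

```python
# Numeric check of the IVW converter scale factor (values from the text above)
R1 = 0.1          # current-sense gain, V per amp
K_VFC = 2000      # VFC gain, Hz per volt of Vload
C_PUMP = 500e-12  # FVC charge-pump capacitance, farads
R6 = 1e6          # A1b integrator feedback resistance, ohms

scale = R1 * K_VFC * C_PUMP * R6          # volts of Vw per watt of load power
print(f"scale = {scale * 1e3:.0f} mV/W")  # expect 100 mV/W

# Illustrative operating point: a 24-V load drawing 1 A (24 W)
Vload, Iload = 24.0, 1.0
print(f"Vw = {scale * Vload * Iload:.2f} V")  # expect 2.40 V
```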
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Voltage inverter design idea transmogrifies into a 1MHz VFC
- A simulated 100-MHz VFC
- A simple, accurate, and efficient charge pump voltage inverter for $1 (in singles)
- 100-MHz VFC with TBH current pump
The post Amps x Volts = Watts appeared first on EDN.
Inside Walmart’s onn. 4K Plus: A streaming device with a hidden bonus
Walmart onn. coverage
Walmart’s onn. (or is it now just “onn”?) line of streaming media boxes and sticks is regularly represented here at Brian’s Brain, for several good reasons. They’re robustly featured, notably more economical than Google’s own Android TV-now-Google TV offerings, and frequently price-undershoot competitive devices from companies like Apple and Roku, too. Most recently, from a “box” standpoint, I took apart the company’s high-end onn. 4K Pro for publication at EDN in July, following up on the entry-level onn. 4K, which had appeared in April. And, within a subsequent August-published teardown of Google’s new TV Streamer 4K, I also alluded to an upcoming analysis of Walmart’s mid-tier onn. 4K Plus.
An intro to the onn.
That time is now. And “mid-tier” is subjective. Hold that thought until later in the write-up. For now, I’ll start with some stock shots:

Here’s how Walmart slots the “Plus” within its current portfolio of devices:

Note that, versus the Pro variant, at least in its final configuration, the remote control is not backlit this time. I was about to say that I guess we now know where the non-backlit remotes for the initial production run(s) of the Pro came from, although this one’s got the Free TV button, so it’s presumably a different variant from the other two, too (see what I did there?). Stand by.
And hey, how about a promo video too, while we’re at it?
Now for some real-life photos. Box shots first:




Is it wrong…

that I miss the prior packaging, even though there’s no longer a relevant loop on top of the box?

I digress. Onward:

Time to dive inside:

Inside is a two-level tray, with our patient (and its companion wall wart) on top, along with a sliver of literature:


Flip the top half over:

and the rest of the kit comes into view: a pair of AA batteries, an HDMI cable, and the aforementioned remote control:

Since I just “teased” the remote control, let’s focus on that first, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:





All looks the same as before so far, eh? Well then, last but not least, let’s look at the back:



Specifically, what does the product-code sticker say this time?

Yep, v2.32, different than the predecessors. Here’s the one in the baseline onn. 4K (v2.15, if you can’t read the tiny print):

And the two generations that ship(ped) with the 4K Pro, Initial (v2.26):

And subsequently, whose fuller feature set matched the from-the-beginning advertising (v2.30):

Skipping past the HDMI cable and AA battery set (you’re welcome), here’s the wall wart:

Complete with a “specs” close-up:

whose connector, believe it or not, marks the third iteration within the same product generation: micro-USB for the baseline 4K model:

“Barrel” for the 4K Pro variant:

And this time, USB-C:

I would not want to be the person in charge of managing onn. product contents inventory…
Finally, our patient, first still adorned with its protective translucent, instructions-augmented plastic attire:

And now, stark nekkid. Top:

Front:

Bare left side:

Back: left-to-right are the reset switch, HDMI output, and USB-C input. Conceptually, you could seemingly tether the latter to an OTG (on-the-go) splitter, thereby enabling you to (for example) feed the device with both power and data coming from an external storage device, but in practice, it’s apparently hit-and-miss at best:

And equally bare right side:

There’s one usual externally visible adornment that we haven’t yet seen. Can you guess what it is before reading the following sentence?
Yes, clever-among-you, that’s right: it’s the status LED. Flip the device over and…there it be:

Now for closeups of the underside marking and (in the second) the aforementioned LED, which is still visible from the front of the device when illuminated because it’s on a beveled edge:

Enough of the teasing. Let’s get inside. For its similar-form-factor mainstream 4K precursor, I’d gone straight to the exposed circumference gap between the two halves. But I couldn’t resist a preparatory peek underneath the rubber feet that taunted me this time:

Nope. No screw heads here:

Back to Plan B:

There we go, with only a bit of collateral clip-snipped damage:


The inside of the bottom half of the case is bland, unless you’re into translucent LED windows:

The other half of the previous photo is much more interesting (at least to me):

Three more screws to go…

And the PCB then lifts right out of the enclosure’s remaining top half:

Allowing us to first-time see the PCB topside:


Here are those two PCB sides again, now standalone. Bottom:

and top:

Much as (and because) I know you want me to get to ripping the tops off those Faraday cages, I’ll show you some side shots first. Right:

Front; check out those Bluetooth and Wi-Fi antennae, reminiscent of the ones in the original 4K:

Left:

And back:

Let’s pop the top off the PCB bottom-side cage first:

Pretty easy; I managed that with just my fingernail and a few deft yanks:


At the bottom is the aforementioned LED:

And within the cage boundaries,

are two ICs of particular note. The first is an 8-Gbit (1-GByte) Micron DDR4 SDRAM labeled as follows:
41R77
D8BPK
And the second, just below it, is the nonvolatile memory counterpart: a FORESEE FEMDNN016G 16-GByte eMMC.
Now for the other (top) side. As you likely already noticed from the side shots, the total cage height here is notably thicker than that of its bottom-side counterpart. That’s because, unsurprisingly, there’s a heat sink stuck on top of it. Heat rises, after all; I already suspected, even before not finding the application processor inside the bottom-side cage, that we’d find it here instead.
My initial attempts at popping off the cage-plus-heatsink sandwich using traditional methods—first my fingernail, followed by a Jimmy—were for naught, threatening only to break my nail and bend the blade, as well as to damage the PCB alongside the cage base. I then peeked under the sticker attached to the top of the heatsink to see if it was screwed down in place. Nope:

Eventually, by jamming the Jimmy in between the heatsink and cage top, I overcame the recalcitrant adhesive that to that point had succeeded in keeping them together:




Now, the cage came off much more easily. In retrospect, it was the combined weight of the two pieces (predominantly the heatsink, a hefty chunk of metal) that had seemingly thwarted my prior efforts:



At the bottom, straddling the two aforementioned antennae, is the same Fn-Link Technology 6252B-SRB wireless communications module that we’d found in the earlier 4K Pro teardown:

And inside the cage? Glad you asked:

To the left is the other 8-Gbit (1-GByte) Micron DDR4 SDRAM. And how did I know they’re both DDR4 in technology, by the way? Because that’s the interface generation that mates up with the IC on the right, the application processor, which is perhaps the most interesting twist in this design. It’s the Amlogic S905X5M, an upgrade to the S905X4 found in the 4K Pro. It features a faster Arm Cortex-A55 quad-core CPU cluster (2.5 GHz vs 2 GHz), which justifies the beefy heatsink, and an enhanced GPU core (Arm Mali-G310 V2 vs Arm Mali-G31 MP2).
The processing enhancements bear fruit when you look at the benchmark comparisons. Geekbench improvements for the onn. 4K Plus scale linearly with the CPU clock speed boost:
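(For reference, the 2.5-GHz versus 2-GHz difference works out to a 25% clock speed increase.)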
While GFXBench comparative results also factor in the graphics subsystem enhancements:

I’d be remiss if I didn’t also point out the pricing disparity between the two systems: the 4K Plus sells for $29.88 while the 4K Pro is normally priced $20 more than that ($49.88), although as I type these words, it’s promotion-priced at 10% off, $44.73. Folks primarily interested in gaming on Google TV platforms, whether out-of-the-box or post-jailbreaking, are understandably gravitating toward the cheaper, more computationally capable 4K Plus option.
That said, the 4K Pro also has 50% more DRAM and twice the storage, along with an integrated wired Ethernet connectivity option and other enhancements, leaving it the (potentially, at least) better platform for general-purpose streaming box applications, if price isn’t a predominant factor.
That wraps up what I’ve got for you today. I’ll keep the system disassembled for now in case readers have any additional parts-list or other internal details questions once the write-up is published. And then, keeping in mind the cosmetic-or-worse damage I did getting the heatsink and topside cage off, I’ll put it back together to determine whether its functionality was preserved. One way or another, I’ll report back the results in the comments. And speaking of which, I look forward to reading your thoughts there, as well.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Perusing Walmart’s onn. 4K Pro Streaming Device with Google TV: Storage aplenty
- Walmart’s onn. full HD streaming device: Still not thick, just don’t call it a stick
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
- Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost
The post Inside Walmart’s onn. 4K Plus: A streaming device with a hidden bonus appeared first on EDN.
Tesla’s wireless-power “dream” gets closer to reality—maybe

You are likely at least slightly aware of the work that famed engineer, scientist, and researcher Nikola Tesla did in the early 1900s in his futile attempt to wirelessly transmit usable power via a 200-foot tower. The project is described extensively on many credible web sites, such as “What became of Nikola Tesla’s wireless dream?” and “Tesla’s Tower at Wardenclyffe” as well as many substantive books.
Since Tesla, there have been numerous other efforts to transmit power without wires using RF (microwave and millimeter waves) and optical wavelengths. Of course, both “bands” are wireless and governed by Maxwell’s equations, but there are very different practical implications.
Proponents of wireless transmitted power see it as a power-delivery source for both stationary and moving targets including drones and larger aircraft—very ambitious objectives, for sure. We are not talking about near-field charging for devices such as smartphones, nor the “trick” of wireless lighting of a fluorescent bulb that is positioned a few feet away from a desktop Tesla coil. We are talking about substantial distances and power.
Most early efforts to beam power were confined to microwave frequencies due to available technologies. However, they require relatively larger antennas to focus the transmitted beam, so millimeter waves or optical links are likely to work better.
The latest efforts and progress have been in the optical spectrum. These systems use a fiber-optic-based laser for a tightly confined beam. The “receivers” for optical power transmission are specialized photovoltaic cells optimized to convert a very narrow wavelength of light into electric power with very high efficiency. The reported efficiencies can exceed 70%, more than double that of a typical broader-spectrum solar cell.
In one design from Powerlight Technologies, the beam is contained within a virtual enclosure that senses an object impinging on it—such as a person, bird, or even airborne debris—and triggers the equipment to cut power to the main beam before any damage is done (Figure 1). The system monitors the volume the beam occupies, along with its immediate surroundings, allowing the power link to automatically reestablish itself when the path is once again clear.

Figure 1 This free-space optical-power path link includes a safety “curtain” which cuts off the beam within a millisecond if there is a path interruption. Source: Powerlight Technologies
Although this is nominally listed as a “power” project, as with any power-related technology, there’s a significant amount of analog-focused circuitry and components involved. These provide raw DC power to the laser driver and to the optical-conversion circuits, lasers, overall system management at both ends, and more.
Recent progress raises effectiveness
In May 2025, DARPA’s Persistent Optical Wireless Energy Relay (POWER) program achieved several new records for transmitting power over distance in a series of tests in New Mexico. The team’s POWER Receiver Array Demo (PRAD) recorded more than 800 watts of power delivered during a 30-second transmission from a laser 8.6 kilometers (5.3 miles) away. Over the course of the test campaign, more than a megajoule of energy was transferred.
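For scale, 800 W sustained for 30 seconds works out to 24 kJ, so accumulating more than a megajoule implies on the order of 40-plus such transmissions (or longer runs) over the test campaign.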
In the never-ending power-versus-distance challenge, the previous greatest reported distance records for an appreciable amount of optical power (>1 microwatt) were 230 watts of average power at 1.7 kilometers for 25 seconds and a lesser (but undisclosed) amount of power at 3.7 kilometers (Figure 2).

Figure 2 The POWER Receiver Array Demo (PRAD) set the records for power and distance for optical power beaming; the graphic shows how it compares to previous notable efforts. Source: DARPA
To achieve the power and distance record, the power receiver array used a new receiver technology designed by Teravec Technologies with a compact aperture for the laser beam to shine. That’s to ensure that very little light escapes once it has entered the receiver. Inside the receiver, the laser strikes a parabolic mirror that reflects the beam onto dozens of photovoltaic cells to convert the energy back to usable power (Figure 3).

Figure 3 In the optical power-beaming receiver designed for PRAD, the laser enters the center aperture, strikes a parabolic mirror, and reflects onto dozens of photovoltaic cells (left) arranged around the inside of the device to convert the energy back to usable power (right). Source: Teravec Technologies
While it may seem logical to use a mirror or lens when it comes to redirecting laser beams, the project team instead found that diffractive optics were a better choice because they are good at efficiently handling monochromatic wavelengths of light. They used additive manufacturing to create optics and included an integrated cooling system.
Further details on this project are hard to come by, but that’s almost beside the point. The key message is that there has been significant progress. As is usually the case, some of it leverages progress in other disciplines, and much of it is “home made.” Nonetheless, there are significant technical costs, efficiency burdens, and limitations due to atmospheric density—especially at lower altitudes and ground level.
Do you think advances in various wireless-transmission components and technologies will reach the point where this becomes a viable power-delivery approach for broader uses beyond highly specialized ones? Can it be made to work for moving targets as well as stationary ones? Or will this be one of those technologies where success is always “just around the corner”? And finally, is there any relationship between this project and the work on directed laser energy systems to “shoot” drones out of the sky, which has parallels to the beam generation/emission part?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- What…You’re Using Lasers for Area Heating?
- Forget Tesla coils and check out Marx generators
- Pulsed high-power systems are redefining weapons
- Measuring powerful laser output takes a forceful approach
The post Tesla’s wireless-power “dream” gets closer to reality—maybe appeared first on EDN.
Intel releases more details about Panther Lake AI processor

Intel Corp. unveils new details about its next-generation client processor for AI PCs, the Core Ultra series 3, code-named Panther Lake, which is expected to begin shipping later this year. The company also gave a peek into its Xeon 6+ server processor, code-named Clearwater Forest, expected to launch in the first half of 2026.
Core Ultra series 3 client processor (Source: Intel Corp.)
Panther Lake is the company’s first product built on the advanced Intel 18A semiconductor process, the first 2-nanometer-class node manufactured in the United States. It delivers up to 15% better performance per watt and 30% improved chip density compared to Intel 3, thanks to two key advances—RibbonFET and PowerVia.
RibbonFET, Intel’s first new transistor architecture in over a decade, delivers greater scaling and more efficient switching for better performance and energy efficiency. The PowerVia backside power-delivery system improves power flow and signal delivery.
Also contributing to its greater flexibility and scalability is Foveros, Intel’s advanced packaging and 3D chip stacking technology for integrating multiple chiplets into advanced SoCs.
Panther Lake
The Core Ultra series 3 processors offer scalable AI PC performance, targeting a range of consumer and commercial AI PCs, gaming devices, and edge solutions. Intel said the multi-chiplet architecture offers flexibility across form factors, segments, and price points.
The Panther Lake processors offer Lunar Lake-level power efficiency and Arrow Lake-class performance, according to Intel. They offer up to 16 CPU cores, up to 96 GB of LPDDR5, and up to 180 TOPS across the platform. They also feature new P- and E-cores, along with a new GPU and next-generation IPU 7.5 and NPU 5, delivering higher performance and greater efficiency than previous generations.
Key features include up to 16 new performance-cores (P-cores) and efficient-cores (E-cores) delivering more than 50% faster CPU performance versus the previous generation; 30% lower power consumption versus Lunar Lake; and a new Intel Xe3 Arc GPU with up to 12 Xe cores delivering more than 50% faster graphics performance versus the previous generation, along with up to 12 ray tracing units and up to 16-MB L2 cache.
Panther Lake also features the next-gen NPU 5 with up to 50 trillion operations per second (TOPS), offering >40% better TOPS/area versus Lunar Lake and 3.8× the TOPS of Arrow Lake-H.
The IPU 7.5 offers AI-based noise reduction and local tone mapping. It delivers 16-MP stills and 120 frames per second slow motion and supports up to three concurrent cameras. It also offers a 1.5-W reduction in power with hardware staggered HDR compared to Lunar Lake.
Other features include enhanced power management, up to 12 lanes PCIe 5, integrated Thunderbolt 4, integrated Intel Wi-Fi 7 (R2) and dual Intel Bluetooth Core 6, and LPCAMM support.
Panther Lake will also extend to edge applications including robotics, Intel said. A new Intel Robotics AI software suite and reference board is available with AI capabilities to develop robots using Panther Lake for both controls and AI/perception. The suite includes vision libraries, real-time control frameworks, AI inference engines, orchestration-ready modules, and hardware-aware tuning.
Panther Lake will begin ramping high-volume production this year, with the first SKU scheduled to ship before the end of the year. General market availability will start in January 2026.
Recommended: Intel’s confidence shows as it readies new processors on 18A
Clearwater Forest
Intel also provided a sneak peek into the Xeon 6+, its first 18A-based server processor. It is also touted as the company’s most efficient server processor. Both Panther Lake and Clearwater Forest, built on Intel 18A, are being manufactured at Intel’s new Fab 52, which is Intel’s fifth high-volume fab at its Ocotillo campus in Chandler, Arizona.
Xeon 6+ server processor (Source: Intel Corp.)
Clearwater Forest is Intel’s next-generation E-core processor, featuring up to 288 E-cores, and a 17% increase in instructions per cycle (IPC) over the previous generation. Expected to offer significant improvements in density, throughput, and power efficiency, Intel plans to launch Xeon 6+ in the first half of 2026. This server processor series targets hyperscale data centers, cloud providers, and telcos.
The post Intel releases more details about Panther Lake AI processor appeared first on EDN.
Broadcom debuts 102.4-Tbits/s CPO Ethernet switch

Broadcom Inc. launches the Tomahawk 6 – Davisson (TH6-Davisson), the company’s third-generation co-packaged optics (CPO) Ethernet switch, delivering the bandwidth, efficiency, and reliability for next-generation AI networks. The TH6-Davisson provides advances in power efficiency and traffic stability for higher optical interconnect performance required to scale-up and scale-out AI clusters.
The trend toward CPO in data centers is driven by the need to increase bandwidth and lower energy consumption. With the TH6-Davisson, Broadcom claims the industry’s first 102.4 Tbits/s of optically enabled switching capacity, doubling the bandwidth of any CPO switch available today. This sets a new benchmark for data-center performance, Broadcom said.
(Source: Broadcom)
Designed for power efficiency, the TH6-Davisson heterogeneously integrates TSMC Compact Universal Photonic Engine (TSMC COUPE) technology-based optical engines with advanced substrate-level multi-chip packaging. This is reported to dramatically reduce the need for signal conditioning and minimize trace loss and reflections, resulting in a 70% reduction in optical interconnect power consumption. This is more than 3.5× lower than traditional pluggable optics, delivering a significant improvement in energy efficiency for hyperscale and AI data centers, Broadcom said.
In addition to power efficiency, the TH6-Davisson Ethernet switch addresses link stability, which has become a critical bottleneck as AI training jobs scale, the company added, with even minor interruptions causing losses in XPU and GPU utilization.
The TH6-Davisson solves this challenge by directly integrating optical engines onto a common package with the Ethernet switch. The integration eliminates many of the sources of manufacturing and test variability inherent in pluggable transceivers, resulting in significantly improved link flap performance and higher cluster reliability, according to Broadcom.
In addition, operating at 200 Gbits/s per channel, TH6-Davisson doubles the line rate and overall bandwidth of Broadcom’s second-generation TH5-Bailly CPO solution. It seamlessly interconnects with DR-based transceivers as well as NPO and CPO optical interconnects running at 200 Gbits/s per channel, enabling connectivity with advanced NICs, XPUs, and fabric switches.
The TH6-Davisson BCM78919 supports a scale-up cluster size of 512 XPUs and up to 100,000+ XPUs in two-tier networks at 200 Gbits/s per link. Other features include 16 × 6.4 Tbits/s Davisson DR optical engines and field-replaceable ELSFP laser modules.
Broadcom is now developing its fourth-generation CPO solution. The new platform will double per-channel bandwidth to 400 Gbits/s and deliver higher levels of energy efficiency.
The TH6-Davisson BCM78919 is IEEE 802.3 compliant and interoperable with existing 400G and 800G standards. Broadcom is currently sampling the Ethernet switch to its early access customers and partners.
The post Broadcom debuts 102.4-Tbits/s CPO Ethernet switch appeared first on EDN.
Analog frequency doublers

High school trigonometry combined with four-quadrant multipliers can be exploited to yield sinusoidal frequency doublers. No harmonic-comb-generating non-linear process is involved, which means no demanding filtering requirements.
Starting with some sinusoidal signal and needing to derive new sinusoidal signals at multiples of the original sinusoidal frequency, a little trigonometry and four-quadrant multipliers can be useful. Consider the following SPICE simulation in Figure 1.

Figure 1 Two analog frequency doublers, A1 + U1 and A2 + U2, in cascade to form a frequency quadrupler.
The above sketch shows the pair A1 and U1 configured as a frequency doubler from V1 to V2, and the pair A2 and U2 configured as another frequency doubler from V2 to V3. Together, the two of them form a frequency quadrupler from V1 to V3. With more circuits, you can make an octupler and so on within the bandwidth limits of the active semiconductors, of course.
Frequency doubler operation is based on these trigonometric identities:
sin²(x) = 0.5 * (1 – cos(2x)) and cos²(x) = 0.5 * (1 + cos(2x))
sin²(x) = 0.5 – 0.5 * cos(2x) and cos²(x) = 0.5 + 0.5 * cos(2x)
Take your pick; both equations yield a DC offset plus a sinusoid at twice the frequency you started with. Add a DC block, as with C1 and R1 above, and you are left with a doubled-frequency sinusoid at half the original amplitude. Follow that up with a times-two gain stage, and you have made a sinusoid at twice the original frequency and at the same amplitude with which you started.
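A quick way to see this numerically is with a minimal Python sketch of an ideal multiplier, DC block, and times-two gain (the sample rate and 1-kHz test frequency are arbitrary choices for the illustration):

```python
import numpy as np

fs = 1_000_000                      # sample rate, Hz (arbitrary for this sketch)
f0 = 1_000                          # input sinusoid frequency, Hz
t = np.arange(0, 0.02, 1 / fs)

vin = np.sin(2 * np.pi * f0 * t)    # original sinusoid
vsq = vin * vin                     # ideal four-quadrant multiplier: Vin * Vin
vout = 2 * (vsq - np.mean(vsq))     # DC block (removes the 0.5 offset), then x2 gain

# The dominant spectral component should land at 2*f0 with the original amplitude
spectrum = np.abs(np.fft.rfft(vout))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(f"peak at {freqs[np.argmax(spectrum)]:.0f} Hz, amplitude ~ {vout.max():.2f}")
```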
This way of doing things takes less stuff than having to do some non-linear process on the input sinusoid to generate a harmonic comb and then having to filter out everything except the one frequency you want.
Although there might actually be some other harmonics at each op-amp output, depending on how non-ideal the multiplier and op-amp might be, this process does not nominally generate other unwanted harmonics. Such harmonics as might incidentally arise won’t require a high-performance filter for their removal.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Frequency doubler with 50 percent duty cycle
- A 50 MHz 50:50-square wave output frequency doubler/quadrupler
- Frequency doubler operates on triangle wave
- Fast(er) frequency doubler with square wave output
- Triangle waves drive simple frequency doubler
The post Analog frequency doublers appeared first on EDN.
Dual-input inductive sensor simplifies design

Melexis introduces the MLX90514, a dual-input inductive sensor IC that simultaneously processes signals from two sets of coils to compute differential or vernier angles on-chip. The inductive sensor targets automotive applications, such as steering torque feedback, steering angle sensing, and steering rack motor control.
(Source: Melexis)
Traditionally, designers have combined two single-channel ICs or used magnetic sensors for many of these applications, Melexis said. However, with the move to electrification, autonomy, and advanced driver-assistance systems (ADAS), vehicle control systems have become more complex, particularly in steering torque feedback and steering rack motor control (including steer-by-wire implementations), which need dual-channel position sensing to deliver accurate torque and angle measurements.
By integrating differential and vernier angle calculations on-chip, the MLX90514 reduces processing demands on the host system, enabling smaller and more streamlined sensor designs. By computing complex position information (such as differential or vernier angles) directly at the sensor, it eliminates the need for multiple ICs, which reduces design complexity and component count.
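For readers unfamiliar with the vernier (nonius) principle mentioned above, here is a minimal, generic Python sketch of how an absolute mechanical angle can be recovered from two periodic coil readings. It illustrates the general technique only, not Melexis’ on-chip implementation, and it assumes the two coil sets have electrical period counts that differ by one:

```python
def vernier_angle(theta1, theta2, n1):
    """Generic vernier/nonius angle recovery (illustrative only).

    theta1, theta2: electrical angles in degrees from two coil sets with
    n1 and n1-1 electrical periods per mechanical revolution.
    Returns the absolute mechanical angle in degrees (0..360).
    """
    coarse = (theta1 - theta2) % 360.0          # repeats once per mechanical turn
    k = round((n1 * coarse - theta1) / 360.0)   # which electrical period theta1 is in
    return ((theta1 + 360.0 * k) / n1) % 360.0

# Example: a 100-degree mechanical angle seen by 5-period and 4-period coil sets
print(vernier_angle((5 * 100) % 360, (4 * 100) % 360, 5))   # -> 100.0
```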
The MLX90514 is Melexis’ first dual inductive application-specific standard product (ASSP). It offers several interface options—including SENT, SPC, and PWM for a standalone module, and SPI for embedded modules—with integrated on-chip processing. The SENT/SPC output accommodates up to a 24-bit payload, enabling high-fidelity transmission of two synchronized 12-bit channels, which is required for high-accuracy torque and angle sensing.
Key features include zero-latency synchronized dual-channel operation, external pulse-width-modulation (PWM) signal integration that allows reading PWM signals from external sources, and the capability to handle small inductive signals, which supports compact coil designs and tighter printed-circuit-board layouts for smaller sensing modules.
The MLX90514 enables ASIL-D-compliant sensing systems, as a Safety Element out of Context (SEooC), for automotive steering torque and angle applications. The inductive interface sensor is available now.
The post Dual-input inductive sensor simplifies design appeared first on EDN.
Reference designs advance AI factories

Schneider Electric offers two reference designs co-engineered with NVIDIA to accelerate deployment of AI-ready infrastructure for AI factories. The controls reference design uses a plug-and-play MQTT architecture to bridge OT and IT systems, enabling operators to access and act on data from every layer.

The first reference design integrates power management and liquid cooling controls with NVIDIA Mission Control software, enabling smooth orchestration of AI clusters. It also supports Schneider’s data-center reference designs for NVIDIA Grace Blackwell systems, giving operators precise control over power and cooling to meet the demands of accelerated AI workloads.
The second reference design supports AI factories running NVIDIA GB300 NVL72 systems at up to 142 kW per rack. It delivers a complete blueprint for facility power, cooling, IT space, and lifecycle software, compatible with both ANSI and IEC standards. Using Schneider’s validated models and digital twins, operators can plan high-density AI data halls, optimize designs, and ensure efficiency, reliability, and scalability for NVIDIA Blackwell Ultra systems.
For more information about these new reference designs, as well as other data-center reference designs developed with NVIDIA, click here.
The post Reference designs advance AI factories appeared first on EDN.
Adaptable gate driver powers 48-V automotive systems

ST’s L98GD8 multichannel gate driver offers flexible output configurations in 48-V automotive power systems. Its eight independent, configurable outputs can drive MOSFETs as individual power switches or as high- and low-side pairs in up to two H-bridges for DC motor control. The device also supports peak-and-hold operation for electrically actuated valves.
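For context, peak-and-hold drive briefly applies full current to pull a valve in, then drops to a lower hold level to keep it open while limiting coil heating. The C sketch below illustrates that generic sequence only; set_channel_duty(), delay_ms(), and the duty and timing constants are hypothetical placeholders, not the L98GD8’s actual interface.

```c
#include <stdio.h>

/* Generic peak-and-hold sequence for an electrically actuated valve.
 * Illustrative only: the HAL calls and constants below are hypothetical
 * placeholders, not the L98GD8 register interface. */
#define PEAK_DUTY_PCT  100u   /* full current to pull the valve in   */
#define HOLD_DUTY_PCT   30u   /* reduced current to hold it open     */
#define PEAK_TIME_MS     5u   /* assumed pull-in time for this valve */

static void set_channel_duty(unsigned ch, unsigned duty_pct)
{
    printf("channel %u -> %u%% duty\n", ch, duty_pct);  /* stand-in for a HAL call */
}

static void delay_ms(unsigned ms) { (void)ms; /* stand-in for a timer wait */ }

static void valve_open(unsigned ch)
{
    set_channel_duty(ch, PEAK_DUTY_PCT);   /* peak phase */
    delay_ms(PEAK_TIME_MS);
    set_channel_duty(ch, HOLD_DUTY_PCT);   /* hold phase */
}

static void valve_close(unsigned ch) { set_channel_duty(ch, 0u); }

int main(void)
{
    valve_open(3u);
    valve_close(3u);
    return 0;
}
```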

Programmable gate current helps minimize MOSFET switching noise to meet EMC requirements. The driver operates from a 3.8-V to 58-V battery supply and a 4.5-V to 5.5-V VDD supply. Its I/O is compatible with both 3.3-V and 5-V logic levels.
To ensure safety and reliability, each output provides comprehensive diagnostics, including short-to-battery, short-to-ground, and open-load conditions. Output status is continuously monitored through dedicated SPI registers. The L98GD8 features fast overcurrent shutdown with dual-redundant failsafe pins, battery undervoltage detection, and an ADC for monitoring battery voltage and die temperature. Additional safety functions include Built-In Self-Test (BIST), Hardware Self-Check (HWSC), and a Communication Check (CC) watchdog timer.
The L98GD8 driver is available now, with prices starting at $3.94 each in lots of 1000 units.
The post Adaptable gate driver powers 48-V automotive systems appeared first on EDN.
Thales introduces quantum-safe smartcard

According to Thales, its MultiApp 5.2 Premium PQC is Europe’s first quantum-resistant smartcard to receive high-level security certification from ANSSI (the French National Cybersecurity Agency). Certified to the EAL6+ level under the Common Criteria framework, the smartcard also uses digital signature algorithms standardized by NIST in the U.S.

The MultiApp 5.2 Premium PQC leverages post-quantum cryptography to protect digital identity data in ID cards, health cards, and driving licenses. This new generation of cryptographic signatures is designed to withstand the vast computational power of quantum computers, both today and in the future.
“This first certification for a solution incorporating post-quantum cryptography reflects ANSSI’s commitment to supporting innovation, while upholding the highest cybersecurity standards,” said Franck Sadmi, Head of National Certification Center, French Cybersecurity Agency (ANSSI). “The joint work of Thales, CEA-Leti IT Security Evaluation Facility, and ANSSI is a strong signal that Europe is ready to lead the way in post-quantum security, enabling organizations and governments to deploy solutions that anticipate future risks, rather than waiting for quantum computers to become mainstream.”
The post Thales introduces quantum-safe smartcard appeared first on EDN.
HDR sensor improves automotive cabin monitoring

Joining Omnivision’s Nyxel NIR line, the OX05C1S global-shutter HDR image sensor targets in-cabin driver and occupant monitoring systems (DMS and OMS). The 5-Mpixel sensor, with 2.2-µm backside-illuminated pixels, captures clear images of the entire cabin, enhancing algorithm accuracy even under challenging high-brightness conditions.

The OX05C1S leverages Nyxel technology to achieve high quantum efficiency at the 940-nm NIR wavelength, improving DMS and OMS performance in low-light environments. On-chip RGB-IR separation reduces the need for a dedicated image signal processor and backend processing.
With package dimensions of 6.61×5.34 mm, the OX05C1S is 30% smaller than the previous-generation OX05B (7.94×6.34 mm), providing greater mechanical design flexibility for in-cabin camera integration. Lens compatibility with the OX05B enables reuse of existing optics, simplifying system upgrades and reducing overall design cost.
The OX05C1S sensor is offered in both color filter array (RGB-IR) and monochrome configurations. Samples are available now, with mass production scheduled for 2026.
The post HDR sensor improves automotive cabin monitoring appeared first on EDN.
Vishay launches extensive line of inductors

Expanding its line of inductors and frequency control devices (FCDs), Vishay has added more than 2000 new SKUs across nearly 100 series. The broader offering simplifies sourcing and supports more applications with wider inductance and voltage ranges, improved noise suppression, and additional sizes for compact PCB layouts.

Recent additions include wireless charging inductors, common-mode chokes, high-current ferrite impedance beads, and TLVR inductors, along with nearly 15 new FCD products. To meet the demand for diversified manufacturing, the company is expanding production in Asia, Mexico, and the Dominican Republic. IHLP series power inductors are now shipping from the company’s Gomez Palacio, Durango, Mexico facility.
Product rollouts will continue through 2025, with additional series scheduled to launch in the coming months. In total, Vishay expects to surpass 3000 new SKUs of inductors and FCDs, supporting design activity across industrial, telecom, and consumer markets.
The post Vishay launches extensive line of inductors appeared first on EDN.
Watchdog versus the truck

One of the first jobs I had out of college was with a company that designed and manufactured monitors for large trucks, the kind used in mining operations. The company was a small outfit with around 25 employees and a couple of engineers. Its main product was a monitor that sat on the dash of these trucks and watched over things like oil pressure, coolant temperature and level, hydraulic pressure, and so on. Variations of this monitor had 4, 5, or 6 indicator lights that lit if a monitored point went out of spec. An alarm also sounded, and the truck was shut down through a relay connection.
Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale
Another engineer and I decided it was time to bring this analog monitor into the microprocessor era. The idea was to monitor the same functions, but with only one indicator light and an LCD showing the issue. Along with the alarming function, we could also add more information on the LCD, like temperature and pressure readings. It wasn’t a very complex design. Micros at the time didn’t typically have watchdog circuits, so we added one of the few external watchdogs then available. Our concern was that some transient would throw the micro off course, and we wanted the watchdog to reset the monitor in that case. The 24-V input and all sensor inputs had some level of transient suppression (though, after several decades, I have forgotten what the circuit consisted of).
We completed a design, and it worked very well on the bench. Next, we hit it with various transients that we could generate. Not having access to any transient test equipment, we had to invent some methods to test this. Worse, we had no specs or general information on what kind of transients these trucks can experience, but we ourselves were satisfied that it was ready for a beta test.
After testing, we sent the monitor to a local mining company to have it installed on a working truck. We also sent a harness system with leads long enough to get to the sensors located around the truck. The company called us after they got the monitor mounted on the dash and all the sensors wired to the harness, so a visit was scheduled to test the monitor on a running truck.
I need to stop at this point to describe the truck. It was a 175-ton dump truck. There are bigger trucks now, but it was very large for the time. Picture tires 10 feet high and a 1600 HP diesel/electric generator system powering electric motors turning each wheel. The driver’s cab was about 18 feet off the ground and was reached using an attached ladder. The driver and the two of us climbed this ladder to begin the test.
To add to the pressure, there were a dozen or so managers and workers on the ground watching the tests. The mining company managers gave the go-ahead to begin. The driver started the truck (quite a roar); the monitor fired up, and the LCD began showing the status of the monitored points… great!
After a few seconds, the truck shut down… not great. We looked at each other. A few seconds later, the truck roared alive again, monitor working; a couple of seconds after that, the truck shut down again; a few seconds later, it restarted, and so on, and so on.
After a half dozen of these cycles, we told the driver to shut the truck down. We couldn’t tie up the million-dollar truck any longer, so we could not do any more investigation. We packed up our equipment and left with our heads down.
Back at the shop, we talked through what had happened. We concluded that the monitor’s micro had been disrupted by an unknown transient. The watchdog discovered the code running amok and tripped the shutdown relay; it then rebooted the micro, which reset the relay and allowed the truck to restart itself.
One of the major design issues was that some sensors required tens of feet of wire and were unshielded single leads (most sensors used chassis ground). These single wires (or should I call them antennas) could have been close to various relays and electric actuators on the truck, or worse yet, near the cabling used for the generator-to-motor system. Also, the watchdog, which did discover the issue, did not fulfill its function—it allowed the truck to restart.
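With the benefit of hindsight, one way to keep a watchdog recovery from quietly un-tripping a shutdown relay is to latch the fault in storage that survives the reset and to re-assert the relay state first thing at boot. The sketch below is only a retrospective illustration of that idea, not what we built; the persistence mechanism and helper names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

/* Retrospective illustration only. Idea: latch the shutdown decision in
 * storage that survives a watchdog reset (battery-backed RAM, EEPROM, or a
 * "noinit" RAM section) and re-assert it at every boot. Names are hypothetical. */
static bool fault_latched;   /* stand-in for genuinely persistent storage */

static void relay_set(bool shutdown)
{
    printf("shutdown relay: %s\n", shutdown ? "TRIPPED" : "RUN");
}

static bool persistent_read(void)    { return fault_latched; }
static void persistent_write(bool v) { fault_latched = v; }

/* Called at every boot, including boots caused by the watchdog. */
void boot_init(void)
{
    relay_set(persistent_read());  /* a watchdog reboot cannot quietly re-enable the truck */
}

/* Called when the monitor detects an out-of-spec condition. */
void monitor_fault_detected(void)
{
    persistent_write(true);
    relay_set(true);
}

/* Only a deliberate operator action clears the latch. */
void operator_reset(void)
{
    persistent_write(false);
    relay_set(false);
}

int main(void)
{
    boot_init();                /* normal power-up: relay in RUN             */
    monitor_fault_detected();   /* fault: relay trips and the latch is set   */
    boot_init();                /* watchdog reboot: relay stays TRIPPED      */
    operator_reset();
    return 0;
}
```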
This is where “Tales from the Cube” articles tell us how they fixed the issue by adding a larger resistor, fixing a bad solder joint, or reworking a reversed diode. In this tale, there is no happy ending. The boss didn’t want to continue with the project, and I’m sure the customer was not impressed. The project was cancelled. So why did I write this up?
I thought it was a good example of what can happen on engineering projects: sometimes they fail (moving from the lab to the field often exposes design issues), and sometimes you don’t get a chance to fix the design. Young engineers should understand this and not be disenchanted when it happens. Don’t let it get you down. Remember, we learn a lot from failure.
Shortly after this project, we got the opportunity to design a full, micro-based dashboard for a large articulated truck. One of the things we designed was a fiber-optic data-transfer system to the back portion of the truck, which minimized the length of the unshielded sensor wires that had acted as antennas for transients. In that design, the system worked flawlessly.
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- When a ring isn’t really a ring
- In the days of old, when engineers were bold
- Software sings the cold-weather blues
- Going against the grain dust
The post Watchdog versus the truck appeared first on EDN.
PWM nonlinearity that software can’t fix

There’s been interest recently here in the land of Design Ideas (DIs) in a family of simple interface circuits for pulse width modulation (PWM) control of generic voltage regulators (both linear and switching). Members of the family rely on the regulator’s internal voltage reference and a discrete FET connected in series with the regulator’s programming voltage divider.
PWM uses the FET as a switch to modulate the bottom resistor (R1) of the divider, so that the 0 to 100% PWM duty factor (DF) varies the time-averaged effective conductance of R1 from 0 to 100% of its nominal value. This variation programs the regulator output from Vo = Vs (its feedback pin reference voltage) at DF = 0 to Vo = Vs(R2/R1 + 1) at DF = 100%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Some of these circuits establish a linear functionality between DF and Vo. Figure 1 is an example of that genre as described in “PWM buck regulator interface generalized design equations.”
Figure 1 PWM programs Vo linearly where Vo = Vs(R2/(R1/DF) + 1).
For others, like Figure 2’s concept designed by frequent contributor Christopher Paul and explained in “Improve PWM controller-induced ripple in voltage regulators”…it’s nonlinear…

Figure 2 PWM programs Vo nonlinearly where Vo = Vs(R2/(R1a/DF + R1b + R1c) + 1).
Note that for clarity, Figure 2 does not include many exciting details of Paul’s innovative design. See his article at the link for the whole story.
The nonlinearity problem
However, to explore the implications of Figure 2’s nonlinearity a bit further, consider the component values in the example circuit provided in Paul’s DI:
R1a = 2490 Ω
R1b = 2490 Ω
R1c = 4990 Ω
Vs = 0.800 V
R2 = 53600 Ω
Which, if we assume 8-bit PWM resolution, provides the response curves shown in Figure 3.

Figure 3 The 8-bit PWM setting versus DF = X/255. The left axis (blue curve) is Vo = 0.8(53600/(2490/(X/255) + 7480) + 1). The right axis (red curve) is Vo volts increment per PWM least significant bit (LSBit) increment.
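For readers who want to reproduce these curves numerically, a minimal C sketch of the calculation is shown below. It simply sweeps the 8-bit code X using the component values listed above; swapping in the resistor values introduced later in this article reproduces Figure 4 as well.

```c
#include <stdio.h>

/* Reproduce Figure 3: Vo and the per-LSBit increment vs. 8-bit PWM code X,
 * using Vs = 0.8 V, R2 = 53.6k, R1a = 2490, R1b + R1c = 7480 (ohms).
 * Swap in R1A = 7960 and R1BC = 2000 to get Figure 4's curves. */
#define VS    0.8
#define R2    53600.0
#define R1A   2490.0
#define R1BC  7480.0

static double vo(int x)
{
    double df = x / 255.0;
    if (x == 0) return VS;                          /* FET never conducts */
    return VS * (R2 / (R1A / df + R1BC) + 1.0);
}

int main(void)
{
    for (int x = 1; x <= 255; x++)
        printf("%3d  Vo = %6.3f V   step = %6.1f mV\n",
               x, vo(x), (vo(x) - vo(x - 1)) * 1000.0);
    return 0;
}
```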
Paul says of this nonlinear response: “Although the output voltage is no longer a linear function of the PWM duty cycle, a simple software-based lookup table renders this a mere inconvenience. (Yup, ‘we can fix it in software!’)” Of course, he’s absolutely right: for any chosen Vo, a corresponding DF can be easily calculated and stored in a small (256-entry) lookup table.
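A minimal sketch of such a table might look like the following: for each of 256 evenly spaced target voltages across the 0.8-V to 5.1-V span, it inverts the Figure 2 expression for DF and rounds to the nearest 8-bit code (component values as above). That rounding step is exactly where the caveat discussed next comes in.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define VS    0.8
#define R2    53600.0
#define R1A   2490.0
#define R1BC  7480.0
#define VOMAX (VS * (R2 / (R1A + R1BC) + 1.0))   /* about 5.10 V at DF = 1 */

/* Invert Vo = Vs*(R2/(R1a/DF + R1b + R1c) + 1) to get the duty factor. */
static double df_for(double vo)
{
    if (vo <= VS) return 0.0;
    return R1A / (R2 / (vo / VS - 1.0) - R1BC);
}

int main(void)
{
    uint8_t lut[256];
    for (int i = 0; i < 256; i++) {
        double target = VS + (VOMAX - VS) * i / 255.0;    /* desired Vo       */
        double df = df_for(target);
        if (df > 1.0) df = 1.0;
        lut[i] = (uint8_t)lround(df * 255.0);             /* nearest PWM code */
    }
    printf("entry 128: code %u\n", (unsigned)lut[128]);   /* spot check       */
    return 0;
}
```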
However, translating from the computed DF to an integer 8-bit PWM code is a different matter. Figure 3’s increment-per-LSBit red curve provides an important caveat to Paul’s otherwise accurate statement.
If the conversion from 8-bit 0 to 255 code to the 0.8 V to 5.1 V, or 4.3V Vo span, were linear, then each LSBit increment would bump Vo by a constant 15.8 mV (= 4.3 V/256). But it isn’t.
And, as Figure 3’s red curve shows, due to the strong nonlinearity of the conversion, the 8-bit resolution criterion is exceeded for all PWM codes < 75 and Vo < 3.77 V = 74% of full scale.
And it gets worse: For Vo values down near Vs = 0.8 V, the LSBit increment soars to 67 mV (= 4.3 V/64). This, therefore, equates to a resolution of not 8 bits, but barely 6.
The fix
Unfortunately, there’s very little any software fix can do about that, which might make the nonlinearity, for some applications, rather more than just an “inconvenience.” So what could fix it?
The nonlinearity basically arises from the fact that only a fraction (R1a) of the total R1abc resistance is modulated by PWM. As the PWM DF changes, that fraction of the effective resistance changes, which in turn changes the rate of change of Vo versus DF. In fact, it changes it by quite a lot.
Getting to specifics, in the example from Paul’s DI, the chosen values make the modulated resistance R1a only 25% of the total R1 resistance at DF = 100%, with this proportion rising toward 100% as DF approaches 0%. That is obviously a big change, and it is concentrated toward low DF.
A clue to a possible (at least partial) fix is found back in the observation that the nonlinearity and resolution loss originally arose from the fact that only a small fraction (25% R1a) of the total R1abc resistance is modulated by PWM. So, perhaps a bigger R1a fraction of R1abc could recover some of the lost resolution.
As an experiment, I changed Paul’s R1 resistor values to the following.
R1a = 7960 Ω
R1b = 1000 Ω
R1c = 1000 Ω
This makes R1a now 80% of R1abc instead of only 25%. Figure 4 illustrates the effect on the response curves. 
Figure 4 The impact of making R1a 80% of R1abc. The left axis (blue curve) is Vo = 0.8(53600/(7960/(X/255) + 2000) + 1). The right axis (red curve) is Vo volts increment per PWM LSBit increment.
Figure 4’s blue Vo-versus-PWM curve is obviously still nonlinear, but significantly less so. Perhaps the more important improvement, though, is to the red curve: where Figure 3’s resolution eroded to 67 mV per PWM LSBit at the left end of the curve (barely 6 bits), Figure 4 maxes out at 21 mV, or 7.7 bits.
Is this a “fix?” Well, obviously, 7.7 bits is better than 6 bits, but it’s still not 8 bits, so resolution recovery isn’t perfect. Also, my arbitrary shuffling of R1 ratios is almost certain to adversely impact the spectacular ripple attenuation cited in Christopher Paul’s original article. Mid-frequency loop gain may also suffer from the heavier loading on C2 and R2 imposed by the reduced R1c value. This could lead to a possible deterioration of the transient response and noise rejection. Perhaps C2 could be increased to moderate that effect.
Still, it would be fair to call it a start at a fix for nonlinearity that lay beyond the reach of software.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Improve PWM controller-induced ripple in voltage regulators
- PWM buck regulator interface generalized design equations
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Brute force mitigation of PWM Vdd and ground “saturation” errors
The post PWM nonlinearity that software can’t fix appeared first on EDN.



