EDN Network

Voice of the Engineer

Powerline module enables EV charger data links

Thu, 06/05/2025 - 23:09

Comtrend’s PM-1540 powerline data module uses MaxLinear’s G.hn (data-over-powerline) chips to support backend communication in EV charging stations. It transmits data from power meters over existing electrical wiring, eliminating the need for dedicated communication cables, and can also extend connectivity to backend systems in data centers or smart parking environments.

By leveraging existing electrical wiring, the PM-1540 delivers lower latency, higher speeds, and more stable performance than conventional methods. It enables real-time connectivity while reducing costs compared to LAN, Wi-Fi, or 4G systems. The module supports up to 250 nodes within the same powerline domain and transmits signals over distances up to 700 meters, with up to 16 levels of signal repetition for extended reach.

MaxLinear’s G.hn baseband processors and analog front-end chipsets provide reliable, low-latency connectivity over existing wiring, delivering physical data rates up to 2 Gbps with full ITU compliance. Their support for Quality of Service (QoS) and broad media compatibility—including powerline—makes them well-suited for EV charging infrastructure, enabling seamless interoperability and cost-effective deployment.

For detailed information on Comtrend’s PM-1540 G.hn powerline module, click here. An overview of MaxLinear’s G.hn solutions can be found here.

A quick and practical view of USB Power Delivery (USB-PD) design

Thu, 06/05/2025 - 10:52

USB Power Delivery (USB-PD) now offers faster, more efficient, and more versatile power handling. It’s an exciting advancement that significantly enhances the capabilities of USB connections.

This mechanism uses the USB configuration channel (CC) to allow a device to request a specific voltage. While this might seem complex at first, it’s pretty easy to utilize in practice.

Figure 1 The module has several jumpers to set the DC output voltage at multiple levels. Source: Author

What makes it easy nowadays is that we can buy compact USB-PD Trigger/Decoy modules that do the complicated background tasks for us (Figure 1). You can see such a module has a number of jumpers to set the DC output voltage to 5 V, 9 V, 12 V, 15 V, or 20 V.

This module acts as a trigger or decoy to request specific power profiles from USB-PD power sources such as USB-C chargers, power banks, and adapters. So, with this module, you can trigger USB-PD protocols and thus, for example, charge your laptop via a PD-capable USB-C power supply.

Note at this point that a USB-PD Trigger, sometimes called a USB-PD Decoy, is a small but clever circuit that handles the USB-PD negotiation and simply outputs a predefined DC voltage.

Some USB-PD Trigger/Decoy modules are adjustable with a selector switch, or cycle among voltages with a pushbutton press, while others deliver a fixed voltage, or will have solder jumpers (or solder pads to install a fixed resistor) to select an output voltage. The output connection points on these modules are typically just two bare solder pads, or small screw terminals in certain cases (Figure 2).

Figure 2 The output connection points are shown on the modules. Source: Author

For just a few bucks each, these small, slender USB-PD Trigger/Decoy modules are useful to have in your tool chest, both for individual projects and for use in a pinch. In my view, for most applications, the fixed-voltage type is preferable, as it prevents accidental slips that could destroy the powered device.

I recently bought a set of these fixed-voltage modules. As you can see, the core of these single-chip modules is the IP2721, a USB Type-C physical-layer PD protocol IC for USB Type-C input interfaces.

Figure 3 The IP2721 is a USB Type-C PD protocol IC for USB input ports that supports USB Type-C/PD2.0/PD3.0 protocols. Source: Author

The USB Type-C device plug-in and plug-out process is automatically detected via the CC1/CC2 pins. The chip has an integrated power delivery protocol analyzer that reads the source’s voltage capabilities and requests the matching voltage.
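
To make that negotiation concrete, here is a minimal C sketch of the capability scan a trigger IC performs: walk the source’s 32-bit Power Data Objects (PDOs) and pick the fixed-supply entry matching the jumper-selected voltage. The field layout follows the USB-PD specification; the function name and the sample capabilities are hypothetical, not IP2721 firmware.

```c
#include <stdint.h>

/* Fixed-supply PDO fields per the USB-PD spec: type in bits 31:30 (0 = fixed
 * supply), voltage in 50-mV units in bits 19:10, max current in 10-mA units
 * in bits 9:0. */
#define PDO_TYPE(pdo)     (((pdo) >> 30) & 0x3u)
#define PDO_FIXED_MV(pdo) ((((pdo) >> 10) & 0x3FFu) * 50u)
#define PDO_FIXED_MA(pdo) (((pdo) & 0x3FFu) * 10u)

/* Return the 1-based object position of a fixed PDO at target_mv, or 0 if the
 * source doesn't offer it (PD Request Data Objects index PDOs from 1). */
static int find_fixed_pdo(const uint32_t *pdos, int count, uint32_t target_mv)
{
    for (int i = 0; i < count; i++)
        if (PDO_TYPE(pdos[i]) == 0 && PDO_FIXED_MV(pdos[i]) == target_mv)
            return i + 1;
    return 0; /* no match: a trigger module would stay at the 5-V default */
}

int main(void)
{
    /* Hypothetical source capabilities: 5 V at 3 A and 9 V at 3 A. */
    const uint32_t caps[] = { 0x0001912Cu, 0x0002D12Cu };
    return find_fixed_pdo(caps, 2, 9000u) ? 0 : 1; /* finds the 9-V PDO */
}
```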

Figure 4 The schematics shows a design use case built around the USB Type-C PD protocol IC. Source: Injoinic Technology

Surprisingly, the newly arrived module—designed for a single, fixed-voltage output—features the IP2721 controller in a bare minimum configuration without the power-pass element.

Figure 5 The module features the IP2721 controller in a bare minimum configuration. Source: Author

Hence, the output voltage will be whatever VBUS is, which could be 5 V during initial enumeration, or remain at 5 V if negotiation fails. Luckily, for many applications, this will not be much of an issue. But on paper, to comply with the USB power delivery specifications, the device is supposed to have a high-side power MOSFET as the power-pass element to disconnect the load until a suitable power contract has been negotiated.
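
For illustration, here is a hedged sketch of the gating the spec calls for: the high-side power-pass MOSFET conducts only once an explicit contract exists, so the load never sees the 5-V default or the aftermath of a failed negotiation. The state names and function are hypothetical, not from any PD stack.

```c
#include <stdbool.h>

/* Hypothetical PD sink states; PD_CONTRACT corresponds to the source having
 * accepted a request and signaled PS_RDY. */
typedef enum { PD_IDLE, PD_NEGOTIATING, PD_CONTRACT } pd_state_t;

/* Enable the high-side power-pass MOSFET only after a contract is in place. */
bool load_switch_enabled(pd_state_t state)
{
    return state == PD_CONTRACT;
}
```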

For this writing, I needed to test the output of my module. So, below you can see a little snap taken during the first test of my IP2721 USB-PD trigger 9-V module; nothing but the process of testing the module with a compatible power source and a DC voltmeter.

Figure 6 A DC voltmeter shows the output of the IP2721-based USB-PD module. Source: Author

Here are some final notes on power delivery.

  • USB-PD is a convenient way of replacing power supply modules in many electronics projects and systems. Although USB-PD demands specialized controller chips to be utilized properly, easily available single-purpose USB-PD Trigger/Decoy modules can be used in standalone systems to provide USB-PD functionality.
  • Interestingly, legacy USB can only provide a 5-V power supply, but USB-PD defines fixed voltages such as 9 V, 15 V, and 20 V in addition to 5 V.
  • Until recently, the USB-PD specification allowed for up to 100 W (5 A@20 V) of power, called Standard Power Range (SPR), to flow in both directions. The latest USB-PD specification increases the power range to 240 W (5 A@48 V), called Extended Power Range (EPR), through a USB-C cable. So, if a device supports EPR expansion commands, it can use 28 V, 36 V, and 48 V.
  • Since the most recent USB-PD specification allows up to 240 W of power delivery through a single cable, it’s possible to provide ample power over USB to multiple circuit segments or devices simultaneously.
  • Electronic marking is needed in a Type-C cable when VBUS current of more than 3 A is required. An electronically marked (E-Marked) cable assembly (EMCA) is a USB Type-C cable that uses a marker chip to provide the cable’s characteristics to the Downstream Facing Port (DFP). It’s accomplished by embedding a USB PD controller chip into the plug at one or both ends of the cable.
  • The USB-PD Programmable Power Supply (PPS) was introduced with USB PD3.0. With PPS, devices can gradually adjust the current (50-mA steps) and voltage (20-mV steps) over a range of roughly 3.3 V to 21 V. PPS can directly charge a battery, bypassing the battery charger in a connected device.
  • Adjustable Voltage Supply (AVS) was introduced with USB PD3.1 and extended with PD3.2, allowing it to work within SPR below 100 W, down to a minimum of 9 V. AVS is similar to PPS in function, but it does not support current-limit operation, and the output voltage is adjusted in 100-mV steps in the range from 9 V to 48 V. The sketch after this list illustrates these step sizes.
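
Here is a small C illustration of those step sizes: a requested level is quantized to the spec’s granularity (PPS: 20-mV voltage and 50-mA current steps; AVS: 100-mV voltage steps). The helper is hypothetical, for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

/* Quantize a requested level (mV or mA) down to the given step size. */
static uint32_t quantize(uint32_t value, uint32_t step)
{
    return (value / step) * step;
}

int main(void)
{
    printf("PPS 7.35 V  -> %u mV\n", (unsigned)quantize(7350u, 20u));  /* 7340 mV */
    printf("PPS 2.125 A -> %u mA\n", (unsigned)quantize(2125u, 50u));  /* 2100 mA */
    printf("AVS 9.05 V  -> %u mV\n", (unsigned)quantize(9050u, 100u)); /* 9000 mV */
    return 0;
}
```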

Note that USB-PD, combined with USB-C, takes full advantage of the power-supply and multi-protocol functions of the USB-C connector. Implementing USB-C for portable battery-powered devices enables them to both charge from the USB-C port and supply power to a connected device using the same port.

So, devices using a single or multicell battery charger can now be paired with a USB-C or USB PD controller, which enables the applications to source and sink power from the USB-C port. Below is an application circuit based on the MP2722, a USB Type-C 1.3-compliant, highly integrated, 5-A, switch-mode battery management device for a single-cell Li-ion or Li-polymer battery.

Figure 7 The application circuit is built around a 5 A, single-cell buck charger with integrated USB Type-C detection. Source: Monolithic Power Systems (MPS)

In the final analysis, it’s important to recall that USB-PD is not just about power-delivery negotiations. Feel free to comment if you can help add to this post or point out issues and solutions you have found.

T. K. Hareendran is a technical author, hardware beta tester, and product reviewer.

10-octave linear-in-pitch VCO with buffered tri-wave output

Wed, 06/04/2025 - 14:03

Frequent contributor Nick Cornford recently assembled an ensemble of cool circuit designs incorporating linear-in-pitch VCOs (LPVCOs). 

These elegant and innovative designs (standard fare for Nick’s contributions) were perfectly adequate for their intended applications. Nevertheless, it got me wondering how difficult it would be to implement an LPVCO with a range covering the full 10-octave audio spectrum, from 20 Hz to 20 kHz. I even decided to try for extra credit by going for a tri-wave output suitable for direct drive of one of Nick’s famous squish-diode sine converters. Figure 1 shows the result.

Figure 1 An LPVCO with 10-octave (20 Hz to 20 kHz) tri-wave output comprises antilog pair Q1 and Q2, two-way current mirror Q3 and Q4, integrator A1b, comparator A1a, and buffer A1c. Resistors R1 and R2 are precision types, and T1 is a Vishay NTCSC0201E3103FLHT (inhale!).

Vin is scaled by the tempco-compensating voltage divider, ((R1+T1)/R2 + 1) = 28:1, and applied to the Q1 and Q2 antilog pair, where Q1 level shifts and further temperature compensates it. Then, with the help of buffer A1c, it’s antilogged and inverted by Q2 to produce Ic2 = 2^(2Vin) µA = 1 µA to 1 mA for Vin = 0 to 5 V.

From there, it goes to the two-way current mirror: Q3 and Q4. A description of how the TWCM works can be found here in “A two-way mirror—current mirror that is.”

The TWCM passes Ic2 through to the integrator A1b if comparator A1a’s output is zero, and mirrors (inverts) it if A1a’s output is high. Thus, A1b ramps up if A1a’s output is at 0 V, and down if it’s at 5 V, resulting in sustained oscillation.

The C1 timing ramp has a duration in each direction ranging from 25 ms (for Vin = 0) to 25 µs (for Vin = 5 V). The triangular cycle will therefore repeat at Fosc = 2^(2Vin) µA / (25 nC) / 2 = 20·2^(2Vin) Hz.
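
As a quick check of that tuning law, here is a small C sketch (my own verification aid, not part of the design) that tabulates Fosc over the control range; note the one octave per 0.5 V of Vin.

```c
#include <math.h>
#include <stdio.h>

/* Tuning law from the text: Ic2 = 2^(2*Vin) uA into a 25-nC ramp gives
 * Fosc = 20 * 2^(2*Vin) Hz. */
static double fosc_hz(double vin_volts)
{
    return 20.0 * pow(2.0, 2.0 * vin_volts);
}

int main(void)
{
    /* Prints 20 Hz (Vin = 0), 640 Hz (2.5 V), and 20480 Hz (5 V):
     * the promised 10-octave span, centered at 640 Hz. */
    for (double v = 0.0; v <= 5.0; v += 2.5)
        printf("Vin = %.1f V -> Fosc = %.0f Hz\n", v, fosc_hz(v));
    return 0;
}
```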

So, there’s the goal achieved: a tri-wave LPVCO with an output span of 20 Hz to 20 kHz, geometrically centered at 640 Hz, and it wasn’t so terribly messy to get there after all!

My thanks go to Nick Cornford for introducing the LPVCO to Design Ideas (DIs), and to Christopher Paul and Andy I for their highly helpful simulations and constructive criticisms of my halting steps to temperature-compensating antilogging circuits. I also thank editor Aalyia Shaukat for her DI environment that makes such teamwork possible for a gang of opinionated engineers, and mostly accomplished without actual bloodshed! 

Mostly.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Seeing inside entry-level audiophile desire: Monoprice’s Liquid Spark Headphone Amplifier

Wed, 06/04/2025 - 13:17
My audio gear

Back in July 2019, I told you about the combo of the Massdrop x Grace Design Standard DAC:

and its companion Massdrop Objective 2 Headphone Amp: Desktop Edition (Massdrop is now just Drop, by the way, and is owned by Corsair):

that I’d recently acquired for listening to computer-sourced audio over headphones in a quality-upgraded fashion beyond just the DAC (and amp-fed headphone jack) built into my Mac. That same two-device stack:

remains on my desk and to my right to this very day, albeit subsequently joined by an even higher quality balanced audio stack to my left:

combining a Topping D10 Balanced DAC:

and a Drop + THX AAA 789 Linear Headphone Amplifier:

The Monolith

But I digress. Today’s dissection showcase is of none of these. Instead, I’ll be analyzing the guts of the Monolith by Monoprice Liquid Spark Headphone Amplifier by Alex Cavalli:

It’s comparable in size to the Massdrop Objective 2 Headphone Amp: Desktop Edition I mentioned at the beginning of the writeup:

And what about a companion digital-to-analog converter? That’s a story all by itself. It originally had one, the unsurprisingly named “Monolith by Monoprice Liquid Spark DAC by Alex Cavalli”:

based on an Asahi Kasei Microdevices (AKM) Semiconductor DAC chip. However, in October 2020, audio equipment suppliers got a double whammy: on top of COVID’s general throttling of the tech economy, a massive three-day fire at AKM’s semiconductor facility in Japan clobbered its output. Some AKM customers, such as Fiio and Schiit (the latter, for example, redesigning and renaming its Modi 3 DAC as the Modi 3e, with “e” short for ESS Technology), redesigned their systems to use chips from other suppliers instead. Others, like Monoprice, threw in the towel. That said, since the Monoprice Liquid Spark headphone amp has conventional RCA (unbalanced) analog line inputs, you can use it with any standard DAC.

A bit of background info

A few definitions before proceeding with the dissection. “Monolith” is Monoprice’s audio products brand. Alex Cavalli is a now-retired, well-known audio amplifier designer who, in addition to selling both self-branded Cavalli Audio equipment (now repaired by Avenson Audio since his retirement) and gear branded by Monoprice (obviously) and Massdrop/Drop, also published complete design documentation sets for others to use in building their own gear, DIY style. Alex is a contemporary of another audio amplifier “wizard” whose name may be more familiar to you: Nelson Pass.

And finally, why do I categorize it as being for “entry-level audiophiles”? The feature set, for one thing. I’ve already noted that it doesn’t offer balanced inputs and outputs, for example (the magnitude of benefit of which is debatable, anyway). That said, unlike the Massdrop Objective 2 Headphone Amp: Desktop Edition, it does include preamp outputs, the benefit of which I’ll elaborate on shortly. And its performance is nothing to sneeze at:

And the price, although that’s an imperfect-at-best barometer of quality. That said, Schiit’s current high-end solid-state Mjolnir 3 headphone amp (the company also sells tube-based products) goes for $1,199-$1,299, depending on color. Conversely, when the Liquid Spark Headphone Amplifier was introduced in 2018 (a year after Alex Cavalli announced his retirement, interestingly), Monoprice sold it for $99. Its list price is now $129. But (in explaining how I first came across it) I’ve long subscribed to Monoprice’s periodic promotional emails, and back in March of last year, I stumbled across a smokin’ deal: $32.49 each, plus a further 25%-off discount. I bought two at $50.29 total (with tax), one for a buddy’s birthday, the other for me.

What I’ll be taking apart today is neither of these devices, however. Last October, while searching for a Liquid Spark DAC mate to my headphone amplifier, I stumbled across “as-is” Liquid Spark amps on eBay for $29.99 plus tax and $9.99 for shipping. The seller notes said:

Pulled from a professional working environment. Tested for power, no further testing was done. Due to lack of knowledge and having the proper equipment to fully test these units, we are selling AS-IS for parts/not working. Unit shows some signs of scuffs/scratches all around the unit. Please refer to the photos for more detailed information on the cosmetic condition.

As I’ve mentioned (and exemplified) many times before, such “for parts only” listings are perfect for teardown purposes. I ended up getting one for $20.99 (plus the aforementioned sales tax and shipping). I’m not sure how the seller “tested for power”, since it didn’t come with the requisite “wall wart”. And as you may have already noticed from the back panel “stock photo” shown earlier, it’s an uncommon one, outputting 36 V at 1.25 A min (that said, at least it’s got a DC output; Schiit’s are all just AC transformers).

Overview

I have no idea if this one actually works, and I’m not going to chance zapping my personal amplifier’s functional PSU to find out. That said, here it is in all its cosmetically imperfect glory, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the unit has dimensions of 4.6″ x 3.7″ x 1.5″/117 x 94 x 38 mm and weighs 9.6 oz./271g):

Left-to-right are the power switch, a ¼” TRS headphone jack, a 3-or-6 dB gain switch (to accommodate headphones of varying impedance), and a rotary volume control knob. Now, about that back panel (following up on my earlier “teaser” comment about the RCA output set):

Most headphone amps in this price range have unbalanced RCA inputs, but their only output is an unbalanced TRS headphone jack (of varying diameter) up front. But this one also has a pair of unbalanced RCA outputs. And they’re not just simple line level pass-throughs, either; they route through the internal preamplifier first, although they (obviously) then bypass the headphone amplifier circuitry. Why’s this nice? Well, you can connect them to an external power amplifier to drive a set of speakers from the same audio source. And, because the preamp is still in the loop, the headphone amp’s volume control manages speaker volume, too.

The one thing I don’t know (and haven’t tested with my unit) yet, and the user manual doesn’t clarify, is whether the two output sets operate simultaneously or (as is the case with Schiit’s device equivalents) in a one-or-the-other fashion. Said another way, when you plug in some “cans”, does this also mute the sound that would otherwise come out of the connected speakers?

Onward: the left and right sides:

The top:

and bottom:

complete with a label closeup:

I admittedly enjoyed fondling this device, both in an absolute sense and relative to the scores of predominantly plastic-based products I’ve taken apart in the past (and will undoubtedly continue to take apart in the future). Its heft…the solidity of its all-metal construction…very nice!

Teardown time

Speaking of that all-metal construction, let’s start by getting the front panel off, starting with the Torx head screws on both ends:

Part of the way there…

Let’s see what’s behind that volume knob, which was snugly attached but pulled off after a bit of muscle-powered coercion:

Unscrew the nut, remove the washer:

and this part of the total task is successfully completed:

Check out that gloriously thick and otherwise solid PCB!

Now for the back panel. Six screws there and another one below:

And the panel-still-attached PCB slides out the rear:

Voila!

Jumping forward to the future for a moment, I went back and perused the product page after the teardown-in-progress and found this, which I hadn’t noticed earlier:

It wasn’t surprising; it was, conversely, validating. Truth be told, even before I took this amp apart, I’d suspected I’d encounter a discretes- (vs. op-amp-) based design. And when I saw the horde of tiny ICs scattered all over the top of the PCB, my in-advance hunch was confirmed.

Before diving in, let’s first flip the PCB over and take a look at the other side:

Not as much to see here, aside from this closeup:

The two largest ICs shown, which curiously don’t have their own PCB-marking notations, unlike the resistors and capacitors surrounding them (perhaps the marks are underneath the chip packages), are labeled as follows, along with what I think is an STMicroelectronics logo:

071I
GZ229

Any idea what they are, readers? Back to the front for a close-up of the most interesting section:

The largest packaged parts on this side are a mix of what I believe to be QJ423 p-channel and QJ444 n-channel MOSFETs, both curiously identified in online specs as intended for automotive applications. And look, Alex even brands his PCBs!

I’ll close with a few side views of the solid-construction circuit board:

And that’s all I’ve got for you today. I’ll hold onto the disassembled device for a while in case you have any specific questions on the markings on some of the other, tinier ICs and/or passives. And, as always, I welcome your thoughts in the comments. Bonus points for anyone who is able to dig up an Alex-authored DIY schematic that corresponds to this design!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

New AI networking switch breaks the 100-Tbps barrier

Wed, 06/04/2025 - 12:55

The need for unified networks serving artificial intelligence (AI) training and inference is reaching an unprecedented scale. Broadcom’s answer: The Tomahawk 6 switch delivers 102.4 Tbps of switching capacity in a single chip, doubling the bandwidth of any Ethernet switch currently available on the market.

AI clusters—scaling from tens to thousands of accelerators—are turning the network into a critical bottleneck with bandwidth and latency as major limitations. Tomahawk 6, boasting 100G/200G SerDes and co-packaged optics (CPO) technology, breaks the 100-Tbps barrier while facilitating a flexible path to the next wave of AI infrastructure.

Figure 1 Tomahawk 6’s two-tier network structure, instead of a three-tier network, leads to fewer optics, lower latency, and higher reliability. Source: Broadcom

Ram Velaga, senior VP and GM of Core Switching Group at Broadcom, calls Tomahawk 6 not just an upgrade but a breakthrough. “It marks a turning point in AI infrastructure design, combining the highest bandwidth, power efficiency, and adaptive routing features for scale-up and scale-out networks into one platform.”

First, the Tomahawk 6 family of switches includes an option for 1,024 100G SerDes on a single chip, allowing designers to deploy AI clusters with extended copper reach. Moreover, Broadcom’s 200G SerDes provides the longest reach for passive copper interconnect, facilitating high-efficiency, low-latency system design with greater reliability, and lower total cost of ownership (TCO).

Second, Tomahawk 6 is also available with co-packaged optics, which lowers power and latency while reducing link flaps. Tomahawk 6’s CPO solution is built upon Broadcom’s CPO versions of Tomahawk 4 and Tomahawk 5.

Third, Tomahawk 6 incorporates advanced AI routing capabilities that encompass features like advanced telemetry, dynamic congestion control, rapid failure detection, and packet trimming. These features enable global load balancing and adaptive flow control while supporting modern AI workloads, including mixture-of-experts, fine-tuning, reinforcement learning, and reasoning models.

Figure 2 Cognitive Routing 2.0 in Tomahawk 6 features advanced telemetry, dynamic congestion control, rapid failure detection, and packet trimming. Source: Broadcom

The capabilities outlined above provide essential advantages for hyperscale AI network operators. They also allow cloud operators to dynamically partition their XPU assets into the optimal configuration for different AI workloads. Broadcom claims that Tomahawk 6 meets all networking demands for emerging 100,000 to one million XPU clusters.

Figure 3 Tomahawk 6 can accommodate up to 512 XPUs in a scale-up cluster. Source: Broadcom

While Tomahawk 5 has proven itself in large GPU clusters, Tomahawk 6 takes it a step further in terms of bandwidth, SerDes speed and density, load balancing, and telemetry. Tomahawk 6, compliant with the Ultra Ethernet Consortium specification, also supports arbitrary network topologies, including scale-up, Clos, rail-only, rail-optimized, and torus.

The analog-centric timing world takes a digital turn

Tue, 06/03/2025 - 20:05

The analog-based timing semiconductor world, comprising crystals and phase-locked loops (PLLs), is facing a conundrum. While crystals provide higher performance at lower frequencies, PLLs accommodate higher frequencies with lower performance. An Irvine, California-based timing startup claims to have an answer to this conundrum. It digitally synthesizes timing signals using CMOS technology, thereby replacing legacy analog chains.

Read the full story at EDN’s sister publication, Planet Analog.

GMSL video link’s quest to become open automotive standard

Tue, 06/03/2025 - 16:00

The Gigabit Multimedia Serial Link (GMSL) technology of Analog Devices Inc. (ADI) is finally heading down the standardization path with the inception of the OpenGMSL Association, a non-profit entity joined by an automotive OEM, tier 1 suppliers, semiconductor companies, and several test and measurement firms.

GMSL—a SerDes technology for automotive applications like advanced driver assistance systems (ADAS), touchscreen infotainment, and in-vehicle connectivity—facilitates high-resolution video links while supporting data transfer speeds of up to 12 Gbps. ADI claims to have shipped more than 1 billion GMSL chips for automotive-grade platforms.

Figure 1 GMSL is a point-to-point serial link technology dedicated to video data transmission; it was originally designed for automotive camera and display applications. Source: ADI

OpenGMSL Association aims to turn this automotive SerDes technology into an open standard for in-vehicle connectivity. “As automotive architectures evolve to meet the growing demands of in-vehicle communication, networking and data transfer, it is critical that the industry has access to open global standards such as OpenGMSL to enable ecosystem-led innovation,” said Fred Jarrar, VP and GM of Power and ASIC Business Unit at indie Semiconductor, a member of OpenGMSL Association.

Among the test and measurement companies joining the OpenGMSL Association are Keysight Technologies, Rohde & Schwarz, and Teledyne LeCroy. These in-vehicle network test outfits will help OpenGMSL in facilitating the development and deployment of interoperable and reliable automotive systems through a standardized, open ecosystem for in-vehicle connectivity.

Hyundai Mobis, which has used the GMSL technology in the Korean OEM’s vehicles for many years, has also joined the initiative to standardize GMSL. Then, there is GlobalFoundries (GF), pitching its 22FDX, 12LP+ and 40LP process technologies for GMSL chips targeted at next-generation automotive applications.

Figure 2 OpenGMSL aims to turn SerDes transmission of video and/or high-speed data into an open standard across the automotive ecosystem. Source: ADI

Next-generation automotive platforms like ADAS heavily rely on high-quality video data to make critical, real-time decisions that improve driver safety and reduce accidents. Likewise, touchscreen infotainment systems demand high-speed and low-latency connectivity for seamless, immersive user experiences.

OpenGMSL aims to accelerate innovation across these automotive platforms by cultivating a standardized, open ecosystem for in-vehicle connectivity. ADI is betting that an open standard for video and/or high-speed data transmission built around its GMSL technology will bolster autonomous driving, ADAS, and infotainment applications, and its own standing in the automotive market.

Power amplifiers that oscillate—deliberately. Part 1: A simple start.

Tue, 06/03/2025 - 13:05

Editor’s Note: This DI is a two-part series.

In Part 1, Nick Cornford deliberately oscillates the TDA7052A audio power amplifier to produce a siren-like sound and, given the device’s distortion characteristics, a functional Wien bridge oscillator.

In Part 2, Cornford minimizes this distortion and adds amplitude control to the circuit.

When audio power amplifiers oscillate, the result is often smoke, perhaps with a well-cooked PCB and a side order of fried tweeter. This two-part Design Idea (DI) shows some interesting ways of (mis-)using a common power amp to produce deliberate oscillations of varying qualities.

That device is the TDA7052A, a neat 8-pin device with a high, voltage-controllable gain, capable of driving up to a watt or so into a bridge-tied load from its balanced outputs. The TDA7056A is a better-heatsinked (-heatsunk?) 5-W version. (That “A” on the part number is critical; the straight TDA7052 has slightly more gain, but no control over it.) The TDA7052B is an uprated device with a very similar spec, and the TDA7056B is the 5-W counterpart of that. But now the bad news: they are no longer manufactured. Some good news: they can easily be found online, and there is also a Taiwanese second source (or clone) from Unisonic Technologies Ltd.

A simple circuit’s siren song

For the best results, we’ll need to check out some things that don’t appear on the data sheets, but let’s cut straight to something more practical: a working circuit. Figure 1 shows how the balanced, anti-phase outputs help us build a simple oscillator based on the integrator-with-thresholds architecture.

Figure 1 A minimalist power oscillator, with typical waveforms.

This circuit has just three advantages: it’s very simple, reasonably efficient, and, with a connected speaker, very loud. Apart from those, it has problems. Because of the amp’s input loading (nominally 20k) and the variation of drive levels with different loads, it’s hard to calculate the frequency precisely. (The frequency-versus-R1 values shown are measured ones.) R2 is needed to reduce loading on the timing network, but must leave enough gain for steady operation. (A series capacitor here proved unnecessary, as the internally biased input pin is being over-driven.) Its efficiency is due to the amp’s output devices being run in saturation: with no extra heatsinking, the (DIL-8) package warms by ~15°C when driving into an 8 Ω speaker. The square wave produced is somewhat asymmetrical, though good enough for alarm use.

Figure 1 shows a 5-V supply. Raising that to 12 V made only one change to the performance: the output became very, very loud. And it drew around an amp with a 10 Ω load. And it could do with a heatsink. And a TDA7056A/B rather than a ’52.

The Vcon input on pin 4 is not used. Left open, it floats at ~1.14 V, giving the device a measured gain of around 25 dB. Taking it close to ground inhibits operation, so a bare-drain MOSFET hooked on here can give on/off control. Taking it higher gives full gain, with a shift in frequency. If that is not important (and, in this context, why should it be?), logic control through a 22k resistor works fine. When inhibited, the device still draws 8–10 mA.

Feeding Vcon with varying analog signals of up to a few tens of hertz can produce interesting siren effects because changes in gain affect the oscillation frequency. But for a proper siren, it would be better to generate everything inside a small micro and use an H-bridge of (less lossy) MOSFETs to drive the speaker with proper square waves. (We’ve all heard something like that on nearby streets, though hopefully not in our own.)

Fancy sound effects apart, any power amp with a suitable input structure, enough gain, and balanced (BTL) outputs should work well in this simplest of circuits.

Determining distortion

So much for simplicity and raw grunt. Now let’s take a look at some of the device’s subtleties and see how we can use those to good effect. Distortion will be critical, but the data sheet merely quotes 0.3 to 1% under load, which is scarcely hi-fi. If we remove the load, things look much healthier. Figure 2 shows the unloaded output spectrum when the input was driven from an ultra-low-distortion oscillator, at levels trimmed to give 0 dBu (2.83 V pk-pk) and -20 dBu at the output with a device gain fixed at around 25 dB (Vcon was left open, but decoupled).

Figure 2 The TDA7052A’s output spectra for high and low output levels, taken under ideal conditions and with no output load.

Further tests with various combinations of input level and device gain showed that distortion is least for the highest gains—or smallest gain-reductions—and lowest levels. With outputs less than ~300 mVpk–pk (~-18 dBu) and gains more than 10 dB, distortion is buried in the noise.

That’s unloaded. Put a 10 Ω resistive load across the outputs, and the result is Figure 3.

Figure 3 Similar spectra to Figure 2, but with a 10-Ω output load.

That looks like around -38 dB THD for each trace, compared with better than -60 and -70 dB for the unloaded cases. All this confirms that the distortion comes mainly from the output stages, and then only when they are loaded.

A working one-chip sine-wave oscillator

This means that we have a chance to build a one-chip Wien bridge audio oscillator, which could even drive a power load directly while still having lower distortion than the average loudspeaker. Let’s try adding a Wien frequency-selective network and a simple gain-control loop, which uses Zener diodes to sense and stabilize the operating level, as in Figure 4.

Figure 4 A simple gain control loop helps maintain a constant output amplitude in a basic Wien bridge oscillator.

The Wien network is R1 to R4 with C1 and C2. This has both minimum loss (~10 dB) and minimum phase shift (~0°) at f = 1/(2π·C2·(R2 + R4)), which gives the oscillation frequency when just enough positive feedback is added. When the amplitude is large enough, Zeners D1 and D2 start to conduct on the peaks, progressively turning Q1 on, thus pulling U1’s Vcon pin lower to reduce its gain enough to maintain clean oscillation.
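
To make the frequency expression concrete, here is a small C helper; the component values below are placeholders I chose for illustration, since Figure 4’s actual values aren’t reproduced here.

```c
#include <stdio.h>

/* Oscillation frequency of the Wien network, f = 1/(2*pi*C2*(R2 + R4)). */
static double wien_freq_hz(double r2_ohms, double r4_ohms, double c2_farads)
{
    const double pi = 3.141592653589793;
    return 1.0 / (2.0 * pi * c2_farads * (r2_ohms + r4_ohms));
}

int main(void)
{
    /* Example: R2 + R4 = 16 kohm total with C2 = 10 nF gives roughly 1 kHz. */
    printf("f = %.0f Hz\n", wien_freq_hz(8200.0, 7800.0, 10e-9));
    return 0;
}
```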

C3 smooths out the inevitable ripple and determines the control loop’s time-constant. R5 minimizes U1’s loading of the Wien network while C3 blocks DC, and R6 sets the output level. The unloaded spectra for outputs of 0 and -10 dBV are shown in Figure 5.

Figure 5 The spectra of Figure 4’s oscillator for 0 and -10 dBV outputs with no load.

—that has problems

While those spectra are half-decent, with THDs of around -45 and -60 dB (or ~0.1% distortion), they are only valid for a given temperature and with no extra load on the output. Increasing the temperature by 25°C halves the output amplitude—no surprise, given the tempcos of the diodes and the transistor. And those 3.3-V Zeners have very soft knees, especially at low operating currents, so they are better regarded as non-linear resistors than as sharp level-sensors.

Adding a 10-Ω resistor as a load—and tweaking R6 to readjust the levels—gives Figure 6.

Figure 6 Similar spectra to Figure 5’s but with the output loaded with 10 Ω.

THD is now around -30 dB, or 3%. Unimpressive, but comparable with many speakers’ distortions, and actually worse than the data-sheet figures for a loaded device.

So, we must conclude that while a one-chip sinusoidal oscillator based on this is doable, it isn’t very usable, and further tweaks won’t help much. We need better amplitude control, which means adding another chip, perhaps a dual op-amp, and that is what we will do in Part 2.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Basic design considerations for anti-tampering circuits

Mon, 06/02/2025 - 17:58

Tamper detection devices, commonly built around switches and sensors, employ several techniques according to design specifications and operating environments. T. K. Hareendran examines several anti-tampering device designs to educate and inform users on various aspects of tamper detection circuits. He also presents anti-tampering use cases built around switches and sensors.

Read the full article at EDN’s sister publication, Planet Analog.

The 2025 Google I/O conference: A deft AI pivot sustains the company’s relevance

Mon, 06/02/2025 - 14:49

The fundamental difference between Microsoft and Google’s dueling mid-May keynotes this year comes down to sizzle versus steak. And that isn’t just my opinion; I can even quantify the disparity that others apparently also ascertained. As noted in my recent coverage of Microsoft’s 2025 Build conference, the full keynote ran for a minute (and a few seconds) shy of 2 hours:

But The Verge was able to condense the essentials down to a 15-minute (and a few seconds) summary video, 1/8th the length of the original:

What about Google’s day-later alternative? It was only a couple of minutes shorter in total:

But this time, The Verge was only able to shrink it down to around 1/3 the original length, resulting in a 32-minute (and change) summary video:

Translation: nearly the same keynote duration, but much more “meat” in the Google keynote case. And that’s not even counting the 70-minute developer-tailored keynote that followed it:

That said, in fairness, I’ll point out that Google’s own summary video for the keynote was only 10 minutes long, so…🤷‍♂️

What did Google’s presenters cover in those 3-plus hours across the two keynotes, and more generally across the two-day event (and its virtual-event precursor)? Glad you asked. In the sections that follow, I’ll touch on what I thought were at least some of the high points. For more, check out Google’s summary blogs for the developer community and the public at large, along with the conference coverage summary pages from folks like 9to5Google, Engadget, The Verge, and Wired.

Android (and its variants)

Conceptually similar to what Microsoft had done, Google decided to release some of its news ahead of the main event. This time, though, it was one week prior, not two. And the focus this time was on software, not hardware. Specifically, Google discussed its upcoming Expressive Design revamp of the core Android UI and associated apps, along with planned added-and-enhanced features for the O/S and apps, and related evolutions of the Android variants tailored for smart watches (Wear OS), smart glasses and headsets (Android XR), vehicles (Android Auto), displays (Google TV), and any other O/S “spins” I might have overlooked at the moment. In the process, Google got the jump on Apple, who will reportedly announce a conceptually similar revamp for its various O/Ss in a couple of weeks (stay tuned for my coverage)!

I’ll talk more about Android XR and its associated hardware, as Google did at I/O itself, in a separate topic-focused section to come later in this piece.

Multimodal large language models

Gemini, as I’ve discussed in past years’ Google I/O reports and other writeups, is the company’s suite of proprietary deep learning models, all becoming increasingly multimodal in their supported data input-and-output diversity. There are currently three primary variants:

  • Pro: For coding and complex prompts
  • Flash: For fast performance on complex tasks, and
  • Flash-lite: For cost-efficient performance

Plus, there’s Gemma, a related set of models, this time open source, which, thanks to their comparatively low resource demands, are also useful for on-device inference with edge systems.

The latest v2.5 releases of Gemini Pro and Gemini Flash had both already been unveiled, but at I/O Google touted iterative updates to both, improving responsiveness, accuracy, and other metrics. Also unveiled, for the first time, was Gemma 3n, specifically tailored for mobile devices. And also newly announced was Gemini Live, which supports the real-time analysis and interpretation of (and response to) live audio and video feeds coming from a camera and microphone. If you’re thinking this sounds a lot like Project Astra, which I mentioned at the tail end of last year’s Google I/O coverage (albeit not by name)…well, you’d be spot-on.

AI integration into other Google products and services…including search

Just as Microsoft is doing with its operating system and applications, Google is not only developing user direct-access capabilities to Gemini and Gemma via dedicated apps and web interfaces, it’s also embedding this core AI intelligence into its other products, such as Gmail, various Workspace apps, and Google Drive.

The most essential augmentation, of course, is that of the Google Search engine. It was Google’s first product and remains a dominant source of revenue and profit for it and parent company Alphabet, by virtue of the various forms of paid advertising it associates with search results. You may have already noticed the “AI Overview” section that for a while now has appeared at the top of search results pages, containing a summary explanation of the searched-for topic along with links to the pages used to generate that explanation:

Well, now (as I was writing this piece, in fact!) “AI Mode” has its own tab on the results page:

And similarly, there’s now an “AI Mode” button on the Google Search home page:

Google is even testing whether to relocate that button to a position where it would completely replace the longstanding “I’m Feeling Lucky” button.

It wasn’t too long ago when various tech pundits (present company excluded, to be clear) were confidently forecasting the demise of Google’s search business at the hands of upstarts like OpenAI (more on them later). But the company’s “deft pivot” to AI teased in the title of this piece has ensured otherwise (at least unless regulatory entities eventually say otherwise)…perhaps too much, it turns out. As I’ve increasingly used AI Overview (now AI Mode), I find that its search-results summaries are often sufficient to answer my question without compelling me to click through to a content-source page, a non-action (versus tradition) that suppresses traffic to that page. Google has always “scraped” websites to assemble and prioritize search results for a given keyword or phrase, but by presenting the pages’ information itself, the company is now drawing the ire of publishers who are accusing it of content theft.

Rich content generation

Take generative AI beyond LLMs (large language models) with their rudimentary input and output options (at least nowadays, seemingly…just a couple of years ago, I was more sanguine about them!), and you’re now in the realm of generating realistic still images, videos, audio (including synthesized music) and the like. This is the realm of Google’s Imagen (already at v4), Veo (now v3), and Lyria (v2 and new RealTime) models and associated products. Veo 3, for example, kicked off the 2025 Google I/O via this impressive albeit fanciful clip:

Here’s another (less silly overall therefore, I’d argue, even more impressive) one from Google:

More synthesized video examples and their associated text prompts can be found at the Veo page on the Google DeepMind site. Veo 3 is already in public release, with oft-impressive albeit sometimes disturbing results and even real-life mimickers. And combine audio, video and still images, add some additional scripting smarts, and you’ve got the new AI filmmaking tool Flow:

Who would have thought, just a few short years ago, that the next Spielberg, Scorsese, Hitchcock, Kubrick, Coppola or [insert your favorite director here] would solely leverage a keyboard and an inference processor cloud cluster as his or her content-creation toolbox? We may not be there yet, but we’re getting close…

Coding assistants

Coding is creative, too…right, programmers? Jules is Google’s new asynchronous coding agent, unveiled in Google Labs last December and now in public beta, where it goes up against OpenAI’s recently delivered one-two punch of the internally developed Codex and acquisition (for $3B!) of Windsurf. That said, as VentureBeat also notes, it’s not even the only AI-powered coding tool in Google’s own arsenal: “Google offers Code Assist, AI Studio, Jules and Firebase”.

Android XR-based products (and partnerships)

Google co-founder Sergey Brin made a curious onstage confession during a “fireside chat” session at Google I/O, admitting that he “made a lot of mistakes with Google Glass”:

His critique of himself and the company he led was predominantly two-fold in nature:

  • Google tried to “go it alone” from a hardware development, manufacturing and marketing standpoint, versus partnering with an established glasses supplier such as Italian eyewear company Luxottica, with whom Meta has co-developed two generations (to date) of smart glasses (as you’ll soon learn about in more detail via an upcoming sorta-teardown by yours truly), and
  • The bulbous liquid crystal on silicon (LCoS) display in front of one of the wearer’s eyes ensured that nobody would mistake them for a conventional pair of glasses…a differentiation which was not advantageous for Google.

Judging from the 2025 Google I/O messaging, the company seems determined not to make the same mistake again. It’s partnering with Warby Parker, Korea-based Gentle Monster, Samsung and Xreal (and presumably others in the future) on smart glasses based on its Android XR platform…glasses that it hopes folks will actually want to be seen wearing in public. Samsung is also Google’s lead partner for a VR headset based on Android XR…the “extended reality” (XR) that Google envisions for the operating system spans both smart glasses—with and without integrated augmented reality displays—and head-mounted displays. And it not only did live demos during the keynote but also gave attendees the chance to (briefly) try out its prototype smart glasses, glimpsed a year ago in the Project Astra clip I mentioned earlier, for themselves.

Google Beam

Two years ago, I noted that the way-cool Project Starline hologram-based virtual conferencing booth system announced two years earlier (during COVID-19 lockdowns; how apropos):

had subsequently been significantly slimmed down and otherwise simplified:

Fast forward two more years to the present and Google has rebranded the 3D-rendering technology as Beam, in preparation for its productization by partners such as HP and Zoom:

And in the process, Google has notably added near real-time, AI-powered bidirectional language translation to the mix (as well as to its baseline Google Meet videoconferencing service, which previously relied on captions), preserving each speaker’s tone and speaking style in the process:

Now there’s a practical application for AI that I can enthusiastically get behind!

OpenAI’s predictable (counter)punch

In closing, one final mention of one of Google’s primary competitors. Last year, OpenAI attempted to proactively upstage Google by announcing ChatGPT’s advanced voice mode one day ahead of Google I/O. This time, OpenAI attempted to take the wind out of Google’s sails retroactively, by trumpeting that it was buying (for $6.5B!) the “io” hardware division of Jony Ive’s design studio, LoveFrom, one day after Google I/O. Not to mention the $3M allegedly spent on the “odd” (I‘m being charitable here) video that accompanied the announcement:

While I don’t at all discount OpenAI’s future prospects (or Meta’s, for that matter, or anyone else’s), I also don’t discount Google’s inherent advantage in developing personalized AI: it’s guided by the reality that it already knows (for better and/or worse) a lot about a lot of us.

How do you think this’ll all play out in the future? And what did you think about all the news and enhanced technologies and products that Google recently unveiled? Let me (and your fellow readers) know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Designing power supplies for industrial functional safety, Part 1

Fri, 05/30/2025 - 13:13

A power supply unit is one of the most crucial components in an electronics system, as its operation can affect the entire system’s functionality. In the context of industrial functional safety, as in IEC 61508, power supplies are considered elements of, and supporting services to, electrical/electronic/programmable electronic (E/E/PE) safety-related systems (SRS) as well as other subsystems. Between the IEC 61508’s three key requirements for functional safety (FS) compliance and its recommended diagnostic measures, developing power supplies for industrial FS can be daunting. For that reason, this first part of the series examines what the basic functional safety standard requires from power supplies as elements of, and supporting services to, E/E/PE SRS.

Power supplies in E/E/PE safety-related systems

The IEC 61508-4 defines E/E/PE systems as systems used for control, protection, or monitoring based on one or more E/E/PE devices. This includes all elements of the system, such as power supplies, sensors, and other input devices, data highways and other communication paths, and actuators and other output devices.

Meanwhile, an SRS is defined as a designated system that both implements the required safety functions necessary to achieve or maintain a safe state for the equipment under control (EUC) and is intended to achieve—on its own or with other E/E/PE SRS and other risk reduction measures—the necessary safety integrity for the required safety functions. This is shown in Figure 1, where power supplies also serve as an example of supporting services to an E/E/PE SRS aside from the hardware and software required to carry out the specified safety function.

Figure 1 E/E/PE system—structure and terminology showing that power supplies serve as a supporting service to an E/E/PE SRS device. Source: Analog Devices

Common cause failures

The basic functional safety standard defines common cause failure (CCF) as a failure resulting from one or more events that cause concurrent failures of two or more separate channels in a multiple-channel system, ultimately leading to system failure. One example is a power supply failure that can result in multiple dangerous failures of the SRS. This is shown in Figure 2 where a failure in the 24-V supply, assuming the 24 V input becomes shorted to its outputs 12 VCC and 5 VCC, will result in a dangerous failure of the succeeding circuits.

Figure 2 Example of a power supply CCF scenario showing how a shorting of the 24-V supply input and the 12-V or 5-V outputs would result in a dangerous failure of the downstream systems. Source: Analog Devices

CCFs are important to consider when complying with functional safety, as they affect compliance with the IEC 61508’s three key requirements: systematic safety integrity, hardware safety integrity, and architectural constraints. The standard’s requirements regarding CCF and power supplies under certain circumstances are cited here:

  • IEC 61508-1 Section 7.6.2.7 takes the possibility of CCF into account when allocating overall safety requirements. This section also requires that the EUC control system, E/E/PE SRS, and other risk reduction measures, when treated as independent for the allocation, shall not share common power supplies whose failure could result in a dangerous mode of failure of all systems.
  • Similarly, under synthesis of elements to achieve the required systematic capability (SC), IEC 61508-2 Section 7.4.3.4 Note 1 cites ensuring that no common power supply failure can cause a dangerous mode of failure of all systems as a possible approach to achieving sufficient independence.
  • For integrated circuits with on-chip redundancy, IEC 61508-2 Annex E also cites several normative requirements, including the separation of inputs and outputs (the power supply among them) and the use of measures to avoid dangerous failures caused by power supply faults.

While these clauses prohibit sharing common power supplies whose failure could cause a dangerous mode of failure for all systems, implementing such a practice when designing a system results in an increased footprint, with greater board size and cost. One way to still use common power supplies is by employing sufficient power supply monitoring. By doing this, dangerous failures introduced by the power supply to an E/E/PE SRS can be reduced to a tolerable level, if not eliminated, in accordance with the safety requirements. More discussion about how effective power supply monitoring can solve common cause failures can be found in the blog post “Functional Safety for Power.”

Power supply failures and diagnostics

To detect failures in the power supply, the basic functional safety standard specifies requirements and recommendations that address both systematic and random hardware failures.

In terms of the requirements for control of systematic faults, IEC 61508-2 Section 7.4.7.1 requires the design of E/E/PE SRS to be tolerant of environmental stresses, including electromagnetic disturbances. This clause is reflected in IEC 61508-2 Table A.16, which describes measures against defects in power supplies—voltage breakdown, voltage variations, overvoltage (OV), low voltage, and other phenomena—as mandatory regardless of safety integrity level (SIL); see Table 1.

Technique/Measure | SIL 1 | SIL 2 | SIL 3 | SIL 4
Measures against voltage breakdowns, voltage variations, overvoltage, low voltage, and other phenomena (such as AC power supply frequency variation) that can lead to dangerous failure | M (low) | M (medium) | M (medium) | M (high)

Table 1 Power supply monitoring requirement from IEC 61508-2 Table A.16 (M = mandatory; low/medium/high indicate the required effectiveness).

IEC 61508-2 Table A.1, under the discrete hardware component, shows the faults and failures that can be assumed for a power supply when quantifying the effect of random hardware failures; this is shown in Table 2. Meanwhile, IEC 61508-2 Table A.9 shows the diagnostic measures recommended for a power supply along with the respective maximum claimable diagnostic coverage.

Component | Low (60%) | Medium (90%) | High (99%)
Power supply | Stuck-at | DC fault model; drift and oscillation | DC fault model; drift and oscillation

Table 2 Power supply faults and failures to be assumed according to IEC 61508-2 Table A.1.

Table 3 shows this with more details from IEC 61508-7 Section A.8. Both Table 2 and Table 3 are useful when doing a safety analysis, as the failure modes per component and the diagnostic coverage of the diagnostic techniques employed are inputs to the calculation of lambda values, and thus the SIL metrics: probability of dangerous failure and safe failure fraction (SFF). A minimal sketch of the SFF arithmetic follows Table 3.

Diagnostic measure: OV protection with safety shut-off
Aim: To protect the SRS against OV.
Description: OV is detected early enough that all outputs can be switched to a safe condition by the power-down routine, or there is a switch-over to a second power unit.
Maximum DC considered achievable: Low (60%)

Diagnostic measure: Voltage control (secondary)
Aim: To monitor the secondary voltages and initiate a safe condition if the voltage is not in its specified range.
Description: The secondary voltage is monitored and a power-down is initiated, or there is a switch-over to a second power unit, if it is not in its specified range.
Maximum DC considered achievable: High (99%)

Diagnostic measure: Power-down with safety shut-off
Aim: To shut off the power, with all safety-critical information stored.
Description: OV or undervoltage (UV) is detected early enough that the internal state can be saved in non-volatile memory if necessary, and all outputs can be set to a safe condition by the power-down routine, or there is a switch-over to a second power unit.
Maximum DC considered achievable: High (99%)

Table 3 The recommended power supply diagnostic measures in IEC 61508-7 Section A.8.
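
As a worked example of how these tables feed the SIL metrics, here is a minimal C sketch of the SFF arithmetic from IEC 61508-2, using illustrative failure rates of my own choosing, not values mandated by the standard.

```c
#include <stdio.h>

/* Safe failure fraction per IEC 61508-2:
 * SFF = (lambda_S + lambda_DD) / (lambda_S + lambda_DD + lambda_DU), where
 * lambda_DD = DC * lambda_D and lambda_DU = (1 - DC) * lambda_D.
 * Rates are in FIT (1 FIT = 1e-9 failures/hour); units cancel in the ratio. */
static double sff(double lambda_s, double lambda_d, double dc)
{
    const double lambda_dd = dc * lambda_d;         /* dangerous, detected   */
    const double lambda_du = (1.0 - dc) * lambda_d; /* dangerous, undetected */
    return (lambda_s + lambda_dd) / (lambda_s + lambda_dd + lambda_du);
}

int main(void)
{
    /* e.g., 50 FIT safe, 50 FIT dangerous, 99% DC (Table 3) -> SFF = 0.995 */
    printf("SFF = %.3f\n", sff(50.0, 50.0, 0.99));
    return 0;
}
```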

Figure 3a shows an example of a voltage control diagnostic measure. In this example, the power supply of the logic controller subsystem, typically in the form of a post-regulator or LDO, is monitored by a voltage protection circuit, specifically the MAX16126.

Any out-of-range voltage detected by the supervisor, whether OV or UV, results in the disconnection of the logic controller subsystem, composed of a microcontroller and other logic devices, from the power supply, as well as assertion of the MAX16126’s FLAG pin. With this, the logic controller subsystem can be switched to a safe condition. The same circuit can also serve as an OV protection with safety shut-off diagnostic measure if UV detection is not present.
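
The window-monitoring behavior can be sketched in a few lines of Python. The thresholds and decision logic below are illustrative assumptions, not the MAX16126’s actual configuration:

    # Hypothetical UV/OV window; not the MAX16126's actual thresholds
    UV_LIMIT = 3.0   # volts, assumed undervoltage limit
    OV_LIMIT = 3.6   # volts, assumed overvoltage limit

    def supervise(v_rail):
        """Disconnect the load and flag a fault when v_rail leaves the window."""
        fault = not (UV_LIMIT <= v_rail <= OV_LIMIT)
        return {"connect_load": not fault, "flag_asserted": fault}

    for v in (3.3, 2.7, 4.1):   # nominal, UV, and OV cases
        print(f"{v} V -> {supervise(v)}")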

On the other hand, Figure 3b shows an example of a power-down with safety shut-off diagnostic measure. In this example, a hot-swappable system monitor, the LTC3351, connects the power supply to the logic controller subsystem while its synchronous switching controller operates in step-down mode, charging a stack of supercapacitors. If the power supply goes outside the OV or UV threshold voltages, the LTC3351 disconnects the logic controller subsystem from the power supply, and the synchronous controller runs in reverse as a step-up converter to deliver power from the supercapacitor stack to the logic controller subsystem. This gives the logic controller subsystem enough time to save its internal state to nonvolatile memory so that all outputs can be set to a safe condition by the power-down routine.
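
A rough holdup-time estimate shows why a supercapacitor stack buys enough time to save state; every value below is an assumption for illustration, not an LTC3351 design figure:

    # Assumed, illustrative backup parameters
    C_STACK = 10.0   # F, effective supercapacitor stack capacitance
    V_START = 4.0    # V, stack voltage when backup begins
    V_MIN   = 2.0    # V, minimum voltage usable by the step-up converter
    P_LOAD  = 2.0    # W, logic controller draw during shutdown
    EFF     = 0.9    # assumed step-up converter efficiency

    usable_j = 0.5 * C_STACK * (V_START**2 - V_MIN**2) * EFF   # extractable energy
    holdup_s = usable_j / P_LOAD
    print(f"Roughly {holdup_s:.0f} s to save state and reach a safe condition")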

Figure 3 An illustration of the recommended diagnostic measures for a power supply. Source: Analog Devices

Power supply operation

Aside from CCFs, power supply failures, and the recommended diagnostic measures, IEC 61508 also stresses the importance of power supply operation in the E/E/PE SRS. This can be seen in the sixth part of the standard, Annex B.3, which discusses the use of the reliability block diagram approach to evaluate probabilities of hardware failure, assuming a constant failure rate. Power supply operation is considered alongside the sensor, logic, and final element subsystems, as shown in the following examples.

  • When a power supply failure removes power from a de-energize-to-trip E/E/PE SRS and initiates a system trip to a safe state, the power supply does not affect the PFDavg (average probability of dangerous failure on demand) of the SRS.
  • If the system is energize-to-trip, or the power supply has failure modes that can cause unsafe operation of the E/E/PE SRS, the power supply should be included in the evaluation.

Such assumptions make the mode of power supply operation in an E/E/PE SRS critical, as it determines whether the power supply affects the calculation of the probability of a dangerous failure, which is one of IEC 61508’s key requirements. A rough numeric comparison of the two cases follows.
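
The sketch below applies the common 1oo1 low-demand approximation, PFDavg ≈ λDU × Tproof/2, with assumed failure rates; the numbers are hypothetical, not taken from the standard:

    FIT = 1e-9                # 1 FIT = 1e-9 failures per hour
    T_PROOF_H = 8760.0        # assumed one-year proof-test interval

    lambda_du_logic = 50.0 * FIT   # assumed dangerous-undetected rate, logic subsystem
    lambda_du_psu = 20.0 * FIT     # assumed dangerous-undetected rate, power supply

    # De-energize-to-trip: a supply failure trips the SRS to a safe state,
    # so the supply contributes nothing dangerous
    pfd_de_energize = lambda_du_logic * T_PROOF_H / 2

    # Energize-to-trip, or a supply with dangerous failure modes: include it
    pfd_energize = (lambda_du_logic + lambda_du_psu) * T_PROOF_H / 2

    print(f"PFDavg, de-energize-to-trip: {pfd_de_energize:.2e}")
    print(f"PFDavg, energize-to-trip:    {pfd_energize:.2e}")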

SRS’s power supply

This article provided insights into the basic functional safety standard’s normative and informative requirements for an E/E/PE SRS’s power supply. It first tackled the role of the power supply in an E/E/PE SRS. A discussion of common cause failures, which discourage the use of common power supplies, then demonstrated how power supply monitoring can reduce CCFs to a tolerable level. Requirements regarding systematic and random hardware failures related to power supplies were also presented, along with the recommended diagnostic measures for power supplies. Finally, the article covered how the mode of power supply operation, de-energize-to-trip or energize-to-trip, determines whether the power supply affects the probability of a dangerous failure of the SRS.

Bryan Angelo Borres is a TÜV-certified functional safety engineer who currently works on several industrial functional safety product development projects. As a senior power applications engineer, he helps system integrators design functionally safe power architectures which comply to industrial functional safety standards such as the IEC 61508. Recently, he became a member of the IEC National Committee of the Philippines to IEC TC65/SC65A and IEEE Functional Safety Standards Committee. Bryan has a postgraduate diploma in power electronics and around seven years of extensive experience in designing efficient and robust power electronics systems.

Noel Tenorio is a product applications manager under multimarket power handling high performance supervisory products at Analog Devices Philippines. He joined ADI in August 2016. Prior to ADI, he worked as a design engineer in a switch-mode power supply research and development company for six years. He holds a bachelor’s degree in electronics and communications engineering from Batangas State University, as well as a postgraduate degree in electrical engineering in power electronics and a Master of Science degree in electronics engineering from Mapua University. He also had a significant role in applications support for thermoelectric cooler controller products prior to handling supervisory products.

Related Content

References

  • Foord, Tony and Colin Howard. “Energise or De-Energise to Trip?” Measurement and Control, Vol. 41, No. 9, November 2008.
  • IEC 61508 All Parts, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems. International Electrotechnical Commission, 2010.
  • Meany, Tom. “Functional Safety for Power.” Analog Devices, Inc., March 2019.

 

The post Designing power supplies for industrial functional safety, Part 1 appeared first on EDN.

Wireless SoCs drive IoT efficiency

Thu, 05/29/2025 - 18:33

Built on a 22-nm process, Silicon Labs’ SiXG301 and SiXG302 wireless SoCs deliver improved compute performance and energy efficiency. As the first members of the Series 3 portfolio, they target both line- and battery-powered IoT devices.

Designed for line-powered applications such as LED smart lighting, the SiXG301 integrates an LED pre-driver and a 32-bit Arm Cortex-M33 processor running at up to 150 MHz. It supports concurrent multiprotocol operation with Bluetooth, Zigbee, and Matter over Thread, and includes 4 MB of flash and 512 kB of RAM. Currently in production with select customers, the SiXG301 is expected to be generally available in Q3 2025.

Extending the Series 3 platform to battery-powered applications, the SiXG302 features a power-efficient architecture that consumes just 15 µA/MHz when active—up to 30% lower than comparable devices. It is well-suited for battery-powered wireless sensors and actuators using Matter or Bluetooth. Sampling is expected to begin in 2026.

The SiXG301 and SiXG302 families will initially include two types of devices: ‘M’ variants (SiMG301 and SiMG302) for multiprotocol support, and ‘B’ variants (SiBG301 and SiBG302) optimized for Bluetooth LE.

Series 3 product page 

Silicon Labs 

The post Wireless SoCs drive IoT efficiency appeared first on EDN.

Antenna-matching ICs cut RF design complexity

Thu, 05/29/2025 - 18:32

ST offers three antenna-matching companion chips for STM32WL33 wireless MCUs to help streamline the development of IoT, smart metering, and remote monitoring systems. The MLPF-WL-01D3, MLPF-WL-02D3, and MLPF-WL-04D3 integrate impedance matching and harmonic filtering on a single glass substrate to boost RF performance.

By integrating antenna protection, matching, and filtering, the devices simplify RF routing, improve reliability, and reduce BOM cost by replacing multiple discrete components. The three chips will be joined by four new variants, supporting radio optimization across high-band (826–958 MHz) and low-band (413–479 MHz) ranges, high-power (16/20 dBm) and low-power (10 dBm) modes, and 2-layer or 4-layer PCB designs.

The MLPF-WL-01D3, MLPF-WL-02D3, and MLPF-WL-04D3 antenna-matching ICs are available now in 5-bump chip-scale packages, priced from $0.15 each in 1000-unit quantities. Release dates for the additional variants were not available at the time of this announcement.

MLPF-WL-0xD3 product page

STMicroelectronics

The post Antenna-matching ICs cut RF design complexity appeared first on EDN.

IC safeguards NFC communication

Thu, 05/29/2025 - 18:32

The NTAG X DNA from NXP is an ISO/IEC 14443-4 Type 4 NFC tag that enables secure authentication of NFC-enabled mobile devices. It features 16 kB of memory, high-speed data transfer, and Secure Unique NFC (SUN) authentication to protect devices across healthcare, smart home, consumer electronics, and industrial markets.

Supporting device-only, device-to-device, and device-to-cloud authentication, the NTAG X DNA secures data transfer via NFC or I²C interfaces at speeds up to 848 kbps and 1 MHz, respectively. A direct MCU connection enables device diagnostics, while the tag’s memory allows access to stored authentication data—even without power. Sensitive information can also be erased in power-off conditions to protect user privacy.

Designed to combat counterfeits and support Digital Product Passport (DPP) compliance, the NTAG X DNA offers strong security with Common Criteria EAL 6+ certification and PKI-based asymmetric cryptography. It is backed by NXP’s EdgeLock 2GO service for UID and certificate delivery, as well as on-demand certificate generation.

NTAG X DNA product page

NXP Semiconductors 

The post IC safeguards NFC communication appeared first on EDN.

Power doublers enable smooth DOCSIS 4.0 upgrades

Thu, 05/29/2025 - 18:32

Qorvo’s QPA3311 and QPA3316 hybrid power doubler amplifiers are optimized for DOCSIS 4.0 downstream operations up to 1.8 GHz. They support the transition to Unified DOCSIS and smart amplifier architectures that enhance visibility, efficiency, and adaptability in hybrid fiber-coax (HFC) systems.

Based on a GaAs/GaN die, the devices operate from 45 MHz to 1794 MHz and provide 23 dB of gain. They are well-suited for DOCSIS 4.0 CATV nodes and amplifiers. High total composite power and improved signal integrity reduce cascade requirements and enhance end-of-line performance, helping lower infrastructure costs by eliminating the need for booster amps.

The QPA3311 and QPA3316 power doublers operate from 24-V and 34-V supplies, respectively, with power consumption of 12.5 W and 18 W. At 51 dB CNN, total composite power reaches 74 dBmV for the QPA3311 and over 75 dBmV for the QPA3316.

Both the QPA3311 and QPA3316 power doubler amplifiers are housed in SOT-115J packages and are now in production.

QPA3311 product page

QPA3316 product page 

Qorvo

The post Power doublers enable smooth DOCSIS 4.0 upgrades appeared first on EDN.

Dry film photoresist enables fine circuit formation

Thu, 05/29/2025 - 18:32

Asahi Kasei has developed the Sunfort TA series of dry film photoresist for next-generation semiconductor packages requiring circuit patterns with line/space widths of 2/2 µm or less. The film offers high resolution with conventional stepper and laser direct imaging (LDI) systems—used to transfer circuit patterns onto substrates—enhancing precision in back-end processes.

The TA series supports fine wiring formation in panel-level packages and related applications. It enables patterning with a 1.0-µm resist width using LDI exposure in a 4-µm pitch design, as required for redistribution layer (RDL) formation (Figures a and b). The resulting fine resist pattern can be plated by a semi-additive process, then stripped to yield a 3-µm wide plating pattern within the same 4-µm pitch (Figure c).

Asahi Kasei states that Sunfort dry film photoresist will remain integral to advancing panel-level packaging technology as panel sizes increase. With its ability to achieve finer wiring and improve production efficiency, the TA series addresses the rising demand for advanced semiconductor package substrates and interposers in AI, automotive, communications, and IoT markets.

TA series product page

Asahi Kasei 

The post Dry film photoresist enables fine circuit formation appeared first on EDN.

Revisited: Three discretes suffice to interface PWM to switching regulators

Thu, 05/29/2025 - 15:18
The typical regulator output network

Many voltage regulator chips, both linear and switching, use the same basic two-resistor network for output voltage programming. Figure 1 illustrates this feature in a typical switching (buck type) regulator, see R1 and R2, where:

Vout = Vsense(R1/R2 + 1) = 0.8 V × (11.5 + 1) = 10 V

Figure 1 A typical regulator output programming network; the Vsense feedback node voltage and recommended values for R1 vary from type to type.

Quantitatively, the Vsense feedback node voltage varies from type to type and recommended values for R1 can vary too, but the topology doesn’t. Most conform faithfully to Figure 1. This de facto uniformity is useful if your application involves PWM control of Vout. 

Wow the engineering world with your unique design: Design Ideas Submission Guide

The three-component PWM-to-regulator solution

Figure 2 shows the simple three-component solution that the above topology makes possible. Note that the PWM duty factor (DF) ranges from 0 to 1, where:

Vout = Vsense(R1/(R2/DF) + 1) = 0.8 V × (11.5 × DF + 1) = 9.2 V × DF + 0.8 V

Figure 2 Three parts comprise a circuit for linear regulator programming with PWM.

To introduce linear PWM control to the Figure 1 regulator, all that’s required is to add three discrete components: the PWM switch Q1 and the ripple filter capacitors C1 and C2. Note that Vout will go to Vsense(C2/C1 + 1) = 10 V for about 6 ms during power-up while C1 and C2 are charging, but that should be okay.

The C2 capacitance required for 1-lsb (0.4%) PWM ripple attenuation is C2 = 2^(N-2)/(R1 × Fpwm), where N is the number of PWM bits and Fpwm is the PWM frequency (10 kHz illustrated).

Then, to avoid messing with U1’s designed loop gain, possibly reducing stability, C1 = C2*R2/R1. This capacitance ratio also provides protection for U1’s Vsense input, since it ensures that even a sudden short of Vout to ground can’t drive Vsense dangerously negative.

This combination of time constants yields a first-order 8-bit settling time of T8 = R1·C2·ln(256) = 37 ms. More on this lengthy number shortly.
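
As a sanity check, the short Python sketch below plugs numbers into these formulas. Only the 11.5 resistor ratio, Vsense, N, and Fpwm come from the article; the absolute R1 and R2 values are assumptions for illustration:

    import math

    VSENSE = 0.8    # V, regulator feedback voltage
    R1 = 115e3      # ohms, assumed; R1/R2 = 11.5 per Figure 1
    R2 = 10e3       # ohms, assumed
    N = 8           # PWM resolution, bits
    F_PWM = 10e3    # Hz, PWM frequency

    C2 = 2**(N - 2) / (R1 * F_PWM)   # ripple cap for 1-lsb attenuation
    C1 = C2 * R2 / R1                # preserves the divider ratio for the loop
    T8 = R1 * C2 * math.log(256)     # first-order 8-bit settling time

    for df in (0.0, 0.5, 1.0):
        vout = VSENSE * (R1 / R2 * df + 1)   # equivalent to 9.2*DF + 0.8 V
        print(f"DF = {df:.1f}: Vout = {vout:.2f} V")
    print(f"C2 = {C2*1e9:.0f} nF, C1 = {C1*1e9:.1f} nF, T8 = {T8*1e3:.0f} ms")

Note that T8 depends only on N and Fpwm, since R1 cancels: T8 = (2^(N-2)/Fpwm) × ln(256), which lands close to the 37-ms figure quoted above.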

A cool feature of this simple topology is that, unlike many other schemes for digital power supply control, only the precision of R1, R2, and the regulator’s internal voltage reference matters for regulation accuracy. Precision is therefore independent of external voltage sources, e.g., logic rails. Precision, measured as a percentage of Vout, is also independent of DF, and remains equal to the Vsense precision (e.g., ±1%) for all output voltages.

Speeding up the settling time

What if a 37-ms settling time is too lengthy for your application? What if you wouldn’t mind investing a couple more parts to speed it up? Figure 3 shows what.

Figure 3 Add R3 and C3 to get analog ripple subtraction, second-order filtering, and a 7-ms settling time. The symbol “*” represents a precision of 1% or better.

First disclosed in the EDN Design Idea (DI) “Cancel PWM DAC ripple with analog subtraction,” a thrifty way to implement second-order PWM ripple filtering is through the analog subtraction of the AC component in the logic inverse of the PWM signal from the DC result. Figure 3 shows how that can be accomplished by simply adding R3 and C3 to the Figure 2 topology. Note that the impedance ratios of the added parts are equal to the ratio of the 5-Vpp PWM signal at Q1’s gate to the 0.8-Vpp logic complement at its drain: 5 V/0.8 V = 6.25. This is why R3 = 6.25 × R2 and C3 = C2/6.25.

In closing: This DI revises an earlier submission, “Three discretes suffice to interface PWM to switching regulators.” My thanks go to commenters oldrev, Ashutosh Sapre, and Val Filimonov for their helpful advice and constructive criticism. And special thanks go to editor Shaukat for creating an environment friendly to the DI teamwork that made this possible.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Revisited: Three discretes suffice to interface PWM to switching regulators appeared first on EDN.

Accelerating silicon carbide (SiC) manufacturing with big data platforms

Thu, 05/29/2025 - 13:48

For decades, we heard silicon was the only answer. However, while the world’s largest fabs were busy taping out silicon, the communities of engineers and scientists working on non-silicon technologies continued pushing forward. Compound semiconductors—semiconductors made from two or more periodic table elements—include indium phosphide (InP), silicon nitride (SiN), gallium arsenide (GaAs), germanium (Ge), indium gallium arsenide (InGaAs), cadmium telluride (CdTe), gallium nitride (GaN), and silicon carbide (SiC).

Once Tesla introduced SiC MOSFETs in its EVs in 2018, SiC no longer went unnoticed. The market grew to more than $2.5 billion in 2024 and, despite a temporary slowdown in 2025, is expected to continue growing at a staggering pace, according to Yole and TrendForce.

Most EV electronics suppliers now offer SiC power ICs, creating a new ecosystem of material suppliers, capital equipment, fabless companies, foundries, and outsourced semiconductor assembly and test (OSAT) service suppliers.

Some integrated device manufacturers (IDMs)—including Bosch, Denso, Infineon, onsemi, Rohm, SanAn, STMicroelectronics, and Wolfspeed—went fully vertical starting with the SiC powder and ending with multi-die power modules.

Figure 1 SiC’s bubble size indicates its manufacturing volume and annual growth. Source: Author

Many newcomers got into the substrate business because of the high cost of the raw material. They invested heavily in mergers and acquisitions and organic growth and are now faced with the challenge of returning investment to shareholders. In this highly competitive environment, manufacturers are pushed to new levels in yield, quality, efficiency, and capacity.

Benefits are costly

SiC offers benefits to designers and consumers. Thanks to the material properties, SiC transistors can be operated at much higher voltages with lower resistance, showing less performance degradation with temperature, making SiC electronics appealing for power conversion and charging applications in vehicles and power grid applications.

However, the raw material is substantially more expensive than silicon. Crystal growth is orders of magnitude slower than silicon’s, and SiC’s hardness, second only to diamond, makes it hard to slice, polish, and dice. High operating voltages require thick epitaxial layers that exhibit high defectivity. In addition, the vertical transistor architecture requires substantial wafer backside processing. All this translates to higher defectivity and lower yield, with frequent yield excursions.

To the consumer, this means higher product cost and lower reliability in the field.

Years behind silicon

“SiC is decades behind silicon” is the common cliché among manufacturers. Here, the dominant wafer size is a good indication of material platform maturity. Historically, as silicon manufacturing matured, the industry transitioned to ever-larger wafer sizes, going through 100-, 150-, 200-, and 300-millimeter (mm) wafers over four decades, as shown in the figure below.

Figure 2 Most of the high-volume manufacturing capacity for SiCs is expected to remain on 150-mm wafers. Source: Author

Presently, SiC is made predominantly on 150-mm substrates, though several companies have announced a transition to 200-mm substrates. While Chinese substrate supplier SICC demonstrated a 300-mm substrate in 2024, use of such a large substrate is beyond the horizon. In the next several years, most of the capacity is expected to remain on 150-mm wafers.

Yes, SiC is 30 years behind silicon, judging by the substrate sizes in volume manufacturing.

The complexity of SiC circuits resembles that of silicon chips in the 1980s—integration into complex circuits today happens at the package level rather than on a monolithic IC, as in silicon. While the most complex silicon ICs count billions of transistors, SiC ICs are nowhere near such complexity. The reason is simple—die yield scales exponentially with die area. At high defectivity levels, this becomes detrimental, and the only answer is going with a smaller die, integrating known good die at the package level into a more complex circuit.
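
The often-quoted exponential relationship is just the Poisson yield model, Y = exp(-A × D0). A quick sketch with an assumed, illustrative defect density shows why small SiC die win:

    import math

    D0 = 1.0   # defects per cm^2; assumed, illustrative defect density

    for area_mm2 in (4, 25, 100):
        area_cm2 = area_mm2 / 100
        yield_est = math.exp(-area_cm2 * D0)   # Poisson model: Y = exp(-A * D0)
        print(f"{area_mm2:>4} mm^2 die: estimated yield {yield_est:.1%}")

At this assumed defect density, a 100-mm² die yields only about 37% of the time while a 4-mm² die yields about 96%, which is why integrating known good die at the package level wins.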

However, while SiC seems decades behind silicon, it does not need decades to catch up.

The big data platform

Methodologies developed over the decades in silicon IC manufacturing are now available to SiC. One example is deploying the data analytics platforms proven in silicon to streamline innovation. The benefits are numerous:

  • Breaking the silos: The technology cycle from IC design to high-volume manufacturing is long with many players and data silos across operations. That’s where end-to-end big data platforms can connect all data end-to-end and make it available to a broad range of functions.
  • Smart factory: Front-end factories are different from their predecessors. Today’s manufacturing ecosystem offers a variety of software capabilities from dozens of suppliers with well-established interoperability.
  • Standardization: Thanks to several organizations—including SEMI, Global Semiconductor Alliance (GSA), and Semiconductor Industry Association (SIA)—there is a broad landscape of industry standards covering everything from equipment connectivity to data formats and specifications. Standards enable better interoperability between tools and suppliers, streamlining equipment and software deployments to support yield ramps.
  • Material traceability: Whether the need is for tracing wafers in a fab or die in an assembly line, the task is complex and ranges from multiple substrate IDs and rework at different steps to substrate grading to cherry-picking. In an assembly line, it’s a challenge solved with traceability standards.
  • Data models: A data model details material data, inline data from the fabs, and assembly and test data from OSATs. It describes physical entities such as equipment, wafers, dies, and modules; processes including fab, assembly, and test; and their relationships in the context of the manufacturing flow (see the sketch after this list).
  • Artificial intelligence/machine learning (AI/ML): Decades ago, scientists had to develop analytical relationships between causes and effects, while software developers came up with software specifications. A myriad of data-centric frameworks and the ubiquity of AI/ML now shorten this cycle, eliminating numerous bottlenecks.
  • Too much data: Vast amounts of data are generated per wafer throughout the manufacturing operations, though most of that data is never used. At the same time, engineers in the automotive segment are putting more stringent requirements on their chip and module suppliers regarding data collection and retention. The data platform must enable a good mix of storage options allowing tradeoffs between performance and cost and provide the knobs for data caching and aging.
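
To make the data-model idea concrete, here is a minimal, hypothetical sketch of traceability entities; the names and fields are illustrative, not any particular platform’s schema:

    from dataclasses import dataclass, field

    @dataclass
    class Die:
        die_id: str
        x: int                 # die coordinates on the wafer
        y: int
        test_results: dict = field(default_factory=dict)

    @dataclass
    class Wafer:
        wafer_id: str          # may map to several substrate IDs after rework
        lot_id: str
        process_steps: list = field(default_factory=list)
        dies: list = field(default_factory=list)

    @dataclass
    class Module:
        module_id: str
        dies: list = field(default_factory=list)   # known good die, possibly from many wafers

    wafer = Wafer("W-150-0001", "LOT-42")
    wafer.dies.append(Die("W-150-0001-03-07", 3, 7, {"leakage": "pass"}))
    module = Module("PM-0001", dies=[wafer.dies[0]])
    print(module.module_id, "contains", len(module.dies), "die")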

Adopting an industry-standard solution allows manufacturers to improve efficiency and ramp yields faster than the competition.

What’s next

According to Yole, TrendForce, McKinsey, and SEMI, growth is forecast for most compound semiconductor devices, with silicon carbide at the top of that list. In Gartner’s “hype cycle” terminology, SiC is past the trough of disillusionment. Both silicon and GaN have been carving out more space in the power IC market, and this will push SiC on both performance and cost.

At the same time, more suppliers are stepping into each segment of the market: material suppliers, foundries, fabless companies, and IDMs. Competition will intensify, pushing manufacturers toward higher yields, faster development cycles, and higher levels of integration.

Under pressure for cost and performance, designers and manufacturers must start adopting big data platforms.

Steve Zamek, director of product management at PDF Solutions Inc., is responsible for manufacturing data analytics solutions for foundries and IDMs. Prior to this, he was with KLA (former KLA-Tencor), where he led advanced technologies in imaging systems, image sensors, and advanced packaging.

Related Content

The post Accelerating silicon carbide (SiC) manufacturing with big data platforms appeared first on EDN.

Basic considerations for electronic impulse relay DIY

Thu, 05/29/2025 - 10:07

Interested in DIYing a simple electronic impulse relay module? T. K. Hareendran designs an impulse relay circuit that mimics the functions of a conventional electromechanical impulse relay; the module switches the same load from several switching points. He also provides design details for the various ICs that can be used in this hobby-level project, including pin-by-pin configurations and the respective timing steps.

Read the full article at EDN’s sister publication, Planet Analog.

Related Content

The post Basic considerations for electronic impulse relay DIY appeared first on EDN.

Microsoft Build 2025: Arm (and AI, of course) thrive

Wed, 05/28/2025 - 14:12

Last week was a biggie for those of you into tech conferences. First and foremost, of course, there was the 2025 iteration of the Silicon Valley-located Embedded Vision Summit, for which I have both personal interest and professional association. In parallel (and in Taiwan), a “little” computer conference called Computex was going on. And from a single-company-sponsored event standpoint, there were two dueling ones: Google with I/O in Mountain View, CA, which I’ll cover in my next post, and Microsoft, with Build in Seattle, WA, which I’ll detail today.

2024 launches

To begin, however, I’ll rewind two weeks further into the past. Revisiting my previous year’s (2024) Build coverage, you’ll note, as I did at the time, that it was the first time Microsoft launched new generations of the consumer-tailored versions of its various Surface family mobile computer products that exclusively leveraged Qualcomm’s Snapdragon X Arm-based SoCs.

And equally, if not more notable, last year was also the first time Microsoft added Arm-based variants to its “For Business” Surface product portfolio:

2025 launches

Fast forward to early May 2025 and, for some unknown reason, Microsoft decided to decouple its new-hardware unveilings from the main Build event, releasing the announcements on May 6. Once again, there were Arm-only Surface systems for both consumer and business users.

This time, though, there weren’t any full-generation upticks. Instead, portfolio expansion and cost reduction (the latter aided by broader product line tweaks, albeit tempered by looming tariff-induced potential price increases) came to the fore. The Surface Pro is now available in both legacy 13” and new 12” form factors, while the Surface Laptop now comes in both legacy 13.8” and 15” and a new 13” size. Both newcomers are more svelte than their precursors: 0.61” versus 0.69” and 2.7 lbs. versus 2.96 lbs. for the Surface Laptop, and 0.30” versus 0.37” and 1.5 lbs. versus 1.96 lbs. (in both cases absent the optional keyboard case) for the Surface Pro.

The Surface Laptop’s hardware

I’d argue that the Surface Laptop’s form factor evolution is the more critical of the two from a competitive standpoint, an opinion that factors more generally into the fundamental reason why I’m devoting so much of today’s writeup to hardware. x86-based systems increasingly seem to me to be an afterthought for Microsoft, despite the fact that AMD and Intel have belatedly caught up with Qualcomm in neural processor core performance and thereby gained the right to put the Copilot+ marketing moniker on systems containing their CPUs, too. Why is Microsoft becoming increasingly Arm-centric? Because, I’d hypothesize, Microsoft is also becoming increasingly Apple-fixated, as the latter company’s half-decade-back announced transition from x86 to Arm-based Apple Silicon systems bears increasingly bountiful fruit.

The new 13” Surface Laptop pretty clearly has Apple’s MacBook Air in its sights, although whether it’ll actually hit its target (and if so, whether mortally or only with a flesh wound) is less clear. For one thing, it’s based on the 8-core variant of the Snapdragon X Plus, versus the 10-core “Plus” and 12-core “Elite” SoCs found in the slightly larger system (that said, all Snapdragon X variants deliver the same level of NPU performance). The SSD uses the (slower) UFS interface, versus NVMe, and tops out at 512 GBytes of capacity. There’s only one DRAM option offered: 16 GBytes. And although the display is only slightly smaller, its image quality specs are more notably diminished: 1920×1280 pixels at 60 Hz versus 2304×1536 pixels at 120 Hz.

The Surface Pro’s hardware

The 12” Surface Pro is similarly encumbered in processor core count, mass storage capacity, and system memory size, as is its display, albeit not as badly: IPS-based with 2196×1464 pixels at 90 Hz, versus either IPS- or OLED-based 2880×1920 pixels at 120 Hz. That said, I concur with Ars Technica’s Andrew Cunningham: the return of the first few Surface Pro generations’ flimsy keyboard is baffling, especially when it had just been further reinforced in last year’s offering. Both new systems drop the proprietary Surface Connect port in favor of USB-C, curiously dispensing with MagSafe-like magnetic-connector charging capabilities in the process (I’m guessing the European Union might have had a little something to do with that decision).

Big picture, Microsoft is seemingly increasingly confident in Windows 11 Arm64’s Prism x86 code virtualization foundation’s robustness. Nobody (including me, repeatedly) was realistically saying so just a few years ago, but by focusing development attention on 64-bit- and Windows 11-only emulation, the Prism team has made tangible progress since then. I’ve got three Windows 11 Arm64-based systems here, and rarely do I encounter a glitch anymore (that said, I’m not a gamer). Further improving the situation, not only from inherent compatibility but also performance and power consumption standpoints, is the increasing prevalence of Arm64-native application variants (such as Dropbox: yay!). And the “dark cloud” of looming lawsuits between Arm and Qualcomm that I’d mentioned a year ago, thankfully, also dissipated a few months back.

AI-related announcements

Early May wasn’t all about hardware for Microsoft. The company also unveiled a raft of Copilot+-only new and expanded capabilities for Windows 11. And two weeks later, this trend extended even more broadly into Microsoft’s operating system and applications with numerous AI-related announcements at Build. Examples included:

Microsoft also revealed a new command-line text editor, an open-source transition for the Windows Subsystem for Linux (WSL), and encryption algorithm enhancements, for example. That said, much of the rest of the keynote (at least; I wasn’t there, so I can’t speak to the training sessions) was rife with AI technobabble, IMHO, from both Microsoft execs and invited notable guests, complete with innumerable mentions of the “agentic web” and other trendy lingo.

Watch it yourself, or not

See below if you’re up for a slog through the entire 2 hours of oft-tedium:

Conversely, if a 15-minute summary is more to your liking, here’s The Verge’s take:

And with that, having just passed 1,000 words, I won’t force you to slog through any more of my technobabble 😉 As always, I’ll end with an invitation to share your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Microsoft Build 2025: Arm (and AI, of course) thrive appeared first on EDN.
