EDN Network

Voice of the Engineer

Oscilloscope input coupling: Which input termination should be used?

Thu, 06/26/2025 - 15:59

Getting signals into an oscilloscope or digitizer without distorting them is a significant concern for instrument designers and users. The critical first point in an instrument is the input port. Oscilloscopes offer 50-Ω and 1-MΩ input terminations for both channels and trigger inputs. When should each be used?

A typical oscilloscope offers input ports that can be set to a 50-Ω DC-coupled termination, a 1-MΩ AC- or DC-coupled termination, or ground (Figure 1).

Figure 1 The input coupling choices for a typical oscilloscope include 50-Ω DC, 1-MΩ AC or DC, and ground. Source: Art Pini

Input termination

The 50-Ω termination is intended for use with 50-Ω sources connected to the oscilloscope using a 50-Ω coaxial cable. The 50-Ω input properly terminates the coaxial cable, preventing reflections and associated signal losses.

The 50-Ω input termination is also used with certain probes. The simplest probe, based on a 50-Ω termination, is the transmission line or low-capacitance probe (Figure 2).

Figure 2 The low capacitance probe is intended to probe low impedance sources like transmission lines. A 10:1 attenuation of the probed signal is achieved by making RIN equal to 450 Ω. Source: Art Pini

The transmission line probe has a relatively wide bandwidth, typically 5 GHz or more. Its input impedance is low; in the case of a 10:1 probe, it is only about 500 Ω. Most other active high-bandwidth probes also utilize the 50-Ω oscilloscope input termination.
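As a quick check (my arithmetic, not the article's), the divider math behind the 10:1 figure and the roughly 500-Ω input impedance is straightforward, using the 450-Ω tip resistor shown in Figure 2:

```python
# Divider math for a 10:1 transmission-line probe: a series tip resistor
# feeds the oscilloscope's 50-ohm termination.

R_TIP = 450.0    # series tip resistor from Figure 2, ohms
R_SCOPE = 50.0   # oscilloscope 50-ohm input termination, ohms

attenuation = (R_TIP + R_SCOPE) / R_SCOPE  # voltage division ratio
z_in = R_TIP + R_SCOPE                     # input impedance seen by the source

print(attenuation)  # 10.0 -> a 10:1 probe
print(z_in)         # 500.0 ohms, matching the article's ~500-ohm figure
```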

The 1-MΩ input termination is intended to connect to 10:1 high-impedance passive probes as shown in Figure 3.

Figure 3 A simplified schematic for a 10:1 high impedance passive probe connected to an oscilloscope’s 1 MΩ input. Source: Art Pini

The high-impedance probe places a 9-MΩ resistor in series with the oscilloscope’s 1-MΩ input termination, forming a 10:1 attenuator. This passive probe has a DC input resistance of 10 MΩ. The compensation capacitor (Ccomp) is adjusted so that the time constant Rin×Cin equals Ro×(Co+Ccomp), forming an all-pass attenuator with a flat frequency response and a faithful, square low-frequency pulse response. The 1-MΩ termination also serves as a high-impedance input for low-frequency measurements, where reflections are not an issue.
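The compensation condition can be sketched numerically. The Cin and Co values below are illustrative assumptions, not figures from the article; only the 1-MΩ and 9-MΩ resistances come from the text:

```python
# Sketch of the article's compensation condition, Rin*Cin = Ro*(Co + Ccomp),
# solved for the trimmer value Ccomp. C_IN and C_O are assumed, not from the text.

R_IN = 1e6      # oscilloscope input resistance, ohms
R_O = 9e6       # probe series resistance, ohms
C_IN = 90e-12   # scope input plus cable capacitance (assumed), farads
C_O = 5e-12     # fixed probe tip capacitance (assumed), farads

c_comp = (R_IN * C_IN) / R_O - C_O  # capacitance that equalizes the time constants
print(c_comp * 1e12)  # ~5 pF for these assumed values
```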

50-Ω vs 1-MΩ inputs

The two input terminations have significant differences. Consider the Teledyne LeCroy HDO 6104B, a 1 GHz mid-range oscilloscope, as an example (Table 1).

| Input Termination | Bandwidth | Coupling | Vertical Range (V/div) | Offset Range | Maximum Input |
|---|---|---|---|---|---|
| 50 Ω | 1 GHz | DC, GND | 1 mV to 1 V | ±1.6 V to ±10 V | 5 Vrms, ±10 V p-p |
| 1 MΩ | 500 MHz | AC, DC, GND | 1 mV to 10 V | ±1.6 V to ±400 V | 400 V (DC + peak AC < 10 kHz) |

Table 1 The characteristics of the input terminations are quite different. Source: Art Pini

The bandwidth of the 50-Ω termination is usually much greater than that of the 1-MΩ termination; in this example, it is 1 GHz versus 500 MHz. The oscilloscope’s bandwidth is generally specified for the 50-Ω termination. The 50-Ω input has a more limited input voltage range than the 1-MΩ input: its maximum is power-limited to 5 Vrms by the ½-watt rating of the input resistor, while the 1-MΩ input is rated to 400 V (DC + peak AC < 10 kHz). The 50-Ω input is available only in DC-coupled mode, whereas the 1-MΩ termination offers both AC and DC coupling. Finally, the offset range of the 1-MΩ input extends up to ±400 V, while the 50-Ω offset range is limited to ±10 V.
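The 5-Vrms limit is a one-line check of P = V²/R against the resistor's power rating:

```python
# Power dissipated in the 50-ohm input resistor at the 5-Vrms maximum.

V_RMS = 5.0  # maximum rated input, volts rms
R = 50.0     # input termination, ohms

p = V_RMS ** 2 / R
print(p)  # 0.5 W, i.e., the half-watt resistor rating cited in the article
```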

DC or AC input coupling

DC coupling passes the signal’s entire frequency spectrum, from DC to the full rated bandwidth of the instrument’s specified input. AC coupling filters out the DC by placing a blocking capacitor in series with the oscilloscope input. The series capacitor acts as a high-pass filter.

In most oscilloscopes, the AC-coupled input has a lower cutoff frequency of about 10 Hz. AC-coupling the 50-Ω termination would require about 20,000 times larger capacitors to achieve the same 10-Hz lower cutoff frequency, so it is not done. This ability to separate a signal’s AC and DC components is utilized in applications such as measuring ripple voltage at the power supply’s output. The AC coupling blocks the power supply’s DC output while passing the ripple voltage. Figure 4 compares an AC- and a DC-coupled waveform with an offset voltage.
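A short sketch, assuming the simple first-order relation f_c = 1/(2πRC), shows where the 20,000× factor comes from:

```python
import math

F_C = 10.0  # desired lower cutoff frequency, Hz

def blocking_cap(r_ohms: float) -> float:
    """Series capacitance giving an F_C high-pass corner into r_ohms."""
    return 1.0 / (2.0 * math.pi * r_ohms * F_C)

c_1meg = blocking_cap(1e6)  # ~16 nF for the 1-Mohm input
c_50 = blocking_cap(50.0)   # ~318 uF for the 50-ohm input
print(c_50 / c_1meg)        # ratio is 1e6/50 = 20,000, as the article states
```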

Figure 4 A 1-MHz, 381-mVpp signal with a 100-mV DC offset is acquired using both DC (top trace) and AC (lower trace) coupling. Source: Art Pini

The upper trace shows the DC-coupled waveform. Note that the DC offset shifts the AC component of the input signal upward while the AC signal, shown in the lower trace, has a zero mean value. The AC coupling has removed the DC offset.

The peak-to-peak amplitude of each trace is measured using measurement parameter P1 as 381 mV. The two readings are identical because the offset of the DC-coupled signal is canceled by the subtraction operation used in calculating the peak-to-peak value.

The DC-coupled signal has a DC offset of 100 mV, measured by the mean measurement parameter P2. The AC-coupled signal has the same peak-to-peak amplitude (P5) but a mean (P6) of nearly zero volts. The DC-coupled signal’s RMS amplitude (P3) reads 167.5 mV because it includes the RMS value of the DC offset. The RMS value of the AC-coupled signal (P7) reads 134.4 mV because the mean value is zero. The DC-coupled signal’s standard deviation (sdev) (P4) is identical to the RMS value of the AC-coupled signal. Since the standard deviation calculation subtracts the mean value of the signal from the instantaneous value before computing RMS, the standard deviation is sometimes referred to as the AC RMS value.
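The relationship between these readings is the identity rms² = mean² + sdev². A numerical sketch using the 381-mVpp sine with a 100-mV offset (the sample count is arbitrary) reproduces the scope's parameter values to within measurement tolerance:

```python
import math

N = 100_000
amplitude = 0.381 / 2   # 381 mVpp -> 190.5 mV peak
offset = 0.100          # 100-mV DC offset

samples = [offset + amplitude * math.sin(2 * math.pi * i / N) for i in range(N)]

mean = sum(samples) / N
rms = math.sqrt(sum(x * x for x in samples) / N)
sdev = math.sqrt(sum((x - mean) ** 2 for x in samples) / N)

print(round(sdev * 1000, 1))  # 134.7 mV: the "AC RMS" (standard deviation)
print(round(rms * 1000, 1))   # 167.8 mV: total RMS, including the DC offset
```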

Trigger input termination and coupling

The trigger input is another oscilloscope input that must be considered, as it affects the instrument’s triggering. The trigger signal is derived from one of the input channels, the external trigger input, or the power line. The trigger coupling for any of the inputs other than line is one of four possibilities: AC, DC, low-frequency reject (LF REJ), or high-frequency reject (HF REJ) (Figure 5).

Figure 5 The trigger input coupling selections include two bandwidth-limited modes (LF and HF REJ) and AC- and DC-coupling. Source: Art Pini

The AC and DC coupling settings behave like the AC and DC input coupling selections. LF REJ is an AC coupling mode with a high-pass filter in series with the trigger input. HF REJ is DC coupled with a low-pass filter in series with the trigger input. The cutoff frequencies of the high-pass and low-pass filters are usually about 50 kHz. The LF and HF REJ coupling modes are typically used for noisy trigger signals, which might be encountered when testing switched-mode power supplies.

If the trigger input source is one of the input channels, then the trigger input inherits the termination impedance of the input channel. If the external trigger input is used, the input impedance can be selected (Figure 6).

Figure 6 The termination of the external trigger input includes both 50-Ω and 1-MΩ DC, along with a 1:1 and a 10:1 attenuator. Source: Art Pini

The termination is either 50 Ω or 1 MΩ. The external trigger is DC-coupled from the physical input to the termination. The trigger coupling selection sets the coupling between the termination and the trigger.

Selecting input terminations when using a probe

Most modern oscilloscopes have intelligent probe interfaces that sense the probe’s presence and read its characteristics. The instrument adjusts the input termination and attenuation to match the probe’s requirements. For classical passive probes, simpler interfaces read the probe’s sense pin to detect its presence and attenuation factor, then set the instrument coupling and attenuation to match. If the passive probe lacks a sense pin or an intelligent interface, the attenuation setting of the input channel must be entered manually.

50-Ω termination workarounds

The 50-Ω termination offers the highest bandwidth and is used with signal sources connected via 50-Ω coaxial cables or active probes that expect a 50-Ω termination. Series in-line attenuators can be used to increase the voltage range of the 50-Ω input. AC coupling of the 50-Ω input can be accomplished using an external blocking capacitor; the lower frequency cutoff will be a function of the blocking capacitor’s value.

Other traditional terminating impedances can be adapted to the 50-Ω termination by using an external in-line impedance pad. This is particularly common in applications such as video, where 75-Ω terminations are the standard. If an impedance pad is used, the pad’s attenuation has to be manually entered into the input channel setup.
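For reference, the classic minimum-loss resistive pad between 75 Ω and 50 Ω works out as below. These resistor values and the 5.72-dB loss are standard textbook results, not figures from the article:

```python
import math

# Minimum-loss pad between Z1 (higher) and Z2 (lower): series R1 on the
# 75-ohm side, shunt R2 across the 50-ohm side.

Z1, Z2 = 75.0, 50.0  # ohms

r1 = Z1 * math.sqrt(1 - Z2 / Z1)  # series resistor
r2 = Z2 / math.sqrt(1 - Z2 / Z1)  # shunt resistor
loss_db = 20 * math.log10(math.sqrt(Z1 / Z2) + math.sqrt(Z1 / Z2 - 1))

print(round(r1, 1), round(r2, 1))  # 43.3 and 86.6 ohms
print(round(loss_db, 2))           # 5.72 dB, the value to enter in the channel setup
```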

1-MΩ termination workarounds

The 1-MΩ termination provides a high input impedance, which reduces circuit loading. It offers the highest voltage and offset ranges, but its bandwidth is restricted to 500 MHz or less. Care should be exercised when using it to measure low-impedance sources with frequencies greater than 40 to 50 MHz to avoid reflections, which will manifest themselves as ringing (Figure 7).

Figure 7 Measuring a low-impedance source using a 1-MΩ input can result in reflections that look like ringing (upper trace). Using a 50-Ω termination (lower trace) does not show the problem. Source: Art Pini

If you must use a 1-MΩ input, reflections can be reduced by soldering a 50-Ω resistor to the low-impedance source and connecting the 1-MΩ input to the resistor. This will help reduce reflections from the high-impedance termination back to the source.
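A sketch of why this helps, using the standard reflection coefficient Γ = (ZL − Z0)/(ZL + Z0): the 1-MΩ end of the cable still reflects almost totally, but a 50-Ω source-end resistor presents a matched impedance to the returning wave and absorbs it:

```python
Z0 = 50.0  # cable characteristic impedance, ohms

def gamma(z_load: float) -> float:
    """Reflection coefficient seen by a wave arriving at impedance z_load."""
    return (z_load - Z0) / (z_load + Z0)

print(round(gamma(1e6), 4))  # 0.9999 -> near-total reflection at the 1-Mohm input
print(gamma(Z0))             # 0.0 -> the 50-ohm source resistor absorbs the return
```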

The rail probe is the best of all possible worlds

Given that a typical application of oscilloscopes is measuring power supply ripple, the DC-coupled input’s limited offset voltages and the AC-coupled inputs’ attenuation call for a unique solution. The rail probe is a solution to measuring ripple on power rails that offers a large built-in offset, low attenuation, and high DC input impedance. The rail probe’s built-in offset and low attenuation permit the rail voltage to be offset in the oscilloscope by its mean DC voltage with high oscilloscope vertical sensitivity, achieving a noise-free view of small signal variations. The high DC input impedance eliminates the loading of the DC rail.

The input termination and coupling are important when setting up a measurement. Keep in mind how they can affect the signal acquisition and subsequent analysis.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

Related Content

The post Oscilloscope input coupling: Which input termination should be used? appeared first on EDN.

A nostalgic technology parade of classic amplifiers

Thu, 06/26/2025 - 15:47

There are numerous evergreen chips in the semiconductor industry, and this blog provides a sneak peek at some of these timeless technology marvels. Take µPC1237, for instance, NEC’s bipolar analog IC, which is still used in stereo audio power amplifiers and loudspeakers. Then, there is Toshiba’s fabled TA7317P, another classic IC used for power amplifier protection. The blog highlights the inner workings of these awesome chips and expands on why they are still in play.

Read the full blog on EDN’s sister publication, Planet Analog.


The post A nostalgic technology parade of classic amplifiers appeared first on EDN.

Take back half improves PWM integral linearity and settling time

Wed, 06/25/2025 - 16:03

PWM is a simple, cool, cheap, cheerful, and (therefore) popular DAC technology. Excellent differential nonlinearity (DNL) and monotonicity are virtually guaranteed by PWM. Also guaranteed are a stable zero and a full-scale accuracy that’s generally limited only by the quality of the voltage reference. However, PWM’s integral nonlinearity (INL) isn’t always terrific, and the low-pass filtering needed to remove ripple means its speed isn’t too swift either. These messy topics are covered in…

  1. A common cause of, and a software cure for, PWM INL is discussed here in “Minimizing passive PWM ripple filter output impedance: How low can you go?”
  2. The slow PWM settling times (Ts) that can be problematic, together with a way to reduce them, are addressed here in “Cancel PWM DAC ripple with analog subtraction.”

Figure 1 offers a tricky, totally analog strategy for both. The ploy in play is Take Back Half (TBH). It relies on two differential relationships that effectively subtract (take back) the error terms.

  1. For signal frequencies less than or equal to 1/Ts (including DC) Xc >> R and Z = 2(Xavg – Yavg/2).
  2. For frequencies greater than or equal to Fpwm, Xc << R and Z = Xripple – Yripple.

Figure 1 All Rs and Cs are nominally equal. The circuit relies on two differential relationships that effectively subtract the error terms for the TBH methodology.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Because only one switch drives load R at node Y while two in parallel drive X, INL due to switch loading at Y is exactly twice that at X. Therefore, Z = 2(Xavg – Yavg/2) takes back, cancels the error, and has (theoretically) zero INL.

Because Xripple = Yripple, Z = Xripple – Yripple = 0 nulls the ripple out; the output likewise has (theoretically) zero ripple, so the ripple-filter RC time constants can be made shorter and settling times faster.

The DC conversion component at Z is –PWM_duty_factor × Vref. Conversion accuracy is precisely unity, independent of resistance and capacitance tolerances. However, the Rs and Cs should ideally be accurately matched for the best ripple and nonlinearity cancellation.
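A numerical sketch of the DC error cancellation, with a deliberately exaggerated, hypothetical loading error E (chosen binary-exact so the arithmetic prints cleanly):

```python
V_IDEAL = 1.0  # ideal PWM average (duty_factor * Vref), example value
E = 0.03125    # hypothetical INL error at node X; node Y sees exactly twice this

x_avg = V_IDEAL + E          # two parallel switches -> error E
y_avg = V_IDEAL + 2 * E      # one switch -> error 2E, per the text
z = 2 * (x_avg - y_avg / 2)  # "take back half"

print(z)  # 1.0 -> the error terms cancel, leaving the ideal value
```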

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


The post Take back half improves PWM integral linearity and settling time appeared first on EDN.

New EDA tools arrive for chiplet integration, package verification

Tue, 06/24/2025 - 16:26

The world we live in is increasingly software-defined, with artificial intelligence (AI) adding the next layer of functionality and driving the need for ever more compute. However, given this huge progression in compute content, Moore’s Law scaling alone will be insufficient to supply the number of transistors the needed compute requires.

Enter 3D ICs, disaggregating the functionality of silicon into a set of chiplets and then heterogeneously integrating them on an advanced integration platform. “Hyperscalers, driving the compute envelope, are particularly pushing the extreme where 3D ICs are needed,” said Michael White, VP of Calibre Design Solutions at Siemens EDA.

White also noted automotive designs where self-driving technology content is driving the need for 3D ICs. At the Design Automation Conference (DAC) held in San Francisco, California, on 22-25 June 2025, Siemens EDA announced two key additions to its EDA portfolio to address and overcome the complexity challenges associated with the design and manufacture of 2.5D and 3D IC devices.

First, the company’s Innovator3D IC suite enables chip designers to efficiently author, simulate, and manage heterogeneously integrated 2.5D and 3D IC designs. Second, its Calibre 3DStress software leverages advanced thermo-mechanical analysis to identify the electrical impact of stress at the transistor level.

Figure 1 The new tools aim to dramatically reduce risk and enhance the design, yield, and reliability of complex, next-generation 2.5D/3D IC designs. Source: Siemens EDA

“These solutions help designers achieve the needed compute performance while increasing yield and reliability and reducing cost,” White added. “They also offer the ability to leverage higher bandwidth between the chiplets placed on an interposer.” He calls this an inflection point in the design process and tools needed for the design flows.

Chiplet integration with Innovator3D IC

Keith Felton, principal technical product manager for 3D IC solutions at Siemens EDA, expanded on 3D IC being an inflection point, marking a transition from a single-design-centric approach to a system-centric one. “It impacts design flows and tools, necessitating a system-centric approach from early planning through final sign-off in four ways,” he added.

First, chip designers need system floor planning to optimize power, performance, area, and reliability across silicon, package, interposer, and even PCB. Second, they must start using multi-physics modeling to simulate complex thermo-mechanical interactions that impact electrical and structural performance.

Third, IC designers need to have a methodology for scalability to manage and communicate heterogeneous data across enterprise-wide teams and maintain digital continuity because there are hundreds of silicon designs encompassing chiplets. Fourth, designers must have a methodology for multi-die sign-off, enabling 3D verification of connectivity, interfaces, interconnect reliability, and electrostatic discharge (ESD) resiliency.

So, the Innovator3D IC suite provides a fast, predictable path for planning and heterogeneous integration, substrate/interposer implementation, interface protocol compliance analysis, and data management of designs and design data IP.

Figure 2 Innovator3D IC suite facilitates design, verification, and data management of 2.5D and 3D IC chiplets. Source: Siemens EDA

Innovator3D IC—comprising four building blocks—offers an AI-infused user experience with extensive multithreading and multicore capabilities to achieve optimal capacity and performance on 5+ million pin designs. First, Innovator3D IC Integrator comes with a consolidated cockpit for constructing a digital twin, using a unified data model for design planning, prototyping, and predictive analysis.

Second, Innovator3D IC Layout facilitates correct-by-construction package interposer and substrate implementation. Third, Innovator3D IC Protocol Analyzer can be used for chiplet-to-chiplet and die-to-die interface compliance analysis. It’ll be critical in ensuring compliance with protocols such as Universal Chiplet Interconnect Express (UCIe). Finally, the Innovator3D IC Data Management part is targeted at the work-in-progress management of designs and design data IP.

“Innovator3D IC targets the optimization of 2.5D and 3D IC design performance to eliminate late-stage changes by enabling early prototyping and planning,” Felton said. “It accelerates compliance with protocols for chiplet integration and provides a core workflow that design teams need for 3D IC chiplet integration.”

Calibre 3DStress for package verification

Calibre 3DStress—the second part of Siemens EDA’s solution to streamline the design and analysis of complex, heterogeneously integrated 3D ICs—supports accurate, transistor-level analysis, verification, and debugging of thermo-mechanical stresses and warpage in the context of 3D IC packaging.

It enables IC designers to assess how chip-package interaction will impact the functionality of their designs earlier in the development cycle. Shetha Nolke, principal product manager for Calibre 3DStress at Siemens EDA, told EDN that this tool performs three key tasks for chip-package stress analysis in 3D IC designs.

First, it performs accurate die-level stress simulation under thermal and mechanical conditions. Second, what-if analysis optimizes IP, cell, or chip placement during early design stages. Third, it performs stress-aware circuit analysis using back annotation of device stress to minimize electrical impact.

Figure 3 With the thinner dies and higher package processing temperatures of 2.5D/3D IC architectures, designers often discovered that designs validated and tested at the die level no longer conform to specifications after packaging reflows. Source: Siemens EDA

3D ICs increasingly face stress- and warpage-related packaging challenges. That includes thermal challenges such as non-uniform heat generation and dissipation, which can result in higher temperatures and temperature gradients. Then, there are thermo-mechanical issues, where packaging process stages experience high temperature and fixed constraints.

Finally, thinned dies and ultra-low-k dielectrics increase mechanical stress-induced problems. “As multiple chiplets are integrated into a package, they experience thermal impacts because heat is not able to escape readily,” Nolke said. “While mechanical aspects are coming from incorporating package components, Calibre 3DStress can model it before fabrication.”

Calibre 3DStress delivers accurate die-level stress simulation using finite element analysis at a nanometer feature scale. It also provides visualization of stress and warpage results while facilitating electrical and mechanical verification.


The post New EDA tools arrive for chiplet integration, package verification appeared first on EDN.

DIY isolation transformer enhances Bode analysis with modern DSOs

Tue, 06/24/2025 - 15:01

Keysight, Teledyne LeCroy, Tektronix, Rohde & Schwarz (R&S), and others have offered built-in digital oscilloscope Bode analysis for some time, and this feature has trickled down to low-cost DSOs like the Siglent SDS2000X Plus and the new SDS814X HD. These DSOs feature built-in Bode analysis when operating with a companion AWG, or sometimes include the AWG within (SDS2000X Plus), at an affordable price point.


DIY common-mode choke

One of the interesting applications of this Bode capability is investigating the open-loop response of closed-loop systems, such as oscillators. This often requires an expensive isolation transformer, which can be limiting. However, for those with a DIY spirit, a reconfigured common-mode choke serves as a nice isolation transformer for Bode analysis (Figure 1) [1].

Figure 1 A reconfigured common-mode choke isolation transformer used to investigate the open-loop response of closed-loop systems, e.g., oscillators, using the Bode capability of a benchtop oscilloscope.

Creating the isolation transformer is straightforward. Physically larger common-mode “chokes” utilized in AC mains, like the one shown, make good candidates, especially for lower frequencies.

Here, 5-mH and 2-mH Prod Tech PDMCAT221413 types were used after unwinding and rewinding. After unwinding, the two wires are stretched and then twisted together (a hand drill helps). This leaves a long twisted pair, which is threaded through the core as many times as possible.

As shown in Figure 2, the twisted pair now exits the wrapped core at two ends, with a pair of wires at each end. Use an ohmmeter to identify which wire at one end is continuous with which wire at the other; one continuous wire becomes the primary and the other the secondary. Either assignment works, since the isolation transformer has a 1:1 turns ratio and is symmetrical. The primary and secondary can be resistively terminated as needed for specific applications.

Figure 2 A side-view of the DIY isolation transformer showing the wrapped core and terminated with four 2-W, 100-Ω resistors.

Figure 3 shows the test setup utilizing the DIY isolation transformer to measure the open-loop response of a Peltz oscillator, as described in another Design Idea (DI): “Simple 5-component oscillator works below 0.8V.”

Figure 3 Test setup using the DIY isolation transformer to measure the open-loop response of a Peltz oscillator.

Peltz oscillator test circuit and results

The isolation transformer secondary is connected between Q2 base and Q1 collector. Q1 and Q2 are 2N3904s, L is 470 µH, C is 0.022 µF, and R is 510 Ω (Figure 4).

Figure 4 The configuration of the Peltz oscillator circuit, where the isolation transformer is connected between Q2 base and Q1 collector to measure open-loop response.

For comparison, an LTspice circuit model was created. The simulated and measured results using the SDS2504X Plus are shown in Figure 5.

Figure 5 Simulated (top) and measured (bottom) results with the circuit under test in Figure 4 operating with the following values: L is 470 µH, C is 0.022 µF, and R is 510 Ω.

Changing the inductor to 100 µH (measured 97.3 µH) moves the center frequency to 34.4 kHz (Figure 6).

Figure 6 Simulated (top) and measured (bottom) results with the circuit under test in Figure 4 operating with the following values: L is 100 µH, C is 0.022 µF, and R is 510 Ω.

Typically, physically larger common-mode chokes have higher inductance, which can extend the measurement range to lower frequencies. Having a larger core also allows for more turns, which also helps with lower frequencies.

However, larger cores and more turns limit the upper frequency end, so a set of transformers wound on smaller and larger cores can cover a wider frequency range than a single-core transformer. I’ve had good results with the cores shown from less than 100 Hz to over 1 MHz.

This is just one of the many uses for modern Bode-enabled DSOs with companion AWGs and a few DIY isolation transformers.

Michael A Wyatt is an IEEE Life Member who has enjoyed electronics ever since childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, and he is semi-retired with Wyatt Labs. During his career he accumulated 32 US patents and published a number of EDN articles, including the Best Idea of the Year in 1989.


References

  1. https://www.eevblog.com/forum/testgear/diy-transformer-for-use-with-bode-plots/

The post DIY isolation transformer enhances Bode analysis with modern DSOs appeared first on EDN.

Ray-Ban Meta’s AI glasses: A transparency-enabled pseudo-teardown analysis

Mon, 06/23/2025 - 18:24
A look at AI glasses

I’ve been following smart glasses for a while now (and the more embryonic camera-augmented eyewear category a “bit” longer than that). As with smart watches and more recent smart rings, they’re intriguing to me because they take already-familiar, mature and high volume consumer products and make them…umm…smart. Plus, there’s the oft-touted potential for smart glasses to augment if not supplant the equally now-pervasive smartphone (for the record: I’m dubious at best on that latter replacement-potential premise).

With all due respect to Google, with Glass, introduced in 2013 and near-immediately thereafter spawning “glassholes” terminology:

and other technology trendsetters—Snap’s multiple generations of Spectacles, for example:

I’d suggest that the smart glasses category really didn’t “get legs” until Meta and partner Ray-Ban’s second-generation AI Glasses, released in October 2023. Stories, the first-generation product introduced in September 2020 by EssilorLuxottica (Ray-Ban’s parent company) and then-Facebook (rebranded as Meta Platforms a year later) had adopted the iconic Ray-Ban style:

but it was fundamentally a content capture and playback device (plus a fancy Bluetooth headset to a wirelessly tethered smartphone), containing an integrated still and video camera, stereo speakers, and a three-microphone (for ambient noise suppression purposes) array.

The second-gen AI Glasses first and foremost make advancements on these fundamental fronts:

  • A still image capture resolution upgrade from 5 Mpixels to 12 Mpixels
  • Video capture up-resolution from 720p to 1080p (plus added livestreaming support)
  • 8x the integrated content storage capacity (from 4 GBytes to 32 GBytes)
  • An enhanced integrated speaker array with two ports per transducer and virtual surround sound playback support, and
  • A now-five-microphone array for enhanced ambient noise reduction, also capable of “immersive audio capture”
Ray-Ban Meta AI glasses

They’re also now moisture (albeit not dust) resistant, with an IPX4 rating, for example. But the key advancement, at least to this “tech-head”, is their revolutionary AI-powered “smarts” (therefore the product name), enabled by the combo of Qualcomm’s Snapdragon AR1 Gen 1, Meta’s deep learning models running both resident and in the “cloud”, and speedy bidirectional glasses/cloud connectivity. AI features include real-time language Live Translation plus AI View, which visually identifies and audibly provides additional information about objects around the wearer (next-gen glasses due later this year will supposedly also integrate diminutive displays).

The broad market seems to agree; in mid-February, EssilorLuxottica announced that it’d already sold 2 million pairs of Ray-Ban Meta AI Glasses in their first year-plus and aspired to hit a 10 million-per-year run rate by the end of 2026. As I noted in my 2025 CES coverage:

Ray-Ban and Meta’s jointly developed second-generation smart glasses were one of the breakout consumer electronics hits of 2024, with good (initial experience, at least) reason. Their constantly evolving AI-driven capabilities are truly remarkable, on top of the first-generation’s foundational still and video image capture and audio playback support.

That said, within that same coverage, I also wrote:

I actually almost bought a pair of Ray-Ban Meta glasses during Amazon’s Black Friday…err…week-plus promotion to play around with for myself (and subsequently cover here at EDN, of course). But I decided to hold off for the inevitable barely-used (if at all) eBay-posting markdowns to come.

As it turns out, though, and as any of you who read my recent Mercari diatribe may have already noticed, I didn’t end up waiting very long. At Meta Connect in September 2024, EssilorLuxottica had unveiled a limited-edition (only 7,500 pairs worldwide) transparent version of the AI Glasses (versus the more recent translucent limited-edition ones), priced at $429, which sold out near-immediately. I didn’t snag a pair then—admittedly, I didn’t even know they existed at the time. But I ended up buying someone else’s barely used pair a few months ago, at a “bit” of a markup from the original MSRP (but to be clear, nowhere near the five-digit price tags I usually see them posted for on eBay, etc.). Some stock images to start:

(no, there will not be any pictures of them on my head. Trust me, it’s for the best for all of us.).

So, why’d I buy them? Part of the motivation, admittedly, combines my earlier noted belief that they’re the first truly impactful entrant in this embryonic product category, therefore destined to be a historical classic, with the added limited-edition cachet of this particular variant. Plus:

  • Nobody’s going to confuse these with a normal pair of Ray-Ban sunglasses, such as might be the case with the ones below. I don’t want anyone belatedly noticing the camera and pseudo-camera in the corners of the frame and then going all paranoid on me, worried that I might have been surreptitiously snapping pictures or shooting video of them.

  • They’ve got transition lenses, as the stock photos show. Candidly, it creeps me out when I see someone wearing conventional always-tinted sunglasses indoors. But I still want them to be sunglasses when I’m outdoors (versus also-available always-clear lens variants). And I’d like to use them both places.
  • And, because they’re transparent—no, I’m not going to take mine apart—I can still do a semblance of a teardown on them for you today, in combo with a video I found of someone who did take theirs apart…for science…and viewer traffic revenue, of course.

(in-advance warning; some of the dissection sequences in this teardown video are quite brutal on the eyes and ears, IMHO at least!)

The AI glasses teardown

Follow along as I showcase my AI glasses, periodically referencing specific timestamps in the above video for added visual data point evidence. I’ll start out with some (already opened by the previous owner, obviously) outer box shots, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. This particular AI Glasses variant comes in only one style—Wayfarer—and size option—M (50-22)—and they weigh 48.6 grams/1.71 ounces, with the charging case coming in at an incremental 133 grams/4.69 ounces:

Open sesame:

Before continuing, this shot shows one of the uniqueness aspects of these limited-edition glasses. Standard ones’ cases have a tan-color patina instead:

Onward:

I like black better. Don’t you? Totally worth the incremental price tag all by itself (I jest):

The front LED communicates the charge status of both the case and the glasses inside it:

The USB-C connector on the bottom:

is…drum roll…for charging purposes (surprising absolutely no one by saying that, I realize):

Open sesame, redux:

I’m sure that the cleaning cloth was more neatly packaged when the glasses were brand new; this is how it was presented to me when I received the gently used unit:

Look closely and you’ll be able to already see the “3301/7500” limited-edition custom mark on the inside of the right temple (or, if you prefer, “arm”…“temple” is apparently the official name) of mine (#3,301 of 7,500 total, if the verbiage is unclear).

It matches another custom mark on the inside of the case flap (with the “3301” hand-“drawn”, if not already obvious):

And here are a few shots of the charging “dock” built into the case. Per the earlier teardown video (starting at ~6:00), the case’s embedded battery has a capacity of 3,034 mAh (he says “milliamps” in the video, but I’d guessed that my alternative measurement-unit version was what he actually meant, and the markings at the center of the cell shown in the closeup at ~6:20 concur…although the markings on the left end of the cell seem to say 2,940 mAh?).

Now, for our patient, beginning with what the glasses look like immediately post-case removal:

The circular structure in the corner of the left endpiece (upper right corner from this head-on vantage point…I received research validation of my initial suspicion that glasses’ parts are traditionally location-referenced from the wearer’s perspective) is indeed the camera:

In the opposite (upper right from the wearer’s perspective) corner is what looks like another camera, although it’s not:

It’s instead (first and foremost, at least) the capture LED, brighter than the one on the Stories precursor, which alerts those around you when you’re shooting photos or video. For still images, it blinks (along with making a shutter-activation sound in the speakers, for wearer benefit):

while for video, it remains illuminated the entire time you’re recording. That said, it also has sensing capabilities, specifically to ensure others’ privacy. If you attempt to cover the capture LED with a piece of tape, etc., the camera won’t work. By the way, in the first image of this latest series, you may have also noticed other circuitry embedded in the rim on both sides of the right lens, but not around the left lens. Hold that thought for a revisit shortly.

Here’s what they look like from above, both with the temples still folded:

and fully unfolded:

and from below, with the temples now partially unfolded:

Next, let’s dive in for a closer view, beginning with the outsides of the temples. The left one, as the video shows in more detail beginning at ~1:40, contains one of the speakers (with upper and lower ports), two of the microphones (one pointed downward toward the wearer’s mouth, the other outward for ambient noise capture and subtraction purposes) and the main system PCB, comprising 32 GBytes of flash memory (along with, I suspect, an unknown amount of DRAM in a multi-die “sandwich”) and the aforementioned Qualcomm’s Snapdragon AR1 Gen 1 SoC. The packaged memory and application processor are individually covered by Faraday cages (which the video narrator refers to as “cans”), and EMC shielding (plus thermal spreading, I suspect) material spans the entirety of the PCB. Here’s an overview of the outer left temple:

along with a closer look at the front half:

and the back half of it:

Within the right temple, conversely (see the video beginning at ~4:00), although you’ll again unsurprisingly find a matching speaker (and ports) and two-microphone set, the remainder of the “guts” is quite different. First off is the battery, in this case 154 mAh (again misspoken as mA, and mentioned at ~8:30). The narrator also believes that he’s found the Bluetooth and Wi-Fi antenna structure in the right temple, leading to a reasonable assumption that the wireless transceiver chip is there, too. And there’s also a large capacitive touch sensor structure on the outside, used for glasses control via both taps of and pressed-finger movement along it.

Here’s an overview photo of the outer right temple:

Now, a close-up of the front half:

and the back half of it:

Remember those outward-facing mics I mentioned earlier? Haven’t seen them yet, have you? I finally found them while writing this piece, thanks to an illuminated loupe, even though I’d already known their general location within (or near) the Ray-Ban logos on both sides. Post your specific-location guesses in the comments, and I’ll put the answer there a few weeks post-publication!

Before examining both temples’ insides, let’s first cover their upper and lower regions. Back to the left temple; here’s an overview of the top edge first:

and a close-up of the upper speaker port.

Now, the underside of the left temple:

with another speaker port along with, ahead of it, the aperture for the down-facing microphone.

The right temple is similar, with one exception: a topside switch near the hinge. But you’ve seen it in action before; it’s the camera shutter button. Topside first:

and underside.

Now for the temples’ inner sides. Left first, beginning with an overview shot:

The front half:

A closeup of the power switch (Pro tip: Don’t forget, as I did, to turn the glasses on prior to attempting to pair them with your smartphone. Simply ensuring they’re in the case is insufficient!):

Now the back half:

Moving over to the right side now:

Front half:

There’s that limited-edition notation (numerically matching the other one, thankfully) again!

And the back half:

There’s one area of the glasses left to explore, with many more interesting bits encompassed to showcase than you might initially expect. Behold the backside of the front frame:

Not much of note in the left half, aside from that dark area running horizontally through the bridge and over the lens, which I’ll discuss in detail next:

The right half, on the other hand, is hardware-rich (as alluded to earlier in this writeup):

That embedded structure at far right is the wearer-viewable notification LED, with varying colors (and steady or blinking states) dependent on the glasses’ mode and what’s being communicated:

And the assemblage on the left side of the lens, running along the right portion of the nose piece? It has dual (at least) purposes. Those who remember the charging contacts inside the case may be unsurprised to learn that there’s a matching set here. And those with really good memories may also recall that I earlier mentioned a five-microphone array, although we’ve so far only seen four of ‘em. Where’s the fifth? It’s here, too:

Regarding the mysterious dark region spanning the entirety of the top of the front frame, notably including the bridge, you may have already caught that the camera shutter button is on the opposite side of the glasses from the actual camera. More generally, as already noted, there’s no shortage of bidirectional interaction between the power, communications, and touch electronics on the right and the processing electronics on the left, not to mention the bilateral audio input and output facilities. Turns out there’s a whole mess of wiring in the front frame, as a particularly brutal segment of the teardown video starting at ~4:35 reveals. Fair warning: the use of hand tools, bare hands, and (ultimately) a Dremel to chop the front frame into pieces isn’t for the squeamish. That said, I did learn a new term: insert injection molding. From Wikipedia:

Pre-moulded or machined components can be inserted into the cavity while the mould is open, allowing the material injected in the next cycle to form and solidify around them. This process is known as insert moulding and allows single parts to contain multiple materials.

One feature implementation remains a mystery, although I have a theory. There’s a Wear Detection sensor somewhere that detects whether you’ve put the glasses on your face. I’ve read lots of theories online as to how this function might be implemented, but nobody seems to have definitively figured it out yet. One thing that I can say with certainty from my experimentation is that the sensor isn’t on either temple: covering both with the protective paper “sleeves” they come with from the factory leaves Wear Detection still working.

My guess is that there’s actually no special sensor at all; that the glasses instead detect the slight current flow caused by skin conduction (also known as galvanic skin response and electrodermal activity, among other terminology) between the two charging contacts when pressed up against the wearer’s nose. Part of the rationale for my theory is that it incurs no additional bill-of-materials cost, assuming that the power management controller between the charging contacts and the battery is sufficiently intelligent to handle this additional discernment task. And part of it is that the function can be user-disabled if found to be unreliable, which inconsistent electrodermal activity certainly is, both person-to-person and moment-to-moment. Not to mention that if you’ve got a wide nose, it may never touch the bridge underside at all.

And with that, nearing 3,000 words and as-always mindful of Aalyia’s wrath (more accurately: her precious, not-unlimited time and energy), I’ll wrap up for today with one more photo, taken using my AI Glasses of the view looking west from my back deck toward the Rocky Mountains:

I’m not terribly fond of the 3024×4032 pixel portrait orientation (which can’t be helped unless I took pictures with my head at an awkward 90° to the usual vertical instead, I suppose). But otherwise, not bad, eh? More on-head AI Glasses usage observations to come in future posts. Until then, let me know what you think so far in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post Ray-Ban Meta’s AI glasses: A transparency-enabled pseudo-teardown analysis appeared first on EDN.

Wafer-scale chip claims to offer GPU alternative for AI models

Mon, 06/23/2025 - 12:49

Wafer-scale technology is making waves again, this time promising to enable artificial intelligence (AI) models with trillions of parameters to run faster and more efficiently than traditional GPU-based systems. Engineers at The University of California, Riverside (UCR) claim to have developed a chip the size of a frisbee that can move massive amounts of data without overheating or consuming excessive electricity.

They call these massive chips wafer-scale accelerators; Cerebras manufactures them on dinner plate-sized silicon wafers. These wafer-scale processors can deliver far more computing power with much greater energy efficiency, traits that are essential as AI models continue to grow larger and more demanding.

The dinner plate-sized silicon wafers are in stark contrast to postage stamp-sized GPUs, which are now considered essential in AI designs because they can perform multiple computational tasks like processing images, language, and data streams in parallel.

However, as AI model complexity increases, even high-end GPUs are starting to hit performance and energy limits, says Mihri Ozkan, a professor of electrical and computer engineering in UCR’s Bourns College of Engineering and the lead author of the paper published in the journal Device.

Figure 1 Wafer-Scale Engine 3 (WSE-3), manufactured by Cerebras, avoids the delays and power losses associated with chip-to-chip communication. Source: The University of California, Riverside

“AI computing isn’t just about speed anymore,” Ozkan added. “It’s about designing systems that can move massive amounts of data without overheating or consuming excessive electricity.” He compared GPUs to busy highways, which are effective, but traffic jams waste energy. “Wafer-scale engines are more like monorails: direct, efficient, and less polluting.”

The Cerebras Wafer-Scale Engine 3 (WSE-3), analyzed by the UCR engineers, contains 4 trillion transistors and 900,000 AI-specific cores on a single wafer. Moreover, as Cerebras reports, inference workloads on the WSE-3 system use one-sixth the power of equivalent GPU-based cloud setups.

Then there is Tesla’s Dojo D1, another wafer-scale accelerator, which contains 1.25 trillion transistors and nearly 9,000 cores per module. These wafer-scale chips are engineered to eliminate the performance bottlenecks that occur when data travels between multiple smaller chips.

Figure 2 Dojo D1 chip, released in 2021, aims to enhance full self-driving and autopilot systems. Source: Tesla

However, as UCR’s Ozkan acknowledges, heat remains a challenge. With thermal design power reaching 10,000 watts, wafer-scale chips require advanced cooling. Here, Cerebras uses a glycol-based loop built into the chip package, while Tesla employs a coolant system that distributes liquid evenly across the chip surface.

Related Content

The post Wafer-scale chip claims to offer GPU alternative for AI models appeared first on EDN.

Installing a car battery

Fri, 06/20/2025 - 16:53

I went to start my car (a 2006 Toyota Camry) and when I turned the key in the ignition switch, NOTHING happened. The car was utterly inert. The radio wouldn’t play, the passenger cabin ceiling light was dark and I wasn’t going anywhere, at least not in that vehicle.

I guessed that the car’s battery had failed, but being an engineer (you know the type), I just had to measure the battery’s terminal voltage. I went and got my trusty Sears digital multimeter, raised the hood of the car to expose the battery, and touched one of the multimeter probes to the positive post of the battery at the exact spot you see in the image below. That one probe slipped into a small crevice between the battery’s positive post and that post’s clamp, and when it did, there was a spark right there at the spot where I’d stuck my probe. Nothing was connected (yet) to the other multimeter probe, which left me wondering: “Why did I see that spark?”

Figure 1 The battery that was probed and the location where the unexpected spark occurred.

This was in May of 2025, which meant that the battery was two years old, having been installed in May of 2023. The battery had not been touched at all during those two years, which led to the problem at hand. Gradually, the two post clamps had worked themselves loose. The clamp serving the positive post had actually lost its electrical connection to that post. When my multimeter probe got involved, the spark arose from the battery making connection again, via the metal of the probe tip, to all of the stuff the battery was normally called upon to feed.

Now that I knew what was wrong, I set about making repairs by tightening the two post clamps, BUT there was a very specific safety issue at hand to which I want to draw your most alert attention.

My car uses an internal combustion engine, which incorporates a 12-V lead-acid battery whose negative post is grounded to the frame of the car. This has been a conventional design approach for many years. I think that pre-1950 or so cars with 6-V lead-acid batteries had their positive posts grounded to the car frames, but that’s a whole other thing.

When you are going to do any work on a car such as my own, where that work involves the car’s battery, you MUST, MUST, MUST first disconnect the clamp that connects to the battery’s negative post. If you fail to do so and you accidentally happen to make a connection with some tool (a socket wrench, maybe) from the battery’s positive post to the car frame, that accidental connection will short-circuit the battery. There will be a flash, and according to something I once read, the battery might even explode.

You do not want to risk having that happen.

I disconnected the clamp from the negative post. I then disconnected the clamp from the positive post and scoured both post surfaces and their clamp surfaces. Next, I reattached the positive post’s clamp, then the negative post’s clamp (in that order), and started the car.

Everything worked. Everything was back to normal. I drove it to the grocery store and back again. We needed some milk.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post Installing a car battery appeared first on EDN.

SP4T RF switch delivers strong linearity, low loss

Thu, 06/19/2025 - 18:36

The PE42448 SP4T RF switch from pSemi spans 10 MHz to 6 GHz, handling peak power of 52 dBm with IIP3 linearity of 88.5 dBm. The UltraCMOS SOI switch also offers low insertion loss, with typical values of 0.6 dB at 2.6 GHz and 0.7 dB at 3.8 GHz. These characteristics make the PE42448 well-suited for hybrid analog beamforming and 5G massive MIMO systems.

Housed in a 20-lead, 4×4-mm LGA package, the PE42448 offers a compact monolithic alternative to complex RF switch assemblies. Features like single logic control for device pairing and straightforward power sequencing ease integration in beamforming systems, as well as in test, land mobile radio, and general-purpose applications. Additionally, the absence of DC bias on the RF ports eliminates the need for blocking capacitors—simplifying system design.

The PE42448 switch operates with a 5-V supply and 1.8-V control voltage across a temperature range of –40°C to +115°C. Pin-for-pin compatibility with the earlier PE42443 and PE42444 SP4T RF switches enables seamless integration.

PE42448 samples are available now with commercial availability in July 2025.

PE42448 product page 

pSemi

The post SP4T RF switch delivers strong linearity, low loss appeared first on EDN.

Smart ToF sensor tracks presence and gestures

Thu, 06/19/2025 - 18:36

ST’s FlightSense 8×8 multizone ToF ranging sensor employs tailored AI algorithms for enhanced human presence detection (HPD) in laptops and PCs. The fifth-generation VL53L8CP reduces power consumption by over 20% with adaptive screen dimming, which tracks head orientation to dim the screen when the user isn’t looking. This also helps extend battery life by reducing unnecessary energy use.

In addition to HPD and multi-person detection, the VL53L8CP supports gesture and hand posture recognition, along with wellness monitoring through human posture analysis. Features such as walk-away lock and wake-on-approach enhance both security and convenience. Unlike webcam-based solutions, the sensor protects user privacy without capturing images or relying on the camera.

With AI-based, low-power algorithms, the VL53L8CP ToF ranging sensor integrates seamlessly into PC sensor hubs, offering a complete hardware and software solution. All FlightSense proximity and ranging sensors include comprehensive documentation, example source code, and software APIs compatible with a wide range of MCUs and processors.

FlightSense product page

STMicroelectronics

The post Smart ToF sensor tracks presence and gestures appeared first on EDN.

S-band switched filters sharpen radar agility

Thu, 06/19/2025 - 18:36

Qorvo offers two S-band switched filter bank (SFB) modules that enhance frequency agility and spectral control in defense and aerospace radar systems. The QPB1034 and QPB1036 integrate bulk acoustic wave (BAW) filters and fast-switching logic in a compact 6.0×6.0×0.78-mm surface-mount package.

Optimized for lower S-band frequencies, the QPB1034 provides two switches flanking four filters and a bypass path. The QPB1036 supports broader frequency coverage and higher channel density, incorporating six filters and a bypass. Both modules improve radar system performance with low insertion loss and BAW technology’s high out-of-band rejection.

“Qorvo’s switched filter bank modules enable radar designers to reduce size and complexity without sacrificing performance,” said Dean White, senior director of Defense and Aerospace Market Strategy at Qorvo. “Our BAW technology enables unmatched rejection and channel density in a fully integrated form factor—making these solutions ideal for agile radar front ends.”

Both the QPB1034 and QPB1036 SFB modules are now sampling.

QPB1034 product page 

QPB1036 product page 

Qorvo

The post S-band switched filters sharpen radar agility appeared first on EDN.

Sensors track RH/temp in tight spaces

Thu, 06/19/2025 - 18:35

Two digital humidity and temperature sensors from Sensirion come in compact DFN packages with removable protective covers for added durability during handling and deployment. With package dimensions of just 1.5×1.5×0.5 mm, the SHT40-AD1P-R2 and SHT41-AD1P-R2 provide accurate measurements in space-constrained applications.

The SHT40-AD1P-R2 delivers ±1.8% RH accuracy (maximum ±3.5%) across 0 to 100% RH, with a response time (τ63%) of 4 s. Temperature accuracy is ±0.2°C, with a response time of 2 s. The SHT41-AD1P-R2 offers the same temperature performance and a tighter maximum RH accuracy of ±2.5%.

Both sensors integrate easily into a wide range of devices and systems via an I²C interface with a fixed 0x45 address. They require a 1.08-V to 3.6-V supply, draw 0.4 µA on average, and operate across a -40°C to +125°C temperature range.
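To put the accuracy figures in host-software terms: each measurement arrives as raw 16-bit words that firmware converts to °C and %RH. The scaling below reflects my reading of Sensirion’s SHT4x datasheet; treat it as an assumption and verify against the official document, and note the raw values are made up for illustration:

```python
# Convert raw SHT4x-family measurement words to degC and %RH.
# Scaling constants reflect my reading of Sensirion's SHT4x datasheet
# (an assumption -- verify against the official document).

def sht4x_convert(t_raw, rh_raw):
    """t_raw, rh_raw: 16-bit words from the sensor's measurement response."""
    temp_c = -45.0 + 175.0 * t_raw / 65535.0
    rh = -6.0 + 125.0 * rh_raw / 65535.0
    return temp_c, min(max(rh, 0.0), 100.0)  # clamp RH to 0..100%

# Mid-scale raw words land near 42.5 degC and 56.5 %RH.
print(sht4x_convert(0x8000, 0x8000))
```

With ±0.2°C temperature accuracy, one LSB of the 16-bit word (about 0.003°C here) is far below the sensor’s error budget, so the conversion itself adds negligible error.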

The SHT40-AD1P-R2 and SHT41-AD1P-R2 digital humidity and temperature sensors are now available through Sensirion’s global distribution network.

SHT40-AD1P-R2 product page 

SHT41-AD1P-R2 product page 

Sensirion

The post Sensors track RH/temp in tight spaces appeared first on EDN.

Siemens certifies EDA tools for Samsung nodes

Thu, 06/19/2025 - 18:35

Siemens Digital Industries Software expands Samsung Foundry collaboration, certifying more EDA tools for Samsung’s most advanced process technologies. The certifications cover Samsung’s FinFET and MBCFET processes, spanning 14-nm to 2-nm nodes (SF2/SF2P). Customers can use Siemens’ comprehensive Calibre, Solido, and Aprisa EDA software to design advanced semiconductor devices for manufacture at Samsung Foundry.

The two companies are also deepening their collaboration through joint research efforts and the development of several new solutions aimed at addressing some of the semiconductor industry’s most pressing challenges. As part of this partnership, the companies have introduced innovations that tackle critical design issues in areas such as power integrity, silicon photonics, analog mixed-signal reliability verification, and other essential domains.

Read Siemens’ full announcement about this expansion for more details.

Siemens Digital Industries Software 

The post Siemens certifies EDA tools for Samsung nodes appeared first on EDN.

Elaborations of yet another Flip-On Flop-Off circuit

Thu, 06/19/2025 - 17:17

Applications for using a single pushbutton to advance a circuit to its next logical state are legion. Typically, there are just “on” and “off” states, but there can be more. The heart of the circuit is a toggle flip-flop (or, for more states, a counter or shift register) which responds to a clock transition.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The successful circuit prevents the contact bounce of the mechanical pushbutton from generating more than one “clock” for every push and release of the button. It’s also desirable for the circuit to initialize upon power-up to a specific state and for the press of the pushbutton (from a human point of view) to immediately cause a state change. The basic circuit of Figure 1 has these features.

Figure 1 U1 is a Schmitt-trigger inverter and U2 a D-type flip-flop. The diodes are small-signal Schottky types. The pushbutton is normally open. See the text for a discussion of resistor and capacitor values.

Upon loss and discharge of the VDD supply, the Schottky diodes discharge C1 and C2 to nearly zero volts. Time constant R1C1 should be at least 10 times larger than the supply turn-on time so that the power-up sequence starts and ends with U2’s Q being cleared.

Also, upon power up, U1’s output starts out as a logic high and transitions low after R2 charges C2. Since U2’s active clocking transition is low to high, this leaves Q initialized low. The R2C2 time constant should be on the order of 1 second.

R3 is optional and limits initial C2 discharge currents when the normally open pushbutton is pressed. If R3 is used, it should be chosen so that momentary contact bounce closures nearly completely discharge C2 in 10 ms or less.

C2 and R2, along with the Schmitt-trigger inverter U1, work to prevent contact bounce from producing extra transitions, which would otherwise toggle flip-flop U2. After the pushbutton is released and R2 is starting to charge C2, additional button pushes will not toggle U2. This is because the output of U1 is still high and so cannot transition from low-to-high to toggle U2. This is an argument against making the R2C2 time constant too large.
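The timing constraints above are easy to sanity-check numerically. Here is a quick script; all component and supply-ramp values are illustrative assumptions, not values taken from the circuit:

```python
# Sanity-check the RC constraints described for the Figure 1 circuit.
# All component and supply-ramp values below are assumptions chosen
# for illustration only.

def tau(r_ohms, c_farads):
    """RC time constant in seconds."""
    return r_ohms * c_farads

# Power-up reset: R1*C1 should be at least 10x the supply turn-on time.
supply_turn_on = 5e-3          # assume a 5-ms supply ramp
r1, c1 = 1e6, 0.1e-6           # 1 MOhm, 100 nF -> tau = 100 ms
assert tau(r1, c1) >= 10 * supply_turn_on

# Debounce window: R2*C2 should be on the order of 1 second.
r2, c2 = 1e6, 1e-6             # 1 MOhm, 1 uF -> tau = 1 s
assert 0.5 <= tau(r2, c2) <= 2.0

# Optional R3: bounce closures should nearly fully discharge C2 in
# 10 ms or less; ~5 time constants gives ~99% discharge.
r3 = 1e3                       # 1 kOhm
assert 5 * tau(r3, c2) <= 10e-3

print(tau(r1, c1), tau(r2, c2), 5 * tau(r3, c2))
```

Swap in your own supply’s measured ramp time and the asserts flag any R/C combination that violates the article’s guidelines.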

Figure 2 shows how the circuit of Figure 1 can be extended into a multi-state 10-position switch with only one active high output at a time, or into a digital-to-analog converter (DAC).

Figure 2 A 10-position switch with only one active high output at a time, and a DAC are shown.

If fewer than 10 states are desired for the switch, the U2a “D” input can be connected to a different U3 output. For the DAC, resolution can be extended to 12 bits with 12 resistors. Monotonicity will be somewhat less than 12 bits, though, and even with 0.1% resistors, accuracy will be less still. To avoid excessive loading of the outputs, no resistor should be less than 10 kΩ.
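To make the 12-bit claim concrete, here is a model of a binary-weighted resistor DAC of the general kind described: logic outputs drive weighted resistors that sum into a high-impedance node. The weighting scheme is my assumption for illustration; the actual Figure 2 network may differ in detail:

```python
# Model of an n-bit binary-weighted resistor DAC: logic outputs drive
# weighted resistors summing into a high-impedance node. This weighting
# scheme is an illustrative assumption, not the article's exact network.

VDD = 5.0

def dac_out(code, n_bits, r_lsb=10e3):
    """Summing-node voltage for a binary code.
    r_lsb: smallest (MSB) resistor; per the text, keep it >= 10 kOhm."""
    assert r_lsb >= 10e3
    g_total = 0.0
    g_high = 0.0   # total conductance tied to logic-high outputs
    for i in range(n_bits):
        g = 1.0 / (r_lsb * 2 ** i)          # i = 0 is the MSB, gets r_lsb
        g_total += g
        if (code >> (n_bits - 1 - i)) & 1:  # is this bit driven high?
            g_high += g
    return VDD * g_high / g_total

# Zero code gives 0 V, full-scale gives VDD, MSB alone gives ~VDD/2.
full = (1 << 12) - 1
print(dac_out(0, 12), dac_out(full, 12), dac_out(1 << 11, 12))
```

Sweeping `code` and perturbing each resistor by ±0.1% is an easy way to see how quickly real-world tolerances eat into the theoretical 12-bit accuracy.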

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Elaborations of yet another Flip-On Flop-Off circuit appeared first on EDN.

Simple PWM interface can program regulators for Vout < Vsense

Wed, 06/18/2025 - 17:24

I recently published a Design Idea (DI) showing some very simple circuits for PWM programming of standard regulator chips, both linear and switching, “Revisited: Three discretes suffice to interface PWM to switching regulators.”

Figure 1 shows one of the topologies “Revisited” visited, where:

R1 = recommended value from U1 datasheet
DF = PWM duty factor = 0 to 1
R2 = R1/(Vout_max/Vsense – 1)
Vout = Vsense(R1/(R2/DF) + 1) = DF(Vout_max – Vsense) + Vsense
DF = (Vout/Vsense – 1)(R2/R1) = (Vout – Vsense)/(Vout_max – Vsense)
DF = (Vout – 0.8)/9.2 for parts shown

Figure 1 Five discrete parts comprise a circuit for linear regulator programming with PWM.
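The Figure 1 equations translate directly into code. A small script, using the Vsense = 0.8 V and Vout_max = 10 V values implied by the “parts shown” line:

```python
# Worked numbers for the Figure 1 design equations, using the values
# implied by the text: Vsense = 0.8 V, Vout_max = 10 V, so
# DF = (Vout - 0.8)/9.2.

VSENSE = 0.8
VOUT_MAX = 10.0

def r2_for(r1, vout_max=VOUT_MAX, vsense=VSENSE):
    """R2 = R1/(Vout_max/Vsense - 1)"""
    return r1 / (vout_max / vsense - 1)

def df_for(vout, vout_max=VOUT_MAX, vsense=VSENSE):
    """DF = (Vout - Vsense)/(Vout_max - Vsense)"""
    return (vout - vsense) / (vout_max - vsense)

def vout_for(df, vout_max=VOUT_MAX, vsense=VSENSE):
    """Vout = DF(Vout_max - Vsense) + Vsense"""
    return df * (vout_max - vsense) + vsense

print(df_for(5.0))     # duty factor needed for a 5.0-V output
print(vout_for(0.0))   # note DF = 0 floors out at Vsense, not 0 V
```

The `vout_for(0.0)` result makes the circuit’s limitation discussed next easy to see: the output can never be programmed below Vsense.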

Wow the engineering world with your unique design: Design Ideas Submission Guide

An inherent limitation of the Figure 1 circuit is its inability to program Vout < Vsense. Its minimum Vout = Vsense @ DF = 0. For most applications this doesn’t amount to much, if any, of a problem. But sometimes it would be useful, or at least convenient, for Vout to be zero (or thereabouts) when DF = zero. Figure 2 shows an easy modification that can make that happen, where:

R1 and R2 chosen as in Figure 1
(R4 + R5/2) = (5v – Vsense)/(Vsense/R1) – R2
R5 ~ R4/5
Vout = DF R1 Vsense(1/R2 + 1/R1) = DF Vout_max
DF = Vout/(R1 Vsense(1/R2 + 1/R1)) = Vout/Vout_max
DF = Vout/10 for part values shown

Figure 2 In order to make Vout programmable down to zero volts, add R4 and (optionally) R5 trimmer.

A cool feature of the Figure 1 topology is that, unlike some other schemes for digital power supply control, only the precision of R1, R2, and the regulator’s own internal voltage reference determines regulation accuracy. Precision is wholly independent of external voltage references. It remains equal to the precision of R1, R2, and Vsense (e.g., ±1%) for all output voltages.

Unfortunately, as the ancient maxim says, something’s (usually) lost when something’s gained. In gaining Vout < Vsense capability, the Figure 2 circuit loses that feature, and for outputs less than full scale, Vout precision becomes somewhat dependent on the +5v rail. This is where the R5 trimmer comes in handy. 

The design equation (R4 + R5/2) = (5v – Vsense)/(Vsense/R1) – R2 makes the values chosen for the R4, R5 pair dependent on the accuracy of the 5v rail. They can only be as correct as it is. This makes output voltages somewhat suspect, especially when they approach zero. Including R5 and adjusting it for Vout = 0 @ DF = 0 makes low Vout settings accurately programmable. If that isn’t a critical factor, then R5 can be omitted; just make R4 = (5v – Vsense)/(Vsense/R1) – R2.
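Running the Figure 2 equations through the same kind of script shows the modified circuit scaling linearly from zero. The R1 value here is an illustrative assumption (use the value your regulator’s datasheet recommends); Vsense and Vout_max follow the part values in the text:

```python
# Figure 2 design arithmetic: with R4 (and optionally R5) added, Vout
# scales linearly from 0 V with DF. R1 below is an illustrative
# assumption; Vsense = 0.8 V and Vout_max = 10 V follow the text.

VSENSE = 0.8
VOUT_MAX = 10.0
V5 = 5.0                                    # nominal +5v rail

R1 = 115e3                                  # assumed datasheet value
R2 = R1 / (VOUT_MAX / VSENSE - 1)           # same R2 rule as Figure 1
R4_PLUS_HALF_R5 = (V5 - VSENSE) / (VSENSE / R1) - R2

def vout(df):
    """Vout = DF * R1 * Vsense * (1/R2 + 1/R1) = DF * Vout_max"""
    return df * R1 * VSENSE * (1.0 / R2 + 1.0 / R1)

print(R2, R4_PLUS_HALF_R5)      # computed resistor targets
print(vout(0.0), vout(1.0))     # output now spans 0 V to Vout_max
```

Note that `vout(0.0)` is now exactly zero, unlike the Figure 1 topology, which bottoms out at Vsense.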

The simplicity of the arithmetic for computing DF from the desired Vout is also a desirable feature of Figure 2.

In closing: This DI revises an earlier submission: “Three discretes suffice to interface PWM to switching regulators.” My thanks go to commenters oldrev, Ashutosh Sapre, and Val Filimonov for their helpful advice and constructive criticism. And special thanks go to editor Shaukat for creating an environment friendly to the DI teamwork that made it possible.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Simple PWM interface can program regulators for Vout < Vsense appeared first on EDN.

Wheatstone bridge measurements with instrumentation amplifiers

Wed, 06/18/2025 - 15:28

What are signal-conditioning aspects for amplifiers that design engineers must grasp for precision applications? What are the design considerations for selecting and implementing a signal-conditioning solution for a Wheatstone bridge sensor? Here is a technology brief on instrumentation amplifiers (INAs) and ASSPs carrying out Wheatstone bridge measurements. It covers areas such as intrinsic noise, gain drift, nonlinearity, and diagnostics.
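As a minimal sketch of the measurement chain the brief covers, the classic bridge-output and in-amp gain relationships look like this (generic textbook formulas, not taken from the linked article; the strain-gauge numbers are made up for illustration):

```python
# Generic Wheatstone bridge + instrumentation amplifier arithmetic.
# Textbook formulas only, as a sketch; values are illustrative.

def bridge_out(vex, r1, r2, r3, r4):
    """Differential bridge output for excitation vex:
    left divider r1 (top) / r2 (bottom), right divider r3 (top) / r4 (bottom)."""
    return vex * (r4 / (r3 + r4) - r2 / (r1 + r2))

def ina_out(vdiff, gain, vref=0.0):
    """Instrumentation amplifier: Vout = G * Vdiff + Vref."""
    return gain * vdiff + vref

# Quarter bridge: one 350-ohm strain gauge with a +0.1% resistance change,
# 5-V excitation, in-amp gain of 100 around a 2.5-V reference.
vd = bridge_out(5.0, 350.0, 350.0, 350.0, 350.35)
print(vd, ina_out(vd, gain=100, vref=2.5))
```

Even this toy example shows why the brief’s topics matter: the bridge output is only about a millivolt, so in-amp noise, gain drift, and nonlinearity all land directly on the measurement.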

Read the full article at EDN’s sister publication, Planet Analog.

Related Content

The post Wheatstone bridge measurements with instrumentation amplifiers appeared first on EDN.

A teardown tale of two not-so-different switches

Tue, 06/17/2025 - 18:27

Eleven years ago, my wife and I experienced the aftereffects of our first close-proximity lightning blast here in the Rocky Mountain foothills, clobbering (among other things) both five-port and eight-port Gigabit Ethernet (GbE) switches, both of which ended up going under the teardown knife. The failure mechanism for the first switch ended up being non-obvious, in sharp contrast to the second, whose controller chip ended up with multiple holes blown in its package top:

One year (and a decade ago) later, lightning struck again. No Gigabit Ethernet switches expired this second time, although we still lost some other devices.

Fast forward to 2024, and…yep. This time four GbE switches ended up zapped, two of them eight-port and two more five-port (fate was apparently playing catch-up after previously taking a pass on switches). The former two will be showcased today, with the others following soon. Then there’s the three-bay NAS; you will already have seen that teardown by the time this piece is published. And another CableCard receiver (we’re three for three on those), along with another MoCA transceiver…you’ll get teardowns of those in the near future, too.

Today’s dissection patients are from the same supplier—TRENDnet. They hail from the same product family generation. And, as you’ll soon see, although their outsides are (somewhat) dissimilar, their insides are essentially identical (given the naming and release date similarities, that’s not exactly a surprise). Behold the metal-case TEG-S82g (hardware v2.0r, to be precise):

which, per Amazon’s listing, dates from September 2004, and the plastic-case TEG-S81g (again, hardware v2.0r), also initially available that same month and year:

Let’s start with the metal-case TEG-S82g. Following up on the stock photos shown earlier, here are some views of my specific device, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the TEG-S82g has dimensions of 150 x 97 x 28 mm/5.9 x 3.8 x 1.1 in. and weighs 364 g/12.8 oz.). Front:

Left side:

This next one, of the device’s backside, begs for a bit more explanation. Port 8, the one originally connected to one of the two spans of shielded Ethernet cable running around the outside of the house, is unsurprisingly the one that failed (hence the electrical tape I applied to identify it).

The other ports actually still work, at least for the first minute or few after I power on the switch, but eventually all the front panel LEDs begin blinking and further functionality ceases:

Onward. Right side:

Top:

and bottom:

Here’s its “wall wart”:

Those screw heads you might have noticed on both device sides? They’re our pathway inside:

Here’s our first view of the PCB inside:

Four screws hold it in place. Let’s get rid of those next:

Let’s see what we’ve got here. At the bottom are the eight Ethernet ports, next to (at the bottom right) the DC input power connector. Thick PCB traces running from there to the circuitry cluster in the upper right quadrant suggest that the latter handles power generation for the remainder of the board. And above, each two-port combo is a Bi-TEK FM-3178LLF dual port magnetic transformer. Here’s the specific one (at far right) associated with failed port 8:

At the top edge are (at far right) the power LED, next to eight activity LEDs, one for each of the ports. And below them is the system’s “brains”, a Realtek RTL8370N 8-port 10/100/1000 switch controller. It may very well be the same as the IC in the 8-port switch teardown from 11 years ago, although I can’t say for sure, as that one had chunks of its packaging (therefore topside markings) blown away! That said, this design does use the same transformers as last time.

Here’s a close-up of the RTL8370N and the aforementioned circuitry to its right:

Now let’s flip the PCB over and have a look at its backside:

No obvious evidence of damage here, either. Here’s another port 8 area closeup (as I was writing this, I paused to revisit the hardware and confirm that those white globs are just dust):

Now for its plastic-case TEG-S81g sibling, with listed dimensions again 150 x 97 x 28 mm/5.9 x 3.8 x 1.1 in. (albeit this time tapered in the front), although the weight is (unsurprisingly, given the shift in case material construction) decreased this time around: 186 g/6.6 oz.:

This time, port 5 failed. The other seven ports remain fully functional to this very day, although for how much longer I can’t say; therefore, I’ve decided to retire it from active service, as well, in the interest of future-hassle avoidance:

The “wall wart” looks different this time, but the specs are the same:

No screws on the case sides this time, as you may have already noticed, but remove the four rubber “feet” on the underside:

and underneath the front two are visible screw heads.

You know what comes next:

And we’re in (with the tape still stuck to the top):

Let’s put that tape back in place so I can keep track of which port (5) is the failing one:

The earlier-shown two screws did double-duty, not only holding the two halves of the chassis together but also helping keep the PCB inside in place. Two more, toward the back, also need to be dealt with before the PCB can be freed from its plastic-case captivity:

That’s better:

Another set of closeups, first of the affected-port region:

and the bulk of the topside circuitry:

And now, flipping the PCB over, another set as before:

I hope you’ll agree with me on the following two points:

  • The two PCBs look identical, and
  • There’s no visually obvious reason why either one failed.

So then, what happened? Let’s begin with the plastic-case TEG-S81g. Truth be told, the tape on top of port 5 originally existed so that I could remember which port was bad down the road, after I pressed the switch back into service in the same “use it until it completely dies” spirit that prompted my recent UPS repair. That said, long-term sanity aspirations eventually overrode my usual thriftiness. My guess is that, given that the remainder of the ports (and therefore the common controller chip that manages them) remain operational, port 5’s associated transformer got zapped.

And the metal-case TEG-S82g? Here, I suspect, the lightning-strike spike effects made it through the port 8 transformer, all the way to the Realtek RTL8370N controller nexus, albeit interestingly with derogatory effects seemingly appearing only after the chip had been operational for a bit and had “warmed up” (note, as previously mentioned in the earlier eight-port GbE switch teardown, the lack of a heatsink in this design). As the block diagram in the RTL8370N datasheet makes clear, the chip is highly integrated, including all the ports’ MAC and PHY circuits (among other things).

~1,300 words in, that’s “all” I’ve got for you today. Please share your thoughts in the comments!

 Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post A teardown tale of two not-so-different switches appeared first on EDN.

Unlocking compound semiconductor manufacturing’s potential requires yield management

Tue, 06/17/2025 - 11:05

This article is the second in a series from PDF Solutions on why adopting big data platforms will transform the compound semiconductor industry. The first part “Accelerating silicon carbide (SiC) manufacturing with big data platforms” was recently published on EDN.

Compound semiconductors such as SiC are revolutionizing industries with their ability to handle high-power, high-frequency, and high-temperature technologies. However, as they climb in demand across sectors like 5G, electric vehicles, and renewable energy, the manufacturing challenges are stacking up. The semiconductor sector, particularly with SiC, trails behind the mature silicon industry when it comes to adopting advanced analytics and streamlined yield management systems (YMS).

The roadblock is high defectivity levels in raw materials and complex manufacturing processes that stretch across multiple sites. Unlocking the full potential of compound semiconductors requires a unified and robust end-to-end yield management approach to optimize SiC manufacturing.

A variety of advanced tools, industry approaches, and enterprise-wide analytics hold the potential to transform the growing field of compound semiconductor manufacturing.

Addressing challenges in compound semiconductor manufacturing

While traditional silicon IC manufacturing has largely optimized its processes, the unique challenges posed by SiC and other compound semiconductors require targeted solutions.

  • Material defectivity at the source

Unlike silicon ICs, where costs are distributed across numerous fabrication steps, SiC manufacturing sees the most significant costs and yield challenges in the early stages of production, such as crystal growth and epitaxy. These stages are prone to producing defects that may only manifest later in the process during electrical testing and assembly, leading to inefficiencies and high costs.

As material defects evolve during manufacturing, traceability is essential to pinpoint their origin and mitigate their impact. Yet, the lack of robust systems for tracking substrates throughout the process remains a significant limitation.

  • Siloed data and disparate systems

Compound semiconductor manufacturing often involves multi-site operations where substrates move between fabs and assembly facilities. These operations frequently operate on legacy systems that lack standardization and advanced data integration capabilities.

Data silos created by disconnected manufacturing execution systems (MES) and statistical process control (SPC) tools hinder enterprises from forming a centralized view of their production. Without cross-operational alignment enabled by unified analytics platforms, root cause analysis and yield optimization are nearly impossible.

  • Nuisance defects and variability

Wafer inspection in compound semiconductors reveals a high density of “nuisance defects”—spatially dispersed points that do not affect performance but can overwhelm defect maps. Distinguishing between critical and benign defects is critical to minimizing false positives while optimizing resource allocation.
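The critical-versus-nuisance screening described above can be sketched with a simple spatial filter: clustered defects are more likely systematic and yield-relevant, while isolated, spatially dispersed points are candidates for the nuisance bin. The coordinates, radius, and neighbor threshold below are illustrative assumptions, not parameters from any real inspection tool:

```python
# Hypothetical sketch: separate clustered (likely critical) defects from
# spatially dispersed "nuisance" points on a wafer defect map using a
# simple neighbor-count filter. The 2 mm radius is an assumed value.
from math import hypot

def classify_defects(points, radius=2.0, min_neighbors=2):
    """Label each (x, y) defect 'critical' if it has at least
    min_neighbors other defects within radius (mm), else 'nuisance'."""
    labels = []
    for i, (xi, yi) in enumerate(points):
        neighbors = sum(
            1 for j, (xj, yj) in enumerate(points)
            if j != i and hypot(xi - xj, yi - yj) <= radius
        )
        labels.append("critical" if neighbors >= min_neighbors else "nuisance")
    return labels

# A tight cluster of three defects plus two isolated points:
defects = [(10.0, 10.0), (10.5, 10.4), (9.8, 10.9), (50.0, 5.0), (-30.0, 40.0)]
print(classify_defects(defects))
# → ['critical', 'critical', 'critical', 'nuisance', 'nuisance']
```

Production systems would use more sophisticated clustering and defect-type metadata, but the principle is the same: suppress the dispersed background so real signatures stand out.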

Furthermore, varying IDs for substrates through processes like polishing, epitaxy, and sawing hamper effective wafer-level traceability (WLT). Using unified semantic data models can alleviate confusion stemming from frequent lot splits, wafer reworks, and substrate transformations.

How big data analytics and AI catalyze yield management

Compound semiconductor manufacturers can unlock yield lifelines by deploying comprehensive big data platforms across their enterprises. These platforms go beyond traditional point analytics tools, providing a unified foundation to collect, standardize, and analyze data across the entire manufacturing spectrum.

  • Unified data layers

The heart of end-to-end yield management lies in breaking down data silos through an enterprise-wide data layer. By standardizing data inputs from multiple MES systems, YMSs, and SPC tools, manufacturers can achieve a holistic view of product flow, defect origins, and yield drop-off points.

For example, platforms using standard models like SEMI E142 facilitate single device tracking (SDT), enabling precise identification and alignment of defect data from crystal growth to final assembly and testing.
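Single device tracking of this kind boils down to maintaining a lineage table across every substrate transformation, so a failure found at final test can be walked back to its source material. A minimal sketch, with invented IDs and step names (SEMI E142 defines the actual substrate-mapping formats):

```python
# Hypothetical wafer-level traceability sketch: a lineage table maps each
# new substrate ID created by a process step (slice, polish, saw) back to
# its parent, so a failing device can be traced to its source boule.
# The ID scheme and step names here are illustrative only.

lineage = {
    "EPI-0042": ("polish", "SUB-0042"),
    "SUB-0042": ("slice", "BOULE-007"),
    "DIE-0042-17": ("saw", "EPI-0042"),
}

def trace_to_origin(unit_id, lineage):
    """Walk parent links until no parent is recorded; return the full path."""
    path = [unit_id]
    while unit_id in lineage:
        _step, unit_id = lineage[unit_id]
        path.append(unit_id)
    return path

print(trace_to_origin("DIE-0042-17", lineage))
# → ['DIE-0042-17', 'EPI-0042', 'SUB-0042', 'BOULE-007']
```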

  • Root cause analysis tools

Big data platforms offer methodologies like kill ratio (KR) analysis to isolate critical defect contributors, optimize inspection protocols, and rank manufacturing steps by their yield impact. For example, a comparative KR analysis on IC front-end fabs can expose the interplay between substrate supplier quality, epitaxy reactor performance, and defect propagation rates. These insights lead to actionable corrections earlier in production.

By ensuring that defect summaries feed directly into analytics dashboards, enterprises can visualize spatial defect patterns, categorize issues by defect type, and thus rapidly deploy solutions.
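A minimal kill-ratio calculation can make the idea concrete: for each defect class, compute the excess failure probability of dies carrying that defect over the defect-free baseline. The die population below is invented for illustration:

```python
# Hypothetical kill-ratio (KR) sketch: compare the failure rate of dies
# carrying each defect class against the baseline failure rate of
# defect-free dies. Die records and class names are illustrative.
from collections import defaultdict

def kill_ratios(dies):
    """dies: list of (defect_class_or_None, passed_final_test: bool)."""
    fails = defaultdict(int)
    totals = defaultdict(int)
    for defect, passed in dies:
        totals[defect] += 1
        if not passed:
            fails[defect] += 1
    baseline = fails[None] / totals[None]  # defect-free fail rate
    return {
        d: round(fails[d] / totals[d] - baseline, 3)  # excess fail probability
        for d in totals if d is not None
    }

dies = [(None, True)] * 90 + [(None, False)] * 10 \
     + [("scratch", False)] * 8 + [("scratch", True)] * 2 \
     + [("particle", False)] * 3 + [("particle", True)] * 7
print(kill_ratios(dies))
# → {'scratch': 0.7, 'particle': 0.2}
```

Here "scratch" defects kill far more dies than "particle" defects, so inspection recipes and corrective actions would be prioritized accordingly.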

  • Predictive analytics and simulation

AI-driven predictive tools are vital for anticipating potential yield crashes or equipment wear that can bottleneck production. Using historical defect patterns and combining them with contextual process metadata, yield management systems can simulate “what-if” outcomes for different manufacturing strategies.

For instance, early detection of a batch with high-risk characteristics during epitaxy can prevent costly downstream failures during assembly and final testing. AI-enhanced traceability also enables companies to correlate downstream failure patterns back to specific substrate lots or epitaxy tools.

  • SiC manufacturing case study

Consider a global compound semiconductor firm transitioning to 200-mm SiC wafers to expand production capacity. By deploying a big data-centric YMS across multi-site operations, the manufacturer would achieve the following milestones within 18 months:

  • Reduction of nuisance defects by 30% post-implementation of advanced defect stacking filters.
  • Yield improvement of 20% via optimized inline inspection parameters identified from predictive KR analysis.
  • Defect traceability enhancements enabling root cause identification for more than 95% of module-level failures.

These successes underscore the importance of incorporating AI and data-driven approaches to remain competitive in the fast-evolving compound semiconductor space.

Building a smarter compound semiconductor fabrication process

The next frontier for compound semiconductor manufacturing lies in adopting fully integrated smart manufacturing workflows that include scalability in the data architecture, proactive process control, and an iterative improvement culture.

  • Scalability in data architecture

Introducing universal semantic models enables tracking device IDs across every transformation from input crystals to final modules. This end-to-end visibility ensures enterprises can scale into higher production volumes seamlessly while maintaining enterprise-wide alignment.

  • Proactive process control

Setting an enterprise-wide baseline for defect classification, detection thresholds, and binmap merging algorithms ensures uniformity in manufacturing outcomes while minimizing variability stemming from site-specific inconsistencies.

  • Iterative improvement culture

Yield management thrives when driven by continuous learning cycles. The integration of defect analysis insights and predictive modeling into day-to-day decision-making accelerates the feedback loop for manufacturing teams at every touchpoint.

Pioneering the future of yield management

The compound semiconductor industry is at an inflection point. SiC and its analogues will form the backbone of the next generation of technologies, from EV powertrains to renewable energy innovations and next-generation communication.

Investing in end-to-end data analytics with enterprise-scale capabilities bridges the gap between fledgling experimentation and truly scalable operations. Unified yield management platforms are essential to realizing the economic and technical potential of this critical sector.

By focusing on robust data infrastructures, predictive analytics, and AI integrations, compound semiconductor enterprises can maintain a competitive edge, cut manufacturing costs, and ensure the high standards demanded by modern applications.

Steve Zamek, director of product management at PDF Solutions, is responsible for manufacturing data analytics solutions for fabs and IDMs. Prior to this, he was with KLA (formerly KLA-Tencor), where he led advanced technologies in imaging systems, image sensors, and advanced packaging.

 

Jonathan Holt, senior director of product management at PDF Solutions, has more than 35 years of experience in the semiconductor industry and has led manufacturing projects in large global fabs.

 

Dave Huntley, a seasoned executive providing automation to the semiconductor manufacturing industry, is responsible for business development for Exensio Assembly Operations at PDF Solutions. This solution enables complete traceability, including individual devices and substrates through the entire assembly and packaging process.

Related Content

The post Unlocking compound semiconductor manufacturing’s potential requires yield management appeared first on EDN.

A simulated 100-MHz VFC

Mon, 06/16/2025 - 18:02

Stephen Woodward, a prolific circuit designer with way more than 100 published Design Ideas (DIs), had his “80 MHz VFC with prescaler and preaccumulator” [1] published on October 17, 2024, as a DI on the EDN website. 

Upon reading his article, I was eager to simulate it and try to push its operation up to 100 MHz, if possible, while maintaining its basic simplicity and accuracy. However, Stephen Woodward got there before I did [2]! For the record, I had almost finished my design before I saw his latest one on the EDN website. 

I won’t discuss the details of the circuit operation because they are so similar to those of the above-referenced DIs. However, there are added features, and the functionality has been tested by simulation.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Features

My voltage-to-frequency converter (VFC) circuit (Figure 1) has a high impedance input stage, it can operate reliably beyond 100 MHz, it can be operated with a single 5.25-V supply (or a single 5-V supply with a few added components), and it has been successfully simulated. Also, adjustments are provided for calibration.

Figure 1 VFC design that operates from 100 kHz to beyond 100 MHz with a single 5.25-V supply, providing square wave outputs at 1/2 and 1/4 the main oscillator frequency.  

This circuit provides square wave outputs at one-half and one-fourth the main oscillator frequency. These signals will, in many cases, be more useful than the very narrow oscillator signal, which will be in the 2 ns to 5 ns range.

The NE555 (U8) provides a 500 kHz signal, which drives both a negative voltage generator for a -2.5-V reference and a voltage doubler used to generate a 5.25-V regulated supply that is used when a single 5-V supply is desired. TLA431As are used as programmable Zener diodes, NOT TL431As. Unlike the TL431A, the TLA431A is stable for all values of capacitance connected from the cathode to the anode.

Two adjustments are provided: R11 sets both the positive and negative offset, and R9 adjusts the gain of the current-to-voltage converter, U2. I suggest using R11 to set the 100-kHz output with 5 mV applied to the input, and R9 to set the 100-MHz output with a 5-V input. Repeat this procedure as required to maximize the accuracy of the circuit.
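As an aside, the reason the procedure is repeated is that the two trims interact; the iteration converges quickly. The sketch below models a hypothetical ideal linear VFC, not a simulation of the actual circuit, and the starting gain and offset errors are invented:

```python
# Hypothetical sketch of the suggested two-point calibration loop for a
# linear VFC modeled as f = gain * (V_in + offset). The offset is trimmed
# at 5 mV -> 100 kHz (R11's role) and the gain at 5 V -> 100 MHz (R9's
# role); the adjustments interact, so the procedure is repeated.

gain = 19.0e6    # Hz/V, deliberately off from the ideal 20 MHz/V
offset = 2.0e-3  # V of input-referred offset error (assumed)

def f_out(v):
    return gain * (v + offset)

for _ in range(5):
    # Trim offset so a 5-mV input yields exactly 100 kHz
    offset = 100e3 / gain - 5e-3
    # Trim gain so a 5-V input yields exactly 100 MHz
    gain = 100e6 / (5.0 + offset)

print(round(f_out(5e-3)), round(f_out(5.0)))
```

After a few passes, both endpoints land on target: 5 mV reads 100 kHz and 5 V reads 100 MHz, mirroring the repeat-until-accurate instruction above.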

Possible limitations

This circuit may not give highly accurate operation below 100 kHz because of diode and transistor leakage currents, but I expect it to operate at the lower frequencies at least as well as Woodward’s circuits. Operation down to 1 Hz or 10 Hz is, in my opinion, mostly for bragging rights, and I am not concerned about that.

I expect this VFC to be useful mostly in the 100 kHz to 100 MHz frequency range: a 1 to 1000 span. Minute diode/transistor leakage currents in the nanoamp range and PCB surface leakage may cause linearity inaccuracies at the lower frequencies. The capacitor charging current provided by transistor Q1 is in the several-microamps range at 100 kHz; below that, it is in the nanoamp range. Having had some experience with environmental testing, I think it would be difficult to build this circuit so that it would provide accurate operation below 100 kHz in an environment of humidity/temperature of 75%/50°C.
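The charging-current figures quoted above follow from I = C·ΔV·f for a linear ramp. A quick back-of-envelope check (C4 = 25 pF is from the circuit; the 2.5-V ramp swing is my assumption):

```python
# Back-of-envelope check of ramp-capacitor charging current: for a linear
# ramp of amplitude dV at frequency f, I = C * dV * f.
C4 = 25e-12  # F, the ramp capacitor from the article
dV = 2.5     # V, assumed ramp swing

for f in (1e3, 100e3, 100e6):
    i = C4 * dV * f
    print(f"{f:>13,.0f} Hz -> {i * 1e6:10.4f} uA")
```

At 100 kHz this gives 6.25 µA, matching the "several microamps" figure, while at 1 kHz it drops to 62.5 nA, where nanoamp-scale leakage becomes a meaningful fraction of the charging current.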

Some details

When simulated with LTspice, the Take Back Half circuit [3] with 1N4148 diodes did not provide acceptable results above about 3.5 MHz when driven by a square wave signal with 2-ns rise/fall times, so I used Schottky barrier diodes instead, which worked well beyond 25 MHz, the maximum frequency seen by the Take Back Half circuit [1,3]. The Schottky diodes have somewhat higher leakage current than the 1N4148s, but the 1N4148 diodes would require the highest frequency signal to be divided down to 3.5 MHz to operate well in this application.

I used two 74LVC1G14s to drive C4, the ramp capacitor, because I was not convinced one of them was rated to continuously drive the peak or rms current required to reset the capacitor when operating at or near 100 MHz. And using a 25-pF capacitor instead of just using parasitic and stray capacitance allows better operation at low frequencies because leakage currents are a smaller percentage of the capacitor charging current. (Obviously, more ramp capacitance requires more charging current.)

The op-amp

If you want to use a different op amp, check the specs to be sure the required supply current is not greater than 3 mA worst case. Also, it must accommodate the necessary 7.75 V with some margin. Critically, the so-called rail-to-rail output must swing to within 100 mV of the positive rail with a 1.3-mA load at the maximum operating temperature.

Be advised

Look at the renowned Jim Williams’s second version of his 1 Hz to 100 MHz VFC for more information about the effort required to make his circuit operate well over the full frequency range [4], [5]. In particular, see reference 5 and the notes in its Figure 1 and Table 1.

Jim McLucas retired from Hewlett-Packard Company after 30 years working in production engineering and on design and test of analog and digital circuits.

References/Related Content

  1. 80 MHz VFC with prescaler and preaccumulator
  2. 100-MHz VFC with TBH current pump
  3. Take-Back-Half precision diode charge pump
  4. Designs for High Performance Voltage-to-Frequency Converters
  5. 1-Hz to 100-MHz VFC features 160-dB dynamic range

The post A simulated 100-MHz VFC appeared first on EDN.

Chiplet basics: Separating hype from reality

Mon, 06/16/2025 - 08:55

There’s currently a significant buzz within the semiconductor industry around chiplets, bare silicon dies intended to be combined with others into a single packaged device. Companies are beginning to plan for chiplet-based designs, also known as multi-die systems. Yet, there is still uncertainty about what designing chiplet architecture entails, which technologies are ready for use, and what innovations are on the horizon.

Understanding the technology and supporting ecosystem is necessary before chiplets begin to see widespread adoption. As technology continues to emerge, chiplets are a promising solution for many applications, including high-performance computing, AI acceleration, mobile devices, and automotive systems.

Figure 1 Understanding the technology is necessary before chiplets begin to see widespread adoption. Source: Arteris

The rise of chiplets

Until recently, integrated circuits (ICs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), and system-on-chip (SoC) devices were monolithic. These devices are built on a single piece of silicon, which is then enclosed in its dedicated package. Depending on its usage, the term chip can refer to either the bare die itself or the final packaged component.

Designing monolithic devices is becoming increasingly cost-prohibitive and harder to scale. The solution is to break the design into several smaller chips, known as chiplets, which are mounted onto a shared base called a substrate. All of this is then enclosed within a single package. This final assembly is a multi-die system.

Building on this foundation, the following use cases illustrate how chiplet architectures are being implemented. Split I/O and logic is a chiplet use case in which the core digital logic is implemented on a leading-edge process node. Meanwhile, I/O functions such as transceivers and memory interfaces are offloaded to chiplets built on older, more cost-effective nodes. This approach, used by some high-end SoC and FPGA manufacturers, helps optimize performance and cost by leveraging the best technology for each function.

A reticle limit partitioning use case implements a design that exceeds the current reticle limit of approximately 850 mm² and partitions it into multiple dies. For example, Nvidia’s Blackwell B200 graphics processing unit (GPU) utilizes a dual-chiplet design, where each die is approximately 800 mm² in size. A 10 terabyte-per-second link enables them to function as a single GPU.

Homogeneous multi-die architecture integrates multiple identical or functionally similar dies, such as CPUs, GPUs, or NPUs, on a single package or via an ‘interposer’, a connecting layer similar to a PCB but of much higher density and typically made of silicon using lithographic techniques. Each die performs the same or similar tasks and is often fabricated using the same process technology.

This approach enables designers to scale performance and throughput beyond monolithic die designs’ physical and economic limits, mainly as reticle limits of approximately 850 mm² constrain single-die sizes or decreasing yield with increasing die size makes the solution cost-prohibitive.
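The yield argument can be illustrated with the simple Poisson yield model Y = exp(−A·D0), where A is die area and D0 is defect density. The defect density used here is an invented round number, not a foundry figure:

```python
# Hypothetical sketch of why smaller dies yield better, using the simple
# Poisson yield model Y = exp(-A * D0). D0 and the die areas below are
# illustrative assumptions.
from math import exp

D0 = 0.002  # defects per mm^2 (assumed)

def poisson_yield(area_mm2, d0=D0):
    return exp(-area_mm2 * d0)

mono = poisson_yield(800)  # one near-reticle-limit die
half = poisson_yield(400)  # one of two chiplets covering the same logic
print(f"monolithic 800 mm2 die yield: {mono:.1%}")
print(f"single 400 mm2 chiplet yield: {half:.1%}")
print(f"yield advantage per die area: {half / mono:.2f}x")
```

With these assumed numbers, the 400 mm² chiplet yields roughly 45% versus about 20% for the monolithic die; because known-good-die testing lets only good chiplets be packaged, the smaller dies waste far less silicon per working part.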

Functional disaggregation is the approach most people think of when they hear the word chiplets. This architecture disaggregates a design into multiple heterogeneous dies, where each die is realized at the best node in terms of cost, power, and performance for its specific function.

For example, a radio frequency (RF) die might be implemented using a 28 nm process, analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) could be realized in a 16 nm process, and the core digital logic might be fabricated using a 3 nm process. Large SRAMs may be implemented in 7 nm or 5 nm, as RAM has not scaled significantly in finer geometries.

The good news

There are multiple reasons why companies are planning to transition or have transitioned to chiplet-based architectures. These include the following:

  • Chiplets can build larger designs than are possible on a single die.
  • Higher yields from smaller dies reduce overall manufacturing costs.
  • Chiplets can mix and match best-in-class processing elements, such as CPUs, GPUs, NPUs, and other hardware accelerators, along with in-package memories and external interface and memory controllers.
  • Multi-die systems may feature arrays of homogeneous processing elements to provide scalability, or collections of heterogeneous elements to implement each function using the most advantageous process.
  • Modular chiplet-based architectures facilitate platform-based design coupled with design reuse.

Figure 2 There are multiple drivers pushing semiconductor companies toward chiplet architectures. Source: Arteris

The ecosystem still needs to evolve

While the benefits are clear, several challenges must be addressed before chiplet-based architectures can achieve widespread adoption. While standards like PCIe are established, die-to-die (D2D) communication standards like UCIe and CXL continue to emerge, and ecosystem adoption remains uneven. Meanwhile, integrating different chiplets under a common set of standards is still a developing process, complicating efforts to build interoperable systems.

Effective D2D communication must also deliver low latency and high bandwidth across varied physical interfaces. Register maps and address spaces, once confined to a single die, now need to extend across all chiplets forming the design. Coherency protocols such as AMBA CHI must also span multiple dies, making system-level integration and verification a significant hurdle.

To understand the long-term vision for chiplet-based systems, it helps first to consider how today’s board-level designs are typically implemented. This usually involves the design team selecting off-the-shelf components from distributors like Avnet, Arrow, DigiKey, Mouser, and others. These components all support well-defined industry-standard interfaces, including I2C, SPI, and MIPI, allowing them to be easily connected and integrated.

In today’s SoC design approach, a monolithic IC is typically developed by licensing soft intellectual property (IP) functional blocks from multiple trusted third-party vendors. The team will also create one or more proprietary IPs to distinguish and differentiate their device from competitive offerings. All these soft IPs are subsequently integrated, verified, and implemented onto the semiconductor die.

The long-term goal for chiplet-based designs is an entire chiplet ecosystem. In this case, the design team would select a collection of off-the-shelf chiplets created by trusted third-party vendors and acquired via chiplet distributors, much as board-level designers do today. The chiplets will have been pre-verified with ‘golden’ verification IP that’s trusted industry-wide, enabling seamless integration of pre-designed chiplets without requiring them to be verified together prior to tape-out.

The team may also develop one or more proprietary chiplets of their own, utilizing the same verification IP. Unfortunately, this chiplet-based ecosystem and industry-standard specification levels are not expected to become reality for several years. Even with standards such as UCIe, there are many options and variants within the specification, meaning there is no guarantee of interoperability between two different UCIe implementations, even before considering higher-level protocols.

The current state-of-play

Although the chiplet ecosystem is evolving, some companies are already creating multi-die systems. In some cases, this involves large enterprises such as AMD, Intel, and Nvidia, who control all aspects of the development process. Smaller companies may collaborate with two or three others to form their own mini ecosystem. These companies typically leverage the current state-of-play of D2D interconnect standards like UCIe but often implement their own protocols on top and verify all chiplets together prior to tape-out.

Many electronic design automation (EDA) and IP vendors are collaborating to develop standards, tool flows, and, crucially, verification IP. These include companies like Arteris, Cadence, Synopsys, and Arm, as well as RISC-V leaders such as SiFive and Tenstorrent.

Everyone is jumping on the chiplet bandwagon these days. Many are making extravagant claims about the wonders to come, but most are over-promising and under-delivering. While a truly functional chiplet-based ecosystem may still be five to 10 years away, both large and small companies are already creating chiplet-based designs.

Ashley Stevens, director of product management and marketing at Arteris, is responsible for coherent NoCs and die-to-die interconnects. He has over 35 years of industry experience and previously held roles at Arm, SiFive, and Acorn Computers.

Related Content

The post Chiplet basics: Separating hype from reality appeared first on EDN.
