Elaborations of yet another Flip-On Flop-Off circuit

Applications for using a single pushbutton to advance a circuit to its next logical state are legion. Typically, there are just “on” and “off” states, but there can be more. The heart of the circuit is a toggle flip-flop (or, for more states, a counter or shift register) which responds to a clock transition.
The successful circuit prevents the contact bounce of the mechanical pushbutton from generating more than one “clock” for every push and release of the button. It’s also desirable for the circuit to initialize upon power-up to a specific state and for the press of the pushbutton (from a human point of view) to immediately cause a state change. The basic circuit of Figure 1 has these features.
Figure 1 U1 is a Schmitt-trigger inverter and U2 a D-type flip-flop. The diodes are small-signal Schottky types. The pushbutton is normally open. See the text for a discussion of resistor and capacitor values.
Upon loss and discharge of the VDD supply, the Schottky diodes discharge C1 and C2 to nearly zero volts. Time constant R1C1 should be at least 10 times larger than the supply turn-on time so that the power-up sequence starts and ends with U2’s Q being cleared.
Also, upon power up, U1’s output starts out as a logic high and transitions low after R2 charges C2. Since U2’s active clocking transition is low to high, this leaves Q initialized low. The R2C2 time constant should be on the order of 1 second.
R3 is optional and limits initial C2 discharge currents when the normally open pushbutton is pressed. If R3 is used, it should be chosen so that momentary contact bounce closures nearly completely discharge C2 in 10 ms or less.
C2 and R2, along with the Schmitt-trigger inverter U1, work to prevent contact bounce from producing extra transitions, which would otherwise toggle flip-flop U2. After the pushbutton is released and R2 is starting to charge C2, additional button pushes will not toggle U2. This is because the output of U1 is still high and so cannot transition from low-to-high to toggle U2. This is an argument against making the R2C2 time constant too large.
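As a quick sanity check on those guidelines, here is a minimal Python sketch; the component values and supply rise time in it are illustrative assumptions of mine, not values from Figure 1.

```python
# Quick sanity check of the timing rules discussed above.
# All component and supply-rise values below are illustrative assumptions,
# not values taken from Figure 1.

def check_timing(r1, c1, r2, c2, r3, supply_rise_s):
    """Return pass/fail results for the three RC timing guidelines."""
    return {
        # Power-up clear: R1*C1 at least 10x the supply turn-on time
        "power_up_clear_ok": r1 * c1 >= 10 * supply_rise_s,
        # Debounce window: R2*C2 on the order of 1 second
        "debounce_window_ok": 0.3 <= r2 * c2 <= 3.0,
        # Bounce discharge: ~5 time constants of R3*C2 within 10 ms
        "bounce_discharge_ok": 5 * r3 * c2 <= 10e-3,
    }

# Hypothetical example: 1 MΩ/1 µF, 1 MΩ/1 µF, 100 Ω, 50-ms supply rise time
print(check_timing(r1=1e6, c1=1e-6, r2=1e6, c2=1e-6, r3=100, supply_rise_s=50e-3))
```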
Figure 2 shows how the circuit of Figure 1 can be extended into a multi-state 10-position switch with only one active high output at a time, or into a digital-to-analog converter (DAC).
Figure 2 A 10-position switch with only one active high output at a time, and a DAC are shown.
If fewer than 10 states are desired for the switch, the U2a “D” input can be connected to a different U3 output. For the DAC, resolution can be extended to 12 bits with 12 resistors. Monotonicity will be somewhat less than 12 bits, and even with 0.1% resistors, absolute accuracy will be lower still. To avoid excessive loading of the outputs, no resistor should be less than 10 kΩ.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Flip ON flop OFF
- Flip ON flop OFF without a flip/flop
- Latching D-type CMOS power switch: A “Flip ON Flop OFF” alternative
- Another simple flip ON flop OFF circuit
Simple PWM interface can program regulators for Vout < Vsense

I recently published a Design Idea (DI) showing some very simple circuits for PWM programming of standard regulator chips, both linear and switching, “Revisited: Three discretes suffice to interface PWM to switching regulators.”
Figure 1 shows one of the topologies “Revisited” visited, where:
R1 = recommended value from U1 datasheet
DF = PWM duty factor = 0 to 1
R2 = R1/(Vout_max/Vsense – 1)
Vout = Vsense(R1/(R2/DF) + 1) = DF(Vout_max – Vsense) + Vsense
DF = (Vout/Vsense – 1)(R2/R1) = (Vout – Vsense)/(Vout_max – Vsense)
DF = (Vout – 0.8)/9.2 for parts shown
Figure 1 Five discrete parts comprise a circuit for linear regulator programming with PWM.
An inherent limitation of the Figure 1 circuit is its inability to program Vout < Vsense; its minimum is Vout = Vsense at DF = 0. For most applications this doesn’t amount to much, if any, of a problem. But sometimes it would be useful, or at least convenient, for Vout to be zero (or thereabouts) when DF = 0. Figure 2 shows an easy modification that can make that happen, where:
R1 and R2 chosen as in Figure 1
(R4 + R5/2) = (5 V – Vsense)/(Vsense/R1) – R2
R5 ~ R4/5
Vout = R1 DF(Vsense(1/R2 + 1/R1)) = DF Vout_max
DF = Vout/(R1 Vsense(1/R2 + 1/R1)) = Vout/Vout_max
DF = Vout/10 for part values shown
Figure 2 In order to make Vout programmable down to zero volts, add R4 and (optionally) R5 trimmer.
A cool feature of the Figure 1 topology is that, unlike some other schemes for digital power supply control, only the precision of R1, R2, and the regulator’s own internal voltage reference determines regulation accuracy. Precision is wholly independent of external voltage references. It remains equal to the precision of R1, R2, and Vsense (e.g., ±1%) for all output voltages.
Unfortunately, as the ancient maxim says, something’s (usually) lost when something’s gained. In gaining Vout < Vsense capability, the Figure 2 circuit loses that feature, and for outputs less than full scale, Vout precision becomes somewhat dependent on the +5-V rail. This is where the R5 trimmer comes in handy.
The design equation, (R4 + R5/2) = (5 V – Vsense)/(Vsense/R1) – R2, makes the values chosen for the R4, R5 pair dependent on the accuracy of the 5-V rail; they can only be as correct as it is. This makes output voltages somewhat suspect, especially as they approach zero. Including R5 and adjusting it for Vout = 0 at DF = 0 makes low Vout settings accurately programmable. If that isn’t a critical factor, R5 can be omitted; just make R4 = (5 V – Vsense)/(Vsense/R1) – R2.
The simplicity of the arithmetic for computing DF from the desired Vout is also a desirable feature of Figure 2.
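To illustrate that arithmetic, here is a minimal Python sketch covering both topologies, using the full-scale values implied by the equations above (Vsense = 0.8 V, Vout_max = 10 V). Treat it as a back-of-envelope check rather than a design tool.

```python
# Duty-factor arithmetic for the two topologies, using the part values shown
# (Vsense = 0.8 V, Vout_max = 10 V). Back-of-envelope check only.

VSENSE = 0.8      # regulator sense/feedback voltage, V
VOUT_MAX = 10.0   # full-scale output at DF = 1, V

def df_figure1(vout):
    """Figure 1: Vout = DF*(Vout_max - Vsense) + Vsense, so Vout cannot go below Vsense."""
    if vout < VSENSE:
        raise ValueError("Figure 1 topology cannot program Vout below Vsense")
    return (vout - VSENSE) / (VOUT_MAX - VSENSE)

def df_figure2(vout):
    """Figure 2: Vout = DF*Vout_max, programmable down to (near) zero."""
    return vout / VOUT_MAX

print(df_figure1(5.0))  # (5.0 - 0.8)/9.2 ≈ 0.457
print(df_figure2(5.0))  # 0.5
print(df_figure2(0.0))  # 0.0
```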
In closing: This DI revises an earlier submission, “Three discretes suffice to interface PWM to switching regulators.” My thanks go to commenters oldrev, Ashutosh Sapre, and Val Filimonov for their helpful advice and constructive criticism. And special thanks go to editor Shaukat for creating an environment friendly to the DI teamwork that made it possible.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Revisited: Three discretes suffice to interface PWM to switching regulators
- Three discretes suffice to interface PWM to switching regulators
- Cancel PWM DAC ripple with analog subtraction
- Add one resistor to allow DAC control of switching regulator output
Wheatstone bridge measurements with instrumentation amplifiers

What signal-conditioning aspects of amplifiers must design engineers grasp for precision applications? What are the design considerations for selecting and implementing a signal-conditioning solution for a Wheatstone bridge sensor? Here is a technology brief on instrumentation amplifiers (INAs) and application-specific standard products (ASSPs) carrying out Wheatstone bridge measurements. It covers areas such as intrinsic noise, gain drift, nonlinearity, and diagnostics.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- Signal conditioning for high-impedance sensors
- What’s behind signal conditioner growth in sensors
- Conditioning network delivers precision for sensor calibration
- Using a strain-gauge transducer in a Wheatstone-bridge configuration
- Design solutions: Latest MEMS and sensor signal conditioning architectures
A teardown tale of two not-so-different switches

Eleven years ago, my wife and I experienced the aftereffects of our first close-proximity lightning blast here in the Rocky Mountain foothills, clobbering (among other things) both five-port and eight-port Gigabit Ethernet (GbE) switches, both of which ended up going under the teardown knife. The failure mechanism for the first switch ended up being non-obvious, in sharp contrast to the second, whose controller chip ended up with multiple holes blown in its package top:
One year later (and a decade ago now), lightning struck again. No Gigabit Ethernet switches expired that second time, although we still lost some other devices.
Fast forward to 2024, and…yep. This time, four GbE switches ended up zapped, two of them eight-port and two more five-port (fate was apparently playing catch-up after previously taking a pass on switches). The former two will be showcased today, with the others following soon. Then there’s the three-bay NAS; you will already have seen that teardown by the time this piece is published. And another CableCard receiver (we’re three for three on those), along with another MoCA transceiver…you’ll get teardowns of those in the near future, too.
Today’s dissection patients are from the same supplier—TRENDnet. They hail from the same product family generation. And, as you’ll soon see, although their outsides are (somewhat) dissimilar, their insides are essentially identical (given the naming and release date similarities, that’s not exactly a surprise). Behold the metal-case TEG-S82g (hardware v2.0r, to be precise):
which, per Amazon’s listing, dates from September 2004, and the plastic-case TEG-S81g (again, hardware v2.0r), also initially available that same month and year:
Let’s start with the metal-case TEG-S82g. Following up on the stock photos shown earlier, here are some views of my specific device, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the TEG-S82g has dimensions of 150 x 97 x 28 mm/5.9 x 3.8 x 1.1 in. and weighs 364 g/12.8 oz.). Front:
Left side:
This next one, of the device’s backside, begs for a bit more explanation. Port 8, the one originally connected to one of the two spans of shielded Ethernet cable running around the outside of the house, is unsurprisingly the one that failed (hence the electrical tape I applied to identify it).
The other ports actually still work, at least for the first minute or few after I power on the switch, but eventually all the front panel LEDs begin blinking and further functionality ceases:
Onward. Right side:
Top:
and bottom:
Here’s its “wall wart”:
Those screw heads you might have noticed on both device sides? They’re our pathway inside:
Here’s our first view of the PCB inside:
Four screws hold it in place. Let’s get rid of those next:
Let’s see what we’ve got here. At the bottom are the eight Ethernet ports, next to (at the bottom right) the DC input power connector. Thick PCB traces running from there to the circuitry cluster in the upper right quadrant suggest that the latter handles power generation for the remainder of the board. Above each two-port connector pair is a Bi-TEK FM-3178LLF dual-port magnetic transformer. Here’s the specific one (at far right) associated with failed port 8:
At the top edge are the power LED (at far right) and eight activity LEDs, one for each of the ports. Below them is the system’s “brains”, a Realtek RTL8370N 8-port 10/100/1000 switch controller. It may very well be the same as the IC in the 8-port switch teardown from 11 years ago, although I can’t say for sure, as that one had chunks of its packaging (and therefore its topside markings) blown away! That said, this design does use the same transformers as last time.
Here’s a close-up of the RTL8370N and the aforementioned circuitry to its right:
Now let’s flip the PCB over and have a look at its backside:
No obvious evidence of damage here, either. Here’s another port 8 area closeup (as I was writing this, I paused to revisit the hardware and confirm that those white globs are just dust):
Now for its plastic-case TEG-S81g sibling, with listed dimensions again 150 x 97 x 28 mm/5.9 x 3.8 x 1.1 in. (albeit this time tapered at the front), although the weight is (unsurprisingly, given the change in case material) lower this time around: 186 g/6.6 oz.:
This time, port 5 failed. The other seven ports remain fully functional to this very day, although for how much longer I can’t say; therefore, I’ve decided to retire it from active service, as well, in the interest of future-hassle avoidance:
The “wall wart” looks different this time, but the specs are the same:
No screws on the case sides this time, as you may have already noticed, but remove the four rubber “feet” on the underside:
and underneath the front two are visible screw heads.
You know what comes next:
And we’re in (with the tape still stuck to the top):
Let’s put that tape back in place so I can keep track of which port (5) is the failing one:
The earlier-shown two screws did double-duty, not only holding the two halves of the chassis together but also helping keep the PCB inside in place. Two more, toward the back, also need to be dealt with before the PCB can be freed from its plastic-case captivity:
That’s better:
Another set of closeups, first of the affected-port region:
and the bulk of the topside circuitry:
And now, flipping the PCB over, another set as before:
I hope you’ll agree with me on the following two points:
- The two PCBs look identical, and
- There’s no visually obvious reason why either one failed.
So then, what happened? Let’s begin with the plastic-case TEG-S81g. Truth be told, the tape on top of port 5 originally existed so that I could remember which port was bad down the road, after I pressed it back into service, and in the same “use it until it completely dies” spirit that prompted my recent UPS repair. That said, long-term sanity aspirations eventually overrode my usual thriftiness. My guess is that, given the remainder of the ports (and therefore the common controller chip that manages them) remain operational, port 5’s associated transformer got zapped.
And the metal-case TEG-S82g? Here, I suspect, the lightning-strike spike effects made it through the port 8 transformer, all the way to the Realtek RTL8370N controller, albeit interestingly with deleterious effects seemingly only appearing after the chip had been operational for a bit and had “warmed up” (note, as previously mentioned in the earlier eight-port GbE switch teardown, the lack of a heatsink in this design). As the block diagram in the RTL8370N datasheet makes clear, the chip is highly integrated, including all the ports’ MAC and PHY circuits (among other things).
~1,300 words in, that’s “all” I’ve got for you today. Please share your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- UPS resurrection: Thriftiness strikes again
- Lightning strikes…thrice???!!!
- Devices fall victim to lightning strike, again
- Lightning strike becomes EMP weapon
Unlocking compound semiconductor manufacturing’s potential requires yield management

This article is the second in a series from PDF Solutions on why adopting big data platforms will transform the compound semiconductor industry. The first part “Accelerating silicon carbide (SiC) manufacturing with big data platforms” was recently published on EDN.
Compound semiconductors such as SiC are revolutionizing industries with their ability to handle high-power, high-frequency, and high-temperature technologies. However, as demand climbs across sectors like 5G, electric vehicles, and renewable energy, the manufacturing challenges are stacking up. The compound semiconductor sector, particularly SiC, trails behind the mature silicon industry when it comes to adopting advanced analytics and streamlined yield management systems (YMS).
The roadblocks are high defectivity levels in raw materials and complex manufacturing processes that stretch across multiple sites. Unlocking the full potential of compound semiconductors requires a unified and robust end-to-end yield management approach to optimize SiC manufacturing.
A variety of advanced tools, industry approaches, and enterprise-wide analytics hold the potential to transform the growing field of compound semiconductor manufacturing.
Addressing challenges in compound semiconductor manufacturing
While traditional silicon IC manufacturing has largely optimized its processes, the unique challenges posed by SiC and other compound semiconductors require targeted solutions.
- Material defectivity at the source
Unlike silicon ICs, where costs are distributed across numerous fabrication steps, SiC manufacturing sees the most significant costs and yield challenges in the early stages of production, such as crystal growth and epitaxy. These stages are prone to producing defects that may only manifest later in the process during electrical testing and assembly, leading to inefficiencies and high costs.
As material defects evolve during manufacturing, traceability is essential to pinpoint their origin and mitigate their impact. Yet, the lack of robust systems for tracking substrates throughout the process remains a significant limitation.
- Siloed data and disparate systems
Compound semiconductor manufacturing often involves multi-site operations where substrates move between fabs and assembly facilities. These operations frequently operate on legacy systems that lack standardization and advanced data integration capabilities.
Data silos created by disconnected manufacturing execution systems (MES) and statistical process control (SPC) tools hinder enterprises from forming a centralized view of their production. Without cross-operational alignment enabled by unified analytics platforms, root cause analysis and yield optimization are nearly impossible.
- Nuisance defects and variability
Wafer inspection in compound semiconductors reveals a high density of “nuisance defects”—spatially dispersed points that do not affect performance but can overwhelm defect maps. Distinguishing between critical and benign defects is essential to minimizing false positives while optimizing resource allocation.
Furthermore, varying IDs for substrates through processes like polishing, epitaxy, and sawing hamper effective wafer-level traceability (WLT). Using unified semantic data models can alleviate confusion stemming from frequent lot splits, wafer reworks, and substrate transformations.
How big data analytics and AI catalyze yield management
Compound semiconductor manufacturers can unlock yield lifelines by deploying comprehensive big data platforms across their enterprises. These platforms go beyond traditional point analytics tools, providing a unified foundation to collect, standardize, and analyze data across the entire manufacturing spectrum.
- Unified data layers
The heart of end-to-end yield management lies in breaking down data silos through an enterprise-wide data layer. By standardizing data inputs from multiple MES systems, YMSs, and SPC tools, manufacturers can achieve a holistic view of product flow, defect origins, and yield drop-off points.
For example, platforms using standard models like SEMI E142 facilitate single device tracking (SDT), enabling precise identification and alignment of defect data from crystal growth to final assembly and testing.
- Root cause analysis tools
Big data platforms offer methodologies like kill ratio (KR) analysis to isolate critical defect contributors, optimize inspection protocols, and rank manufacturing steps by their yield impact. For example, a comparative KR analysis on IC front-end fabs can expose the interplay between substrate supplier quality, epitaxy reactor performance, and defect propagation rates. These insights lead to actionable corrections earlier in production.
By ensuring that defect summaries feed directly into analytics dashboards, enterprises can visualize spatial defect patterns, categorize issues by defect type, and thus rapidly deploy solutions.
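To make the kill-ratio idea concrete, here is a generic Python sketch (my own illustration of the concept, not PDF Solutions’ implementation). It computes a per-defect-type kill ratio as the excess final-test fail rate of dies carrying that defect type relative to defect-free dies; types with near-zero or negative values behave like the nuisance defects described earlier.

```python
# Illustrative kill-ratio (KR) calculation from die-level inspection and test data.
# A generic sketch of the concept only, not any vendor's actual algorithm.
from collections import defaultdict

# Each record: (die_id, defect_type_from_inspection_or_None, passed_final_test)
dies = [
    ("d1", "scratch", False), ("d2", "scratch", False), ("d3", "scratch", True),
    ("d4", "particle", True), ("d5", "particle", True),
    ("d6", None, True), ("d7", None, True), ("d8", None, False),
    ("d9", None, True), ("d10", None, True),
]

def kill_ratios(records):
    """Fail rate of dies carrying each defect type, minus the defect-free fail rate."""
    per_type = defaultdict(lambda: [0, 0])   # defect_type -> [fails, total]
    clean_fails = clean_total = 0
    for _, defect_type, passed in records:
        if defect_type is None:
            clean_total += 1
            clean_fails += (not passed)
        else:
            per_type[defect_type][1] += 1
            per_type[defect_type][0] += (not passed)
    baseline = clean_fails / clean_total if clean_total else 0.0
    return {t: fails / total - baseline for t, (fails, total) in per_type.items()}

print(kill_ratios(dies))  # e.g., scratch ≈ 0.47 (killer), particle ≈ -0.2 (nuisance)
```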
- Predictive analytics and simulation
AI-driven predictive tools are vital for anticipating potential yield crashes or equipment wear that can bottleneck production. Using historical defect patterns and combining them with contextual process metadata, yield management systems can simulate “what-if” outcomes for different manufacturing strategies.
For instance, early detection of a batch with high-risk characteristics during epitaxy can prevent costly downstream failures during assembly and final testing. AI-enhanced traceability also enables companies to correlate downstream failure patterns back to specific substrate lots or epitaxy tools.
- SiC manufacturing case study
Consider a global compound semiconductor firm transitioning to 200-mm SiC wafers to expand production capacity. By deploying a big data-centric YMS across multi-site operations, the manufacturer would achieve the following milestones within 18 months:
- Reduction of nuisance defects by 30% post-implementation of advanced defect stacking filters.
- Yield improvement of 20% via optimized inline inspection parameters identified from predictive KR analysis.
- Defect traceability enhancements enabling root cause identification for more than 95% of module-level failures.
These successes underscore the importance of incorporating AI and data-driven approaches to remain competitive in the fast-evolving compound semiconductor space.
Building a smarter compound semiconductor fabrication process
The next frontier for compound semiconductor manufacturing lies in adopting fully integrated smart manufacturing workflows that include scalability in the data architecture, proactive process control, and an iterative improvement culture.
- Scalability in data architecture
Introducing universal semantic models enables tracking device IDs across every transformation from input crystals to final modules. This end-to-end visibility ensures enterprises can scale into higher production volumes seamlessly while maintaining enterprise-wide alignment.
- Proactive process control
Setting an enterprise-wide baseline for defect classification, detection thresholds, and binmap merging algorithms ensures uniformity in manufacturing outcomes while minimizing variability stemming from site-specific inconsistencies.
- Iterative improvement culture
Yield management thrives when driven by continuous learning cycles. The integration of defect analysis insights and predictive modeling into day-to-day decision-making accelerates the feedback loop for manufacturing teams at every touchpoint.
Pioneering the future of yield management
The compound semiconductor industry is at an inflection point. SiC and its analogues will form the backbone of the next generation of technologies, from EV powertrains to renewable energy innovations and next-generation communication.
Investing in end-to-end data analytics with enterprise-scale capabilities bridges the gap between fledgling experimentation and truly scalable operations. Unified yield management platforms are essential to realizing the economic and technical potential of this critical sector.
By focusing on robust data infrastructures, predictive analytics, and AI integrations, compound semiconductor enterprises can maintain a competitive edge, cut manufacturing costs, and ensure the high standards demanded by modern applications.
Steve Zamek, director of product management at PDF Solutions, is responsible for manufacturing data analytics solutions for fabs and IDMs. Prior to this, he was with KLA (formerly KLA-Tencor), where he led advanced technologies in imaging systems, image sensors, and advanced packaging.
Jonathan Holt, senior director of product management at PDF Solutions, has more than 35 years of experience in the semiconductor industry and has led manufacturing projects in large global fabs.
Dave Huntley, a seasoned executive providing automation to the semiconductor manufacturing industry, is responsible for business development for Exensio Assembly Operations at PDF Solutions. This solution enables complete traceability, including individual devices and substrates through the entire assembly and packaging process.
Related Content
- The Road to 200-mm SiC Production
- Silicon carbide (SiC) counterviews at APEC 2024
- Wolfspeed to Build 200-mm SiC Wafer Fab in Germany
- Silicon carbide’s wafer cost conundrum and the way forward
- Reducing Costs, Improving Efficiency in SiC Wafer Production with CMP
A simulated 100-MHz VFC

Stephen Woodward, a prolific circuit designer with way more than 100 published Design Ideas (DIs), had his “80 MHz VFC with prescaler and preaccumulator” [1] published on October 17, 2024, as a DI on the EDN website.
Upon reading his article, I was eager to simulate it and try to push its operation up to 100 MHz, if possible, while maintaining its basic simplicity and accuracy. However, Stephen Woodward got there before I did [2]! For the record, I had almost finished my design before I saw his latest one on the EDN website.
I won’t discuss the details of the circuit operation because they are so similar to those of the above-referenced DIs. However, there are added features, and the functionality has been tested by simulation.
Features
My voltage-to-frequency converter (VFC) circuit (Figure 1) has a high-impedance input stage, operates reliably beyond 100 MHz, can be powered from a single 5.25-V supply (or a single 5-V supply with a few added components), and has been successfully simulated. Adjustments are also provided for calibration.
Figure 1 VFC design that operates from 100 kHz to beyond 100 MHz with a single 5.25-V supply, providing square wave outputs at 1/2 and 1/4 the main oscillator frequency.
This circuit provides square wave outputs at one-half and one-fourth the main oscillator frequency. These signals will, in many cases, be more useful than the very narrow oscillator signal, which will be in the 2 ns to 5 ns range.
The NE555 (U8) provides a 500 kHz signal, which drives both a negative voltage generator for a -2.5-V reference and a voltage doubler used to generate a 5.25-V regulated supply that is used when a single 5-V supply is desired. TLA431As are used as programmable Zener diodes, NOT TL431As. Unlike the TL431A, the TLA431A is stable for all values of capacitance connected from the cathode to the anode.
Two adjustments are provided: R11 provides both positive and negative offset adjustment, and R9 adjusts the gain of the current-to-voltage converter, U2. I suggest using R11 to set the 100-kHz signal with 5 mV applied to the input and using R9 to set the 100-MHz signal with a 5-V input. Repeat this procedure as required to maximize the accuracy of the circuit.
Possible limitations
This circuit may not give highly accurate operation below 100 kHz because of diode and transistor leakage currents, but I expect it to operate at the lower frequencies at least as well as Woodward’s circuits. Operation down to 1 Hz or 10 Hz is, in my opinion, mostly for bragging rights, and I am not concerned about that.
I expect this VFC to be useful mostly in the 100 kHz to 100 MHz frequency range: a 1:1000 span. Minute diode/transistor leakage currents in the nanoamp range and PCB surface leakage may cause linearity inaccuracies at the lower frequencies. The capacitor charging current provided by transistor Q1 is in the several-microamp range at 100 kHz; below that, it is in the nanoamp range. Having had some experience with environmental testing, I think it would be difficult to build this circuit so that it would provide accurate operation below 100 kHz in an environment of 75% relative humidity at 50°C.
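To put rough numbers on those charging currents, the average current Q1 must supply is approximately I ≈ C × ΔV × f. The short sketch below uses the 25-pF ramp capacitor discussed in the next section and an assumed ramp amplitude of about 2.5 V (my assumption, purely for illustration):

```python
# Rough average ramp-capacitor charging current: I ≈ C * ΔV * f.
# C4 = 25 pF comes from the article; the 2.5-V ramp amplitude is an assumed value.

C_RAMP = 25e-12   # F
DELTA_V = 2.5     # V, assumed ramp amplitude

for freq_hz in (1e3, 10e3, 100e3, 1e6, 100e6):
    i_avg = C_RAMP * DELTA_V * freq_hz
    print(f"{freq_hz/1e3:10.0f} kHz -> {i_avg*1e9:12.1f} nA average charging current")
```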
Some details
When simulated with LTspice, the Take Back Half circuit [3] with 1N4148 diodes did not provide acceptable results above about 3.5 MHz when driven by a square wave signal with 2-ns rise/fall times, so I used Schottky barrier diodes instead, which worked well beyond 25 MHz, the maximum frequency seen by the Take Back Half circuit [1,3]. The Schottky diodes have somewhat higher leakage current than the 1N4148s, but the 1N4148 diodes would require the highest frequency signal to be divided down to 3.5 MHz to operate well in this application.
I used two 74LVC1G14s to drive C4, the ramp capacitor, because I was not convinced one of them was rated to continuously drive the peak or rms current required to reset the capacitor when operating at or near 100 MHz. And using a 25-pF capacitor instead of just using parasitic and stray capacitance allows better operation at low frequencies because leakage currents are a smaller percentage of the capacitor charging current. (Obviously, more ramp capacitance requires more charging current.)
The op amp
If you want to use a different op amp, check the specs to be sure the required supply current is not greater than 3 mA worst case. Also, it must accommodate the necessary 7.75 V with some margin. Critically, the so-called rail-to-rail output must swing to within 100 mV of the positive rail with a 1.3-mA load at the maximum operating temperature.
Be advised
Look at the renowned Jim Williams’ second version of his 1-Hz to 100-MHz VFC for more information about the effort required to make his circuit operate well over the full frequency range [4][5]. See reference 5 and look at the notes in Figure 1 and Table 1.
Jim McLucas retired from Hewlett-Packard Company after 30 years working in production engineering and on design and test of analog and digital circuits.
References/Related Content
- 80 MHz VFC with prescaler and preaccumulator
- 100-MHz VFC with TBH current pump
- Take-Back-Half precision diode charge pump
- Designs for High Performance Voltage-to-Frequency Converters
- 1-Hz to 100-MHz VFC features 160-dB dynamic range
Chiplet basics: Separating hype from reality

There’s currently a significant buzz within the semiconductor industry around chiplets, bare silicon dies intended to be combined with others into a single packaged device. Companies are beginning to plan for chiplet-based designs, also known as multi-die systems. Yet, there is still uncertainty about what designing chiplet architecture entails, which technologies are ready for use, and what innovations are on the horizon.
Understanding the technology and supporting ecosystem is necessary before chiplets begin to see widespread adoption. As technology continues to emerge, chiplets are a promising solution for many applications, including high-performance computing, AI acceleration, mobile devices, and automotive systems.
Figure 1 Understanding the technology is necessary before chiplets begin to see widespread adoption. Source: Arteris
The rise of chiplets
Until recently, integrated circuits (ICs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), and system-on-chip (SoC) devices were monolithic. These devices are built on a single piece of silicon, which is then enclosed in its dedicated package. Depending on its usage, the term chip can refer to either the bare die itself or the final packaged component.
Designing monolithic devices is becoming increasingly cost-prohibitive and harder to scale. The solution is to break the design into several smaller chips, known as chiplets, which are mounted onto a shared base called a substrate. All of this is then enclosed within a single package. This final assembly is a multi-die system.
Building on this foundation, the following use cases illustrate how chiplet architectures are being implemented. Split I/O and logic is a chiplet use case in which the core digital logic is implemented on a leading-edge process node. Meanwhile, I/O functions such as transceivers and memory interfaces are offloaded to chiplets built on older, more cost-effective nodes. This approach, used by some high-end SoC and FPGA manufacturers, helps optimize performance and cost by leveraging the best technology for each function.
A reticle limit partitioning use case implements a design that exceeds the current reticle limit of approximately 850 mm² and partitions it into multiple dies. For example, Nvidia’s Blackwell B200 graphics processing unit (GPU) utilizes a dual-chiplet design, where each die is approximately 800 mm² in size. A 10 terabytes-per-second link enables them to function as a single GPU.
Homogeneous multi-die architecture integrates multiple identical or functionally similar dies, such as CPUs, GPUs, or NPUs, on a single package or via an ‘interposer’, a connecting layer similar to a PCB but of much higher density and typically made of silicon using lithographic techniques. Each die performs the same or similar tasks and is often fabricated using the same process technology.
This approach enables designers to scale performance and throughput beyond monolithic die designs’ physical and economic limits, mainly as reticle limits of approximately 850 mm² constrain single-die sizes or decreasing yield with increasing die size makes the solution cost-prohibitive.
Functional disaggregation is the approach most people think of when they hear the word chiplets. This architecture disaggregates a design into multiple heterogeneous dies, where each die is realized at the best node in terms of cost, power, and performance for its specific function.
For example, a radio frequency (RF) die might be implemented using a 28 nm process, analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) could be realized in a 16 nm process, and the core digital logic might be fabricated using a 3 nm process. Large SRAMs may be implemented in 7 nm or 5 nm, as RAM has not scaled significantly in finer geometries.
The good news
There are multiple reasons why companies are planning to transition or have transitioned to chiplet-based architectures. These include the following:
- Chiplets can build larger designs than are possible on a single die.
- Higher yields from smaller dies reduce overall manufacturing costs.
- Chiplets can mix and match best-in-class processing elements, such as CPUs, GPUs, NPUs, and other hardware accelerators, along with in-package memories and external interface and memory controllers.
- Multi-die systems may feature arrays of homogeneous processing elements to provide scalability, or collections of heterogeneous elements to implement each function using the most advantageous process.
- Modular chiplet-based architectures facilitate platform-based design coupled with design reuse.
Figure 2 There are multiple drivers pushing semiconductor companies toward chiplet architectures. Source: Arteris
The ecosystem still needs to evolve
While the benefits are clear, several challenges must be addressed before chiplet-based architectures can achieve widespread adoption. While standards like PCIe are established, die-to-die (D2D) communication standards like UCIe and CXL continue to emerge, and ecosystem adoption remains uneven. Meanwhile, integrating different chiplets under a common set of standards is still a developing process, complicating efforts to build interoperable systems.
Effective D2D communication must also deliver low latency and high bandwidth across varied physical interfaces. Register maps and address spaces, once confined to a single die, now need to extend across all chiplets forming the design. Coherency protocols such as AMBA CHI must also span multiple dies, making system-level integration and verification a significant hurdle.
To understand the long-term vision for chiplet-based systems, it helps first to consider how today’s board-level designs are typically implemented. This usually involves the design team selecting off-the-shelf components from distributors like Avnet, Arrow, DigiKey, Mouser, and others. These components all support well-defined industry-standard interfaces, including I2C, SPI, and MIPI, allowing them to be easily connected and integrated.
In today’s SoC design approach, a monolithic IC is typically developed by licensing soft intellectual property (IP) functional blocks from multiple trusted third-party vendors. The team will also create one or more proprietary IPs to distinguish and differentiate their device from competitive offerings. All these soft IPs are subsequently integrated, verified, and implemented onto the semiconductor die.
The long-term goal for chiplet-based designs is an entire chiplet ecosystem. In this case, the design team would select a collection of off-the-shelf chiplets created by trusted third-party vendors and acquired via chiplet distributors, much as board-level designers do today. The chiplets will have been pre-verified with ‘golden’ verification IP that’s trusted industry-wide, enabling seamless integration of pre-designed chiplets without the requirement for them to be verified together prior to tape-out.
The team may also develop one or more proprietary chiplets of their own, utilizing the same verification IP. Unfortunately, this chiplet-based ecosystem and industry-standard specification levels are not expected to become reality for several years. Even with standards such as UCIe, there are many options and variants within the specification, meaning there is no guarantee of interoperability between two different UCIe implementations, even before considering higher-level protocols.
The current state-of-play
Although the chiplet ecosystem is evolving, some companies are already creating multi-die systems. In some cases, this involves large enterprises such as AMD, Intel, and Nvidia, who control all aspects of the development process. Smaller companies may collaborate with two or three others to form their own mini ecosystem. These companies typically leverage the current state-of-play of D2D interconnect standards like UCIe but often implement their own protocols on top and verify all chiplets together prior to tape-out.
Many electronic design automation (EDA) and IP vendors are collaborating to develop standards, tool flows, and, crucially, verification IP (VIP). These include companies like Arteris, Cadence, Synopsys, and Arm, as well as RISC-V leaders such as SiFive and Tenstorrent.
Everyone is jumping on the chiplet bandwagon these days. Many are making extravagant claims about the wonders to come, but most are over-promising and under-delivering. While a truly functional chiplet-based ecosystem may still be five to 10 years away, both large and small companies are already creating chiplet-based designs.
Ashley Stevens, director of product management and marketing at Arteris, is responsible for coherent NoCs and die-to-die interconnects. He has over 35 years of industry experience and previously held roles at Arm, SiFive, and Acorn Computers.
Related Content
- A closer look at Arm’s chiplet game plan
- TSMC, Arm Show 3DIC Made of Chiplets
- Chiplets Get a Formal Standard with UCIe 1.0
- How the Worlds of Chiplets and Packaging Intertwine
- Imec’s Van den hove: Moving to Chiplets to Extend Moore’s Law
Time-to-digital conversion for space applications

A time-to-digital converter (TDC) is like a stopwatch measuring the elapsed interval between two events with picosecond precision, converting this into a digital value for post-processing. Many space applications require time-of-flight measurements to calculate distance, delay, or velocity. For example, an in-space servicing, assembly, and manufacturing (ISAM) spacecraft needs to determine precisely the relative location of debris before initiating rendezvous and retrieval operations. Similarly, space-domain awareness must understand the proximity and trajectory of other orbiting objects to assess any potential threat.
A TDC receives two inputs: a start signal (or edge) to mark the beginning of the time interval to be measured, and a stop pulse. The delay between these is converted to a digital number for post-processing. Different architectures are typically used to implement the logic, e.g., counters, delay lines, or a time amplifier.
Time-to-digital conversion is a technique used in space-based LiDAR systems to measure the time taken for a light pulse to travel to and from an object to calculate its distance. A LiDAR emits a laser pulse towards a target, which reflects off the latter’s surface, returning to the sensor. The TDC starts counting when the pulse is transmitted, stops when it is detected by the receiver, and, using the speed of light, the distance is calculated as:
d = (c × t)/2
where c is the speed of light and t is the time-of-flight.
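A minimal numeric illustration of that round-trip calculation (the example time is mine, not from any particular instrument):

```python
# Round-trip time-of-flight to range: d = c * t / 2 (the pulse travels out and back).
C_LIGHT = 299_792_458.0  # speed of light, m/s

def tof_to_range_m(t_seconds):
    return C_LIGHT * t_seconds / 2

# Example: a 3.336-ms round trip corresponds to roughly 500 km of range.
print(tof_to_range_m(3.336e-3) / 1e3, "km")  # ≈ 500 km
```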
TDC in LiDAR
For example, LiDAR is used by some Earth-Observation operators to measure altitude and surface changes over time: monitoring vegetation height, ice sheet or glacier thickness and melt, sea-ice elevation, or relative sea level. Spaceborne LiDAR altimetry is capable of centimetre-level vertical precision and is illustrated in Figure 1.
Figure 1 The use of TDC in space-based LiDAR applications. Source: Rajan Bedi
Similarly, a TDC is used in mass spectroscopy to measure how long it takes ions to travel from a source to a detector. This time-of-flight information is used to calculate the mass-to-charge ratio (m/z) of ions, and since the kinetic energy given to all ions is the same, time-of-flight is directly related to their mass as follows:
t = k × √(m/z)
where t is the time of flight, k a calibration constant, m the ion mass and z the ion charge.
TDC in mass spectroscopy
Space-based mass spectroscopy has many applications in identifying and quantifying chemical composition by measuring the mass-to-charge ratio of ions. For example, Earth-Observation and space-weather missions monitor the make-up of the Earth’s ionosphere and magnetosphere, including solar wind particles. Space science analyses the chemical structure of planetary atmospheres, lunar and asteroid surface composition, as well as soil or ice samples to detect organic molecules and potential signs of life. Time-of-flight mass spectroscopy is illustrated in Figure 2.
Figure 2 The use of TDC in time-of-flight mass spectroscopy. Source: Rajan Bedi
TDC in optical communications
Optical communication is increasingly being used to transmit data wirelessly between orbiting satellites, such as intersatellite links, or on links from the ground to a spacecraft. High-throughput payloads are now using fibre to send data within sub-systems to overcome the bandwidth, loss, mass, and EMI limitations of traditional copper communications.
TDCs are used to detect when photons arrive, for timing-jitter analysis to prevent degradation of system performance, clock recovery, and synchronization for aligning and decoding incoming data streams, as shown in Figure 3.
Figure 3 The use of TDC and fibre-based optical communications within a payload. Source: Rajan Bedi
TDC to calculate absolute time
TDC is also used to calculate absolute time with the help of satellite navigation for applications such as quantum key distribution over long distances. Both the transmitter and the receiver use GNSS-disciplined oscillators to synchronize their local system clocks to a global time reference such as UTC or GPS time. A precise timestamp (Tevent) can be calculated as:
Tevent = TGNSSepoch + N × Tclk + tfine
where TGNSSepoch is the absolute time of the last GNSS PPS signal, e.g., 14:23:08 UTC, N is the number of clock cycles since the PPS, Tclk is the clock period, and tfine is the sub-nanosecond fine time from interpolation.
For example, if the TDC counts 8,700 clock cycles with a period of 1 ns and tfine = 0.217 ns, the resulting timestamp can be calculated as:
Tevent = TGNSSepoch + (8,700 × 1 ns) + 0.217 ns = TGNSSepoch + 8,700.217 ns, i.e., 8,700.217 ns after 14:23:08 UTC.
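The same bookkeeping expressed as a short Python sketch, with the offsets kept in nanoseconds (the counter, clock period, and fine-interpolation values are the example figures from the text):

```python
# Timestamping an event against the last GNSS PPS edge:
# T_event = T_GNSS_epoch + N * T_clk + t_fine  (offsets handled in nanoseconds)

def event_offset_ns(n_cycles, t_clk_ns, t_fine_ns):
    """Offset of the event from the most recent PPS edge, in nanoseconds."""
    return n_cycles * t_clk_ns + t_fine_ns

# Example from the text: 8,700 cycles of a 1-ns clock plus 0.217 ns of fine interpolation
offset_ns = event_offset_ns(8_700, 1.0, 0.217)
print(f"T_event = 14:23:08 UTC + {offset_ns:.3f} ns")  # 8700.217 ns after the PPS epoch
```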
The system concept based on optical communications is illustrated in Figure 4.
Figure 4 The use of TDC to calculate absolute time for quantum key distribution. Source: Rajan Bedi
MAG-TDC00002-Sx TDC
Magics Technologies NV has just released a rad-hard TDC for space applications: the MAG-TDC00002-Sx is shown in Figure 5 and can measure time delays with picosecond precision, converting this to a digital value for post-processing. The device offers an SPI slave interface to connect to FPGAs/MCUs for configuration and read-out of the elapsed time.
Figure 5 Magics’ TDC00002-Sx, Rad-Hard TDC that can measure time delays with picosecond precision. Source: Magics Technologies NV
The MAG-TDC00002-Sx operates from a core voltage of 1.2 V, and its I/O can be powered from 1.8 V to 3.3 V. The device consumes 20 mW (typical) and has a specified operating temperature range of -40°C to 125°C. The MAG-TDC00002-Sx comes in a 17.9 x 10.8 mm, 28-pin, hermetic, ceramic COIC package.
The architecture of the MAG-TDC00002-Sx and an application drawing are shown in Figure 6: following power-up and initialization (lock) of the internal PLL, the TDC enters an IDLE state. When the device is configured, a pulse is generated on the TRIGGER output, and the TDC changes to a LISTEN mode. In this state, the internal 1.25-GHz counter is running and will be sampled on receipt of external start and stop signals. The values are saved to their corresponding registers, and both coarse and fine measurements can be read out via SPI to calculate time-of-flight.
The MAG-TDC00002-Sx has automatic, internal self-calibration, which corrects for drifts due to process, voltage, temperature, and radiation degradation.
Figure 6 The architecture and application drawing of MAG-TDC00002-Sx. Source: Magics Technologies NV
As an example, a time-of-flight measurement between a single start and stop event resulted in the following MAG-TDC00002-Sx register data:
START BIN DEL = 121
STOP BIN DEL0 = 28
START BIN CAL PERIOD = 110
STOP BIN CAL PERIOD0 = 110
START BIN CAL OFFSET = 8
STOP BIN CAL OFFSET = 9
START CNT VAL L = 4
START CNT VAL H = 0
STOP CNT VAL L0 = 14
STOP CNT VAL H0 = 0
The time-of-flight is then calculated from these coarse and fine count values.
In terms of radiation hardness, the MAG-TDC00002-Sx has a specified SET/SEU tolerance of 60 MeV·cm2/mg and a total-dose immunity > 100 kRad (Si) / 1 kGy (Si). Radiation reports and ESCC9000 qualification are expected in Q3 of this year, and EM and EQM parts can be ordered today. The device is European and ITAR-free, which is advantageous if you have import/export concerns!
MAG-TDC00002-Sx evaluation kit
To prototype and de-risk the MAG-TDC00002-Sx, an evaluation kit is available comprising a base board and a TDC PCB as shown below. The latter fits on top of the former using the socket headers, and the base board connects to a PC using a USB Type-C cable, as shown in Figure 7.
Figure 7 MAG-TDC00002-Sx evaluation kit with a base board and TDC PCB. Source: Magics Technologies NV
The evaluation kit comes with software that communicates with the base board using SCPI commands to configure and use the MAG-TDC00002-Sx as shown in Figure 8.
Figure 8 A screenshot of MAG-TDC00002-Sx evaluation kit software using SCPI commands to configure the device. Source: Magics Technologies NV
The rad-hard TDC is well-suited for manufacturers of Earth-Observation LiDAR instruments, space-science mass spectrometers, ISAM/space-domain awareness proximity detectors or high-throughput optical communications transceivers for calculating time-of-flight. The MAG-TDC00002-Sx can also be used to calculate absolute time for applications such as secure quantum key exchange via satellite.
Further information about Magics’ MAG-TDC00002-Sx will be shared in a webinar to be broadcast on 22nd May, and you can register using this link.
Dr. Rajan Bedi is the CEO and founder of Spacechips, which designs and builds a range of advanced, AI-enabled, re-configurable, L to K-band, ultra high-throughput transponders, SDRs, Edge-based on-board processors and Mass-Memory Units for telecommunication, Earth-Observation, ISAM, SIGINT, navigation, 5G, internet and M2M/IoT satellites. The company also offers Space-Electronics Design-Consultancy, Avionics Testing, Technical-Marketing, Business-Intelligence and Training Services. (www.spacechips.co.uk).
Related Content
- Clock architectures and their impact on system performance and reliability
- Oscilloscope special acquisition modes
- Resolve picoseconds using FPGA techniques
- Oscilloscope probing your satellite
Altum amps speed RF design with Quantic blocks

Altum RF’s MMIC amplifiers are now part of Quantic’s plug-and-play X-MWblocks, enabling seamless integration into RF designs. The modular format streamlines design, evaluation, prototyping, and production for rapid RF and microwave system assembly.
The initial offering includes five of Altum RF’s low-noise and driver amplifiers: ARF1200Q2, ARF1201Q2, ARF1202Q2, ARF1203Q2, and ARF1205Q2. These devices cover frequency bands from 13 GHz to 43.5 GHz, with noise figures as low as 1.6 dB. Additional Altum RF MMICs will join the X-MWblocks platform in the coming months.
Quantic X-Microwave offers a catalog of over 6000 RF components for configuring modules, assemblies, and subassemblies. Find Altum RF products here.
Amp elevates K-band throughput for LEO sats

Expanding Qorvo’s GaN-on-SiC SATCOM portfolio, the QPA1722 K-band power amplifier improves Low Earth Orbit (LEO) satellite performance. Qorvo reports the amplifier delivers three times the instantaneous bandwidth and 10% higher efficiency than comparable devices, all within a 38% smaller footprint. These enhancements enable higher data throughput and support more compact, efficient satellite payload designs.
The QPA1722 operates from 17.7 GHz to 20.2 GHz, delivering up to 10 W (40 dBm) saturated and 6 W (37 dBm) linear output power. It provides more than 1 GHz of instantaneous bandwidth to support high data-rate applications, with 36% efficiency for improved power handling and thermal performance. Additional specifications include 26 dB small-signal gain, 35% power-added efficiency, and –25 dBc third-order intermodulation distortion.
Housed in a 6.0×5.0×1.64-mm surface-mount package, the QPA1722 is fully matched to 50 Ω with DC-grounded input and output ports. On-chip blocking capacitors follow the DC grounds at both ports.
The QPA1722 power amplifier is sampling now, with volume production planned for fall 2025. Evaluation kits are available upon request.
Simultaneous sweep boosts multi-VNA test speed

Anritsu has added a simultaneous sweep feature to its ShockLine MS46131A 1-port vector network analyzer (VNA), which operates up to 43.5 GHz. The capability supports parallel 1-port S-parameter measurements across up to four MS46131A units.
Simultaneous sweep enables coordinated triggering through an external signal, aligning the start of sweeps across multiple VNAs. Each unit can be configured independently with different start and stop frequencies, IF bandwidths, and point counts while performing synchronized sweeps.
Well-suited for multi-band, multi-configuration test environments, the MS46131A supports synchronized antenna characterization for LTE and Wi-Fi 7, sub-6 GHz and mmWave 5G (FR2 and FR3), and phased array validation. Remote operation is enabled via SCPI commands over uniquely assigned TCP port numbers, allowing full automation and integration into distributed test systems.
The simultaneous sweep feature is available with software version 2025.4.1 and supported on all MS46131A VNAs.
Eval board eases battery motor-drive design

Powered by an eGaN FET, EPC’s EPC9196 is a 25-A RMS, 3-phase BLDC inverter optimized for 96-V to 150-V battery systems. The reference design targets medium-voltage motor drives, including steering in AGVs, traction in compact autonomous vehicles, and robotic joints.
The EPC9196 is built around the EPC2304, a 200-V, 3.5-mΩ (typical) eGaN FET in a thermally enhanced QFN package. This device enables high-efficiency operation with a peak phase current of 35 A and switching frequencies up to 150 kHz. GaN technology reduces switching losses and dead time, enabling smoother, quieter motor operation even at high PWM frequencies.
Featuring a wide input voltage range from 30 V to 170 V, the EPC9196 integrates gate drivers, housekeeping power, current and voltage sensing, overcurrent protection, and thermal monitoring. The reference design provides dv/dt control below 10 V/ns and supports both sensor-less and encoder-based control configurations. It is compatible with motor drive controller platforms from Microchip, ST, TI, and Renesas.
EPC9196 reference design boards cost $812.50 each and are available from DigiKey. The EPC2304 eGaN FET sells for $3.68 each in reels of 3,000 units.
MCUs enable USB-C Rev 2.4 designs

Renesas offers one of the first microcontroller families to support USB-C Revision 2.4 with its RA2L2 group of low-power Arm Cortex-M23 MCUs. The updated USB Type-C cable and connector specification lowers voltage detection thresholds to 0.613 V for 1.5-A sources and 1.165 V for 3.0-A sources, enhancing compatibility with newer USB-C cables and devices.
Low-power operation makes the RA2L2 MCUs well-suited for portable devices such as USB data loggers, charging cases, barcode readers, and PC peripherals like gaming mice and keyboards. These entry-level MCUs consume just 87.5 µA/MHz in active mode, dropping to 250 nA in software standby. An independent UART clock enables wake-up from standby when receiving data from Wi-Fi or Bluetooth LE modules.
In addition to USB-C cable and connector detection up to 15 W (5 V/3 A) and USB Full-Speed support, the MCUs offer low-power UART, I3C, SSI, and CAN interfaces for design flexibility. The 48-MHz Cortex-M23 core is paired with up to 128 KB of code flash, 4 KB of data flash, and 16 KB of SRAM.
RA2L2 microcontrollers are now available. Samples and evaluation kits can be ordered from the Renesas website and authorized distributors.
The 2025 WWDC: From Intel, Apple’s Nearly Free, and the New Interfaces Are…More Shiny?

Have you ever heard the idiom “ripping off the band-aid”? With thanks to Wiktionary for the definition, it means:
To perform a painful or unpleasant but necessary action quickly so as to minimize the pain or fear associated with it.
That’s not what Apple’s doing right now with the end stages of its Intel-to-Apple Silicon transition, which kicked off five years ago (fifteen years, ironically, after its previously announced transition to x86 CPUs, two decades ago). And although I’m (mostly) grateful for it on a personal level, I’m also annoyed by what’s seemingly the company’s latest (but definitely not the first, and probably also not the last) example of “obsolescence by (software, in this case) design”.
Upcoming MacOS “Tahoe” 26, publicly unveiled at this week’s Worldwide Developer Conference (WWDC) and scheduled for a “gold” release later this year (September or October, judging from recent history), still supports legacy Intel-based systems, but only four model/variant combos. That’s right; four (and in those few cases still absent any Apple Intelligence AI capabilities):
- MacBook Pro (16-inch, 2019)
- MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports)
- iMac (27-inch, 2020)
- Mac Pro (2019)
My wife owns the first one on the list, courtesy of a Christmas present from yours truly last year. I’m typing these words on the second one. The other two are the “end of the line” models of the Intel-based iMac and Mac Pro series, both of which subsequently also switched to Apple Silicon-based varieties. Not included, long-time readers may have already noticed, is my storage capacity-constrained 2018 Mac mini; its M2 Pro successor is already sitting on a shelf downstairs in storage, awaiting its turn in the spotlight (that said, I’ll probably cling to my 2018 model longer than I should in conjunction with OpenCore Legacy Patcher, even if only motivated by hacker curiosity and because I’m so fond of the no-longer-offered Space Gray color scheme…).
But let’s go back to the second (also my) system in the previous bullet list. Did it also seem strange to you that Apple specifically referenced the model with “Four Thunderbolt 3 ports”? That’s because Apple also sold a 2020 model year variant with two Thunderbolt 3 ports. If you compare the specs of the two options, you’ll see that there’s at least some tech rationalization for the supported-or-not differentiation; the two-port model is based on an 8th-generation “Coffee Lake” Intel Core i5 8257U SoC, while my four-port model totes a 10th-generation “Ice Lake” Core i5 1038NG7. That said, they both support the same foundation x86 instruction set, right? And the integrated graphics is Intel Iris Plus in generation for both, too. So…
A one-year delay in sentencing for (some of) the Dipert family system stable aside, the endgame verdict for Intel on Apple is now coming into clear view. Apple confirmed that “Tahoe” is x86’s last hurrah; MacOS will be Arm-only beginning with next year’s (2026) spin. The subsequent 2028 edition will also drop Rosetta 2 dynamic software-translation support, so any x86-only coded apps will no longer run. And given these moves, along with Apple’s longstanding tradition of supporting not only the current but also prior two major O/S generations, it would make no sense for any developer to bother continuing to make and support “Universal” versions of apps (dual-coded for both x86 and Arm) once “Tahoe” drops off the support list in 2028…if they even wait that long, that is, considering that the predominant percentage of legacy Intel systems will be incapable of running a supported MacOS variant way before then. This forecast echoes what played out last time, when PowerPC was phased out in favor of x86.
The Liquid Glass interface
The other key announcement at the pre-recorded 1.5-hour keynote that kicked off this week:
which Apple itself condensed down to a 2-minute summary (draw your own “sizzle vs. steak” conclusions per my recent comments on Microsoft’s and Google’s full and abridged equivalents):
involves the Liquid Glass UI revamp which, after conceptually originating with the two-years-back Vision Pro headset, now spans the broader product line. Translucency, rounded corners and expanded color vibrancy are its hallmarks; Apple even did a standalone video on it:
It looks…OK, I guess. On the Vision Pro, the translucency makes total sense, because UI elements need to be not only spatially arranged with respect to each other but also in front of the real-life scene behind them (and in front of the user), reproduced for the eyes by front-facing cameras and embedded micro-OLEDs. But for phones, tablets, watches, and the like…again, it’s OK, I guess. I’m not trying to minimize the value of periodic visual-experience refreshes, mind you; it’s the same underlying motivation behind re-painting houses and the rooms inside them. It just feels not only derivative, given the Vision Pro heritage, but also reactionary, considering that Google announced its own UI refresh less than a month ago (Android 16 is apropos downloading to my Pixels as I type these words, in fact), and Samsung had unveiled its own in January (six months later than originally promised, but I digress).
The iPad (finally?) grows up
There is, however, one aspect of Apple’s UI revamp that I’m very excited about, but which ironically has little (but not nothing) to do with Liquid Glass. For many years now, iPads (particularly the high-end Pro variants) have offered substantial hardware potential that the platform’s software has largely left untapped. Specifically, I’m talking about Apple’s ham-fisted Split View and Slide Over schemes for (supposedly) unlocking multitasking. Frankly, the only times I ever used either of them were accidental, and my only reaction was to (struggle to figure out how to) undo whatever UI abominations I’d unintentionally activated.
Well, they’re both gone as of later this year. In their place is proper MacOS-like windowing, with menu bars, user customizable window locations and sizes, and the like. Hallelujah. Reiterating a point I’ve made before (although software imperfections blunted its at-the-time reality), Apple will need to be careful to not cannibalize its computer sales by tablet sales going forward. That said, as I’ve also previously noted, if you’re going to get cannibalized, it might as well be by yourself, not to mention that tablets offer Apple more competitive isolation than do computers.
Deep learning (local) model developer access
Apple also this week announced that it was opening up developer access to its devices’ locally housed deep learning inference models for use by third-party applications. Near term, I’d frankly be more enthusiastic about this move if the models themselves were better. That said, given that we’re talking about “walled garden” Apple here, they’re the only game in town, so I guess something’s better than nothing. And longer term, Apple now clearly realizes it’s behind its competitors in the AI race and is revamping its management and dev teams in response (not to mention dissing its competitors in the presumed hope of slowing down the overall market until it can catch up), so circumstances will likely improve tangibly here, in fairly short order.
Too much…too little…just right?
By the way, I’m sure many of you have already noticed the across-the-board naming revamp of the various operating systems to a consistent “dominant model year” approach…i.e. although the new versions will likely all roll out later this year, they’ll be majority-in-use in 2026. Whatever (in all seriousness, the numerical disparity between, for example, current-gen iOS 18, MacOS 15 and WatchOS 11 likely resulted in at least some amount of consumer confusion).
Broadly speaking, while I’m not trying to sound like Goldilocks with the header title of this concluding section of my 2025 WWDC coverage, I am feeling a bit of whiplash. Last year, Apple’s event was bloated with unrelenting AI hype (therefore my title “jab”), much of which still hasn’t achieved even a semblance of implementation reality a year later (to developer and pundit dismay alike). This year, it felt like the pendulum swung (too far?) in the opposite direction, with excessive attention drawn to minutiae such as newly added gestures for AirPods earbuds and the Apple Watch (not that I can even remember the existing ones!), even if the AI-powered real-time language translation facilities are welcome (albeit predictable).
Maybe next year (and, dare I hope, in future years as well) Apple will navigate to the “middle way” between these extremes. Until then, I welcome your thoughts on this year’s event and associated announcements in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Microsoft Build 2025: Arm (and AI, of course) thrive
- The 2025 Google I/O conference: A deft AI pivot sustains the company’s relevance
- The 2024 WWDC: AI stands for Apple Intelligence, you see…
- The Apple 2023 WWDC: One more thing? Let’s wait and see
- Apple’s 2022 WWDC: Generational evolution and dramatic obsolescence
- Apple’s WWDC 2021: Software and services updates dominate
- Workarounds (and their tradeoffs) for integrated storage constraints
The post The 2025 WWDC: From Intel, Apple’s Nearly Free, and the New Interfaces Are…More Shiny? appeared first on EDN.
A two-way Wilson current mirror

Recently (5/21/25), I published a Design Idea (DI) describing a bidirectional current mirror using just two transistors and a diode, “A two-way mirror—current mirror that is.”
That circuit works well in some applications, like the one shown in the DI itself, but it nevertheless has limitations due to various error-introducing effects, some of which are described in this reference.
One of the more important of these is the “Early effect,” which causes significant current imbalance when the voltage drop across the mirror exceeds a couple of volts. Happily, a fix for the Early effect was devised back in 1967 by George R. Wilson, an IC design engineer at Tektronix. Wilson’s simple yet ingenious fix consisted of adding a third transistor to the basic current mirror, Q3 in Figure 1.
Figure 1 The Wilson mirror improves accuracy despite the Early effect by adding Q3.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Due to Q3, the voltage drops across the active mirror transistors Q1 and Q2 never differ by more than one Vbe, which reduces the Early-effect error to less than 1%. Figure 2 shows a variation of the basic two-sided current mirror incorporating Wilson’s accuracy-enhancing extra transistor.
Figure 2 A two-way current mirror with Wilson enhancement.
This advantage is exploited in the Figure 3 triangle-wave oscillator to achieve good waveform symmetry despite as much as 10 V being applied across the mirror.
Figure 3 D1, Q3, Q4, and Q5 comprise the two-way Wilson current mirror. It passively conducts Q2’s collector current to C1 via Q3’s base-collector junction when OUT versus Vc1 is negative, but becomes an active inverting unity-gain current source when OUT goes positive and reverses the polarity. Keep variable Vin < 4 V so Q2 won’t saturate on negative output peaks.
The Wilson mirror is the better choice for Figure 3, but it isn’t always the superior option. That’s because the extra transistor adds an extra Vbe to the minimum voltage drop when the mirror is active. This is no problem for Figure 3 because of its operation from 15 V, but it would be a deal breaker in the previous DI with its 5-V-powered oscillator. Luckily, for the same reason, the Early effect already wasn’t much of an issue there.
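To put rough numbers on the trade-off, here’s a minimal sketch comparing the first-order current error of a plain two-transistor mirror (base-current loss of roughly 2/β plus an Early-effect term that grows with the collector-voltage mismatch) against Wilson’s arrangement, which knocks the base-current term down to about 2/β² and limits the Vce mismatch to roughly one Vbe. The β and Early-voltage values are assumed, typical small-signal numbers, not measurements of the circuits above.

```python
# First-order mirror-error comparison: simple 2-transistor mirror vs. Wilson.
# beta and VA (Early voltage) are assumed typical small-signal values.

beta = 150.0          # assumed DC current gain
VA = 75.0             # assumed Early voltage, volts
VBE = 0.65            # nominal base-emitter drop, volts

def simple_mirror_error(delta_vce):
    """Fractional error: base-current loss plus Early-effect mismatch."""
    return 2.0 / beta + delta_vce / VA

def wilson_mirror_error():
    """Base-current error drops to ~2/beta^2; Vce mismatch is held to ~1 Vbe."""
    return 2.0 / beta**2 + VBE / VA

for dv in (2.0, 10.0):   # collector-voltage difference across the mirror, volts
    print(f"simple mirror, dVce = {dv:>4.1f} V: {simple_mirror_error(dv)*100:.2f} %")
print(f"Wilson mirror (any dVce):      {wilson_mirror_error()*100:.2f} %")
```

With these assumed values, the simple mirror’s error climbs to well over 10% at a 10-V mismatch, while the Wilson version stays under the roughly 1% figure quoted above.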
Courses for horses?
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A two-way mirror—current mirror that is
- Use a current mirror to control a power supply
- A comparison between mirrors and Hall-effect current sensing
- Current Mirrors with a DCP offer precision and current gain control
- LED strings driven by current source/mirror
- Current mirror drives multiple LEDs from a low supply voltage
The post A two-way Wilson current mirror appeared first on EDN.
A two-wire temperature transmitter using an RTD sensor

Designing an analog circuit can be a frustrating experience, as noted by Nick Cornford in his Design Idea (DI), “DIY RTD for a DMM.” After a fortnight’s struggle, I completed the design of a 4-to-20-mA, two-wire temperature transmitter using a platinum resistance temperature detector (RTD) sensor, as shown in Figure 1.
Figure 1 A two-wire transmitter with a platinum RTD sensor, R4. R1 and R4 will determine the currents through the respective limbs; these currents must be kept to 0.5 mA. The circuit is designed to operate from 0 °C to 300 °C, where R5 can be adjusted to change this temperature range.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In industrial processing applications, a two-wire topology is used to connect the temperature sensor in the field to the control room equipment, such as a distributed control system (DCS), programmable logic controller (PLC), or indicator. A 24-V DC supply is fed from the control room, and the current drawn is proportional to the temperature. Since the power and signal travel in the same pair of wires, this arrangement offers cable savings.
In Figure 1’s circuit, R4 is an RTD. As per the platinum RTD’s temperature versus resistance table (DIN EN 60751), R4 is 100 Ω at 0 °C and 212.2 Ω at 300 °C.
This circuit is designed for the range of 0 °C to 300 °C, where the load current will be 4 mA for 0 °C and 20 mA for 300 °C (you may change R5 to achieve other ranges). The current is proportional to the resistance of the RTD, which is slightly non-linear with respect to temperature. The accuracy claimed in this circuit is ±1%, which is adequate for many applications.
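The non-linearity in question comes from the DIN EN 60751 (Callendar-Van Dusen) polynomial. As a quick illustration, the sketch below computes Pt100 resistance from that polynomial and compares a loop current that is linear in resistance (as in this circuit) with one that would be linear in temperature; the coefficients are the standard published values.

```python
# Pt100 resistance per DIN EN 60751 (Callendar-Van Dusen, t >= 0 degC) and the
# deviation of a resistance-proportional 4-20 mA output from a temperature-linear one.
A = 3.9083e-3
B = -5.775e-7

def pt100_ohms(t_c):
    # Standard relation for t >= 0 degC
    return 100.0 * (1.0 + A * t_c + B * t_c * t_c)

r0, r300 = pt100_ohms(0.0), pt100_ohms(300.0)      # 100 ohm and ~212 ohm

def loop_ma(t_c):
    """Current linear in RTD resistance: 4 mA at 0 degC, 20 mA at 300 degC."""
    return 4.0 + 16.0 * (pt100_ohms(t_c) - r0) / (r300 - r0)

for t in (0, 150, 300):
    linear_in_t = 4.0 + 16.0 * t / 300.0           # current if it were linear in temperature
    print(f"{t:>3} degC: R = {pt100_ohms(t):6.2f} ohm, "
          f"I = {loop_ma(t):6.3f} mA (vs {linear_in_t:6.3f} mA if linear in T)")
```

The mid-scale difference between the two mappings is a few tenths of a milliamp, which is the curvature the stated accuracy figure has to live with unless linearization is added downstream.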
To avoid self-heating of the RTD, only 0.5 mA is sent through it. R1 and R2 must be adjusted to pass 0.5 mA in each limb. U1 and U4 are wired as zero tempco current sources. The difference in voltage between RTD (R4) and R3 is amplified in the instrumentation amplifier U3.
The RTD at 300 °C (i.e., R4 at 212.2 Ω) results in 160 µA through R8, while R10 sets a fixed 40 µA through itself. Hence, a total of 200 µA is fed to the input of the transmitter IC, U5.
U5 then amplifies this current by a factor of 100, to 20 mA, and converts it into the two-wire format. The U2 circuit generates -3.3 VDC to feed U3’s negative supply pin; accurate results were only achieved when operating U3 from a dual supply. The RTD at 0 °C gives a 4-mA current at the output (LOAD).
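Here is a minimal model of that signal chain, useful for sanity-checking the endpoints. It assumes R3 equals the RTD’s 0 °C value of 100 Ω (so the instrumentation amplifier sees zero differential input at 0 °C) and treats the R8 path simply as something that delivers 0 µA to 160 µA over the 0 °C to 300 °C span; the individual gain and resistor values aren’t modeled.

```python
# Minimal signal-chain model of the two-wire transmitter, using the numbers in the text:
# 0.5 mA RTD excitation, a fixed 40 uA via R10, 0-160 uA RTD-derived span via R8,
# and a x100 current gain in the U5 transmitter IC. R3 = 100 ohm is an assumption.

R_0C, R_300C = 100.0, 212.2   # Pt100 values used in the article
EXCITATION_MA = 0.5           # current through the RTD limb
SPAN_UA = 160.0               # R8-path current at 300 degC
OFFSET_UA = 40.0              # fixed current set by R10
XMTR_GAIN = 100.0             # current gain of transmitter IC U5

def loop_current_ma(r_rtd_ohms):
    # Differential voltage seen by the in-amp, assuming R3 = 100 ohm
    dv_mv = (r_rtd_ohms - R_0C) * EXCITATION_MA
    full_scale_mv = (R_300C - R_0C) * EXCITATION_MA   # ~56.1 mV at 300 degC
    input_ua = OFFSET_UA + SPAN_UA * dv_mv / full_scale_mv
    return XMTR_GAIN * input_ua / 1000.0              # uA -> mA at the LOAD

print(loop_current_ma(100.0))   # 4.0 mA at 0 degC
print(loop_current_ma(212.2))   # 20.0 mA at 300 degC
```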
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- DIY RTD for a DMM
- Two-wire interface has galvanic isolation
- Two-wire remote sensor preamp
- Designing with temperature sensors, part three: RTDs
The post A two-wire temperature transmitter using an RTD sensor appeared first on EDN.
Tony Pialis’ design journey to high-speed SerDes

Alphawave Semi is in the news after being acquired by Qualcomm for $2.4 billion, and so is its co-founder and CEO Tony Pialis, now widely seen as a semiconductor connectivity IP veteran. Here is a brief profile of Pialis, highlighting how the design of analog and mixed-signal semiconductors fascinated him early in his career and how this led to his work on DSP-centric SerDes architectures.
Read the full story at EDN’s sister publication, Planet Analog.
Related Content
- IP players prominent in chiplet’s 2024 diary
- Alphawave Semi’s quest for open chiplet ecosystem
- BoW Strengthens Pathway to Chiplet Standardization
- How the Worlds of Chiplets and Packaging Intertwine
- Why UCIe is Key to Connectivity for Next-Gen AI Chiplets
The post Tony Pialis’ design journey to high-speed SerDes appeared first on EDN.
Fooled by fake Apple AirPods 2: Fool me once, shame on you

Until recently, at least to the best of my awareness and recollection, I’ve only been fooled into three purchases of counterfeit tech devices, all of which I’ve documented in past blog posts:
- Hands-on review: Is a premium digital audio player worth the price? (Specifically note the mention within it of my acquisition of two fake-capacity 400GB microSD cards)
- Memory cards: Specifications and (more) deceptions, and
- USB activation dongles: Counterfeit woes
In the first and third cases, I should, in retrospect, have known better, since the devices I bought were substantially lower priced than equivalents from more “legitimate” seller sources. The second case was the seeming result of someone returning to Amazon a falsely labeled subpar substitute for something they’d bought, and Amazon not catching the switcheroo and reselling it as legit on the “Warehouse” area of their site. In all three cases, happily, I got my money back.
Mercari’s previous track record
This fourth time, unfortunately, I wasn’t so lucky. And the deception was, if anything, more impressive (among other, less-positive adjectives) than before. Mercari, for those of you not already familiar with it, is a Japan-based buyer/seller intermediary online service in the same vein as eBay or (for audio gear) Reverb. It’s generally considered to be more “seller-friendly” than eBay; how much so will become clear shortly. My first (recent) purchase there, which went smoothly and successfully, was of a headphone amplifier. I subsequently picked up a matching equalizer from the same manufacturer (Schiit Audio) via another seller, along with two Raspberry Pi AI Cameras and a special-edition Ray-Ban Meta AI Glasses set, the latter of which will be showcased in another writeup (a pseudo-teardown, to be precise) this month. All went fine.
The sixth Mercari-coordinated transaction? Not so fine. Perhaps, in retrospect, the prior track record lulled me into a false sense of complacency. More likely, I just “wanted to believe” too much, referencing the memorable X-Files poster- and character-delivered quote:
I’d been watching the for-sale postings of a seller with username “Skii” (apparently short for “skiitheflipper”) for a while. This person regularly listed claimed-legitimate, factory-sealed second-generation Apple AirPods Pro for sale, and Mercari dutifully let me know each time “Skii” lowered the sales price on the particular set I was watching. When it hit $93, I was seriously tempted. And when Mercari did a one-day free-shipping promotion, I bit. The total price, including sales tax and a $3.54 “buyer protection fee” (cue foreshadow snort), was $100.88.
A pause here for some background: first off, given that I already had both a primary and backup set of first-generation AirPods Pros, why was I interested in a second-generation set in the first place, particularly with the AirPods Pro 3 rumored to arrive later this year? At the normal $249 price, I wasn’t. Even at the fairly common $189-or-thereabouts promotion price, I didn’t pull the trigger. But why the interest at all? The AirPods Pro 2, in comparison to the first-generation forebears I already owned, have claimed superior active noise cancellation, for example, along with dynamically adaptive “transparency” mode and longer battery life. More recent gen-2-only enhancements include the Hearing Aid feature, which I don’t require (at least I don’t think so! What did you say?) but still wanted to try. And the further temptation of possibly upcoming Live Language Translation pushed me over the edge…once the price tag dipped below $100, that is.
About that price…many of you are likely right now thinking something along the lines of “Were you really so stupid as to think that something sold at a roughly 2/3 discount to MSRP was legit?”. Actually, I wasn’t. Although, hey, who knows…per the “flipper” portion of the seller’s username, maybe he or she had scooped up the inventory of a going-out-of-business retailer and was reselling it. Regardless, I figured that I’d give the purchase a shot, and if it was a fake (as I assumed would be immediately obvious) I’d file a dispute and get my money back. Which was conceptually feasible, mind you. But as a Mercari newbie, I didn’t realize how difficult it would be, both in an absolute sense and in relative comparison to eBay (on which I’d been a regular participant since 1997), to translate the buyer-refund concept into reality. Hold that thought.
The listing
Back to my story. To his credit, the seller at least was speedy from a delivery standpoint; the package shipped on Tuesday, April 1 (April Fool’s Day: how prophetic) and arrived that same Saturday. At first glance, at least to my uneducated-recipient’s eyes, everything looked authentic, although had I known more/better I never would have clicked on “buy” in the first place. Here are the photos that accompanied the listing:
Looks legit, right? Only one problem, I later learned: Apple reportedly no longer sells AirPods in shrink-wrapped boxes. But pretend for now that, like me at the time, you’re not aware of that critical nuance.
The delivery
Here’s the (shrink-wrap already removed) box of the actual product I received, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
The tape strips at the top and the bottom were seemingly “stiffer” than I’d remembered before with other Apple devices, but I didn’t give that much thought, eager to get inside:
Getting the two halves of the box apart was similarly more difficult than I’d remembered …but again, I now have the benefit of hindsight in making these key-nitpick observations. The literature package up-top seemed to be legit:
The protective plastic sleeve around the case admittedly also seemed flimsier than I’d remembered from other Apple earbuds (there’s that rear-view-mirror perspective again):
Putting the case aside for a minute, let’s look at the rest of the contents (again, a reminder that these shots were taken post-initial removal). Extra earbud tip sizes:
And below them, an authentic-looking and functional (at least for charging) USB-C cable:
Now for that case. The LED in front seemingly operates as expected (both green and orange illumination mode options), as do the speakers and USB-C port on the bottom edge:
Open the case: it still looks, sounds (beep!), and otherwise acts legit.
Earbuds out: Charging contacts at the bottom of each receptacle.
The earbuds themselves, from various perspectives. Apologies for the earwax remnants:

Back in the case for initial pairing, which is where my initial “that’s odd” moment happened (hold that thought for a possible explanation to follow shortly):
I didn’t recall this particular message before, particularly for supposedly brand-new Apple earbuds, but I plunged on and got them paired and associated with my Apple user account straightaway. They automatically also appeared in the paired-Bluetooth-devices listings of all my other Apple widgets. One thing that seemed a little strange upfront was that “Handoff” mode, wherein I could connect to them from another account-associated device even if they were connected to a different device at the time, didn’t seem to work. I instead needed to manually disconnect them from, for example, my iPad before I could connect them to my MacBook Pro. But after a bit of online research, I chalked that up to a potential bug with the earbuds’ current firmware version (7A305, which dated from late September 2024). And about that…check out the three-part post-activation settings listing:
All looks legit, right? The earbuds even claimed that valid warranty coverage existed through June 2025! Activating noise cancellation seemed to do something, although I can’t say it was notably superior (or even equivalent) to what I’d experienced with their first-generation forebears. And when I tried to activate hearing aid mode, it told me that I’d have to update the firmware before that particular feature was available for my use. All reasonable. When I tried “Find My” on them, it couldn’t locate them, but I figured that since I’d just activated them, Apple’s servers were just slow and they’d show up eventually. So, I connected them to my iPad, put them in close proximity to it so that the firmware would auto-update, and…
Mercari’s seller rating policy
Another background-info pause. As soon as USPS delivered the earbuds, Mercari, as usual, sent an email alerting me that they were at my front door and—this is key—immediately encouraging me to “rate the seller” so he or she could get paid. From past experience, those emails would continue at a one-to-multiple-times-per-day cadence until I either reported a problem or went ahead and rated the seller. And here’s the twist that I didn’t realize until afterwards:
- The buyer has only 72 hours after package delivery to report a problem
- In the absence of buyer response to the contrary within (only) 72 hours, Mercari goes ahead and automatically marks the package as received and pays the seller anyway (remember my earlier “seller-friendly” comment about the service?), and
- Once the transaction is complete, Mercari washes its hands (words ironically written by me within the Triduum) of the matter and accepts no further fiscal responsibility.
Did I go ahead and rate the seller that same evening so that he/she could get promptly paid? Bathed in the “giddiness glow” of a seemingly legit transaction…yes, I did. Sigh. The next morning, of course, when I checked the settings and saw that the firmware had not updated, alarm bells belatedly started going off in my head. Multiple subsequent factory resets and re-pairs were equally unsuccessful in getting the firmware updated. And then I found this video:
Different serial numbers on each earbud
Watching it, I remembered that although the serial number on the packaging matched the one on the case and (by default) the one reported in settings, the serial numbers for the right and left earbuds were supposed to be different in both places. Here, for example, are the serial number markings on the case and earbuds for my first-generation AirPods Pros (apologies again for the earwax bits):
The print is a bit faint, but hopefully you can see that the serial numbers on the earbuds both differ from the one on the case (H6VHW8651059) and from each other (H6QHWQRX06CJ and H6VHX88Q0C6K).
Now here’s the settings listing entry of interest for the “fakes”. If you click on it, it’ll report unique serial numbers for the case and each earbud:
And the model numbers stamped on the case and both earbuds are also different, and match what they should be. But the stamped serial numbers on all three? Identical: DT601W1T41.
And are, I’m guessing, also identical to the serial numbers stamped on and reported by who-knows-how-many other identical “fakes” also sold by “Skii” and others, which is why I got the initial “not your” notification at the beginning of the pairing process. Apple, is it not possible to detect and more meaningfully alert owners when multiple sets of earbuds with identical serial numbers are activated? You have both fiscal and reputational motivations to do so, after all.
I’m begrudgingly impressed with the degree to which these counterfeit earbuds mimic the real thing. And here’s the twist: were I the owner of only a single Apple device, such as an iPhone, and therefore unaware of the “Handoff” glitch, and were I a typical non-geeky (non-Brian) consumer, unaware of how to determine the installed firmware version (let alone what the current version is, how to update it, and what it would add to my usage experience)…I might have remained blissfully unaware far into the future that I’d bought a cheap-but-fake set of earbuds…maybe forever.
Reporting the fake to Mercari
I reported the situation in detail, complete with pictures, to Mercari. Throughout the subsequent email back-and-forth, they several times reminded me that:
When you submit a rating for your transaction, you are prompted to confirm you understand that once the rating is submitted, the sale is final. Once submitted, funds are released to the seller and we are no longer able to process a return or refund. Moving forward, if you receive an item that is not as described, please do not submit a rating for your seller and contact us immediately.
That said, they also made a promise:
Please know that if a report is made against the seller we will conduct an investigation to confirm the suspicions. If the seller is found guilty immediate action will be taken against them regarding their listings. I appreciate for your concerns and just for you we will further review this for you. Your case will be taken as a feedback and we do value your time in reaching out to us to inform us of this inquiry.
and:
Regarding your seller’s action, please note that our team is very vigilant with those kinds of activities. We will conduct monitoring where we will need to check all their transactions and make sure that our Trust and Safety team will review this concern. Please leave it to us, rest assured that the right sanction will be given accordingly.
It’s been two weeks since I filed my report. “Skii” is still on Mercari. And as I write these words, he or she has three more “authentic” AirPods Pro 2 sets for sale. Admittedly, “Skii” may not even know that the earbuds being sold are fakes. Although I doubt it. Caveat emptor, indeed.
Then there was this closing email from Mercari:
Thank you for reaching out to Mercari! We hope your issue was resolved to your satisfaction.
We would appreciate it if you could take a moment to provide us with your feedback. Your input helps us improve our service. Thank you for helping us serve you better!
Grrr.
I plan to “turn lemons into lemonade” via future teardowns of both the case and one of the earbuds, comparing them to existing published teardowns of legit alternatives. Until then, I welcome your thoughts in the comments on what I’ve written so far on this “fakes” saga.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Hands-on review: Is a premium digital audio player worth the price?
- Memory cards: Specifications and (more) deceptions
- USB activation dongles: Counterfeit woes
- Apple’s latest product launch event takes flight
- Apple’s fall 2024 announcements: SoC and memory upgrade abundance
The post Fooled by fake Apple AirPods 2: Fool me once, shame on you appeared first on EDN.
Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion.

Editor’s Note: This DI is a two-part series.
In Part 1, Nick Cornford deliberately oscillates the TDA7052A audio power amplifier to produce a siren-like sound and, given the device’s distortion characteristics, a functional Wien bridge oscillator.
In Part 2, Cornford minimizes this distortion and adds amplitude control to the circuit.
In the first part of this Design Idea (DI), we saw how to use the TDA7052A (or similar) power amp to build a minimalist siren-type power oscillator, and also, having looked at that device’s distortion characteristics under various operating conditions, a simple but half-decent Wien bridge oscillator. In this second part, we’ll turn that semi-decency into something even more respectable, concentrating on minimizing distortion and ignoring the siren song of raw power. We will also stick with a 5-V supply, even though the chip can handle up to 18 V. Figure 1 shows the new circuit.
Figure 1 Adding more precise level-sensing allows much better amplitude control and also reduces distortion.
Most of the changes are in the control loop, but input resistor R5 has been increased because we have more gain available now that Vcon can be driven high as well as just being pulled low. In passing, the series combination of R5 and U1’s nominally 20k input resistance shunts R2/R4 a little; a slightly disturbing operation. Adding 120k across R1/R3 could compensate for that, but the practical difference is tiny. The gangs on twin pots never quite match anyway, so using the lower-value half for R1 is good enough.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The control loop, detailed
Peak output levels are now sensed by comparison with the reference voltage defined by R9 (or its wiper) and R10. (Be careful: pot R9 and fixed resistor R10 should match, so the latter may need to be trimmed.) The junction of R11 and R12 floats at ~650 mV until the peak amplitude exceeds the reference, when D1 or D2 will briefly pull it lower by ~450 mV. The resulting attenuated pulse stream is filtered by R6/C6 and R7/C7 and level-shifted and buffered by Q1 into U1’s Vcon pin. That attenuation is necessary because the effective control-loop gain would otherwise be excessive. U2 is an MCP6022, which has a low input-offset voltage and is fast enough to work well as a comparator. The overall temperature stability is good, the output level scarcely changing between 20°C and ~50°C.
Any residual ripple in a control loop like this will cause distortion by modulating the signal. It’s minimized by the lead-lag filter R6/C6 and C7/R7, which also controls the settling time. The values used leave the loop somewhat under-damped at low frequencies, but are a good compromise.
The spectra with unloaded outputs at 0 dBV and -10 dBV are both shown in Figure 2.
Figure 2 The spectra for 1 kHz outputs at 0 dBV and -10 dBV, showing THDs of around -60 and -73 dB respectively—say 0.1% and 0.02%.
Further reducing the output levels doesn’t improve distortion much, partly because this control scheme then becomes less stable. Any offset of one output relative to the other can result in one “phase” of pulses being missed, leading to increased second-harmonic distortion, as implied by the relative levels of second and third harmonics for the two levels shown in Figure 2. Trimming the power amp’s offset makes surprisingly little difference, so it isn’t implemented.
Trying to derive the control voltage from a more complex full-wave rectifier gave virtually the same distortion figures, so we’ll stick to Figure 1’s circuit, run it at -10 dBV or a little less, and accept its THD.
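For reference, the dB-to-percent conversions quoted throughout are just 10^(dB/20); a two-line check confirms the figures.

```python
# Convert THD expressed in dB (relative to the fundamental) to a percentage.
def thd_percent(thd_db):
    return 100.0 * 10.0 ** (thd_db / 20.0)

for db in (-60.0, -73.0):
    print(f"{db:.0f} dB -> {thd_percent(db):.3f} %")   # ~0.100 % and ~0.022 %
```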
Some finishing touches
Swapping the capacitors C1 and C2 for 3n3 and 330n parts gave the expected performance on higher and lower ranges, though distortion was a few dB higher on the low range, and the level was less well-controlled at the highest frequencies. All that’s needed to turn this into a useful kit is an output stage, for which the fairly conventional circuit of Figure 3 works well.
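If you want to estimate what those capacitor swaps do to the tuning ranges, the Wien-bridge frequency is f = 1/(2πRC). The sketch below assumes a 33-nF mid-range value for C1/C2 and a tuning-resistance sweep of roughly 1 kΩ to 11 kΩ, purely for illustration, since only the 3n3 and 330n range capacitors are named here.

```python
# Wien-bridge frequency f = 1 / (2 * pi * R * C) for the three capacitor ranges.
# The 33 nF mid-range value and the 1k-11k tuning-resistance sweep are assumptions
# for illustration; only the 3n3 and 330n range capacitors are named in the text.
from math import pi

def wien_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

R_MIN, R_MAX = 1e3, 11e3
for label, c in (("3n3", 3.3e-9), ("33n (assumed mid)", 33e-9), ("330n", 330e-9)):
    lo, hi = wien_hz(R_MAX, c), wien_hz(R_MIN, c)
    print(f"C = {label:>17}: {lo:8.1f} Hz to {hi:8.1f} Hz")
```

Each factor-of-ten change in capacitance simply shifts the decade-wide tuning range up or down by the same factor, which is consistent with the “expected performance on higher and lower ranges” noted above.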
Figure 3 A simple output stage can deliver up to about +5 dBV, which is the rail-to-rail swing for a 5 V supply.
The extra stage adds almost no distortion, and leaves a spare op-amp (U3b) which could be used to invert the signal from U3a to provide antiphase outputs. One final spectrum, taken with the oscillator set for -10 dBV and the output for a swing of nearly 5 V, is in Figure 4. That confirms an overall THD of -73 dB, or 0.022%. Hardly earth-shattering, but still quite decent.
Figure 4 The spectrum at the output of Figure 3 for an output of ~+5 dBV.
No tweeters were harmed in the making of this DI, though a 10-Ω, 1/4-W load resistor started to smoke. A 1-W part would have been healthier, as it would only have vaped.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Power amplifiers that oscillate—deliberately. Part 1: A simple start.
- Ultra-low distortion oscillator, part 1: how not to do it.
- Ultra-low distortion oscillator, part 2: the real deal
- Distortion in power amplifiers, Part IV: the power amplifier stages
- A pitch-linear VCO, part 1: Getting it going
- A pitch-linear VCO, part 2: taking it further
The post Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion. appeared first on EDN.
What does Qualcomm’s Alphawave acquisition stand for?

Qualcomm’s $2.4 billion acquisition of Alphawave Semi is expected to take the San Diego, California-based semiconductor powerhouse beyond mobile processors and into the world of AI and data center chips. Here is a closer look at how Alphawave’s SerDes and chiplet technologies could aid Qualcomm in making a foray into the booming markets for AI and data centers.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- IP players prominent in chiplet’s 2024 diary
- Alphawave Semi’s quest for open chiplet ecosystem
- BoW Strengthens Pathway to Chiplet Standardization
- How the Worlds of Chiplets and Packaging Intertwine
- Why UCIe is Key to Connectivity for Next-Gen AI Chiplets
The post What does Qualcomm’s Alphawave acquisition stand for? appeared first on EDN.