EDN Network

Voice of the Engineer

Achieving analog precision via components and design, or just trim and go

Tue, 11/04/2025 - 07:28

I finally managed to get to a book that has been on my “to read” list for quite a while: “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” (2022) by James Vincent (Figure 1). It traces the evolution of measurement, its uses, and its motivations, and how measurement has shaped our world, from ancient civilizations to the modern day. On a personal note, I found the first two-thirds of the book a great read, but in my opinion, the last third wandered and meandered off topic. Regardless, it was a worthwhile book overall.

Figure 1 This book provides a fascinating journey through the history of measurement and how different advances combined over the centuries to get us to our present levels. Source: W. W. Norton & Company Inc.

Several chapters deal with the way measurement and the nascent science of metrology were used by two leading manufacturing entities of the early 20th century, Rolls-Royce and Ford Motor Company, and with the manufacturing differences between them.

Before looking at the differences, you have to “reset” your frame of reference and recognize that even low-to-moderate volume production in those days involved a lot of manual fabrication of component parts, as the “mass production” machinery we now take as a given often didn’t exist or could only do rough work.

Rolls-Royce made (and still makes) fine motor cars, of course. Much of their quality was not just in the finish and accessories; the car was entirely mechanical except for the ignition system. They featured a finely crafted and tuned powertrain. It’s not a myth that you could balance a filled wine glass on the hood (bonnet) while the engine was running and not see any ripples in the liquid’s surface. Furthermore, you could barely hear the engine at all at a time when cars were fairly noisy.

They achieved this level of performance using careful and laborious manual adjustments, trimming, filing, and balancing of the appropriate components to achieve that near-perfect balance and operation. Clearly, this was a time-consuming process requiring skilled and experienced craftspeople. It was “mass production” only in terms of volume, but not in terms of production flow as we understand it today.

In contrast, Henry Ford focused on mass production with interchangeable parts that would work to design objectives immediately when assembled. Doing so required advances in measurement of components at Ford’s factory to weed out substandard incoming parts, along with statistical analysis of quality, conformance, and deviations. Ford also sent specialists to suppliers’ factories to improve both their production processes and their metrology.

Those were the days

Of course, those were different times in terms of industrial production. When the Wright brothers needed a gasoline engine for the 1903 Flyer, few “standard” engine choices were available, and none came close to their size, weight, and output-power needs.

So, their in-house mechanic Charlie Taylor machined an aluminum engine block, fabricated most parts, and assembled an engine (1903 Wright Engine) in just six weeks using a drill press and a lathe; it produced 12 horsepower, 50% above the 8 horsepower that their calculations indicated they needed (Figure 2).

Figure 2 Perhaps the ultimate do-it-yourself project: Charlie Taylor, mechanic for the Wright brothers, machined the aluminum engine block, fabricated most of the parts, and then assembled the complete engine in six weeks (reportedly working only from rough sketches). Source: Wright Brothers Aeroplane Company

Which approach is better—fine adjustment and trimming, or a better design with superior components? There’s little doubt that the “quality by components” approach is the better tactic in today’s world, where even customized cars make use of many off-the-shelf parts.

Moreover, the volume required for a successful car-production line rules out hand-tuning individual vehicles; the components must simply fit and function properly as assembled. Even Rolls-Royce now uses the Ford approach, of course; the alternative is impractical for modern vehicles except for special trim and accessories.

Single unit “perfection” uses both approaches

In some cases, calibration and the use of a better topology with superior components combine for a unique design. Not surprisingly, a classic example is one of the first EDN articles by the late analog-design genius Jim Williams, “This 30-ppm scale proves that analog designs aren’t dead yet”. Yes, I have cited it in previous blogs, and that’s no accident (Figure 3).

Figure 3 This 1976 EDN article by Jim Williams set a standard for analog signal-chain technical expertise and insight that has rarely been equaled. Source: EDN

In the article, he describes his step-by-step design concept and fabrication process for a portable weigh scale that would offer 0.02% absolute accuracy and 0.01-lb resolution over a 300-pound range, yet would never need adjustment before being put into use. Even though this article is nearly 50 years old, it still has relevant lessons for our very different world.

I believe that Jim did a follow-up article about 20 years later, where he revisited and upgraded that design using newer components, but I can’t find it online.

Today’s requirements were unimaginable—until recently

In-process calibration is advancing thanks to techniques such as laser-based interferometry. For example, in semiconductor wafer-processing systems, the positional accuracy of the carriage that moves over the wafer needs to be in the sub-micrometer range.

While this level of performance can be achieved with friction-free air bearings, they cannot be used in extreme-ultraviolet (EUV) systems, since those operate in an ultra-high-vacuum environment. Instead, high-performance mechanical bearings must be used, even though they are inferior to air bearings.

There are micrometer-level errors in the x-axis and y-axis, and the two axes are also not perfectly orthogonal, resulting in a system-level error typically greater than several micrometers across the 300 × 300-mm plane. To compensate, manufacturers add interferometry-based calibration of the mechanical positioning systems to determine the error topography of a mechanical platform.

For example, with a 300-mm wafer, the grid is scanned in 10-mm steps, and the interferometer determines the actual position. This value is compared against the motion-encoder value to determine a corrective offset. After this mapping, the system accuracy is improved by a factor of 10 and can achieve an absolute accuracy of better than 0.5 µm in the x-y plane.
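As a concrete illustration of that mapping step, here is a minimal Python sketch of how such an error map might be built and applied. The 10-mm grid pitch and 300-mm span follow the example above; the measurement callbacks, function names, and nearest-node correction scheme are illustrative assumptions, not a description of any specific vendor’s implementation.

    # Hypothetical sketch: build an interferometer-vs-encoder error map on a
    # 10-mm grid, then correct subsequent encoder readings with it.
    GRID_STEP_MM = 10    # scan pitch from the example above
    GRID_SPAN_MM = 300   # 300 x 300-mm plane

    def build_error_map(move_to, read_encoder, read_interferometer):
        """Scan the grid and record (interferometer - encoder) offsets per node."""
        error_map = {}
        for x in range(0, GRID_SPAN_MM + 1, GRID_STEP_MM):
            for y in range(0, GRID_SPAN_MM + 1, GRID_STEP_MM):
                move_to(x, y)                          # command the stage to the nominal node
                enc_x, enc_y = read_encoder()          # position the motion encoder reports
                ifm_x, ifm_y = read_interferometer()   # "ground truth" from the interferometer
                error_map[(x, y)] = (ifm_x - enc_x, ifm_y - enc_y)
        return error_map

    def corrected_position(error_map, enc_x, enc_y):
        """Apply the offset of the nearest calibration node to an encoder reading."""
        node = (round(enc_x / GRID_STEP_MM) * GRID_STEP_MM,
                round(enc_y / GRID_STEP_MM) * GRID_STEP_MM)
        dx, dy = error_map.get(node, (0.0, 0.0))
        return enc_x + dx, enc_y + dy

A production system would interpolate between nodes and fold in the axis non-orthogonality mentioned above, but the principle is the same: measure once, store the error topography, and apply it on every move.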

Maybe too smart?

Of course, there are times when you can be a little too clever in the selection of components when working to improve system performance. Many years ago, I worked for a company making controllers for large machines, and there was one circuit function that needed coarse adjustment for changes in ambient temperature. The obvious way would have been to use a thermistor or a similar component in the circuit.

But our lead designer—a circuit genius by any measure—had a “better” idea. Since the design used a lot of cheap, 100-kΩ pullup resistors with poor temperature coefficient, he decided to use one of those instead of the thermistor in the stabilization loop, as they were already on the bill of materials. The bench prototype and pilot-run units worked as expected, but the regular production units had poor performance.

Long story short: our brilliant designer had based the circuit stabilization on the deliberately poor tempco of these re-purposed pull-up resistors and the associated loop dynamic range. However, our in-house purchasing agent got a good deal on some resistors of the same value and size, but with a much tighter tempco. To purchasing, getting a better component that was functionally and physically identical for less money seemed like a win-win.

That was fine for the pull-up role, but it meant that the transfer function of temperature to resistance was severely compressed. Identifying that problem took a lot of aggravation and time.

What’s your preferred approach to achieving a high-precision, accurate, stable analog-signal chain and front-end? Have you used both methods, or are you inherently partial to one over the other? Why?

Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side, working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.

Related Content

The post Achieving analog precision via components and design, or just trim and go appeared first on EDN.

LED illumination addresses ventilation (at the bulb, at least)

Mon, 11/03/2025 - 15:37

The bulk of the technologies and products based on them that I encounter in my everyday interaction with the consumer electronics industry are evolutionary (and barely so in some cases) versus revolutionary in nature. A laptop computer, a tablet, or a smartphone might get a periodic CPU-upgrade transplant, for example, enabling it to complete tasks a bit faster and/or a bit more energy-efficiently than before. But the task list is essentially the same as was the case with the prior product generation…and the generation before that…and…not to mention that the physical appearance also usually remains essentially the same from one generation to the next.

Such cadence commonality is also the case with many LED light bulbs I’ve taken apart in recent years, in no small part because they’re intended to visually mimic incandescent precursors. But SANSI has taken a more revolutionary tack, in the process tackling an issue—heat—with which I’ve repeatedly struggled. Say what you (rightly) will about incandescent bulbs’ inherent energy inefficiency, along with the corresponding high temperature output that they radiate—there’s a fundamental reason why they were the core heat source for the Easy-Bake Oven, after all:

But consider, too, that they didn’t integrate any electronics; the sole failure points were the glass globe and filament inside it. Conversely, my installation of both CFL and LED light bulbs within airflow-deficient sconces in my wife’s office likely hastened both their failure and preparatory flickering, due to degradation of the capacitors, voltage converters and regulators, control ICs and other circuitry within the bulbs as well as their core illumination sources.

Evolutionary vs revolutionary

That’s why SANSI’s comparatively fresh approach to LED light bulb design, which I alluded to in the comments of my prior teardown, has intrigued me ever since I first saw and immediately bought both 2700K “warm white” and 5000K “daylight” color-temperature multiple-bulb sets on sale at Amazon two years ago:

They’re smaller A15, not standard A19, in overall dimensions, although the E26 base is common between the two formats, so they can generally still be used in place of incandescent bulbs (although, unlike incandescents, these particular LED light bulbs are not dimmable):

Note, too, their claimed 20% brighter illumination (900 vs 750 lumens) and 5x estimated longer usable lifetime (25,000 hours vs 5,000 hours). Key to that latter estimation, however, is not only the bulb’s inherent improved ventilation:

Versus metal-swathed and otherwise enclosed-circuitry conventional LED bulb alternatives:

But it is also the ventilation potential (or not) of wherever the bulb is installed, as the “no closed luminaires” warning included on the sticker on the left side of the SANSI packaging makes clear:

That said, even if your installation situation involves plenty of airflow around the bulb, don’t forget that the orientation of the bulb is important, too. Specifically, since heat rises, if the bulb is upside-down with the LEDs underneath the circuitry, the latter will still tend to get “cooked”.

Perusing our patient

Enough of the promo pictures. Let’s now look at the actual device I’ll be tearing down today, starting with the remainder of the box-side shots, in each case, and as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Open ‘er up:

lift off the retaining cardboard layer, and here’s our 2700K four-pack, which (believe it or not) had set me back only $4.99 ($1.25/bulb) two years back:

The 5000K ones I also bought at that same time came as a two-pack, also promo-priced, this time at $4.29 ($2.15/bulb). Since they ended up being more expensive per bulb, and because I have only two of them, I’m not currently planning on also taking one of them apart. But I did temporarily remove one of them and replace it in the two-pack box with today’s victim, so you could see the LED phosphor-tint difference between them. 5000K on left, 2700K on right; I doubt there’s any other design difference between the two bulbs, but you never know…🤷‍♂️

Aside from the aforementioned cardboard flap for position retention above the bulbs and a chunk of Styrofoam below them (complete with holes for holding the bases’ end caps in place):

There’s no other padding inside, which might have proven tragic if we were dealing with glass-globe bulbs or flimsy filaments. In this case, conversely, it likely suffices. Also note the cleverly designed sliver of literature at the back of the box’s insides:

Now, for our patient, with initial overview perspectives of the top:

Bottom:

And side:

Check out all those ventilation slots! Also note the clips that keep the globe in place:

Before tackling those clips, here are six sequential clockwise-rotation shots of the side markings. I’ll leave it to you to mentally “glue” the verbiage snippets together into phrases and sentences:

Diving in for illuminated understanding

Now for those clips. Downside: they’re (understandably, given the high voltage running around inside) stubborn. Upside: no even-more-stubborn glue!

Voila:

Note the glimpses of additional “stuff” within the base, thanks to the revealing vents. Full disclosure and identification of the contents is our next (and last) aspiration:

As usual, twist the end cap off with tongue-and-groove slip-joint (“Channellock”) pliers:

and the ceramic substrate (along with its still-connected wires and circuitry, of course) dutifully detaches from the plastic base straightaway:

Not much to see on the ceramic “plate” backside this time, aside from the 22µF 200V electrolytic capacitor poking through:

Integrated and otherwise simple = Cheap

The frontside is where most of the “action” is:

At the bottom is a mini-PCB that mates the capacitor and wires’ soldered leads to the ceramic substrate-embedded traces. Around the perimeter, of course, is the series-connected chain of 17 (if I’ve counted correctly) LEDs with their orange-tinted phosphor coatings, spectrum-tuned to generate the 2700K “warm white” light. And the three SMD resistors scattered around the substrate, two next to an IC in the upper right quadrant (33Ω “33R0” and 20Ω “33R0”) and another (marked “334”, nominally 330 kΩ) alongside a device at left, are also obvious.

Those two chips ended up generating the bulk of the design intrigue, in the latter case still an unresolved mystery (at least to me). The one at upper right is marked, alongside a company logo that I’d not encountered before, as follows:

JWB1981
1PC031A

The package also looks odd; the leads on both sides are asymmetrically spaced, and there’s an additional (fourth) lead on one side. But thanks to one of the results from my Google search on the first-line term, in the form of a Hackaday post that then pointed at an informative video:

This particular mystery has, at least I believe, been solved. Quoting from the Hackaday summary (with hyperlinks and other augmentations added by yours truly):

The chip in question is a Joulewatt JWB1981, for which no datasheet is available on the internet [BD note: actually, here it is!]. However, there is a datasheet for the JW1981, which is a linear LED driver. After reverse-engineering the PCB, bigclivedotcom concluded that the JWB1981 must [BD note: also] include an onboard bridge rectifier. The only other components on the board are three resistors, a capacitor, and LEDs.

 The first resistor limits the inrush current to the large smoothing capacitor. The second resistor is to discharge the capacitor, while the final resistor sets the current output of the regulator. It is possible to eliminate the smoothing capacitor and discharge resistor, as other LED circuits have done, which also allow the light to be dimmable. However, this results in a very annoying flicker of the LEDs at the AC frequency, especially at low brightness settings.

Compare the resultant schematic shown in the video with one created by EDN’s Martin Rowe, done while reverse-engineering an A19 LED light bulb at the beginning of 2018, and you’ll see just how cost-effective a modern design approach like this can be.

That only leaves the chip at left, with two visible soldered contacts (one on each end), and bare on top save for a cryptic rectangular mark (which leaves Google Lens thinking it’s the guts of a light switch, believe it or not). It’s not referenced in “Big Clive’s” deciphered design, and I can’t find an image of anything like it anywhere else. Diode? Varistor to protect against voltage surges? Resettable fuse to handle current surges? Multiple of these? Something(s) else? Post your [educated, preferably] guesses, along with any other thoughts, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post LED illumination addresses ventilation (at the bulb, at least) appeared first on EDN.

Makefile vs. YAML: Modernizing verification simulation flows

Mon, 11/03/2025 - 12:12

Automation has become the backbone of modern SystemVerilog/UVM verification environments. As designs scale from block-level modules to full system-on-chips (SoCs), engineers rely heavily on scripts to orchestrate compilation, simulation, and regression. The effectiveness of these automation flows directly impacts verification quality, turnaround time, and team productivity.

For many years, the Makefile has been the tool of choice for managing these tasks. With its rule-based structure and wide availability, Makefile offered a straightforward way to compile RTL, run simulations, and execute regressions. This approach served well when testbenches were relatively small and configurations were simple.

However, as verification complexity exploded, the limitations of Makefile have become increasingly apparent. Mixing execution rules with hardcoded test configurations leads to fragile scripts that are difficult to scale or reuse across projects. Debugging syntax-heavy Makefiles often takes more effort than writing new tests, diverting attention from coverage and functional goals.

These challenges point toward the need for a more modular and human-readable alternative. YAML, a structured configuration language, addresses many of these shortcomings when paired with Python for execution. Before diving into this solution, it’s important to first examine how today’s flows operate and where they struggle.

Current scenario and challenges

In most verification environments today, Makefile remains the default choice for controlling compilation, simulation, and regression. A single Makefile often governs the entire flow—compiling RTL and testbench sources, invoking the simulator with tool-specific options, and managing regressions across multiple testcases. While this approach has been serviceable for smaller projects, it shows clear limitations as complexity increases.

Below is an outline of key challenges.

  • Configuration management: Test lists are commonly hardcoded in text or CSV files, with seeds, defines, and tool flags scattered across multiple scripts. Updating or reusing these settings across projects is cumbersome.
  • Readability and debugging: Makefile syntax is compact but cryptic, which makes debugging errors non-trivial. Even small changes can cascade into build failures, demanding significant engineer time.
  • Scalability: As testbenches grow, adding new testcases or regression suites quickly bloats the Makefile. Managing hundreds of tests or regression campaigns becomes unwieldy.
  • Tool dependence: Each Makefile is typically tied to a specific simulator such as VCS, Questa, or Xcelium. Porting the flow to a different tool requires major rewrites.
  • Limited reusability: Teams often reinvent similar flows for different projects, with little opportunity to share or reuse scripts.

These challenges shift the engineer’s focus away from verification quality and coverage goals toward the mechanics of scripting and tool debugging. Therefore, the industry needs a cleaner, modular, and more portable way to manage verification flows.

Makefile-based flow

A traditional Makefile-based verification flow centers around a single file containing multiple targets that handle compilation, simulation, and regression tasks. See the representative structure below.

This approach offers clear strengths: immediate familiarity for software engineers, no additional tool requirements, and straightforward dependency management. For small teams with stable tool chains, this simplicity remains compelling.

However, significant challenges emerge with scale. Cryptic syntax becomes problematic; escaped backslashes, shell expansions, and dependencies create arcane scripting rather than readable configuration. Debug cycles lengthen with cryptic error messages, and modifications require deep Make expertise.

Tool coupling is evident in the above structure—compilation flags, executable names, and runtime arguments are VCS-specific. Supporting Questa requires duplicating rules with different syntax, creating synchronization challenges.

As a result, maintenance overhead grows exponentially. Adding tests requires multiple modifications, parameter changes demand careful shell escaping, and regression management quickly outgrows Make’s capabilities, forcing hybrid scripting solutions.

These drawbacks motivate the search for a more human-readable, reusable configuration approach, which is where YAML’s structured, declarative format offers compelling advantages for modern verification flows.

YAML-based flow

YAML (YAML Ain’t Markup Language) provides a human-readable data serialization format that transforms verification flow management through structured configuration files. Unlike Makefile’s imperative commands, YAML uses declarative key-value pairs with intuitive indentation-based hierarchy.

See below this YAML configuration structure that replaces complex Makefile logic:
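As a point of reference, here is a minimal, hypothetical sketch of what such a configuration might look like, held in a Python string and parsed with PyYAML (the section names, file paths, and flags are illustrative assumptions rather than a real project setup):

    # Minimal sketch of a declarative flow configuration parsed from YAML.
    # All names and values are illustrative.
    import yaml  # PyYAML

    CONFIG_YML = """
    project: ip1_verif
    tool: vcs                      # vcs | questa | xsim
    build:
      files:   [rtl/ip1.sv, tb/ip1_tb.sv]
      incdirs: [tb/include]
      defines: [UVM_NO_DPI]
    sim:
      logdir: logs
      plusargs: [+UVM_VERBOSITY=UVM_MEDIUM]
    tests:
      - {name: ip1_smoke_test,  seed: 1}
      - {name: ip1_random_test, seed: random, iterations: 100}
    """

    cfg = yaml.safe_load(CONFIG_YML)
    print(cfg["tool"], [t["name"] for t in cfg["tests"]])

The configuration states what to build and run; nothing in it is tied to a particular simulator’s command-line syntax.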

The modular structure becomes immediately apparent through organized directory hierarchies. As shown in Figure 1, a well-structured YAML-based verification environment separates configurations by function and scope, enabling different team members to modify their respective domains without conflicts.

Figure 1 The block diagram highlights the YAML-based verification directory structure. Source: ASICraft Technologies

Block-level engineers manage component-specific test configurations, such as those for IP1 and IP2, while integration teams focus on pipeline and regression management. Instead of monolithic Makefiles, teams can organize configurations across focused files: build.yml for compilation settings, sim.yml for simulation parameters, and various test-specific YAML files grouped by functionality.

Advanced YAML features like anchors and aliases eliminate configuration duplication using the DRY (Don’t Repeat Yourself) principle.
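To make this concrete, the short, hypothetical sketch below shows YAML anchors (&), aliases (*), and the merge key (<<) factoring shared test settings out of individual entries; PyYAML’s safe_load resolves them automatically. The names and values are illustrative.

    # Sketch of YAML anchors/aliases applying the DRY principle to test entries.
    import yaml

    TESTS_YML = """
    defaults: &default_test        # anchor: shared settings defined once
      timeout: 3600
      plusargs: [+UVM_VERBOSITY=UVM_MEDIUM]

    tests:
      - <<: *default_test          # alias + merge key: inherit the defaults
        name: ip1_smoke_test
        seed: 1
      - <<: *default_test
        name: ip1_random_test
        seed: random
        timeout: 7200              # per-test override of a shared value
    """

    for test in yaml.safe_load(TESTS_YML)["tests"]:
        print(test["name"], test["timeout"], test["plusargs"])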

Tool independence emerges naturally since YAML contains only configuration data, not tool-specific commands. The same YAML files can drive VCS, Questa, or XSIM simulations through appropriate Python parsing scripts, eliminating the need for multiple Makefiles per tool.

Of course, YAML alone doesn’t execute simulations; it needs a bridge to EDA tools. This is achieved by pairing YAML with lightweight Python scripts that parse configurations and generate appropriate tool commands.

Implementation of YAML-based flow

The transition from YAML configuration to actual EDA tool execution follows a systematic four-stage process, as illustrated in Figure 2. This implementation addresses the traditional verification challenge where engineers spend excessive time writing complex Makefiles and managing tool commands instead of focusing on verification quality.

Figure 2 The four-stage flow bridges the YAML configuration and EDA tool execution. Source: ASICraft Technologies

YAML files serve as comprehensive configuration containers supporting diverse verification needs.

  • Project metadata: Project name, descriptions, and version control
  • Tool configuration: EDA tool selection, licenses, and version specifications
  • Compilation settings: Source files, include directories, definitions, timescale, and tool-specific flags
  • Simulation parameters: Tool flags, snapshot paths, and log directory structures
  • Test specifications: Test names, seeds, plusargs, and coverage options
  • Regression management: Test lists, reporting formats, and parallel execution settings

Figure 3 A view of the Python YAML parsing workflow phases. Source: ASICraft Technologies

The Python implementation demonstrates the complete flow pipeline. Starting with a simple YAML configuration:

The Python script below loads and processes this configuration:

When executed, the Python script produces clear output, showing the command translation, as illustrated below:
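In lieu of the original listings, here is a compact, hypothetical sketch of all four phases in one place: a YAML configuration embedded as a string, loaded with PyYAML, translated into a simulator command line, and displayed (or optionally executed). The keys, file names, and tool flags are representative assumptions, not the authors’ exact implementation.

    # Sketch of the load -> extract -> build -> display/execute pipeline.
    import subprocess
    import yaml

    CONFIG_YML = """
    tool: vcs                      # change to xcelium to retarget the same config
    files: [rtl/ip1.sv, tb/ip1_tb.sv]
    compile_flags: [-sverilog, -full64]
    test: {name: ip1_smoke_test, seed: 1}
    """

    # Phase 1: load/parse the YAML into native Python dictionaries and lists
    cfg = yaml.safe_load(CONFIG_YML)

    # Phase 2: extract the configuration values
    tool  = cfg["tool"]
    files = cfg["files"]
    flags = cfg["compile_flags"]
    test  = cfg["test"]

    # Phase 3: build a tool-specific command line from the tool-agnostic data
    if tool == "vcs":
        compile_cmd = ["vcs", *flags, *files, "-o", "simv"]
        run_cmd = ["./simv", f"+UVM_TESTNAME={test['name']}", f"+ntb_random_seed={test['seed']}"]
    elif tool == "xcelium":
        compile_cmd = ["xrun", "-elaborate", *files]
        run_cmd = ["xrun", "-R", f"+UVM_TESTNAME={test['name']}"]
    else:
        raise ValueError(f"unsupported tool: {tool}")

    # Phase 4: display the commands for review, or execute them directly
    print("COMPILE:", " ".join(compile_cmd))
    print("RUN:    ", " ".join(run_cmd))
    # subprocess.run(compile_cmd, check=True)  # uncomment to launch the actual tool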

The complete processing workflow operates in four systematic phases, as detailed in Figure 3.

  1. Load/parse: The PyYAML library converts YAML file content into native Python dictionaries and lists, making configuration data accessible through standard Python operations.
  2. Extract: The script accesses configuration values using dictionary keys, retrieving tool names, file lists, compilation flags, and simulation parameters from the structured data.
  3. Build commands: The parser intelligently constructs tool-specific shell commands by combining extracted values with appropriate syntax for the target simulator (VCS or Xcelium).
  4. Display/execute: Generated commands are shown for verification or directly executed through subprocess calls, launching the actual EDA tool operations.

This implementation creates true tool-agnostic operation. The same YAML configuration generates VCS, Questa, or XSIM commands by simply updating the tool specification. The Python translation layer handles all syntax differences, making flows portable across EDA environments without configuration changes.

The complete pipeline—from human-readable YAML to executable simulation commands—demonstrates how modern verification flows can prioritize engineering productivity over infrastructure complexity, enabling teams to focus on test quality rather than tool mechanics.

Comparison: Makefile vs. YAML

Both approaches have clear strengths and weaknesses that teams should evaluate based on their specific needs and constraints. Table 1 provides a systematic comparison across key evaluation criteria.

Table 1 See the flow comparison between Makefile and YAML. Source: ASICraft Technologies

Where Makefiles work better

  • Simple projects with stable, unchanging requirements
  • Small teams already familiar with Make syntax
  • Legacy environments where changing infrastructure is risky
  • Direct execution needs required for quick debugging without intermediate layers
  • Incremental builds where dependency tracking is crucial

Where YAML excels

  • Growing complexity with multiple test configurations
  • Multi-tool environments supporting different simulators
  • Team collaboration where readability matters
  • Frequent modifications to test parameters and configurations
  • Long-term maintenance across multiple projects

The reality is that most teams start with Makefiles for simplicity but eventually hit scalability walls. YAML approaches require more extensive initial setup but pay dividends as projects grow. The decision often comes down to whether you’re optimizing for immediate simplicity or long-term scalability.

For established teams managing complex verification environments, YAML-based flows typically provide better return on investment (ROI). However, teams should consider practical factors like migration effort and existing tool integration before making the transition.

Choosing between Makefile and YAML

The challenges with traditional Makefile flows are clear: cryptic syntax that’s hard to read and modify, tool-specific configurations that don’t port between projects, and maintenance overhead that grows with complexity. As verification environments become more sophisticated, these limitations consume valuable engineering time that should focus on actual test development and coverage goals.

The YAML-based flows address these fundamental issues through human-readable configurations, tool-independent designs, and modular structures that scale naturally. Teams can simply describe verification intent—run 100 iterations with coverage—while the flow engine handles all tool complexity automatically. The same approach works from block-level testing to full-chip regression suites.

Key benefits realized with YAML

  • Faster onboarding: New team members understand YAML configurations immediately.
  • Reduced maintenance: Configuration changes require simple text edits, not scripting.
  • Better collaboration: Clear syntax eliminates the “Makefile expert” bottleneck.
  • Tool flexibility: Switch between VCS, Questa, or XSIM without rewriting flows.
  • Project portability: YAML configurations move cleanly between different projects.

The choice between Makefile and YAML approaches ultimately depends on project complexity and team goals. Simple, stable projects may continue benefiting from Makefile simplicity. However, teams managing growing test suites, multiple tools, or frequent configuration changes will find YAML-based flows providing better long-term returns on their infrastructure investment.

Meet Sangani is an ASIC verification engineer at ASICraft Technologies.

Hitesh Manani is a senior ASIC verification engineer at ASICraft Technologies.

Shailesh Kavar is an ASIC verification technical manager at ASICraft Technologies.

Related Content

The post Makefile vs. YAML: Modernizing verification simulation flows appeared first on EDN.

Computer-on-module architectures drive sustainability

Fri, 10/31/2025 - 23:46

Sustainability has moved from corporate marketing to a board‑level mandate. For technology companies, this shift is more than meeting environmental, social, and governance frameworks; it reflects the need to align innovation with environmental and social responsibility among all key stakeholders.

Regulators are tightening reporting requirements while investors respond favorably to sustainable strategies. Customers also want tangible progress toward these goals. The debate is no longer about whether sustainability belongs in technology roadmaps but how it should be implemented.

The hidden burden of embedded and edge systems

Electronic systems power a multitude of devices in our daily lives. From industrial control systems and vital medical technology to household appliances, these systems usually run around the clock for years on end. Consequently, operating them requires a lot of energy.

Usually, electronic systems are part of a larger ecosystem and are difficult to replace in the event of failure. When this happens, complete systems are often discarded, resulting in a surplus of electronic waste.

Rapid advances in technology make this issue more pronounced. Processor architectures, network interfaces, and security protocols become obsolete in shorter cycles than they did just a few years ago. As a result, organizations often retire complete systems after a brief service life, even though the hardware still meets its original requirements. The continual need to update to newer standards drives up costs and can undermine sustainability goals.

Embedded and edge systems are foundational technologies driving critical infrastructure in industrial automation, healthcare, and energy applications. As such, the same issues with short product lifecycles and limited upgradeability put them in the same unfortunate bucket of electronic waste and resource consumption.

Bridging the gap between performance demands and sustainability targets requires rethinking system architectures. This is where off-the-shelf computer-on-module (COM) designs come in, offering a path to extended lifecycles and reduced waste while simultaneously future-proofing technology investments.

How COMs extend product lifecycles

Open embedded computing standards such as COM Express, COM-HPC, and Smart Mobility Architecture (SMARC) separate computing components—including processors, memory, network interfaces, and graphics—from the rest of the system. By separating the parts from the whole, they allow updates by swapping modules instead of by requiring a complete system redesign.

This approach reduces electronic waste, conserves resources, and lowers long‑term costs, especially in industries where certifications and mechanical integration make complete redesigns prohibitively expensive. These sustainability benefits go beyond waste reduction: A modular system is easier to maintain, repair, and upgrade, meaning fewer devices end up prematurely as electronic waste.

Recommended: Why system consolidation for IT/OT convergence matters

Open standards that enable longevity

To simplify the development and manufacturing of COMs and to ensure interchangeability across manufacturers, consortia such as the PCI Industrial Computer Manufacturing Group (PICMG) promote and ratify open standards.

One of the central standards in the embedded sector is COM Express. This standard defines various module sizes and pinout types, such as Type 6 or Type 10, to address different application areas; it also offers a seamless transition from legacy interfaces to modern differential interfaces such as DisplayPort, PCI Express, USB 3.0, and SATA. COM Express, therefore, serves a wide range of use cases, from low-power handheld medical equipment to server-grade industrial automation infrastructure.

Expanding on these efforts, COM-HPC is the latest PICMG standard. Addressing high-performance embedded edge and server applications, COM-HPC arose from the need to meet increasing performance and bandwidth requirements that previous standards couldn’t achieve. COM-HPC COMs are available with three pinout types and six sizes for simplified application development. Target use cases range from powerful small-form-factor devices to graphics-oriented multi-purpose designs and robust multi-core edge servers.

COM-HPC, including congatec’s credit-card-sized COM-HPC Mini, provides high performance and bandwidth for all AI-powered edge computing and embedded server applications. (Source: congatec)

Alongside COM Express and COM-HPC, the Standardization Group for Embedded Technologies developed the SMARC standard to meet the demands of power-saving, energy-efficient designs requiring a small footprint. Similar in size to a credit card, SMARC modules are ideal for mobile and portable embedded devices, as well as for any industrial application that requires a combination of small footprint, low power consumption, and established multimedia interfaces.

As credit-card-sized COMs, SMARC modules are designed for size-, weight-, power-, and cost-optimized AI applications at the rugged edge. (Source: congatec)

As a company with close involvement in developing COM Express, COM-HPC, and SMARC, congatec is invested in the long-term success of more sustainable architectures. Offering designs for common carrier boards that can be used for different standards and/or modules, congatec’s approach allows product designers to use a single carrier board across many applications, as they simply swap the module when upgrading performance, removing the need for complex redesigns.

Virtualization as a path to greener systems

On top of modular design, extending hardware lifecycles requires intelligent software management. Hypervisors, software tools that create and manage virtual machines, add an important software layer to the sustainability benefits of COM architectures.

Virtualization allows multiple workloads to coexist securely on a single module, meaning that separate boards aren’t required to run essential tasks such as safety, real-time control, and analytics. This consolidation simultaneously lowers energy consumption while decreasing the demand for the raw materials, manufacturing, and logistics associated with more complex hardware.

Hypervisors such as congatec aReady.VT are real-time virtualization software tools that consolidate functionality that previously required multiple dedicated systems in a single hardware platform. (Source: congatec)

Enhancing sustainability through COM-based designs

The rapid adoption of technologies such as edge AI, real‑time analytics, and advanced connectivity has inspired industries to strive for scalable platforms that also meet sustainability goals. COM architectures are a great example, demonstrating that high performance and environmental responsibility are compatible. They show technology and business leaders that designing sustainability into product architectures and technology roadmaps, rather than treating it as an afterthought, makes good practical and financial sense.

With COM-based modules already providing a flexible and field-proven foundation, the embedded sector is off to a good start in shrinking environmental impact while preserving long-term innovation capability.

The post Computer-on-module architectures drive sustainability appeared first on EDN.

Solar-powered cars: is it “déjà vu” all over again?

Fri, 10/31/2025 - 14:56

I recently came across a September 18 article by the “future technology” editor at The Wall Street Journal, “Solar-Powered Cars and Trucks Are Almost Here” (sorry, behind paywall, but your local library may have free access). The author was positively gushing about companies such as Aptera Motors (California), which will “soon” be selling all-solar-powered cars. On a full daylight charge, they can do a few tens of miles, then it’s time to park in the Sun for that totally guilt-free “fill up.”

Figure 1 The Aptera solar-powered three-wheel “car” can go between 15 and 40 miles on a full all-solar charge. Source: Aptera Motors

The article focused on the benefits and innovations, such as how Aptera claims to have developed solar panels that withstand road hazards, including rocks kicked up at high speed, and similar advances.

The solar exposure-versus-distance numbers are very modest, to be polite. While people living in a sunny environment could add up to 40 miles (64 km) of range a day in summer months, from panels alone, that drops to around 15 miles (24 km) a day in northern climates in winter. Aptera says its front-wheel-drive version goes from 0 to 60 mph (96 km/hour) in 6 seconds, and has a top speed of 101 mph (163 km/hr).

The article also mentions that Aptera is planning to sell its ruggedized panels to Telo Trucks, a San Carlos (Calif) maker of a 500-horsepower mini-electric truck estimated to ship next year, which uses solar panels to extend its range by 15 to 30 supplemental miles per day.

Then I closed my eyes and thought, “Wait, haven’t I heard this story before?” Sure enough, I looked through my notes and saw that I had commented on Aptera’s efforts and those of others back in a 2021 blog, “Are solar-powered cars the ultimate electric vehicles?” Perhaps it’s no surprise, but the timeline then was also “coming soon.”

The laws of physics conspire to make this a very tough undertaking, one that requires advances across multiple disciplines. There are the materials for the vehicle itself, batteries, rugged solar panels, battery-management electronics—it’s a long list. These are closely tied to key ratios, beginning with power-to-weight and energy-to-weight.

Did I mention it’s a three-wheel vehicle (with all the stability issues that brings), seats two people, and is technically classified as a motorcycle despite its fully enclosed cabin? Or that it has to meet vehicle safety mandates and regulations? Or that drivers will likely need power-draining air conditioning unless they drive open-air, especially as the vehicle needs to be parked in the sun by definition?

I don’t intend to disparage the technological work, innovation, and hard work (and money) they have put into the project. Nonetheless, no matter how you look at it, it’s a lot of effort and retail price (estimated to be around $40,000) for a modest 15 to 40 miles of range. That’s a lot of dollar pain for very modest environmental gain, if any.

Is the all-solar vehicle analogous to the flying car?

Given today’s technology and that of the foreseeable future, I think the path of a truly viable all-solar car (at any price) is similar to that other recurrent dream: the flying car. Many social observers say that the flying car (a “hybrid” vehicle in a different sense of the word, of course) was brought into popular culture in 1962 by the TV show The Jetsons, but there had been articles in magazines such as Popular Science even before that date.

Figure 2 The flying car that is often discussed was likely inspired by the 1962 animated series “The Jetsons.” Source: Thejetsons.fandom.com

Roughly every ten years since then, the dream resurfaces and there’s a wave of articles in the general media about all the new flying cars under development and road/air test, and how actual showroom models are “just around the corner.” However, it seems like we are always approaching but never quite making the turn around that corner; Terrafugia’s massive publicity wave, followed by bankruptcy, is just one example.

The problem for flying cars, however attractive the concept may be, is that the priority needs and constraints for a ground vehicle, such as a car, are not aligned with those of an aircraft; in fact, they often contradict each other.

It’s difficult enough in any vehicle-engineering design to find a suitable balance among tradeoffs and constraints; after all, that’s what engineering is about. For the flying car, however, it is not so much about finding the balance point as it is about reconciling dramatically opposing issues. In addition, both classes of vehicles are subject to many regulatory mandates related to safety, and those add significant complexity.

Sometimes, it’s nearly impossible to “square the circle” and come up with a viable and acceptable solution to opposing requirements. Literally, “to square the circle” refers to the geometry challenge of constructing a square with the same area as a given circle but using only a compass and straightedge, a problem posed by the ancient Greeks and which was proven impossible in 1882. Metaphorically, the phrase means to attempt or solve something that seems impossible, such as combining two fundamentally different or incompatible things.

What’s the future for these all-solar “cars”? Unlike talking heads, pundits, and journalists, I’ll admit that I have no idea. They may never happen, they may become an expensive “toy” for some, or they may capture a small but measurable market share. Once prototypes are out on the street getting some serious road mileage, further innovations and updates may make them more attractive and perhaps less costly—again, I don’t know (nor does anyone).

Given the uncertainties associated with solar-powered and flying cars, why do they get so much attention? That’s an easy question to answer: they are fun and fairly easy to write about, and the coverage gets noticed. After all, they are more exciting to present and more likely to attract readers than silicon-carbide MOSFETs.

What’s your sense of the reality of solar-powered cars? Are they a dream with too many real-world limitations? Will they be a meaningful contribution to environmental issues, or an expensive virtue-signaling project—assuming they make it out of the garage and become highway-rated, street-legal vehicles?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References

The post Solar-powered cars: is it “déjà vu” all over again? appeared first on EDN.

The next RISC-V processor frontier: AI

Fri, 10/31/2025 - 12:56

The RISC-V Summit North America, held on 22-23 October 2025 in Santa Clara, California, showcased the latest CPU cores featuring new vector processors, high-speed interfaces, and peripheral subsystems. These CPU cores were accompanied by reference boards, software development kits (SDKs), and toolchains.

The show also provided a sneak peek at RISC-V’s design ecosystem, which is maturing fast with the RVA23 application profile and the RISC-V Software Ecosystem (RISE), a Linux Foundation project. The emerging ecosystem encompasses compilers, system libraries, language runtimes, simulators, emulators, system firmware, and more.

“The performance gap between high-end Arm and RISC-V CPU cores is narrowing and a near parity is projected by the end of 2026,” said Richard Wawrzyniak, principal analyst for ASIC, SoC and IP at The SHD Group. He named Andes, MIPS, Nuclei Systems, and SiFive as market leaders in RISC-V IP. Wawrzyniak also mentioned new entrants such as Akeana, Tenstorrent, and Ventana.

Andes, boasting 20 years of expertise in the semiconductor IP business, was a prominent presence in the corridors of the RISC-V Summit in Santa Clara. It’s a founding member of RISC-V International and a pure-play IP vendor. At the RISC-V Summit, Andes displayed its processor lineup, including AX45, AX46, AX66, and Cuzco.

Figure 1 The processor lineup was showcased at the RISC-V Summit in Santa Clara. Source: Andes

Andes claims that these RISC-V processors, featuring powerful compute and efficient control, provide the architectural diversity required in artificial intelligence (AI) applications. The AX45 and AX46 processors have been taped out and are shipping in volume. Andes also provides in-chip firmware, tester software, on-board software, and on-cloud software as part of its hardware IP monitoring offerings.

Though RISC-V is enjoying a robust deployment in automotive, Internet of Things (IoT), and networking, AI was all the rage on the RISC-V Summit floor. “If RISC-V has a tailwind, it’s AI,” Wawrzyniak said.

RISC-V world’s AI moment

Andes claims it’s driving RISC-V into the AI world with features such as advanced vector processing. And that its RISC-V processors are powering devices from the battery-sipping edge to high-performance data centers. Andes also claims that 38% of its revenue comes from AI designs.

Companies like Andes can also bring differentiation and efficiency to AI processor designs through automated custom extensions. “We are getting there, and the deployment speed is impressive,” said Dr. Charlie Su, president and CTO of Andes Technology.

Figure 2 Meta deployed two generations of AI accelerators for training and inference using RISC-V vector/scalar cores. Source: Andes

“RISC-V is getting better for AI applications in data centers,” said Ty Garibay, president of Condor Computing. “RVA23 has a massive investment in features for data center-class AI designs.” Condor Computing, a wholly owned subsidiary of Andes, founded in 2023, develops high-performance RISC-V IPs and is based in Austin, Texas.

Wawrzyniak of SHD Group acknowledges that AI applications are driving the adoption of RISC-V-enabled system-on-chips (SoCs). “The heterogeneous nature of SoCs has created opportunities for multiple CPU architectures,” he said. “These SoCs can support both RISC-V and other ISAs, allowing applications to pick the best core for each function.”

Moreover, the diverse needs for AI acceleration are fueling the demand for RISC-V. “RISC-V CPU IP vendors can more easily introduce new and more powerful CPU cores, which extends the reach of RISC-V into AI applications that require greater compute power,” Wawrzyniak said.

During his keynote, Wawrzyniak said that initial RISC-V deployments were driven by embedded applications such as networking, smart sensors, storage, and wearables. “RISC-V is now transitioning to higher-end applications like ADAS and data centers as AI expands to those applications.”

RISC-V processor duo

At the RISC-V Summit, Andes provided more details about its new application processors. It showcased AX66, a mid-range application processor, and Cuzco, a high-end application processor; both are RVA23-compliant. AX66—incorporating up to 8 cores—features dual vector pipes with VLEN=128 and a 4-wide front-end decode. It has a shared L3 cache of up to 32 MB.

Figure 3 AX66 is a 64-bit multicore CPU IP for developing a high-performance quad-decode 13-stage superscalar out-of-order processor. Source: Andes

On the higher end, Cuzco features time-based scheduling with a time resource matrix to determine instruction issue cycles after decoding, thereby reducing logic complexity and dynamic power for wide machines. Cuzco’s decode is either 6-wide or 8-wide, and it has 8 execution pipelines (2 per slice).

Cuzco incorporates up to 8 cores and offers a shared L3 cache of up to 256 MB. The Cuzco RISC-V processor has been implemented at the 5-nm node with 8 execution pipelines and 7 million gates. It features a 2-MB L2 cache configuration and is targeted at a 2.5-GHz clock speed.

Figure 4 The Cuzco design represents the first in a new class of RISC-V CPUs aimed at data center-class performance while maintaining power efficiency and area benefits. Source: Andes

For the development of these RISC-V processors, the AndeSight integrated development environment (IDE) helps design engineers generate files for LLVM to recognize new instructions. Then there is AndesAIRE software, which facilitates graph-level optimization for pruning and quantization as well as back-end-aware optimization for fusion and allocation.

For OS support, the processors comply with RVA22 and RVA23 profiles and SoC hardware and software platforms. Andes also provides additional support to ensure that the Linux kernel is upstream-compatible.

Cuzco, unveiled at Hot Chips 2025 earlier this year, features a time-based out-of-order microarchitecture engineered to deliver high performance and efficiency across compute-intensive applications in AI, data center, networking, and automotive markets. Andes provided a preview of this out-of-order CPU at the RISC-V Summit.

Condor Computing developed the Cuzco RISC-V core, which is fully integrated into the Andes toolchain and ecosystem. Condor recently completed full hardware emulation of its new CPU IP while successfully booting Linux and other operating systems.

“Condor’s microarchitecture combines advanced out-of-order execution with novel hardware techniques to dramatically boost performance-per-watt and silicon efficiency,” Andes CTO Su said. “It’s ideally suited for demanding CPU workloads in AI, automotive compute, applications processing, and beyond.”

The first customer availability of the Cuzco RISC-V processor is expected in the fourth quarter of 2025.

The RISC-V adoption

According to Wawrzyniak, chip designers are now looking at both Arm and RISC-V processor architectures. “The RISC-V ISA and its rising ecosystem have interjected competition once again into the SoC design landscape.”

Furthermore, the custom RISC-V ISA extensions empower innovation and tailored performance. Not surprisingly, therefore, the adoption of RISC-V by large technology companies such as Broadcom, Google, Meta, MediaTek, Qualcomm, Renesas, and Samsung continues to validate the utility of the RISC-V ISA in the semiconductor industry.

RISC-V, once an academic exercise, has come a long way since its launch in May 2010 at the University of California, Berkeley. However, as Krste Asanovic, chief architect at SiFive, said during his keynote, RISC-V will continue to evolve across different verticals and will be around for a long time.

Related Content

The post The next RISC-V processor frontier: AI appeared first on EDN.

1,200-V diodes offer low loss, high efficiency

Thu, 10/30/2025 - 22:24

Taiwan Semiconductor launches a new series of automotive-grade, low-loss diodes in three popular industry-standard packages. They provide an automotive-level performance upgrade for existing designs and the low power dissipation required for higher-power rectification applications.

Taiwan Semi’s 1,200-V PLA/PLD series diodes in a ThinDPAK package. (Source: Taiwan Semiconductor)

The 1,200-V PLA/PLD series, with ratings of 15 A, 30 A or 60 A, all feature low forward voltage (1.3 Vf max), low reverse leakage (<10 µA at 25°C), and high junction temperature (175°C Tj max). They are available in three packages—ThinDPAK, D2PAK-D, and TO-247BD—for design flexibility.

These 1,200-V diodes provide easy drop-in replacements using an industry-standard pinout to improve efficiency in existing designs, according to the company. They can be used in a variety of applications such as three-phase AC/DC converters, server and computing power (including AI power) systems, EV charging stations, on-board battery chargers, Vienna rectifiers, totem pole and bridgeless topologies, inverters and UPS systems, and general-purpose rectification in high-power systems.

The new PLA/PLD series is offered in six models manufactured to automotive-quality standards. Two of the models, the PLAD15QH (ThinDPAK) and PLDS30QH (D2PAK-D), are fully AEC-Q qualified for automotive applications. The other four models include the PLAD15Q (ThinDPAK), PLDS30Q (D2PAK-D), PLAH30Q (TO-247BD), and PLAH60Q (TO-247BD).

The PLA/PLD series is sampling now and is in stock at DigiKey and Mouser. Production lead time is 8-14 weeks ARO. Design resources include datasheets, SPICE models, Foster and Cauer thermal models, and CAD files (symbol, footprint, and 3D model).

The post 1,200-V diodes offer low loss, high efficiency appeared first on EDN.

Wirewound resistors operate in harsh environments

Thu, 10/30/2025 - 21:56

Bourns Inc. launches its series of Riedon precision wirewound resistors. These passive devices meet application requirements for high accuracy and long-term stability. They offer a wide resistance range of up to 6 megohms (MΩ) with ultra-low resistance tolerances (as low as ±0.005 percent).

Bourns’ Riedon precision wirewound resistors. (Source: Bourns Inc.)

This rugged, high-precision resistor series is offered in multiple axial, radial, and square package sizes and in a variety of lead configurations for greater design flexibility. They feature non-inductive multi-Pi cores, protective encapsulation technology, and a low standard temperature coefficient of ±2 ppm/°C.

These features help minimize inductance and noise while maintaining stability and efficiency even under high heat and harsh electrical conditions, Bourns said.

The series is 100 percent acceptance tested and RoHS-compliant. Applications include measurement equipment, bridge circuits, load cells and strain gauges, imaging systems, current sensing equipment, and high-frequency circuit designs.

The Riedon wirewound resistors are available now. Custom solutions are also available to meet specific customer requirements.

Last year, Bourns expanded its Riedon power resistor family with the launch of 11 product series, including wirewound resistors and current-sense resistors. They feature high power ratings, low temperature coefficients (TCRs), a wide resistance range, and an extended temperature range.

These resistors are available in numerous packaging options, including wirewound through-hole and surface mount; surface-mount metal film; and bare/coated metal element resistors. They target a variety of applications, including battery energy storage systems, industrial power supplies, motor drives, smart meters, telecom 5G remote radio and baseband units, and current sensing.

The post Wirewound resistors operate in harsh environments appeared first on EDN.

Sony debuts image sensor with MIPI A-PHY link

Thu, 10/30/2025 - 18:09

According to Sony, the IMX828 CMOS image sensor is the industry’s first to integrate a MIPI A-PHY interface for connecting automotive cameras, sensors, and displays with their ECUs. The built-in serializer-deserializer physical layer removes the need for external serializer chips, enabling more compact, lower-power camera systems.

The IMX828 offers 8-Mpixel resolution (effective pixels) and a 150-dB high dynamic range. Its pixel structure achieves a high saturation level of 47 kcd/m², allowing accurate recognition of high-luminance objects such as red traffic signals and LED taillights.

A low-power parking-surveillance mode detects motion to help reduce theft and vandalism risk. Images are captured at low resolution and frame rate to keep power consumption under 100 mW. When motion is detected, the sensor alerts the ECU and switches to normal imaging mode.

Sony plans to obtain AEC-Q100 Grade 2 qualification before mass production begins. The IMX828 meets ISO 26262 requirements, with hardware metrics conforming to ASIL-B and the development process to ASIL-D. Sample shipments are expected to start in November 2025. A datasheet was not available at the time of this announcement.

Sony Semiconductor Solutions 

The post Sony debuts image sensor with MIPI A-PHY link appeared first on EDN.

EIS-powered chipset improves EV battery monitoring

Thu, 10/30/2025 - 18:09

NXP’s battery management chipset integrates electrochemical impedance spectroscopy (EIS) to enable lab-grade vehicle diagnostics. The system comprises three devices: the BMA7418 18-channel Li-Ion cell controller, BMA6402 communication gateway, and BMA8420 battery junction box monitor. Together, they deliver hardware-based synchronization of all cell measurements within a high-voltage battery pack with nanosecond precision.

By embedding EIS directly in hardware, the chipset supports real-time, high-frequency monitoring of battery health. Accurate impedance measurements, combined with in-chip discrete Fourier transformation, help OEMs manage faster and safer charging, detect early signs of degradation, and simplify overall system design.

EIS sends controlled excitation signals through the battery and analyzes frequency responses to reveal cell aging, temperature shifts, or micro shorts. NXP’s system uses an integrated excitation source with a pre-charge circuit, while DC link capacitors provide secondary energy storage for greater efficiency.
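As a rough illustration of the EIS principle only (not NXP’s implementation), the impedance at the excitation frequency can be extracted with a single-bin discrete Fourier transform of the synchronously sampled cell voltage and current; the sketch below assumes the capture window spans an integer number of excitation cycles.

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Illustration of the EIS principle, not NXP's implementation: extract the
   complex voltage and current at the excitation frequency f with a
   single-bin DFT, then take Z(f) = V(f)/I(f). Assumes v[] and cur[] are
   sampled synchronously at fs over an integer number of cycles of f. */
static double complex eis_impedance(const double *v, const double *cur,
                                    int n, double f, double fs)
{
    const double pi = 3.14159265358979323846;
    double complex vf = 0.0, cf = 0.0;

    for (int k = 0; k < n; k++) {
        double complex w = cexp(-I * 2.0 * pi * f * k / fs);
        vf += v[k] * w;     /* voltage bin at f */
        cf += cur[k] * w;   /* current bin at f */
    }
    return vf / cf;
}

int main(void)
{
    enum { N = 1000 };
    const double pi = 3.14159265358979323846;
    const double fs = 1000.0, f = 10.0;   /* 10-Hz excitation, 10 full cycles */
    double v[N], cur[N];

    /* Synthetic cell: |Z| = 5 mohm, voltage leading current by 30 degrees. */
    for (int k = 0; k < N; k++) {
        cur[k] = 1.0 * sin(2.0 * pi * f * k / fs);
        v[k]   = 0.005 * sin(2.0 * pi * f * k / fs + pi / 6.0);
    }

    double complex z = eis_impedance(v, cur, N, f, fs);
    printf("|Z| = %.4f ohm, phase = %.1f deg\n", cabs(z), carg(z) * 180.0 / pi);
    return 0;
}
```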

The complete BMS solution is expected to be available by the beginning of 2026, with enablement software running on NXP’s S32K358 automotive microcontroller. Read more about the chipset here.

NXP Semiconductors 

The post EIS-powered chipset improves EV battery monitoring appeared first on EDN.

Compact oscillator fits tight AI interconnects

Thu, 10/30/2025 - 18:09

Housed in a 6-pin, 2.0×1.6-mm LGA package, Mixed-Signal Devices’ MS1180 crystal oscillator conserves space in AI data center infrastructure. Factory-programmed to provide any frequency from 10 MHz to 1000 MHz with under 1-ppb resolution, it is well-suited for 1.6T and 3.2T optical modules, active optical cables, active electrical cables, and other size-constrained interconnect devices.

The MS1180 is optimized for key networking frequencies—156.25 MHz, 312.5 MHz, 491.52 MHz, and 625 MHz—and maintains low RMS phase jitter of 28.3 fs to 43.1 fs when integrated from 12 kHz to 20 MHz. It offers ±20-ppm frequency stability from –40 °C to +105 °C. Power-supply-induced phase noise is –114 dBc for 50-mV supply ripples at 312.5 MHz, with a supply-jitter sensitivity of 0.1 fs/mV (measured with 50-mVpp ripple from 50 kHz to 1 MHz on VDD pin).

Supporting multiple output formats (CML, LVDS, EXT LVDS, LVPECL, HCSL), the device runs from a single 1.8-V supply with an internal regulator.

The MS1180 crystal oscillator is sampling now to strategic partners and Tier 1 customers. Production volumes are expected to ramp in Q1 2026.

MS1180 product page   

Mixed-Signal Devices  

The post Compact oscillator fits tight AI interconnects appeared first on EDN.

Retimer boosts USB 3.2 and DP in auto cockpits

Thu, 10/30/2025 - 18:09

A bit-level retimer from Diodes, the PI2DPT1021Q enables high-speed USB and DisplayPort (DP) connectivity in automotive smart cockpits and infotainment systems. The 10-Gbps bidirectional device supports USB 3.2 and DP 1.4 standards for various automotive USB Type-C applications.

The retimer has 4:4 channels, configurable via I²C for different modes: four-lane DP, two-lane DP with one-lane USB 3.2 Gen 2, or one- or two-lane USB 3.2 Gen 2. It is AEC-Q100 Grade 2 qualified and operates over a temperature range of -40°C to +105°C.

To maintain signal integrity, the PI2DPT1021Q offers receiver adaptive equalization that compensates for channel losses up to -23 dB at 5 GHz. It also provides low latency (<1 ns) from signal input to output, ensuring good interoperability between USB and DP devices. Additional features include jitter cleaning, an adaptive continuous-time linear equalizer (CTLE), and a 3-tap transmitter with selectable adjustment.

The PI2DPT1021Q retimer costs $1.65 each in lots of 5000 units.

PI2DPT1021Q product page 

Diodes

The post Retimer boosts USB 3.2 and DP in auto cockpits appeared first on EDN.

GaN flyback converter supplies up to 75 W

Thu, 10/30/2025 - 18:09

ST’s VIPerGaN50W houses a 700-V GaN power transistor, flyback controller, and gate driver in a compact 5×6-mm QFN package. The quasi-resonant offline converter delivers up to 75 W from high-line input (185–265 VAC) or 50 W across the full universal input range (85–265 VAC). It uses a proprietary technique that ensures chargers and power supplies operate silently at all load levels.

Along with zero voltage switching (ZVS), the VIPerGaN50W includes dynamic blanking time, which minimizes switching losses by limiting the frequency. It also offers adjustable valley synchronization delay to maximize efficiency at any input line and load condition. A valley-lock feature stabilizes skipped cycles to prevent audible switching noise.

At no load, the converter’s standby power drops below 30 mW thanks to adaptive burst mode, helping meet stringent ecodesign regulations. Advanced power-management features ensure the output-power capability and switching frequency remain stable, even when the supply voltage changes.

In production now, the VIPerGaN50W is priced from $1.09 each in lots of 1000 units.

VIPerGaN50W product page

STMicroelectronics

The post GaN flyback converter supplies up to 75 W appeared first on EDN.

RISC-V Summit spurs new round of automotive support

Thu, 10/30/2025 - 17:44

The adoption of RISC-V and its open standards in automotive applications continues to accelerate, with the architecture’s flexibility and scalability particularly benefiting the industry’s shift to software-defined vehicles (SDVs). Several RISC-V IP core and development tool providers recently announced advances and partnerships to drive RISC-V adoption in automotive applications.

In July 2025, the first Automotive RISC-V Ecosystem Summit, hosted by Infineon Technologies AG, was held in Munich. Infineon believes cars will change more in the next five years than in the last 50, and that as traditional architectures reach their limits, RISC-V will be a game-changer, enabling closer collaboration between software and hardware.

RISC-V icon symbol. (Source: Adobe Stock)

However, RISC-V adoption will require an ecosystem to deliver new technologies for the automotive industry. The summit showcased RISC-V solutions and technologies ready for automotive, particularly for SDVs, bringing together RISC-V players in areas such as compute IP, software, and development solutions.

Fast-forward to October with several RISC-V players expanding the enabling ecosystem for automotive with key collaborations ahead of the October 2025 RISC-V Summit. Quintauris, for example, announced several partnerships, including with Andes Technology Corp., Everspin Technologies, Tasking, and Lauterbach GmbH, all focused on advancing RISC-V for automotive and other safety-critical applications.

The Quintauris strategic partnership with Andes, a provider of RISC-V processor cores, brings Andes’s RISC-V processor IP into Quintauris’s RISC-V-based portfolio, consisting of profiles, reference architectures, and software components. The partnership will focus on automotive, industrial, and edge computing applications. It kicks off with the integration of the 32-bit ISO 26262–certified processor in the AndesCore processor series with Quintauris’s automotive real-time reference architecture.

Quintauris is also teaming up with Everspin to bring its advanced memory solutions—magnetoresistive RAM technologies—into Quintauris’s reference architectures and real-time platforms for automotive, industrial, and edge applications. This partnership addresses the need for memory subsystems to meet the high standards for performance and functional safety in automotive applications.

In the development tools space, Quintauris announced a new partnership with Tasking to bolster RISC-V development in the automotive industry. Tasking delivers certifiable development tools for safety-critical embedded software, and Quintauris will integrate its RISC-V compiler into the upcoming Quintauris RISC-V reference platform.

Addressing embedded systems debugging, the new Quintauris and Lauterbach collaboration focuses on safety-critical industries such as automotive. Under the partnership, Lauterbach’s TRACE32 toolset for embedded systems, including its debug and trace suite, will be integrated into the Quintauris RISC-V reference platform. The TRACE32 toolset provides debugging, traceability, and system analysis tools.

Lauterbach also announced in October that its TRACE32 development tools support Tenstorrent’s system-on-chips (SoCs) and chiplets for RISC-V and AI-based workloads in the automotive, client, and server sectors. Tenstorrent’s automotive and robotics base die SoC targets automotive applications in SDVs. The SoC implements at least eight 64-bit superscalar, out-of-order TT-Ascalon RISC-V cores with vector and hypervisor ISA extensions, along with RISC-V-based AI accelerators and additional RISC-V cores for system and communication management.

The TRACE32 development tools allow simultaneous debugging of the TT-Ascalon RISC-V processors and other cores implemented on the chip, from pre-silicon development to prototyping on silicon and in-field debugging on electronic control units.

Also helping to accelerate the global adoption of RISC-V, Tenstorrent and CoreLab Technology are collaborating on an open-architecture computing platform for automotive edge and robotics applications. The Atlantis computing platform addresses demanding AI computing requirements, delivering a scalable, safety-ready CPU IP portfolio. The platform will leverage Tenstorrent’s RISC-V CPU IP and CoreLab Technology’s energy-efficient IP and SoC solutions.

The platform is designed to deliver on performance, power efficiency, low total cost of ownership, and customization; all of its RISC-V CPU cores support deep customization, enabling customers to tailor their compute resources to their applications, according to Tenstorrent.

The automotive industry demands that ecosystem players meet stringent functional safety and security standards. To meet these requirements, Codasip recently announced that two of its high-performance embedded processor cores, the Codasip L735 and Codasip L739, have received TÜV SÜD certification for functional safety.

The L735 is certified up to ASIL-B and the L739 is certified up to ASIL-D, defined by the ISO 26262 standard. Both products are also compliant with ISO/SAE 21434 for cybersecurity in automotive development. In addition, Codasip’s IP development process is certified to both ISO 26262 and ISO/SAE 21434.

The L735 and L739 cores are part of the Codasip 700 family. The L735 includes safety mechanisms such as error-correcting code on caches and tightly coupled memories, a memory protection unit, and support for RISC-V RERI to provide standardized error reporting. The L739 adds dual-core lockstep, enabling ASIL-D certification.

Capability Hardware Enhanced RISC Instructions (CHERI) variants are available for both products. CHERI security technology protects against memory safety vulnerabilities. Codasip is standardizing a CHERI extension for RISC-V in collaboration with other members of the CHERI Alliance.

The post RISC-V Summit spurs new round of automotive support appeared first on EDN.

Circuit makes square deal

Thu, 10/30/2025 - 16:04

A classic nonlinear analog function is the squaring circuit. It’s useful in power sensing, frequency multiplication, RMS computation, and many other odd jobs around the lab bench.

The version in Figure 1 is straightforward, fast, temperature-compensated, calibration-free, and if the transistors are well matched, accurate. The final output is as follows:

Vout = R3 antilog(2log(Vin/R1) – log(Vgain/R2)) = R3 antilog(log((Vin/R1)² / (Vgain/R2)))
Vout = (R1⁻² R2 R3) Vin² / Vgain

Figure 1 The squaring amplifier that is fast, temperature-compensated, calibration-free, and accurate (if the transistors are well matched).

Wow the engineering world with your unique design: Design Ideas Submission Guide

Its input can accept either voltage or current. It gains a bit of extra versatility from a separate gain factor control input, which can also accept voltage or current. Another boost in versatility comes from a similarly flexible output with both voltage and (inverted) current output mode. If the current mode is chosen, A3 and R3 can be omitted and a dual op-amp (OPA2228) used instead of the quad (OPA4228) illustrated.

The series connection of Q1 and Q2 generates a signal proportional to 2log(Vin/R1) = log((Vin/R1)²). This is applied to antilogger Q3, which subtracts log(Vgain/R2) from it to generate a current of:

-antilog(log((Vin/R1)² / (Vgain/R2)))

This is inverted and scaled by R3 and A3 to yield the final:

Vout = (R1⁻² R2 R3) Vin² / Vgain

 Note that if the three resistors are equal and Vin = Vgain, then:

Vout = (R⁻² R R) Vin² / Vin = Vin

And, the squarer circuit will have unity gain.

Which is kind of a “square deal,” although I doubt it’s what Teddy Roosevelt had in mind when he made that phrase his 1904 campaign slogan.
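For a quick numerical sanity check of the transfer function, the short sketch below evaluates Vout = (R2 R3 / R1²) Vin² / Vgain for arbitrary example values (not the component values of Figure 1) and confirms the unity-gain point at Vin = Vgain with equal resistors.

```c
#include <stdio.h>

/* Quick numerical check of Vout = (R2*R3/R1^2) * Vin^2 / Vgain.
   Component values are arbitrary examples, not those of Figure 1. */
int main(void)
{
    const double R1 = 10e3, R2 = 10e3, R3 = 10e3;  /* equal resistors */
    const double Vgain = 1.0;                      /* gain-control input, V */

    for (double vin = 0.25; vin <= 2.001; vin += 0.25) {
        double vout = (R2 * R3 / (R1 * R1)) * vin * vin / Vgain;
        printf("Vin = %4.2f V -> Vout = %6.4f V\n", vin, vout);
    }
    /* With equal resistors and Vin = Vgain = 1 V, Vout = 1 V: unity gain. */
    return 0;
}
```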

An interesting application happens when the squarer is combined with a full-wave precision rectifier (like the one in “New full-wave precision rectifier has versatile current mode output”). See Figure 2.

Figure 2 Cascading the full-wave rectifier (black) with the squarer makes a low-distortion frequency doubler (red).

 Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Circuit makes square deal appeared first on EDN.

How helpful are free AI tools for electronic design?

Wed, 10/29/2025 - 17:48

For the past couple of years, I’ve been using AI to assist in the design of my hardware and firmware projects. The experience has generally been good, even though the outcome isn’t always useful. So, I’m presenting a short summary of a few of the tasks I have attempted, along with my unscientific grade for each outcome. The grades will be averaged at the end. Note: I do not have any paid AI subscriptions—I only used free AI tools, mostly Microsoft Copilot and ChatGPT (although I have tried a few others). These are just a few of my experiences using online AI.

Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale

Converting voltage to percent charge

Grade: A

I wanted to show the charge remaining in a lithium polymer battery used to power a design. This is not straightforward, as the relationship between voltage and percent charge for a lithium polymer battery is not linear. I asked Copilot to make a table of 20 voltages from 3.2 V to 4.2 V and their respective charge percentages. Then I asked it to create a C function to do this conversion. It created this nicely, including linear interpolation.
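For reference, the general shape of what Copilot produced (a small lookup table plus linear interpolation) looks something like the sketch below; the voltage/percentage pairs are rough illustrative values for a single LiPo cell, not the actual table it generated.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustration only: a lookup table plus linear interpolation, mirroring
   the structure of the AI-generated code. The voltage/percentage pairs
   are rough example values for a single LiPo cell, not the AI's table. */
typedef struct { float volts; float percent; } vp_t;

static const vp_t table[] = {
    { 3.20f,   0.0f }, { 3.50f,  10.0f }, { 3.65f,  25.0f },
    { 3.75f,  50.0f }, { 3.90f,  75.0f }, { 4.05f,  90.0f },
    { 4.20f, 100.0f },
};

static float lipo_percent(float v)
{
    const size_t n = sizeof table / sizeof table[0];

    if (v <= table[0].volts)     return table[0].percent;
    if (v >= table[n - 1].volts) return table[n - 1].percent;

    for (size_t i = 1; i < n; i++) {
        if (v <= table[i].volts) {   /* interpolate within segment i-1 .. i */
            float frac = (v - table[i - 1].volts) /
                         (table[i].volts - table[i - 1].volts);
            return table[i - 1].percent +
                   frac * (table[i].percent - table[i - 1].percent);
        }
    }
    return table[n - 1].percent;     /* not reached */
}

int main(void)
{
    printf("3.70 V -> %.1f%%\n", lipo_percent(3.70f));   /* 37.5% with this table */
    printf("4.00 V -> %.1f%%\n", lipo_percent(4.00f));   /* 85.0% with this table */
    return 0;
}
```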

Finding the median without sorting

Grade: D

A while back, I wrote a Design Idea (DI) article on non-linear filters. While doing this, I queried Copilot to create a C program that can find the median of 5 numbers without sorting. (Avoiding sorting for a small number of points is useful for increasing speed.) It created a nice-looking program, with clean formatting and good comments. It also compiled fine. The problem was that the program didn’t work—it found the wrong value for the median in some cases.
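For comparison, one correct sort-free approach is counting: for each candidate, count how many of the five values are smaller and how many are less than or equal to it; the median is the candidate with at most two smaller values and at least three values less than or equal. The sketch below is shown purely for illustration and is not the code the AI produced.

```c
#include <stdio.h>

/* Median of five values without sorting (illustration only, not the AI's
   output). For each candidate, count how many values are smaller and how
   many are less than or equal; the median has at most two smaller values
   and at least three values less than or equal to it. */
static float median5(const float v[5])
{
    for (int i = 0; i < 5; i++) {
        int smaller = 0, leq = 0;
        for (int j = 0; j < 5; j++) {
            if (v[j] <  v[i]) smaller++;
            if (v[j] <= v[i]) leq++;
        }
        if (smaller <= 2 && leq >= 3)
            return v[i];
    }
    return v[0];  /* not reached for valid input */
}

int main(void)
{
    const float a[5] = { 3.1f, 9.0f, 2.2f, 7.5f, 4.8f };
    printf("median = %g\n", median5(a));   /* prints 4.8 */
    return 0;
}
```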

Initializing an ADC

Grade: C+

Another project required me to write code for the SAMD51 MCU to initialize the ADC for high-speed sampling. As I was trying to get maximum speed from the ADC, it was a somewhat complex setup, especially the clocking system. I tried creating the code in both Copilot and ChatGPT multiple times.

Some code would not compile due to things like bad register names, and some code would compile but just not work, giving no ADC readings. After some back and forth, those issues were corrected. A few of the comments in the code were misleading or just plain wrong as they applied to clock frequencies. Once the code got close to working, I took it over and reworked parts of it myself.

Graphic design

Grade: C+

I was doing some LCD graphics design for a project, and one part was a battery charge indicator. This symbol, for battery percent of charge, was to be displayed on an LCD with an ILI9321 controller. (This standard figure looks like an AA battery with a green interior representing the percent charge.)

I asked Copilot to write C code for this using the GFX graphics library. The length of the green fill worked well, but the battery figure looked nothing like a battery. It was a rectangle with two large circles on both ends. I had to rewrite portions of the code myself.

Grade: F

In the same project, I asked Copilot for a USB symbol written using the GFX graphics library, as above. This didn’t look like the trident-like, universal USB symbol. It was essentially three lines sticking out from a central point at various angles. It was unusable.

Enclosure design

Grade: D-

Next, I tried to have Copilot and ChatGPT design an enclosure that would work on a workbench, allowing the user to see the LCD and easily connect BNC cables. All I got were images of rectangular boxes. No matter how I asked for a more unique shape, it never went much beyond a rectangular enclosure. And even the rectangular box could not be delivered as a usable 3D (STEP or STL) file without using other programs.

Filter design

Grade: C-

I asked ChatGPT, “Can you design and display a circuit that takes in a signal, AC couples it to a gain stage of 5, and then filters it at 120 kHz before outputting it?” Instead of explaining the result, the image in Figure 1 will speak for itself.

Figure 1 ChatGPT’s output for a filter design that takes in a signal, AC couples it to a gain stage of 5, and then filters it at 120 kHz before outputting it.

It did include a nice explanation of how components were selected, but the schematic was mostly unreadable. Dedicated tools such as TI’s Webench filter design tool, Analog Devices’ Filter Wizard, and ST’s eDesign Suite are the right tools for filter circuit design and are actually easier to use.

Grade: Ungraded

I tried to create C code, in both Copilot and ChatGPT, for calculating coefficients for digital Sallen-Key 2-pole high-pass, low-pass, band-pass, and band-stop filters. I tried many times and could not get a good working algorithm. The code was close, but the filters did not function correctly. Eventually, I found working code after an extensive Google search. It’s possible my testing may have been part of the problem—unsure.

Grade: B

Along the way, I tried lots of smaller queries, many of which were very helpful.

A lab notebook

I’m sure some of the issues come down to my skill in creating the AI prompts. This certainly made my attempts take longer, as I had to add more detail in follow-up prompts. I actually found this conversational style more engaging than using search engines. It’s not like a Google search, where you typically can’t do follow-ups to your query—you have to re-enter your original query with a modification.

The AI systems work much more like a conversation with a colleague. You can tell it that the code it gave you did not compile, as it didn’t recognize a register name. Or you can ask it to give you faster code, or change a resistor value in a circuit, and recalculate the remaining components.

One thing I learned when writing this article is that both ChatGPT and Copilot keep a complete history of conversations we had. It’s sort of like a lab notebook, showing your path to a certain design—very helpful.

A C rating

Looking at the average grade, it comes in between a C and a C-. I’ll give it the benefit of the doubt and call it a C. The C rating matches my gut feel also. The interaction is fairly easy—it feels like interacting with a coworker. The conversation goes on, attempting to fine-tune the final answer. The interaction process is much better than doing a Google search and getting a list of things to pick from, without an easy way to refine the search.

Does it save time? That’s hard to judge, as I’m still learning how to create better prompts. Sometimes I get a useful answer right away. On more complex queries, I’ve gotten pulled down a rabbit hole and wasted time while the solution diverged from what I was looking for. There have been times when it had me endlessly trying to fine-tune the result, and I turned to Google and got an answer much faster.

You can easily be lulled into the feeling that you’re conversing with a savant, but it may be more like AI-splaining. Every answer exudes confidence, but it could be the confidence of ignorance. Remember that these answers have not been checked or tested.

Will I continue to use it? Certainly… I’ll get better at using it, and the tools will continue to improve. What I would like to see is an AI tool focused specifically on electrical engineering (hardware, firmware, and system design). It could focus its skills on finding or creating circuits and on digging deep into datasheets. It would also be nice if it could test its results through simulation or by executing the code in a series of tests. Maybe in the future.

All in all, it’s worth using, and everyone should give it a try; just check the answers closely.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content

The post How helpful are free AI tools for electronic design? appeared first on EDN.

Power Tips #146: Design functional safety power supplies with reduced complexity

Wed, 10/29/2025 - 15:54

Many industrial applications in the automotive, automation, appliance, or medical sectors require power supplies that comply with functional safety standards. If the input voltage of such a power supply is not within its specification, the system to which it is supplying power is potentially operating in an unsafe state. Monitoring the input and output voltages for faults such as undervoltage and overvoltage, along with monitoring for overtemperature, may require resetting the system and transitioning it to a safe state.

Defining the protections needed to comply with functional safety standards depends on the safety level, which the design engineer must determine in cooperation with a safety inspection agency such as Technischer Überwachungsverein (TÜV). The engineer must also perform a time-consuming risk assessment that addresses both safe and dangerous failures, as well as random and systematic failures.

Functional safety in power supplies

Safety standards such as IEC 61508 or ISO 13849A specify the maximum allowable probability of dangerous failures per hour.

The requirements for a safe power supply as specified in IEC 61508, which covers functional safety in industrial manufacturing, include overvoltage protection with safety shutoff, secondary-side voltage control with safety shutoff, and power-down with safety shutoff. These protections require significant additional external circuitry around the switched-mode power supply (SMPS).

A safe power supply must also fulfill random hardware fault requirements. Using an integrated power-good (PG) pin as the safety mechanism to monitor failures can be insufficient, because this pin is typically not independent; it shares the same internal band gap with all safety and monitoring features. A drifting band gap will cause the PG pin to fail. This is known as a common-cause failure, which does not meet functional safety requirements.

As shown in Figure 1, detecting any fault will also require additional supply-voltage supervisors as well as a switch connected in series to the input; alternatively, the switch could connect to the output. This switch disconnects the system from the source or load in case of a failure. Redundant supply-voltage supervisors monitor the input and output voltages. Typically, an industrial power supply is limited to less than a 60-VDC input, even in the event of a fault, requiring an additional circuit with transient voltage suppression and a fuse, because not all devices are specified to 60 V.

Figure 1 An industrial safe power supply example block diagram. Source: Texas Instruments

The switch at the input, which is under the control of the monitor, can remove power in case of a failure. The input and output voltage are monitored continuously. As I mentioned earlier, to comply with functional safety standards, all parts must operate within a specified operating voltage. That is not an easy task, given the requirement to detect undervoltage and overvoltage events immediately.
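For reference, the check each supervisor performs amounts to a simple window comparison. The sketch below illustrates only that logic; in the designs discussed here it is implemented in dedicated supervisor hardware, not firmware. The 19.2-V to 28.8-V input window matches the reference design described below, while the ±5% output window is a hypothetical example.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustration of the window check a supply-voltage supervisor performs.
   In the designs described here this is done by dedicated supervisor ICs,
   not firmware. The input window matches the reference design's 19.2-V to
   28.8-V range; the +/-5% output window is a hypothetical example. */
typedef struct { float uv; float ov; } window_t;

static bool within(float v, window_t w)
{
    return (v >= w.uv) && (v <= w.ov);
}

/* Returns true if the safety switch may remain closed. */
static bool supply_ok(float vin, float vout)
{
    const window_t in_win  = { 19.2f, 28.8f };  /* 24-V input, +/-20% */
    const window_t out_win = { 4.75f,  5.25f }; /* 5-V output, +/-5%  */
    return within(vin, in_win) && within(vout, out_win);
}

int main(void)
{
    printf("24.0 V in, 5.00 V out: %s\n", supply_ok(24.0f, 5.00f) ? "OK" : "FAULT");
    printf("31.0 V in, 5.00 V out: %s\n", supply_ok(31.0f, 5.00f) ? "OK" : "FAULT");
    return 0;
}
```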

Buck converter

Using a functional-safety-compliant buck converter with integrated safety features can greatly reduce the amount of external circuitry, as shown in Figure 2. An integrated redundant circuit, which replaces the external voltage supervisor, has a startup diagnostic check and can detect the failure of a FET. This implementation reduces the overall cost of designing a safe power supply.

Figure 2 Integrated functional safety features replace an external voltage supervisor, reducing circuit complexity. Source: Texas Instruments

The nFAULT pin in the converter is used for overvoltage protection and as a failure flag. Triggering the nFAULT pin disables a safety switch, which in this case is an ideal diode controller connected to the input. The Temp pin communicates the temperature to a microprocessor and forces a shutdown if the temperature is too high. The VSNS pin has feedback path failure detection, and there is another feedback divider for redundancy. During startup, the LM68645-Q1 buck converter checks the configuration on the RT, FB, and VSNS pins.

Figure 3 shows a block diagram of a universal board (configurable to meet different safety standards)—with an input voltage range of 19.2 V to 28.8 V and a maximum 60 V—for a safe power supply.

A synchronous buck converter generates a 5-V output with a maximum current of 3 A. Beside the buck converter is an ideal diode with back-to-back MOSFETs connected to the input. An ideal diode connects to the output. The nFAULT pin can control both switches. Two additional supervisors for redundant voltage monitoring on the input and output can disable both switches as well. The ideal diode controller has power-path control and overvoltage protection. The voltage supervisors also provide built-in self-test and overvoltage and undervoltage protection.

Figure 3 The TI Industrial 24 V to 5 V safe power supply reference design, where a number of redundant options on the board make it possible to comply with different functional safety standards. Source: Texas Instruments

A buck converter designed to help meet functional safety standards reduces the amount of necessary functional safety documentation, system cost, and time to market. Because all of the devices in the 24 V to 5 V safe power supply reference design are specified for ≥ 60 V, an input transient voltage suppressor or fuse is not necessary.

Upgrading a safe power supply

Although upgrading a safe power supply to a higher standard requires significant effort, it is possible to design a power supply that meets functional safety requirements while also decreasing time to market and system cost. Using a buck converter with integrated safety features helps achieve systematic and random hardware metrics and reduces the needed external circuitry.

Florian Mueller, systems applications engineer, Texas Instruments

Related Content

 

 

The post Power Tips #146: Design functional safety power supplies with reduced complexity appeared first on EDN.

Polyn delivers silicon implementation of its NASP chip

Tue, 10/28/2025 - 20:21

Polyn Technology Ltd. announces the successful manufacturing and testing of the first silicon implementation of its neuromorphic analog signal processing (NASP) technology. The milestone validates both the NASP technology and its design tools, which automatically convert trained digital neural network models into ultra-low-power analog neuromorphic cores ready for manufacturing in standard CMOS processes. The first product chip features an analog neuromorphic core implementing a voice activity detection (VAD) neural network model.

Polyn's neuromorphic analog signal processing (NASP) VAD chip. (Source: Polyn Technology Ltd.)

This platform uses trained neural networks in the analog domain to perform AI inference with much lower power consumption than conventional digital neural processors, according to the company. Application-specific NASP chips can be designed for a range of edge AI applications, including audio, vibration, wearable, robotics, industrial, and automotive sensing.

This is the first time that Polyn has generated an asynchronous, fully analog neural-network core implementation in silicon directly from a digital model. This opens up a “new design paradigm—neural computation in the analog domain, with digital-class accuracy and microwatt-level energy use,” said Aleksandr Timofeev, Polyn’s CEO and founder, in a statement.

Targeting always-on edge devices, the NASP chips with AI cores process sensor signals in their native analog form in microseconds, using microwatt-level power, which eliminates all overhead associated with digital operations, Polyn explained.

Recommended: Neuromorphic analog signal processor aids TinyML

The first neuromorphic analog processor contains a VAD core for real-time voice activity detection and offers fully asynchronous operation. Key specs of this NASP VAD chip include ultra-low-power consumption of about 34 µW during continuous operation and ultra-low latency at 50 microseconds per inference.

In addition to the VAD core, Polyn plans to develop other cores for speaker recognition and voice extraction, targeting home appliances, communications headsets, and other voice-controlled devices.

In April 2022, the company announced its first NASP test chip, implemented in 55-nm CMOS technology, demonstrating the technology’s brain-mimicking architecture. This was followed in October 2022 with the introduction of the NeuroVoice tiny AI chip, delivering on-chip voice extraction from any noisy background. In 2023, Polyn introduced VibroSense, a Tiny AI chip solution for vibration monitoring sensor nodes. (Polyn was ranked as an EE Times Silicon 100 company to watch in 2025.)

Customers who are developing products with ultra-low-power voice control can apply online for the NASP VAD chip evaluation kit. Polyn will demonstrate its first NASP chips, available for ordering, at CES 2026 in Las Vegas, Nevada, January 6-9, in Hall G, Booth #61701. A limited selection will be showcased at CES Unveiled Europe in Amsterdam, October 28, Booth HB143.

The post Polyn delivers silicon implementation of its NASP chip appeared first on EDN.

Are rough surfaces on PCBs impacting high-frequency signals?

Tue, 10/28/2025 - 19:43

Printed-circuit boards (PCBs) are an integral part of most electronic devices today, and as PCBs become smaller, electronics engineers must remain aware of the tiny defects that can affect how these components function, especially when they involve high-frequency signals. Surface roughness may seem minor, but it can significantly affect PCB performance, including impedance and signal transmission. What should electronics engineers know about it, and how can they minimize this issue?

Path length

Rough PCB surfaces increase the signal’s path length. This is due to the skin effect, which occurs because high-frequency electrical signals are more likely to flow along a conductor’s outer surface instead of through its core. A longer path length can also increase resistance and cause energy loss.

Group of PCBs. (Source: Adobe Stock)
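To put rough numbers on the skin effect, the sketch below estimates copper’s skin depth from the standard formula δ = √(ρ/(π·f·μ0)), using nominal values for copper. At 1 GHz the result is only about 2 µm, which is comparable to the roughness of many copper foils, so high-frequency current is forced to follow the rough surface rather than average over it.

```c
#include <math.h>
#include <stdio.h>

/* Estimate copper's skin depth, delta = sqrt(rho / (pi * f * mu0)), using
   nominal values: rho(Cu) = 1.68e-8 ohm-m, mu0 = 4*pi*1e-7 H/m. */
int main(void)
{
    const double pi  = 3.14159265358979323846;
    const double rho = 1.68e-8;          /* copper resistivity, ohm-m */
    const double mu0 = 4.0 * pi * 1e-7;  /* vacuum permeability, H/m  */
    const double freqs_hz[] = { 10e6, 100e6, 1e9, 10e9 };

    for (size_t i = 0; i < sizeof freqs_hz / sizeof freqs_hz[0]; i++) {
        double delta = sqrt(rho / (pi * freqs_hz[i] * mu0));
        printf("%8.0f MHz: skin depth = %.2f um\n",
               freqs_hz[i] / 1e6, delta * 1e6);
    }
    return 0;
}
```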

Engineers can reduce these issues by choosing the appropriate surface finishes for different PCB parts. Immersion silver is a good choice for balancing performance and affordability, although it must be handled carefully to prevent tarnishing.

Electroless nickel immersion gold offers a flat and smooth surface with a gold layer that promotes excellent solderability and conductivity and a nickel layer that offers oxidation protection. This surface finish minimizes signal distortion, making it a popular option for microwave and radio-frequency applications.

Although immersion tin features a smooth surface, it has lower corrosion resistance than other options, making it less frequently selected for high-frequency PCBs. Because hard gold has good conductivity and resists wear, engineers often use it in high-frequency applications, such as on contact points and connectors. This approach minimizes signal loss and increases overall durability.

If you plan to outsource finishing or other manufacturing steps to a specialty provider, consider choosing one with extensive experience and the equipment and expertise needed for your PCB design.

For example, in 2024, PCB company OKI Circuit Technology created an ultra-high multilayer PCB line. This expansion boosted its capacity potential by approximately 1.4× while also helping the company cater to customers with smaller orders. The company has also invested in numerous enhancements that increase its precision and equip it to meet the needs of next-generation communications, robotics, and semiconductors.

Signal integrity

Rough surfaces compromise signal integrity and can cause parasitic capacitance. This issue can also increase crosstalk if it results in uneven electromagnetic field distribution. Smoother surfaces enable faster signal speeds while preventing distortion and delays.

Because surface roughness is one of many factors that can interfere with signal integrity, electronics engineers should scrutinize all design aspects to find other potential culprits. Some companies offer specialized tools to make the task easier.

One provider sells software that uses artificial intelligence to assess proposed designs. Users can also check trace path routing by studying cross-sectional diagrams that show various layers, identifying potential issues more quickly.

Component placement and PCB layout configurations can affect signal integrity, so designers should consider those aspects before assuming rough surfaces have degraded performance. Digital twins and similar tools allow engineers and product designers to experiment with various layouts before committing to a final PCB layout. Keeping a log of all design changes also allows engineers to revert to previous iterations if newer versions worsen signal integrity.

If companies notice ongoing signal integrity problems or other challenges, examining the individual industrial processes may highlight the causes. This usually starts with data collection because the information provides a baseline. Once companies begin tracking trends, they can discover the most effective ways to tighten quality control and meet other goals that improve PCB performance.

Tailored assistance

If electronics engineers conclude that rough surfaces are among the primary contributors to signal issues in their high-frequency PCBs, they can then address the problem by partnering with third-party providers that understand the complexities of finishing small parts. These companies can detail the various finish types available and provide pricing and lead times, depending on the unit order of PCBs.

Companies that need PCB finishing for prototypes or small production runs may request manual processes. Skilled technicians use tools and magnification on parts with complex geometries or other characteristics that make them unsuitable for mechanical methods.

Controlled combustion, electrolytic action, and vibratory containers are some of the other options for finishing small parts through non-manual means. Specialist finishers can examine the PCB designs and recommend the best strategies to achieve consistent smoothness with maximum efficiency.

Because many manufacturers have high-volume finishing needs, some startups have emerged to fill the need while supporting producers’ automation efforts. Augmentus is one example, focusing on physical AI to scale automated surface finishing for high-mix environments. The company has built a fully autonomous system for today’s factory floors. In July 2025, the company secured $11 million in a Series A+ funding round to scale for high-mix, complex robotic surface finishing and welding.

Augmentus views surface finishing as one of the most challenging problems in automation, but the company believes its technology will break new ground. Although it is too early to know how this option and others like it may change PCB production, automated processes could offer better repeatability, making surface roughness less problematic.

Ongoing awareness

Because surface roughness can negatively affect high-frequency PCB signals, engineers should explore numerous ways to address it effectively. Considering this issue early in the design process and selecting appropriate finishes are proactive steps for strengthening component quality control.

About the author

Emily Newton is a technical writer and the editor-in-chief of Revolutionized. She enjoys researching and writing about how technology is changing the industrial sector.

The post Are rough surfaces on PCBs impacting high-frequency signals? appeared first on EDN.

5-V ovens (some assembly required)—part 1

Tue, 10/28/2025 - 15:26

The ovens in this two-part Design Idea (DI) can’t even warm that leftover half-slice of pizza, let alone cook dinner, but they can keep critical components at a constant temperature. In the first part, we’ll look at a purely analog approach, saving something PWM-based for the second.

Perhaps you want to build a really wide-range LF oscillator with a logarithmic sweep, using no more than a resistor, an op-amp, and a diode for the log element. That diode needs to be held at a constant temperature for accuracy and stability: it needs ovening (if there is such a verb).

Wow the engineering world with your unique design: Design Ideas Submission Guide

I made such a device some years ago, and was reminded of it when spotting how a bead thermistor fitted rather nicely into the hole in a TO-220’s tab. (Cluttered workbenches can sometimes trigger interesting cross-fertilizations.) Now, can we turn that tab into a useful temperature-stabilized hotplate, suitable for mounting heat-sensitive components on? Ground rules: aim at a rather arbitrary 50°C, make the circuitry as simple as possible, use a 5-V supply, and keep the consumption low.

This is a practical exploration of how to use a transistor, a thermistor, and as little else as possible to get the job done. It lacks the elegance and sophistication of designs that use a transistor as both a sensor and a source of heat, but it is simpler.

Figure 1 shows the schematic of a simple version needing only a 2-wire connection, along with two photos indicating its construction. It was slimmed down from a more complex but less successful initial idea, which we’ll look at later.

Figure 1 A simple oven circuit, heated by both R2 and Q2. The NTC thermistor Th1 provides feedback, the set point being determined by R1. Note how critical components are thermally tied together as they are all built onto the TO-220 package, as shown in the photos. Also note the fine lead wires to reduce heat loss once the assembly is heat-insulated.

Both R2 and Q2 can contribute to heating. On a cold start (literally) Th1’s resistance is high so that the Darlington pair Q1 and Q2 has enough base voltage to saturate it, with (most of) the rail voltage across R2. As the assembly heats up, Th1’s resistance drops, reducing the drive to Q1/2. The rail now appears across both R2 and Q2, with the latter taking over as the main, though now reduced, source of heat. This gives a degree of proportional control, reducing the drive as the set-point is approached. That base drive depends not only on the ratio of R2 to Th1 but also on Q1/2’s effective VBE, which needs to be temperature-stabilized—as indeed it is. Consumption varies from ~90 mA when cold to ~30 mA when stable.

Setting and measuring the temperature

R1 sets the stabilization temperature, the target being 50°C. Experimentally, 12k worked best, giving a stable hotplate temperature of 49.6°C for an ambient of 19.5°C. Cooling the surroundings to -0.5°C left the hotplate at 48.8°C, so that the hotplate temperature falls by 0.04°C for each degree drop outside. Better thermal insulation would have reduced that.

The measuring probe was a 10k thermistor equipped with fine wires and stuck to the hotplate with thermal paste, the module being wrapped in ~12 mm of foam—and we’ll come back to that. Thermal paste and heat shrink could have been used for the main assembly but dabs of epoxy worked well and kept the hotplate surface flat. Metal-loaded, high-temperature epoxy conducts heat several times better than the plain-vanilla variety while still being an electrical insulator, though that may make little difference given reasonable physical contact.

Other resistors and transistors

R2 is fairly critical. A higher value than 47R heats up slower than is necessary, while a lower one does so too fast, leading to the temperature overshooting because of the limited proportional control. Experiments showed that 47R was close to optimal, with minimal overshoot and thus the fastest stabilization time. The hotplate temperature settles to within a degree in around two minutes and is almost spot-on after three minutes.

Neither Q1 nor Q2 is critical, but the E-line package of a ZTX300 (for example) fits better than a TO-92 would. But why not use an integrated Darlington like the TIP122? Alas, such devices incorporate base–emitter resistors, nominally 10k and 150R, which load Th1 unpredictably. Trying one picked at random showed that R1 needed to be ~7k8 for a set-point of 50°C.

Similarly, this also works with Q1/2 replaced by a MOSFET, with R1’s value now depending on the gate threshold; 3k9 was close for a BUK553. BJTs are far more predictable: build this as drawn, and it should be within a degree, with Q1/2’s VBE settling at ~1.18 V; use a random MOSFET, and it could be anywhere.

Access all areas

The next variant, shown in Figure 2, is electrically similar but provides access to useful circuit nodes to help monitor its performance. It was also easier to experiment with.

Figure 2 While electrically the same as Figure 1, this brings out most circuit nodes to help with experimentation and monitoring, including the LEDs on “pin 3”.

Now we can see what we’re doing! The LEDs give a simple status indication, the green one lighting when it’s close to the set-point rather than fully stable. Figure 3 shows the effect, along with traces for Q1/2’s Vcc—allowing us to read the current in the transistors and R2—and the hotplate temperature. The latter is accurate, but the voltage and current scales are less so because they assume a precise 5-V supply and a 50-Ω load rather than the measured 4.94 V and 47Ω plus stray resistance. This module stabilized at ~50.6°C.

Figure 3  Measurements taken from Figure 2’s circuit for about three minutes after a cold start.

So much for the basic circuit. Now, it needs thermal insulation to keep the heat in, a block of foam being the obvious choice. But foams have widely differing thermal conductivities. Expanded polystyrene or polyethylene will work, but the foamed polyisocyanurate or similar used for wall insulation panels is around twice as good—and offcuts are often freely available from builders’ skips/dumpsters! Figure 4 shows the module from Figure 2 mounted on/in a block of it, with at least 10 mm of foam around any part of the circuit module.

Wikipedia has an illuminating plot of the thermal conductivities of many materials, including our foams and epoxies. The article of which it is a part has a lot of useful background, too.

Figure 4 The module from Figure 2 mounted on a block of foam. The intermediate connecting wires are meandered across its surface to minimize heat loss. Note the diode, typical of a component needing stabilization, stuck to the hotplate, ready for its new connections to be treated similarly.

The fine lead wires—0.15 mm diameter, as used with wiring pencils—are meandered over the surface to lengthen the thermal paths. Copper has a thermal conductivity some 19,000 times greater than the foam: 384 W/m·K vs ~0.02 W/m·K. In very crude terms, for a given thermal path length and temperature gradient, a single, short 0.11-mm-diameter copper wire will leak heat at about the same rate as the entire surface area of our foam block (~6000 mm2). Ironically, perfect insulation would be bad, as the innards could never cool to recover from an overshoot. This build took 620 seconds to cool by 63% of the way to ambient.

Hot stuff

Disconnecting Th1 in Figure 2’s circuit let the module heat up to the max while still allowing monitoring—or would have done, had I not chickened out when its resistance dropped to 720 Ω, for just over 100°C. (The epoxy was rated to 110°C.) That was with the full insulation; in free air, it struggled to reach 70°C—the rating for other components.
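For reference, the Beta model relates NTC resistance to temperature as R(T) = R25·exp(B·(1/T − 1/T25)), with T in kelvin. The sketch below assumes a nominal 10-k part with B ≈ 3950 K, values that may not match the thermistor actually used here; with those assumptions, 720 Ω lands at roughly 100°C, in line with the reading above.

```c
#include <math.h>
#include <stdio.h>

/* Beta-model estimate of NTC resistance vs. temperature:
   R(T) = R25 * exp(B * (1/T - 1/T25)), temperatures in kelvin.
   R25 = 10 kohm and B = 3950 K are assumed nominal values and may not
   match the actual thermistor used in this design. */
int main(void)
{
    const double R25 = 10000.0, B = 3950.0, T25 = 298.15;

    for (double t_c = 20.0; t_c <= 100.0; t_c += 10.0) {
        double t_k = t_c + 273.15;
        double r = R25 * exp(B * (1.0 / t_k - 1.0 / T25));
        printf("%5.0f C : %7.0f ohm\n", t_c, r);   /* ~3.6 k at 50 C, ~700 ohm at 100 C */
    }
    return 0;
}
```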

One subtle problem is the inevitable mismatch between the sensing thermistor and the target device, as analyzed in a Stephen Woodward DI, which also implies that the position of the target on the hotplate will affect its actual temperature. We’ll ignore that for the moment, because we’re more interested in constancy than precision, but will return to it in Part 2.

Finishing at the starting point

The foregoing circuits were actually simplifications of my starting point, which is shown in Figure 5. When the temperature is stable at ~50°C, point A is at half-rail. R3 is chosen so that U1’s output will turn Q1/2 on just enough to maintain that. However, while the extra gain improves the temperature regulation, it also causes some overshoot. R3 or R2 must be trimmed to set the temperature: fiddly, and not really designable. R3 was calculated at 4k12 but needed ~5k6 in reality. That’s why I gave up on this approach.

Figure 5 The original circuit that suffered from overshoot. The LEDs give a too-high/too-low temperature indication.

The long-tailed pair of Darlingtons (Q3, Q4) sense the difference between the thermistor voltage—half the rail when stable, as noted—and a half-rail reference, so that the red LED will be on when the temperature is low, the green one lighting while it’s high, with both on at the stable point. Full-red to full-green takes ~300 mV differential, or ~±3°C. This works but gives no better indication than the LEDs in Figure 2. (The low-power Darlingtons used seem to omit those extra, internal resistors. Q1/2 could now be replaced by that TIP122, as it’s driven by a low-impedance source. R4 is purely to protect against current surges.)

Figure 6 plots its performance when starting from cold, showing the overshoot and recovery. Compare this with Figure 3.

Figure 6 The start-up performance of Figure 5’s circuit.

If I were building something similar in any quantity, I wouldn’t do it like this: SMDs and a flexible circuit would be much cleaner. For example, a 2512 power resistor for R2 (or R5 in Figure 5), pressed flat, with some insulation, against the power transistor’s tab would probably be ideal.

In Part 2, we’ll see how even a simple PWM-based circuit can give better proportional control and hence generally better performance. The bad news: we may eventually abandon the TO-220 tab in favor of another way of assembling our hotplate.

Related Content

The post 5-V ovens (some assembly required)—part 1 appeared first on EDN.
