EDN Network

Voice of the Engineer

Bias for HF JFET

Wed, 05/15/2024 - 16:57

Junction field-effect transistors (JFETs) usually require some reverse bias voltage to be applied to the gate terminal.

In HF and UHF applications, this bias is often provided using the voltage across the source resistor Rs (Figure 1).

Figure 1: JFETs typically require some reverse bias across the gate terminal and in HF/UHF applications, this is often provided using the voltage across resistor Rs.


Aside from its evident inefficiency, this approach has other shortcomings as well:

  • The drain current has statistical dispersion, so some circuit adjustment is required to hit a target current.
  • The drain current may drift with temperature or supply fluctuations.
  • To achieve an acceptably low source impedance, several capacitors Cs have to be used.
  • To maintain the same headroom, a higher supply voltage is required.
  • The lack of direct contact with the ground plane means worse cooling of the transistor, which is crucial for power applications.

The circuit in Figure 2 is free of all these drawbacks. It consists of a control loop that produces a control voltage of negative polarity for an n-channel JFET amplifier.

Figure 2: A control loop that produces a control voltage of negative polarity for an n-channel JFET amplifier in HF and UHF applications.

The circuit uses two infrared LEDs (IR333C, 5 mm diameter) in a self-made photocoupler: two such LEDs placed face-to-face in a suitable PVC tube about 12 mm long, and that’s all. One such device produces 0.81 V at ILED < 4 mA, which is quite sufficient for the FHX35LG HEMT, for example.

Of course, if you need higher voltage, several such devices can be simply cascaded.

The main amplification in the loop is performed by the JFET itself; its value is about gm * R1, where gm is the transconductance of Q1.

The transistor pair Q2 and Q3 compares the voltage drops across resistors R1 and R2, forcing them to be equal. Hence, by changing the ratio R2:R3, you can set the working point you need:

Id = Vdd * R2 / ((R2 + R3) * R1)

As we can see, the drain current (Id) still depends on the supply voltage (Vdd). To remove this dependence, we can replace resistor R2 with a Zener diode; then:

Id = Vz / R1
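
As a quick sanity check on those two expressions, here is a minimal Python sketch; the component values are hypothetical, chosen only for illustration and not taken from Figure 2.

```python
# Evaluate the two working-point formulas from the text.
# All component values below are illustrative assumptions.

def id_divider(vdd, r1, r2, r3):
    """Drain current set by the R2/R3 divider: Id = Vdd*R2 / ((R2 + R3)*R1)."""
    return vdd * r2 / ((r2 + r3) * r1)

def id_zener(vz, r1):
    """Drain current with R2 replaced by a Zener diode: Id = Vz / R1."""
    return vz / r1

vdd, r1, r2, r3 = 5.0, 100.0, 1_000.0, 4_000.0   # volts and ohms (hypothetical)
print(f"Divider-set working point: Id = {id_divider(vdd, r1, r2, r3) * 1e3:.1f} mA")
print(f"Zener-set working point:   Id = {id_zener(1.0, r1) * 1e3:.1f} mA")  # assumes a 1 V Zener
```

Note that the second result no longer contains Vdd, which is the point of the Zener substitution.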

 Peter Demchenko studied math at the University of Vilnius and has worked in software development.

 Related Content


Thin PCBs: Challenges with BGA packages

Wed, 05/15/2024 - 16:33

During the electrical design process, certain design choices must be made. One example is a USB Type-C connector-based design with a straddle-mount connector. In such a scenario, the overall PCB thickness is constrained, since the straddle-mount connector dictates the board thickness it will accept. For historical reasons, the standard PCB thickness is 0.063” (1.57 mm).

Before the advent of PCBs, transistor-based electronics were often assembled using a method called breadboarding, which involved using wood as a substrate. However, wood was fragile, leading to delicate assemblies. To address this, bakelite sheets, commonly used on workbench surfaces, became the standard substrate for electronic assemblies, with a thickness of 1/16 inch, marking the beginning of PCBs at this thickness.

Figure 1 A PCB cross section is shown with a straddle-mount type connector. Source: Wurth Elektronik

Take the example of Wurth Elektronik’s USB 3.1 plug, a straddle-mount connector with part number 632712000011. The part’s datasheet recommends a PCB thickness of 0.8 mm (0.031”) for optimal use. This board thickness is common among various board fabrication houses, and the 0.031” board is relatively easy to fabricate, as many fab houses can do a 6-layer PCB with 1-oz copper on each layer.

However, designing and working with thin PCBs presents several challenges. One of the primary concerns is their mechanical fragility. Thin PCBs are more flexible and prone to bending or warping, making them difficult to handle during assembly and more susceptible to damage. That handling includes the pick-and-place assembly process, hole drilling, in-circuit testing (ICT), and probing during functional testing.

The second level of handling is by the end user, for example, dropping the device containing the PCB assembly (PCBA). Additionally, thin PCBs often require specialized manufacturing processes and materials, leading to increased production costs. Component placement becomes more critical as well, as traces may need to be positioned closer together, increasing the risk of short circuits and signal interference.

Furthermore, thin PCBs face challenges in heat dissipation due to their reduced thermal mass. Addressing these challenges demands careful consideration during the design, manufacturing, and assembly stages to ensure the reliability and performance of the final product.

These issues are especially critical when a designer mounts a ball grid array (BGA) component on a 0.031”-thick board. Most major fabrication houses recommend a minimum thickness of 0.062” when BGAs are mounted on the board.

How to test durability

The mechanical durability of PCB assemblies is generally assessed using a drop test. Drop test requirements for a PCBA typically include specifying the drop height, drop surface, number of drops, orientation during the drop, acceptance criteria, and testing standards. The drop height is the distance from which the PCBA will be dropped, typically ranging from 30 to 48 inches, depending on the application and industry standards.

The drop surface, such as concrete or wood, is also defined. Manufacturers determine the number of drops the PCBA must withstand, usually between 3 to 6 drops. The orientation of the PCBA during the drop, whether face down, face up, or on an edge or corner, is also specified. Acceptance criteria, such as functionality after the drop and any visible damage, are clearly defined.

Testing standards like IPC-TM-650 or specific customer requirements guide the testing process. For a medical device, the drop test requirements are governed by section 15.3.4.1 of IEC 60601-1 Third Edition 2005-12. By establishing these requirements, manufacturers ensure that their PCBAs and products are robust enough to withstand real-world use and maintain functionality even after being subjected to drops and impacts.
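
To make that checklist concrete, here is a minimal sketch of how such a drop-test specification might be captured in code; the values shown are hypothetical examples within the ranges mentioned above, not requirements from IPC-TM-650 or IEC 60601-1.

```python
from dataclasses import dataclass, field

@dataclass
class DropTestSpec:
    """Illustrative drop-test specification mirroring the parameters discussed above."""
    drop_height_in: float                    # typically 30 to 48 inches
    drop_surface: str                        # e.g., "concrete" or "wood"
    num_drops: int                           # usually 3 to 6
    orientations: list[str] = field(default_factory=list)
    acceptance: str = "functional after drop, no visible damage"
    standard: str = "IPC-TM-650"             # or a customer/medical requirement

# Hypothetical example instance
spec = DropTestSpec(
    drop_height_in=36.0,
    drop_surface="concrete",
    num_drops=6,
    orientations=["face down", "face up", "edge", "corner"],
)
print(spec)
```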

A degraded solder joint might not be caught during a drop test until a functional failure is observed. The BGA can fail due to assembly-related issues such as thermal stresses during soldering or poor solder joint quality, and a thin board is further weakened by excessive mechanical shock and vibration during assembly.

These defects can be captured during a drop test, as the BGA part may not withstand the stresses encountered, as shown in the figures below. BGA failures can be inspected using X-ray, optical inspection, or electrical testing. A detailed analysis may be performed using cross-sectioning and scanning electron microscopy (SEM).

Figure 2 The BGA solder joint shows a line crack. Source: Keyence

Figure 3 The above image displays a cross section of a healthy BGA. Source: Keyence

Figure 4 Here is a view of some of the BGA failure modes. Source: Semlabs

How to fix BGA failure on thin PCBs

Pad cratering is the fracturing of laminate under Cu pads of surface mount components, which often occurs during mechanical events. The initial crack can propagate, causing electrically open circuits by affecting adjacent Cu conducting lines. It’s more common in lead-free assemblies due to different laminate materials. Mitigation involves reducing stress on the laminate or using stronger, more pad cratering-resistant materials.

The issue can be addressed by mechanically reinforcing the PCB or changing the laminate material. This can be done with any of the following steps.

  • Add a stiffener: thinner boards are more prone to warping and may require additional fixturing (stiffeners or work-board holders) to be processed on the manufacturing line. A PCB stiffener is not an integral part of the circuit board; rather, it’s an external structure that offers mechanical support to the board.

Figure 5 An aluminum bar is shown as a mechanical PCB stiffener. Source: Compufab

  • Apply adhesive/epoxy to the BGA corners or use a BGA underfill. Adhesives that can be used for this purpose include Zymet UA-3307-B Edgebond, Korapox 558, and Eccobond 286. The epoxy along the BGA corners, or as an underfill, strengthens the assembly, thereby preventing PCB flexure and hence the failure.
  • Place strict limitations on board flexure during circuit board assembly operations, for instance, by supporting the PCB during handling operations like via-hole drilling, pick and place, ICT, or functional testing with flying probes.
  • Match the recommended soldering profile of the BGA. The issue can be made worse if the BGA manufacturer’s recommended soldering profile is not followed, resulting in cold solder joints. There should be enough thermocouples on the PCB panel to monitor the PCB temperature.
  • Ensure that the BGA pad size is per the manufacturer’s recommendation.

Managing thin PCB challenges

A thin PCB (0.031”) can weaken the PCB assembly, thereby making it susceptible to mechanical and thermal forces. And the challenges are unique when mounting a BGA to the thin PCB.

However, the design challenges and risks can be managed by carefully controlling the PCB handling processes and then strengthening the thin PCB with design solutions discussed in this article.

Editor’s Note: The views expressed in the article are author’s personal opinion.

Jagbir Singh is a staff electrical engineer for robotics at Smith & Nephew.

Related Content


The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess

Wed, 05/15/2024 - 16:32

Starting last year, as I mentioned at writeup publication time, EDN asked me to do yearly coverage of Google’s (or is that Alphabet’s? whatevah) I/O developer conference, as I’d already long been doing for Apple’s WWDC developer-tailored equivalent event, and on top of my ongoing throughout-the-year coverage of notable Google product announcements:

And, as I also covered extensively a year ago, AI ended up being the predominant focus of Google I/O’s 2023 edition. Here’s part of the upfront summary of last year’s premier event coverage (which in part explains the rationalization for the yearly coverage going forward):

Deep learning and other AI operations…unsurprisingly were a regularly repeated topic at Wednesday morning’s keynote and, more generally, throughout the multi-day event. Google has long internally developed various AI technologies and products based on them—the company invented the transformer (the “T” in “GPT”) deep learning model technique now commonly used in natural language processing, for example—but productizing those research projects gained further “code red” urgency when Microsoft, in investment partnership with OpenAI, added AI-based enhancements to its Bing search service, which competes with Google’s core business. AI promises, as I’ve written before, to revolutionize how applications and the functions they’re based on are developed, implemented and updated. So, Google’s ongoing work in this area should be of interest even if your company isn’t one of Google’s partners or customers.

And unsurprisingly, given Google’s oft-stated, at the time, substantial and longstanding planned investment in various AI technologies and products and services based on them, AI was again the predominant focus at this year’s event, which took place earlier today as I write these words, on Tuesday, May 14:

But I’m getting ahead of myself…

The Pixel 8a

Look back at Google’s Pixel smartphone family history and you’ll see a fairly consistent cadence:

  • One or several new premium model(s) launched in the fall of a given year, followed by (beginning with the Pixel 3 generation, to be precise)
  • one (or, with the Pixel 4, two) mainstream “a” variant(s) a few calendar quarters later

The “a” variants are generally quite similar to their high-end precursors, albeit with feature set subtractions and other tweaks reflective of their lower price points (along with Google’s ongoing desire to still turn a profit, therefore the lower associated bill of materials costs). And for the last several years, they’ve been unveiled at Google I/O, beginning with the Pixel 6a, the mainstream variant of the initial Pixel 6 generation based on Google-developed SoCs, which launched at the 2022 event edition. The company had canceled Google I/O in 2020 due to the looming pandemic, and 2021 was 100% virtual and was also (bad-pun-intended) plagued by ongoing supply chain issues, so mebbe they’d originally planned this cadence earlier? Dunno.

The new Pixel 8a continues this trend, at least from feature set foundation and optimization standpoints (thicker display bezels, less fancy-pants rear camera subsystem, etc.). And by the way, please put in proper perspective reviewers who say things like “why would I buy a Pixel 8a when I can get a Pixel 8 for around the same price?” They’re not only comparing apples to oranges; they’re also comparing old versus new fruit (this is not an allusion to Apple; that’s in the next paragraph). The Pixel 8 and 8 Pro launched seven months ago, and details on the Pixel 9 family successors are already beginning to leak. What you’re seeing are retailers promo-pricing Pixel 8s to clear out inventory, making room for Pixel 9 successors to come soon. And what these reviewers are doing is comparing them against brand-new list-price Pixel 8as. In a few months, order will once again be restored to the universe. That all said, to be clear, if you need a new phone now, the Pixel 8 is a compelling option.

But here’s the thing…this year, the Pixel 8a was unveiled a week prior to Google I/O, and even more notably, right on top of Apple’s most recent “Let Loose” product launch party. Why? I haven’t yet seen a straight answer from Google, so here are some guesses:

  • It was an in-general attempt by Google to draw attention away from (or at least mute the enthusiasm for) Apple and its comparatively expensive (albeit non-phone) widgets
  • Specifically, someone at Google had gotten a (mistaken) tip that Apple might roll out one (or a few) iPhone(s) at the event and decided to proactively queue up a counterpunch
  • Google had so much else to announce at I/O this year that they, not wanting the Pixel 8a to get lost in all the noise, decided to unveil it ahead of time instead.
  • They saw all the Pixel 8a leaks and figured “oh, what the heck, let’s just let ‘er rip”.

The Pixel Tablet (redux)

But that wasn’t the only thing that Google announced last week, on top of Apple’s news. And in this particular case the operative term is relaunched, and the presumed reasoning is, if anything, even more baffling. Go back to my year-back coverage, and you’ll see that Google launched the Tensor G2-based Pixel Tablet at $499 (128GB, 256GB for $100 more), complete with a stand that transforms it into an Amazon Echo Show-competing (and Nest Hub-succeeding) smart display:

Well, here’s the thing…Google relaunched the very same thing last week, at a lower price point ($399), but absent the stand in this particular variant instance (the stand-inclusive product option is still available at $499). It also doesn’t seem that you can subsequently buy the stand, more accurately described as a dock (since it also acts as a charger and embeds speakers that reportedly notably boost sound quality), separately. That all said, the stand-inclusive Pixel Tablet is coincidentally (or not) on sale at Woot! for $379.99 as I type these words, so…🤷‍♂️

And what explains this relaunch? Well:

  • Apple also unveiled tablets that same day last week, at much higher prices, so there’s the (more direct in this case, versus the Pixel 8a) competitive one-upmanship angle, and
  • Maybe Google hopes there’s sustainable veracity to the reports that Android tablet shipments (goosed by lucrative trade-in discounts) are increasing at iPads’ detriment?

Please share your thoughts on Google’s last-week pre- and re-announcements in the comments.

OpenAI

Turnabout is fair play, it seems. Last Friday, rumors began circulating that OpenAI, the developer of the best-known GPT (generative pre-trained transformer) LLM (large language model), among others, was going to announce something on Monday, one day ahead of Google I/O. And given the supposed announcement’s chronological proximity to Google I/O, those rumors further hypothesized that perhaps OpenAI was specifically going to announce its own GPT-powered search engine as an alternative to Google’s famous (and lucrative) offering. OpenAI ended up in-advance denying the latter rumor twist, at least for the moment, but what did get announced was still (proactively, it turned out) Google-competitive, and with an interesting twist of its own.

To explain, I’ll reiterate another excerpt from my year-ago Google I/O 2023 coverage:

The way I look at AI is by splitting up the entire process into four main steps:

  1. Input
  2. Analysis and identification
  3. Appropriate-response discernment, and
  4. Output

Now a quote from the LLM-focused section of my 2023 year-end retrospective writeup:

LLMs’ speedy widespread acceptance, both as a generative AI input (and sometimes also output) mechanism and more generally as an AI-and-other interface scheme, isn’t a surprise…their popularity was a matter of when, not if. Natural language interaction is at the longstanding core of how we communicate with each other after all, and would therefore inherently be a preferable way to interact with computers and other systems (which Star Trek futuristically showcased more than a half-century ago). To wit, nearly a decade ago I was already pointing out that I was finding myself increasingly (and predominantly, in fact) talking to computers, phones, tablets, watches and other “smart” widgets in lieu of traditional tapping on screens and keyboards, and the like. That the intelligence that interprets and responds to my verbally uttered questions and comments is now deep learning trained and subsequently inferred versus traditionally algorithmic in nature is, simplistically speaking, just an (extremely effective in its end result, mind you) implementation nuance.

Here’s the thing: OpenAI’s GPT is inherently a text-trained therefore text-inferring deep learning model (steps 2 and 3 in my earlier quote), reflected in the name of the “ChatGPT” AI agent service based on it (later OpenAI GPT versions also support still image data). To speak to an LLM (step 1) as I described in the previous paragraph, for example, you need to front-end leverage another OpenAI model and associated service called Whisper. And for generative AI-based video from text (step 4) there’s another OpenAI model and service, back-end this time, called Sora.

Now for that “interesting twist” from OpenAI that I mentioned at the beginning of this section. In late April, a mysterious and powerful chatbot named “gpt2-chatbot” appeared on an LLM comparative evaluation forum, only to disappear shortly thereafter…and reappear again a week after that. Its name led some to deduce that it was a research project from OpenAI (further fueled by a cryptic social media post from CEO Sam Altman)—perhaps a potential successor to the latest-generation GPT-4 Turbo—which had intentionally-or-not leaked into the public domain.

Turns out, we learned on Monday, it was a test-drive preview of the now-public GPT-4o (“o” for “omni”). And not only does GPT-4o outperform OpenAI precursors as well as competitors, based on Chatbot Arena leaderboard results, it’s also increasingly multimodal, meaning that it’s been trained on and therefore comprehends additional input (as well as generating additional output) data types. In this case, it encompasses not only text and still images but also audio and vision (specifically, video). The results are very intriguing. For completeness, I should note that OpenAI also announced chatbot agent application variants for both macOS and Windows on Monday, following up on the already-available Android and iOS/iPadOS versions.

Google Gemini

All of which leads us (finally) to today’s news, complete with the aforementioned 121 claimed utterances of “AI” (no, I don’t know how many times they said “Gemini”):


Gemini is Google’s latest LLM, previewed a year ago, formally unveiled in late 2023 and notably enhanced this time around. Like OpenAI with GPT, Google’s deep learning efforts started out text-only with models such as LaMDA and PaLM; more recent Gemini has conversely been multimodal from the get-go. And pretty much everything Google talked about during today’s keynote (and will cover more comprehensively all week) is Gemini in origin, whether as-is or:

  • Memory footprint and computational “muscle” fine-tuned for resource-constrained embedded systems, smartphones and such (Gemini Nano, for example), and/or
  • Training dataset-tailored for application-specific use cases

including the Gemma open model variants.

In the interest of wordcount (pushing 2,000 as I type this), I’m not going to go through each of the Gemini-based services and other technologies and products announced today (and teased ahead of time, in Project Astra’s case) in detail; those sufficiently motivated can watch the earlier-embedded video (upfront warning: 2 hours), archived liveblogs and/or summaries (linked to more detailed pieces) for all the details. As usual, the demos were compelling, although it wasn’t entirely clear in some cases whether they were live or (as Google caught grief for a few months ago) prerecorded and edited. More generally, the degree of success in translating scripted and otherwise controlled-environment demo results into real-life robustness (absent hallucinations, please) is yet to be determined. Here are a few other tech tidbits:

  • Google predictably (they do this every year) unveiled its sixth-generation TPU (Tensor Processing Unit) architecture, code-named Trillium, with a claimed 4.7x boost in compute performance per chip versus today’s 5th-generation precursor. Design enhancements to achieve this result include expanded (count? function? both? not clear) matrix multiply units, faster clock speeds, doubled memory bandwidth and the third-generation SparseCore, a “specialized accelerator for processing ultra-large embeddings common in advanced ranking and recommendation workloads,” with claimed benefits both in training throughput and subsequent inference latency.
  • The company snuck a glimpse of some AR glasses (lab experiment? future-product prototype? not clear) into a demo. Google Glass 2, Revenge of the Glassholes, anyone?
  • And I couldn’t help but notice that the company ran two full-page (and identical-content, to boot) ads for YouTube in today’s Wall Street Journal even though the service was barely mentioned in the keynote itself. Printing error? Google I/O-unrelated, vs.-TikTok competitive advertising? Again, not clear.

And with that, my Google I/O coverage is fini for another year. Over to all of you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


Change of guard at Intel Foundry, again

Tue, 05/14/2024 - 18:24

A little more than a year after he took the reins of Intel’s ambitious bid for semiconductor contract manufacturing, Stuart Pann is retiring while handing over the charge to Kevin O’Buckley. The transition took place on Monday, 13 May, and it once more raised questions about the future viability of Intel’s third-party foundry business.

Pann, a 35-year company veteran, joined Intel during the heyday of the PC revolution in 1981. He returned to the Santa Clara, California-based semiconductor firm in 2021 to lead the chip manufacturing division, Intel Foundry Services (IFS). He replaced Intel Foundry’s first chief, Randhir Thakur, who later became CEO and managing director of Tata Electronics, the electronics manufacturing arm of Indian conglomerate Tata Group.

Figure 1 Pann, currently in a support role for a smooth transition, will retire at the end of this month. Source: Intel

Now O’Buckley replaces Pann, in a déjà vu of the Thakur-to-Pann handover a year ago. Meanwhile, during the first quarter of 2024, Intel Foundry reported revenue of $4.4 billion, down by $462 million compared to the first quarter of 2023, a drop mainly attributed to lower revenues from back-end services and product samples.

Pann—who left the company only a few months after Intel Foundry marked the official launch of the manufacturing business as an independent entity to compete with the likes of TSMC and Samsung—set up Intel’s IDM 2.0 Acceleration Office (IAO) to guide the implementation of an internal foundry model. IAO closely works with Intel’s business units to support the company’s internal foundry model.

Intel Foundry, which aims to move beyond traditional foundry offerings and establish itself as the world’s first open-system foundry, faces huge technical and commercial challenges. That includes combining wafer fabrication, advanced process and packaging technology, chiplet standards, software, and assembly and test capabilities in a unified semiconductor ecosystem.

O’Buckley inherits these challenges. He comes from Marvell, where he led the company’s custom chips business as senior VP for the Custom, Compute and Storage Group. O’Buckley came to Marvell in 2019 via its acquisition of Avera Semiconductor, a 1,000-person chip design company that traces its roots to IBM, which offloaded it to GlobalFoundries before it was sold to Marvell. O’Buckley led Avera’s divestiture from GlobalFoundries.

Figure 2 Like his predecessor, O’Buckley will report directly to CEO Pat Gelsinger. Source: Intel

Intel CEO Pat Gelsinger, who has bet Intel’s revival bid on setting up an independent fab business, acknowledges that Intel Foundry is still some distance away from profitability due to the large up-front investment needed to ramp it up. However, time isn’t on Gelsinger’s side, meaning a swift turnaround plan is in order for O’Buckley.

O’Buckley is an outsider, a plus at Intel, where employees are known to stay for many years; his expertise in the custom chips business will also be an asset at Intel Foundry. Moreover, during his earlier stint at IBM, he spearheaded the company’s development of 22- and 14-nm FinFET technologies. As Gelsinger puts it, he has a unique blend of expertise in both foundry and fabless companies.

Now comes the tough part, execution.

Related Content


Sampling and aliasing

Tue, 05/14/2024 - 16:31

If we want to take samples of some analog waveform, as in doing analog to digital conversions at some particular conversion rate, there is an absolute lower limit to the rate of doing conversions versus the highest frequency component of the analog signal. That limit must not be violated if the sampling process is to yield valid results. We do not want to encounter the phenomenon called “aliasing”.

The term “aliasing” as we use it here has nothing to do with spy thrillers or crime novels. Aliasing is an unwanted effect that can arise when an analog waveform is sampled for its instantaneous values at regular time intervals that are longer than half the reciprocal of the waveform’s highest frequency component. If we were to sample some waveform once every microsecond (a 1 MHz sampling rate), the waveform must contain no components above 500 kHz; turned around, a waveform with content up to 1 MHz would demand a sampling frequency of 2 MHz or faster.

Aliasing will occur if the sampled waveform has frequency component(s) that are greater in frequency than 50% of the sampling frequency. To turn that statement around, aliasing will occur if the sampling frequency is too low. Aliasing will occur at any sampling rate that is lower than twice the highest frequency component of the waveform being sampled.

The next question is: Why?

The late comedian Professor Irwin Corey once posed a similar question: “Why is the sky blue?” His answer was something like “This is a question which must be taken in two parts. The first part is ‘Why?’ ‘Why’ is a question Man has asked since the beginning of time. Why? I don’t know. The second part is ‘Is the sky blue?’ The answer is ‘Yes!'”

Fortunately, we can do a little better than that as follows.

The sampling process can be thought of as multiplying the waveform being sampled by a very narrow duty cycle pulse waveform of zero value for most of the time and of unity value for the very narrow sampling time interval. That sampling waveform will be rich in harmonics. There will be a spectral line at the sampling frequency itself plus spectral lines at each of the sampling frequency’s harmonics as well. Each spectral line will have sidebands as shown in Figure 1 which will extend from those sampling frequency spectral lines up and down the frequency spectrum in keeping with the sampled waveform’s bandwidth.

Figure 1 Sampling versus aliasing, where each spectral line has sidebands that extend up and down the frequency spectrum from the sampling-frequency spectral lines, in keeping with the sampled waveform’s bandwidth.

The sampling waveform is amplitude modulated by the sampled waveform and so I’ve chosen to call that sampled waveform’s highest frequency component, Fmod. Each bandwidth is 2 * Fmod.

If the sampling frequency is high enough as with Fs1, the illustrated sidebands do not overlap. There is a respectable guard band between them, and no aliasing occurs.

If the sampling frequency starts getting lower as with Fs2, the sidebands start getting closer together and there is a less comfortable, if I may use that word, guard band.

If the sampling frequency gets too low as with Fs3 which is less than twice Fmod, the sidebands overlap, and we have aliasing. Sampling integrity is lost. The sampled waveform cannot be reconstructed from the undersampled output of this now unsatisfactory system.
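
For a quick numerical feel of that overlap, the sketch below (a generic illustration, not tied to the figure) computes the folded “alias” frequency of an undersampled tone and confirms that its samples are indistinguishable from those of a lower-frequency tone.

```python
import numpy as np

fs = 10_000.0          # sampling frequency, Hz (hypothetical)
f_signal = 7_000.0     # tone above fs/2, so it will alias

# Folded (aliased) frequency: reflect f about the nearest multiple of fs
k = round(f_signal / fs)
f_alias = abs(f_signal - k * fs)
print(f"A {f_signal:.0f} Hz tone sampled at {fs:.0f} Hz aliases to {f_alias:.0f} Hz")

# Demonstration: the two tones produce identical sample values
n = np.arange(16)
x_true = np.cos(2 * np.pi * f_signal * n / fs)
x_alias = np.cos(2 * np.pi * f_alias * n / fs)
print("Samples match:", np.allclose(x_true, x_alias))
```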

Consider this an homage to Claude Shannon (April 30, 1916 – February 24, 2001) and his sampling theorem.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


Cancel thermal airflow sensor PSRR with just two resistors

Mon, 05/13/2024 - 16:35

Self-heated transistors used as thermal airflow sensors are a particular (obsessive?) interest of mine, and over the years I must have designed dozens of variations on this theme. Figure 1 illustrates one such topology seen here before. It connects two transistors in a Darlington pair, with Q2 serving as an unheated ambient thermometer and Q1 as the self-heated airflow sensor. Reference amplifier A1 and current-sense resistor R3 regulate a constant 67 mA of heating current, or 333 mW of heating power at 5 V.

Figure 1 Typical self-heated transistor thermal airflow sensor.


This heat input raises Q1’s temperature above ambient by 64°C at 0 fpm air speed, cooling to 24°C at 1000 fpm as shown in Figure 2.

 Figure 2 Thermal sensor temperature versus air speed.

As shown in Figure 2, the relationship between the airspeed and cooling of the self-heated transistor sensor is highly nonlinear. This is an inherent characteristic of such sensors and causes the sensor temperature versus air speed signal to be equally nonlinear. Consequently, even relatively small power supply instabilities, that translate % for % into instability in sensor temperature rise, can create surprisingly large airspeed measurement errors.

Clearly, anything less than perfect power supply stability can make this a problem.

But Figure 3 offers a surprisingly simple and inexpensive fix consisting of just two added resistors: R7 and R8.

Figure 3 Added R7 and R8 establish an instability-cancelling relationship between heating voltage V and heating current I.

The added Rs sum feedback from current sensing R3 with heating voltage source V. Summation happens in a ratio such that a percentage increase in V produces an equal and opposite percentage decrease in current I, and vice versa. The result is shown graphically in Figure 4.

Note the null (inflection) point at 5 V where heating is perfectly independent of voltage.

Figure 4 Sensor temperature versus supply voltage where: Blue = heating voltage V and (uncorrected) power; Red = heating current I; and Black = I*V = heating power / temperature.

Here’s the same thing in simple nullification math:

I = (0.2 – V*R8/R7)/R3 = (0.2 – 0.02V)/R3
H = I*V = (0.2V – 0.02V²)/R3
dH/dV = (0.2 – 0.04V)/R3 = (0.2 – 0.2)/R3 = 0 @ V = 5 volts
dH = -0.01% @ V = 5 volts ±1%

Note the 200:1 stability improvement that attenuates a ±1% variation in V down to only -0.01% variation in heating power and therefore temperature.
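
Those figures are easy to verify numerically; the short sketch below evaluates the heating-power expression around the 5 V null, with R3 set to 1 Ω purely as a normalization.

```python
# Verify the nullification math: heating power H(V) = (0.2*V - 0.02*V**2) / R3
R3 = 1.0  # ohms, used only as a normalization constant here

def heating_power(v):
    return (0.2 * v - 0.02 * v**2) / R3

h_nom = heating_power(5.00)
for v in (4.95, 5.00, 5.05):          # 5 V +/- 1%
    dev = (heating_power(v) / h_nom - 1) * 100
    print(f"V = {v:.2f} V  ->  H deviation = {dev:+.3f} %")
# Output shows roughly -0.01% at +/-1% supply change, i.e., the ~200:1 improvement noted above.
```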

Problem solved. Cheaply!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


Is Rohm closer to acquiring Toshiba’s power chip business?

Mon, 05/13/2024 - 13:55

As Rohm Semiconductor deepens its ties with Toshiba Electronic Devices & Storage, industry watchers wonder if Rohm is getting closer to acquiring Toshiba’s power chips business. It all began late last year when the two companies announced a joint investment of $2.7 billion to collaborate on manufacturing power electronics devices.

But what made this news more noteworthy was that the announcement followed Rohm’s becoming part of a private equity group that was planning to take Toshiba private. However, when the two companies joined hands to boost the volume production of power devices, they stated that they had been considering this collaboration for some time and that it wasn’t a starting point in Rohm acquiring Toshiba’s power semiconductors business.

There is a third player in this $2.7 billion investment plan: the Japanese government, which adds another dimension to this hookup between Rohm and Toshiba Electronic Devices & Storage. Japan, aiming to strengthen the resilience of its semiconductor supply chains, recognises the strategic importance of power electronics and wants to double the power chip production in the country.

Moreover, Japan sees the local power chip industry as too fragmented, which makes it hard to compete with companies like Infineon. So, the Japanese government will subsidize one-third of this $2.7 billion investment in power semiconductor production on the part of Rohm and Toshiba.

A closer look at this dimension also adds merit to the possibility of Rohm subsequently acquiring Toshiba’s power semiconductors business. It’s worth mentioning that Rohm was the first company to mass produce silicon carbide (SiC) MOSFETs, and it has been continuously investing in this wide-bandgap (WBG) technology since then.

Figure 1 The Miyazaki Plant No. 2, based on assets acquired from Solar Frontier in July 2023, is dedicated to manufacturing SiC power devices. Source: Rohm

In the $2.7 billion joint investment plan announced late last year, Rohm will invest ¥289.2 billion in its new plant in Kunitomi, Miyazaki Prefecture, to produce SiC power chips. Toshiba will invest ¥99.1 billion in its newly built 300-mm fab in Nomi, Ishikawa Prefecture, to produce silicon-based power chips.

After delisting late last year, Toshiba faces an uncertain future. However, it still possesses highly valuable assets, and its power electronics business is one of them. There has also been chatter about splitting Toshiba into three units.

Figure 2 Vehicle electrification and automation of industrial equipment have led to strong demand for power devices like MOSFETs and IGBTs at 300-mm fab in Nomi. Source: Toshiba

When you see this potential divestiture in the wake of Japan’s desire to have a power electronics company that can compete with the likes of Infineon, Rohm taking over Toshiba’s power semiconductors business seems like a no-brainer. Among Japan’s current power chip firms, Rohm is known to have a stable power electronics business.

And the company is keen to affirm its management vision: “We focus on power and analog solutions and solve social problems by contributing to our customers’ needs for energy savings and miniaturization of their products.” Given this backdrop, Rohm taking over Toshiba Electronic Devices & Storage is probably a matter of time.

Related Content


How Wi-Fi sensing simplifies presence detection

Fri, 05/10/2024 - 12:18

The emerging technology of Wi-Fi sensing promises significant benefits for a variety of embedded and edge systems. Using only the radio signals already generated by Wi-Fi interfaces under normal operation, Wi-Fi sensing can theoretically enable an embedded device to detect the presence of humans, estimate their motion, approximate their location, and even sense gestures and subtle movements, such as breathing and heartbeats.

Smart home, entertainment, security, and safety systems can all benefit from this ability. For example, a small sensor in a car could detect the presence of back-seat passengers—soon to be a requirement in new passenger vehicles. It can even detect a child breathing under a blanket as it does not require line of sight. Or an inexpensive wireless monitor in a home could detect in a room or through walls when a person falls—a lifesaver in home-care situations.

Figure 1 Wi-Fi Sensing can be performed on any Wi-Fi-enabled device with the right balance of power consumption and processing performance. Source: Synaptics

Until recently, such sensing could only be done with a passive RF receiver relying on the processing capability of a nearby Wi-Fi access point. Now, it can be done on every Wi-Fi-enabled end device. This article explores how designers can get from theory to shipped product.

How it works

The elegance of Wi-Fi sensing is that it uses what’s already there: the RF signals that Wi-Fi devices use to communicate. In principle, a Wi-Fi receiving device could detect changes in those RF signals as it receives them and, from the changes, infer the presence, motion, and location of a human in the area around the receiver.

Early attempts to do this used the Wi-Fi interface’s receive signal strength indicator (RSSI), a number produced by the interface periodically to indicate the average received signal strength. In much the same way that a passive infrared motion detector interprets a change in IR intensity as motion near its sensor, these Wi-Fi sensors interpret a change in RSSI value as the appearance or motion of an object near the receiver.

For instance, a person could block the signal by stepping between the receiver and the access point’s transmitter, or a passing person could alter the multipath mix arriving at the receiver.

RSSI is unstable in the real world, even when no one is nearby. It can be challenging to separate the influences of noise, transmitter gain changes, and many other sources from the actual appearance of a person.

This has led researchers to move to a richer, more frequently updated, and more stable data stream. With the advent of multiple antennas and many subcarrier frequencies, transmitters and receivers need far more information than just RSSI to optimize antenna use and subcarrier allocation. Their solution is to take advantage of channel state information (CSI) in the 802.11n standard. This should be available from any compliant receiver, though the accuracy may vary.

Figure 2 Wi-Fi system-on-chips (SoCs) can analyze CSI for subtle changes in the channel through which the signal is propagating to detect presence, motion, and gestures. Source: Synaptics

CSI is reported by the receiver every time a subcarrier is activated. It is essentially a matrix of complex numbers, each element conveying magnitude and phase for one combination of transmit and receive antennas. A three-transmit-antenna, two-receive-antenna channel would be a 3 x 2 array. The receiver generates a new matrix for each subcarrier activation. So, in total, the receiver maintains a matrix for each active subcarrier.

The CSI captures far more information than the RSSI, including attenuation and phase shift for each path and frequency. In principle, all this data contains a wealth of information about the environment around the transmitter and receiver. In practice, technical papers have reported accurate inference of human test subjects’ presence, location, motion, and gestures by analyzing changes in the CSI.
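
As a rough illustration of the data layout (not any vendor’s actual API), the CSI can be pictured as one small complex matrix per active subcarrier; the 3 x 2 antenna example and the subcarrier count below are assumptions chosen for illustration.

```python
import numpy as np

n_tx, n_rx, n_subcarriers = 3, 2, 56   # hypothetical 802.11n-style configuration

# One complex (magnitude + phase) entry per TX/RX antenna pair, per subcarrier.
# Random values stand in here for measured channel estimates.
rng = np.random.default_rng(0)
csi = (rng.standard_normal((n_subcarriers, n_tx, n_rx))
       + 1j * rng.standard_normal((n_subcarriers, n_tx, n_rx)))

magnitude = np.abs(csi)     # per-path attenuation
phase = np.angle(csi)       # per-path phase shift
print(csi.shape, magnitude.shape, phase.shape)   # (56, 3, 2) each
```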

Capturing presence data

Any compliant Wi-Fi interface should produce the CSI data stream. That part is easy. However, it is the job of the sensor system to process the data and make inferences from it. This process is generally divided into three stages, following the conventions developed for video image processing: data preparation, feature extraction, and classification.

The first challenge is data preparation. While the CSI is far more stable than the RSSI, it’s still noisy, mainly due to interference from nearby transmitters. The trick is to remove the noise without smoothing away the sometimes-subtle changes in magnitude or phase that the next stage will depend upon to extract features. But how to do this depends on the extraction algorithms and, ultimately, the classification algorithms and what is being sensed.

Some preparation algorithms may simply lump the CSI data into time bins, toss out outliers, and look for changes in amplitude. Others may attempt to extract and amplify elusive changes in phase relationships across the subcarriers. So, data preparation can be anything from a simple time-series filter to a demanding statistical algorithm.
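
A minimal sketch of such a data-preparation step, assuming time binning, simple outlier rejection, and an amplitude-change metric (one of many possible filters, not the approach of any particular product), might look like this:

```python
import numpy as np

def prepare_csi_amplitude(csi_frames, bin_size=10, z_thresh=3.0):
    """csi_frames: complex array of shape (time, subcarriers, tx, rx)."""
    amp = np.abs(csi_frames).reshape(len(csi_frames), -1)   # flatten antenna/subcarrier dims

    # Reject outlier frames (e.g., interference bursts) with a simple z-score test
    frame_mean = amp.mean(axis=1)
    z = (frame_mean - frame_mean.mean()) / (frame_mean.std() + 1e-9)
    amp = amp[np.abs(z) < z_thresh]

    # Lump frames into time bins and average within each bin
    n_bins = len(amp) // bin_size
    binned = amp[:n_bins * bin_size].reshape(n_bins, bin_size, -1).mean(axis=1)

    # Simple motion indicator: bin-to-bin change in mean amplitude
    return np.diff(binned.mean(axis=1))
```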

Analysis and inference

The next stage in the pipeline will analyze the cleansed data streams to extract features. This process is analogous—up to a point—to feature extraction in vision processing. In practice, it is quite different. Vision processing may, for instance, use simple numerical calculations on pixels to identify edges and surfaces in an image and then infer that a surface surrounded by edges is an object.

But Wi-Fi sensors are not working with images. They are getting streams of magnitude and phase data that are not related in any obvious way to the shapes of objects in the room. Wi-Fi sensors must extract features that are not images of objects but are instead anomalies in the data streams that are both persistent and correlated enough to indicate a significant change in the environment.

As a result, the extraction algorithms will not simply manipulate pixels but will instead perform complex statistical analysis. The output of the extraction stage will be a simplified representation of the CSI data, showing only anomalies that the algorithms determine to be significant features of the data.

The final stage in the pipeline is classification. This is where the Wi-Fi sensor attempts to interpret the anomaly reported by the extraction stage. Interpretation may be a simple binary decision: is there a person in the room now? Is the person standing or sitting? Are they falling?

Or it may be a more quantitative evaluation: where is the person? What is their velocity vector? Or it may be an almost qualitative judgment: is the person making a recognizable gesture? Are they breathing?

The nature of the decision will determine the classification algorithm. Usually, there is no obvious, predictable connection between a person standing in the room and the resulting shift in CSI data. So, developers must collect actual CSI data from test cases and then construct statistical models or reference templates, often called fingerprints. The classifier can then use these models or templates to best match the feature from the extractor and the known situations.

Another approach is machine learning (ML). Developers can feed extracted features and correct classifications of those features into a support vector machine or a deep-learning network, training the model to classify the abstract patterns of features correctly. Recent papers have suggested that this may be the most powerful way forward for classification, with reported accuracies from 90 to 100% on some classification problems.
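
As a sketch of that ML route, here is a minimal example using scikit-learn’s SVM as the classifier; the feature vectors and labels are random placeholders standing in for extracted CSI features and ground-truth observations.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Placeholder features: in practice these come from the extraction stage,
# and labels come from ground truth (e.g., "room empty" vs. "person present").
X = rng.standard_normal((200, 16))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")  # near chance on random data, by design
```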

Wi-Fi sensing implementation

Implementing the front-end of an embedded Wi-Fi sensing device is straightforward. All that’s required is an 802.11n-compliant interface to provide accurate CSI data. The back-end is more challenging as it requires a trade-off between power consumption and capability.

For the data preparation stage, simple filtering may be within the range of a small CPU core. After all, a small matrix arrives only when a subcarrier is activated. But more sophisticated, statistical algorithms will call for a low-power DSP core. The statistical techniques for feature extraction are also likely to need the power and efficiency of the DSP.

Classification is another matter. All reported approaches are easily implemented in the cloud, but that is of little help for an isolated embedded sensor or even an edge device that must limit its upstream bandwidth to conserve energy.

Looking at the trajectory of algorithms, from fingerprint matching to hidden Markov models to support vector machines and deep-learning networks, the trend suggests that future systems will increasingly depend on low-power deep-learning inference accelerator cores. Thus, the Wi-Fi sensing system-on-chip (SoC) may well include a CPU, a DSP, and an inference accelerator.

However, as this architecture becomes more apparent, we see an irony. Wi-Fi sensing’s advantage over other sensing techniques is its elegant conceptual simplicity. But something else becomes clear as we unveil the true complexity of turning the twinkling shifts in CSI into accurate inferences.

Bringing a successful Wi-Fi sensing device to market will require a close partnership with an SoC developer with the right low-power IP, design experience, and intimate knowledge of the algorithms—present and emerging. Choosing a development partner may be one of the most important of the many decisions developers must make.

Ananda Roy is senior product line manager for wireless connectivity at Synaptics.

Related Content


Handheld analyzers gain pulse generator option

Thu, 05/09/2024 - 22:26

FieldFox handheld RF analyzers from Keysight can now generate an array of pulse types at frequencies as high as 54 GHz. Outfitted with Option 357 pulse generator software, the FieldFox B- and C-Series analyzers give field engineers access to pulse generation capabilities that support analog modulations and user-defined pulse sequences. All that is needed to upgrade an existing analyzer is a software license key and firmware upgrade.

The software option includes standard pulses, FM chirps, FM triangles, AM pulses, and user-definable pulse sequences. In addition, it can create continuous wave (CW) signals with or without AM/FM modulations, including frequency shift keying (FSK) and binary phase shift keying (BPSK). Key parameters of the generated signal are displayed in both numerical and graphical formats.

FieldFox handheld analyzers equipped with pulse generation serve many purposes, including field radar testing for air traffic control, simulating automotive radar scenarios, performing field EMI leakage checks, and assessing propagation loss of mobile networks.

FieldFox product page

Keysight Technologies 


Software platform streamlines factory automation

Thu, 05/09/2024 - 22:26

Reducing shop-floor hardware, Siemens’ Simatic Automation Workstation delivers centralized software-defined factory automation and control. The system allows manufacturers to replace a hardware programmable logic controller (PLC), conventional human-machine interface (HMI), and edge device with a single software-based workstation.

Hundreds of PLCs can be found throughout plants, each one requiring extensive programming to keep it up-to-date, secure, and aligned with other PLCs in the manufacturing environment. In contrast, the Simatic Workstation can be viewed and managed from a central point. Since programming, updates, and patches can be deployed to the entire fleet in parallel, the shop floor remains in synch.

Simatic Workstation is an on-premise operational technology (OT) platform. It offers high data throughput and low latency, essential for running various modular applications. Simatic caters to conventional automation tasks, like motion control and sequencing, as well as advanced automation operations that incorporate artificial intelligence.

The Simatic Automation Workstation is the latest addition to Siemens’ Xcelerator digital business platform. Co-creator Ford Motor Company will be the first customer to deploy and scale these workstations across its manufacturing operations.

Siemens


Silicon capacitor boasts ultra-low ESL

Thu, 05/09/2024 - 22:26

Joining Empower’s family of E-CAP silicon capacitors for high-frequency decoupling is the EC1005P, a device with an equivalent series inductance (ESL) of just 1 picohenry (pH). The EC1005P offers a capacitance of 16.6 µF, along with low impedance up to 1 GHz. A very thin profile allows the capacitor to be embedded into the substrate or interposer of any SoC, especially those used in high-performance computing (HPC) and artificial intelligence (AI) applications.

E-CAP high-density silicon capacitor technology fulfills the ‘last inch’ decoupling gap from the voltage regulators to the SoC supply pins. This approach integrates multiple discrete components into a single monolithic device with a much smaller footprint and component count than solutions based on conventional multilayer ceramic capacitors.

In addition to sub-1-pH ESL, the EC1005P provides sub-3-mΩ equivalent series resistance (ESR). The capacitor comes in a 3.643×3.036-mm, 120-pad chip-scale package. Its standard profile of 784 µm can be customized for various height requirements.

The EC1005P E-CAP is sampling now, with volume production expected in Q4 2024. A datasheet for the EC1005P was not available at the time of this announcement.

Empower Semiconductor 


Crossbar switch eases in-vehicle USB-C connectivity

Thu, 05/09/2024 - 22:25

A 10-Gbps automotive-grade crossbar switch from Diodes routes USB 3.2 and DisplayPort 2.1 signals through a USB Type-C connector. The PI3USB31532Q crossbar switch maintains high signal integrity when used in automotive smart cockpit and rear seat entertainment applications.

For design flexibility, the PI3USB31532Q supports three USB-C compliant configuration modes switching at 10 Gbps. It can connect a single lane of USB 3.2 Gen 2; one lane of USB 3.2 Gen 2 and two channels of DisplayPort 2.1 UHBR10; or four channels of DisplayPort 2.1 UHBR10 to the USB-C connector. When configured for DisplayPort, the switch also connects the complementary AUX channels to the USB-C sideband pins. Switch configuration is controlled via an I2C interface or on-chip logic using four external pins.

The crossbar switch provides a -3-dB bandwidth of 8.3 GHz, with insertion loss, return loss and crosstalk of -1.7 dB, -15 dB, and -38 dB, respectively, at 10 Gbps. Qualified to AEC-Q100 Grade 2 requirements, the part operates over a temperature range of -40°C to +105°C and requires a 3.3-V supply.

Housed in a 3×6-mm, 40-pin QFN package, the PI3USB31532Q crossbar switch costs $1.10 each in lots of 3500 units.

PI3USB31532Q product page

Diodes


MCU manages 12-V automotive batteries

Thu, 05/09/2024 - 22:25

Infineon’s PSoC 4 HVPA 144k MCU serves as a programmable embedded system for monitoring and managing automotive 12-V lead-acid batteries. The ISO 26262-compliant part integrates precision analog and high-voltage subsystems on a single chip, enabling safe, intelligent battery sensing and management.

Powered by an Arm Cortex-M0+ core operating at up to 48 MHz, the 32-bit microcontroller supplies up to 128 kbytes of code flash, 8 kbytes of data flash, and 8 kbytes of SRAM, all with ECC. Dual delta-sigma ADCs, together with four digital filtering channels, determine the battery’s state-of-charge and state-of-health by measuring voltage, current, and temperature with an accuracy of up to ±0.1%.

An integrated 12-V LDO regulator, which tolerates up to 42 V, allows the device to be supplied directly from the 12-V lead-acid battery without requiring an external power supply. The high-voltage subsystem also includes a LIN transceiver (physical interface or PHY).

The PSoC 4 HVPA 144k is available now in 6×6-mm, 32-pin QFN packages. Infineon also offers an evaluation board and automotive-grade software.

PSoC 4 HVPA 144k product page

Infineon Technologies 


2×AA/USB: OK!

Thu, 05/09/2024 - 16:20

While an internal, rechargeable lithium battery is usually the best solution for portable kit nowadays, there are still times when using replaceable cells with an external power option, probably from a USB source, is more appropriate. This DI shows ways of optimizing this.


The usual way of combining power sources is to parallel them, with a series diode for each. That is fine if the voltages match and some loss of effective battery capacity, owing to a diode’s voltage drop, can be tolerated. Let’s assume the kit in question is something small and hand-held or pocketable, probably using a microcontroller like a PIC, with a battery comprising two AA cells, the option of an external 5 V supply, and a step-up converter producing a 3.3 V internal power rail. Simple steering diodes used here would give a voltage mismatch for the external power while wasting 10 or 20% of the battery’s capacity.

Figure 1 shows a much better way of implementing things. The external power is pre-regulated to avoid the mismatch, while active switching minimizes battery losses. I have used this scheme in both one-offs and production units, and always to good effect.

Figure 1 Pre-regulation of an external supply is combined with an almost lossless switch in series with the battery, which maximizes its life.

The battery feed is controlled by Q1, which is a reversed p-MOSFET. U1 drops any incoming voltage down to 3.3 V. Without external power, Q1’s gate is more negative than its source, so it is firmly on, and (almost) the full battery voltage appears across C3 to feed the boost converter. Q2’s emitter–base diode stops any current flowing back into U1. Apart from the internal drain–source or body diode, MOSFETs are almost symmetrical in their main characteristics, which allows this reversed operation.

When external power is present, Q1.G will be biased to 3.3 V, switching it off and effectively disconnecting the battery. Q2 is now driven into saturation connecting U1’s 3.3 V output, less Q2’s saturated forward voltage of 100–200 mV, to the boost converter. (The 2N2222, as shown, has a lower VSAT than many other types.) Note that Q2’s base current isn’t wasted, but just adds to the boost converter’s power feed. Using a diode to isolate U1 would incur a greater voltage drop, which could cause problems: new, top-quality AA manganese alkaline (MnAlk) cells can have an off-load voltage well over 1.6 V, and if the voltage across C3 were much less than 3 V, they could discharge through the MOSFET’s inherent drain–source or body diode. This arrangement avoids any such problems.

Reversed MOSFETs have been used to give battery-reversal protection for many years, and of course such protection is inherent in these circuits. The body diode also provides a secondary path for current from the battery if Q1 is not fully on, as in the few microseconds after external power is disconnected.

Figure 1 shows U1 as an LM1117-3.3 or similar type, but many more modern regulators allow a better solution because their outputs appear as open circuits when they are unpowered, rather than allowing reverse current to flow from their outputs to ground. Figure 2 shows this implementation.

Figure 2 Using more recent designs of regulator means that Q2 is no longer necessary.

Now the regulator’s output can be connected directly to C3 and the boost converter. Some devices also have an internal switch which completely isolates the output, and D1 can then be omitted. Regulators like these could in principle feed the final 3.3 V rail directly, but this can actually complicate matters because the boost converter would then also need to be reverse-proof and might itself need to be turned off. R2 is now used to bias Q1 off when external power is present.

If we assume that the kit uses a microcontroller, we can easily monitor the PSU’s operation. R5—included purely for safety’s sake—lets the microcontroller check for the presence of external power, while R3 and R4 allow it to measure the battery voltage accurately. Their values, calculated on the assumption that we use an 8-bit A–D conversion with a 3.3 V reference, give a resolution of 10 mV/count, or 5 mV per cell. Placing them directly across the battery loads it with ~5–6 µA, which would drain typical cells in about 50 years; we can live with that. The chosen resistor ratio is accurate to within about 1%.
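
If the kit’s firmware is written in C, the monitoring described above reduces to a few lines. The sketch below assumes only the article’s 10 mV/count scaling; read_adc(), gpio_read(), the channel and pin numbers, and the 2.0-V low-battery threshold are hypothetical placeholders for whatever your MCU and its libraries actually provide.

#include <stdint.h>
#include <stdbool.h>

#define ADC_CH_BATT   0u      /* hypothetical ADC channel for the R3/R4 tap */
#define EXT_PWR_PIN   3u      /* hypothetical GPIO input sensing R5         */
#define MV_PER_COUNT  10u     /* per the article: 10 mV per 8-bit count     */
#define LOW_BATT_MV   2000u   /* ~1.0 V per cell; pick a threshold to suit  */

extern uint8_t read_adc(uint8_t channel);  /* 8-bit conversion, 3.3 V reference     */
extern bool    gpio_read(uint8_t pin);     /* high when the external 5 V is present */

static uint16_t battery_mv(void)
{
    return (uint16_t)(read_adc(ADC_CH_BATT) * MV_PER_COUNT);
}

static bool on_external_power(void)
{
    return gpio_read(EXT_PWR_PIN);
}

static bool battery_low(void)
{
    /* Only meaningful when running from the cells themselves */
    return !on_external_power() && (battery_mv() < LOW_BATT_MV);
}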

Many components have no values assigned because they will depend on your choice of regulator and boost converter. With its LM1117-3.3, the circuit of Figure 1 can handle inputs of up to 15 V, though a TO-220 version then gets rather warm with load currents approaching 80 mA (~1 W, its practical power limit without heatsinking).

I have also used Figure 2 with Microchip’s MCP1824T-3302 feeding a Maxim MAX1674 step-up converter, with an IRLML6402 for Q1, which must have a low on-resistance. Many other, and more recent, devices will be suitable, and you probably have your own favorites.

While the external power input is shown as being naked, you may want to clothe it with some filtering and protection such as a poly-fuse and a suitable Zener or TVS. Similarly, no connector is specified, but USBs and barrel jacks both have their places.

While this is shown for nominal 3 V/5 V supplies, it can be used at higher voltages, subject to the gate–source voltage limits set by the MOSFET’s input protection diodes; their breakdown voltages can range from 6 V to 20 V, so check your device’s data sheet.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

 Related Content


The post 2×AA/USB: OK! appeared first on EDN.

Optimize battery selection and operating life of wireless IoT devices

Thu, 05/09/2024 - 14:59

Batteries are essential for powering many Internet of Things (IoT) devices, particularly wireless sensors, which are now deployed in the billions. But batteries are often difficult to access and expensive to change because it’s a manual process. Anything that can be done to maximize the life of batteries and minimize or eliminate the need to change them during their operating life is a worthwhile endeavor and a significant step toward sustainability and efficiency.

Taking the example of a wireless sensor, this is a five-step process:

  1. Select the components for your prototype device: sensor, MCU, and associated electronics.
  2. Use a smart power supply with measurement capabilities to establish a detailed energy profile for your device under simulated operating conditions.
  3. Evaluate your battery options based on the energy profile of your device.
  4. Optimize the device parameters (hardware, firmware, software, and wireless protocol).
  5. Make your final selection of the battery type and capacity with the best match to your device’s requirements.

Selecting device type and wireless protocol

A microcontroller (MCU) is the most common processing resource at the heart of embedded devices. You’ll often choose which one to use for your next wireless sensor based on experience, the ecosystem with which you’re most familiar, or corporate dictate. But when you have a choice and conserving energy is a key concern for your application, there may be a shortcut.

Rather than plow through thousands of datasheets, you could check out EEMBC, an independent benchmarking organization. The EEMBC website not only enables a quick comparison of your options but also offers access to a time-saving analysis tool that lists the sensitivity of MCU platforms to various design parameters.

Most IoT sensors spend a lot of time in sleep mode and send only short bursts of data. So, it’s important to understand how your short-listed MCUs manage sleep, idle and run modes, and how efficiently they do that.

Next, you need to decide on the wireless protocol(s) you’ll be using. Range, data rate, duty cycle, and compatibility within the application’s operating environment will all be important considerations.

Figure 1 Data rates and range are the fundamental parameters considered when choosing a wireless protocol. Source: BehrTech

Once you’ve established the basics, digging into the energy efficiency of each protocol gets more complex and it’s a moving target. There are frequent new developments and enhancements to established wireless standards.

At data rates of up to 10 Kbps, Bluetooth LE/Mesh, LoRa, or Zigbee are usually the lowest energy protocols of choice for distances up to 10 meters. If you need to cover a 1-km range, NB-IoT may be on your list, but at an order of magnitude higher energy usage.

In fact, MCU hardware, firmware and software, the wireless protocol, and the physical environment in which an IoT device operates are all variables that need to be optimized to conserve energy. The only effective way to do that is to model these conditions during development and watch the effect on the fly as you change any of these parameters.

Establish an initial energy profile of device under test (DUT)

The starting point is to use a smart, programmable power supply and measurement unit to profile and record the energy usage of your device. This is necessary because simple peak and average power measurements with multimeters can only provide limited information. The Otii Arc Pro from Qoitech was used here to illustrate the process.

Consider a wireless MCU. In run mode, it may be putting out a +6 dBm wireless signal and consuming 10 mA or more. In deep sleep mode, the current consumption might fall below 0.2 µA. That’s a dynamic range of roughly 50,000:1, and changes happen almost instantaneously, certainly within microseconds. Conventional multimeters can’t capture changes like these, so they can’t help you understand the precise energy profile of your device. Without that, your choice of battery is open to miscalculation.
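
As a back-of-envelope illustration of why this matters, the sketch below (in C) computes a duty-cycle-weighted average current and a first-pass battery-life figure. Every number in it is an assumed placeholder; in practice you would substitute the sleep, burst, and period values captured by your power profiler, and derate for self-discharge and temperature.

#include <stdio.h>

int main(void)
{
    const double sleep_ua     = 0.2;     /* deep-sleep current, uA (assumed)   */
    const double active_ma    = 10.0;    /* radio-on current, mA (assumed)     */
    const double burst_ms     = 50.0;    /* active time per report (assumed)   */
    const double period_s     = 60.0;    /* one report per minute (assumed)    */
    const double capacity_mah = 2400.0;  /* nominal AA alkaline cell (assumed) */

    double duty   = (burst_ms / 1000.0) / period_s;
    double avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty);

    printf("average current: %.4f mA\n", avg_ma);
    printf("ideal battery life: %.0f days\n", capacity_mah / avg_ma / 24.0);
    return 0;
}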

Your smart power supply is a digitally controlled power source offering control over parameters such as voltage, current, power, and mode of operation. Voltage control should ideally be in 1 mV steps so that you can determine the DUT’s energy consumption at different voltage levels to mimic battery discharge.

You’ll need sense pins to monitor the DUT power rails, a UART to see what happens when you make code changes, and GPIO pins for status monitoring. Standalone units are available, but it can be more flexible and economical to choose a smart power supply that uses your computer’s processing resources and display, as shown in the example below.

Figure 2 The GUI for a smart power supply can run on Windows, MacOS, or Ubuntu. Source: Qoitech

After connecting, you power and monitor the DUT simultaneously. You’re presented with a clear picture of voltages and current changes over time. Transients that you would never be able to see on a traditional meter are clearly visible and you can immediately detect unexpected anomalies.

Figure 3 A smart power profiler gives you a detailed comparison of your device’s energy consumption for different hardware and firmware versions. Source: Qoitech

From the stored data in the smart power supply, you’ll be able to make a short list of battery options.

Choosing a battery

Battery selection needs to consider capacity, energy density, voltage, discharge profile, and temperature. Datasheet comparisons are the starting point but it’s important to validate the claims of battery manufacturers by benchmarking their batteries through testing. Datasheet information is based on performance under “normal conditions” which may not apply to your application.

Depending on your smart power supply model, the DUT energy profiling described earlier may provide an initial battery life estimate based on a pre-programmed battery type and capacity. Either the same instrument or a separate piece of test equipment may then be used for a more detailed examination of battery performance in your application. Accelerated discharge measurements, when properly set up, are a time-saving alternative to the years it may take a well-designed IoT device to exhaust its battery.

These measurements must follow best practices to create an accurate profile. These include maintaining high discharge consistency to match the DUT’s peak current, and shortening the cycle time while increasing the sleep current so that the battery can recover. You should also consult with battery manufacturers to validate any assumptions you make during the process.
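
To gauge how much bench time an accelerated run will need, a simple ratio of cycle periods gives a first estimate, as in the C sketch below. All of the numbers are hypothetical, and the caveat above still applies: the cycle can only be compressed so far before the battery no longer gets enough recovery time.

#include <stdio.h>

int main(void)
{
    const double field_period_s  = 600.0;        /* device reports every 10 min (assumed)   */
    const double bench_period_s  = 10.0;         /* compressed cycle on the bench (assumed) */
    const double field_life_days = 3.0 * 365.0;  /* three-year target life (assumed)        */

    double accel = field_period_s / bench_period_s;

    printf("acceleration factor: %.0fx\n", accel);
    printf("bench time needed:   %.1f days\n", field_life_days / accel);
    return 0;
}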

You can profile the same battery chemistries from different manufacturers, or different battery chemistries, perhaps comparing lithium coin cells with AA alkaline batteries.

Figure 4 The comparison shows accelerated discharge characteristics for AA and AAA alkaline batteries from five different manufacturers. Source: Qoitech

By this stage, you have a good understanding of both the energy profile of your device and of the battery type and capacity that’s likely to result in the longest operating life in your applications. Upload your chosen battery profile to your smart power supply and set it up to emulate that battery.

Optimize and iterate

You can now go back to the DUT and optimize hardware and software for the lowest power consumption in near real-world conditions. You may have the flexibility to experiment with different wireless protocols, but even if that’s not the case, experimenting with sleep and deep-sleep modes, network routing, and even alternative data security protocols can all yield improvements, avoiding a common problem where 40 bytes of data can easily become several Kbytes.

Where the changes create a significant shift in your device’s energy profile, you may also review the choice of battery and evaluate again until you achieve the best match.

While this process may seem lengthy, it can be completed in just a few hours and may extend the operating life of a wireless IoT edge device, and hence reduce battery waste, by up to 30%.

Björn Rosqvist, co-founder and chief product officer of Qoitech, has 20+ years of experience in power electronics, automotive, and telecom with companies such as ABB, Ericsson, Flatfrog, Sony, and Volvo Cars.

 

Related Content


The post Optimize battery selection and operating life of wireless IoT devices appeared first on EDN.

Apple’s Spring 2024: In-person announcements no more?

Wed, 05/08/2024 - 17:35

By means of introduction to my coverage of Apple’s 2024-so-far notable news, I’d like to share the amusing title, along with spot-on excerpts from the body text, from a prescient piece I saw on Macworld yesterday. The title? “Get ready for another Apple meeting that could have been an email”. Now the excerpts:

Apple started running virtual press events during the pandemic when in-person gatherings made little sense and at various times were frowned upon or literally illegal. But Apple has largely stuck with that format even as health concerns lessened and its own employees were herded back into the office.

 Why is that? Because virtual events have advantages far beyond the containment of disease. Aside from avoiding the logistical headaches of getting a thousand bad-tempered journalists from around the world to the same place at the same time, a pre-recorded video presentation is much easier to run smoothly than a live performance…

 Nobody cringes harder than me when live performers get things wrong, and I absolutely get the attraction of virtual keynotes for Apple. But it does raise some awkward existential questions about why we need to bother with the elaborate charade that is a keynote presentation. What, after all, is the point of a keynote? If it’s just to get information about new products, that can be done far more efficiently via a press release that you can read at your own speed; just the facts, no sitting through skits and corporate self-congratulation.

 Is it to be marketed by the best hypemen in the business? If that’s really something you want, you might as well get it from an ad: virtual keynotes give none of that dubious excitement and tribalistic sense of inclusivity you get with a live performance. And we’ve even lost the stress-test element of seeing an executive operating the product under extreme pressure. What we’re left with is a strange hybrid: a long press release read out by a series of charisma-free executives, interspersed with advertisements.

I said something similar in my coverage of Apple’s June 2023 Worldwide Developer Conference (WWDC):

This year’s event introductory (and product introduction) presentation series was lengthy, with a runtime of more than two hours, and was also entirely pre-recorded. This has been Apple’s approach in recent years, beginning roughly coincident with the COVID lockdown and consequent transition to a virtual event (beginning in 2020; 2019 was still in-person)…even though both last- and this-years’ events returned to in-person from a keynote video viewing standpoint.

 On the one hand, I get it; as someone who (among other things) delivers events as part of his “day job”, the appeal of a tightly-scripted, glitch-free set of presentations and demonstrations can’t be understated. But live events also have notable appeal: no matter how much they’re practiced beforehand, there’s still the potential for a glitch, and therefore when everything still runs smoothly, what’s revealed and detailed is (IMHO) all the more impactful as a result.

What we’ve ended up with so far this year is a mix of press release-only and virtual-event announcements, in part (I suspect, as does Macworld) because of “building block” mass-production availability delays for the products in today’s (as I write these words on Tuesday, May 7) news.

But I’m getting ahead of myself.

The Vision Pro

Let’s rewind to early January, when Apple confirmed that its first-generation Vision Pro headset (which I’d documented in detail within last June’s WWDC coverage) would open for pre-orders on January 19, with in-store availability starting February 2.

Granted, the product’s technology underpinnings remain amazing 11 months post-initial unveil:

But I’m still not sold on the mainstream (translation: high volume) appeal of such a product, no matter how many entertainment experiences and broader optimized applications Apple tries to tempt me with (and no matter how much Apple may drop the price in the future, assuming it even can to a meaningful degree, given bill-of-materials cost and profit-expectation realities). To be clear, this isn’t an Apple-only diss; I’ve expressed the same skepticism in the past about offerings from Oculus-now-Meta and others. And at the root of my pessimism about AR/VR/XR/choose-your-favorite-acronym (or, if you’re Apple, “spatial computing”, whatever that means) headsets may indeed be enduring optimism of a different sort.

Unlike the protagonists of science fiction classics such as William Gibson’s Neuromancer and Virtual Light, Neal Stephenson’s Snow Crash, and Ernest Cline’s Ready Player One, I don’t find the real world to be sufficiently unpleasant that I’m willing to completely disengage from it for long periods of time (and no, the Vision Pro’s EyeSight virtual projected face doesn’t bridge this gap). Scan through any of the Vision Pro reviews published elsewhere and you’ll on-average encounter similar lukewarm-at-best enthusiasm from others. And I can’t help but draw an accurate-or-not analogy to Magic Leap’s 2022 consumer-to-enterprise pivot when I see subsequent Apple press releases touting medical and broader business Vision Pro opportunities.

So is the Vision Pro destined to be yet another Apple failure? Maybe…but definitely not assuredly. Granted, we might have another iPod Hi-Fi on our hands, but keep in mind that the first-generation iPhone and iPad also experienced muted adoption. Yours truly even dismissively called the latter “basically a large-screen iPod touch” on a few early occasions. So let’s wait and see how quickly the company and its application-development partners iterate both the platform’s features and cost before we start publishing headlines and crafting obituaries about its demise.

The M3-based MacBook Air

Fast-forward to March, and Apple unveiled M3 SoC-based variants of the MacBook Air (MBA), following up on the 13” M2-based MBA launched at the 2022 WWDC and the first-time-in-this-size 15” M2 MBA unveiled a year later:

Aside from the Apple Silicon application processor upgrade (first publicly discussed last October), there’s faster Wi-Fi (6E) along with an interesting twist on expanded external-display support; the M3-based models can now simultaneously drive two of ‘em, but only when the “clamshell” is closed (i.e., when the internal display is shut off). But the most interesting twist, at least for this nonvolatile-memory-background techie, is that Apple did a seeming back-step on its flash memory architecture. In the M2 generation, the 256 GByte SSD variant consisted of only a single flash memory chip (presumably single-die, to boot, bad pun intended), which bottlenecked performance due to the resultant inability for multi-access parallelism. To get peak read and (especially evident) write speeds, you needed to upgrade to a 512 GByte or larger SSD.

The M3 generation seemingly doesn’t suffer from the same compromise. A post-launch teardown revealed that (at least for that particular device…since Apple multi-sources its flash memory, one data point shouldn’t necessarily be extrapolated to an all-encompassing conclusion) the 256 GByte SSD subsystem comprised two 128 GByte flash memory chips, with consequent restoration of full performance potential. I’m particularly intrigued by this design decision considering that two 128 GByte flash memories conceivably cost Apple more than one 256 GByte alternative (likely the root cause of the earlier M1-to-M2 move). That said, I also don’t underestimate the formidable negotiation “muscle” of Apple’s procurement department…

Earnings

Last week, we got Apple’s second-fiscal-quarter earnings results. I normally don’t cover these at all, and I won’t dwell long on the topic this time, either. But they reveal Apple’s ever-increasing revenue and profit reliance on its “walled garden” services business (to the ever-increasing dismay of its “partners”, along with various worldwide government entities), given that hardware revenue dropped for all hardware categories save Macs, notably including both iPhone and iPad and in spite of the already-discussed Vision Pro launch. That said, the following corporate positioning seemed to be market-calming:

In the March quarter a year ago, we were able to replenish iPhone channel inventory and fulfill significant pent-up demand from the December quarter COVID-related supply disruptions on the iPhone 14 Pro and 14 Pro Max. We estimate this one-time impact added close to $5 billion to the March quarter revenue last year. If we removed this from last year’s results, our March quarter total company revenue this year would have grown.

The iPad Air

And today we got new iPads and accessories. The iPad Air first:

Reminiscent of the aforementioned MacBook Air family earlier this year, they undergo a SoC migration, this time from the M1 to the M2. They also get a relocated front camera, friendlier (as with 2022’s 10th generation conventional iPad) for landscape-orientation usage. And to the “they” in the previous two sentences, as well as again reminiscent of the aforementioned MacBook Air expansion to both 13” and 15” form factors, the iPad Air now comes in both 11” and 13” versions, the latter historically only offered with the iPad Pro.

Speaking of which

The M4 SoC

Like their iPad Air siblings, the newest generation of iPad Pros relocate the front camera to a more landscape orientation-friendly bezel location. But that’s among the least notable enhancements this time around. On the flip side of the coin, perhaps most notable news is that they mark the first-time emergence of Apple’s M4 SoC. I’ll begin with obligatory block diagrams:

Some historical perspective is warranted here. Only six months ago, when Apple rolled out its first three (only?) M3 variants along with inclusive systems, I summarized the to-date situation:

Let’s go back to the M1. Recall that it ended up coming in four different proliferations:

  • The entry-level M1
  • The M1 Pro, with increased CPU and GPU core counts
  • The M1 Max, which kept the CPU core constellation the same but doubled up the graphics subsystem, and
  • The M1 Ultra, a two-die “chiplet” merging together two M1 Max chips with requisite doubling of various core counts, the maximum amount of system memory, and the like

But here’s the thing: it took a considerable amount of time—1.5 years—for Apple to roll out the entire M1 family from its A14 Bionic development starting point:

  • A14 Bionic (the M1 foundation): September 15, 2020
  • M1: November 10, 2020
  • M1 Pro and Max: October 18, 2021
  • M1 Ultra: March 8, 2022

 Now let’s look at the M2 family, starting with its A15 Bionic SoC development foundation:

 Nearly two years’ total latency this time: nine months alone from the A15 to the M2.

I don’t yet know for sure, but for a variety of reasons (process lithography foundation, core mix and characteristics, etc.) I strongly suspect that the M3 chips are not based on the A16 SoC, which was released on September 7, 2022. Instead, I’m pretty confident in prognosticating that Apple went straight to the A17 Pro, unveiled just last month (as I write these words), on September 12 of this year, as their development foundation.

 Now look at the so-far rollout timeline for the M3 family—I think my reason for focusing on it will then be obvious:

  • A17 Pro: September 12, 2023
  • M3: October 30, 2023
  • M3 Pro and Max: October 30, 2023
  • M3 Ultra: TBD
  • M3 Extreme (a long-rumored four-Max-die high-end proliferation, which never ended up appearing in either the M1 or M2 generations): TBD (if at all)

Granted, we only have the initial variant of the M4 SoC so far. There’s no guarantee at this point that additional family members won’t have M1-reminiscent sloth-like rollout schedules. But for today, focus only on the initial-member rollout latencies:

  • M1 to M2: more than 19 months
  • M2 to M3: a bit more than 16 months
  • M3 to M4: a bit more than 6 months

Note, too, that Apple indicates that the M4 is built on a “second-generation 3 nm process” (presumably, like its predecessors, from TSMC). Time for another six-months-back quote:

Conceptually, the M3 flavors are reminiscent of their precursors, albeit with newer generations of various cores, along with a 3 nm fabrication process foundation.

As for the M4, here’s my guess: from a CPU core standpoint, especially given the rapid generational development time, the performance and efficiency cores are likely essentially the same as those in the M3, albeit with some minor microarchitecture tweaks to add-and-enhance deep learning-amenable instructions and the like, therefore this press release excerpt:

Both types of cores also feature enhanced, next-generation ML accelerators.

The fact that there are six efficiency cores this time, versus four in the M3, is likely due in no small part to the second-generation 3 nm lithography’s improved transistor packing capabilities along with more optimized die layout efficiencies (any potential remaining M3-to-M4 die size increase might also be cost-counterbalanced by TSMC’s improved 3 nm yields versus last year).

What about the NPU, which Apple brands as the “Neural Engine”? Well, at first glance it’s a significant raw-performance improvement over the M3’s: 38 TOPS (trillion operations per second) versus 18 TOPS. But here comes another six-months-back quote about the M3:

The M3’s 16-core neural engine (i.e., deep learning inference processing) subsystem is faster than it was in the previous generation. All well and good. But during the presentation, Apple claimed that it was capable of 18 TOPS peak performance. Up to now I’d been assuming, as you know from the reading you’ve already done here, that the M3 was a relatively straight-line derivation of the A17 Pro SoC architecture. But Apple claimed back in September that the A17 Pro’s neural engine ran at 35 TOPS. Waaa?

 I see one (or multiple-in-combination) of (at least) three possibilities to explain this discrepancy:

  • The M3’s neural engine is an older or more generally simpler design than the one in the A17 Pro
  • The M3’s neural engine is under-clocked compared to the one in the A17 Pro
  • The M3’s neural engine’s performance was measured using a different data set (INT16 vs INT8, for example, or FLOAT vs INT) than what was used to benchmark the A17 Pro

My bet remains that the first possibility of the three listed was the dominant if not sole reason for the M3 NPU’s performance downgrade versus that in the A17 Pro. And I’ll also bet that the M4 NPU is essentially the same as the one in the A17 Pro, perhaps again with some minor architecture tweaks (or maybe just a slight clock boost!). So then is the M4 just a tweaked A17 Pro built on a tweaked 3 nm process? Not exactly. Although the GPU architecture also seems to be akin to, if not identical to, the one in the A17 Pro (six-core implementation) and M3 (10-core matching count), the display controller has more tangibly evolved this time, likely in no small part for the display enhancements which I’ll touch on next. Here’s the summary graphic:

More on the iPad Pro

Turning attention to the M4-based iPads themselves, the most significant thing here is that they’re M4-based iPads. This marks the first time that a new Apple Silicon generation has shown up in something other than an Apple computer (notably skipping the M3-based iPad Pro iteration in the process, as well), and I don’t think it’s just a random coincidence. Apple’s clearly, to me, putting a firm stake in the ground as to the corporate importance of its comparatively proprietary (versus the burgeoning array of Arm-based Windows computers) tablet product line, both in an absolute sense and versus computers (Apple’s own and others). A famous Steve Jobs quote comes to my mind at this point:

If you don’t cannibalize yourself someone else will.

The other notable iPad Pro enhancement this time around is the belated but still significant display migration to OLED technology, which I forecasted last August. Unsurprisingly, thanks to the resultant elimination of a dedicated backlight (an OLED attribute I noted way back in 2010 and revisited in 2019) the tablets are now significantly thinner as a result, in spite of the fact that they’re constructed in a fairly unique dual-layer brightness-boosting “sandwich” (harking back to my earlier display controller enhancements comments; note that a separate simultaneous external tethered display is still also supported). And reflective of the tablets’ high-end classification, Apple has rolled out corresponding “Pro” versions of its Magic Keyboard (adding a dedicated function-key row, along with a haptic feedback-enhanced larger trackpad):

And Pencil, adding “squeeze” support, haptic feedback of its own, and other enhancements:

Other notable inter- and intra-generational tweaks:

  • No more mmWave 5G support.
  • No more ultra-wide rear camera, either.
  • Physical SIM slots? Gone, too.
  • Ten-core CPU M4 SoCs are unique to the 1 TByte and 2 TByte iPad Pro variants; lower-capacity mass storage models get only 9 CPU cores (one less performance core, to be precise, although corresponding GPU core counts are interestingly per-product-variant unchanged this time). They’re also allocated only half the RAM of their bigger-SSD brethren: 8 GBytes vs 16 GBytes.
  • 1 and 2 TByte iPads are also the only ones offered a nano-texture glass option.

Given that Apple did no iPad family updates at all last year, this is an encouraging start to 2024. That said, the base 10th-generation iPad is still the same as when originally unveiled in October 2022, although it did get a price shave today (and its 9th-generation precursor is no longer in production, either). And the 6th-generation iPad mini introduced in September 2021 is still the latest-and-greatest, too. I’m admittedly more than a bit surprised and pleased that my unit purchased gently used off eBay last summer is still state-of-the-art!

iPad software holdbacks

And as for Apple’s ongoing push to make the iPad, and the iPad Pro specifically, a credible alternative to a full-blown computer? It’s a topic I first broached at length back in September 2018, and to at least some degree the situation hasn’t tangibly changed since then. Tablet hardware isn’t fundamentally what’s holding the concept back from becoming a meaningful reality, but then again, I’d argue that it never was the dominant shortcoming. It was, and largely remains, software; both the operating system and the applications that run on it. And I admittedly felt validated in my opinion here when I perused The Verge’s post-launch event livestream archive and saw it echoed there, too.

Sure, Apple just added some nice enhancements to its high-end multimedia-creation and editing tablet apps (along with their MacOS versions, I might add) but how many folks are really interested in editing multiple ProRes streams without proxies on a computer nowadays, let alone on an iPad? What about tangible improvements for the masses? Sure, you can use a mouse with an iPad now, but multitasking attempts still, in a word, suck. And iPadOS still doesn’t even support the basics, such as multi-user support. Then again, there’s always this year’s WWDC, taking place mid-next month, which I will of course once again be covering for EDN and y’all. Hope springs eternal, I guess. Until then, let me know your thoughts in the comments.

p.s…I realized just before pressing “send to Aalyia” that I hadn’t closed the loop on my earlier “building block mass-production availability delays” tease. My suspicion is that originally the new iPads were supposed to be unveiled alongside the new MacBook Airs back in March, in full virtual-event form. But in the spirit of “where there’s smoke, there’s fire”, I’m also guessing that the long-rumored OLED display volume-production delays (and/or second-generation 3 nm process volume-production delays) pushed the iPads to today.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Apple’s Spring 2024: In-person announcements no more? appeared first on EDN.

TSMC crunch heralds good days for advanced packaging

Wed, 05/08/2024 - 14:09

TSMC’s advanced packaging capacity is fully booked until 2025 due to hyper demand for large, powerful chips from cloud service giants like Amazon AWS, Microsoft, Google, and Meta. Nvidia and AMD are known to have secured TSMC’s chip-on-wafer-on-substrate (CoWoS) and system-on-integrated-chips (SoIC) capacity for advanced packaging.

Nvidia’s H100 chips—built on TSMC’s 4-nm process—use CoWoS packaging. On the other hand, AMD’s MI300 series accelerators, manufactured on TSMC’s 5-nm and 6-nm nodes, employ SoIC technology for the CPU and GPU combo before using CoWoS for high-bandwidth memory (HBM) integration.

Figure 1 CoWoS is a wafer-level system integration platform that offers a wide range of interposer sizes, HBM cubes, and package sizes. Source: TSMC

CoWoS is an advanced packaging technology that offers the advantage of larger package size and more I/O connections. It stacks chips and packages them onto a substrate to facilitate space, power consumption, and cost benefits.

SoIC, another advanced packaging technology created by TSMC, integrates active and passive chips into a new system-on-chip (SoC) architecture that is electrically identical to native SoC. It’s a 3D heterogeneous integration technology manufactured in front-end of line with known-good-die and offers advantages such as high bandwidth density and power efficiency.

TSMC is ramping up its advanced packaging capacity. It aims to triple the production of CoWoS-based wafers, producing 45,000 to 50,000 CoWoS-based units per month by the end of 2024. Likewise, it plans to double the capacity of SoIC-based wafers by the end of this year, manufacturing between 5,000 and 6,000 units a month. By 2025, TSMC wants to hit a monthly capacity of 10,000 SoIC wafers.

Figure 2 SoIC is fully compatible with advanced packaging technologies like CoWoS and InFO. Source: TSMC

Morgan Stanley analyst Charlie Chan has raised an interesting and viable question: How do companies like TSMC judge advanced packaging demand and allocate capacity accordingly? What’s the benchmark that TSMC uses for its advanced packaging customers?

Jeff Su, director of investor relations at TSMC, while answering Chan, acknowledged that the demand for advanced packaging is very strong and the capacity is very tight. He added that TSMC has more than doubled its advanced packaging capacity in 2024. Moreover, the mega-fab has leveraged its special relationships with OSATs to fulfill customer needs.

TSMC works closely with OSATs, including its Taiwan neighbor and the world’s largest IC packaging and testing company, ASE. TSMC chief C. C. Wei also mentioned during an earnings call that Amkor plans to build an advanced packaging and testing plant next to TSMC’s fab in Arizona. Then there is news circulating in trade media about TSMC planning to build an advanced packaging plant in Japan.

Advanced packaging is now an intrinsic part of the AI-driven computing revolution, and the rise of chiplets will only bolster its importance in the semiconductor ecosystem. TSMC’s frantic capacity upgrades and tie-ups with OSATs point to good days for advanced packaging technology.

TSMC’s archrivals Samsung and Intel Foundry will undoubtedly be watching this supply-and-demand saga for advanced packaging closely while recalibrating their respective strategies. We’ll continue covering this exciting aspect of the semiconductor makeover in the coming days.

Related Content


The post TSMC crunch heralds good days for advanced packaging appeared first on EDN.

Double and invert 5 V to generate ±10 V using two generic chips and two bootstraps

Tue, 05/07/2024 - 17:15

Integration of analog circuitry with digital logic often requires the addition of an extra supply rail or two. The excellent PSRR of precision op-amps (typically >>100 dB) makes them unfussy about power rail variations. This simplifies power supply circuitry and eases the task of designing it to be uncomplicated and inexpensive.

Here’s a variation on the popular flying-capacitor charge-pump voltage converter motif that takes advantage of op-amp tolerance for less than perfect supply regulation. It first doubles and then inverts 5 V to generate nominally symmetrical positive and negative 10-volt rails which can each handily supply several milliamps. The complete converter consists of two inexpensive generic 20 volt-capable, metal-gate CMOS triple SPDT CD4053Bs, plus just eight passive components. Figure 1 shows the circuit.

Figure 1 A 25-kHz multivibrator (U2b) clocks flying-capacitor switches that first double 5 V to +10 V (paralleled U1a,c and U2a,c) and then invert it to -10 V (U1b and U2b).

Wow the engineering world with your unique design: Design Ideas Submission Guide

 Paralleled switches U1c and U2c, running at Fpump = 25 kHz, alternate the top end of “flying” capacitor C2 between ground and +5 V, while U1a and U2a synchronously alternate its bottom end between +5 V and +10 V, creating a voltage-doubling capacitive charge pump. The connection of the resulting 10-V rail on U1,2 pin 13 to U1,2 pin 16 implements the first “bootstrap” mentioned above, whereby the switches supply 10 V to themselves. D1 gets things rolling on power up by initially providing ~+5 V until the charge pump takes over, whereupon D1 is reverse biased and disconnects.

Doubling up on the U1,2a and U1,2c charge pump switches serves to halve the effective impedance of the +10 V output to ~180 Ω. This is important because the +10 V output powers not only the external load, but also the internal U1,2b voltage inverter (more on this later). Plus, these relatively high ON-resistance metal-gate CMOS switches need all the help they can get. The result is a fairly stiff +10 V output that droops with loading current 180 mV/mA according to this expression:

V+ = 10 V – 180(I+ + I-)
Where:
I+ = +10 V output load current
I – = -10 V output load current

The 25 kHz pump clock is provided by a “merged” oscillator consisting of U2b driven by positive feedback from U2c through C1 and negative feedback through R1, generating:

Fpump = 1 / (2 * ln(2) * R1 * C1)

Pump frequency will vary somewhat with component tolerance and loading of the 10 V outputs, but since the clock frequency isn’t critical, any effect on pump performance will be insignificant.
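
As a quick sanity check of that expression, the short C sketch below plugs in hypothetical R1 and C1 values (the article leaves component values to the builder) and confirms the result lands near 25 kHz.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed values: any pair whose product is roughly 30 us lands near 25 kHz,
       and the exact frequency isn't critical. */
    const double r1 = 30e3;   /* 30 kohm, assumed */
    const double c1 = 1e-9;   /* 1 nF, assumed    */

    double fpump = 1.0 / (2.0 * log(2.0) * r1 * c1);   /* natural-log form of the formula */
    printf("Fpump = %.1f kHz\n", fpump / 1e3);         /* prints about 24.0 kHz */
    return 0;
}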

The resulting oscillator waveforms are sketched in Figure 2.

Figure 2 The 25-kHz multivibrator’s 10-Vpp waveshapes.

 Inversion of +10 V to produce -10 V is handled by U1,2b switching C4 between +10 V and ground on the left side and ground and -10 V on the right. The connection to pin 7 provides the second “bootstrap”. D2 clamps pin 7 near enough to ground for the switches to begin working at power-up until the charge pump takes over.

The result is a negative rail that reacts to loading according to this expression:

 V- = -10 V + (430*I- + 180*I+)
Where:
I+ = +10 V output load current
I – = -10 V output load current
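
To see what these load-regulation expressions predict in practice, the short C sketch below evaluates both rails for one illustrative loading case; the coefficients come straight from the expressions above, while the 5 mA load currents are arbitrary examples.

#include <stdio.h>

int main(void)
{
    /* Example load currents in amps; purely illustrative. */
    const double i_pos = 5e-3;   /* 5 mA drawn from the +10 V rail */
    const double i_neg = 5e-3;   /* 5 mA drawn from the -10 V rail */

    double v_pos =  10.0 - 180.0 * (i_pos + i_neg);          /* V+ expression */
    double v_neg = -10.0 + (430.0 * i_neg + 180.0 * i_pos);  /* V- expression */

    printf("V+ = %+.2f V, V- = %+.2f V\n", v_pos, v_neg);    /* about +8.20 V and -6.95 V */
    return 0;
}

With both rails loaded at 5 mA, the expressions predict roughly +8.2 V and -7 V.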

The dependence of the two output voltages on loading is graphically summarized in Figure 3.

Figure 3 Output voltages under four loading scenarios: (1) +10 V output with +10 V loaded 0 to 10 mA, -10 V unloaded; (2) +10 V output with both +10 V and -10 V loaded 0 to 10 mA equally; (3) -10 V output with -10 V loaded 0 to 10 mA, +10 V unloaded; (4) -10 V output with +10 V and -10 V loaded 0 to 10 mA equally.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Double and invert 5 V to generate ±10 V using two generic chips and two bootstraps appeared first on EDN.

James Hitchcock at Tektronix explains the recent EA acquisition

Mon, 05/06/2024 - 16:54

An interview with James Hitchcock, general manager of Keithley Instruments, a Tektronix company, shed light on the recent acquisition of Elektro-Automatik (EA), a supplier of high-power electronic test solutions.

EA’s principal application space lies in energy storage, mobility, hydrogen, and renewable energy applications, where its bidirectional, regenerative programmable DC power supplies can serve as both the power supply and the electronic load. Many tests, including battery cycling and burn-in, require dumping large amounts of power as heat into passive/resistive load banks or electronic loads. On a massive scale, handling this amount of heat is a significant undertaking, where proper HVAC and even liquid cooling may be necessary. Instead, EA power supplies take that energy and transfer it back to the grid, recycling otherwise wasted energy and eliminating any cooling costs (Figure 1).

Figure 1: The process of energy recovery for EA’s regenerative bidirectional programmable power supplies in a testing scenario connected with the unit under test (UUT). Source: EA, a Tektronix Company

The principal application space for many Tektronix instruments lies in signal integrity and precision high-frequency testing, with an offering of high-end mixed-signal oscilloscopes, signal generators, and spectrum analyzers. Keithley source measure units (SMUs) and precision measurement instruments offer solutions for semiconductor characterization and quality control. Outside of this, the MSO oscilloscopes and IsoVu probes are geared toward power electronics performance analysis. However, how does any of this mix with EA’s high-power test equipment portfolio?

Test solutions for the EV powertrain

“The primary motivation for acquiring EA and combining the solutions of Tektronix was focused around the battery emulation capabilities of EA and the applications focused on power inverters and motor drives primarily in the automotive space,” says James, “where the EA sources can test the batteries but also emulate them in the designs of the vehicles, and the Tektronix 4 and 5 series MSO scopes are well-suited for the AC signal analysis to drive the motor that is powered by these battery systems.” As shown in Figure 2, select EA power supplies can simulate a set of battery cells at a specific state of charge (SOC) in a few minutes. Typically, these tests involve hours of preparation, charging and discharging multiple batteries to different SOCs before beginning DUT validation.

Figure 2: The ability to both source and sink power enables EA’s power supplies to simulate battery behavior and accurately reproduce a battery’s voltage and current characteristics to test devices.

The Keithley data acquisition (DAQ) systems and digital multimeters (DMM) have played a role in this space for many years, monitoring the temperature and voltage control of the batteries in battery management systems (Figure 3). “So across the entire engineering workflow of designing the powertrain for an EV the Tektronix-, Keithley-, and EA-branded products work together for a solution.” 

Figure 3: Keithley DAQ systems have long been leveraged in environmental monitoring, burn-in/accelerated life testing, as well as failure analysis for automotive applications. Source: Keithley, a Tektronix company

Power inverter and fuel cell testing

“There are other opportunities in power inverters in renewables, especially converting voltage from the DC side with solar panels to AC,” says James. The testing space expands beyond this with fuel cell testing for heavier-duty electric mobility solutions such as large trucks, construction equipment, trains, and boats. Fuel cells are also increasingly used in energy security, providing a backup source of power in the event a blackout or brownout occurs. “This is an area that EA is very good at and Tektronix can get involved in designing the precision electronics needed to control this type of testing.”

A gap in the market for a unified testing solution

“Our Keithley source measurement units (SMUs) are well-suited to individual cell design,” says James. “Our sourcing capabilities with our SMUs stop at about 5 kW of power (Figure 4). We have a 300 V solution and several hundred amp pulsing solutions with our SMUs and we found engineers were moving to higher powers with the evolution of new battery chemistries, new drive trains, and motors.”

Figure 4: The Keithley 2650 series SMU is a high power instrument designed for characterizing high power electronics such as diodes, FETs, IGBTs, etc., with up to 3000 V or 2000 W of pulse current power. Source: Keithley, a Tektronix company

Tektronix intends to support this trend of moving to higher-voltage electrification systems in EVs and more energy-dense battery chemistries to reach parity with internal combustion engine (ICE) vehicles: “there was a gap in the market where the suppliers were offering the power solutions or the measurement solutions but no one was really offering the full capability to serve the engineer across that full power portfolio.”

In the near term, Tektronix intends to bring the EA products into their software umbrella, providing unified testing solutions for engineers across the power spectrum from low-power embedded IoT designs to ultra-high power energy storage, mobility, and hydrogen fuel applications.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for eight years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in EE journals and trade magazines.

Related Content


The post James Hitchcock at Tektronix explains the recent EA acquisition appeared first on EDN.

Why Synopsys wants to sell its application security testing business

Mon, 05/06/2024 - 09:43

Nearly a month after Synopsys snapped up security IP supplier Intrinsic ID, the Silicon Valley-based firm is reported to have moved closer to selling its software integrity group (SIG), which specializes in application security testing for software developers.

A Reuters report published last week claims that a private equity consortium led by Clearlake Capital and Francisco Partners is in advanced talks to acquire the SIG unit for more than $2 billion, and the deal is anticipated to be announced as early as this week. Synopsys telegraphed the intention to divest its security software business late last year.

The acquisition as well as divesture activities have a strong imprint of Sassine Ghazi’s vision for the company’s future roadmap. Source: Yahoo Finance

Synopsys CEO Sassine Ghazi told the press in March 2024 that around three dozen buyers had shown interest in the SIG unit, and the company was narrowing down the list of potential suitors to half a dozen. The Synopsys board has already approved initiating the sale process for the SIG unit.

Synopsys has significantly grown its application security testing business since acquiring software testing firm Coverity in 2014. The following year, it scooped up software security vendor Codenomicon, followed by the acquisition of open-source security vendor Black Duck Software in December 2017.

In June 2021, Synopsys snapped up application security risk management firm Code Dx, and a year later, it acquired WhiteHat Security to offer automated protection for web applications in production environments. So, while Synopsys has significantly grown its application security testing business over the years and is one of the key players in this market, why does it want to sell it now?

First, it’s a highly competitive market, and Synopsys has seen its profit margins steadily decline over the past years. Second, and more importantly, Synopsys is streamlining its focus on EDA and IP businesses, so a move away from the application software business seems logical in that context.

A few months before acquiring Intrinsic ID’s IP business for physical unclonable function (PUF) incorporated into system-on-chip (SoC) designs for security capabilities like identification, Synopsys made waves by buying Ansys, an EDA outfit hyper-focused on simulation software. This acquisition is expected to extend Synopsys’ core EDA business into several growing adjacent markets.

When Synopsys made the Ansys and Intrinsic ID acquisitions in a single quarter, there were vibes that this EDA firm was on its way to becoming an industry giant. However, the news about the SIG unit’s potential sale shows that the $79 billion company has a well-thought-out plan in which EDA and IP businesses will likely define its future roadmap.

“We believe there’s a higher return on investment in the 90% of our portfolio spread between the design automation and design IP business segments,” Ghazi told investors in November 2023. The company’s software service businesses, like application security testing, clearly fall in the remaining 10%, and buyout firms will be having a closer look at such businesses in 2024.

Related Content


The post Why Synopsys wants to sell its application security testing business appeared first on EDN.
