Microelectronics world news

I built a WS2812 flower

Reddit:Electronics - Sun, 05/12/2024 - 13:02
My first attempt at something freeform. A couple of WS2812 LEDs controlled by a small ESP32 board. The feet are connected to capacitive touch sensors to control on/off, color mode, and brightness.

submitted by /u/robo_01

Broken UHF Radio

Reddit:Electronics - Sun, 05/12/2024 - 08:21
Earpiece no longer works on my radio after I went to the ground and landed on it, causing the metal ring to come out.

Radio still works fine, just not with an earpiece. What would I have to do to fix this?

submitted by /u/LizardWizard6666

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 05/11/2024 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Microchip Preps Radiation Tolerant DC-DC Converters for Space

AAC - Sat, 05/11/2024 - 02:00
The new power converters come in nine variants, with single- and triple-output options for satellite power supplies.

New MediaTek SoC Speeds Up Generative AI Processing at the Edge

AAC - Fri, 05/10/2024 - 20:00
The new flagship smartphone SoC takes an unconventional approach to on-device generative AI and gaming with MediaTek's “All-Big-Core” architecture.

CSconnected appoints Howard Rupprecht as new managing director

Semiconductor today - Fri, 05/10/2024 - 15:53
The South Wales-based compound semiconductor cluster CSconnected Ltd says that Howard Rupprecht will take over as its managing director, with effect from 1 June. Rupprecht has semiconductor industry and business development experience from both the public and private sectors, and has been a director of CSconnected since January...

India Gears Up to Celebrate National Technology Day 2024

ELE Times - Fri, 05/10/2024 - 14:46

India celebrated its first National Technology Day on May 11, 1999, after the successful nuclear tests at Pokhran in 1998, to commemorate the many achievements of the Indian scientific and technology fraternity. The day serves as a testament to India’s relentless pursuit of excellence and innovation. It is an opportunity to celebrate and honor the collaborative efforts of scientists, engineers, entrepreneurs, and educators toward building a better, more efficient, and supportive innovation ecosystem.

It goes without saying that behind every technological breakthrough are countless individuals and organizations working tirelessly to push boundaries and achieve the seemingly impossible. Hence, National Technology Day reminds the Indian community to keep investing in research and development, further cementing a culture of science and innovation while ensuring equitable access to technology for all.

India’s Story in Building a Strategic Scientific and Technology Forum 

The journey of technology and innovation has been nothing short of exceptional for India. The nation has been a treasure trove of knowledge since the very start of civilization. With a rich scientific heritage dating back thousands of years, India has made significant early contributions, including the concept of zero, the formulation of algebra and trigonometry, and the decimal system. Aryabhatta is considered a major early mathematician and astronomer who developed explicit theories on the motion of the planets.

The Indian scientific community has made great advancements in the fields of medicine, astronomy, engineering, space, biotechnology, renewable energy, electronics, automotive, and defence. The government has put sincere effort into establishing refined, top-notch, and highly competitive educational institutions such as the Indian Institutes of Technology (IITs) and the Indian Institute of Science (IISc), among others. To bolster the R&D ecosystem in the country, institutions like the Council of Scientific and Industrial Research (CSIR) and the Indian Space Research Organisation (ISRO) have also been established.

The Indian IT industry has boomed into a global hub, with MNCs like Tata Consultancy Services, Wipro, and Infosys playing a crucial role in accelerating the nation’s economic growth. The space program has achieved significant milestones, including the launch of satellites, lunar exploration missions (Chandrayaan-1 and Chandrayaan-2), and Mars exploration (Mangalyaan).

The story doesn’t end here; India is also among the top players in biotechnology and pharmaceuticals, with rapid development in areas like vaccine development, generic drug manufacturing, and biotech research. The country is also investing heavily in renewable energy, stressing the long-term adoption of environment-friendly technology.

How far has India Come?

In the last decade, India has made major strides in science and technology on a global level. Initiatives like “Make in India”, launched by the GoI in 2014, aim to transform the nation into a global manufacturing hub by attracting foreign investment, improving the ease of doing business, and promoting skill development, thereby revitalizing the manufacturing sector and driving economic growth and job creation. “Digital India” is another initiative, launched with the vision of transforming India into a digitally enabled and empowered society. It aims to leverage digital technologies to bridge the digital divide and promote growth and development in sectors as varied as electronics, automotive, IT, transportation, and communication.

With such initiatives actively under way, India has been swept by a wave of startup culture. The EV sector is ramping up, with denser charging infrastructure being set up across the country, and technology leaders like Tesla and TATA are making their way into the Indian EV segment. The country is also boosting its semiconductor business, with the GoI investing heavily in setting up manufacturing units, and mobile phone manufacturing has jumped 21-fold to nearly Rs. 4.1 lakh crore in the last 10 years.

Industry Speaks:

Read on for what top thought leaders have to say as we observe National Technology Day this year.

Mr. Aalok Kumar, Corporate Officer & Sr. VP – Head of the Global Smart City Business at NEC Corporation and President & CEO at NEC Corporation India

“Over the past decade, India has matured into global leadership in technological innovation. This has altered how we experience how the day-to-day services are provided and above all, how citizens utilize civic services. This transformation is fundamentally underpinned by the recognition of technology’s potential to change lives and communities for the better. Today, India is on a steady path to realizing its vision for a ‘Viksit Bharat’ by 2047, and the role of technology comes into sharp focus with greater responsibility than ever before. At this juncture, the tech innovation ethos in India is evolving to “purposeful innovation” aimed at societal good, demanding creative and responsible application of AI, ML, and big data analytics to solve day-to-day problems of the common man. 

At NEC India, our purpose is to build technology that serves the people, and as intelligent communities emerge, driven by data analytics and AI, we are proud to be playing our part in shaping a future where technology redefines citizen-centricity in civic services empowering lives. India’s unique position as a rapidly urbanizing nation undergoing large-scale digital transformation presents an unparalleled opportunity as a testing ground for innovative solutions that can be adapted to various markets worldwide.”

Mr. Sivakumar Selva Ganapathy, Vice President at OpenBlue India Software Engineering & APAC Solutions, Johnson Controls

“This year, India celebrates the 25th anniversary of National Technology Day. As we reflect on this journey, it is evident that our progress over the past two decades has been nothing short of remarkable. From our early achievements to emerging as a global technological hub today, India’s strength and capability in the domain speaks for itself.

While technology has impacted every sector in India, one area where it is poised to make a defining impact is sustainability. From building management systems analyzing occupancy patterns to AI-powered Smart grids optimizing energy distribution, technology is actively transforming cities into efficient and sustainable hubs. As a corollary, the area of green technology and green buildings is increasingly becoming more relevant, and as it continues to evolve, skilling & curriculum development assumes added importance. It is our firm belief that this can be best achieved through industry-academia collaboration. 

At Johnson Controls India, we are observing a steady increase in the adoption of technologies for green buildings, and it won’t be long before buildings evolve from merely being smart, to autonomous – capable of governing and maintaining itself! This National Technology Day, as we look at the strides made, we also look forward to witnessing the future of green technologies unfold, and renew our commitment to innovating for a green tomorrow.”

The Way Forward for India

India holds tremendous potential for innovation and collaboration in varied sectors including healthcare, education, transportation, and communication, where technology has permeated and unlocked new opportunities for growth and development.

National Technology Day is not merely about gadgets and gizmos but holds a more profound implication on how technology has impacted our society for the better. Technology has the power to democratize access to information, improve healthcare systems, enhance education, foster economic development, and promote sustainable development.

As we commemorate this day in 2024, let us look at the future with optimism and resolve and reaffirm our commitment to leverage the power of technology responsibly and ethically, to create a more inclusive and sustainable world.

The post India Gears Up to Celebrate National Technology Day 2024 appeared first on ELE Times.

How Wi-Fi sensing simplifies presence detection

EDN Network - Fri, 05/10/2024 - 12:18

The emerging technology of Wi-Fi sensing promises significant benefits for a variety of embedded and edge systems. Using only the radio signals already generated by Wi-Fi interfaces under normal operation, Wi-Fi sensing can theoretically enable an embedded device to detect the presence of humans, estimate their motion, approximate their location, and even sense gestures and subtle movements, such as breathing and heartbeats.

Smart home, entertainment, security, and safety systems can all benefit from this ability. For example, a small sensor in a car could detect the presence of back-seat passengers—soon to be a requirement in new passenger vehicles. It can even detect a child breathing under a blanket as it does not require line of sight. Or an inexpensive wireless monitor in a home could detect in a room or through walls when a person falls—a lifesaver in home-care situations.

Figure 1 Wi-Fi Sensing can be performed on any Wi-Fi-enabled device with the right balance of power consumption and processing performance. Source: Synaptics

Until recently, such sensing could only be done with a passive RF receiver relying on the processing capability of a nearby Wi-Fi access point. Now, it can be done on every Wi-Fi-enabled end device. This article explores how designers can get from theory to shipped product.

How it works

The elegance of Wi-Fi sensing is that it uses what’s already there: the RF signals that Wi-Fi devices use to communicate. In principle, a Wi-Fi receiving device could detect changes in those RF signals as it receives them and, from the changes, infer the presence, motion, and location of a human in the area around the receiver.

Early attempts to do this used the Wi-Fi interface’s receive signal strength indicator (RSSI), a number produced by the interface periodically to indicate the average received signal strength. In much the same way that a passive infrared motion detector interprets a change in IR intensity as motion near its sensor, these Wi-Fi sensors interpret a change in RSSI value as the appearance or motion of an object near the receiver.

For instance, a person could block the signal by stepping between the receiver and the access point’s transmitter, or a passing person could alter the multipath mix arriving at the receiver.

In practice, however, RSSI is unstable in the real world, even when no one is nearby. It can be challenging to separate the influences of noise, transmitter gain changes, and many other sources from the actual appearance of a person.
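As a rough sketch of how such an RSSI-based detector might work, the snippet below flags samples that deviate from a moving-average baseline; the window length, threshold, and sample values are illustrative choices, not figures from the article:

```python
from collections import deque

def rssi_motion_detector(rssi_stream, window=20, threshold_db=6.0):
    """Flag samples whose RSSI deviates from a moving-average baseline
    by more than threshold_db; a crude stand-in for a PIR-style detector."""
    history = deque(maxlen=window)
    events = []
    for i, rssi in enumerate(rssi_stream):
        if len(history) == window:
            baseline = sum(history) / window
            if abs(rssi - baseline) > threshold_db:
                events.append(i)
        history.append(rssi)
    return events

# A stable -50 dBm signal with a brief -62 dBm dip (someone crossing the path):
samples = [-50.0] * 40 + [-62.0] * 5 + [-50.0] * 40
print(rssi_motion_detector(samples))  # → [40, 41, 42, 43, 44]
```

As the text goes on to note, a real RSSI stream also drifts with noise and transmitter gain changes, which is exactly why such a simple threshold is fragile.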

This has led researchers to move to a richer, more frequently updated, and more stable data stream. With the advent of multiple antennas and many subcarrier frequencies, transmitters and receivers need far more information than just RSSI to optimize antenna use and subcarrier allocation. Their solution is to take advantage of channel state information (CSI) in the 802.11n standard. This should be available from any compliant receiver, though the accuracy may vary.

Figure 2 Wi-Fi system-on-chips (SoCs) can analyze CSI for subtle changes in the channel through which the signal is propagating to detect presence, motion, and gestures. Source: Synaptics

CSI is reported by the receiver every time a subcarrier is activated. It is essentially a matrix of complex numbers, each element conveying magnitude and phase for one combination of transmit and receive antennas. A three-transmit-antenna, two-receive-antenna channel would be a 3 x 2 array. The receiver generates a new matrix for each subcarrier activation. So, in total, the receiver maintains a matrix for each active subcarrier.
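As a toy illustration of that structure, the sketch below models the receiver's CSI state as one complex 3 x 2 matrix per active subcarrier; the subcarrier indices and channel values are invented stand-ins, not real 802.11n reports:

```python
import cmath
import random

N_TX, N_RX = 3, 2               # the article's 3 x 2 antenna example
SUBCARRIERS = [-2, -1, 1, 2]    # illustrative subcarrier indices

def fake_csi_matrix(n_tx, n_rx):
    """One CSI report: a complex value (magnitude and phase) for each
    transmit/receive antenna pair, drawn at random for illustration."""
    return [[cmath.rect(random.uniform(0.1, 1.0),
                        random.uniform(-cmath.pi, cmath.pi))
             for _ in range(n_rx)]
            for _ in range(n_tx)]

# The receiver maintains a matrix for each active subcarrier:
csi = {sc: fake_csi_matrix(N_TX, N_RX) for sc in SUBCARRIERS}

h = csi[1][0][0]   # path: TX antenna 0 -> RX antenna 0, subcarrier 1
print(f"magnitude={abs(h):.3f} phase={cmath.phase(h):.3f} rad")
```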

The CSI captures far more information than the RSSI, including attenuation and phase shift for each path and frequency. In principle, all this data contains a wealth of information about the environment around the transmitter and receiver. In practice, technical papers have reported accurate inference of human test subjects’ presence, location, motion, and gestures by analyzing changes in the CSI.

Capturing presence data

Any compliant Wi-Fi interface should produce the CSI data stream. That part is easy. However, it is the job of the sensor system to process the data and make inferences from it. This process is generally divided into three stages, following the conventions developed for video image processing: data preparation, feature extraction, and classification.

The first challenge is data preparation. While the CSI is far more stable than the RSSI, it’s still noisy, mainly due to interference from nearby transmitters. The trick is to remove the noise without smoothing away the sometimes-subtle changes in magnitude or phase that the next stage will depend upon to extract features. But how to do this depends on the extraction algorithms and, ultimately, the classification algorithms and what is being sensed.

Some preparation algorithms may simply lump the CSI data into time bins, toss out outliers, and look for changes in amplitude. Others may attempt to extract and amplify elusive changes in phase relationships across the subcarriers. So, data preparation can be anything from a simple time-series filter to a demanding statistical algorithm.
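A minimal sketch of the simpler end of that spectrum, assuming a stream of per-report CSI amplitudes as input: reject outliers, then average the survivors into time bins (the bin size, rejection threshold, and data are illustrative):

```python
from statistics import mean, stdev

def prepare(amplitudes, bin_size=5, z_cut=2.0):
    """Toy data-preparation pass: drop samples more than z_cut standard
    deviations from the mean, then average the survivors into time bins."""
    mu, sigma = mean(amplitudes), stdev(amplitudes)
    kept = [a for a in amplitudes
            if sigma == 0 or abs(a - mu) <= z_cut * sigma]
    bins = [kept[i:i + bin_size] for i in range(0, len(kept), bin_size)]
    return [mean(b) for b in bins if b]

raw = [1.0, 1.1, 0.9, 1.0, 9.0, 1.1, 1.0, 0.9, 1.0, 1.1]   # 9.0 is a glitch
print(prepare(raw))  # two bin averages near 1.0, glitch removed
```

A real pipeline would more likely filter per subcarrier and preserve phase, as the text notes; this only shows the lump-bin-and-reject idea.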

Analysis and inference

The next stage in the pipeline will analyze the cleansed data streams to extract features. This process is analogous—up to a point—to feature extraction in vision processing. In practice, it is quite different. Vision processing may, for instance, use simple numerical calculations on pixels to identify edges and surfaces in an image and then infer that a surface surrounded by edges is an object.

But Wi-Fi sensors are not working with images. They are getting streams of magnitude and phase data that are not related in any obvious way to the shapes of objects in the room. Wi-Fi sensors must extract features that are not images of objects but are instead anomalies in the data streams that are both persistent and correlated enough to indicate a significant change in the environment.

As a result, the extraction algorithms will not simply manipulate pixels but will instead perform complex statistical analysis. The output of the extraction stage will be a simplified representation of the CSI data, showing only anomalies that the algorithms determine to be significant features of the data.
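As a toy version of treating persistent, correlated anomalies as features, the sketch below flags sliding windows whose variance jumps well above a quiet-period baseline; the window length, factor, and input numbers are all invented for illustration:

```python
from statistics import pvariance

def extract_anomalies(series, window=8, factor=4.0):
    """Mark windows whose variance exceeds factor x the baseline variance
    measured over the first window, assumed to be a quiet period."""
    baseline = pvariance(series[:window])
    features = []
    for start in range(len(series) - window + 1):
        v = pvariance(series[start:start + window])
        if v > factor * max(baseline, 1e-9):
            features.append((start, v))
    return features

quiet = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.02, 0.98]
burst = [1.0, 1.6, 0.4, 1.7, 0.3, 1.5, 0.5, 1.0]   # motion perturbs the channel
feats = extract_anomalies(quiet + burst)
print(feats[0][0])  # → 2: the first window that reaches into the burst
```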

The final stage in the pipeline is classification. This is where the Wi-Fi sensor attempts to interpret the anomaly reported by the extraction stage. Interpretation may be a simple binary decision: is there a person in the room now? Is the person standing or sitting? Are they falling?

Or it may be a more quantitative evaluation: where is the person? What is their velocity vector? Or it may be an almost qualitative judgment: is the person making a recognizable gesture? Are they breathing?

The nature of the decision will determine the classification algorithm. Usually, there is no obvious, predictable connection between a person standing in the room and the resulting shift in CSI data. So, developers must collect actual CSI data from test cases and then construct statistical models or reference templates, often called fingerprints. The classifier can then use these models or templates to best match the feature from the extractor and the known situations.
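A minimal fingerprint matcher in that spirit simply picks the stored template nearest the extracted feature vector; the labels and numbers here are hypothetical, not measured data:

```python
import math

# Hypothetical fingerprints: average feature vectors collected during
# known test cases (values invented for illustration).
FINGERPRINTS = {
    "empty room":      [0.01, 0.02, 0.01],
    "person standing": [0.30, 0.10, 0.05],
    "person walking":  [0.80, 0.60, 0.40],
}

def classify(feature_vec):
    """Return the label whose fingerprint has the smallest Euclidean
    distance to the feature vector from the extraction stage."""
    return min(FINGERPRINTS,
               key=lambda label: math.dist(feature_vec, FINGERPRINTS[label]))

print(classify([0.75, 0.55, 0.35]))  # → person walking
```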

Another approach is machine learning (ML). Developers can feed extracted features and correct classifications of those features into a support vector machine or a deep-learning network, training the model to classify the abstract patterns of features correctly. Recent papers have suggested that this may be the most powerful way forward for classification, with reported accuracies from 90 to 100% on some classification problems.

Wi-Fi sensing implementation

Implementing the front-end of an embedded Wi-Fi sensing device is straightforward. All that’s required is an 802.11n-compliant interface to provide accurate CSI data. The back-end is more challenging as it requires a trade-off between power consumption and capability.

For the data preparation stage, simple filtering may be within the range of a small CPU core. After all, a small matrix arrives only when a subcarrier is activated. But more sophisticated, statistical algorithms will call for a low-power DSP core. The statistical techniques for feature extraction are also likely to need the power and efficiency of the DSP.

Classification is another matter. All reported approaches are easily implemented in the cloud, but that is of little help for an isolated embedded sensor or even an edge device that must limit its upstream bandwidth to conserve energy.

Looking at the trajectory of algorithms, from fingerprint matching to hidden Markov models to support vector machines and deep-learning networks, the trend suggests that future systems will increasingly depend on low-power deep-learning inference accelerator cores. Thus, the Wi-Fi sensing system-on-chip (SoC) may well include a CPU, a DSP, and an inference accelerator.

However, as this architecture becomes more apparent, an irony emerges. Wi-Fi sensing’s advantage over other sensing techniques is its elegant conceptual simplicity; yet that simplicity fades as we uncover the true complexity of turning the twinkling shifts in CSI into accurate inferences.

Bringing a successful Wi-Fi sensing device to market will require a close partnership with an SoC developer with the right low-power IP, design experience, and intimate knowledge of the algorithms—present and emerging. Choosing a development partner may be one of the most important of the many decisions developers must make.

Ananda Roy is senior product line manager for wireless connectivity at Synaptics.

Related Content


The post How Wi-Fi sensing simplifies presence detection appeared first on EDN.

ST’s 6-Axis Inertial Measurement Units Hit the Road

AAC - Fri, 05/10/2024 - 02:00
STMicroelectronics has released a new 6-axis inertial measurement unit (IMU) targeting automotive navigation and advanced driver assistance systems (ADAS).

Handheld analyzers gain pulse generator option

EDN Network - Thu, 05/09/2024 - 22:26

FieldFox handheld RF analyzers from Keysight can now generate an array of pulse types at frequencies as high as 54 GHz. Outfitted with Option 357 pulse generator software, the FieldFox B- and C-Series analyzers give field engineers access to pulse generation capabilities that support analog modulations and user-defined pulse sequences. All that is needed to upgrade an existing analyzer is a software license key and firmware upgrade.

The software option includes standard pulses, FM chirps, FM triangles, AM pulses, and user-definable pulse sequences. In addition, it can create continuous wave (CW) signals with or without AM/FM modulations, including frequency shift keying (FSK) and binary phase shift keying (BPSK). Key parameters of the generated signal are displayed in both numerical and graphical formats.

FieldFox handheld analyzers equipped with pulse generation serve many purposes, including field radar testing for air traffic control, simulating automotive radar scenarios, performing field EMI leakage checks, and assessing propagation loss of mobile networks.

FieldFox product page

Keysight Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Handheld analyzers gain pulse generator option appeared first on EDN.

Software platform streamlines factory automation

EDN Network - Thu, 05/09/2024 - 22:26

Reducing shop-floor hardware, Siemens’ Simatic Automation Workstation delivers centralized software-defined factory automation and control. The system allows manufacturers to replace a hardware programmable logic controller (PLC), conventional human-machine interface (HMI), and edge device with a single software-based workstation.

Hundreds of PLCs can be found throughout plants, each requiring extensive programming to keep it up to date, secure, and aligned with other PLCs in the manufacturing environment. In contrast, the Simatic Workstation can be viewed and managed from a central point. Since programming, updates, and patches can be deployed to the entire fleet in parallel, the shop floor stays in sync.

Simatic Workstation is an on-premise operational technology (OT) platform. It offers high data throughput and low latency, essential for running various modular applications. Simatic caters to conventional automation tasks, like motion control and sequencing, as well as advanced automation operations that incorporate artificial intelligence.

The Simatic Automation Workstation is the latest addition to Siemens’ Xcelerator digital business platform. Co-creator Ford Motor Company will be the first customer to deploy and scale these workstations across its manufacturing operations.

Siemens



The post Software platform streamlines factory automation appeared first on EDN.

Silicon capacitor boasts ultra-low ESL

EDN Network - Thu, 05/09/2024 - 22:26

Joining Empower’s family of E-CAP silicon capacitors for high-frequency decoupling is the EC1005P, a device with an equivalent series inductance (ESL) of just 1 picohenry (pH). The EC1005P offers a capacitance of 16.6 µF, along with low impedance up to 1 GHz. A very thin profile allows the capacitor to be embedded into the substrate or interposer of any SoC, especially those used in high-performance computing (HPC) and artificial intelligence (AI) applications.

E-CAP high-density silicon capacitor technology closes the ‘last inch’ decoupling gap between the voltage regulators and the SoC supply pins. This approach integrates multiple discrete components into a single monolithic device with a much smaller footprint and component count than solutions based on conventional multilayer ceramic capacitors.

In addition to sub-1-pH ESL, the EC1005P provides sub-3-mΩ equivalent series resistance (ESR). The capacitor comes in a 3.643×3.036-mm, 120-pad chip-scale package. Its standard profile of 784 µm can be customized for various height requirements.

The EC1005P E-CAP is sampling now, with volume production expected in Q4 2024. A datasheet for the EC1005P was not available at the time of this announcement. More information is available on Empower’s E-CAP product family page.

Empower Semiconductor 



The post Silicon capacitor boasts ultra-low ESL appeared first on EDN.

Crossbar switch eases in-vehicle USB-C connectivity

EDN Network - Thu, 05/09/2024 - 22:25

A 10-Gbps automotive-grade crossbar switch from Diodes routes USB 3.2 and DisplayPort 2.1 signals through a USB Type-C connector. The PI3USB31532Q crossbar switch maintains high signal integrity when used in automotive smart cockpit and rear seat entertainment applications.

For design flexibility, the PI3USB31532Q supports three USB-C compliant configuration modes switching at 10 Gbps. It can connect a single lane of USB 3.2 Gen 2; one lane of USB 3.2 Gen 2 and two channels of DisplayPort 2.1 UHBR10; or four channels of DisplayPort 2.1 UHBR10 to the USB-C connector. When configured for DisplayPort, the switch also connects the complementary AUX channels to the USB-C sideband pins. Switch configuration is controlled via an I2C interface or on-chip logic using four external pins.

The crossbar switch provides a -3-dB bandwidth of 8.3 GHz, with insertion loss, return loss and crosstalk of -1.7 dB, -15 dB, and -38 dB, respectively, at 10 Gbps. Qualified to AEC-Q100 Grade 2 requirements, the part operates over a temperature range of -40°C to +105°C and requires a 3.3-V supply.

Housed in a 3×6-mm, 40-pin QFN package, the PI3USB31532Q crossbar switch costs $1.10 each in lots of 3500 units.

PI3USB31532Q product page

Diodes



The post Crossbar switch eases in-vehicle USB-C connectivity appeared first on EDN.

MCU manages 12-V automotive batteries

EDN Network - Thu, 05/09/2024 - 22:25

Infineon’s PSoC 4 HVPA 144k MCU serves as a programmable embedded system for monitoring and managing automotive 12-V lead-acid batteries. The ISO 26262-compliant part integrates precision analog and high-voltage subsystems on a single chip, enabling safe, intelligent battery sensing and management.

Powered by an Arm Cortex-M0+ core operating at up to 48 MHz, the 32-bit microcontroller supplies up to 128 kbytes of code flash, 8 kbytes of data flash, and 8 kbytes of SRAM, all with ECC. Dual delta-sigma ADCs, together with four digital filtering channels, determine the battery’s state-of-charge and state-of-health by measuring voltage, current, and temperature with an accuracy of up to ±0.1%.

An integrated 12-V LDO regulator, which tolerates up to 42 V, allows the device to be supplied directly from the 12-V lead-acid battery without requiring an external power supply. The high-voltage subsystem also includes a LIN transceiver (physical interface or PHY).

The PSoC 4 HVPA 144k is available now in 6×6-mm, 32-pin QFN packages. Infineon also offers an evaluation board and automotive-grade software.

PSoC 4 HVPA 144k product page

Infineon Technologies 



The post MCU manages 12-V automotive batteries appeared first on EDN.

NASA and JAXA Detect Interstellar X-Rays With 36-Pixel Sensor

AAC - Thu, 05/09/2024 - 20:00
The NASA and JAXA XRISM mission leverages a new 36-pixel X-ray sensor called Resolve to examine the movement and composition of X-ray-emitting stellar objects.

2×AA/USB: OK!

EDN Network - Thu, 05/09/2024 - 16:20

While an internal, rechargeable lithium battery is usually the best solution for portable kit nowadays, there are still times when using replaceable cells with an external power option, probably from a USB source, is more appropriate. This DI shows ways of optimizing this.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The usual way of combining power sources is to parallel them, with a series diode for each. That is fine if the voltages match and some loss of effective battery capacity, owing to a diode’s voltage drop, can be tolerated. Let’s assume the kit in question is something small and hand-held or pocketable, probably using a microcontroller like a PIC, with a battery comprising two AA cells, the option of an external 5 V supply, and a step-up converter producing a 3.3 V internal power rail. Simple steering diodes used here would give a voltage mismatch for the external power while wasting 10 or 20% of the battery’s capacity.
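The "10 or 20%" figure comes straight from the diode's forward drop as a fraction of the 3 V battery; a quick check, assuming typical Schottky (~0.3 V) and silicon (~0.6 V) drops (actual capacity loss also depends on the converter's minimum input voltage):

```python
def fraction_lost(battery_v, diode_drop_v):
    """Share of the battery voltage dropped across a series steering diode."""
    return diode_drop_v / battery_v

# Two AA cells at a nominal 3.0 V:
for name, drop in [("Schottky", 0.3), ("silicon", 0.6)]:
    print(f"{name}: {fraction_lost(3.0, drop):.0%}")  # → 10% and 20%
```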

Figure 1 shows a much better way of implementing things. The external power is pre-regulated to avoid the mismatch, while active switching minimizes battery losses. I have used this scheme in both one-offs and production units, and always to good effect.

Figure 1 Pre-regulation of an external supply is combined with an almost lossless switch in series with the battery, which maximizes its life.

The battery feed is controlled by Q1, which is a reversed p-MOSFET. U1 drops any incoming voltage down to 3.3 V. Without external power, Q1’s gate is more negative than its source, so it is firmly on, and (almost) the full battery voltage appears across C3 to feed the boost converter. Q2’s emitter–base diode stops any current flowing back into U1. Apart from the internal drain–source or body diode, MOSFETs are almost symmetrical in their main characteristics, which allows this reversed operation.

When external power is present, Q1.G will be biased to 3.3 V, switching it off and effectively disconnecting the battery. Q2 is now driven into saturation, connecting U1’s 3.3 V output, less Q2’s saturated forward voltage of 100–200 mV, to the boost converter. (The 2N2222, as shown, has a lower VSAT than many other types.) Note that Q2’s base current isn’t wasted, but just adds to the boost converter’s power feed. Using a diode to isolate U1 would incur a greater voltage drop, which could cause problems: new, top-quality AA manganese alkaline (MnAlk) cells can have an off-load voltage well over 1.6 V, and if the voltage across C3 were much less than 3 V, they could discharge through the MOSFET’s inherent drain–source or body diode. This arrangement avoids any such problems.

Reversed MOSFETs have been used to give battery-reversal protection for many years, and of course such protection is inherent in these circuits. The body diode also provides a secondary path for current from the battery if Q1 is not fully on, as in the few microseconds after external power is disconnected.

Figure 1 shows U1 as an LM1117-3.3 or similar type, but many more modern regulators allow a better solution because their outputs appear as open circuits when they are unpowered, rather than allowing reverse current to flow from their outputs to ground. Figure 2 shows this implementation.

Figure 2 Using more recent designs of regulator means that Q2 is no longer necessary.

Now the regulator’s output can be connected directly to C3 and the boost converter. Some devices also have an internal switch which completely isolates the output, and D1 can then be omitted. Regulators like these could in principle feed the final 3.3 V rail directly, but that can actually complicate matters because the boost converter would then also need to be reverse-proof and might itself need to be turned off. R2 is now used to bias Q1 off when external power is present.

If we assume that the kit uses a microcontroller, we can easily monitor the PSU’s operation. R5, included purely for safety’s sake, lets the microcontroller check for the presence of external power, while R3 and R4 allow it to measure the battery voltage accurately. Their values, calculated on the assumption that we use an 8-bit A–D conversion with a 3.3 V reference, give a resolution of 10 mV/count, or 5 mV per cell. Placing them directly across the battery loads it with ~5–6 µA, which would take around 50 years to drain typical cells; we can live with that. The chosen resistor ratio is accurate to within about 1%.
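That standing drain is easy to verify. In the Python sketch below, the 560 kΩ total divider resistance and the 2500 mAh cell capacity are assumptions for illustration; the article states only the resulting ~5–6 µA load:

```python
# Back-of-the-envelope check of the divider's standing drain.
# R3 + R4 and the cell capacity are illustrative assumptions;
# the text states only the resulting ~5-6 uA load.

r_total = 560e3      # ohms, assumed R3 + R4
v_batt = 3.0         # V, nominal for two AA cells
i_divider = v_batt / r_total              # amps
print(f"divider load: {i_divider * 1e6:.1f} uA")

capacity_mah = 2500  # typical AA MnAlk capacity (assumption)
hours = capacity_mah / (i_divider * 1e3)  # current in mA
years = hours / (24 * 365)
print(f"time to drain: {years:.0f} years")  # decades, as the text says
```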

Many components have no values assigned because they will depend on your choice of regulator and boost converter. With its LM1117-3.3, the circuit of Figure 1 can handle inputs of up to 15 V, though a TO-220 version then gets rather warm with load currents approaching 80 mA (~1 W, its practical power limit without heatsinking).

I have also used Figure 2 with Microchip’s MCP1824T-3302 feeding a Maxim MAX1674 step-up converter, with an IRLML6402 for Q1, which must have a low on-resistance. Many other, and more recent, devices will be suitable, and you probably have your own favorites.

While the external power input is shown as being naked, you may want to clothe it with some filtering and protection such as a poly-fuse and a suitable Zener or TVS. Similarly, no connector is specified, but USBs and barrel jacks both have their places.

While this is shown for nominal 3 V/5 V supplies, it can be used at higher voltages, subject to the gate–source voltage limits imposed by the MOSFET’s input protection diodes. Their breakdown voltages can range from 6 V to 20 V, so check your device’s data sheet.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.


The post 2×AA/USB: OK! appeared first on EDN.

Optimize battery selection and operating life of wireless IoT devices

EDN Network - Thu, 05/09/2024 - 14:59

Batteries are essential for powering many Internet of Things (IoT) devices, particularly wireless sensors, which are now deployed in the billions. But batteries are often difficult to access and expensive to change because replacement is a manual process. Anything that can be done to maximize battery life and minimize or eliminate the need for replacement during a device’s operating life is a worthwhile endeavor and a significant step toward sustainability and efficiency.

Taking the example of a wireless sensor, this is a five-step process:

  1. Select the components for your prototype device: sensor, MCU, and associated electronics.
  2. Use a smart power supply with measurement capabilities to establish a detailed energy profile for your device under simulated operating conditions.
  3. Evaluate your battery options based on the energy profile of your device.
  4. Optimize the device parameters (hardware, firmware, software, and wireless protocol).
  5. Make your final selection of the battery type and capacity with the best match to your device’s requirements.

Selecting device type and wireless protocol

The microcontroller (MCU) is the most common processing resource at the heart of embedded devices. You’ll often choose which one to use for your next wireless sensor based on experience, the ecosystem you’re most familiar with, or corporate dictate. But when you do have a choice and conserving energy is a key concern for your application, there may be a shortcut.

Rather than plow through thousands of datasheets, you could check out EEMBC, an independent benchmarking organization. The EEMBC website not only enables a quick comparison of your options but also offers access to a time-saving analysis tool that lists the sensitivity of MCU platforms to various design parameters.

Most IoT sensors spend a lot of time in sleep mode and send only short bursts of data. So, it’s important to understand how your short-listed MCUs manage sleep, idle and run modes, and how efficiently they do that.

Next, you need to decide on the wireless protocol(s) you’ll be using. Range, data rate, duty cycle, and compatibility within the application’s operating environment will all be important considerations.

Figure 1 Data rates and range are the fundamental parameters considered when choosing a wireless protocol. Source: BehrTech

Once you’ve established the basics, digging into the energy efficiency of each protocol gets more complex and it’s a moving target. There are frequent new developments and enhancements to established wireless standards.

At data rates of up to 10 kbps, Bluetooth LE/Mesh, LoRa, or Zigbee are usually the lowest-energy choices for distances up to 10 meters. If you need to cover a 1-km range, NB-IoT may be on your list, but at an order of magnitude higher energy usage.

In fact, MCU hardware, firmware and software, the wireless protocol, and the physical environment in which an IoT device operates are all variables that need to be optimized to conserve energy. The only effective way to do that is to model these conditions during development and watch the effect of changes on the fly as you make changes to any of these parameters.

Establish an initial energy profile of device under test (DUT)

The starting point is to use a smart, programmable power supply and measurement unit to profile and record the energy usage of your device. This is necessary because simple peak and average power measurements with multimeters can only provide limited information. The Otii Arc Pro from Qoitech was used here to illustrate the process.

Consider a wireless MCU. In run mode, it may be putting out a +6 dBm wireless signal and consuming 10 mA or more. In deep sleep mode, the current consumption might fall below 0.2 µA. That’s a dynamic range of 50,000:1, and changes happen almost instantaneously, certainly within microseconds. Conventional multimeters can’t capture changes like these, so they can’t help you understand the precise energy profile of your device. Without that, your choice of battery is open to miscalculation.
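The arithmetic behind that dynamic range, and the average current that actually sizes the battery, can be sketched in a few lines of Python. The current figures are the ones quoted above; the duty cycle (a 5 ms burst every 60 s) is an assumed example:

```python
# The spread between run and sleep current is what defeats a DMM.
# Current figures are the ones quoted in the text; the duty cycle
# (5 ms burst every 60 s) is an assumed example.

i_run = 10e-3      # amps: 10 mA while transmitting at +6 dBm
i_sleep = 0.2e-6   # amps: 0.2 uA in deep sleep

dynamic_range = i_run / i_sleep
print(f"dynamic range: {dynamic_range:,.0f}:1")   # 50,000:1

t_burst, t_period = 5e-3, 60.0    # seconds
i_avg = (i_run * t_burst + i_sleep * (t_period - t_burst)) / t_period
print(f"average current: {i_avg * 1e6:.2f} uA")   # what sizes the battery
```

Note how the average lands at roughly 1 µA, orders of magnitude below the peak: a meter reading either extreme alone would mislead you badly.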

Your smart power supply is a digitally controlled power source offering control over parameters such as voltage, current, power, and mode of operation. Voltage control should ideally be in 1 mV steps so that you can determine the DUT’s energy consumption at different voltage levels to mimic battery discharge.
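As a rough illustration of mimicking battery discharge with a 1 mV-step supply, the sketch below generates a linear ramp of voltage setpoints. A real alkaline discharge curve is nonlinear, and the endpoint voltages here are assumptions:

```python
# Generating setpoints for a supply adjustable in 1 mV steps to
# mimic battery discharge. A linear ramp is a crude stand-in for a
# real (nonlinear) discharge curve; endpoint voltages are assumptions.

def discharge_setpoints(v_start=3.2, v_end=1.8, steps=20):
    """Evenly spaced voltages, rounded to the supply's 1 mV resolution."""
    dv = (v_start - v_end) / (steps - 1)
    return [round(v_start - i * dv, 3) for i in range(steps)]

points = discharge_setpoints()
print(points[0], points[-1])   # 3.2 1.8
print(len(points))             # 20
```

In practice you would replace the linear ramp with points taken from a measured or manufacturer-published discharge curve for your chosen chemistry.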

You’ll need sense pins to monitor the DUT power rails, a UART to see what happens when you make code changes, and GPIO pins for status monitoring. Standalone units are available, but it can be more flexible and economical to choose a smart power supply that uses your computer’s processing resources and display, as shown in the example below.

Figure 2 The GUI for a smart power supply can run on Windows, MacOS, or Ubuntu. Source: Qoitech

After connecting, you power and monitor the DUT simultaneously. You’re presented with a clear picture of voltages and current changes over time. Transients that you would never be able to see on a traditional meter are clearly visible and you can immediately detect unexpected anomalies.

Figure 3 A smart power profiler gives you a detailed comparison of your device’s energy consumption for different hardware and firmware versions. Source: Qoitech

From the stored data in the smart power supply, you’ll be able to make a short list of battery options.

Choosing a battery

Battery selection needs to consider capacity, energy density, voltage, discharge profile, and temperature. Datasheet comparisons are the starting point but it’s important to validate the claims of battery manufacturers by benchmarking their batteries through testing. Datasheet information is based on performance under “normal conditions” which may not apply to your application.

Depending on your smart power supply model, the DUT energy profiling described earlier may provide an initial battery life estimate based on a pre-programmed battery type and capacity. Either the same instrument or a separate piece of test equipment may then be used for a more detailed examination of battery performance in your application. Accelerated discharge measurements, when properly set up, are a time-saving alternative to the years it may take a well-designed IoT device to exhaust its battery.

These measurements must follow best practices to create an accurate profile: keep the discharge consistent and matched to the DUT’s peak current, and shorten the cycle time while increasing the sleep current so that the battery can recover. You should also consult battery manufacturers to validate any assumptions you make during the process.
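To see why accelerated discharge saves so much time, consider compressing only the sleep interval while keeping the burst untouched. All timing figures below are illustrative assumptions, and the sketch ignores the battery-recovery caveat above:

```python
# How compressing only the sleep interval accelerates a discharge
# test while preserving the peak-current profile. Timing figures
# are illustrative assumptions.

real_burst, real_sleep = 0.005, 60.0   # seconds, deployed device
test_burst, test_sleep = 0.005, 0.5    # same burst, compressed sleep

speedup = (real_burst + real_sleep) / (test_burst + test_sleep)
print(f"acceleration factor: {speedup:.0f}x")

# A battery that would last five years in the field:
field_years = 5.0
test_days = field_years * 365 / speedup
print(f"bench test duration: {test_days:.0f} days")
```

Years of field life compress into a couple of weeks on the bench, which is why this technique is worth setting up carefully.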

You can profile the same battery chemistries from different manufacturers, or different battery chemistries, perhaps comparing lithium coin cells with AA alkaline batteries.

Figure 4 The comparison shows accelerated discharge characteristics for AA and AAA alkaline batteries from five different manufacturers. Source: Qoitech

By this stage, you have a good understanding of both the energy profile of your device and of the battery type and capacity that’s likely to result in the longest operating life in your applications. Upload your chosen battery profile to your smart power supply and set it up to emulate that battery.

Optimize and iterate

You can now go back to the DUT and optimize hardware and software for the lowest power consumption in near real-world conditions. You may have the flexibility to experiment with different wireless protocols, but even if that’s not the case, experimenting with sleep and deep-sleep modes, network routing, and even alternative data security protocols can all yield improvements, avoiding a common problem where 40 bytes of data can easily become several kilobytes.

Where the changes create a significant shift in your device’s energy profile, you may also review the choice of battery and evaluate again until you achieve the best match.

While this process may seem lengthy, it can be completed in just a few hours and may extend the operating life of a wireless IoT edge device, and hence reduce battery waste, by up to 30%.

Björn Rosqvist, co-founder and chief product officer of Qoitech, has 20+ years of experience in power electronics, automotive, and telecom with companies such as ABB, Ericsson, Flatfrog, Sony, and Volvo Cars.

 


The post Optimize battery selection and operating life of wireless IoT devices appeared first on EDN.

Marktech unveils multi-chip packages with InGaAs photodiodes and multiple LED emitters

Semiconductor today - Thu, 05/09/2024 - 13:44
Marktech Optoelectronics Inc of Latham, NY, USA, a designer and manufacturer of optoelectronics components and assemblies — including UV, visible, near-infrared (NIR) and short-wave infrared (SWIR) emitters, detectors, and indium phosphide (InP) epiwafers — has unveiled its new multi-chip packages with indium gallium arsenide (InGaAs) photodiodes and multiple LED emitters...
