Microelectronics world news
AI is defining reality as we progress further
AI is now well integrated into almost every sector of the economy, where it has not only driven efficiency but also stimulated innovation. As AI assimilates into the electronics industry, new trends are sparking a wave of innovation for the coming year. As AI develops in 2026, the electronics industry will experience faster decision-making, improved efficiency, and greater sustainability.
As research and development in Artificial Intelligence advance, the trends for next year can be summarized as follows:
- Agentic AI: Artificial Intelligence is already used extensively in R&D, but it can also conclusively solve a key challenge in electronics manufacturing. Agentic AI can study supply chain disruptions, allowing planning in advance; it can identify alternate suppliers and dynamically reconfigure logistics in response to changing conditions (a minimal sketch of such an agent follows the provider list below). Reduced human intervention cuts production delays and frees more attention for R&D. An agent can also serve as an all-hours sales assistant, tracking customer requests, generating quotes, and even placing orders, bringing a sustainable pace to the business, reducing the role of middlemen, and providing a competitive edge. From predictive maintenance to autonomous marketing, this growing trend can unleash the full potential of the ESDM industry. Businesses that integrate agentic AI early will undoubtedly have an edge in shaping the future of the B2B electronics industry.
Some existing providers of this technology are:
- IBM: IBM’s prebuilt watsonx AI agents offer standard API and SDK support for open-source frameworks, allowing developers to use their preferred tools.
- Wizr AI: Wizr lets companies build and deploy LLM-powered AI agents trained on company-specific data such as CRM logs, internal documents, and past customer interactions, providing a customized experience. It also offers enterprise-grade security and certifications such as SOC 2 Type 2 and ISO 27001 for highly regulated industries.
- TrueFoundry: This provider caters to data scientists, ML engineers, and IT professionals, with over 1,000 LLMs integrated alongside connectors to other enterprise tools such as Slack, GitHub, and Datadog.
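To make the agentic pattern concrete, here is a minimal sketch of one observe-decide-act cycle for a supply-chain monitoring agent. Everything in it is a hypothetical illustration: the supplier records, thresholds, and function names are invented, and a production agent would wrap an LLM planner and real ERP/logistics integrations around a loop like this.

```python
# Minimal sketch of an agentic supply-chain monitor. All supplier data,
# thresholds, and names are hypothetical illustrations, not real vendor APIs.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    part: str
    lead_time_weeks: float   # currently quoted lead time
    baseline_weeks: float    # historical norm for this supplier
    unit_cost: float

def detect_disruption(s: Supplier, tolerance: float = 1.5) -> bool:
    """Flag a supplier whose lead time has drifted well past its baseline."""
    return s.lead_time_weeks > s.baseline_weeks * tolerance

def propose_alternates(part: str, pool: list[Supplier]) -> list[Supplier]:
    """Rank healthy suppliers of the same part by lead time, then cost."""
    healthy = [s for s in pool if s.part == part and not detect_disruption(s)]
    return sorted(healthy, key=lambda s: (s.lead_time_weeks, s.unit_cost))

def agent_step(pool: list[Supplier]) -> list[str]:
    """One observe-decide-act cycle; a real agent would run this continuously."""
    actions = []
    for s in pool:
        if detect_disruption(s):
            alternates = propose_alternates(s.part, pool)
            if alternates:
                actions.append(f"reroute {s.part}: {s.name} -> {alternates[0].name}")
            else:
                actions.append(f"escalate {s.part}: no healthy alternate to {s.name}")
    return actions

if __name__ == "__main__":
    pool = [
        Supplier("AlphaParts", "MCU-48", lead_time_weeks=26, baseline_weeks=10, unit_cost=2.4),
        Supplier("BetaComp", "MCU-48", lead_time_weeks=11, baseline_weeks=9, unit_cost=2.9),
    ]
    for action in agent_step(pool):
        print(action)
```

Running the sketch prints a proposed rerouting for the disrupted part; the point is the loop structure (detect, rank alternates, act or escalate), not the toy data.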
- Generative AI: Gen AI is expected to become the new normal in the coming years, not just for content creators but for electronics manufacturing too. The industry’s lack of advanced design capability can be addressed comprehensively as Generative AI matures. From automating the creation of innovative designs, optimizing complex systems, and speeding up prototyping and iteration, to reducing development costs and democratizing design tools, Gen AI will be the new mastermind behind innovation in manufacturing. It will let engineers explore new design spaces with quick validation and create more efficient and novel electronic components and systems faster than any traditional methodology. It will eventually also ease the skills shortage in miniaturized production, further increasing the industry’s efficiency.
With prominent players such as Synopsys.ai and Cadence Design Systems already providing comprehensive portfolios that span the chip design workflow, other emerging providers include:
- Flux AI: Its AI-powered e-CAD (electronic Computer-Aided Design) tool supports designing and building PCBs, saving time while producing good results.
- Circuit Mind: This software takes high-level requirements and automatically produces optimized schematics and BOMs, creating reliable, error-free circuits.
- DeepPCB: This cloud-based tool uses AI to automate PCB routing.
- Cirkit Designer: Cirkit is an online platform for circuit design, simulation, and collaboration.
- Zuken: Zuken is a major provider of Electronic Design Automation (EDA) tools, such as CR-8000 and E3.series, for precise results.
- Physical AI: The shortage of skilled labour in miniature electronics is set to get a new solution through the adoption of AI-powered robotics and automated inspection for repetitive and complex tasks, along with augmented reality (AR) for training and real-time guidance of staff. This allows the industry to deploy lower-skilled personnel on higher-level functions, improving efficiency, quality, and speed. Physical AI can retain the knowledge of a retiring skilled professional so production continues in sync with requirements, and can use the same knowledge base to train new recruits at lower human-resource development cost. Skilled personnel, in turn, are freed to focus on strategic, value-added activities that require creativity and decision-making.
Some of the key players providing these technologies are:
- Grey Matter Robotics: They specialize in AI-powered robotics systems for automating manufacturing and industrial operations.
- Veco Robotics: Veco integrates 3D sensing and computer vision with AI so robots can work faster alongside humans; they are particularly effective at handling delicate electronics assembly without traditional caging.
- Sovereign AI: As the race to build newer AI systems gathers pace, it draws attention to data privacy in the AI landscape. Tomorrow is not about just any AI, but about safe, indigenous AI systems that keep sensitive data within national and regional boundaries. This has given rise to a budding trend of sovereign AI. It allows businesses to build their own AI models that comply with local data protection laws and industry-specific regulations. Such customized models can be tailored to the specific needs of the business and reduce foreign dependence. A self-controlled AI system reduces the risk of cyber fraud and helps protect sensitive intellectual property (IP). Sovereign AI can also be used to study the impact of a predicted geopolitical event on supply chains, especially for import-dependent components.
Some of the service providers of Sovereign AI in India include:
- EDB Postgres: They offer a platform for secure, on-premises or private-cloud Gen AI interfacing. It ensures that data remains within the company’s control, which is essential for designers and manufacturers.
- Sarvam AI: It is considered India’s leading sovereign AI provider, selected by the Indian government to develop the country’s first homegrown large language model (LLM).
- Digital Twin + AI: A dynamic collaboration between digital twins and AI will bring new energy to electronics manufacturing. As the need for miniaturization grows, modelling a digital twin and using AI to subject it to real-time usage tests can improve the quality and efficiency of microscopic components such as PCBs, silicon chips, and ICs. Data from real-world use can be fed back from sensors, helping engineers design more efficient and longer-lasting products (a minimal sketch of this feedback loop follows the provider list below). Because damage stays virtual, R&D and testing become more cost-effective.
Of the many digital twin providers, some of the best suited to the electronics industry are:
- Ansys: They specialize in simulation-based digital twins that combine physics-based modelling with AI integration to create highly accurate virtual prototypes of systems.
- PTC: Their ThingWorx platform integrates Industrial IoT, AR, and digital twin technologies, allowing manufacturers to monitor, analyze, and optimize operations in real time, benefiting product quality and predictive maintenance.
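As a concrete illustration of the twin-plus-telemetry loop described above, here is a minimal sketch: a first-order thermal model stands in for the digital twin, synthetic readings stand in for the sensor feed, and sustained divergence between the two is flagged for engineering review. All parameters (thermal resistance and capacitance, the drift threshold, the extra dissipation in the "real" device) are assumptions chosen for illustration.

```python
# Minimal digital-twin sketch: step a first-order RC thermal model alongside
# (synthetic) field telemetry and flag sustained divergence. All parameters
# are illustrative assumptions, not data from any real product.
import random

def twin_step(temp: float, power_w: float, dt_s: float,
              r_th: float = 2.0, c_th: float = 50.0, ambient: float = 25.0) -> float:
    """First-order thermal model: dT/dt = (P - (T - Tamb) / Rth) / Cth."""
    return temp + dt_s * (power_w - (temp - ambient) / r_th) / c_th

predicted = measured = 25.0   # both start at ambient temperature
alerts = 0
for _ in range(600):          # ten minutes at one-second resolution
    predicted = twin_step(predicted, power_w=10.0, dt_s=1.0)
    # Stand-in for a real sensor feed: the fielded unit dissipates more than
    # the twin expects (as an aging or poorly cooled part might), plus noise.
    measured = twin_step(measured, power_w=12.0, dt_s=1.0) + random.gauss(0, 0.1)
    if abs(measured - predicted) > 2.0:   # sustained-divergence threshold
        alerts += 1
print(f"divergence alerts: {alerts} of 600 samples")
```

When twin and telemetry disagree persistently, the discrepancy itself is the engineering signal: it shows where the model, or the product, departs from expectation.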
While the future of artificial intelligence in the electronics industry is bright, integrating it into existing systems can prove challenging. The initial costs may be daunting for a business; however, the productivity achieved in the long run will be worth it.
From Monoliths to Modules: A story of heterogeneous integration, chiplets, and the industry reshaping itself
For nearly four decades, the semiconductor narrative revolved around Moore’s Law: shrinking transistors and packing more logic onto a single die. Now the limitations of that approach are evident in reticle sizes, yields, rising costs, and the reality that not every function benefits from bleeding-edge lithography. The industry’s answer is to stop treating the system as “one big die” and instead treat it as a system of optimized pieces: chiplets joined through heterogeneous integration. What started as an engineering workaround is now a full-blown industrial shift. This article is a curated, human narrative of how the industry got here, what the leading players are doing, which key technologies are emerging, and how it is likely to play out in the coming years.
The pivot: when economics beat scaling
The earliest chiplet experiments were pragmatic. Designers realized that a single large die amplifies risk: one defect ruins the whole chip, and reticle-scale chips are expensive to manufacture. Chiplet thinking flips that risk model: many smaller dies (chiplets) are cheaper to yield and can be produced on the process node best suited to their function (a back-of-the-envelope yield calculation follows below). AMD’s decision to “bet the company’s roadmap on chiplets” is perhaps the clearest strategic statement of this pivot; CEO Dr. Lisa Su has repeatedly framed chiplets as a transformational, multi-year bet that paid off by enabling modular, high-performance designs.
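The arithmetic behind that pivot is easy to sketch with the simple Poisson yield model Y = exp(-A·D0). The defect density and die areas below are illustrative assumptions, not figures from any real process, and the comparison assumes chiplets are tested individually as known-good dies before assembly.

```python
# Back-of-the-envelope chiplet yield argument, Poisson model Y = exp(-A * D0).
# Defect density and die areas are assumed for illustration only.
import math

D0 = 0.001            # defects per mm^2 (assumed)
mono_area = 800.0     # one large monolithic die, mm^2
chiplet_area = 200.0  # each of four chiplets covering the same total area
n_chiplets = 4

y_mono = math.exp(-mono_area * D0)        # ~44.9%
y_chiplet = math.exp(-chiplet_area * D0)  # ~81.9% per chiplet

# Known-good-die testing bins out bad chiplets, so effective silicon cost
# scales with per-chiplet yield rather than whole-system yield.
print(f"monolithic die yield: {y_mono:.1%}")
print(f"per-chiplet yield:    {y_chiplet:.1%}")
print(f"relative silicon cost, monolith: {mono_area / y_mono:.0f}")
print(f"relative silicon cost, chiplets: {n_chiplets * chiplet_area / y_chiplet:.0f}")
```

Under these assumed numbers, the good-silicon cost of the chiplet version comes out roughly 45 percent lower than the monolith, before packaging and test costs are added back in, which is the trade the rest of this article explores.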
That economic logic attracted big players. When companies such as AMD, Intel, NVIDIA, TSMC, and major cloud providers all start designing around modular architectures, the idea moves from clever trick to industry standard. But making chiplets practical at scale required new packaging, new interconnect standards, and new supply-chain thinking.
The technical enabling stack: what changed?
Three packaging techniques and a set of interconnect innovations allowed chiplets to become real:
- 2.5D (silicon interposer / CoWoS family): A silicon interposer routes huge numbers of fine wires between side-by-side dies and HBM stacks. TSMC’s CoWoS family (Chip on Wafer on Substrate) is a productionized example used in AI accelerators and high-bandwidth systems; it provides the highest on-package bandwidth today.
- 3D stacking (Foveros, TSVs, hybrid bonding): Stacking dies face-to-face shortens interconnects, saves board area, and opens power/latency advantages. Intel’s Foveros showed how a system could be built vertically from optimized tiles. The real leap is hybrid (Cu–Cu) bonding, which enables ultra-dense, low-parasitic vertical interconnects and is rapidly becoming the preferred route for the highest-performance 3D stacks.
- EMIB (embedded bridge): A cost-effective middle ground: small high-density bridges route signals between adjacent dies on a package without needing a full interposer, balancing cost and performance.
On top of physical packaging, industry collaboration produced UCIe (Universal Chiplet Interconnect Express), a standard that defines die-to-die electrical and protocol layers so designers can mix chiplets from different vendors. UCIe’s goal is simple but radical: make chiplets plug-and-play the way IP blocks (or board components) are today, lowering integration friction and encouraging a multi-vendor marketplace. The consortium’s growth and the tone of its public messaging reflect broad industry support.
What the industry leaders are saying (high-level truth from the field)
Words matter because they reveal strategy. Lisa Su framed AMD’s move as an existential bet that enabled modular scaling and faster product cycles: not a tweak, but a new company playbook. Jensen Huang (NVIDIA) has discussed shifting packaging needs as designs evolve, stressing that advanced packaging remains a bottleneck even as capacity improves, a reminder that packaging is now a strategic choke point full of commercial leverage. And foundries and integrators (TSMC, Intel Foundry, Samsung) openly invest in CoWoS, Foveros, and hybrid bonding capacity because advanced packaging is the next frontier after lithography.
The practical outcomes we’re seeing now
- Modular server CPUs and accelerators: AMD’s chiplet-based EPYC architecture splits cores and I/O onto separate dies for yield and flexibility; major GPU vendors assemble compute tiles and HBM via CoWoS to reach enormous memory bandwidth.
- New supply-chain pressure: Advanced packaging capacity became a bottleneck in some cycles, forcing companies to book OSAT / CoWoS capacity years ahead. That’s why foundries and governments are investing in packaging fabs.
- Standardization momentum: UCIe and related initiatives reduce engineering friction and unlock third-party chiplet IP as a realistic business model.
The tensions and technical gaps
Heterogeneous integration isn’t a panacea. It introduces new engineering complexity: thermal hotspots in 3D stacks, multi-die power delivery, system-level verification across vendor boundaries, and supply-chain trust issues (who vouches for a third-party chiplet?). EDA flows are catching up but still need better automation for partitioning, packaging-aware floor planning, and co-validation. Packaging capacity, while expanding, remains a strategic scarce resource that shapes product roadmaps.
New technologies to watch
- Hybrid bonding at scale: enabling face-to-face stacks with very high I/O density; companies (TSMC, Samsung, Intel) are racing on patents and process maturity.
- UCIe ecosystem growth: as more vendors ship UCIe-compatible die interfaces, an open marketplace for physical chiplet IP becomes more viable.
- CoWoS-L / CoWoS-S differentiation and packaging variants: vendors are tailoring interposer variants to balance area, cost and performance for AI workloads.
How this story likely ends (judgement, not prophecy)
The industry is not replacing monolithic chips entirely: monoliths will remain where tight coupling, the lowest latency, or the cheapest bill of materials matters (e.g., mass-market SoCs). But for high-value, high-performance markets (AI, HPC, networking, high-end CPUs), heterogeneous integration becomes standard. Expect three converging trends:
- An ecosystem of chiplet vendors: IP providers sell actual physical chiplets (compute tiles, accelerators, analog front ends) that can be combined like components.
- Packaging as strategic infrastructure: fabs and OSATs that excel at hybrid bonding, interposers, and 3D stacking will hold new leverage; national strategies will include packaging capacity.
- Toolchains and standards that normalize integration: with UCIe-style standards and improved EDA flows, system architects will shift focus from transistor-level tricks to system partitioning and orchestration.
If executed well, the result is faster innovation, cheaper scaling for complex systems, and diversified supply chains. If poorly coordinated, the industry risks fragmentation, security and provenance problems, and bottlenecks centered on a few packaging suppliers.
Final thought
We have moved from a single-die worldview to a modular systems worldview.
That change is technical (new bonds, interposers, interfaces), economic (yield and cost models), and strategic (packaging capacity equals competitive advantage). The transition is messy and political in places, but it is already rewriting roadmaps: chiplets and heterogeneous integration are not an academic curiosity; they are the architecture by which the next decade of compute will be built.
Chip Code, written by AI: Driving Lead Time Optimization and Supply Chain Resilience in Semiconductor Manufacturing
The semiconductor world is grappling with complex challenges: designing a modern chip involves billions of transistors, massive verification workloads, and global supply chains prone to disruption. One of the critical factors hindering innovation and market responsiveness is extensive lead time, often exceeding 20 weeks. While procurement and supply chain managers constantly coordinate wafer fabs, manage inventory, and deal with rapidly changing markets, the industry’s core bottleneck is the design phase’s sheer complexity and iterative nature.
AI technologies, including Large Language Models (LLMs) and newer multi-agent generative systems, are fundamentally transforming Electronic Design Automation (EDA). These systems automate Register Transfer Level (RTL) generation, detect verification errors earlier, and help predict wafer fab schedules. Integrating AI insights with procurement teams and supply chain planners helps in dealing with industry volatility and resource allocation uncertainty. It is quietly reshaping the entire ecosystem, moving design from an art form reliant on small teams of gurus to a computationally optimized process.
AI’s Role in Chip Design Automation
RTL design, which defines a chip’s logic, was traditionally hand-crafted, with engineers spending months on debugging. Now, AI trained on large HDL datasets suggests RTL fragments, accelerates design exploration, and flags inconsistencies. Reinforcement learning makes the generated code progressively more accurate, often identifying optimal solutions humans miss.
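A minimal sketch of such a generate-and-check loop appears below. The ask_model() function is a hypothetical stand-in for any code-generation endpoint (here it returns a canned 8-bit counter so the sketch runs offline); the lint step uses Verilator's real --lint-only mode and assumes verilator is installed on the PATH.

```python
# Sketch of an LLM-in-the-loop RTL flow: generate a Verilog fragment, lint
# it, and feed diagnostics back for another attempt. ask_model() is a
# hypothetical stub, not a real API.
import pathlib
import subprocess
import tempfile

def ask_model(spec: str, feedback: str = "") -> str:
    """Hypothetical LLM call; replace with a real model client."""
    # Canned answer so the sketch runs without a model: an 8-bit counter.
    return (
        "module counter(input clk, input rst, output reg [7:0] q);\n"
        "  always @(posedge clk) begin\n"
        "    if (rst) q <= 8'd0; else q <= q + 8'd1;\n"
        "  end\n"
        "endmodule\n"
    )

def lint(rtl: str) -> tuple[bool, str]:
    """Run Verilator in lint-only mode; return (ok, diagnostics)."""
    with tempfile.TemporaryDirectory() as d:
        path = pathlib.Path(d) / "gen.v"
        path.write_text(rtl)
        proc = subprocess.run(["verilator", "--lint-only", str(path)],
                              capture_output=True, text=True)
        return proc.returncode == 0, proc.stderr

spec = "8-bit synchronous counter with synchronous reset"
feedback = ""
for attempt in range(3):                  # bounded retry loop
    rtl = ask_model(spec, feedback)
    ok, feedback = lint(rtl)
    if ok:
        print(f"lint-clean RTL after {attempt + 1} attempt(s)")
        break
else:
    print("still failing lint; escalate to a human engineer")
```

The shape is what matters: generation is cheap, mechanical checking is authoritative, and a bounded loop with human escalation keeps the flow auditable.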
This capability moves beyond mere efficiency; it reduces manufacturing risk. Fewer RTL mistakes mean fewer costly fab re-spins, making wafer scheduling predictable. Predictive analytics spot fab queue bottlenecks, allowing teams to optimize lithography usage before issues escalate. This foresight maintains consistent throughput.
Generative AI advances this further using multiple specialized agents: one for synthesis tuning, one for logic checking, and a third for modelling power or timing. This distributed intelligence improves efficiency and gives procurement teams early risk warnings. By simulating designs before committing to silicon, teams can anticipate mask shortages, material price spikes, or foundry capacity issues, effectively optimizing the physical supply chain.
“The ability to automate RTL generation and verification simultaneously is a game-changer. It shifts our engineering focus from tedious bug-hunting to true architectural innovation, accelerating our time-to-market by months.” - Dr. Lisa Su, CEO, AMD
Multi-Agent Generative AI for Verification: Operational Impact
Verification often consumes up to 70 percent of chip design time and scales non-linearly with transistor count, making traditional methods unsustainable. The Multi-Agent Verification Framework (MAVF) uses multiple AI agents that collaborate: reading specifications, writing testbenches, and continuously refining the design. This division of labour operates at machine speed and scale.
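MAVF's internals are not spelled out here, so the sketch below is only schematic: three stub "agents" split the work the way the text describes (extract checkable properties, draft a testbench, review coverage), with plain functions standing in for what would really be LLM calls emitting SystemVerilog assertions.

```python
# Schematic sketch of a multi-agent verification split. The three "agents"
# are hypothetical stubs; only the orchestration pattern is the point.

def spec_agent(spec: str) -> list[str]:
    """Turn prose requirements into discrete, checkable properties."""
    return [line.strip() for line in spec.splitlines() if line.strip()]

def testbench_agent(properties: list[str]) -> str:
    """Draft one assertion stub per property (stand-in for real SVA output)."""
    return "\n".join(f"// assert property {i}: {p}"
                     for i, p in enumerate(properties))

def review_agent(testbench: str, properties: list[str]) -> list[str]:
    """Report properties the draft testbench does not yet cover."""
    return [p for p in properties if p not in testbench]

spec = "q resets to zero when rst is high\nq increments by one on each clk edge"
props = spec_agent(spec)
bench = testbench_agent(props)
gaps = review_agent(bench, props)
print(f"{len(props)} properties, {len(gaps)} uncovered -> "
      + ("sign-off candidate" if not gaps else "iterate"))
```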
Results are notable: human effort drops by 50 to 80 percent, with accuracy exceeding manual methods. While currently module-level, this hints at faster full verification loops, compressing the ‘time-to-known-good-design’ window. This means fewer wasted weeks on debugging and substantial savings on re-spins, protecting billions in costs.
“We are seeing a 15% reduction in verification cycles across key IP blocks within a year. The key is the verifiable audit trail these new systems create, which builds trust for sign-off.” - Anirudh Devgan, CEO, Cadence Design Systems
Predictable verification helps procurement reduce lead-time buffers. Instead of hoarding stock or overbooking fab slots, teams plan using reliable design milestones. The ROI is twofold: engineers save effort, and procurement negotiates smarter contracts, boosting resilience and freeing up working capital.
Industry Insights and Strategic Implications
Research at Intel’s AI Lab shows that machine learning is powerful, but it works best when integrated with classical optimization techniques. For example, in floor planning or system-level scheduling, AI alone often struggles with hard constraints. However, hybrid approaches offer substantial improvements, combining the exploratory power of AI with the deterministic precision of conventional algorithms. The release of datasets like FloorSet demonstrates a strong commitment to benchmarking realistic chip design problems under real-world industrial constraints.
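A toy example makes that hybrid division of labour concrete: a stochastic searcher (standing in for a learned policy) explores module orderings, while a deterministic shelf packer enforces the hard no-overlap constraint exactly, so every candidate evaluated is legal by construction. The module sizes and strip width are illustrative assumptions, not FloorSet data.

```python
# Toy hybrid floorplanning sketch: stochastic search over orderings plus a
# deterministic, constraint-respecting packer. Sizes are assumed values.
import random

modules = [(3, 2), (2, 2), (4, 1), (1, 3), (2, 1)]  # (width, height) blocks
STRIP_W = 6                                          # fixed floorplan width

def pack_height(order: list[int]) -> int:
    """Deterministic shelf packing: legal by construction; returns height."""
    x = shelf_y = shelf_h = 0
    for i in order:
        w, h = modules[i]
        if x + w > STRIP_W:        # row full: start a new shelf
            shelf_y += shelf_h
            x, shelf_h = 0, 0
        x += w
        shelf_h = max(shelf_h, h)
    return shelf_y + shelf_h

best = list(range(len(modules)))
best_cost = pack_height(best)
for _ in range(2000):              # stochastic exploration of orderings
    cand = best[:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    cost = pack_height(cand)
    if cost <= best_cost:          # greedy accept; SA or RL would slot in here
        best, best_cost = cand, cost
print(f"best strip height: {best_cost}")
```

The learned component only ever proposes; the classical component guarantees feasibility, which mirrors the split described above.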
From a strategic perspective, AI-driven design efficiency provides procurement and supply chain teams with several key advantages:
- Agility: Design-to-tapeout cycles become faster, enabling companies to respond quickly when demand surges or falls, capturing market share faster than competitors.
- Resilience: More predictable verification milestones stabilize wafer fab scheduling and reduce exposure to market volatility.
- Negotiation Power: Procurement teams can better align contracts with foundries and suppliers to actual needs, helping reduce buffer costs. This shift moves contracts from being based on generalized risk to specific, design-validated schedules.
“For foundry operations, predictability is everything. AI-driven design provides a stable pipeline of GDSII files, allowing us to lock in capacity planning with much greater confidence, directly improving overall facility utilization.” - C. C. Wei, CEO, TSMC
This alignment reflects a careful integration of technical advances with operational priorities, ensuring that AI improvements translate into tangible, real-world impact across the entire value chain, from concept to silicon.
Future Outlook: AI, Market Dynamics, and Strategic Planning
The next big step is full-chip synthesis and automated debugging. LLM-powered assistants generate block-level RTL, while reinforcement learning agents iterate to resolve timing or power conflicts. This could significantly speed up tapeout cycles and give supply chain planners a clearer picture of what is coming, though challenges remain regarding the size and systemic integrity of full-chip designs.
Real challenges persist. AI models require large training datasets, raising concerns about proprietary Intellectual Property (IP) and training bias. Even when output passes syntax checks, deeper semantic or safety issues may remain. Integrating these tools into existing EDA workflows requires careful validation, certification, and substantial computing resources. The explainability of AI-generated code is paramount for regulatory approval and risk mitigation.
Ways to manage these risks include hybrid human-in-the-loop approaches, deploying module-level automation first, and maintaining strict audit trails for correctness. For supply chain leaders, AI is a tool to reduce volatility buffers, not a magic solution that eliminates all risk. Geopolitical and natural-disaster risks remain, but AI minimizes internal, process-driven risks.
Conclusion
AI is gradually driving operational change in semiconductor design. Full-chip automation remains a long-term goal, but today’s advances in RTL generation, module-level verification, and predictive analytics already shorten design cycles and make wafer fab scheduling more predictable. For procurement leaders, supply chain managers, and strategists, this translates to greater agility, reduced risk, and stronger resilience in a rapidly changing market.
The takeaway is simple: companies that thoughtfully integrate AI into design and supply chain operations will gain a clear competitive advantage. Tomorrow’s chips won’t just be faster or more efficient; their code will be shaped by AI, giving engineers insights that were previously almost impossible to achieve.
Basic Principles and Implementation of the Quadrature FM Demodulator
EEVblog 1716 - University Dumpster Diving: Kikusui Oscilloscope
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").