Application of AI to the semiconductor design process and semiconductor chips for AI applications


Summary

I would like to discuss the process of designing semiconductor chips, as described in the previous article “Computational Elements of Computers and Semiconductor Chips,” and semiconductor chips specialized for AI applications, a step beyond “Introduction to FPGAs for Software Engineers: Machine Learning.”

Semiconductor design technologies and AI technologies applied to them

The process of designing a semiconductor chip, as described in the previous article “Computational Elements of Computers and Semiconductor Chips,” consists of the following general steps.

  • Requirements Definition (Functional Design) Define the requirements for the semiconductor to be designed, in coordination with the surrounding hardware and software. Clarify what functions are to be realized, what performance is required, and so on.

  • Circuit Design Next, transistor-level circuits (along with power supply and signal propagation circuits) are designed based on the logic design. Circuit design includes the design of logic circuits and the creation of circuit diagrams.

Here, tools are used to simulate the operation of the designed circuit. Typical examples include SPICE and Verilog.
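As a rough illustration of what SPICE-class simulators do, the following is a minimal sketch of transient circuit simulation in Python: numerically integrating the differential equation of an RC low-pass filter responding to a voltage step. The component values and step counts are illustrative assumptions, not from any real design flow.

```python
# Toy transient simulation of an RC low-pass filter, a minimal analogue of
# what SPICE-class simulators do (solving circuit equations numerically).
# R, C, and the step count are illustrative values, not a real netlist.

def simulate_rc_step(R=1e3, C=1e-6, v_in=1.0, t_end=5e-3, steps=5000):
    """Forward-Euler integration of dVc/dt = (Vin - Vc) / (R*C)."""
    dt = t_end / steps
    vc = 0.0
    trace = []
    for _ in range(steps):
        vc += dt * (v_in - vc) / (R * C)
        trace.append(vc)
    return trace

trace = simulate_rc_step()
# With tau = R*C = 1 ms and t_end = 5 ms (5 time constants), the capacitor
# voltage should have nearly reached the input voltage.
print(round(trace[-1], 3))
```

Real simulators solve far larger nonlinear systems with implicit methods, but the core idea, stepping the circuit's equations through time and checking the waveform against the specification, is the same.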

  • Simulation Once the circuit design is complete, simulation is performed. Simulation checks whether the circuit operates correctly and whether the performance meets the requirements.
  • Physical (Layout) Design Once simulation is complete, physical design is performed. In physical design, circuit layout and wiring are performed. The size and shape of the semiconductor are determined in this process.

Layout tools are used here to design the circuit wiring. Typical examples include Cadence Virtuoso and Synopsys Custom Compiler.

  • Process Design The semiconductor structure, mainly in the vertical (depth) direction, is designed based on the characteristics of the process equipment and past process data.

During process design, tools are used to simulate the physical characteristics of the circuit when it is actually manufactured. Typical examples include TCAD and Silvaco.

  • Mask Creation Once the physical design is complete, masks are created. A mask is a pattern depicting the semiconductor to be fabricated.
  • Wafer and Device Manufacturing Once the masks are completed, wafer fabrication is performed. In wafer fabrication, patterns are written on silicon wafers based on masks to form semiconductor devices. After wafer fabrication is completed, device fabrication is performed. In device manufacturing, semiconductor devices are processed by attaching electrodes and forming protective layers.
  • Evaluation and Inspection Finally, the manufactured semiconductors are evaluated and inspected. In evaluation and inspection, it is checked whether the performance and quality of the semiconductors meet the specifications.

Here, evaluation and inspection tools are used to assess the performance and quality of the manufactured semiconductor devices.

AI has been applied to various areas of semiconductor design. Examples are described below.

  • Optimization AI can optimize semiconductor design. For example, it can automatically optimize circuit configuration and parameters to reduce power consumption and increase speed. Circuit simulation tools, physical simulation tools, and layout tools include such optimization tools.

Specific techniques for optimization include the SAT solver described in “About SAT (Boolean SAtisfiability),” the submodular optimization technique described in “Submodular Optimization and Machine Learning,” the deep learning technique described in “About Deep Learning,” and the graph data technology described in “Graph Data Processing Algorithms and Applications to Machine Learning/Artificial Intelligence Tasks.” Specifically, circuit configuration optimization using deep learning, circuit optimization using reinforcement learning, and metaheuristic optimization are being applied in practice.

For example, “Using Deep Reinforcement Learning to Automate Semiconductor Chip Design” describes the use of deep reinforcement learning to solve chip placement problems. In addition, “‘Machine Learning’ Heats Up Research on State-of-the-Art Semiconductor Circuits” lists “Machine Learning and Signal Processing” and “Wireless Technology” as sessions of interest at ISSCC (International Solid-State Circuits Conference), an international conference where research and development results on state-of-the-art semiconductor chips are presented.
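To make the metaheuristic-optimization idea concrete, here is a minimal sketch of placement optimization by simulated annealing, in the spirit of (but far simpler than) the chip-placement work cited above. The toy netlist, grid size, and cost function are illustrative assumptions, not a real EDA flow.

```python
import math
import random

# Minimal sketch of metaheuristic placement: simulated annealing that swaps
# cell positions on a grid to minimize half-perimeter wirelength (HPWL),
# a standard proxy cost in placement. Netlist and grid are toy assumptions.

def wirelength(pos, nets):
    """Total half-perimeter wirelength over all nets."""
    total = 0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(cells, nets, grid=8, iters=5000, seed=0):
    rng = random.Random(seed)
    # Start from a random legal placement on the grid.
    slots = rng.sample([(x, y) for x in range(grid) for y in range(grid)],
                       len(cells))
    pos = dict(zip(cells, slots))
    cost = wirelength(pos, nets)
    for i in range(iters):
        temp = 1.0 * (1 - i / iters) + 1e-3   # linear cooling schedule
        a, b = rng.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]       # propose a swap
        new_cost = wirelength(pos, nets)
        if new_cost > cost and rng.random() > math.exp((cost - new_cost) / temp):
            pos[a], pos[b] = pos[b], pos[a]   # reject: undo the swap
        else:
            cost = new_cost
    return pos, cost

cells = list("abcdef")
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f"), ("a", "f")]
placement, cost = anneal_placement(cells, nets)
print(cost)  # connected cells end up near each other, so the cost is small
```

The reinforcement-learning approaches replace the random-swap proposal with a learned policy, but the objective, minimizing wirelength and related costs over a discrete placement space, is the same.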

  • Pattern Matching In anomaly detection and troubleshooting, anomalous patterns can be detected. Automatic detection of abnormal patterns enables quality control and improvement of manufacturing processes. Technologies used for pattern matching include the deep learning technology described in “About Deep Learning” and the change detection technology described in “Anomaly Detection and Change Detection Technology.”

Specifically, VM (Virtual Metrology) technology, which virtually predicts the finished product by processing various measurement data using statistical methods, is described in “Virtual Metrology Technology in Semiconductor Manufacturing Factories,” and various examples are given in “Application of AI to Semiconductor Manufacturing Equipment.”
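As a minimal sketch of statistical anomaly detection on process measurement data, the following flags readings that deviate strongly from the batch mean. The film-thickness values are synthetic assumptions; real virtual-metrology flows use multivariate models over many sensor channels.

```python
# Minimal sketch of statistical anomaly detection on process measurements.
# A simple z-score rule: flag points far from the mean in units of the
# standard deviation. The readings below are synthetic, not real fab data.

def detect_anomalies(values, threshold=2.0):
    """Return indices of points more than `threshold` std devs from the mean.
    (A low threshold is used here because the sample is tiny.)"""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]

# Synthetic film-thickness readings (nm) with one drifted wafer.
readings = [100.1, 99.8, 100.3, 99.9, 100.0, 112.5, 100.2, 99.7]
print(detect_anomalies(readings))  # -> [5]
```

Production systems replace the z-score with learned models (autoencoders, change-point detectors, etc.), but the monitoring loop, measure, score, flag, feed back into process control, has the same shape.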

  • Predictive analytics AI can predict future semiconductor performance and reliability. Data analysis can identify potential problems and improvements. Statistical analysis and regression analysis techniques, such as those described in “General Machine Learning and Data Analysis,” are applied to these. Causal inference techniques such as those described in “Statistical Causal Inference and Search” can also be applied.
  • Self-learning The design process can be advanced by learning flow optimization as described in “Workflow & Service Technology.” For example, AI can automatically learn and automate design decisions that were previously made based on empirical rules.
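To illustrate the predictive-analytics item above, here is a minimal sketch of ordinary least-squares regression used to extrapolate device behaviour, in this case fitting leakage against temperature. The data points, units, and the 105 °C query point are synthetic assumptions.

```python
# Minimal sketch of predictive analytics via ordinary least squares:
# fit device leakage (arbitrary units) against junction temperature and
# extrapolate to an unmeasured condition. All data below is synthetic.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

temps = [25, 45, 65, 85]       # junction temperature (degrees C)
leak  = [1.0, 1.9, 3.1, 4.0]   # measured leakage (a.u.)
a, b = fit_line(temps, leak)
print(round(a * 105 + b, 2))   # predicted leakage at 105 degrees C -> 5.05
```

Real reliability models are nonlinear (leakage is roughly exponential in temperature) and multivariate, but the workflow, fit historical measurements, then predict performance at unseen operating points, is the one the text describes.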
About Semiconductor Chips for AI Applications

As mentioned in the article “U.S. and Chinese IT Giants Develop Original Semiconductor Chips Aimed at the AI Era,” major U.S. and Chinese IT (information technology) companies are now developing original chip designs for their products and services in earnest. In this section, we first describe the design characteristics of semiconductor designs for AI applications, and then discuss some specific examples.

First, the design features are as follows.

  • Architectural Design: In designing semiconductors for AI, the first thing to be done is to design an architecture to efficiently execute AI algorithms. For example, an architecture with many parallel processing units may be needed to process deep learning algorithms.
  • High speed: Semiconductors for AI need to be fast. To increase processing speed, measures such as raising the clock frequency of the semiconductor or designing processors with multiple cores will be necessary.
  • Efficiency: To run AI algorithms, large amounts of data must be processed. Therefore, it is important to reduce the power consumption of semiconductors. Low-power designs and the use of power-efficient arithmetic units will be effective.
  • Memory management: To execute AI algorithms, a large amount of data needs to be stored in memory. Therefore, attention must be paid to memory management in semiconductors. For example, a cache may be used in the design to enable fast memory access.
  • Data processing: In order to run AI algorithms, specific data types must be supported. For example, there are designs that allow for fast processing of floating point number operations, etc.
  • Software support: To execute AI algorithms, specific software libraries and frameworks (for example, Python-based deep learning frameworks) must be supported. Therefore, compatibility with software must be considered in semiconductor design.
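The data-processing point above, supporting specific data types, can be sketched concretely: AI chips often quantize weights to int8 so that the multiply-accumulate (MAC) arrays can use cheap integer arithmetic instead of floating point. The scale factor and the tiny dot product below are illustrative assumptions.

```python
# Minimal sketch of why AI chips favour specific data types: quantizing
# weights to int8 trades a little accuracy for much cheaper arithmetic.
# The scale scheme and the tiny layer below are illustrative assumptions.

def quantize(ws, levels=127.0):
    """Map float weights to signed 8-bit integers plus one scale factor."""
    m = max(abs(w) for w in ws)
    return [round(w / m * levels) for w in ws], m / levels

def int8_dot(q_ws, scale, xs):
    # Integer multiply-accumulate (what an NPU MAC array performs),
    # rescaled back to float once at the end.
    return sum(q * x for q, x in zip(q_ws, xs)) * scale

weights = [0.12, -0.5, 0.33, 0.7]
inputs  = [1.0, 2.0, -1.0, 0.5]
exact   = sum(w * x for w, x in zip(weights, inputs))
q_ws, scale = quantize(weights)
approx  = int8_dot(q_ws, scale, inputs)
print(round(exact, 3), round(approx, 3))  # int8 result closely tracks float
```

An integer MAC unit is much smaller and lower-power than a floating-point one, which is why the efficiency and data-processing requirements in the list above push AI chip designs toward reduced-precision formats (int8, bfloat16, and the like).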

Semiconductor chips specialized for AI applications are currently attracting a great deal of attention in the AI field because of their high-speed and power-saving processing capabilities compared to general-purpose processors such as GPUs and CPUs. Some specific examples are listed below.

  • NVIDIA Tesla GPU NVIDIA’s Tesla GPUs are designed to take advantage of the high parallel processing power of GPUs to perform training and inference of deep learning models quickly and efficiently. They are also equipped with special units called Tensor Cores that accelerate matrix operations.

  • Google TPU Google’s TPU (Tensor Processing Unit) is a semiconductor chip developed by Google for AI applications. It is also easy to use because it is embedded in the cloud services provided by Google Cloud.

  • Intel Nervana Neural Network Processor Intel’s Nervana NNP is a semiconductor chip dedicated to neural network training and inference, and it is expected to be used in the field of deep learning.

  • Qualcomm Snapdragon Neural Processing Engine Qualcomm’s Snapdragon Neural Processing Engine is a semiconductor chip for running deep learning models on mobile devices such as smartphones and tablets. The Snapdragon NPE is expected to be used in mobile applications and IoT devices because of its ability to perform fast and efficient computations.

Various other companies are also developing chips for AI applications. These AI-specific semiconductor chips are being considered for applications such as edge computing, where processing is completed at the point of data input, and further acceleration of conventional GPUs.

In addition, as reported in “Generative AI for Word and Photoshop, including NVIDIA’s new GPU to enhance inference performance, intensifies competition in the AI processor market,” NVIDIA announced the latest version of its GPU for AI, the NVIDIA H100 GPU (development code name “Hopper”), at the GTC (GPU Technology Conference) in 2022.

The DGX H100 learning supercomputer, equipped with eight of these NVIDIA H100 GPUs, began shipping in 2023. The use of such high-performance learning supercomputers will shorten the training time of models that have until now taken a long time to learn.

Also attracting attention are AI-specific processors and accelerators developed not only by NVIDIA but also by semiconductor ventures such as SambaNova and Tenstorrent, as well as accelerators from Habana Labs, which Intel has acquired and added to its lineup.

AI-specific processors and accelerators are characterized by their processing power per unit of power, or power efficiency, which is superior to that of general-purpose GPU processors, but on the other hand, they require the user to learn a unique programming model.

In addition, the development of NPUs (Neural Processing Units) on the edge side, i.e., in client PCs and smartphones, has been active in recent years. Specific examples include the NPU (which the company calls a DSP) built into Qualcomm’s smartphone SoC (Snapdragon 8 Gen 2), and Apple is also developing SoCs with built-in NPUs (the M1 and M2 chips).

Also in x86 processors, AMD has integrated a Xilinx-derived AI engine into the die of its Ryzen 7000 mobile processor (development code name Phoenix) so that it can be used as an NPU.

Intel is providing OEMs with a standalone NPU known by its development code name Keem Bay, and plans to integrate it into the CPU in its next-generation Meteor Lake product, which is expected to be available later this year.

Hardware approaches to machine learning technology are also discussed in “Thinking Machines: Machine Learning and Its Hardware Implementation” and “Introduction to FPGA for Software Engineers: Machine Learning Edition.” See also these documents.
