Specific Applications of Artificial Intelligence Technology for DX
Artificial intelligence technology refers to technology that allows computers and robots to imitate human intelligence and thought processes in order to carry out intellectual tasks previously performed by humans. It includes various technologies such as machine learning, deep learning, natural language processing, and image recognition.
Machine learning is a technology that uses large amounts of data to allow computers to learn by themselves and find patterns and regularities in the data. Deep learning is a type of machine learning that uses multi-layered neural networks and is highly accurate in areas such as image recognition and voice recognition.
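As a minimal sketch of what "finding patterns in data" means in practice, the following Python snippet fits a straight line to noisy observations; the data and the hidden pattern are invented for illustration only.

```python
import numpy as np

# Illustrative only: generate noisy observations of a hidden linear pattern.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.5 * x + 1.0 + rng.normal(0, 1.0, size=100)  # hidden pattern: y = 2.5x + 1

# "Learning" here is least-squares fitting: the program recovers the slope
# and intercept from the data without being told them in advance.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned pattern: y = {slope:.2f} * x + {intercept:.2f}")
```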
Natural language processing is a technology that enables computers to understand human language and is used for speech recognition and text analysis. Image recognition is a technology that enables computers to recognize objects and characters from images, and is used in such applications as self-driving cars and security systems.
Artificial intelligence technology has developed rapidly in recent years and is used in a variety of fields. Its applications are wide-ranging and include self-driving cars, the medical field, finance, marketing, and many others.
Some examples are listed below.
- Automated vehicles: By using AI technology to monitor the vehicle’s surroundings and perform the appropriate driving operations, vehicles can drive autonomously.
- Healthcare: AI-based image recognition is being used in the medical field to assist in diagnosis. Sensor technology is also evolving to monitor the health status of patients.
- Credit scoring: Using AI technology, banks and financial institutions can automatically assess a customer’s creditworthiness and assign a credit score.
- Security: Facial and behavioral recognition technologies based on AI technology are widely used in the security field. For example, they are used to apprehend criminals and provide security for buildings and airports.
- Cloud services: Automated systems and data analysis tools using AI technology are provided by cloud service providers to streamline the operations of companies and individuals.
- Personalized marketing: Data analysis using AI technology can identify users’ interests and preferences and display advertisements and content based on them.
- Education: AI technology-based e-learning platforms can provide learners with appropriate courses and materials to enable learning tailored to their individual progress.
In addition to the above, this blog discusses further uses of artificial intelligence technology, as shown below.
Application Examples
Prototyping has an important place in digital transformation (DX) as a process to rapidly realise and visualise ideas and concepts. Below are some specific benefits and methods of ‘visualisation’ through prototyping in DX.
In this article, we consider artificial intelligence (AI) technology from the perspective of the philosophy of the Tao. Considering AI technology from Taoist philosophy may provide unprecedented inspiration for the role of AI and its design principles. Since the Tao emphasises ‘natural flow’ and ‘harmony’, ideally adapting to the environment and circumstances without strain, the following perspectives are also important in how AI should be designed.
- Considering Hegel’s Phenomenology of Spirit and its application to AI technology
Consider Hegel’s account of the gradual development of human consciousness and knowledge from the perspective of applying it to AI learning and development. Specifically, we consider the idea of a gradual process by which AI advances in self-awareness and self-improvement, and an approach to designing and developing AI while interpreting the relationship between AI and humans in human society from a philosophical perspective.
- LIDAR (Light Detection and Ranging), generative AI and GNN
LIDAR (Light Detection and Ranging) is a technology that uses laser light to measure the distance to an object and to accurately determine the 3D shape of the surrounding environment and objects. This technology is used in a variety of fields, including automated driving, topographical surveying, archaeology and construction.
- Free will, AI technology and Zhuangzi’s freedom
When the soft-determinist idea of free will is considered in terms of the use of artificial intelligence technology, it becomes possible to derive options by which machines can ‘do otherwise’, beyond the options available to humans. Among these, what matters is not simply algorithms that machines can also realise, but ‘causal reasoning and considerations towards the realisation of strong AI’. If problems can be solved with algorithms grounded in deep imagination, and in models based on that imagination, as described in ‘Considerations for causal reasoning and strong AI’, then humans could play a role that machines cannot.
- The world is made of relations – Carlo Rovelli’s quantum theory and Indra’s net
Carlo Rovelli’s The World is Made of Relations (original title: *Helgoland*) presents a ‘relational interpretation’ of quantum mechanics and extends it to the origins of our world. This perspective of interpreting the world as a dynamic field of interaction can be applied to AI technology as follows.
- Towards building Compassionate AI and Empathetic AI
Compassionate AI and Empathetic AI refer to AI that has emotional understanding and compassion and aims to respond with consideration for the emotional and psychological state of the user. Such AI can build a relationship of trust with the user through emotion recognition and natural conversation and provide more personalised support, making it a technology of particular interest in fields where emotional support is required, such as healthcare, education, mental health, and customer service.
- How does the brain see the world?
The question ‘How does the brain see the world?’ has long been explored in fields such as neuroscience, psychology and philosophy, and provides insight into how the brain works to produce the world we perceive, interpret and are aware of. This section examines whether these perspectives are feasible in AI.
Domain-Driven Design (DDD) is a software design methodology based on an understanding of the business domain. This section describes DDD.
The following are sources of information on specific issues in DX (digital transformation). “Kaisha Shikiho: Sangyo Chizu” is one of the special issues of Kaisha Shikiho, a corporate information magazine published by Toyo Keizai Inc. It is published in March and September each year and can be purchased at bookstores or online for several thousand yen per copy.
The general steps of business analysis are as follows. This time, we will show a specific example of business problem solving in the manufacturing industry.
Artificial Intelligence (AI) has great influence in the field of education and has the potential to transform teaching methods and learning processes. Below we discuss several important aspects of AI and education.
ISO 31000 is an international standard for risk management, providing guidance and principles to help organisations manage risk effectively. The combination of ISO 31000 and AI technology is a highly effective approach to enhancing risk management and supporting more accurate decision-making. The use of AI technology makes the risk management process more efficient and effective in the following respects.
Bowtie analysis is a risk management technique that is used to organise risks in a visually understandable way. The name comes from the fact that the resulting diagram of the analysis resembles the shape of a bowtie. The combination of bowtie analysis with ontologies and AI technologies is a highly effective approach to enhance risk management and predictive analytics and to design effective responses to risk.
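As an illustration of the bowtie shape (threats on the left, consequences on the right, the top event at the knot, and barriers on each side), a minimal Python representation might look as follows; the scenario, event names, and barriers are all invented for the example.

```python
# A minimal, hypothetical bowtie for a storage-tank overfill scenario.
# Left side: threats with preventive barriers; right side: consequences
# with mitigative barriers; the top event sits at the knot of the bowtie.
bowtie = {
    "top_event": "Loss of containment from storage tank",
    "threats": [
        {"threat": "Overfilling during transfer",
         "preventive_barriers": ["Level alarm", "Automatic shutoff valve"]},
        {"threat": "Tank corrosion",
         "preventive_barriers": ["Inspection program", "Corrosion coating"]},
    ],
    "consequences": [
        {"consequence": "Fire or explosion",
         "mitigative_barriers": ["Gas detection", "Firefighting system"]},
        {"consequence": "Environmental contamination",
         "mitigative_barriers": ["Bund wall", "Spill response plan"]},
    ],
}

# Walking the structure reproduces the left-to-right reading of the diagram.
for t in bowtie["threats"]:
    print(f'{t["threat"]} -> [{", ".join(t["preventive_barriers"])}] -> {bowtie["top_event"]}')
for c in bowtie["consequences"]:
    print(f'{bowtie["top_event"]} -> [{", ".join(c["mitigative_barriers"])}] -> {c["consequence"]}')
```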
A ‘place’ is a location or situation where people gather and can share opinions and feelings and exchange values through communication. It includes physical places such as workplaces, schools and homes; places with a specific purpose or awareness, such as meetings and events; and even virtual spaces such as the internet and the metaverse described in ‘History and challenges of the metaverse and AI support’. A ‘place’ is not just a space, but also a ‘situation’ or ‘opportunity’ for the people, objects, ideas and energies that gather there to interact and create new values and meanings.
Implementing an ontology-based data integration and decision-making system in product design and obsolescence management is a way to efficiently manage complex information and support decision-making.
- Auto-Grading (automatic grading) technology
Auto-grading refers to the process of using computer programmes and algorithms to automatically assess and score learning activities and assessment tasks. This technology is mainly used in the fields of education and assessment.
The Sciences of the Artificial (1969) is a book by Herbert A. Simon on learning science and artificial intelligence that has particularly influenced design theory. The book is concerned with how man-made phenomena should be categorised and discusses whether such phenomena belong in the realm of ‘science’.
The service for modelling product properties and functions using Graph Neural Networks (GNNs) and predicting market reactions and demand fluctuations is outlined below.
The service for designing new materials and predicting their properties using Graph Neural Networks (GNNs) is aimed at increasing the efficiency of research and development in the field of materials science, reducing costs and rapidly discovering new high-performance materials. It uses GNNs to model the properties and structure of materials and can assist in the design of new materials and the prediction of their properties.
The service that models each stage of the manufacturing process using Graph Neural Networks (GNNs) and optimises the design and operation of the production line is outlined below.
- Semiconductor technology and GNNs
A GNN is a deep learning technology for handling graph data: for graph structures represented by nodes (vertices) and edges (links), it learns the features of nodes and edges while taking directed/undirected relationships into account. GNNs can capture complex interdependencies between nodes and are being considered for application in various domains, making them a powerful machine learning method that can be applied to many aspects of semiconductor technology. In this article, specific applications of GNNs to semiconductor technology will be discussed.
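As a minimal sketch of the message passing that underlies GNNs, the following numpy snippet implements one graph-convolution layer of the form H' = ReLU(ÂXW), with Â = D^(-1/2)(A+I)D^(-1/2), the normalised-adjacency propagation rule popularised by graph convolutional networks; the toy graph and random weights are illustrative only.

```python
import numpy as np

# Toy undirected graph with 4 nodes: edges (0-1), (1-2), (2-3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))  # 3 input features per node
W = np.random.default_rng(1).normal(size=(3, 2))  # project to 2 output features

# Normalised adjacency with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(4)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# One GNN layer: each node aggregates its neighbours' features, then
# applies a learned linear map and a nonlinearity.
H = np.maximum(A_hat @ X @ W, 0.0)  # ReLU
print(H)  # shape (4, 2): new node embeddings reflecting graph structure
```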
As mentioned in the previous article “Fusion of Plant Engineering Ontology ISO15926 and AI Technology,” plant engineering is a complex technology involving many elements and requiring a vast amount of knowledge data, so ontology technology is being actively applied. In this article, I would like to discuss the application of ontology technology to plant engineering from an operational perspective.
IA (Intelligence Augmentation) is a term that refers to the use of computers and other technologies to augment human intelligence. In other words, IA can be described as using computers to supplement and extend human intelligence, providing analysis and decision-making support to improve human capabilities and combining human and computer power to create more powerful intellectual capabilities. Depending on how the meaning is taken, this term corresponds to the whole area called DX.
In contrast, Artificial Intelligence (AI) refers to the technology and concept of using computers and other machines to realize human intelligence and behavior. AI is evolving in areas such as machine learning, deep learning, natural language processing, and computer vision, and, whether or not it has yet been fully realized, AI can be defined as the ability of machines to solve problems autonomously.
To “be aware” means to observe or perceive something carefully; when a person notices a situation or thing, it means that he or she has registered some information or phenomenon and has a feeling or understanding about it. Becoming aware is an important process of gaining new information and understanding by paying attention to changes and events in the external world. In this article, I will discuss this awareness and the application of artificial intelligence technology to it.
RFID is an abbreviation for “Radio Frequency Identification,” a technology that uses wireless communications to read identification information on goods, animals, etc. This RFID system mainly consists of three elements: RFID tags, RFID readers, and a central database. RFID is used in various fields such as logistics, agriculture, medicine, and manufacturing. Furthermore, combining RFID technology with AI technology is expected to optimize and streamline business processes.
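As a minimal, hypothetical sketch of this three-element architecture (tags, readers, central database), the following Python snippet matches simulated reader events against a central product database; all tag IDs, reader names, and records are invented for the example.

```python
# Hypothetical central database mapping tag IDs to product records.
central_db = {
    "E200-0001": {"product": "Pallet A", "location": None},
    "E200-0002": {"product": "Pallet B", "location": None},
}

# Simulated read events as they might arrive from RFID readers at gates.
read_events = [
    {"tag_id": "E200-0001", "reader": "warehouse-gate-1"},
    {"tag_id": "E200-0002", "reader": "loading-dock-3"},
    {"tag_id": "E200-9999", "reader": "warehouse-gate-1"},  # unknown tag
]

for event in read_events:
    record = central_db.get(event["tag_id"])
    if record is None:
        print(f'ALERT: unknown tag {event["tag_id"]} at {event["reader"]}')
    else:
        record["location"] = event["reader"]  # update last-seen location
        print(f'{record["product"]} seen at {event["reader"]}')
```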
This section describes WoT (Web of Things) technology, which is used in artificial intelligence and IoT technologies. WoT is an abbreviation for Web of Things, a set of standards defined by the W3C, the web standards organization, to solve existing IoT issues.
WoT addresses one of the challenges of the IoT, the lack of compatibility (at present, sensors, platforms and operating systems often work only with certain systems). By building on web technologies that are already widely used (HTML, JavaScript, JSON, etc.) and their protocols to provide IoT services and applications, WoT can increase the interoperability of devices, add features such as security and access control at the application level, and enable the semantic use of data in combination with Semantic Web technologies. The goal is to enable the creation of a wide variety of services.
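As a rough sketch, the W3C WoT describes a device through a “Thing Description”, a JSON(-LD) document listing its properties, actions, and events. The abridged example below, written as a Python dict, is illustrative only: the URLs and field values are invented, and a normative Thing Description includes security definitions and richer metadata per the W3C specification.

```python
import json

# Abridged, illustrative Thing Description for a hypothetical sensor.
# See the W3C WoT Thing Description specification for the normative format.
thing_description = {
    "@context": "https://www.w3.org/2019/wot/td/v1",  # TD 1.0 context
    "title": "ExampleTemperatureSensor",
    "properties": {
        "temperature": {
            "type": "number",
            "unit": "celsius",
            "forms": [{"href": "https://sensor.example.com/temp"}],
        }
    },
    "events": {
        "overheated": {
            "forms": [{"href": "https://sensor.example.com/events/overheat"}]
        }
    },
}
print(json.dumps(thing_description, indent=2))
```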
The satisfiability problem of propositional logic (SAT: Boolean Satisfiability) is the problem of determining whether or not there exists a variable assignment for which a given propositional logic formula is true. For example, for the problem “does there exist an assignment to A, B, C, D, E, and F such that A and (B or C) and (D or E or F) is true?”, the problem is converted into a propositional logic formula and it is determined whether the formula is satisfiable.
Such problem settings play an important role in many application fields, for example circuit design, program analysis, problems in the field of artificial intelligence, and cryptography. From a theoretical standpoint, the SAT problem is known to be NP-complete, and no efficient solution for large-scale problems has been found on current computers. It therefore remains a field of active research into improving algorithmic efficiency, such as increasing speed and developing heuristic search algorithms.
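As a minimal illustration of the example above, the following Python snippet brute-forces all 2^6 assignments of A..F and checks the formula A ∧ (B ∨ C) ∧ (D ∨ E ∨ F). This is only a didactic sketch: exhaustive enumeration is exponential, which is exactly why practical solvers use far more sophisticated search (e.g., DPLL/CDCL-style algorithms).

```python
from itertools import product

# Brute-force SAT check for: A and (B or C) and (D or E or F).
def formula(a, b, c, d, e, f):
    return a and (b or c) and (d or e or f)

# Enumerate all 2^6 truth assignments and keep the satisfying ones.
satisfying = [assign for assign in product([False, True], repeat=6)
              if formula(*assign)]
print(f"satisfiable: {bool(satisfying)}")
print(f"one model (A..F): {satisfying[0]}")
print(f"number of models: {len(satisfying)}")
```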
The topic of Daniel Metz’s dissertation at the Business & Information Systems Engineering (BISE) Institute at Siegen University is an analysis of the Real Time Enterprise (RTE) concept and supporting technologies over the past decade. Its main objective is to identify shortcomings. Subsequently, leveraging the Event Driven Architecture (EDA) and Complex Event Processing (CEP) paradigms, a reference architecture was developed that overcomes the gaps in temporal and semantic vertical integration across different enterprise levels that are essential to realize the RTE concept. The developed reference architecture has been implemented and validated in a foundry with typical characteristics of SMEs.
3.5 Billion Potential Customers Await! In addition to explaining the current chatbot boom, this book provides a detailed explanation, with examples, of the changes in the Web and the business developments expected as chatbots become more widespread.
The book describes various case studies from the perspective of the chatbot business. In terms of technology, it focuses on simple rule-based configurations (some AI chatbots also include natural language processing technology).
Virtual currencies such as Bitcoin and the blockchain technology that supports them are extremely novel. They have the potential to significantly change the basic structure of society.
They are still sometimes seen as dubious due to the collapse of bitcoin exchange operators and similar incidents.
Bitcoin and blockchain technology, however, have an impact not only on the financial sector, but also on various other industries. What kind of business is about to be created, what kind of technology makes it possible, and how is the Japanese legal system responding?
In this book, experts in business and technology development in Bitcoin and blockchain technology share the know-how and knowledge gained through their practical experience, with the aim of developing the industry, not only with financial experts but also with those involved in new business development and business planning.
This book presents the latest findings on the theory and methods of ontology modeling in the integrity management of physical assets with emphasis on interoperability and heterogeneity in systems consisting of multiple subsystems.
The contents include the plant ontology ISO 15926 in Chapter 1, smart buildings and ontology in Chapter 2, failure/risk analysis and ontologies such as FMEA and HAZID in Chapter 3, product data integration and production design in the enterprise in Chapter 4, and interactive fault diagnosis systems in the ship domain in Chapter 5. Chapter 6 discusses several risk diagnosis systems, Chapter 7 discusses cost analysis tools in product service systems, and the last chapter, Chapter 8, discusses plant equipment diagnosis systems.
ISO 15926 is a platform for data integration, exchange and sharing, originally developed for the purpose of “industrial automation systems and integration – integration of life cycle data for process plants, including oil and gas production facilities”. The content of the standard is specified from Part 1 to Part 13, with descriptions related to ontology modeling from Part 4 to Part 8.
The main users are plant engineering companies in the U.S. and Europe: FIATECH (Fully Integrated & Automated Technologies) in the U.S., and the POSC Caesar Association (PCA) in Europe, which provides not only ontologies but also systems that utilize them. A large body of data has been constructed, and together with Siemens’ PLM XML it serves as reference material when constructing ontologies for physical assets in the manufacturing industry.
This paper describes buildingSMART and Industry Foundation Classes (IFC). buildingSMART defines specifications for the systematic representation of all objects that make up a building (e.g., elements such as doors, windows, and walls), and Industry Foundation Classes (IFC) summarizes these specifications. While smart city is an application that deals with the entire city from the user’s point of view, smart building is an application that takes the architect’s point of view, sharing building data such as BIM and building materials within buildingSMART.
FMEA stands for Failure Mode and Effect Analysis, and is a systematic method of analyzing potential failures for the purpose of preventing failure problems.
FTA (Fault Tree Analysis) is a related failure analysis method, but FTA is top-down: an undesirable event of the product is first assumed, and the possible paths leading to failure or accident are described in a tree structure together with their probabilities of occurrence. FMEA, by contrast, is a bottom-up analysis method that starts not from the failure itself (the loss of function) but from the failure modes that cause it. Specifically, after organizing the system information (structure, functions, components, etc.) in preparation, an FMEA sheet is prepared that lists the assumed failure modes and their effects.
HAZID is an abbreviation for Hazard Identification Study, a method for the safety assessment of plants and systems that identifies potential risks (hazards) and evaluates their magnitude. Hazards are identified using the What-if method or its improved version, the Structured What-If Technique (SWIFT), in which a structured worksheet is used for brainstorming based on questions such as “What if”, “How could”, and “Is it possible”, in order to anticipate various problems in advance.
This paper introduces FMEA and HAZID, followed by specific examples of combining them with ontology.
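As a minimal sketch of the FMEA sheet described above, the following Python snippet ranks hypothetical failure modes by Risk Priority Number (RPN = severity × occurrence × detection, each conventionally rated on a 1-10 scale); the components, failure modes, and ratings are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    effect: str
    severity: int    # 1-10: how serious the effect is
    occurrence: int  # 1-10: how likely the cause is
    detection: int   # 1-10: how hard it is to detect (10 = hardest)

    @property
    def rpn(self) -> int:
        # Risk Priority Number, the conventional FMEA ranking metric.
        return self.severity * self.occurrence * self.detection

# Invented entries for illustration only.
sheet = [
    FailureMode("Pump seal", "Leakage", "Fluid loss, contamination", 7, 4, 3),
    FailureMode("Motor bearing", "Wear", "Vibration, eventual seizure", 8, 3, 5),
    FailureMode("Control relay", "Sticking", "Pump fails to stop", 9, 2, 6),
]

# Highest-RPN failure modes are addressed first.
for fm in sorted(sheet, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.component}: {fm.mode} (RPN={fm.rpn})")
```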
In this paper, product design and data integration using ontology are described. In particular, it describes data integration and decision making using ontology as a countermeasure against DMSMS (Diminishing Manufacturing Sources and Material Shortages), which is closely related to production planning.
In order to minimize the effects of unexpected system failures, the efficiency of fault diagnosis must be improved. With classical diagnostic techniques, unexpected events are detected from a local perspective, i.e., at the equipment level. For complex systems, however, classical techniques are of little use, because the system as a whole may not be monitored and performance may vary due to interactions between the equipment and its environment.
Maintenance personnel use their knowledge of component degradation mechanisms, built on multiple technologies of mechanical, electrical, electronic, or software nature, to formulate hypotheses about the causes of failures and performance anomalies based on the occurrence of symptoms.
An ontology-based approach to these problems is described for the case of a ship.
A Product Service System (PSS) is a method for achieving sustainable consumption: it provides services to customers based on the idea of “selling services, not goods”. Companies in Europe, the U.S., and other parts of the world are shifting from product sales to product-service systems, with some companies aiming to reduce their environmental impact and efficiently fulfill their EPR (extended producer responsibility).
This paper describes an ontology-based approach to estimating the cost of this PSS.
This section describes the methods, tools, and languages used to develop ontologies for creating semantically enhanced legal knowledge systems and web-based applications, which aim to improve the searchability of legal information and the reusability of legal knowledge by enabling information interoperability through the application of ontologies and Semantic Web technologies. As a case study, the development of the Ontology of Professional Judicial Knowledge (OPJK) is presented.
This book describes an enterprise ontology as a tool for analyzing, redesigning, and re-engineering an enterprise. It is an integrated ontology that covers a number of issues such as business processes, in- and out-sourcing, information systems, business management, and staffing.
The content includes an overview of ontology; modeling of information transactions and processes between customers and the club, using the Volley tennis club as a concrete example; and the characteristics of the various modeling aspects (operations, transactions, components, organizations, etc.) needed to actually build ontologies, together with the characteristics and application of the models used as methodology, such as the interaction model, process model, action model, state model, and interstriction model.
The field of Business Intelligence (BI) is expected to benefit from the application of semantic technologies. Semantic BI can be viewed as the convergence of semantics-based enterprise content management and business intelligence. Traditional BI solutions rely on extracting data from one or more data silos, performing analysis on this data, and presenting the key results to business users. With the increasing need to provide real-time information, manual and intensive preparation processes create bottlenecks. In addition, the inclusion of unstructured data such as emails and news feeds may provide a more complete picture, confirming the need to extract knowledge from these and quickly integrate new sources.
There are various research challenges in semantic BI. The knowledge extracted from unstructured sources must be of sufficient quality to be usable in critical BI systems and to be analyzed together with structured data. Correlation of knowledge extracted from different data modalities is also important. The representation, storage, and reuse of the results of BI processes through ontologies is an additional challenge.
Going one step further than “Introduction to FPGAs for Software Engineers: Machine Learning”, I would like to discuss the process of designing semiconductor chips, as described in “Computational Elements of Computers and Semiconductor Chips”, and semiconductor chips specialized for AI applications.
In this issue, we will discuss semiconductor manufacturing technology. Semiconductor manufacturing can be broadly classified into two categories: front-end processes and back-end processes. Front-end processes include wafer fabrication, cleaning, film deposition, lithography, etching, and impurity diffusion, while back-end processes include dicing, mounting, bonding, molding, marking, bumping, and packaging.
Search Technology
The first technology to be considered for DX is search technology.
The following pages of this blog discuss various technologies related to search technology.
Recommendation Technology
In DX, natural language processing will be used in many situations, and among its applications, recommendation technology is one of the most frequently used.
In this blog, specific implementation and theory of this recommendation technology are described in the following pages.
Basic Machine Learning and Data Analysis
The following pages of this blog summarize some of the main tasks in machine learning. The basic algorithms are: regression, which finds a function to predict a continuous output from an input; classification, which restricts the output to a finite set of symbols; clustering, which divides the input data into K sets according to some criterion; and linear dimensionality reduction, which is used to process the high-dimensional data commonly found in real systems.
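As a compact sketch of these four tasks side by side, the following snippet uses scikit-learn on synthetic data; the dataset shapes and hyperparameters are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 samples, 10 features

# Regression: predict a continuous output from the input.
y_cont = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=200)
reg = LinearRegression().fit(X, y_cont)

# Classification: restrict the output to a finite set of symbols (here 0/1).
y_cls = (y_cont > 0).astype(int)
clf = LogisticRegression().fit(X, y_cls)

# Clustering: divide the inputs into K sets according to a criterion.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# Linear dimensionality reduction: compress high-dimensional data.
X_low = PCA(n_components=2).fit_transform(X)

print(reg.score(X, y_cont), clf.score(X, y_cls), clusters[:5], X_low.shape)
```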
Deep Learning
Deep learning is characterized by its ability to automatically extract features through learning from large amounts of data. There is therefore no need to design features manually, and it can be highly versatile compared with conventional machine learning algorithms. In addition, Python tools such as TensorFlow/Keras and PyTorch are available, and model construction and training can be achieved relatively easily by using them.
The following pages of this blog summarize the theory of these deep learning techniques and their implementation in various fields such as image recognition, speech recognition, and natural language processing, especially using Python tools such as TensorFlow/Keras and PyTorch.
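As a minimal sketch of how such tools keep model construction simple, the following TensorFlow/Keras snippet defines and trains a small classifier on random stand-in data; the architecture, data, and hyperparameters are illustrative only.

```python
import numpy as np
import tensorflow as tf

# Random stand-in data: 1000 samples, 20 features, 10 classes.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 10, size=(1000,))

# A small fully connected network, built layer by layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]
```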
User Interface and Data Visualization
Using a computer to process data is equivalent to creating value by visualizing the structure within the data. Moreover, data can be interpreted in multiple ways from multiple perspectives, and in order to visualize these interpretations we need a well-designed user interface.
In the following pages of this blog, I will discuss various examples of this user interface.
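As a small illustration of visualizing structure in data, the following snippet plots two synthetic clusters with matplotlib; the data are randomly generated for the example, and the point is that structure which is hard to see in a table of numbers becomes obvious in a plot.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Two synthetic clusters in 2D.
a = rng.normal(loc=(0, 0), scale=0.5, size=(100, 2))
b = rng.normal(loc=(3, 3), scale=0.5, size=(100, 2))

plt.scatter(a[:, 0], a[:, 1], label="cluster A")
plt.scatter(b[:, 0], b[:, 1], label="cluster B")
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.legend()
plt.title("Visualizing structure in data")
plt.show()
```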