Knowledge Information Processing Technologies


About Knowledge Information Processing Technologies

According to Wikipedia, knowledge is the result of cognition: the ideas and skills one has about people and things.

The term is almost synonymous with “cognition,” but cognition is primarily a philosophical term, while knowledge mainly denotes the “results” obtained through cognition.

According to the Oxford English Dictionary, knowledge in English is defined as follows.

  1. Expertise and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject.
  2. What is known about a particular field or in general. Facts and information.
  3. Awareness or knowledge gained by experiencing a fact or situation.

Human beings have discussed knowledge since antiquity: the “tree of knowledge of good and evil” appears in the story of Adam and Eve in the Book of Genesis of the Old Testament, and each religious tradition has its own ideas about knowledge. The first philosophical treatment is usually credited to Plato in ancient Greece, who characterized knowledge as “justified true belief,” and philosophical debate has continued to the present day. In the sixteenth and seventeenth centuries, Francis Bacon examined methods of knowledge acquisition, and his ideas played a major role in the establishment of modern science. In modern psychology, knowledge acquisition is understood to involve complex cognitive processes such as perception, memory, experience, communication, association, and reasoning.

Even now there is no single definition of knowledge on which everyone agrees; different academic fields offer different theories, some of them mutually opposed. The following are some common ways of categorizing knowledge.

When knowledge is treated as long-term memory, it may be classified, as in the classification of memory, into representational “declarative knowledge” and behavioral “procedural knowledge.” Examples of declarative knowledge include knowledge of scientific laws (e.g., the gravitational constant on Earth) and knowledge of social conventions (e.g., “the capital of Japan is Tokyo”). Examples of procedural knowledge include how to use chopsticks, how to play the piano, and how to drive a car. The former is sometimes referred to as “knowing that” and the latter as “knowing how.”

From the perspective of formalization and transmission, knowledge can be classified into “explicit (formal) knowledge” and “tacit knowledge,” a classification used in the world of knowledge management. Tacit knowledge is knowledge that is impossible or extremely difficult to describe declaratively; procedural knowledge and intuitive cognitive content are considered tacit. For example, everyone has knowledge about “beauty,” but it cannot be clearly defined.

From a philosophical or biological standpoint, knowledge we are born with is sometimes categorized as “a priori knowledge,” and knowledge we acquire after birth through social life as “a posteriori knowledge.” Whether a priori knowledge exists has been a long-standing issue in epistemology. In the continental rationalist tradition, Descartes and others accepted some kind of a priori knowledge, a position called innatism. In British empiricism, Locke and others denied the existence of a priori knowledge and regarded the mind as a blank slate (tabula rasa).

Knowledge is also sometimes divided into theoretical and practical knowledge, a distinction between the knowledge of the philosopher and the knowledge of the practitioner, and between “science” (scientia) and “art” (ars).

This blog discusses information and communication technology (ICT) approaches to knowledge, as follows.

Artificial General Intelligence (AGI) refers to AI systems with general intelligence similar to human intelligence, able to handle a variety of tasks. Whereas current AI systems specialise in specific tasks using dedicated models, AGI aims to be flexible enough to perform many different tasks. Knowledge information processing in AI deals with vast amounts of data, performing tasks such as extraction, classification, reasoning, and interpretation; AGI attempts to integrate these techniques to carry out multiple tasks at a level comparable to human capabilities. Graph data, which represents information as nodes and edges, is crucial in AI for understanding relationships and patterns, and AGI aims to use graph data effectively to extract advanced knowledge from large data sets. This post focuses on recent papers presented at international conferences that highlight advances in knowledge information processing and machine learning using graph data.

Bowtie analysis is a risk management technique that is used to organise risks in a visually understandable way. The name comes from the fact that the resulting diagram of the analysis resembles the shape of a bowtie. The combination of bowtie analysis with ontologies and AI technologies is a highly effective approach to enhance risk management and predictive analytics and to design effective responses to risk.

Generative AI refers to artificial intelligence technologies that generate new content such as text, images, audio and video. Since generative AI (e.g. image-generating AI and text-generating AI) produces new content based on given instructions (prompts), the quality and appropriateness of the prompts are key to maximising AI performance.

Ontology-Based Data Access (OBDA) is a method that allows queries over data stored in different formats and locations through a unified conceptual view provided by an ontology. The aim is semantic integration of the data and access to it in a form that users can easily understand.
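As a rough illustration of the OBDA idea, the sketch below builds a tiny RDF graph with rdflib and runs one SPARQL query against a shared conceptual vocabulary; the `ex:` namespace, class, and property names are illustrative assumptions, and a full OBDA system would map relational sources into this view rather than asserting triples directly.

```python
# A minimal sketch of the OBDA idea using rdflib: the ontology vocabulary
# (ex:Employee, ex:worksFor) gives a unified view that queries are written
# against, regardless of how the underlying records were originally stored.
# The namespace and property names here are illustrative assumptions.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# Data that might have come from two different source systems,
# lifted into one conceptual vocabulary.
g.add((EX.alice, RDF.type, EX.Employee))
g.add((EX.alice, EX.worksFor, EX.SalesDept))
g.add((EX.bob, RDF.type, EX.Employee))
g.add((EX.bob, EX.worksFor, EX.ResearchDept))

# One SPARQL query against the shared conceptual view.
query = """
PREFIX ex: <http://example.org/>
SELECT ?person ?dept WHERE {
    ?person a ex:Employee ;
            ex:worksFor ?dept .
}
"""
for person, dept in g.query(query):
    print(person, dept)
```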

Implementation

Case-based reasoning is a technique for finding appropriate solutions to similar problems by referring to past problem-solving experience and case studies. This section provides an overview of this case-based reasoning technique, its challenges, and various implementations.
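As a minimal sketch of the retrieve step of the CBR cycle (retrieve, reuse, revise, retain), the toy code below finds the stored case nearest to a new problem; the case base, feature vectors, and distance-based similarity are all illustrative assumptions.

```python
# A minimal sketch of case-based retrieval, the first step of the CBR cycle.
# Each case is a feature vector plus a solution, and similarity is negative
# Euclidean distance; both choices are toy assumptions.
import math

case_base = [
    {"features": [1.0, 0.2], "solution": "plan A"},
    {"features": [0.1, 0.9], "solution": "plan B"},
    {"features": [0.8, 0.5], "solution": "plan C"},
]

def similarity(x, y):
    return -math.dist(x, y)  # closer cases are more similar

def retrieve(problem):
    """Return the stored case most similar to the new problem."""
    return max(case_base, key=lambda c: similarity(c["features"], problem))

new_problem = [0.9, 0.4]
best = retrieve(new_problem)
print(best["solution"])  # reuse: adapt this solution to the new problem
```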

A knowledge graph is a graph structure that represents information as a set of related nodes (vertices) and edges (connections); it is a data structure used to connect information on different subjects or domains and visualize their relationships. This article outlines various methods for automatically generating knowledge graphs and describes specific implementations in Python.

A knowledge graph is a graph structure that represents information as a set of related nodes (vertices) and edges (connections); it is a data structure used to connect information on different subjects or domains and visualize their relationships. This section describes various applications of knowledge graphs and concrete examples of their implementation in Python.
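As a minimal sketch of representing a knowledge graph in Python, the code below stores subject-relation-object triples as a labeled directed graph with NetworkX; the triples themselves are toy assumptions standing in for automatically extracted facts.

```python
# A minimal sketch of building and inspecting a small knowledge graph as a
# labeled directed graph. The entities and relations are toy assumptions;
# real pipelines would extract triples from text or databases.
import networkx as nx

triples = [
    ("Mt. Fuji", "located_in", "Japan"),
    ("Tokyo", "capital_of", "Japan"),
    ("Japan", "part_of", "Asia"),
]

kg = nx.DiGraph()
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

# Traverse the graph: what do we know about "Japan"?
for subj, obj, data in kg.edges(data=True):
    if "Japan" in (subj, obj):
        print(subj, data["relation"], obj)
```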

Implementing an ontology-based data integration and decision-making system in product design and obsolescence management is a way to efficiently manage complex information and support decision-making.

SNAP (Stanford Network Analysis Platform) is an open-source software library developed at Stanford University that provides tools and resources for a variety of network-related research, including social network analysis, graph theory, and computer network analysis.

CDLib (Community Discovery Library) is a Python library of community detection algorithms. It offers a wide range of methods for identifying community structure in graph data and supports researchers and data scientists in tackling different community detection tasks.

MODULAR is one of the methods and tools used in computer science and network science to solve multi-objective optimization problems on complex networks; the approach is designed to optimize the structure and dynamics of a network simultaneously, taking several different objective functions into account (multi-objective optimization).

The Louvain method (or Louvain algorithm) is an effective graph clustering algorithm for identifying communities (clusters) in a network. It takes an approach that maximizes a measure called modularity to uncover community structure.
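A minimal sketch of the Louvain method in practice, assuming NetworkX 2.8 or later (which ships `louvain_communities`); the karate club graph is a standard toy network.

```python
# A minimal sketch of Louvain community detection with NetworkX (>= 2.8).
import networkx as nx

G = nx.karate_club_graph()
communities = nx.community.louvain_communities(G, seed=42)

for i, com in enumerate(communities):
    print(f"community {i}: {sorted(com)}")

# Modularity of the partition the algorithm maximized.
print("modularity:", nx.community.modularity(G, communities))
```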

Infomap is a community detection algorithm used to identify communities (modules) in a network. It is based on the map equation, an information-theoretic objective, and focuses on optimizing the flow and structure of information.

COPRA (Community Overlap PRopagation Algorithm) is an algorithm and tool for detecting communities in complex networks that allows a given node to belong to multiple communities. Using partial community membership information, COPRA is suited to realistic scenarios in which each node can belong to several communities.

D3.js and React, which are based on JavaScript, can be used as tools for visualizing relational data such as graph data. In this article, we discuss specific implementations using D3 and React for 2D and 3D graph displays, and for heat maps as a way of displaying relational data.

Displaying and animating graph snapshots on a timeline is an important technique for analyzing graph data, as it helps visualize changes over time and understand the dynamic characteristics of graph data. This section describes libraries and implementation examples used for these purposes.

This article describes how to create animations of graphs by combining NetworkX and Matplotlib, a technique for visually representing dynamic changes in networks in Python.
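A minimal sketch of this technique: each animation frame adds one edge and redraws the graph with a fixed layout. The edge sequence is a toy assumption standing in for timestamped graph data.

```python
# A minimal sketch of animating a changing graph with NetworkX and
# Matplotlib's FuncAnimation: each frame adds one edge and redraws.
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import networkx as nx

edges_over_time = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]

G = nx.Graph()
G.add_nodes_from(range(4))
pos = nx.spring_layout(G, seed=7)  # fix positions so frames are comparable

fig, ax = plt.subplots()

def update(frame):
    ax.clear()
    G.add_edge(*edges_over_time[frame])
    nx.draw(G, pos=pos, ax=ax, with_labels=True)
    ax.set_title(f"t = {frame}")

anim = animation.FuncAnimation(fig, update, frames=len(edges_over_time),
                               interval=800, repeat=False)
plt.show()  # or anim.save("graph.gif") with a writer such as pillow
```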

Methods for plotting high-dimensional data in low dimensions using dimensionality reduction techniques to facilitate visualization are useful for many data analysis tasks, such as data understanding, clustering, anomaly detection, and feature selection. This section describes the major dimensionality reduction techniques and their methods.
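As a minimal sketch, the snippet below projects random 50-dimensional data to 2D with scikit-learn's PCA; t-SNE or UMAP (via the separate umap-learn package) can be swapped in with the same fit-transform pattern. The random data is an illustrative assumption.

```python
# A minimal sketch of projecting high-dimensional data to 2D with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))       # 200 points in 50 dimensions

X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)                      # (200, 2), ready for a scatter plot
```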

Gephi is an open-source graph visualization software that is particularly suitable for network analysis and visualization of complex data sets. Here we describe the basic steps and functionality for visualizing data using Gephi.

Cytoscape.js is a graph theory library written in JavaScript that is widely used for visualizing network and graph data. Cytoscape.js makes it possible to add graph and network data visualization to web and desktop applications. Here are the basic steps and example code for data visualization using Cytoscape.js.

Sigma.js is a web-based graph visualization library that can be a useful tool for creating interactive network diagrams. Here we describe the basic steps and functions for visualizing graph data using Sigma.js.

The satisfiability problem of propositional logic (SAT: Boolean Satisfiability) asks whether there exists a variable assignment that makes a given propositional formula true. For example, given the question “does there exist an assignment to A, B, C, D, E, and F such that A and (B or C) and (D or E or F) is true?”, the problem is converted into a propositional formula and its satisfiability is determined.

This problem setting plays an important role in many application fields, such as circuit design, program analysis, artificial intelligence, and cryptography. On the theoretical side, SAT is known to be NP-complete, and no efficient algorithm for large-scale instances is known. It therefore remains an active field of research, with ongoing work on faster solvers and heuristic search algorithms.
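As a minimal sketch of the problem itself (not of a real solver), the code below brute-forces all 2^6 assignments of the example formula above; production solvers, such as those wrapped by the python-sat package, use far smarter search.

```python
# A minimal brute-force check of the example formula
# A and (B or C) and (D or E or F), enumerating all 2^6 assignments.
from itertools import product

def formula(a, b, c, d, e, f):
    return a and (b or c) and (d or e or f)

satisfying = [bits for bits in product([False, True], repeat=6)
              if formula(*bits)]
print(f"{len(satisfying)} of {2**6} assignments satisfy the formula")
print("one model:", satisfying[0])
```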

  • General Problem Solver and Application Examples, Implementation Examples in LISP and Python

The general problem solver takes as input a description of the problem and its constraints, and executes algorithms to find an optimal or valid solution. These algorithms vary with the nature and constraints of the problem, and include numerical optimization, constraint satisfaction, machine learning, and search algorithms. This section describes example implementations of GPS in LISP and Python.
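As a minimal Python sketch of the general-problem-solver pattern, the code below treats a problem as a start state, a goal test, and a successor function, and solves the classic water-jug puzzle with breadth-first search; the puzzle and function names are illustrative assumptions, not the original GPS means-ends analysis itself.

```python
# A minimal sketch of the general problem solver pattern as state-space
# search: a problem is a start state, a goal test, and a successor function.
from collections import deque

def solve(start, is_goal, successors):
    """Generic BFS over states; returns the sequence of states to a goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Two jugs (4 L and 3 L): measure exactly 2 L in the first jug.
def successors(state):
    a, b = state
    return {(4, b), (a, 3), (0, b), (a, 0),                 # fill / empty
            (a - min(a, 3 - b), b + min(a, 3 - b)),         # pour a -> b
            (a + min(b, 4 - a), b - min(b, 4 - a))}         # pour b -> a

print(solve((0, 0), lambda s: s[0] == 2, successors))
```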

A graph neural network (GNN) is a type of neural network for data with a graph structure, which uses nodes (vertices) and edges (connections) to express relationships between elements. Examples of graph-structured data include social networks, road networks, chemical molecular structures, and knowledge graphs.

This section provides an overview of GNNs and various examples and Python implementations.

A graph convolutional network (GCN) is a type of neural network that enables convolutional operations on graph-structured data. While regular convolutional neural networks (CNNs) are effective for grid-like data such as images, GCNs were developed as a deep learning method for non-grid data with complex structure, such as graph and network data.
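A minimal NumPy sketch of a single GCN layer using the propagation rule H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W) of Kipf and Welling; the graph, features, and weights are toy assumptions.

```python
# A minimal numpy sketch of one GCN layer. Graph, features, and weights
# are toy assumptions.
import numpy as np

A = np.array([[0, 1, 0, 0],          # adjacency of a 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                         # one-hot node features
W = np.random.default_rng(0).normal(size=(4, 2))  # learnable weights

A_hat = A + np.eye(4)                 # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H_next = np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation
print(H_next)                         # new 2-dimensional node embeddings
```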

ChebNet (Chebyshev network) is a type of graph neural network (GNN) and one of the main methods for performing convolution operations on graph-structured data. ChebNet approximates convolution on graphs using Chebyshev polynomials, which are also used in signal processing.

A Graph Attention Network (GAT) is a deep learning model that uses an attention mechanism to learn representations of the nodes in a graph structure: each node aggregates its neighbors’ features with learned attention weights.
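A minimal NumPy sketch of GAT-style attention for one node: unnormalized scores are computed from concatenated transformed features, softmax-normalized, and used to weight neighbor aggregation. Sizes and weights are toy assumptions, and the self-loop that full GAT includes is omitted for brevity.

```python
# A minimal numpy sketch of GAT-style attention scores for one node:
# e_ij = LeakyReLU(a^T [W h_i || W h_j]) over neighbors j, then softmax.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))           # shared linear transform
a = rng.normal(size=8)                # attention vector over a feature pair
h = rng.normal(size=(3, 4))           # node 0 and its two neighbors (1, 2)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

z = h @ W.T                           # transformed features W h_j
e = np.array([leaky_relu(a @ np.concatenate([z[0], z[j]])) for j in (1, 2)])
alpha = np.exp(e) / np.exp(e).sum()   # softmax attention coefficients
h0_new = alpha[0] * z[1] + alpha[1] * z[2]  # weighted neighbor aggregation
print(alpha, h0_new)
```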

A Graph Isomorphism Network (GIN) is a neural network model designed to distinguish graph structures, with discriminative power tied to the graph isomorphism test. The graph isomorphism problem, determining whether two graphs have the same structure, is important in many fields.

GraphSAGE (Graph SAmple and aggreGatE) is a graph embedding algorithm for learning node embeddings (vector representations) from graph data. By sampling and aggregating the local neighborhood information of each node, it learns node embeddings effectively, making high-quality embeddings feasible even for large graphs.
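A minimal NumPy sketch of one GraphSAGE step with the mean aggregator: sample neighbors, average their features, concatenate with the node's own features, and apply a linear map. All sizes and weights are toy assumptions.

```python
# A minimal numpy sketch of one GraphSAGE step with the mean aggregator.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 4))            # 5 nodes, 4-dim features
neighbors = {0: [1, 2, 3, 4]}                 # adjacency list for node 0
W = rng.normal(size=(8, 2))                   # maps concat(4+4) -> 2 dims

def sage_embed(node, sample_size=2):
    sampled = rng.choice(neighbors[node], size=sample_size, replace=False)
    agg = features[sampled].mean(axis=0)          # mean of sampled neighbors
    concat = np.concatenate([features[node], agg])
    return np.maximum(concat @ W, 0.0)            # ReLU(W [h_v || h_N(v)])

print(sage_embed(0))
```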

Heterogeneous Information Network Embedding (HIN2Vec) is a method for embedding heterogeneous information networks into a vector space, where a heterogeneous information network is a network consisting of several different types of nodes and links. HIN2Vec aims to represent the different types of nodes effectively; the technique belongs to the field of graph embedding, which preserves network structure and the relationships between nodes by embedding them as low-dimensional vectors.

HIN2Vec-GAN is a technique for learning relations on graphs; specifically, it was developed as a method for learning embeddings on heterogeneous information networks (HINs). HINs are graph structures with different types of nodes and edges, used to represent data with complex relationships.

HIN2Vec-PCA combines HIN2Vec and Principal Component Analysis (PCA) to extract features from Heterogeneous Information Networks (HINs).

Ontology Technology

The term ontology originated as a branch of philosophy. According to Wikipedia, it “is not concerned with the individual nature of various things (beings), but with the meaning and fundamental rules of being that bring beings into existence, and is considered to be metaphysics or a branch of it, along with epistemology.”

Metaphysics deals with abstract concepts of things, and ontology in philosophy deals with abstract concepts and laws behind things.

On the other hand, according to Wikipedia, an ontology in information engineering is “a formal representation of knowledge as a set of concepts within a domain and the relationships between those concepts,” used to reason about the entities (realities) in the domain and to describe the domain. An ontology is also defined as “a formal and explicit specification of a shared conceptualization,” providing the vocabulary (types, properties, and relations of objects and concepts) used to model a domain.
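As a small illustration, the sketch below declares a class hierarchy, a property, and an instance with rdflib and serializes the result as Turtle; the vocabulary is an illustrative assumption, and a real ontology would typically use OWL constructs as well.

```python
# A minimal sketch of an ontology fragment in rdflib: classes, a subclass
# relation, a property, and one instance, all with assumed toy names.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Animal, RDF.type, RDFS.Class))
g.add((EX.Dog, RDF.type, RDFS.Class))
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))   # conceptual hierarchy
g.add((EX.hasOwner, RDF.type, RDF.Property))
g.add((EX.pochi, RDF.type, EX.Dog))           # an entity in the domain

print(g.serialize(format="turtle"))
```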

In the following pages of this blog, we will discuss the use of this ontology from the perspective of information engineering.

Semantic Web Technology

Semantic Web technology is “a project to improve the convenience of the World Wide Web by developing standards and tools that make it possible to handle the meaning of Web pages”; it aims to evolve the Web from the current “web of documents” into a “web of data.”

The data handled there is not the Data of the DIKW (Data, Information, Knowledge, Wisdom) pyramid but Information and Knowledge, expressed in ontologies, RDF, and other knowledge representation frameworks, and used in various DX and AI tasks.

In the following pages of this blog, I discuss Semantic Web technology, ontology technology, and related conference papers, such as those from ISWC (International Semantic Web Conference), the world’s leading conference on Semantic Web technology.

Reasoning Technology

There are two broad types of inference: deduction, which derives a proposition from a set of statements or propositions, and non-deductive methods such as induction, projection, analogy, and abduction. Inference can basically be defined as a method of tracing relationships among facts.

As algorithms for carrying out inference, the classical approaches are forward and backward chaining. Machine learning approaches include relational learning, rule induction using decision trees, sequential pattern mining, and probabilistic generative methods.
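As a minimal sketch of classical forward chaining, the loop below fires rules whose premises are all in the fact base until no new facts can be derived; the rules and facts are toy assumptions.

```python
# A minimal sketch of forward chaining: rules fire whenever all their
# premises are in the fact base, adding conclusions until a fixed point.
rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "icy_ground"),
    ({"icy_ground"}, "slippery"),
]
facts = {"rain", "freezing"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'rain', 'freezing', 'wet_ground', 'icy_ground', 'slippery'}
```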

Inference technology is a technology that combines such various methods and algorithms to obtain the inference results desired by the user.

In the following pages of this blog, we discuss classical reasoning as represented by expert systems, the use of satisfiability problems (SAT), answer set programming as logic programming, inductive logic programming, and more.

Knowledge Graph

Pragmatism, a word derived from the Greek ‘pragma’ meaning ‘action’ or ‘practice’, is the idea that the truth of things should be judged by the results of action, not by theory or belief. The knowledge graph is a useful technique for accumulating and utilising experience and has value in a variety of practical settings. A pragmatist approach could use knowledge graphs to elucidate the structure of knowledge and understanding and to promote practical use and the understanding of meaning.

A knowledge graph is a representation of information in the form of a graph structure and plays an important role in the field of artificial intelligence (AI). Knowledge graphs represent the relationships that hold between multiple entities (e.g., people, places, things, concepts), such as “A owns B,” “X is part of Y,” and “C affects D.”

Specifically, knowledge graphs play an important role in search engines, question answering systems, dialogue systems, and natural language processing. These systems use knowledge graphs to process complex information efficiently and provide accurate information to users.

Here we discuss papers presented at CCKS 2018 (China Conference on Knowledge Graph and Semantic Computing), held in Tianjin from August 14-17, 2018. CCKS is a conference of the China Information Processing Society (CIPS) covering a wide range of research areas, including knowledge graphs, the Semantic Web, linked data, NLP, knowledge representation, and graph databases, and it is the leading Chinese forum on knowledge graphs and semantic technologies.

The development of effective techniques for knowledge representation and reasoning (KRR) is an important aspect of successful intelligent systems. Various representation paradigms, as well as reasoning systems using these paradigms, have been extensively studied. However, new challenges, problems, and issues have emerged in knowledge representation in artificial intelligence (AI), such as the logical manipulation of increasingly large information sets (see, for example, the Semantic Web and bioinformatics). In addition, improvements in storage capacity and computational performance have affected the nature of KRR systems, shifting the focus to expressive power and execution performance. As a result, KRR research faces the challenge of developing knowledge representation structures optimized for large-scale inference. This new generation of KRR systems includes graph-based knowledge representation formalisms such as constraint networks (CN), Bayesian networks (BN), semantic networks (SN), conceptual graphs (CG), formal concept analysis (FCA), CP-nets, GAI-nets, and argumentation frameworks. The purpose of the Graph Structures for Knowledge Representation and Reasoning (GKR) workshop series is to bring together researchers involved in the development and application of graph-based knowledge representation formalisms and reasoning techniques.

Data analysis applies algorithmic processes to derive insights. It is now used in many industries to help organizations and companies make better decisions and to validate or disprove existing theories and models. The term data analytics is often used interchangeably with intelligence, statistics, inference, data mining, and knowledge discovery. In the era of big data, big data analytics refers to strategies for analyzing large amounts of data collected from a variety of sources, including social networks, transaction records, video, digital images, and various sensors. This book aims to introduce some of the definitions, methods, tools, frameworks, and solutions for big data processing, starting from information extraction and knowledge representation, through knowledge processing, analysis, visualization, sense-making, and practical applications.

However, this book is not intended to cover all the methods of big data analysis, nor is it intended to be an exhaustive bibliography. The chapters in this book address the appropriate aspects of the data processing chain, with particular emphasis on understanding enterprise knowledge graphs, semantic big data architectures, and smart data analytics solutions.

  • Application of Knowledge Graphs to Question and Answer Systems

A knowledge graph can be defined as “a graph created by describing entities and the relationships among them.” “Entities” here are things that exist physically or non-physically; they are not necessarily material, and may be abstractions of events in mathematics, law, academic fields, and so on. Examples of knowledge graphs range from simple, concrete statements such as “there is a pencil on the table” and “Mt. Fuji is located on the border between Shizuoka and Yamanashi prefectures” to more abstract ones such as “if a=b, then a+c=b+c,” “the consumption tax is an indirect tax that focuses on the consumption of goods and services,” and “in an electronically controlled fuel injection system, the throttle chamber is an intake throttling device attached to the collector of the intake manifold, containing a throttle valve to control the amount of intake air.” The advantage of knowledge graphs, from AI’s perspective, is that machines can access the rules, knowledge, and common sense of the human world through their data. In contrast to recent black-box approaches such as deep learning, which require large amounts of training data to achieve accuracy, AI using knowledge graphs can produce results that are easy for humans to interpret, and can enable machine learning with small data by generating data based on knowledge. By applying knowledge graphs to question-answering systems, it is possible to build a hierarchical structure of key terms, rather than simple FAQ question-answer pairs, and to associate them with context-specific questions, their alternatives and synonyms, and machine-learned response classes, providing an intelligent FAQ experience.
  • Rule Bases and Knowledge Bases, Expert Systems and Relational Data

This article describes rule-based systems that use data called a knowledge base.

For example, UniProtKB is one of the knowledge bases used in the life sciences. European institutions collaborate to collect protein information and, through annotation and curation (collecting, examining, integrating, and organizing data), provide UniProt (The Universal Protein Resource, http://www.uniprot.org/) together with analysis tools.

Dendral, a project begun at Stanford University in 1965, is a system that infers the chemical structure of a measured substance from the numerical values (molecular masses) given by the positions of peaks obtained by mass spectrometry. The implementation language was LISP.

MYCIN, a system derived from Dendral and developed in the 1970s, is also an expert system. MYCIN diagnoses patients with infectious blood diseases and suggests which antibiotics to administer, along with their dosage.

  • Extracting Tabular Data from the Web and Documents, and Semantic Annotation (SemTab) Learning

There are countless tables on the Web and in documents, and as manually compiled knowledge they are very useful. Tasks for extracting and structuring such information are generally called information extraction tasks; among them, tasks specialized for tabular information have attracted attention in recent years. Here we discuss various approaches to extracting tabular data; a minimal extraction sketch follows below.
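As a minimal sketch of the extraction step, `pandas.read_html` pulls every HTML table on a page into DataFrames (it requires lxml or html5lib and network access); the URL is an illustrative assumption.

```python
# A minimal sketch of tabular extraction from the web with pandas.read_html,
# which returns every <table> on a page as a DataFrame.
import pandas as pd

url = "https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)"
tables = pd.read_html(url)

print(f"found {len(tables)} tables")
print(tables[0].head())   # first table; semantic annotation would then map
                          # columns and cells to ontology classes and entities
```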

To “be aware” means to observe or perceive something carefully; when a person notices a situation or thing, they perceive some information or phenomenon and form a feeling or understanding about it. Becoming aware is an important process of gaining new information and understanding by paying attention to changes and events in the external world. This article discusses awareness and the application of artificial intelligence technology to it.

Drug discovery and development (D3) is a very expensive and time-consuming process. It takes decades and billions of dollars to bring a drug to market successfully from scratch, making the process highly inefficient in the face of emergencies such as COVID-19. At the same time, a vast amount of knowledge and experience has been accumulated in the D3 process over the past several decades. This knowledge is usually coded in guidelines and biomedical literature, which provide important resources, including insights that can be used as a reference for future D3 processes. Knowledge Graphs (KGs) are an effective way to organize the useful information contained in these documents for efficient retrieval. It also bridges the disparate biomedical concepts involved in the D3 process. In this chapter, we review existing biomedical KGs and show how GNN technology can facilitate the D3 process on KGs. Two case studies, Parkinson’s disease and COVID-19, are also presented to point out future directions.

Prolog, a logic programming language developed in the early 1970s, attracted attention as a new artificial intelligence language combining declarative statements based on predicate logic with computational procedures based on theorem proving, and has been widely used since the 1980s in expert systems, natural language processing, and deductive databases.

While Prolog is Turing-complete and computationally powerful, its basis, Horn-clause logic programming, has limited applicability to real-world knowledge representation and problem solving because of syntactic constraints and limited reasoning power.

To solve these problems, many attempts to extend the expressive capability of logic programming and to enhance its reasoning capability have been proposed since the late 1980s. As a result, since the late 1990s the concept of answer set programming, which combines logic programming with constraint programming, has been established, and it is now one of the main paradigms in logic programming.

This is a collection of papers from a workshop held at the European University Institute in Florence on December 1 and 2, 2006, with the aim of building computable models (i.e., models that enable the development of computer applications for the legal domain) for different ways of understanding and explaining modern law.

The techniques are described with a focus on various specific projects, especially Semantic Web technologies.

Knowledge Data Visualization Technology

D3.js and React, which are based on JavaScript, can be used as tools for visualizing relational data such as graph data. In this article, we discuss specific implementations using D3 and React for 2D and 3D graph displays, and for heat maps as a way of displaying relational data.
