Knowledge Information Processing Technologies


  1. Knowledge Information Processing Technologies
    1. Overview
    2. Topic
        1. Papers at international conferences related to AGI and knowledge information/graph data
        2. Knowledge representation, machine learning, inference and GNN
        3. KBGAT (Knowledge-based GAT) overview and implementation examples
        4. Deep Graph Infomax overview and implementation examples
        5. Edge-GNN overview and implementation examples
        6. Bow tie analysis, ontologies and AI technologies
        7. Approach to automatic generation of prompts for generative AI
        8. Ontology Based Data Access (OBDA), generative AI and GNN
    3. Implementation
        1. Overview of Case-Based Reasoning, Application Examples and Implementation
        2. Automatic Knowledge Graph Generation and Various Implementation Examples
        3. Various uses and implementation examples of knowledge graphs
        4. Ontology development and optimisation for data integration and decision-making in product design and obsolescence management
        5. SNAP (Stanford Network Analysis Platform) Overview and Example Implementations
        6. Overview of CDLib (Community Discovery Library) and Examples of Applications and Implementations
        7. Overview of MODULAR (Multi-objective Optimization of Dynamics Using Links and Relaxations) and Examples of Applications and Implementations
        8. Overview of the Louvain Method and Examples of Applications and Implementations
        9. Overview of Infomap and Examples of Application and Implementation
        10. Copra Overview and Examples of Applications and Implementations
        11. Visualization of knowledge graphs (relational data) using D3 and React
        12. Techniques for displaying and animating graph snapshots on a timeline
        13. Creating Graph Animation by Combining NetworkX and Matplotlib
        14. Plotting high-dimensional data in low dimensions using dimensionality reduction techniques (e.g., t-SNE, UMAP) to facilitate visualization
        15. Data Visualization Using Gephi
        16. Data Visualization with Cytoscape.js
        17. Visualization of Graph Data Using Sigma.js
        18. Overview and Implementation of the Satisfiability of Propositional Logic (SAT: Boolean SAtisfiability) Problem
        19. General Problem Solver and Application Examples, Implementation Examples in LISP and Python
        20. Overview of Graph Neural Networks, Application Examples, and Examples of Python Implementations
        21. Overview, Algorithm and Application of Graph Convolutional Neural Networks (GCN)
        22. Overview of ChebNet and Examples of Algorithms and Implementations
        23. Overview of GAT (Graph Attention Network) and Examples of Algorithms and Implementations
        24. Graph Isomorphism Network (GIN) Overview, Algorithm and Example Implementation
        25. Overview of GraphSAGE and Examples of Algorithms and Implementations
        26. Overview of HIN2Vec, algorithm and implementation examples
        27. Overview of HIN2Vec-GAN and examples of algorithms and implementations
        28. Overview of HIN2Vec-PCA and examples of algorithms and implementations
    4. Ontology Technology
    5. Semantic Web Technology
    6. Reasoning Technology
    7. Knowledge Graph
        1. Pragmatism and the Knowledge Graph
        2. Overview of Knowledge Graphs and Summary of Related Presentations at the International Semantic Web Conference (ISWC)
        3. Knowledge Graph and Semantic Computing
        4. Graph Structures for Knowledge Representation and Reasoning
        5. Knowledge Graphs and Big Data Processing
        6. Application of Knowledge Graphs to Question and Answer Systems
        7. Rule Bases and Knowledge Bases, Expert Systems and relational data
        8. Extracting Tabular Data from the Web and Documents and Semantic Annotation (SemTab) Learning
        9. Awareness and Artificial Intelligence Technology
        10. GNN-based Biomedical Knowledge Graph Mining in Drug Development
        11. Other Topic
        12. Answer Set Programming : A Brief History of Logic Programming and ASP
        13. A Logic of Implicit and Explicit Belief
        14. The Tractability of Subsumption in Frame-Based Description Languages
        15. Computerized processing of law-related tasks
        16. Papers
    8. Knowledge Data Visualization Technology
        1. Visualization of knowledge graphs (relational data) using D3 and React

Knowledge Information Processing Technologies

Overview

According to Wikipedia, knowledge is the result of cognition: the ideas and skills one has about people and things.

The term “knowledge” is almost synonymous with “cognition,” but cognition is primarily a philosophical term, while knowledge mainly denotes the “results” obtained through cognition.

According to the Oxford English Dictionary, knowledge in English is defined as follows.

  1. A specialized skill acquired by a person through experience or education. A theoretical or practical understanding of a subject.
  2. What is known about a particular field or in general. Facts and information.
  3. Awareness or knowledge gained by experiencing a fact or situation.

To trace what human beings have said and discussed about knowledge: in ancient times, the “tree of knowledge of good and evil” appears in the story of Adam and Eve in the Book of Genesis in the Old Testament, and each faith has its own ideas about knowledge. The first philosophical discussion of knowledge was by Plato in ancient Greece, who described knowledge as “justified true belief,” and philosophical debate has continued to the present day. In the sixteenth and seventeenth centuries, Francis Bacon examined the methods of knowledge acquisition, and his ideas played a major role in the establishment of modern science. In modern psychology, knowledge acquisition is understood to involve complex cognitive processes such as perception, memory, experience, communication, association, and reasoning.

Even now, there is no single definition of knowledge agreed upon by everyone; different academic fields hold different theories, some of which are mutually opposed. The following are some of the categories of knowledge.

Knowledge is treated as long-term memory and, following the classification of memory, representational knowledge is sometimes classified as “declarative knowledge” and behavioral knowledge as “procedural knowledge.” Examples of declarative knowledge include knowledge of scientific laws (e.g., the gravitational acceleration on Earth) and knowledge of social conventions (e.g., “the capital of Japan is Tokyo”). Examples of procedural knowledge include how to use chopsticks, how to play the piano, and how to drive a car. The former is sometimes referred to as “knowing that” and the latter as “knowing how.”

From the perspective of formalization and method of transmission, knowledge can be classified into “explicit (formal) knowledge” and “tacit knowledge,” a classification used in the world of knowledge management. Tacit knowledge refers to knowledge that is impossible or extremely difficult to describe declaratively; procedural knowledge and intuitive cognitive content are considered tacit knowledge. For example, everyone has knowledge about “beauty,” but it cannot be clearly defined.

From a philosophical or biological standpoint, knowledge we are born with is sometimes categorized as “a priori knowledge,” and knowledge we acquire after birth through social life as “a posteriori knowledge.” Whether a priori knowledge exists has been a long-standing issue in epistemology. In the continental rationalist tradition, Descartes and others accepted some kind of a priori knowledge, a position called the innate theory. In British empiricism, Locke and others advocated the empiricist position, which denies the existence of a priori knowledge and regards the mind as a blank slate (tabula rasa).

Knowledge is also sometimes divided into theoretical knowledge and practical knowledge, said to be the distinction between the knowledge of the philosopher and that of the practitioner, as well as the distinction between “science” (scientia) and “art” (ars).

This blog discusses information and communication technology (ICT) approaches to this knowledge, as follows.

Topic

Papers at international conferences related to AGI and knowledge information/graph data

Papers at international conferences related to AGI and knowledge information/graph data. Artificial General Intelligence (AGI) refers to AI systems with general intelligence comparable to human intelligence, able to handle a wide variety of tasks. Whereas current AI systems specialise in specific tasks using dedicated models, AGI aims to be flexible enough to perform many different tasks. Knowledge information processing in AI deals with vast amounts of data, performing tasks such as extraction, classification, reasoning, and interpretation; AGI seeks to integrate these techniques to carry out multiple tasks at a level comparable to human capabilities. Graph data, which represents data in terms of nodes and edges, is crucial in AI for understanding relationships and patterns, and AGI aims to use graph data effectively to extract advanced knowledge from large data sets. This article focuses on recent papers presented at international conferences that highlight advances in knowledge information processing and machine learning using graph data.

Knowledge representation, machine learning, inference and GNN

Knowledge representation, machine learning, inference and GNN. Knowledge representation, as described in ‘Knowledge Information Processing Techniques’, and inference, as described in ‘Inference Techniques’, are important areas for structuring information and enabling semantic understanding. Applying graph neural networks (GNNs), machine learning methods specialised for processing graph-structured data, allows a more efficient and effective approach to knowledge representation and inference tasks.

KBGAT (Knowledge-based GAT) overview and implementation examples

KBGAT (Knowledge-based GAT) overview and implementation examples. KBGAT (Knowledge-based Graph Attention Network) is a type of graph neural network (GNN) specialised for handling knowledge graphs. KBGAT builds on the conventional Graph Attention Network (GAT) and is designed to exploit the special structure of knowledge graphs, giving it several distinguishing characteristics.

Deep Graph Infomax overview and implementation examples

Deep Graph Infomax overview and implementation examples. Deep Graph Infomax (DGI) is an unsupervised learning method for graph data that takes an information-theoretic approach to learning node representations. By maximising the agreement between local (node-level) and global (graph-level) features of a graph, DGI aims to obtain high-quality node embeddings.

Edge-GNN overview and implementation examples

Edge-GNN overview and implementation examples. Edge-GNN (Edge Graph Neural Network) is a neural network architecture that focuses on edges in a graph structure, using edge features and weights to handle edge-level and graph-wide tasks. Edge-GNNs differ from regular GNNs in that they focus primarily on edges rather than nodes (vertices). Because information about the connections between nodes (edges) is important for many graph analysis tasks (e.g. relation prediction and link prediction), Edge-GNNs have several useful properties.

Bow tie analysis, ontologies and AI technologies

Bow tie analysis, ontologies and AI technologies. Bowtie analysis is a risk management technique that is used to organise risks in a visually understandable way. The name comes from the fact that the resulting diagram of the analysis resembles the shape of a bowtie. The combination of bowtie analysis with ontologies and AI technologies is a highly effective approach to enhance risk management and predictive analytics and to design effective responses to risk.

Approach to automatic generation of prompts for generative AI

Approach to automatic generation of prompts for generative AI. Generative AI refers to artificial intelligence technologies that generate new content such as text, images, audio and video. As generative AI (e.g. image-generating AI and text-generating AI) generates new content based on given instructions (prompts), the quality and appropriateness of the prompts is key to maximising AI performance.

Ontology Based Data Access (OBDA), generative AI and GNN

Ontology Based Data Access (OBDA), generative AI and GNN. Ontology Based Data Access (OBDA) is a method that allows queries over data stored in different formats and locations through a unified, conceptual view provided by an ontology. The aim is to integrate data semantically and to provide access to it in a form that users can easily understand.

Implementation

Overview of Case-Based Reasoning, Application Examples and Implementation

Overview of Case-Based Reasoning, Application Examples and Implementation. Case-based reasoning is a technique for finding appropriate solutions to similar problems by referring to past problem-solving experience and case studies. This section provides an overview of this case-based reasoning technique, its challenges, and various implementations.
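The retrieve-and-reuse core of case-based reasoning can be sketched in a few lines of Python. The case base, features, and "diagnosis" labels below are invented toy data, not from any of the referenced implementations:

```python
# A minimal case-based reasoning sketch (hypothetical data): retrieve the most
# similar past case by Euclidean distance over feature vectors, then reuse its
# solution as the proposed answer for the new problem.
import math

case_base = [
    # (features, solution) -- toy "diagnosis" cases
    ({"temp": 39.0, "cough": 1}, "flu"),
    ({"temp": 36.8, "cough": 0}, "healthy"),
    ({"temp": 37.5, "cough": 1}, "cold"),
]

def distance(a, b):
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(query):
    """Return the stored case whose features are closest to the query."""
    return min(case_base, key=lambda case: distance(case[0], query))

query = {"temp": 38.8, "cough": 1}
features, solution = retrieve(query)
print(solution)  # the reused solution from the nearest past case
```

A full CBR cycle would follow retrieval with adaptation (revising the reused solution) and retention (storing the solved case back into the case base).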

Automatic Knowledge Graph Generation and Various Implementation Examples

Automatic Knowledge Graph Generation and Various Implementation Examples. A knowledge graph is a graph structure that represents information as a set of related nodes (vertices) and edges (connections), and is a data structure used to connect information on different subjects or domains and visualize their relationships. This section outlines various methods for automatically generating such knowledge graphs and describes specific implementations in Python.
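As a minimal illustration of the underlying data structure (not one of the specific generation methods discussed), a knowledge graph can be assembled from subject-predicate-object triples with NetworkX; the triples here are invented examples:

```python
# A minimal sketch of building a small knowledge graph from (subject,
# predicate, object) triples using NetworkX; the triples are illustrative.
import networkx as nx

triples = [
    ("Tokyo", "capital_of", "Japan"),
    ("Japan", "located_in", "Asia"),
    ("Kyoto", "located_in", "Japan"),
]

kg = nx.MultiDiGraph()
for s, p, o in triples:
    kg.add_edge(s, o, label=p)  # store the predicate as an edge attribute

# Simple use: list everything directly related to "Japan".
neighbours = sorted(set(kg.predecessors("Japan")) | set(kg.successors("Japan")))
print(neighbours)
```

A MultiDiGraph is used because two entities may be linked by more than one predicate, and direction matters for subject/object roles.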

Various uses and implementation examples of knowledge graphs

Various uses and implementation examples of knowledge graphs. A knowledge graph is a graph structure that represents information as a set of related nodes (vertices) and edges (connections), and is a data structure used to connect information on different subjects or domains and visualize their relationships. This section describes various applications of knowledge graphs and concrete examples of their implementation in Python.

Ontology development and optimisation for data integration and decision-making in product design and obsolescence management

Ontology development and optimisation for data integration and decision-making in product design and obsolescence management. Implementing an ontology-based data integration and decision-making system in product design and obsolescence management is a way to efficiently manage complex information and support decision-making.

SNAP (Stanford Network Analysis Platform) Overview and Example Implementations

SNAP (Stanford Network Analysis Platform) Overview and Example Implementations. SNAP is an open-source software library developed at Stanford University that provides tools and resources used in a variety of network-related research, including social network analysis, graph theory, and computer network analysis.

Overview of CDLib (Community Discovery Library) and Examples of Applications and Implementations

Overview of CDLib (Community Discovery Library) and Examples of Applications and Implementations. CDLib (Community Discovery Library) is a Python library that provides community detection algorithms, offering a variety of algorithms for identifying community structure in graph data and supporting researchers and data scientists in tackling different community detection tasks.

Overview of MODULAR (Multi-objective Optimization of Dynamics Using Links and Relaxations) and Examples of Applications and Implementations

Overview of MODULAR (Multi-objective Optimization of Dynamics Using Links and Relaxations) and Examples of Applications and Implementations. MODULAR is one of the methods and tools used in computer science and network science to solve multi-objective optimization problems on complex networks. The approach is designed to optimize the structure and dynamics of a network simultaneously, taking several different objective functions into account (multi-objective optimization).

Overview of the Louvain Method and Examples of Applications and Implementations

Overview of the Louvain Method and Examples of Applications and Implementations. The Louvain method (or Louvain algorithm) is one of the effective graph clustering algorithms for identifying communities (clusters) in a network. The Louvain method employs an approach that maximizes a measure called modularity to identify the structure of the communities.
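The Louvain method can be tried directly with NetworkX's built-in implementation (`louvain_communities`, available in NetworkX 2.8 and later), here on the standard karate-club example graph:

```python
# A minimal sketch of Louvain community detection using NetworkX.
import networkx as nx

# Karate-club graph: a classic small social network with two known factions.
G = nx.karate_club_graph()
communities = nx.community.louvain_communities(G, seed=42)

print(len(communities))                         # number of detected communities
print(nx.community.modularity(G, communities))  # quality of the partition
```

The `seed` argument fixes the randomised node ordering so that repeated runs give the same partition; the modularity score quantifies how much denser the detected communities are than a random graph.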

Overview of Infomap and Examples of Application and Implementation

Overview of Infomap and Examples of Application and Implementation. Infomap is a community detection algorithm used to identify communities (modules) in a network. It takes an information-theoretic approach, optimizing a compressed description of information flow over the network.

Copra Overview and Examples of Applications and Implementations

Copra Overview and Examples of Applications and Implementations. COPRA (Community Overlap Propagation Algorithm) is an algorithm and tool for detecting overlapping communities in complex networks, where a given node may belong to multiple communities. Using partial community-membership information, COPRA is suited to realistic scenarios in which each node can belong to more than one community.

Visualization of knowledge graphs (relational data) using D3 and React

Visualization of knowledge graphs (relational data) using D3 and React. D3.js and React, which are based on JavaScript, can be used as tools for visualizing relational data such as graph data. In this article, we will discuss specific implementations using D3 and React for 2D and 3D graph displays, and heat maps as a form of displaying relational data.

Techniques for displaying and animating graph snapshots on a timeline

Techniques for displaying and animating graph snapshots on a timeline. Displaying and animating graph snapshots on a timeline is an important technique for analyzing graph data, as it helps visualize changes over time and understand the dynamic characteristics of graph data. This section describes libraries and implementation examples used for these purposes.

Creating Graph Animation by Combining NetworkX and Matplotlib

Creating Graph Animation by Combining NetworkX and Matplotlib. This paper describes the creation of animations of graphs by combining NetworkX and Matplotlib, a technique for visually representing dynamic changes in networks in Python.
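A minimal sketch of this technique, assuming a simple growing path graph rather than any particular dataset: each animation frame draws one more node, with a fixed layout so nodes do not jump between frames.

```python
# Animating a growing graph with NetworkX and Matplotlib's FuncAnimation.
import matplotlib
matplotlib.use("Agg")  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt
import networkx as nx
from matplotlib.animation import FuncAnimation

G = nx.path_graph(10)
pos = nx.spring_layout(G, seed=1)  # fixed layout keeps nodes in place across frames
fig, ax = plt.subplots()

def update(frame):
    ax.clear()
    sub = G.subgraph(range(frame + 1))  # first frame+1 nodes of the path
    nx.draw(sub, pos={n: pos[n] for n in sub}, ax=ax, with_labels=True)
    ax.set_title(f"step {frame}")

anim = FuncAnimation(fig, update, frames=G.number_of_nodes(), interval=500)
anim.save("graph_growth.gif", writer="pillow")  # render all frames to a GIF
```

For time-evolving data, `update` would instead select the snapshot of the graph at each timestamp.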

Plotting high-dimensional data in low dimensions using dimensionality reduction techniques (e.g., t-SNE, UMAP) to facilitate visualization

Plotting high-dimensional data in low dimensions using dimensionality reduction techniques (e.g., t-SNE, UMAP) to facilitate visualization. Methods for plotting high-dimensional data in low dimensions using dimensionality reduction techniques to facilitate visualization are useful for many data analysis tasks, such as data understanding, clustering, anomaly detection, and feature selection. This section describes the major dimensionality reduction techniques and their methods.
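As a concrete example of one such technique, the 64-dimensional digits dataset can be projected to two dimensions with scikit-learn's t-SNE implementation:

```python
# A minimal sketch of projecting high-dimensional data to 2D with t-SNE.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)      # 1797 samples, 64 dimensions
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print(X.shape, "->", X_2d.shape)         # (1797, 64) -> (1797, 2)
```

The 2D coordinates in `X_2d` can then be scatter-plotted, coloured by `y`, to inspect cluster structure; `perplexity` roughly controls how many neighbours each point tries to preserve.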

Data Visualization Using Gephi

Data Visualization Using Gephi. Gephi is an open-source graph visualization software that is particularly suitable for network analysis and visualization of complex data sets. Here we describe the basic steps and functionality for visualizing data using Gephi.

Data Visualization with Cytoscape.js

Data Visualization with Cytoscape.js. Cytoscape.js is a graph theory library written in JavaScript that is widely used for visualizing network and graph data. Cytoscape.js makes it possible to add graph and network data visualization to web and desktop applications. Here are the basic steps and example code for data visualization using Cytoscape.js.

Visualization of Graph Data Using Sigma.js

Visualization of Graph Data Using Sigma.js. Sigma.js is a web-based graph visualization library that can be a useful tool for creating interactive network diagrams. Here we describe the basic steps and functions for visualizing graph data using Sigma.js.

Overview and Implementation of the Satisfiability of Propositional Logic (SAT: Boolean SAtisfiability) Problem

Overview and Implementation of the Satisfiability of Propositional Logic (SAT: Boolean SAtisfiability) Problem. The Satisfiability of Propositional Logic (SAT: Boolean Satisfiability) is the problem of determining whether or not there exists a variable assignment for which a given propositional logic expression is true. For example, if there is a problem “whether there exists an assignment of A, B, C, D, E, or F such that A and (B or C) and (D or E or F) are true,” this problem is converted into a propositional logic formula and whether the formula is satisfiable is determined.

Such problem settings play an important role in many application fields, for example circuit design, program analysis, artificial intelligence, and cryptography. From a theoretical standpoint, the SAT problem is known to be NP-complete, and no efficient algorithm for large-scale instances is currently known. It therefore remains an active research field, with ongoing work on faster solvers and heuristic search algorithms.
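A brute-force check makes the problem statement concrete, and also shows why NP-completeness matters: the search space doubles with every added variable, which is exactly what practical SAT solvers work around with clever heuristics.

```python
# A minimal brute-force SAT check for formulas in CNF: each clause is a list of
# literals (positive or negative variable numbers, DIMACS style). Exhaustive
# search like this is only feasible for small instances.
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment {var: bool} or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {v + 1: bits[v] for v in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# "A and (B or C) and (not A or not B)"
clauses = [[1], [2, 3], [-1, -2]]
print(brute_force_sat(clauses, 3))
```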

General Problem Solver and Application Examples, Implementation Examples in LISP and Python

General Problem Solver and Application Examples, Implementation Examples in LISP and Python. The General Problem Solver (GPS) takes as input a description of the problem and its constraints, and executes algorithms to find an optimal or valid solution. These algorithms vary depending on the nature and constraints of the problem, and there are a variety of general problem-solving methods, including numerical optimization, constraint satisfaction, machine learning, and search algorithms. This section describes example implementations of GPS in LISP and Python.

Overview of Graph Neural Networks, Application Examples, and Examples of Python Implementations

Overview of Graph Neural Networks, Application Examples, and Examples of Python Implementations. A graph neural network (GNN) is a type of neural network for data with a graph structure, in which nodes and edges express relationships between elements. Examples of graph-structured data include social networks, road networks, chemical molecular structures, and knowledge graphs.

This section provides an overview of GNNs and various examples and Python implementations.

Overview, Algorithm and Application of Graph Convolutional Neural Networks (GCN)

Overview, Algorithm and Application of Graph Convolutional Neural Networks (GCN). Graph Convolutional Neural Networks (GCN) is a type of neural network that enables convolutional operations on data with a graph structure. While regular convolutional neural networks (CNNs) are effective for lattice-like data such as image data, GCNs were developed as a deep learning method for non-lattice-like data with very complex structures, such as graph data and network data.
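A single GCN layer follows the propagation rule H' = σ(D̃^(-1/2) Ã D̃^(-1/2) H W), where Ã is the adjacency matrix with self-loops and D̃ its degree matrix. The rule can be sketched in plain NumPy on a tiny hypothetical 4-node graph:

```python
# One GCN layer: normalised neighbourhood averaging followed by a linear map
# and a nonlinearity, computed directly with NumPy.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)  # adjacency matrix of a toy graph
A_hat = A + np.eye(4)                      # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)            # symmetric normalisation D^{-1/2}
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                # input node features (4 nodes, 3 dims)
W = rng.normal(size=(3, 2))                # layer weights (3 -> 2 dims)

H_next = np.maximum(0, A_norm @ H @ W)     # ReLU activation
print(H_next.shape)                        # each node now has a 2-dim embedding
```

Stacking such layers lets information propagate over multi-hop neighbourhoods; real implementations also train W by gradient descent.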

Overview of ChebNet and Examples of Algorithms and Implementations

Overview of ChebNet and Examples of Algorithms and Implementations. ChebNet (Chebyshev network) is a type of Graph Neural Network (GNN), which is one of the main methods for performing convolution operations on graph-structured data. ChebNet is an approximate implementation of convolution operations on graphs using Chebyshev polynomials, which are used in signal processing.

Overview of GAT (Graph Attention Network) and Examples of Algorithms and Implementations

Overview of GAT (Graph Attention Network) and Examples of Algorithms and Implementations. Graph Attention Network (GAT) is a deep learning model that uses an attention mechanism to learn representations of nodes in a graph structure, weighting the importance of each neighbouring node when aggregating information.
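The attention step for a single node can be sketched in NumPy: scores e_ij = LeakyReLU(aᵀ[Wh_i ‖ Wh_j]) are computed over the neighbourhood, softmax-normalised, and used to weight the aggregation. The features and weights below are random placeholders, not trained values:

```python
# GAT-style attention for one node over its neighbourhood, in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))        # features of node 0 and its 3 neighbours
W = rng.normal(size=(3, 2))        # shared linear transform
a = rng.normal(size=(4,))          # attention vector over concatenated pairs

z = h @ W                          # transformed features, shape (4, 2)
scores = np.array([np.concatenate([z[0], z[j]]) @ a for j in range(4)])
scores = np.where(scores > 0, scores, 0.2 * scores)   # LeakyReLU(slope 0.2)
alpha = np.exp(scores) / np.exp(scores).sum()         # softmax attention weights

h0_new = (alpha[:, None] * z).sum(axis=0)             # weighted aggregation
print(alpha.round(3), h0_new.shape)
```

In a full GAT, W and a are learned, attention is computed for every node, and several attention heads run in parallel.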

Graph Isomorphism Network (GIN) Overview, Algorithm and Example Implementation

Graph Isomorphism Network (GIN) Overview, Algorithm and Example Implementation. Graph Isomorphism Network (GIN) is a neural network model for learning isomorphism of graph structures. The graph isomorphism problem is the problem of determining whether two graphs have the same structure, and is an important approach in many fields.

Overview of GraphSAGE and Examples of Algorithms and Implementations

Overview of GraphSAGE and Examples of Algorithms and Implementations. GraphSAGE (Graph SAmple and aggreGatE) is a graph embedding algorithm for learning node embeddings (vector representations) from graph data. By sampling and aggregating the local neighborhood information of each node, it learns node embeddings effectively, making it possible to obtain high-quality embeddings even for large graphs.
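The sample-and-aggregate step with a mean aggregator can be sketched in NumPy; embeddings, adjacency, and weights below are random placeholders standing in for learned values:

```python
# One GraphSAGE step for one node: sample neighbours, mean-aggregate their
# embeddings, concatenate with the node's own embedding, transform, normalise.
import numpy as np

rng = np.random.default_rng(1)
h = {v: rng.normal(size=3) for v in range(5)}   # current node embeddings
neighbours = {0: [1, 2, 3, 4]}                  # adjacency for node 0

def sage_step(v, sample_size, W):
    sampled = rng.choice(neighbours[v], size=sample_size, replace=False)
    agg = np.mean([h[u] for u in sampled], axis=0)   # mean aggregation
    concat = np.concatenate([h[v], agg])             # self || neighbourhood
    out = np.tanh(W @ concat)                        # nonlinearity
    return out / np.linalg.norm(out)                 # L2-normalise

W = rng.normal(size=(4, 6))                     # maps 6-dim concat to 4 dims
h0_new = sage_step(0, sample_size=2, W=W)
print(h0_new.shape)
```

Sampling a fixed number of neighbours is what keeps the cost per node bounded on large graphs; training learns W so that nearby nodes get similar embeddings.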

Overview of HIN2Vec, algorithm and implementation examples

Overview of HIN2Vec, algorithm and implementation examples. HIN2Vec (Heterogeneous Information Network Embedding) is a method for embedding heterogeneous information networks into a vector space. A heterogeneous information network is a network consisting of several different types of nodes and links. HIN2Vec aims to represent the different types of nodes and the relations between them effectively; the technique belongs to the field of graph embedding, which seeks to preserve network structure and inter-node relationships in low-dimensional vectors.

Overview of HIN2Vec-GAN and examples of algorithms and implementations

Overview of HIN2Vec-GAN and examples of algorithms and implementations. HIN2Vec-GAN is a technique for learning relations on graphs; specifically, it was developed as a method for learning embeddings on heterogeneous information networks (HINs). HINs are graph structures with different types of nodes and edges, used to represent data with complex relationships.

Overview of HIN2Vec-PCA and examples of algorithms and implementations

Overview of HIN2Vec-PCA and examples of algorithms and implementations. HIN2Vec-PCA combines HIN2Vec and Principal Component Analysis (PCA) to extract features from Heterogeneous Information Networks (HINs).

Ontology Technology

The term ontology originated as a branch of philosophy. According to Wikipedia, it “is not concerned with the individual nature of various things (beings), but with the meaning and fundamental rules of being that bring beings into existence, and is considered to be metaphysics, or a branch of it, alongside epistemology.”

Metaphysics deals with abstract concepts of things, and ontology in philosophy deals with abstract concepts and laws behind things.

On the other hand, according to Wikipedia, an ontology in information engineering is “a formal representation of knowledge as a set of concepts within a domain and the relationships among those concepts,” used to reason about the entities (realities) in the domain and to describe the domain. An ontology has also been defined as “a formal, explicit specification of a shared conceptualization,” providing the vocabulary (the types, properties, and relations of objects and concepts) used to model a domain.

In the following pages of this blog, we will discuss the use of this ontology from the perspective of information engineering.

Semantic Web Technology

Semantic Web technology is “a project to improve the convenience of the World Wide Web by developing standards and tools that make it possible to handle the meaning of Web pages”; it aims to evolve the Web from the current “web of documents” into a “web of data.”

The data handled there is not the Data of the DIKW (Data, Information, Knowledge, Wisdom) pyramid but Information and Knowledge, expressed in ontologies, RDF, and other frameworks for representing knowledge, and used in various DX and AI tasks.
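The “web of data” idea can be made concrete with RDF-style (subject, predicate, object) triples. Real systems would use an RDF library such as rdflib with SPARQL; the sketch below is a toy in-memory store with pattern matching, where `None` acts as a wildcard much like a variable in a SPARQL query, and the triples are invented examples:

```python
# A toy triple store: a set of (subject, predicate, object) triples with
# wildcard pattern matching, illustrating how "web of data" queries work.
triples = {
    ("Tokyo", "capital_of", "Japan"),
    ("Japan", "type", "Country"),
    ("Kyoto", "located_in", "Japan"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# "What relates to Japan as object?" (cf. SPARQL: SELECT ?s ?p WHERE { ?s ?p :Japan })
print(match(o="Japan"))
```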

The following pages of this blog discuss this Semantic Web technology, ontology technology, and papers from conferences such as ISWC (International Semantic Web Conference), the world’s leading conference on Semantic Web technology.

Reasoning Technology

There are two broad types of inference: deduction, which derives a proposition from a set of statements or propositions, and non-deductive methods such as induction, projection, analogy, and abduction. Inference can basically be defined as a method of tracing the relationships among various facts.

As algorithms for performing such inference, the classical approaches are forward chaining and backward chaining. Machine learning approaches include relational learning, rule induction using decision trees, sequential pattern mining, and probabilistic generative methods.
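Forward chaining, the first of the classical approaches, can be sketched in a few lines of Python: rules fire whenever all their premises are in the fact base, adding their conclusion, until a fixpoint is reached. The facts and rules below are illustrative:

```python
# A minimal forward-chaining inference engine over ground facts.
facts = {"bird(tweety)", "small(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                       # repeat until no rule adds a new fact
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain(facts, rules)
print(sorted(result))
```

Backward chaining would instead start from a goal such as `can_fly(tweety)` and recursively seek rules whose conclusions match it.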

Inference technology is a technology that combines such various methods and algorithms to obtain the inference results desired by the user.

In the following pages of this blog, we discuss classical reasoning as represented by expert systems, the use of satisfiability (SAT) problems, answer set programming as logic programming, inductive logic programming, and more.

Knowledge Graph

Pragmatism and the Knowledge Graph

Pragmatism and the Knowledge Graph. Pragmatism is a word derived from the Greek ‘pragma’, meaning ‘action’ or ‘practice’, and is the idea that the truth of things should be judged by the results of action, not by theory or belief. The knowledge graph is a useful technique for the accumulation and utilisation of experience, and has value in a variety of practical settings. A pragmatist approach could be used to elucidate the structure of knowledge and understanding using knowledge graphs, and to help promote their practical use and the understanding of meaning.

Overview of Knowledge Graphs and Summary of Related Presentations at the International Semantic Web Conference (ISWC)

Overview of Knowledge Graphs and Summary of Related Presentations at the International Semantic Web Conference (ISWC). A Knowledge Graph is a representation of information in the form of a graph structure, which plays an important role in the field of Artificial Intelligence (AI). Knowledge graphs are used to represent knowledge in which multiple entities (e.g., people, places, things, concepts) have relationships between them (e.g., “A owns B,” “X is part of Y,” “C affects D”).

Specifically, knowledge graphs play an important role in search engine question answering systems, artificial intelligence dialogue systems, and natural language processing. These systems can use knowledge graphs to efficiently process complex information and provide accurate information to users.
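As a minimal illustration of this representation (the entities and relations below are the hypothetical examples from the text), a knowledge graph can be stored as a set of subject-predicate-object triples and queried by pattern matching:

```python
# A knowledge graph as (subject, predicate, object) triples.
# Entities and relations are illustrative placeholders.
triples = [
    ("A", "owns", "B"),
    ("X", "part_of", "Y"),
    ("C", "affects", "D"),
    ("B", "part_of", "Y"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(triples, p="part_of", o="Y"))
# everything that is part of Y
```

Real systems store such triples in RDF stores and query them with SPARQL, but the pattern-matching idea is the same.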

Knowledge Graph and Semantic Computing

Knowledge Graph and Semantic Computing. In this issue, we discuss papers presented at CCKS 2018: the China Conference on Knowledge Graph and Semantic Computing, held in Tianjin from August 14-17, 2018. CCKS is a conference of the China Information Processing Society (CIPS) covering a wide range of research areas, including knowledge graphs, the Semantic Web, linked data, NLP, knowledge representation, and graph databases, and is the leading forum on knowledge graphs and semantic technologies.

Graph Structures for Knowledge Representation and Reasoning

Graph Structures for Knowledge Representation and Reasoning. The development of effective techniques for knowledge representation and reasoning (KRR) is an important aspect of successful intelligent systems. Various representation paradigms, as well as reasoning systems using these paradigms, have been extensively studied. However, new challenges, problems, and issues have emerged in knowledge representation in artificial intelligence (AI), such as the logical manipulation of increasingly large information sets (see, for example, the Semantic Web and bioinformatics). In addition, improvements in storage capacity and computational performance have affected the nature of KRR systems, shifting the focus to expressive power and execution performance. As a result, KRR research faces the challenge of developing knowledge representation structures that are optimal for large-scale inference. This new generation of KRR systems includes graph-based knowledge representation formalisms such as constraint networks (CN), Bayesian networks (BN), semantic networks (SN), conceptual graphs (CG), formal concept analysis (FCA), CP-nets, GAI-nets, and argumentation frameworks. The purpose of the Graph Structures for Knowledge Representation and Reasoning (GKR) workshop series is to bring together researchers involved in the development and application of graph-based knowledge representation formalisms and reasoning techniques.

Knowledge Graphs and Big Data Processing

Knowledge Graphs and Big Data Processing. Data analysis applies algorithmic processes to derive insights. It is now used in many industries to help organizations and companies make better decisions and to validate or disprove existing theories and models. The term data analytics is often used interchangeably with intelligence, statistics, inference, data mining, and knowledge discovery. In the era of big data, big data analytics refers to strategies for analyzing large amounts of data collected from a variety of sources, including social networks, transaction records, video, digital images, and various sensors. This book aims to introduce some of the definitions, methods, tools, frameworks, and solutions for big data processing, starting from information extraction and knowledge representation, through knowledge processing, analysis, visualization, sense-making, and practical applications.

However, this book is not intended to cover all the methods of big data analysis, nor is it intended to be an exhaustive bibliography. The chapters in this book address the appropriate aspects of the data processing chain, with particular emphasis on understanding enterprise knowledge graphs, semantic big data architectures, and smart data analytics solutions.

Application of Knowledge Graphs to Question and Answer Systems

Application of Knowledge Graphs to Question and Answer Systems. A knowledge graph can be defined as “a graph created by describing entities and the relationships among them.” “Entities” here are things that exist physically or non-physically; they are not necessarily material, but abstractions that represent things (events in mathematics, law, academic fields, etc.). Examples of knowledge graphs range from simple, concrete statements such as “there is a pencil on the table” and “Mt. Fuji is located on the border between Shizuoka and Yamanashi prefectures” to more abstract ones such as “if a=b, then a+c = b+c,” “the consumption tax is an indirect tax that focuses on ‘consumption’ of goods and services,” and “in an electronically controlled fuel injection system, the throttle chamber is an intake throttling device attached to the collector of the intake manifold, containing a throttle valve to control the amount of intake air.” The advantage of using such knowledge graphs, from AI’s perspective, is that machines can access the rules, knowledge, and common sense of the human world through the data in the knowledge graphs. In contrast to recent black-box approaches such as deep learning, which require a large amount of training data to achieve learning accuracy, AI using knowledge graphs can produce results that are easy for humans to interpret, and can generate data based on knowledge data to enable machine learning with small data. By applying knowledge graphs to question-answer systems, it is possible to create a hierarchical structure of key terms, rather than simple FAQ question-answer pairs, and further associate them with context-specific questions and their alternatives, synonyms, and machine-learned response classes to provide an intelligent FAQ experience.
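A hedged sketch of the FAQ idea described above (the terms, synonyms, and answers are invented for illustration): a question is normalized through a synonym map and then matched against a hierarchy of key terms, falling back to broader terms when no direct answer exists, rather than matching against literal question strings.

```python
# Sketch of a KG-backed FAQ lookup. All terms and answers are hypothetical.
synonyms = {"vat": "consumption_tax", "sales tax": "consumption_tax"}
# key term -> broader term (a tiny term hierarchy)
broader = {"consumption_tax": "indirect_tax", "indirect_tax": "tax"}
answers = {
    "consumption_tax": "An indirect tax on consumption of goods and services.",
    "tax": "General information about taxes.",
}

def answer(question):
    """Normalize terms via synonyms, then climb the hierarchy to an answer."""
    q = question.lower()
    term = None
    for phrase, canonical in synonyms.items():
        if phrase in q:
            term = canonical
            break
    while term is not None:
        if term in answers:
            return answers[term]
        term = broader.get(term)  # fall back to a broader term
    return "No answer found."

print(answer("What is VAT?"))
```

The hierarchy lets one curated answer cover many surface phrasings, which is the practical benefit over flat question-answer pairs.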

Rule Bases and Knowledge Bases, Expert Systems and relational data

Rule Bases and Knowledge Bases, Expert Systems and relational data. This article describes rule-based systems that operate on data called a knowledge base.

For example, UniProtKB is one of the knowledge bases used in the life sciences. European institutions collaborate to collect protein information and, through annotation and curation (collecting, examining, integrating, and organizing data), provide it as UniProt (The Universal Protein Resource, http://www.uniprot.org/) together with analysis tools.

Dendral, a project developed at Stanford University in 1965, is a system for inferring the chemical structure of a measured substance from the positions of peaks (molecular weights) obtained by mass spectrometry. It was written in LISP.

MYCIN, a system derived from Dendral and developed in the 1970s, is also an expert system: it diagnoses infectious blood diseases in patients and presents the antibiotics to be administered along with the dosage.

Extracting Tabular Data from the Web and Documents and Semantic Annotation (SemTab) Learning

Extracting Tabular Data from the Web and Documents and Semantic Annotation (SemTab) Learning. There are countless tables of information on the Web and in documents, which are very useful as knowledge information compiled manually. In general, tasks for extracting and structuring such information are called information extraction tasks, and among them, tasks specialized for tabular information have been attracting attention in recent years. Here, we discuss various approaches to extracting this tabular data.
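One common subtask in semantic table annotation, sketched here under invented data (the entity catalog and cell values are hypothetical), is assigning a semantic type to a table column by looking up each cell in an entity catalog and taking a majority vote over the matched types:

```python
from collections import Counter

# Tiny entity catalog: surface form -> semantic type (hypothetical data).
catalog = {
    "Mt. Fuji": "Mountain",
    "Tokyo": "City",
    "Kyoto": "City",
    "Everest": "Mountain",
}

def annotate_column(cells):
    """Assign the column the most frequent type among matched cells."""
    types = [catalog[c] for c in cells if c in catalog]
    if not types:
        return None  # no cell matched any known entity
    return Counter(types).most_common(1)[0][0]

print(annotate_column(["Tokyo", "Kyoto", "Unknown City", "Mt. Fuji"]))
# majority type of the matched cells
```

Real SemTab systems replace the exact-match lookup with fuzzy entity linking against a large knowledge graph, but the voting step works the same way.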

Awareness and Artificial Intelligence Technology

Awareness and Artificial Intelligence Technology. “Awareness” means observing or perceiving something carefully; when a person becomes aware of a situation or thing, it means that he or she notices some information or phenomenon and forms a feeling or understanding about it. Becoming aware is an important process of gaining new information and understanding by paying attention to changes and events in the external world. In this article, I discuss this notion of awareness and the application of artificial intelligence technology to it.

GNN-based Biomedical Knowledge Graph Mining in Drug Development

GNN-based Biomedical Knowledge Graph Mining in Drug Development. Drug discovery and development (D3) is a very expensive and time-consuming process. It takes decades and billions of dollars to bring a drug to market from scratch, making the process highly inefficient in the face of emergencies such as COVID-19. At the same time, a vast amount of knowledge and experience has accumulated in the D3 process over the past several decades. This knowledge is usually coded in guidelines and the biomedical literature, which provide important resources, including insights that can be used as a reference for future D3 processes. Knowledge Graphs (KGs) are an effective way to organize the useful information contained in these documents for efficient retrieval; they also bridge the disparate biomedical concepts involved in the D3 process. In this chapter, we review existing biomedical KGs and show how GNN technology can facilitate the D3 process on KGs. Two case studies, Parkinson’s disease and COVID-19, are also presented to point out future directions.

Other Topic
Answer Set Programming : A Brief History of Logic Programming and ASP

Answer Set Programming : A Brief History of Logic Programming and ASP. Prolog, a logic programming language developed in the early 1970s, attracted attention as a new artificial intelligence language that combines declarative statements based on predicate logic with computational procedures based on theorem proving, and has been widely used in expert systems, natural language processing, and deductive databases since the 1980s.

While Prolog is Turing-complete and has high computational power, its underlying Horn-clause logic programs have limited applicability to real-world knowledge representation and problem solving, due to syntactic constraints and limited reasoning power.

To solve these problems, many attempts to extend the expressive capability of logic programming and to enhance its reasoning capability have been proposed since the late 1980s. As a result, since the late 1990s, answer set programming, which combines concepts from logic programming and constraint programming, has become established and is now one of the main paradigms in logic programming.

A Logic of Implicit and Explicit Belief

A Logic of Implicit and Explicit Belief

The Tractability of Subsumption in Frame-Based Description Languages

The Tractability of Subsumption in Frame-Based Description Languages

Computerized processing of law-related tasks

Computerized processing of law-related tasks. This is a collection of papers from a workshop held at the European University Institute in Florence on December 1 and 2, 2006, with the aim of building computable models (i.e., models that enable the development of computer applications for the legal domain) for different ways of understanding and explaining modern law.

The techniques are described with a focus on various specific projects, especially Semantic Web technologies.

Papers

Knowledge Data Visualization Technology

Visualization of knowledge graphs (relational data) using D3 and React

Visualization of knowledge graphs (relational data) using D3 and React. D3.js and React, both JavaScript-based, can be used as tools for visualizing relational data such as graph data. In this article, we discuss concrete implementations using D3 and React for 2D and 3D graph displays, and for heat maps as a form of displaying relational data.
