Ontology Technologies



Philosophically, ontology is the study of the nature of being and of what exists; in the fields of information science and knowledge engineering, an ontology is a formal model or framework for systematizing information and structuring data.

An ontology clarifies the meaning of information using elements such as terms (classes and properties), relations among terms (hierarchical relations and attribute relations), and instances (entities) of those terms, and knowledge is expressed by combining these elements. From an information science perspective, formal languages and description methods such as RDF (Resource Description Framework) and OWL (Web Ontology Language) are used to make such information machine readable.
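As a small illustration of this machine-readable form, the following sketch uses the rdflib library (one common choice in Python, not prescribed here) to declare a class hierarchy, a property, and an instance; the namespace and all term names are illustrative assumptions.

```python
# Minimal sketch with rdflib: terms (classes, a property), a hierarchical relation,
# and an instance, serialized as Turtle. All names and the namespace are illustrative.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

g.add((EX.Equipment, RDF.type, OWL.Class))
g.add((EX.Pump, RDF.type, OWL.Class))
g.add((EX.Pump, RDFS.subClassOf, EX.Equipment))             # hierarchical relation
g.add((EX.hasManufacturer, RDF.type, OWL.DatatypeProperty))
g.add((EX.hasManufacturer, RDFS.domain, EX.Pump))           # attribute relation

g.add((EX.pump01, RDF.type, EX.Pump))                       # instance (entity)
g.add((EX.pump01, EX.hasManufacturer, Literal("ACME")))

print(g.serialize(format="turtle"))
```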

Ontologies make it possible to systematize information such as concepts, things, attributes, and relations, and to express them in a sharable format. They are used to improve data consistency and interoperability among different systems and applications in diverse fields such as information retrieval, database design, knowledge management, natural language processing, artificial intelligence, and the Semantic Web. Domain-specific ontologies are also used to support the sharing and integration of information in specific fields and industries.

Specific examples of ontology use are as follows.

  • Knowledge Management: Ontologies can be used to organize knowledge within an organization and manage it in a sharable format, streamlining the organization and retrieval of information and promoting knowledge sharing and reuse.
  • Enhancing Search Engines: Ontologies can improve search engine results by increasing the semantic relevance of results and improving query interpretation.
  • Improving Natural Language Processing: Ontologies can improve the accuracy of natural language processing by supporting the interpretation and semantic understanding of textual data and by improving information extraction and retrieval.
  • Semantic Web: Ontologies can be used to semantically integrate information on the Web (the Semantic Web), facilitating the integration and interoperability of information from different data sources and increasing the semantic relevance of information on the Web.
  • Building Domain-Specific Knowledge Bases: Ontologies can be used to build knowledge bases for specific domains. For example, specialized knowledge in a field such as medicine or finance can be formalized as an ontology to share expert knowledge and to develop applications that leverage the knowledge base.
  • Data Integration: Ontologies can be used to integrate data from different data sources. Data integration combines information from different sources into one coherent data set, providing benefits such as semantic integration, improved data accuracy, and increased scalability and reusability (a minimal sketch follows this list).
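To make the data-integration point concrete, here is a minimal, assumption-laden sketch: two hypothetical sources describe the same attribute with different property URIs, and a single SPARQL UNION query reads them uniformly. All URIs and values are invented for illustration.

```python
# Toy data-integration sketch with rdflib: source A uses A:price, source B uses
# B:cost for the same attribute; one SPARQL query bridges the two vocabularies.
from rdflib import Graph, Literal, Namespace, RDF

A = Namespace("http://example.org/sourceA#")   # hypothetical source A
B = Namespace("http://example.org/sourceB#")   # hypothetical source B

merged = Graph()
merged.add((A.item1, RDF.type, A.Product))
merged.add((A.item1, A.price, Literal(120)))
merged.add((B.item2, RDF.type, B.Product))
merged.add((B.item2, B.cost, Literal(95)))

q = """
SELECT ?item ?value WHERE {
  { ?item <http://example.org/sourceA#price> ?value }
  UNION
  { ?item <http://example.org/sourceB#cost> ?value }
}
"""
for item, value in merged.query(q):
    print(item, value)
```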

In addition to the above, ontologies are being considered for application in combination with various other technologies. For details of these, please refer to the detailed techniques described below.

With the recent development of machine learning technology, automatic ontology generation using machine learning has been attracting attention. There are multiple approaches to this, and the following are representative ones.

  • Clustering and classification by unsupervised learning: Unsupervised learning algorithms analyze large amounts of data and automatically derive ontology concepts and classes by clustering or classifying concepts and things that share common characteristics.
  • Ontology extraction using natural language processing: Ontologies are generated automatically by extracting concepts and relations from text data with natural language processing techniques. For example, proper nouns and lexical relations can be extracted to create ontology classes and properties (a toy extraction sketch follows this list).
  • Ontology generation using knowledge graphs: Concepts and relations are derived automatically from a large knowledge base called a knowledge graph. Knowledge graphs are graph data that integrate knowledge from various domains, and ontologies can be generated by applying machine learning algorithms to the structure and relationships of the graph.
  • Rule-based ontology generation: Rule-based systems generate ontologies automatically by applying predefined rules and constraints. Rule-based approaches are useful for exploiting domain-specific knowledge and generating domain-specific ontologies.
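As a rough, toy-level illustration of the natural-language-processing approach mentioned above, the sketch below uses spaCy noun chunks to propose candidate classes and subclass relations (a "modifier + head" chunk is treated as a subclass of its head noun). It assumes the en_core_web_sm model is installed and is not a full ontology-learning pipeline.

```python
# Toy ontology-term extraction with spaCy: noun chunks become candidate classes;
# a multi-word chunk is heuristically treated as a subclass of its head noun.
import spacy
from collections import defaultdict

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed
text = ("A centrifugal pump is a pump that moves fluid. "
        "A reciprocating pump is a pump driven by a piston.")

doc = nlp(text)
classes = set()
subclass_of = defaultdict(set)

for chunk in doc.noun_chunks:
    head = chunk.root.lemma_.lower()                            # e.g. "pump"
    label = " ".join(t.lemma_.lower() for t in chunk
                     if not t.is_stop and not t.is_punct)       # e.g. "centrifugal pump"
    classes.add(head)
    if label and label != head:
        classes.add(label)
        subclass_of[label].add(head)   # candidate: "centrifugal pump" subClassOf "pump"

print("candidate classes:", sorted(classes))
print("candidate subclass relations:", dict(subclass_of))
```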

These approaches can be realized by using a combination of the “machine learning techniques,” “artificial intelligence techniques,” and “ICT techniques” described in this blog.

The details of the ontology technologies are described below.

Implementation

A knowledge graph is a graph structure that represents information as a set of related nodes (vertices) and edges (connections); it is a data structure used to connect information on different subjects or domains and visualize their relationships. This article outlines various methods for automatically generating knowledge graphs and describes concrete implementations in Python.

Building on the same definition, this section describes various applications of knowledge graphs and concrete examples of their implementation in Python.
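As a minimal sketch of the kind of implementation discussed here (assuming the networkx package and a handful of invented triples), the code below builds a small labelled, directed knowledge graph and queries the facts attached to one node.

```python
# Minimal knowledge-graph sketch with networkx: entities are nodes, relations are
# labelled directed edges; the triples here are invented for illustration.
import networkx as nx

triples = [
    ("Tokyo", "capital_of", "Japan"),
    ("Japan", "member_of", "G7"),
    ("Tokyo", "located_in", "Kanto"),
]

kg = nx.MultiDiGraph()
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

# List all facts whose subject is "Tokyo".
for subj, obj, data in kg.out_edges("Tokyo", data=True):
    print(subj, data["relation"], obj)
```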

Technical Topics

To be “aware” means to observe or perceive something carefully; when a person notices a situation or thing, it means that he or she has perceived some information or phenomenon and formed a feeling or understanding about it. Becoming aware is an important process of gaining new information and understanding by paying attention to changes and events in the external world. In this article, I will discuss this awareness and the application of artificial intelligence technology to it.

Ontology matching is a technique that aims to find correspondences between semantically related entities of different ontologies.

These correspondences can represent equivalences between ontology entities or other relations such as consequence, subsumption, or disjointness. Many different matching solutions have been proposed from various perspectives, such as databases, information systems, and artificial intelligence.

Various methods have been proposed for ontology matching, ranging from simple string matching to machine learning approaches, data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user participation in the matching process (a toy string-matching sketch follows).
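As a toy illustration of the simplest end of this spectrum, the sketch below matches entity labels from two small, invented ontologies with a normalized edit-distance ratio from Python's difflib; the labels and the 0.8 threshold are assumptions, and real matchers combine many more signals.

```python
# Toy string-based ontology matching: normalize labels, score pairs with
# SequenceMatcher, and keep candidate correspondences above a threshold.
from difflib import SequenceMatcher

onto_a = ["Pump", "HeatExchanger", "PressureSensor"]        # labels from ontology A (invented)
onto_b = ["pump", "heat_exchanger", "temperature sensor"]   # labels from ontology B (invented)

def normalize(label: str) -> str:
    return label.replace("_", " ").replace("-", " ").lower()

alignment = []
for a in onto_a:
    for b in onto_b:
        score = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
        if score >= 0.8:                 # illustrative threshold
            alignment.append((a, b, round(score, 2)))

print(alignment)   # candidate correspondences with similarity scores
```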

In the following pages of this blog, we discuss these various techniques for ontology matching.

This book mainly describes modeling languages such as RDF, RDFS, and OWL, as well as the modeling of ontologies using these languages, with explanations of the languages and the steps of the actual modeling methods. For chapters 11 and 12, where ontologies are discussed, I think it would be helpful to first read the literature on formal semantics, which I have already mentioned, for a better understanding.

The use of ontologies in manufacturing can be expected to help understand and optimize products and processes, increase productivity, improve quality, and reduce costs. This section covers the plant ontology ISO 15926; smart buildings and ontologies; failure/risk analysis methods such as FMEA and HAZID combined with ontologies; product data integration and production design in the enterprise; interactive failure diagnosis systems in the ship domain; risk diagnosis systems; product cost analysis tools in service systems; and plant equipment diagnostic systems.

Plant engineering refers to all technical work in the design and construction of plants (factories) such as chemical plants and power plants. It involves a wide range of technical issues, such as the selection of equipment and facilities required for plant operation, process flow studies, process control design, and the handling of environmental measures. Solving these issues requires expertise in mechanical engineering, electrical engineering, chemical engineering, civil engineering, computer science, control engineering, and other fields.

An ontology is a formal definition of concepts and relationships in a particular domain, and is useful for knowledge sharing and information integration in that domain. In plant engineering, ontologies enable the sharing of design and technical information and the automation of plant operation, monitoring, and control. They also make it possible to define objects representing equipment and devices in a plant, properties representing their functions and parameters, and relationships among those equipment and devices.

As mentioned in the previous article “Fusion of Plant Engineering Ontology ISO15926 and AI Technology,” plant engineering is a complex technology involving many elements and requiring a vast amount of knowledge data, so ontology technology is being actively applied. In this article, I would like to discuss the application of ontology technology to plant engineering from an operational perspective.

Smart building is an initiative to improve energy efficiency, security, and convenience through building automation technologies such as Internet of Things (IoT) devices, artificial intelligence, big data, cloud computing, and automation systems. The goal is to streamline building management and operations, reduce costs, and improve comfort by combining these technologies.

In smart buildings, the goal is to analyze data obtained from sensors installed in the building in order to optimize its operation and management. This requires many types of sensor data, and ontologies are used to integrate them.

Failure risk analysis is a method of assessing the risk of failure of a system, such as machinery or equipment, and predicting the likelihood and impact of a failure. Failure risk analysis is an important task for improving safety and reliability and is used in various industrial fields.

An ontology helps define terms and concepts, such as risk factors and evaluation indicators, in a unified manner for failure risk analysis. In machinery failure risk analysis, machine parts, functions, and operating conditions are risk factors; defining these terms and concepts consistently allows a common understanding to be maintained across different analyses.

This article describes product design and data integration using ontologies. In particular, it describes data integration and decision making with ontologies as a countermeasure against DMSMS (Diminishing Manufacturing Sources and Material Shortages), which is closely related to production planning.

FleetCase is a system that helps companies owning multiple devices and products diagnose failures and perform maintenance on them efficiently and accurately. It is built as an ontology-based knowledge base that systematically organizes information on the structure, functions, and relationships among the parts of the company’s equipment and products. FleetCase is used in the manufacturing, energy, and transportation industries and is expected to improve productivity and reduce costs by enabling more efficient failure diagnosis and maintenance of company-owned equipment and products.

A product service system (PSS) is a business model that provides a comprehensive value proposition combining products and services, rather than simply offering products. This allows companies to build long-term relationships with their customers and provide added value throughout the product lifecycle. Applying an ontology in this domain can be a useful tool for designing and implementing a PSS, helping to share knowledge and integrate information.

The application of ontologies in the legal field is expected to facilitate the organization and sharing of legal information and to contribute to the automation and efficiency of legal processes. For example, an ontology for a legal domain can formally define legal terms, relationships, procedures, and other concepts, enabling automated analysis of legal documents and retrieval of related legal information for efficient use in solving legal problems. Sharing legal information through ontologies also facilitates the exchange of information and knowledge across different jurisdictions, languages, and cultures, for example in international law and business transactions.

By applying ontology to data within a company, it is possible to convert the vast amount of data held by a company into meaningful information. Specifically, information assets held by a company can be managed in a unified manner using common terms and concepts, which will enable more efficient information sharing, automation of business processes, and data analysis within the company.

The field of Business Intelligence (BI) is expected to benefit from the application of semantic technologies. Semantic BI can be viewed as the convergence of semantics-based enterprise content management and business intelligence. Traditional BI solutions rely on extracting data from one or more data silos, performing analysis on this data, and presenting the key results to business users. With the increasing need to provide real-time information, manual and intensive preparation processes create bottlenecks. In addition, the inclusion of unstructured data such as emails and news feeds may provide a more complete picture, confirming the need to extract knowledge from these and quickly integrate new sources.

There are various research challenges in semantic BI. The knowledge extracted from unstructured sources must be of sufficient quality to be usable in critical BI systems and to be analyzed together with structured data. Correlation of knowledge extracted from different data modalities is also important. The representation, storage, and reuse of the results of BI processes through ontologies is an additional challenge.

Due to the complexity of the various factory automation components and technology solutions, information management based on relational databases reaches its limits in terms of maintenance complexity and flexibility of use. In addition to rich schema descriptions, state-of-the-art reasoning and SPARQL engines claim to offer attractive performance. Below we briefly report on the application of ontologies and reasoning to manage complex product data in the automation domain.
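As a small, hedged illustration of what "ontologies plus SPARQL over product data" can look like (not the system described above), the sketch below loads a tiny invented product-data graph with rdflib and uses an rdfs:subClassOf* property path as a lightweight stand-in for class-hierarchy reasoning.

```python
# Illustrative SPARQL over a tiny product-data graph; the rdfs:subClassOf* property
# path approximates RDFS subclass reasoning. All URIs and values are invented.
from rdflib import Graph

data = """
@prefix ex:   <http://example.org/fa#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Drive      rdfs:subClassOf ex:Component .
ex:ServoDrive rdfs:subClassOf ex:Drive .
ex:sd42 a ex:ServoDrive ; ex:ratedPowerKW 1.5 .
"""

g = Graph()
g.parse(data=data, format="turtle")

q = """
PREFIX ex:   <http://example.org/fa#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?power WHERE {
  ?item a/rdfs:subClassOf* ex:Component ;
        ex:ratedPowerKW ?power .
}
"""
for item, power in g.query(q):
    print(item, power)
```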

In the materials design domain, much of the data from materials calculations is stored in different heterogeneous databases. Materials databases usually have different data models, so users face the challenge of finding data from adequate sources and integrating data from multiple sources. Ontologies and ontology-based techniques can address such problems, since the formal representation of domain knowledge makes data more available and interoperable among different systems. In this paper, we introduce the Materials Design Ontology (MDO), which defines concepts and relations to cover knowledge in the field of materials design. MDO is designed using domain knowledge in materials science (especially solid-state physics) and is guided by data from several databases in the materials design field. We show the application of MDO to materials data retrieved from well-known materials databases.

Modern workplace physical environments have become an intersection of various systems mainly aiming to increase occupants’ comfort, safety, and productivity while reducing operational costs. In order to achieve such requirements, several systems installed in the workplace must be working optimally and in concert with each other. A typical workplace will have separate systems for HVAC, physical security, lighting, fire control, and many others [8]. For large portfolios, it is usually necessary to have multiple management servers for each system. Each system may connect to hundreds of unique equipment types and variations which are often configured differently across installations. The lack of standardization, even across similar equipment within the same system, makes it very difficult to integrate and interpret data in order to understand the systems’ behavior and provide value-added services for occupants and facility managers.

The adoption of the Internet of Things promoted the connectivity of buildings’ sensors, devices, and systems to the cloud. Such cloud connectivity is sustained by the ambition of promoting applications which will make use of the collected data. The aim is to rely on the gathered information to drive new business opportunities ranging from monitoring and visualization [8], [9], to energy peak shaving [5], and anomaly detection [12]. Connected things are of heterogeneous types and range from low-end devices such as sensors and actuators to more capable items such as systems which concentrate many devices. In such systems, contextual information of the connected sensors and gateways is often organized and expressed in a convention or a notation such as the single-line diagram.

In a given facility, different systems are usually deployed, such as a Building Management System (BMS) which monitors temperature, humidity, and CO2 levels to regulate cooling and heating along with the indoor air quality. A Power Monitoring System is deployed in order to monitor power quality and the power consumption of electrical loads, often classified by usage such as lighting, heating, cooling, and plug loads. Other systems are also deployed to collect presence data or to operate lighting systems. These BMS systems supervise and control the underlying controllers and devices.

The Maritime Situational Awareness Heterogeneous Sensor Network (MSA-HSN) ontology formalises the information aspects of the maritime surveillance system that is one of the demonstrative use cases of the Interactive Extreme-Scale Analytics and Forecasting (INFORE) project. Here, different situational views offered by a variegated suite of sensors and platforms are fused and combined with big data analytics to achieve situational awareness for maritime security. The ontology integrates prominent ontologies for sensors, measures and quantities, events, and maritime information, and extends them to model provenance, quality of information, and the qualitative temporal nature of information. The talk introduces the relevant aspects of the ontology design to demonstrate the formalisation of the information components of a prototypical information fusion system, and exemplifies the most interesting modelling patterns, from MSA sensor information, to maritime event detection and forecasting, to the modelling of information quality in fusion systems.

Explainability has been a goal for Artificial Intelligence (AI) systems since their conception, with the need for explainability growing as more complex AI models are increasingly used in critical, high-stakes settings such as healthcare. Explanations have often been added to an AI system in a non-principled, post-hoc manner. With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end user needs to specific explanation types and the system’s AI capabilities. We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types. We indicate how the ontology can support user requirements for explanations in the domain of healthcare. We evaluate our ontology with a set of competency questions geared towards a system designer who might use our ontology to decide which explanation types to include, given a combination of users’ needs and a system’s capabilities, both in system design settings and in real-time operations. Through the use of this ontology, system designers will be able to make informed choices on which explanations AI systems can and should provide.

We have developed a robotics ontology, OntoScene, that extends the IEEE CORA [5] and SemNav [1] ontologies. In contrast to prior work that did not use ontologies for scene understanding, the proposed system uses OntoScene to identify objects and their relations in a scene and create a scene graph to aid various cognitive robotic tasks where object localization and scene graph generation are important. This work positions semantic web technology as a key enabler in robotic tasks.

With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals, academic and psychological parameters, and the labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use-case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations.

Engineering projects for railway infrastructure typically involve many subsystems which need consistent views of the planned and built infrastructure and its underlying topology. Consistency is typically ensured by exchanging and verifying data between tools using XML-based data formats and UML-based object-oriented models. A tighter alignment of these data representations via a common topology model could decrease the development effort of railway infrastructure engineering tools. A common semantic model is also a prerequisite for the successful adoption of railway knowledge graphs. Based on the RailTopoModel standard, we developed the Rail Topology Ontology as a model to represent core features of railway infrastructures in a standard-compliant manner. This paper describes the ontology and its development method, and discusses its suitability for integrating data of railway engineering systems and other sources in a knowledge graph.
With the Rail Topology Ontology, software engineers and knowledge scientists have a standard-based ontology for representing railway topologies to integrate disconnected data sources. We use the Rail Topology Ontology for our rail knowledge graph and plan to extend it by rail infrastructure ontologies derived from existing data exchange standards, since many such standards use the same base model as the presented ontology, viz., RailTopoModel.

Ontology Design Patterns (ODP) have been proposed to facilitate ontology engineering. Despite numerous conceptual contributions for over more than a decade, there is little empirical work to support the often claimed benefits provided by ODPs. Determining ODP use from ontologies alone (without interviews or other supporting documentation) is challenging as there is no standard (or required) mechanism for stipulating the intended use of an ODP. Instead, we must rely on modelling features which are suggestive of a given ODP’s influence. For the purpose of determining the prevalence of ODPs in ontologies, we developed a variety of techniques to detect these features with varying degrees of liberality. Using these techniques, we survey BioPortal with respect to well-known and publicly available repositories for ODPs. Our findings are predominantly negative. For the vast majority of ODPs we cannot find empirical evidence for their use in biomedical ontologies.

Recent developments in data analysis and machine learning support novel data-driven operations optimizations in the real estate industry, enabling new services, improved well-being for tenants, and reduced environmental footprints. The real estate industry is, however, fragmented in terms of systems and data formats. This paper introduces RealEstateCore (REC), an OWL 2 ontology which enables data integration for smart buildings. REC is developed by a consortium including some of the largest real estate companies in northern Europe. It is available under the permissive MIT license, is developed and hosted at GitHub, and is seeing adoption among both its creator companies and other product and service companies in the Nordic real estate market. We present and discuss the ontology’s development drivers and process, its structure, deployments within several companies, and the organization and plan for maintaining and evolving REC in the future.

In this paper, we present an OWL-based ontology, the Cloud Computing Ontology (CoCoOn), that defines concepts, features, attributes and relations to describe Cloud infrastructure services. We also present datasets that are built using CoCoOn and scripts (i.e. SPARQL template queries and web applications) that demonstrate the real-world applicability of the ontology. We also describe the design of the ontology and the architecture of related services developed with it.

We present an unsupervised approach to processing natural language questions that cannot be answered by factual question answering or advanced data querying and instead require ad-hoc code generation.
To address this challenging task, our system, AskCO, performs language-to-code translation by interpreting the natural language question and generating a SPARQL query that is run against CodeOntology, a large RDF repository containing millions of triples representing Java code constructs. The SPARQL query returns a number of candidate Java source code snippets and methods, which AskCO ranks on both syntactic and semantic features to find the best candidate; this candidate is then executed to obtain the correct answer. The evaluation of the system is based on a dataset extracted from StackOverflow, and experimental results show that our approach is comparable with other state-of-the-art proprietary systems, such as the closed-source WolframAlpha computational knowledge engine.

Ontologies of research areas are important tools for characterising, exploring, and analysing the research landscape. Some fields of research are comprehensively described by large-scale taxonomies, e.g., MeSH in Biology and PhySH in Physics. Conversely, current Computer Science taxonomies are coarse-grained and tend to evolve slowly. For instance, the ACM classification scheme contains only about 2K research topics and the last version dates back to 2012. In this paper, we introduce the Computer Science Ontology (CSO), a large-scale, automatically generated ontology of research areas, which includes about 15K topics and 70K semantic relationships. It was created by applying the Klink-2 algorithm on a very large dataset of 16M scientific articles. CSO presents two main advantages over the alternatives: i) it includes a very large number of topics that do not appear in other classifications, and ii) it can be updated automatically by running Klink-2 on recent corpora of publications. CSO powers several tools adopted by the editorial team at Springer Nature and has been used to enable a variety of solutions, such as classifying research publications, detecting research communities, and predicting research trends. To facilitate the uptake of CSO we have developed the CSO Portal, a web application that enables users to download, explore, and provide granular feedback on CSO at different levels. Users can use the portal to rate topics and relationships, suggest missing relationships, and visualise sections of the ontology. The portal will support the publication of and access to regular new releases of CSO, with the aim of providing a comprehensive resource to the various communities engaged with scholarly data.

Over the past eight years, we have been involved in the development of a set of complementary and orthogonal ontologies that can be used for the description of the main areas of the scholarly publishing domain, known as the SPAR (Semantic Publishing and Referencing) Ontologies. In this paper, we introduce this suite of ontologies, discuss the basic principles we have followed for their development, and describe their uptake and usage within the academic, institutional and publishing communities.

Major academic publishers need to be able to analyse their vast catalogue of products and select the best items to be marketed in scientific venues. This is a complex exercise that requires characterising with a high precision the topics of thousands of books and matching them with the interests of the relevant communities. In Springer Nature, this task has been traditionally handled manually by publishing editors. However, the rapid growth in the number of scientific publications and the dynamic nature of the Computer Science landscape has made this solution increasingly inefficient. We have addressed this issue by creating Smart Book Recommender (SBR), an ontology-based recommender system developed by The Open University (OU) in collaboration with Springer Nature, which supports their Computer Science editorial team in selecting the products to market at specific venues. SBR recommends books, journals, and conference proceedings relevant to a conference by taking advantage of a semantically enhanced representation of about 27K editorial products. This is based on the Computer Science Ontology, a very large-scale, automatically generated taxonomy of research areas. SBR also allows users to investigate why a certain publication was suggested by the system. It does so by means of an interactive graph view that displays the topic taxonomy of the recommended editorial product and compares it with the topic-centric characterization of the input conference. An evaluation carried out with seven Springer Nature editors and seven OU researchers has confirmed the effectiveness of the solution.

Akoma Ntoso is an OASIS Committee Specification Draft standard for the electronic representations of parliamentary, normative and judicial documents in XML. Recently, it has been officially adopted by the United Nations (UN) as the main electronic format for making UN documents machine-processable. However, Akoma Ntoso does not force nor define any formal ontology for allowing the description of real-world objects, concepts and relations mentioned in documents. In order to address this gap, in this paper we introduce the United Nations System Document Ontology (UNDO), i.e. an OWL 2 DL ontology developed and adopted by the United Nations that aims at providing a framework for the formal description of all these entities.

Electronic Data Capture (EDC) software solutions are progressively being adopted for conducting clinical trials and studies, carried out by biomedical, pharmaceutical and health-care research teams. In this paper we present the MedRed Ontology, whose goal is to represent the metadata of these studies, using well-established standards, and reusing related vocabularies to describe essential aspects, such as validation rules, composability, or provenance. The paper describes the design principles behind the ontology and how it relates to existing models and formats used in the industry. We also reuse well-known vocabularies and W3C recommendations. Furthermore, we have validated the ontology with existing clinical studies in the context of the MedRed project, as well as a collection of metadata of well-known studies. Finally, we have made the ontology available publicly following best practices and vocabulary sharing guidelines.

  • Testing Chatbots with Ontologies (external link): Natural Language Processing (NLP) is a discipline that began around 1950 with the introduction of the famous Turing Test, Turing (1950). Virtual assistants are programs that communicate with users in natural language. These NLP programs, called chatbots, have the advantage of being close to natural and intuitive interactions. Typically, these programs understand information from a specific domain. Thus, chatbots often provide specific information in an entertaining and anonymous way. Several studies predict the rise of the chatbot market in the future, so it will be essential to address the functionality of these systems, Følstad and Brandtzæg (2017), Grudin (2019). Until now, only a few testing approaches exist to check the correctness of chatbots (e.g., Vasconcelos et al. (2017); Bozic (2019)). However, users can talk to chatbots in a variety of ways, which may make it difficult to predict their input range. In addition to that, testing chatbots in a generalized way proves to be problematic due to the lack of expectations.
