Workflow & Services Technologies

About Workflow & Services Technologies

Workflow and service technologies play an important role in businesses and organisations: workflow defines how tasks and processes are executed and establishes efficient sequences of work, while service technology refers to the technologies and tools used to deliver value to customers and users.

Workflow and service technology are interrelated and support effective service delivery. For example, automated workflows can help streamline customer support and problem-solving processes and improve service quality and timeliness, while digitalised service technology makes workflows more flexible and allows information to be shared in real time, thereby improving work efficiency.

By combining these elements, companies and organisations can work more efficiently and provide better services to their customers and users. The integration of workflow and service technologies is essential to remain competitive and grow in the modern business environment.

This section summarises information on service platforms, workflow analysis and their application to real business problems, focusing on papers published at ISWC and elsewhere.

Technical Topics

The Sciences of the Artificial (1969) is a book by Herbert A. Simon in the field of learning science and artificial intelligence that has particularly influenced design theory. The book is concerned with how man-made phenomena should be categorised and discusses whether such phenomena belong in the realm of ‘science’, touching on system design and decision-making systems.

ISO 31000 is an international standard for risk management, providing guidance and principles to help organisations manage risk effectively. Combining ISO 31000 with AI technology is a highly effective approach to enhancing risk management and supporting more accurate decision-making, and the use of AI can make the risk management process more efficient and effective in several respects.

Bowtie analysis is a risk management technique used to organise risks in a visually understandable way; the name comes from the fact that the resulting diagram resembles the shape of a bowtie. Combining bowtie analysis with ontologies and AI technologies is a highly effective approach to enhancing risk management and predictive analytics and to designing effective responses to risk.
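As a concrete sketch of the idea, the left/right structure of a bowtie diagram (threats and preventive barriers on one side, consequences and mitigating barriers on the other, a top event in the middle) can be captured in a small data model. All names below are invented for the example:

```python
# Toy bowtie model (illustrative names): threats on the left, one top
# event in the middle, consequences on the right; each path carries a
# list of barriers (preventive on the threat side, mitigating on the
# consequence side).
bowtie = {
    "top_event": "loss of containment",
    "threats": {                       # threat -> preventive barriers
        "corrosion": ["inspection programme", "protective coating"],
        "overpressure": ["pressure relief valve"],
    },
    "consequences": {                  # consequence -> mitigating barriers
        "fire": ["gas detection", "emergency shutdown"],
        "environmental spill": ["bunding"],
    },
}

def barrier_count(model):
    """Total number of preventive and mitigating barriers in the diagram."""
    sides = list(model["threats"].values()) + list(model["consequences"].values())
    return sum(len(barriers) for barriers in sides)
```

A structure like this is what an ontology-backed tool would reason over, e.g. to flag threat paths with no barrier at all.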

Notion is an all-in-one productivity tool that integrates functions such as documents, task management, databases and project management, the idea being that any information can be put to effective use once it is captured in Notion.

User-customized learning aids utilizing natural language processing (NLP) are being offered in a variety of areas, including the education field and online learning platforms. This section describes the various algorithms used and their specific implementations.

  • Auto-Grading (automatic grading) technology

Auto-grading refers to the process of using computer programmes and algorithms to automatically assess and score learning activities and assessment tasks. This technology is mainly used in the fields of education and assessment.
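As a minimal illustration of the idea, the following sketch grades multiple-choice submissions against an answer key; the questions and answers are invented for the example:

```python
# Minimal auto-grading sketch (hypothetical answer key and submission):
# compare each submitted answer to the key, record per-question feedback,
# and return the overall score as a fraction of correct answers.

def auto_grade(answer_key: dict, submission: dict) -> dict:
    """Score a submission against an answer key."""
    feedback = {}
    correct = 0
    for question, expected in answer_key.items():
        given = submission.get(question)
        if given == expected:
            feedback[question] = "correct"
            correct += 1
        else:
            feedback[question] = f"expected {expected!r}, got {given!r}"
    return {"score": correct / len(answer_key), "feedback": feedback}

result = auto_grade(
    {"q1": "b", "q2": "d", "q3": "a"},   # answer key
    {"q1": "b", "q2": "a", "q3": "a"},   # student submission
)
```

Real systems layer NLP on top of this skeleton to grade free-text answers rather than exact matches.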

Access control is a security technique for controlling access to information systems and physical locations so that only authorized users can reach authorized resources. It is widely used to protect the confidentiality, integrity, and availability of data and to enforce security policies. This section describes various algorithms and implementation examples for access control.
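One of the most common such algorithms is role-based access control (RBAC). A minimal sketch, with an illustrative role-to-permission table (the roles and actions are assumptions, not a real system's policy):

```python
# Minimal RBAC sketch: map each role to the set of permissions it holds,
# and grant an action only if the role explicitly carries that permission.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default lookup is the key design choice: anything not explicitly granted is refused, which preserves confidentiality when new roles or actions appear.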

PlantUML is an open-source tool that can automatically draw various data models; it is based on Graphviz, an open-source drawing tool originally developed at AT&T Laboratories. It is a component for quickly creating various diagrams such as class, sequence, and state diagrams.

There are various ways to use PlantUML: (1) running the Jar file, (2) installing via brew on a Mac, (3) using web services, and (4) embedding it in applications.
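For illustration, a minimal PlantUML source file for a sequence diagram might look like this (the participants are invented for the example):

```plantuml
@startuml
actor User
participant "Web Service" as WS
database DB

User -> WS : request
WS -> DB : query
DB --> WS : result
WS --> User : response
@enduml
```

With approach (1), saving this as `diagram.puml` and running `java -jar plantuml.jar diagram.puml` renders it to a PNG next to the source file.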

Displaying and animating graph snapshots on a timeline is an important technique for analyzing graph data, as it helps visualize changes over time and understand the dynamic characteristics of graph data. This section describes libraries and implementation examples used for these purposes.

This article describes how to create graph animations by combining NetworkX and Matplotlib, a technique for visually representing dynamic changes in networks in Python.

Methods for plotting high-dimensional data in low dimensions using dimensionality reduction techniques to facilitate visualization are useful for many data analysis tasks, such as data understanding, clustering, anomaly detection, and feature selection. This section describes the major dimensionality reduction techniques and their methods.
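As a small example of one such technique, the following sketch projects illustrative 4-dimensional data down to 2 dimensions with PCA from scikit-learn; the data is synthetic, with one deliberately correlated column:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic high-dimensional data: 100 samples, 4 features,
# with feature 1 strongly correlated to feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[:, 1] = 2 * X[:, 0] + rng.normal(scale=0.1, size=100)

# Project to 2 dimensions for plotting.
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)

print(X2.shape)                       # one 2-D point per original sample
print(pca.explained_variance_ratio_)  # share of variance kept per axis
```

The explained variance ratio is the usual sanity check: if the first two components keep most of the variance, a 2-D scatter plot of `X2` is a faithful view of the data.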

Gephi is an open-source graph visualization software that is particularly suitable for network analysis and visualization of complex data sets. Here we describe the basic steps and functionality for visualizing data using Gephi.

Cytoscape.js is a graph theory library written in JavaScript that is widely used for visualizing network and graph data. Cytoscape.js makes it possible to add graph and network data visualization to web and desktop applications. Here are the basic steps and example code for data visualization using Cytoscape.js.

Sigma.js is a web-based graph visualization library that can be a useful tool for creating interactive network diagrams. Here we describe the basic steps and functions for visualizing graph data using Sigma.js.

Trust services are mechanisms for verifying the authenticity of data and of the data distribution infrastructure on the Internet, preventing falsification, spoofing of transmission sources, and the like. They include (1) digital signatures, (2) time stamps, (3) e-seals, (4) website authentication, (5) authentication of the legitimacy of goods, and (6) e-delivery.

To date on-line processes (i.e. workflows) built in e-Science have been the result of collaborative team efforts. As more of these workflows are built, scientists start sharing and reusing stand-alone compositions of services, or workflow fragments. They repurpose an existing workflow or workflow fragment by finding one that is close enough to be the basis of a new workflow for a different purpose, and making small changes to it. Such a “workflow by example” approach complements the popular view in the Semantic Web Services literature that on-line processes are constructed automatically from scratch, and could help bootstrap the Web of Science. Based on a comparison of e-Science middleware projects, this paper identifies seven bottlenecks to scalable reuse and repurposing. We include some thoughts on the applicability of using OWL for two bottlenecks: workflow fragment discovery and the ranking of fragments.

The Semantic Web, using formal languages to represent document content and providing facilities for aggregating information spread around, can improve the functionalities provided nowadays by KM tools. This paper describes a Knowledge Management system, targeted at lawyers, which has been enhanced using Semantic Web technologies. The system assists lawyers during their everyday work, and allows them to manage their information and knowledge. A semantic layer has been added to the system, providing capabilities that make system usage easier and much more powerful, adding new and advanced means for creating, sharing and accessing knowledge.

Structured Clinical Documentation is a fundamental component of the healthcare enterprise, linking both clinical (e.g., electronic health record, clinical decision support) and administrative functions (e.g., evaluation and management coding, billing). Documentation templates have proven to be an effective mechanism for implementing structured clinical documentation. The ability to create and manage definitions, i.e., definitions management, for various concepts such as diseases, drugs, contraindications, complications, etc. is crucial for creating and maintaining documentation templates in a consistent and cohesive manner across the organization. Definitions management involves the creation and management of concepts that may be a part of controlled vocabularies, domain models and ontologies. In this paper, we present a real-world implementation of a semantics-based approach to automate structured clinical documentation based on a description logics (DL) system for ontology management. In this context we will introduce the ontological underpinnings on which clinical documents are based, namely the domain, document and presentation ontologies. We will present techniques that leverage these ontologies to render static and dynamic templates that contain branching logic. We will also evaluate the role of these ontologies in the context of managing the impact of definition changes on the creation and rendering of these documentation templates, and the ability to retrieve documentation templates and their instances precisely in a given clinical context.

Metadata is already attached to most data and applications: real-world objects such as books, food, movies, and other digital content. In addition, barcodes and RFID tags can store such data electronically, and their use is expected to explode in the future. Meanwhile, Web services are becoming more and more popular, and UPnP and ECHONET services are penetrating the home network. Our project proposes a new handheld application called Ubiquitous Service Finder. It displays the metadata around you on your cell phone as icons, from which you can invoke services semantically related to that metadata with a simple drag-and-drop operation.

In this paper, we present a demonstrator system which applies semantic web services technology to business-to-business integration, focussing specifically on a logistics supply chain. The system is able to handle all stages of the service lifecycle – discovery, service selection and service execution. One unique feature of the system is its approach to protocol mediation, allowing a service requestor to dynamically modify the way it communicates with a provider, based on a description of the provider’s protocol. We present the architecture of the system, together with an overview of the key components (discovery and mediation) and the implementation.

The Real Instituto Elcano (Royal Elcano Institute) in Spain is an independent, authoritative political research institute that provides commentary on the political situation in the world, with a focus on relations with Spain. As part of its information dissemination strategy, it operates a website for the general public. In this paper, we present and evaluate an application of a semantic search engine to improve access to the institute’s content. Rather than retrieving documents matched against keyword queries, the system accepts natural language queries and returns direct answers, with links to the supporting documents. The system combines ontology construction, automatic ontology population, and semantic access via natural language, and the paper also includes a failure analysis.

The integration of heterogenous data sources is a crucial step for the upcoming semantic web – if existing information is not integrated, where will the data come from that the semantic web builds on? In this paper we present the gnowsis adapter framework, an implementation of an RDF graph system that can be used to integrate structured data sources, together with a set of already implemented adapters that can be used in own applications or extended for new situations. We will give an overview of the architecture and implementation details together with a description of the common problems in this field and our solutions, leading to an outlook on the future developments we expect. Using our presented results, researchers can generate test data for experiments and practitioners can access their desktop data sources as RDF graph.

  • Do not use this gear with a switching lever! Automotive industry experience with semantic guides

One major trend may be observed in the automotive industry: built-to-order. This means reducing the mass production of cars to a limited-lot production. The emphasis of optimization then moves from the production step to earlier steps, such as the collaboration of suppliers and manufacturer in development and delivery. Thus knowledge has to be shared between different organizations and departments in early development processes. In this paper we describe a project in the automotive industry where ontologies have two main purposes: (i) representing and sharing knowledge to optimize business processes for the testing of cars and (ii) integration of life data into this optimization process. A test car configuration assistant (semantic guide) is built on top of an inference engine equipped with an ontology containing information about parts and configuration rules. The ontology is attached to the legacy systems of the manufacturer and thus accesses and integrates up-to-date information. This semantic guide accelerates the configuration of test cars and thus reduces time to market.

The Semantic Web is a difficult concept for typical end-users to comprehend. There is a lack of widespread understanding on how the Semantic Web could be used in day-to-day applications. While there are now practical applications that have appeared supporting back-end functions such as data integration, there is only a handful of Semantic Web applications that the average Google user would want to use on a regular basis. The Concept Object Web is a prototype application for knowledge/intelligence management that aggregates data from text documents, XML files, and databases so that end-users can visually discover and learn about knowledge objects (entities) without reading documents. The application addresses limitations with current knowledge/intelligence management tools, giving end-users the power of the Semantic Web without the perceived burden and complexity of the Semantic Web.

Since mobile Internet services are rapidly proliferating, finding the most appropriate service or services from among the many offered requires profound knowledge about the services, which is becoming virtually impossible for ordinary mobile users. We propose a system that assists non-expert mobile users in finding the appropriate services that solve the real-world problems encountered by the user. Key components are a task knowledge base of tasks that a mobile user performs in daily life and a service knowledge base of services that can be used to accomplish user tasks. We present the architecture of the proposed system including a knowledge modeling framework, and a detailed description of a prototype system. We also show preliminary user test results; they indicate that the system allows a user to find appropriate services more quickly and with less effort than conventional commercial methods.

Situation awareness involves identifying and monitoring relationships among the objects taking part in an evolving situation. This problem is generally intractable and requires additional user-defined constraints and guidance in order to build a practical situation awareness system.
In this paper, we describe the Situation Awareness Assistant (SAWA), based on Semantic Web technology, which captures user-defined domain knowledge in the form of formal ontologies and rule sets, and facilitates the application of that domain knowledge to monitor relevant relationships as they occur in a situation. SAWA includes tools for developing ontologies in OWL and rules in SWRL, and provides runtime components for collecting event data, storing and querying data, monitoring relationships, and displaying results through a graphical user interface. We apply SAWA to a supply logistics scenario and discuss the challenges encountered in using SWRL for this task.

This application shows how to use Semantic Web technologies to provide a personalized syndicated view of distributed Web data. The application consists of four steps: an information collection step, in which information from distributed heterogeneous sources is extracted and enriched with machine-readable semantics; an operation step for timely and up-to-date extraction; an inference step, in which rules are reasoned over the semantic descriptions together with additional ontology and user profile information in the knowledge base; and a user interface creation step, in which the RDF descriptions resulting from the inference step are interpreted and transformed into an appropriate personalized user interface. This application was developed to solve the following real-world problem: provide a personalized syndicated view of the publications of a large European research project with more than 20 geographically dispersed partners, and embed this information in the contextual information of the project and its working groups.

In this paper, we show how to use an ontology as a bootstrap for the knowledge acquisition process of extracting product information from tabular data in web pages. Furthermore, we use logic rules to infer product-specific properties and derive higher-order knowledge about product features. We also describe the knowledge acquisition process, including both ontological and procedural aspects. Finally, we provide a qualitative and quantitative evaluation of our results.

In recent years, workflows have been increasingly used in scientific applications. This paper presents novel metadata reasoning capabilities that we have developed to support the creation of large workflows. They include 1) use of semantic web technologies in handling metadata constraints on file collections and nested file collections, 2) propagation and validation of metadata constraints from inputs to outputs in a workflow component, and through the links among components in a workflow, and 3) sub-workflows that generate metadata needed for workflow creation. We show how we used these capabilities to support the creation of large executable workflows in an earthquake science application with more than 7,000 jobs, generating metadata for more than 100,000 new files.

In today’s information-oriented society, access to information is a basic necessity. As one of the main players in the news business, news agencies are required to provide fresh, relevant, and quality information to their customers. Meeting this demand is not an easy task, but as a partner in the NEWS (News Engine Web Services) project, we believe that the use of Semantic Web technologies can help news agencies achieve their goals. In this paper we describe the objectives and main results of the NEWS project that has just finished.

Large heterogeneous online repositories of scientific information have the potential to change the way science is done today. In order for this potential to be realized, numerous challenges must be addressed concerning access to and interoperability of the online scientific data. In our work, we are using semantic web technologies to improve access and interoperability by providing a framework for collaboration and a basis for building and distributing advanced data simulation tools. Our initial scientific focus area is the solar terrestrial physics community. In this paper, we will present our work on the Virtual Solar Terrestrial Observatory (VSTO). We will present the emerging trend of the virtual observatory – a virtual integrated evolving scientific data repository – and describe the general use case and our semantically-enabled architecture. We will also present our specific implementation and describe the benefits of the semantic web in this setting. Further, we speculate on the future of the growing adoption of semantic technologies in this important application area of scientific cyberinfrastructure and semantically enabled scientific data repositories.

We have been developing a task-based service navigation system that offers the user services relevant to the task the user wants to perform. We observed that the tasks likely to be performed in a given situation depend on the user’s role, such as businessman or father. To further our research, we constructed a role-ontology and utilized it to improve the usability of task-based service navigation. We have enhanced a basic task-model by associating tasks with role-concepts defined in the new role-ontology. We can generate a task-list that is precisely tuned to the user’s current role. In addition, we can generate a personalized task-list from the task-model based on the user’s task selection history. Because services are associated with tasks, our approach makes it much easier to navigate a user to the most appropriate services. In this paper, we describe the construction of our role-ontology and the task-based service navigation system based on it.

A central element of emerging Service Oriented Architectures (SOA) is the ability to develop new applications by composing enterprise functionality encapsulated in the form of services – whether within a given organization or across multiple ones. Semantic service annotations, including annotations of both functional and non-functional attributes, offer the prospect of facilitating this process and of producing higher quality solutions. A significant body of work in this area has aimed to fully automate this process, while assuming that all services already have rich and accurate annotations. In this article, we argue that this assumption is often unrealistic. Instead, we describe a mixed initiative framework for semantic web service discovery and composition that aims at flexibly interleaving human decision making and automated functionality in environments where annotations may be incomplete and even inconsistent. An initial version of this framework has been implemented in SAP’s Guided Procedures, a key element of SAP’s Enterprise Service Architecture (ESA).

Clinical trials are studies in human patients to evaluate the safety and effectiveness of new therapies. Managing a clinical trial from its inception to completion typically involves multiple disparate applications facilitating activities such as trial design specification, clinical sites management, participants tracking, and trial data analysis. There remains however a strong impetus to integrate these diverse applications – each supporting different but related functions of clinical trial management – at syntactic and semantic levels so as to improve clarity, consistency and correctness in specifying clinical trials, and in acquiring and analyzing clinical data. The situation becomes especially critical with the need to manage multiple clinical trials at various sites, and to facilitate meta-analyses on trials. This paper introduces a knowledge-based framework that we are building to support a suite of clinical trial management applications. Our initiative uses semantic technologies to provide a consistent basis for the applications to interoperate. We are adapting this approach to the Immune Tolerance Network (ITN), an international research consortium developing new therapeutics in immune-mediated disorders.

The healthcare industry is rapidly advancing towards the widespread use of electronic medical records systems to manage the increasingly large amount of patient data and reduce medical errors. In addition to patient data there is a large amount of data describing procedures, treatments, diagnoses, drugs, insurance plans, coverage, formularies and the relationships between these data sets. While practices have benefited from the use of EMRs, infusing these essential programs with rich domain knowledge and rules can greatly enhance their performance and ability to support clinical decisions. The Active Semantic Electronic Medical Record (ASEMR) application discussed here uses Semantic Web technologies to reduce medical errors, improve physician efficiency with accurate completion of patient charts, improve patient safety and satisfaction in medical practice, and improve billing due to more accurate coding. This results in practice efficiency and growth by enabling physicians to see more patients with improved care. ASEMR has been deployed and in daily use for managing all patient records at the Athens Heart Center since December 2005. This showcases an application of the Semantic Web in health care, especially in small clinics.
