Backward inference and Forward inference


The reasoning in Prolog and core.logic described above is called backward inference. Backward inference is reasoning that works from a goal back to sub-goals. When the goal A=C is to be derived from the premise A=B, the sub-goal B=C is derived by reasoning, “To prove A=C, it suffices to show B=C, since A=B.”
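To make this concrete, here is a minimal sketch of goal-driven (backward) inference in Python. The article’s examples use Prolog and core.logic, where this mechanism is built into the engine itself; the `facts`, `rules`, and `prove` names below are only illustrative assumptions.

```python
# A minimal, illustrative sketch of backward (goal-driven) inference.
# In Prolog or core.logic this mechanism is provided by the engine;
# here it is spelled out by hand for the A=B, B=C example.

facts = {"A=B", "B=C"}          # premises that are known to hold
rules = {
    "A=C": [["A=B", "B=C"]],    # to prove A=C, it suffices to prove A=B and B=C
}

def prove(goal):
    """Try to establish `goal` by reducing it to sub-goals."""
    if goal in facts:                      # the goal is already a known fact
        return True
    for subgoals in rules.get(goal, []):   # otherwise try each rule for this goal
        if all(prove(g) for g in subgoals):
            return True
    return False

print(prove("A=C"))  # True: A=C is reduced to the sub-goals A=B and B=C
```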

For example, when a doctor diagnoses a disease, he thinks of as many possible causes of the symptoms as he can and chooses the one that best fits them; or when a piece of equipment fails, he lists all the possible causes, identifies the true cause among them, and perhaps finds some hypothetical law (cause and effect) and organizes it into a theory.

As a more concrete example, let’s consider applying backward reasoning to the case of a device that has stopped. When a device that has been operating normally stops working, backward reasoning first lists multiple possibilities for the cause rather than just one, for example:
(1) Has the cable come loose from the device?
(2) Is the cable itself broken?
(3) Is the fuse or breaker in the unit tripped?
(4) Is the equipment malfunctioning?
(5) Is the air conditioning in the room turned on?
(6) Is the door of the room open?
(7) Is some force beyond the reach of human knowledge acting on the room?
All of these are candidate causes.

Items (1) through (4) are easy to see as direct causes of the equipment not working. Items (5) and (6) may not seem directly related to the trouble at first glance, but they may be: the equipment could be shutting down because the room temperature rose after the air conditioning failed, or the opening and closing of the door could be tied to some event peculiar to the building. To be exhaustive, we also need to keep the possibility of (7) in mind. In this way, backward reasoning requires deep knowledge of the problem.

This backward reasoning algorithm works well in cases where knowledge can be gathered to some extent within a limited domain. For example, FTA (Fault Tree Analysis) and FMEA (Failure Mode and Effects Analysis), which are often used in manufacturing, are frameworks for comprehensively extracting and analyzing knowledge about failures and problems. The knowledge extracted with these frameworks can be used effectively in backward reasoning engines such as Prolog.
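As a hedged illustration, the kind of cause-and-effect knowledge that FTA/FMEA extract could be encoded as “effect → possible causes” rules and searched backward from the observed symptom. The sketch below is my own encoding (in Python rather than Prolog, and not the actual FTA/FMEA notation), using the candidate causes (1) through (6) listed above.

```python
# Illustrative sketch: failure knowledge (as might be extracted with FTA/FMEA)
# written as "effect -> possible causes" rules, then searched backward from the
# observed symptom. The rule set and function name are assumptions for this example.

causes = {
    "device stopped": [
        "no power reaches device",
        "equipment malfunction",        # candidate (4)
        "room temperature too high",    # via candidates (5)/(6)
    ],
    "no power reaches device": [
        "cable unplugged from device",  # candidate (1)
        "cable broken",                 # candidate (2)
        "fuse or breaker tripped",      # candidate (3)
    ],
    "room temperature too high": [
        "air conditioner not working",  # candidate (5)
        "room door left open",          # candidate (6)
    ],
}

def root_causes(symptom):
    """Collect the leaf-level candidate causes that could explain the symptom."""
    if symptom not in causes:          # no further decomposition: a root candidate
        return [symptom]
    found = []
    for c in causes[symptom]:
        found.extend(root_causes(c))
    return found

print(root_causes("device stopped"))
# ['cable unplugged from device', 'cable broken', 'fuse or breaker tripped',
#  'equipment malfunction', 'air conditioner not working', 'room door left open']
```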

On the other hand, for reasoning about an unknown world, sufficient knowledge cannot be accumulated in advance. To deal with this problem, abduction, which makes use of knowledge generated by some other means (e.g., probabilistic methods or machine learning), has been studied, but it has not yet yielded sufficient results.

In contrast to this backward reasoning, there is forward reasoning. When deriving the goal A=C from the premise A=B, forward reasoning proceeds from what is known: “A=B, and B=C, therefore A=C.” In the failure-analysis example, this corresponds to reasoning such as, “The power cable is not connected, so the device will not work even if the switch is turned on.”

Let me explain forward inference in more detail. For example, suppose there is a monkey in a zoo and the following facts are known.

(1) There is an apple on the other side of the door.
(2) A string for opening the door is attached to it, but it is out of the monkey’s reach.
(3) The monkey can pull the string if he can reach it.
(4) The monkey can hold the apple.
(5) There is a platform in the cage and it is low enough.
(6) The monkey is charming.

In addition, the following inference rules are given.

(a) If the platform is low enough, the monkey can climb it.
(b) If the monkey can pull the string and climb the platform, it can open the door.
(c) If the monkey can hold an apple, it can eat it.

Applying these rules to the original facts adds the following inference results:

(7) The monkey can climb the platform (rule (a) applied to fact (5))
(8) The monkey can open the door (rule (b) applied to facts (3) and (7))
(9) The monkey can eat the apple (rule (c) applied to facts (1), (4), and (8))

Thus, inference results (7), (8), and (9) are generated from premises (1) through (6) and rules (a), (b), and (c), and from them the fact that “the monkey can eat the apple” is inferred. The production system, as represented by CLIPS, is a tool that embodies this principle of forward reasoning. The point of such a tool is its built-in state-management mechanism: applying rules generates new facts, which are then used to drive the next step of inference.
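A minimal sketch of that production-system loop might look as follows (in Python rather than CLIPS, and with my own encoding of the facts and rules above): working memory starts with the given facts, and any rule whose conditions are all present fires and adds its conclusion as a new fact.

```python
# A minimal sketch of the forward-chaining loop behind a production system such
# as CLIPS: keep a working memory of facts, repeatedly fire any rule whose
# conditions hold, and add its conclusion as a new fact. The fact strings and
# rule encoding here are assumptions made for this illustration.

facts = {
    "apple behind door",        # (1)
    "monkey can pull string",   # (3), treated as a plain fact as in the text
    "monkey can hold apple",    # (4)
    "platform is low enough",   # (5)
}

# Each rule: (conditions, conclusion), mirroring rules (a)-(c) in the text.
rules = [
    ({"platform is low enough"},                              "monkey can climb platform"),  # (a)
    ({"monkey can pull string", "monkey can climb platform"}, "door is open"),               # (b)
    ({"door is open", "apple behind door",
      "monkey can hold apple"},                               "monkey can eat apple"),       # (c)
]

changed = True
while changed:                       # keep applying rules until nothing new is derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)    # the new fact becomes available to later rules
            changed = True

print("monkey can eat apple" in facts)  # True: facts (7), (8), (9) were derived in turn
```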

As you can see from this example, the key to forward inference is how the rules are chosen. If the inference heads in the wrong direction, it will not reach the goal. One way to guide it is to define an evaluation function that measures how far the newly accumulated facts have progressed toward the goal, and to select rules so that this evaluation function becomes as high as possible.
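As a rough sketch of this idea, the loop below fires, at each step, the applicable rule whose conclusion scores highest under an evaluation function. The word-overlap `score` used here is only a toy stand-in for a real heuristic, and the small rule set reuses the encoding from the previous sketch.

```python
# Sketch: evaluation-function-guided rule selection for forward inference.
# Among all rules that can fire, pick the one whose new fact scores highest
# against the goal; commit to it and repeat.

def score(fact, goal):
    """Toy evaluation: word overlap between the candidate fact and the goal."""
    return len(set(fact.split()) & set(goal.split()))

def forward_search(facts, rules, goal):
    """Greedy forward chaining: always fire the most promising applicable rule."""
    facts = set(facts)
    while goal not in facts:
        applicable = [(cond, concl) for cond, concl in rules
                      if cond <= facts and concl not in facts]
        if not applicable:
            return False          # stuck: no rule leads any further
        best = max(applicable, key=lambda r: score(r[1], goal))
        facts.add(best[1])        # commit to the highest-scoring new fact
    return True

# Tiny usage with the monkey example (same encoding as the previous sketch):
rules = [
    ({"platform is low enough"},                              "monkey can climb platform"),
    ({"monkey can pull string", "monkey can climb platform"}, "door is open"),
    ({"door is open", "monkey can hold apple"},               "monkey can eat apple"),
]
print(forward_search({"platform is low enough", "monkey can pull string",
                      "monkey can hold apple"}, rules, "monkey can eat apple"))  # True
```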

In fact, in AI for Go and Shogi, backward inference is not feasible, so the evaluation function for forward reasoning is devised along these lines. If the opportunity arises, I would like to discuss these as well.

An example of an expert-system implementation is CLIPS in Java; implementation and use in Clojure based on these will be described separately.
