Deconstruction and graph neural networks

History of philosophy and pattern recognition in artificial intelligence technology

In the introduction to Introduction to Modern Thought, the author writes:

‘Man has historically followed the path of ordering society and himself, eliminating noise and striving for what is pure and right.’

This can be said to be the activity of humanity’s pursuit of the essence and truth of things, which has continued since the time of the ancient Greeks, as described in “What is the aim of philosophy” from the Special Lecture “Socrates’ Defence”.

This activity of seeking the essence can also be seen in early Buddhism, such as Kegon Buddhism, as described in “The Internet and Vairochanabutsu – Kegon Sutra and Esoteric Buddhism“, and in religions such as Christianity and Islam, as described in “Reading the Core of Christianity: Three Major Monotheistic Faiths, the Old Testament and Abraham“.

The New Textbook of Philosophy states that modern philosophy since Kant has shifted from this idea that something pure and right exists somewhere to the idea that we recognise the world by seeing it through our own cognitive apparatus, as represented by the phrase ‘I think, therefore I am’.

This ‘anthropocentric’ way of thinking offered a new perspective, but it carried the drawback that humans could no longer refer directly to the objects themselves from outside their own cognitive apparatus. This implied the possibility of a world beyond the reach of human thought, one in which humans could not engage, and also led to the contradiction that it never fully departed from the idea that something pure and right exists somewhere.

One of the answers to this perception of an outer world was provided by ‘structuralism’, which began with Lévi-Strauss’s ‘structural anthropology’, the pattern classification and interpretation of myths from all over the world. This approach, which boomed in France in the 1960s, is based on the idea that even if specific forms differ, similar patterns can be found deep within them, and that these patterns can be clarified. Structuralism was also motivated by the hope of discovering patterns common to human civilisation, and in this sense it aimed at a kind of universal science.

This approach can be said to resemble the deep-learning universalism in the field of artificial intelligence, which holds that “all patterns can be grasped by deep learning embeddings” and that a universal AI can be realised with these patterns. It is also close to the recent expectation, discussed in “An overview of prompt engineering and its applications”, that general-purpose AI can be achieved by continuing to scale up large language models (LLMs).

These ideas may be valid in a static world confined to a certain frame (domain), as described in “Heuristics and the Frame Problem”, but the world keeps changing, and the frame itself shifts from moment to moment along with those changes. Therefore, if we try to create something universal, a general-purpose artificial intelligence, we encounter problems that static pattern recognition alone cannot solve.

Graph neural networks and deconstruction

One of the artificial intelligence technologies that can respond to such frame changes is the Graph Neural Network (GNN) described in “Graph Neural Networks”. A GNN applies deep learning to graph data. The dynamic GNN, described in “Overview, algorithms and implementation examples of dynamic graph embedding”, in which the graph itself changes over time, is a model that can represent changes in the aforementioned frames.

Using this model, relationships that change over time, and even the emotions of the people involved in each piece of information, can be recognised as topological relationships, making it possible to recognise different features of the same piece of information.
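As a rough sketch of this idea, the example below treats a dynamic graph as a sequence of snapshots and applies a single GCN-style message-passing step to each snapshot, so that a change in the graph (the ‘frame’) is reflected in new node embeddings. The function names, the random weight matrix and the toy graphs are assumptions made for illustration only, not the implementation of any particular method referenced above.

```python
# A minimal sketch (assumed, illustrative only): message passing over a
# sequence of graph snapshots, each given as an adjacency matrix plus node
# features. Real GNN / dynamic graph embedding methods are covered in the
# articles linked above.
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalise an adjacency matrix with self-loops (GCN-style)."""
    adj_hat = adj + np.eye(adj.shape[0])
    deg = adj_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ adj_hat @ d_inv_sqrt

def gnn_layer(adj, features, weight):
    """One message-passing layer: aggregate neighbour features, then transform (ReLU)."""
    return np.maximum(normalize_adjacency(adj) @ features @ weight, 0.0)

def embed_snapshots(snapshots, weight):
    """Embed each snapshot of a dynamic graph independently.

    `snapshots` is a list of (adjacency, features) pairs, one per time step,
    so a change in the frame (the graph topology) simply becomes a new
    snapshot whose embedding reflects the new relationships.
    """
    return [gnn_layer(adj, feats, weight) for adj, feats in snapshots]

# Example: a 3-node graph whose topology changes between two time steps.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))    # node features (e.g. text / emotion attributes)
weight = rng.normal(size=(4, 2))   # projection matrix (would be learned in practice)
adj_t0 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
adj_t1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
embeddings = embed_snapshots([(adj_t0, feats), (adj_t1, feats)], weight)
print(embeddings[0].shape, embeddings[1].shape)  # (3, 2) (3, 2)
```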

In the philosophical world, ideas that respond to dynamic frame changes have also been presented. One of them is what is called post-structuralism.

Post-structuralism recognises, as differences, the dynamic elements that static pattern recognition could not see, and discusses the dynamically changing world by taking up issues of pattern change and of shifts and deviations from the pattern, attempting to describe something like human creativity.

This approach is similar, in terms of artificial intelligence technology, to the world of graph neural networks. In a graph neural network, a node’s embedding captures not only the features of the node itself but also the topology of its relationships with the other entities to which it is connected.
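The small, self-contained example below (again an assumption made for illustration, not taken from the referenced articles) shows this point directly: a node whose own features are unchanged receives a different embedding when the topology around it changes, because aggregation mixes neighbour information into its representation.

```python
# Illustrative only: the same node features yield different embeddings under
# different topologies, because aggregation draws in neighbour information.
import numpy as np

def mean_aggregate(adj, features):
    """Average each node's own features with those of its neighbours."""
    adj_hat = adj + np.eye(adj.shape[0])          # add self-loops
    return adj_hat @ features / adj_hat.sum(axis=1, keepdims=True)

features = np.array([[1.0, 0.0],   # node 0
                     [0.0, 1.0],   # node 1
                     [2.0, 0.0]])  # node 2
star = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)   # node 0 linked to 1 and 2
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # node 0 linked to 1 only

print(mean_aggregate(star, features)[0])   # node 0's embedding in the star graph
print(mean_aggregate(chain, features)[0])  # a different embedding, same node features
```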

There is also existentialism as an approach to what lies beyond this world. There, the question is not merely whether a thing exists or not, but how human beings relate to it.

Derrida, who advocated deconstruction, focuses on the dichotomy, the basic element of relational structure, as such a difference, and constructs his argument from it.

A concrete example of Derrida’s idea of deconstruction is the ‘deconstruction’ of the hierarchical dichotomy between spoken language (parole) and written language (écriture). Since Plato, a disciple of Socrates, the West has positioned spoken language as dominant and written language as subordinate to it (the two standing in a hierarchical dichotomous relationship). The reasoning is that spoken language, in which the speaker stands before you and speaks with emotion, is what matters, whereas the written language that records it is interpreted differently by different readers, so that a reader does not necessarily reach the same understanding as someone who heard the original spoken words.

Moreover, when written language becomes dominant, people lose the will to remember what they heard in spoken language, and their ability to retain memories weakens. As written language spreads, people’s internal memory is replaced by recall based on consulting an external written medium, as described in “Dealing with the meaning of symbols in computers”, and written language was therefore considered to contain a ‘poisonous’ element that can erode human memory.

In contrast, Derrida argued that:

  1. The differences between the internal and the external in human beings, and between spoken and written language, are subtle and fluid.
  2. There is no hierarchical dichotomous relationship in which spoken language is superior and written language inferior, since human internal memory is limited, and even the stage at which a person hears spoken language and recalls it as memory requires the mediation of words, that is, of written language.

By stating this, he destroys the hierarchical dichotomous structure between spoken language (parole) and written language (écriture) from within and ‘deconstructs’ it.

There is also the dichotomy between ethical and unethical conduct, generally regarded as a hierarchical dichotomy with the former dominant and the latter subordinate, which can be ‘deconstructed’ as follows.

However ethical an act may be judged to be, it may still involve unethical aspects; ethical and unethical acts are therefore not a simple dichotomy but relative concepts.

To be more specific, consider a case in which student A, walking down the street, is being unilaterally beaten by adult B with his bare hands, and C, who steps in to help, stabs B with a knife he happens to be carrying.

In this case, C’s act of stepping in to help student A is ethical, but stabbing with a knife a person who is striking only with his bare hands is excessive and, in that respect, is judged unethical. Ethical and unethical acts therefore coexist within this single act of rescue, and it is difficult to view it as a simple hierarchical dichotomy.

In many cases, the judgement of what counts as ethical or unethical conduct is subtle or fluid, since such judgements change over time. In the past, acts in which parents inflicted some form of punishment on their children in order to discipline them (e.g. slapping them or locking them out of the house) were regarded as effective educational acts performed as a parent and fell within the category of ethical conduct.

However, with concepts such as domestic violence and power harassment becoming recognised as social problems in recent years, such acts are increasingly evaluated as unethical behaviour, in light of the negative physical and psychological effects they have on children.

Therefore, according to the ‘deconstructive’ way of thinking, the dichotomy between ethical and unethical conduct is not absolute but a relative concept, and in some cases both aspects are inherent in a single act.

Examples of this dichotomy in various fields are described in “Philosophy and business”.

Such a deconstructive modelling of the world becomes possible with the aforementioned GNNs. In other words, GNNs are a technology that brings us one step closer to modelling and computing a dynamic world on a general-purpose computer.
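As a hedged sketch of what such modelling might look like in practice (the node names, relations and snapshots below are hypothetical, invented only to mirror the ethics example above), a dynamic graph can hold the same act while the evaluation edges attached to it change from one time step, or frame, to the next:

```python
# Hypothetical sketch: representing the shifting ethical evaluation above as a
# dynamic graph, where the same act node carries different evaluation edges in
# different time steps (frames).
from dataclasses import dataclass, field

@dataclass
class GraphSnapshot:
    """One time step of a dynamic graph: attributed nodes and labelled edges."""
    nodes: dict = field(default_factory=dict)   # node_id -> attributes
    edges: list = field(default_factory=list)   # (source, relation, target)

# Time step 1: parental punishment is evaluated as ethical (discipline).
t1 = GraphSnapshot(
    nodes={"parental_punishment": {"type": "act"},
           "ethical": {"type": "evaluation"},
           "unethical": {"type": "evaluation"}},
    edges=[("parental_punishment", "evaluated_as", "ethical")],
)

# Time step 2: the same act is now also evaluated as unethical (abuse), so both
# evaluations coexist on a single act, as in the 'deconstructed' view above.
t2 = GraphSnapshot(
    nodes=dict(t1.nodes),
    edges=[("parental_punishment", "evaluated_as", "ethical"),
           ("parental_punishment", "evaluated_as", "unethical")],
)

dynamic_graph = [t1, t2]
for step, snapshot in enumerate(dynamic_graph, start=1):
    print(f"t{step}:", snapshot.edges)
```

A dynamic GNN embedding computed over such snapshots would then assign the same act different representations at different times, which is precisely the kind of frame change discussed above.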
