Universal Computer: From Leibniz to Turing

The concept of the ‘universal computer’ took shape as humanity’s understanding of computational power deepened and ground-breaking ideas accumulated along the path to its realisation. This process was driven by a succession of thinkers and mathematicians, from Gottfried Wilhelm Leibniz to Alan Turing. Here I trace the development of those ideas along the path from Leibniz to Turing.

Contents
Chapter 1: Leibniz's dream
Chapter 2: Boole transforms logic into algebra
Chapter 3: Frege: from breakthrough achievement to despair
Chapter 4: Cantor walks through infinity
Chapter 5: Hilbert's rescue programme
Chapter 6: Gödel overthrows Hilbert's plans
Chapter 7: Turing conceives the general-purpose computer
Chapter 8: The universal computer becomes reality
Chapter 9: Beyond Leibniz's dream
Leibniz on Computing Machines and Universal Semiotics

Leibniz (Gottfried Wilhelm Leibniz) was a 17th century philosopher and mathematician who played a pioneering role in the formalisation of computation, logic and knowledge. In particular, two of his ideas, the ‘computational machine’ and ‘Characteristica Universalis’ (Universal Semiotics), are closely linked to concepts that form the basis of modern computer science and artificial intelligence.

Firstly, with regard to computational machines, Leibniz aimed to replace human computing power with machines that would be able to process calculations and logic quickly and accurately, and his computational machines were a pioneering attempt to realise this idea.

In terms of its features, Leibniz’s calculator was designed to carry out addition, subtraction, multiplication and division mechanically: a ‘gear mechanism’ performed the operations as a handle was turned by hand, thereby automating the four arithmetic operations. The arrangement of stepped gears that made repeated operations efficient is known as the stepped drum (the ‘Leibniz wheel’), and Leibniz developed the machine as an improvement on Pascal’s calculator (the Pascaline).

Although his calculator employed the decimal system, Leibniz also recognised the value of the binary system (0 and 1), and his understanding and use of the binary system contributed significantly to the development of the theory in later computer science.
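Leibniz’s binary insight can be illustrated with a short sketch (the code and function names are my own, not Leibniz’s notation): arithmetic reduces to mechanical manipulation of 0s and 1s, shown here for conversion and addition.

```python
# Leibniz's insight: any number can be written with only 0 and 1.
# A minimal illustration (my own, not Leibniz's notation): converting
# decimals to binary and adding two numbers purely in binary.

def to_binary(n: int) -> str:
    """Repeatedly divide by 2, collecting remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def add_binary(a: str, b: str) -> str:
    """Add two binary strings digit by digit with a carry."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(to_binary(13))              # -> 1101
print(add_binary("1101", "101"))  # 13 + 5 -> 10010 (18)
```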

Although Leibniz’s computing machines did not become popular due to the technical limitations of the time, his ideas influenced the later Babbage and Turing, and his work in laying the foundations for machine-based logic processing is very significant.

Secondly, with regard to the Characteristica Universalis (Universal Semiotics), he aimed to construct a symbolic system in which all knowledge and thought could be expressed in a single form, and conceived of logical reasoning and argumentation as something that could be processed like mathematical formulae.

Leibniz believed that natural language contains ambiguities and sought a way to represent all knowledge in strict symbols, which would enable arguments and reasoning to be ‘calculated’. Leibniz believed that logical conflicts and arguments could be resolved through symbolic manipulation and wrote: ‘Most human disputes are due to a lack of precise logical reasoning. If a dispute arises, I would say. ‘Let’s calculate’’ (Calculemus!).

Thus, he tried to unify mathematical and philosophical knowledge and to deal with all knowledge through formulas and symbols. Leibniz’s semiotics was a precursor to what we now call ‘formal logic’ and ‘programming languages’. Although his ideas did not lead to the completion of an actual universal symbolic system, they had a profound influence on later logics and theories of computation, such as Frege’s ‘symbolic logic’, Boolean algebra and Turing’s theory of computation.

Computing machines, acting as devices for concrete numerical processing, and universal semiotics as a framework for abstract knowledge processing, became the mechanism for realising Leibniz’s dream of formalising logic and processing it with machines. These are embodied in today’s computers and artificial intelligence, and computer programmes can be seen as an extension of Leibniz’s dream of ‘handling symbolic logic with machines’.

Boolean formalisation of logic

George Boole (1815-1864) was a mathematician and philosopher who developed Leibniz’s idea of a universal symbolism into a new approach to the mathematical representation of logical reasoning. This approach, known as ‘Boolean Algebra’, contributed significantly to the development of formal logic and became the basis for modern computer science and digital circuit design.

Boole expressed logic in the form of mathematical expressions, allowing logical conclusions to be drawn through mathematical operations. The breakthrough of Boolean logic was that it put the propositions and logical connectors of classical logic (e.g. ‘and’, ‘or’ and ‘negation’) into a form that could be handled by mathematical expressions.

Specifically, it treats the truth or falsehood of propositions mathematically as ‘1 (true)’ and ‘0 (false)’ and defines logical operations as algebraic operations such as:

  • AND (logical conjunction): \( A \land B \), or \( AB \)
  • OR (logical disjunction): \( A \lor B \)
  • NOT (negation): \( \neg A \), or \( 1 - A \)

By combining these basic operations, complex logical expressions can be constructed, an approach that is very suitable for computers and digital circuits.
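As a minimal sketch of this algebraic reading (my own illustration, not Boole’s notation), the connectives can be written as ordinary arithmetic on 0 and 1, and laws such as De Morgan’s can be checked by exhaustively testing all truth values:

```python
# Boole's idea: with truth as 1 and falsehood as 0, the logical
# connectives become ordinary algebra (my own illustration).

def AND(a: int, b: int) -> int:
    return a * b            # conjunction as multiplication: AB

def OR(a: int, b: int) -> int:
    return a + b - a * b    # disjunction: A + B - AB (stays in {0, 1})

def NOT(a: int) -> int:
    return 1 - a            # negation as 1 - A

# De Morgan's law, NOT(A AND B) == NOT(A) OR NOT(B), checked exhaustively:
for a in (0, 1):
    for b in (0, 1):
        assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
print("De Morgan's law holds for all truth values")
```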

Furthermore, Boolean algebra is closely related to set theory, where logical conjunction (AND) is a form corresponding to the intersection of sets, logical disjunction (OR) is the union of sets and negation (NOT) is the complement of sets.
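This correspondence can be seen directly with Python’s built-in sets (the example sets are my own):

```python
# Boolean algebra <-> set theory, sketched with Python sets.
A = {1, 2, 3}
B = {2, 3, 4}
universe = {1, 2, 3, 4, 5}

print(A & B)         # AND <-> intersection: {2, 3}
print(A | B)         # OR  <-> union: {1, 2, 3, 4}
print(universe - A)  # NOT <-> complement: {4, 5}
```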

The mathematical approach to logic constructed by Boole became the basis for later formal logic and computational theory, and forms the core of modern computer science, especially digital circuit design. The logic gates (AND, OR, NOT) described in ‘Computational elements and semiconductor chips that make up computers’ are designed based on Boolean logic and are essential to the operation of computer processors and memories.
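As an illustration of how such gates compose into arithmetic hardware, here is the standard half-adder construction sketched from AND, OR and NOT alone (the code is my own, not taken from the cited article):

```python
# A half adder built from the three basic Boolean gates (my own sketch
# of a standard construction): it adds two bits, producing a sum bit
# and a carry bit.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR expressed with the basic gates: (A OR B) AND NOT(A AND B)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits: returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # -> (0, 1): 1 + 1 = binary 10
```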

Boole’s mathematical logic also provided an important foundation for the development of 20th-century formal logic, such as Whitehead and Russell’s Principia Mathematica, which is also discussed in ‘Data types and statically and dynamically typed languages in programming’.

Furthermore, Boolean logic plays a fundamental role in AI knowledge representation and reasoning, and logic programming languages (e.g. Prolog, also discussed in ‘Prolog and knowledge information processing’) apply such logical structures.

Extension of formal logic from Frege to Hilbert

Gottlob Frege (1848-1925) was one of the founders of formal logic and mathematical foundations, whose main goal was to reconstruct mathematics on the basis of pure logic. Frege’s innovations radically changed modern logic and had an important philosophical impact.

His main contributions were twofold. First, the formalisation of propositional logic: treating ‘propositions’ as the basic units of logic, representing them in a formal system, and placing truth values (true and false) at the centre of logic. Second, the development of predicate logic: introducing first-order predicate logic, in which quantifiers (the universal quantifier \( \forall \) and the existential quantifier \( \exists \)) make complex relations expressible, giving logic the ability to describe individual objects (variables) and the relations between them.

His 1879 book Begriffsschrift (‘Conceptual Notation’) was the first attempt to reconstruct the foundations of mathematics in formal logic; it integrated propositional and predicate logic and provided a rigorous basis for logical reasoning.
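Over a finite domain, Frege’s quantifiers have a direct computational reading; here is a sketch (my own illustration, with sample predicates) using Python’s all() and any():

```python
# Frege's quantifiers over a finite domain (my own illustration):
# "for all x, P(x)" <-> all(...), "there exists x, P(x)" <-> any(...).

domain = range(1, 11)

def P(x):
    return x * x >= x   # sample predicate: x^2 >= x

def Q(x):
    return x % 7 == 0   # sample predicate: x is divisible by 7

print(all(P(x) for x in domain))  # forall x. P(x) -> True
print(any(Q(x) for x in domain))  # exists x. Q(x) -> True (x = 7)
print(all(Q(x) for x in domain))  # forall x. Q(x) -> False
```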

David Hilbert (1862-1943) emerged next as the leading representative of formalism in the foundations of mathematics; his goal was to establish mathematics as a whole as a contradiction-free system. Hilbert inherited Frege’s work, but adopted a more formal, metamathematical approach.

His main contribution was to treat mathematics as a purely formal system and to justify it on that basis: to regard mathematics as a system of symbolic operations, to construct a methodology for showing that it contains no contradiction, and to advocate the Hilbert programme, which set out the following goals for rigorously justifying the whole of mathematics by means of formal logic:

  • Formulate all mathematics as a formal system based on a finite number of axioms.
  • Prove that the system is consistent.
  • Give these proofs using only intuitively evident, finitary methods.

Concretely, he reconstructed Euclidean geometry and arithmetic as formal axiomatic systems (in his methodology, an axiomatic system is completely symbolic and reasons according to definite rules).

He also introduced metamathematics, which studies formal systems themselves as objects, aiming to prove the consistency and completeness of mathematical systems.

Although the Hilbert programme was fundamentally constrained by Kurt Gödel’s incompleteness theorems (1931), discussed below, which proved that a sufficiently strong formal axiomatic system cannot prove its own consistency, Hilbert’s formalist approach provided an important foundation for the foundations of mathematics. Details are given in ‘Making Logic, Part 1: Beginning Logic, Reading Notes’ to ‘Making Logic, Part 4: Logic is interesting from here onwards, Non-Classical Logic, Reading Notes’, etc.

Frege’s formal logic was the first attempt to formulate mathematics on the basis of logic, but faced Russell’s paradox. Hilbert then established mathematics as a symbolic and formal axiomatic system, aiming for contradiction-free proofs, and although these attempts were partly constrained by Gödel’s incompleteness theorem, they formed the basis of modern formal logic and basic mathematical theory and had a profound influence on computer science and philosophy.

Kurt Gödel and the limits of computability

Kurt Gödel (1906-1978) became one of the most influential figures in 20th century mathematics, logic and the theory of computation, providing important insights into computability and the limits of mathematical systems.

Of formal mathematical systems (in particular, axiomatic systems strong enough to include the arithmetic of the natural numbers), his first incompleteness theorem states: ‘In any consistent (non-contradictory) formal system, there are propositions which can neither be proved nor disproved within the system.’ This theorem implies that such mathematical systems are incomplete: no axiomatic system, however rigorously constructed, can capture all mathematical truths.

Furthermore, Gödel’s second incompleteness theorem states: ‘A consistent formal system cannot prove its own consistency within the system.’ This revealed that Hilbert’s programme (the attempt to prove that formal mathematical systems are consistent) is unattainable.

To prove these results, Gödel used the method of encoding propositions and proofs within formal logic as natural numbers (Gödel numbers). This encoding made it possible to treat logical propositions as number-theoretic properties, and to prove the limits of mathematical systems formally.
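The idea of Gödel numbering can be sketched as follows (this particular coding scheme is a simplified illustration of my own): encode a sequence of symbol codes as a product of prime powers, one prime per position; unique prime factorisation guarantees the sequence can be recovered from the single number.

```python
# A simplified sketch of Gödel numbering (my own coding scheme, not
# Gödel's exact one): the sequence (s1, s2, ..., sk), with each si >= 1,
# is encoded as 2^s1 * 3^s2 * 5^s3 * ...

PRIMES = [2, 3, 5, 7, 11, 13]   # enough primes for short sequences

def godel_number(codes):
    n = 1
    for p, c in zip(PRIMES, codes):
        n *= p ** c
    return n

def decode(n):
    """Recover the sequence by reading off prime exponents."""
    codes = []
    for p in PRIMES:
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        if c == 0:              # exponent 0 marks the end (codes are >= 1)
            break
        codes.append(c)
    return codes

codes = [3, 1, 4]               # e.g. symbol codes for a tiny formula
g = godel_number(codes)         # 2^3 * 3^1 * 5^4 = 15000
print(g, decode(g))             # -> 15000 [3, 1, 4]
```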

Gödel’s approach went on to influence the computability theories of Alan Turing and Alonzo Church, which are discussed below.

Gödel’s incompleteness theorems provided the basis for drawing the boundary between ‘computable’ and ‘non-computable’ in computability theory. They suggested that not all mathematical problems are algorithmically solvable; Turing then introduced the model of the ‘Turing machine’, rigorously defined computability, and extended Gödel’s formal idea of incompleteness into the theory of computation.

Gödel’s results provided the theoretical background that foreshadowed the following results in computability theory:

  • Halting Problem: there is no algorithm that decides, for an arbitrary program and input, whether the program halts.
  • Church-Turing Thesis: what is effectively computable corresponds exactly to what a Turing machine can compute.

Gödel’s theorem showed that ‘truth’ and ‘provability’ do not coincide. This means that there are propositions that are true even if they cannot be formally proved, and has had a profound impact on philosophical arguments for a universal basis for mathematics.

This, for example, clearly demonstrated the limits of formalism: no matter how well organised a system is, there are truths that cannot be fully described within it. This raises the following philosophical questions:

  • Does mathematics contain ‘intuitive truths’ that cannot be explained by symbolic manipulation alone?
  • Where do mathematical truths exist?
  • If there are mathematical truths inaccessible to humans, what does it mean to know them?
  • How should we deal with unprovable ‘truths’?
  • Is the notion of ‘unprovable but true’ justified?
  • Should all knowledge be provable? Is it possible to know unprovable truths?
  • What are the limits of the expressive power of language? How should extra-linguistic truths be described?
  • Is there any way to recognise ‘truth’ outside formal systems? Does absolute truth exist?

The formalism advocated by Hilbert was an attempt to formalise mathematics completely, but Gödel’s results made its limits clear. Gödel’s work has also influenced computer science and artificial intelligence (AI). In particular, the existence of problems that cannot be solved by algorithms has become an important concept for the limits of AI.

Turing’s theorisation of the universal computer

Alan Turing (1912-1954), also mentioned in ‘Turing’s Outline of the Theory of Computation, Reference Books and Neural Turing Machines’, was a foundational figure in computer science and computational theory, and theorised the concept of the universal computer through his model of the ‘Turing Machine’. This innovative model gave a rigorous definition of computability and has had a major influence on the design of modern digital computers.

Turing’s Turing Machine, a theoretical model designed to define computability, consists of the following components:

  • Tape: an infinitely long, one-dimensional tape with numerous cells (compartments), each with a symbol on it. The tape is readable and writable, and symbols can be erased and new symbols can be written on it.
  • Head: A device that moves left and right on the tape, reading and writing symbols at the current position.
  • State transitions: a machine has a set of ‘finite states’ and decides what to do next based on the current state and the symbol on the tape.
    • Rewriting of symbols
    • Movement of the head (left or right)
    • Transition to a new state
  • Halting state: a ‘halting state’ is defined in which the computation ends under certain conditions.
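The components listed above can be sketched as a tiny simulator (the machine definition and rule format are my own illustration). The example machine inverts every bit on its tape and halts at the first blank cell:

```python
# A minimal Turing machine simulator (my own sketch of the components
# described above). Rules map (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(tape, rules, state="start", halt="halt"):
    tape = list(tape)
    head = 0
    while state != halt:
        symbol = tape[head] if head < len(tape) else "_"   # "_" = blank cell
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol                        # rewrite symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1                   # move the head
    return "".join(tape)

# Example machine: invert every bit, halt on the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # -> 01001_
```

Because the rule table is itself just data, a machine that reads another machine’s rule table from its tape and simulates it is exactly the Universal Turing Machine Turing proposed.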

Turing then generalised from machines that perform specific computations and proposed the ‘Universal Turing Machine’ (UTM), which has the following characteristics:

  • Programme versatility: it can emulate any specific Turing machine, reading a ‘programme’ from its tape and operating according to that programme.
  • Similarity to modern computers: a UTM operating with a programme as its input is precisely the concept of the modern programmable computer.

Turing defined computability as ‘what can be computed by a Turing machine’, and this universal Turing machine enabled a rigorous definition of what is ‘computable’. This definition forms the basis of the current theory of computation.

Turning to the limits of computability, Turing applied Gödel’s incompleteness results to the theory of computation, leading to the following important results:

  • Halting Problem
    – Problem statement: is there an algorithm that determines whether an arbitrary program ‘halts’ or ‘runs indefinitely’ for a given input?
    – Turing result: no such algorithm exists (i.e. the halting problem cannot be solved).
  • Significance
    – It was shown that there are intrinsic limits to computability.
    – The limits of computability have implications not only for theoretical computer science, but also for application areas such as AI and cryptography.
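Turing’s diagonal argument can be sketched in code (the function names are my own; the `halts` decider is hypothetical and provably cannot exist):

```python
# A sketch of Turing's diagonal argument (function names are my own):
# suppose a decider `halts(program, input)` existed; then the program
# below would halt exactly when it does not halt -- a contradiction.

def halts(program, data):
    """Hypothetical halting decider -- no such algorithm can exist."""
    raise NotImplementedError("no such algorithm exists")

def paradox(program):
    # If `program` would halt when run on itself, loop forever;
    # otherwise, halt immediately.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether paradox(paradox) halts leads to a contradiction:
# either answer from `halts` contradicts paradox's own behaviour,
# so `halts` cannot exist.
```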

Turing machines are recognised as the theoretical model for digital computers, and the idea of the programmable computer directly influenced John von Neumann’s stored-programme architecture.

It also provides the basis for computability, computational complexity and algorithmic theory, and Turing’s work can be seen as developing the theory of computation from a philosophical question (‘what is computable?’) towards a practical one (‘how can we compute efficiently?’).

In addition, Turing contributed to the birth of AI: in his paper ‘Computing Machinery and Intelligence’ (1950), also described in ‘The Turing Test, Searle’s Refutation and Artificial Intelligence’, he proposed the ‘Turing Test’ as a criterion for judging whether a machine is intelligent.

Reference books

As a topic of relevance to the ‘universal computer’, reference books on developments in the theory of computation, mathematics and philosophy from Leibniz to Turing will be discussed.

General books
1. ‘Fundamentals of Computer Science’

2. ‘What Can Be Computed?: A Practical Guide to the Theory of Computation’

3. ‘Evolutionary Computation with Intelligent Systems: A Multidisciplinary Approach to Society 5.0’

4. ‘Leibniz: An Intellectual Biography’

5. ‘Turing: Pioneer of the Information Age’

Reference books
1. ‘The Universal Computer: The Road from Leibniz to Turing’ by Martin Davis
A classic book that traces the evolution of the concept of computability through the history of the theory of computation from Leibniz to Turing.

2. ‘Logicomix: An Epic Search for Truth’ by Apostolos Doxiadis and Christos Papadimitriou
A graphic novel that teaches the development of mathematics and logic in a narrative format. There are also indirect connections to Turing and Leibniz.

3. ‘Alan Turing: The Enigma’ by Andrew Hodges
A detailed biography of Turing, ideal for an in-depth understanding of the theory of computation and his influence.

4. ‘From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931’ by Jean van Heijenoort
Contains the main papers on mathematical logic and provides an insight into the influence of Leibnizian thought and its subsequent development.

Notable topics
– Leibniz’s conception of the ‘universal language’ and ‘computability’:
Learn the background to his search for ways to make logical reasoning computable through the Monadology and his collected papers.

– Turing’s Turing Machine and the definition of computability:
Read ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ (1936).
