Navigate this blog

As described in DEUS EX MACHINA – How to Create a Mechanical God, this blog collects information necessary to create an artificial intelligence system from scratch. Furthermore, by using these fragments, we intend to provide hints for engineers to solve real business problems.

The following is a schematic of the contents of this blog, starting from IT infrastructure, programming techniques, various machine learning and AI techniques, their application to digital transformation, and finally life tips & miscellaneous notes. We believe that if one can master all of these, it may well be possible to create an artificial intelligence system from scratch.

The details of each are described below.

ICT Technologies

First, the infrastructure technologies on which ICT is based. The series “Introduction to Amazon Web Services Networking” teaches you how to set up a network on AWS, build servers, and configure security, giving you the knowledge to build a complete cloud infrastructure. You will also gain knowledge about building on-premise infrastructure.

Next, in the section on Linux, you will learn the basics of servers on the infrastructure you have set up, and articles on Ubuntu and CentOS, to be posted at a later date, will cover how to handle the individual operating systems.

In the hardware chapter, we discuss FPGAs, ASICs, and related devices, mainly from the perspective of machine learning, as well as the Raspberry Pi as a simple PC, the Arduino, and various sensor devices.

After building the server, we discuss web technology in general in About Web Technology. This section starts with an overview of web technologies (internet technologies, the HTTP protocol, web servers, web browsers, web applications, and programming technologies such as Javascript and React), then covers front-end technologies, including front-end development with Javascript and web design with CSS, and back-end technologies, including PHP and web frameworks, Clojure, and functional programming. These articles also include concrete code, so you can implement as you read.

In terms of practical use, the articles cover the implementation of a chatbot using Node.js and React; launching and using MAMP and MediaWiki as a simple CMS (content management system); FESS, an OSS search tool that can be set up in 5 minutes; ElasticSearch, an open-source search system; the data cleansing tool OpenRefine; and D3. The articles also describe concrete procedures for visualizing knowledge graphs (relational data) with React, and for Apache Spark, a platform for high-speed processing of stream data and large volumes of data, so that readers can actually put these tools to use.

In addition, database technology, which is inseparable from web technology, is described in “About Database Technology.” In these articles, basic database theory (algorithms) is covered in “Database Algorithms,” “RDBMS and SQL – About SQL” describes general SQL handling, and the RDF store, SPARQL, and Redis sections discuss NoSQL handling. Schema Matching and Mapping describes schema matching as an applied DB technology.
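The SQL side of these articles can be tried immediately with Python's built-in sqlite3 module. A minimal sketch; the table, columns, and data below are made up for illustration and are not taken from the blog's articles:

```python
import sqlite3

# In-memory SQLite database: no server setup required.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A tiny illustrative schema (table and column names are invented).
cur.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, topic TEXT)")
cur.executemany(
    "INSERT INTO articles (title, topic) VALUES (?, ?)",
    [("Intro to SPARQL", "semantic-web"),
     ("Redis basics", "nosql"),
     ("SQL joins", "rdbms")],
)
conn.commit()

# A basic parameterized SQL query with a WHERE clause.
rows = cur.execute(
    "SELECT title FROM articles WHERE topic = ? ORDER BY title", ("nosql",)
).fetchall()
print(rows)  # [('Redis basics',)]
```

The same SELECT/WHERE pattern carries over directly to full RDBMS products; the NoSQL stores discussed above (RDF stores, Redis) replace this schema-first model with triples or key-value pairs.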

The blog also discusses search technology, user interfaces and data visualization, stream data technology, sensor data, and IoT and WoT technologies, with concrete examples of various system technologies. As future web technologies, it covers Semantic Web technology along with the technologies corresponding to Web 2.0 and 3.0, and the latest information from various academic societies (collected papers from AI societies).

Encryption and Security Techniques and Data Compression Techniques describe various techniques for resolving the inherent tension between the protection and the use of personal and confidential information.
Programming Languages

The programming languages that concretely realize the above ICT technologies are described in the Programming Technology Overview, where the articles on programming, object-oriented languages, functional languages, and current web technologies give an overview and history of programming languages.

As for individual languages, the articles include Clojure and functional programming, Python and machine learning, PHP and web frameworks, Prolog and knowledge information processing, LISP and artificial intelligence technology, the R language and machine learning, C/C++ and various machine learning algorithms, and other back-end languages. Each gives an overview of the language and its applications, mainly to artificial intelligence, machine learning, and web technologies, based on concrete code. Front-end topics such as front-end development with Javascript and web design with CSS are likewise grounded in concrete code, enabling smooth prototyping.

The blog also includes a lot of content on artificial intelligence and machine learning technologies, which are the main themes of the blog.

Machine Learning Technologies

First, on machine learning techniques. In articles on scientific thinking and on KPI, KGI, and OKR, we describe scientific and methodological ways of analyzing the issues that matter when applying machine learning. We then discuss learning from small amounts of data with probabilistic generative models; local learning with the k-nearest-neighbor method; ensemble learning; the fusion of logic and rules with machine learning, such as combining probability and logic; the assimilation of simulation data, as in simulation and data assimilation; noise reduction and data cleaning by statistical processing; model testing such as Monte Carlo tests; distributed parallel processing; and advanced online learning.
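As one concrete example, the local learning by the k-nearest-neighbor method mentioned above fits in a few lines of NumPy. A minimal sketch; the function name and toy data are illustrative, not from the blog's articles:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each point
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]             # majority label among neighbors

# Toy 2-D data: class 0 clustered near the origin, class 1 near (5, 5).
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.1, 0.1])))  # 0
print(knn_predict(X, y, np.array([5.0, 5.0])))  # 1
```

Because the prediction depends only on nearby points, k-NN is "local": no global model is fitted, which is exactly what makes it useful when only a small amount of data is available.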

In addition, as an in-depth look at typical machine learning techniques, the mathematical foundations are described in Mathematics in Machine Learning, and basic machine learning algorithms and data structures are described in Algorithms and Data Structures in Machine Learning and General Machine Learning Algorithms.

Graph Data Algorithms and Structural Learning describes algorithms and implementations that focus on the structure of data (e.g., graph structures), and Explainable Machine Learning describes the explainability of learning models that make use of these structures.

In the area of deep learning, which has made significant progress in recent years, the blog describes distributed representations and Boltzmann machines based on Hinton’s paper on where features come from, which pioneered modern deep learning, and also discusses autoencoders, convolutional neural networks, Word2Vec, natural language processing with deep learning, and graph neural networks for image, speech, natural language, and other applications. For concrete implementations, deep learning with Python and Keras describes various applications using the Keras framework, which is among the simplest to use; by reading these examples, you can learn how to run actual deep learning, from general neural networks to CNNs, RNNs, and LSTMs. In addition, evolving deep learning with PyTorch gives an overview of recent models such as OpenPose, SSD, AnoGAN, Efficient GAN, DCGAN, Self-Attention, BERT, Transformer, GAN, PSPNet, 3DCNN, and ECO, with links to sample code.
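To give a feel for what frameworks such as Keras abstract away, here is a minimal NumPy sketch of the forward pass of a small dense network. The layer sizes and random weights are arbitrary choices for illustration, not code from any article here:

```python
import numpy as np

def relu(x):
    # Rectified linear activation, the default hidden-layer nonlinearity.
    return np.maximum(0.0, x)

def softmax(z):
    # Numerically stabilized softmax: rows become probability distributions.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# One hidden layer: 4 inputs -> 8 hidden units -> 3 output classes.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(2, 4))   # a batch of 2 input vectors
h = relu(x @ W1 + b1)         # hidden activations
probs = softmax(h @ W2 + b2)  # class probabilities per input

print(probs.shape)            # (2, 3)
print(probs.sum(axis=1))      # each row sums to 1
```

A Keras `Sequential` model with two `Dense` layers computes exactly this kind of forward pass; the framework adds the backward pass, optimizers, and GPU execution on top.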

For probabilistic generative models, a different approach from deep learning, the History of Bayesian Statistics and Bayesian Estimation with STAN describes the basic principles of Bayesian statistics, and Bayesian Inference and MCMC Free Software describes STAN, a powerful MCMC library for Bayesian inference, and its specific applications. The blog also discusses the principles and implementation of MCMC, variational Bayesian learning, machine learning with Bayesian inference, nonparametric Bayes and Gaussian processes, and various other Bayesian inference methods in Markov Chain Monte Carlo (MCMC) Methods.
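The core idea behind MCMC can be sketched without any library as a random-walk Metropolis sampler for a one-dimensional target. This is a minimal illustration of the principle, not STAN's actual (Hamiltonian Monte Carlo) algorithm:

```python
import math
import random

def metropolis(log_p, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-density log_p."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)  # symmetric Gaussian proposal
        # Accept with probability min(1, p(proposal)/p(x)), in log space.
        if math.log(rng.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
burned = samples[5000:]                 # discard burn-in
mean = sum(burned) / len(burned)
print(mean)                             # close to 0, the target's mean
```

The chain only ever needs log-density ratios, which is why MCMC works for posteriors known only up to a normalizing constant, the typical situation in Bayesian inference.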

In sparsity-based machine learning, the blog describes the theory and implementation of Lasso, which is used as a regularization (parameter-count optimization) module in various machine learning applications. It also describes, in the kernel method (support vector machines), applications of kernel functions and duality problems (Lagrangian duality) to various machine learning tasks, considered a breakthrough in machine learning technology alongside Lasso.
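The soft-thresholding update at the heart of Lasso can be sketched as a small coordinate-descent solver. A minimal illustration on synthetic data, assuming the standard objective 0.5·||y − Xw||² + λ·||w||₁, not a production implementation:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: shrinks z toward 0, zeroing small values."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso: min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            z = X[:, j] @ r                 # correlation of feature j with it
            w[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j])
    return w

# Synthetic data: only the first 2 of 5 features actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 0.0]) + 0.01 * rng.normal(size=100)

w = lasso_cd(X, y, lam=5.0)
print(w)  # large weights on the first two features, the rest near zero
```

The L1 penalty drives irrelevant coefficients exactly to zero, which is the "optimization of the number of parameters" role described above.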

Time series data analysis describes techniques based on models called state space models, using data assumed to be autocorrelated along a time axis, with probabilistic approaches such as Kalman filters and particle filters used in machine control and business applications.

In relational data learning, the blog describes the theory and implementation of analyzing data that represents relationships as matrices or tensors, which is used for a wide range of tasks such as natural language and purchase behavior.

Topic models extract, from a large set of documents, what topics each document discusses. With this technology it is possible to find documents that are close in topic and to organize documents by topic, making it useful for search and other solutions. Beyond document analysis, it has applications in many fields, including image processing, recommendation systems, social network analysis, bioinformatics, and music information processing.

Statistical causal inference/search is a machine learning approach to derive not just “correlation” but “causality,” the relationship between cause and effect, from vast amounts of data; the blog describes its applications in various settings, including the medical and manufacturing industries.

Submodular Optimization and Machine Learning describes machine learning for “discrete” data, mainly combinatorial optimization, for tasks such as sensor placement and graph cuts in computer vision.

Simulation, Data Science and Artificial Intelligence extends traditional machine learning with data assimilation, an approach that starts from the simulation side and incorporates “data” into the simulation process, and with “emulation,” in which machine learning mimics the output of a simulation instead of real-world data.
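As a minimal illustration of the state-space idea, a scalar Kalman filter for a random-walk state observed with noise fits in a few lines. The noise parameters and data below are arbitrary, chosen only for illustration:

```python
import numpy as np

def kalman_1d(observations, q=1e-4, r=0.1):
    """Scalar Kalman filter.

    State model:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
    Observation model: y_t = x_t + v_t,      v_t ~ N(0, r)
    """
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for y in observations:
        p = p + q            # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain: trust in the new observation
        x = x + k * (y - x)  # update estimate toward the observation
        p = (1.0 - k) * p    # posterior variance shrinks after the update
        estimates.append(x)
    return estimates

# Noisy observations of a constant true value 1.0.
rng = np.random.default_rng(0)
ys = 1.0 + rng.normal(0.0, 0.3, size=200)
est = kalman_1d(ys, q=1e-5, r=0.09)
print(est[-1])  # close to the true value 1.0
```

The same predict/update structure generalizes to vector states and, by replacing the Gaussian updates with weighted samples, to the particle filters mentioned above.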
Artificial Intelligence Technologies
Artificial intelligence technologies other than machine learning introduced in this blog include: natural language processing technology for converting human (linguistic) information into data that machines can handle; Semantic Web technology for automatically connecting various types of data using semantic relationships between them; knowledge information processing and ontology technology for handling knowledge information; chatbot and question-answering technology that enables human-machine conversation; inference technology that allows machines to make various inferences; image information processing; speech recognition; anomaly and change detection; stream data technology; sensor data and IoT technology; artificial life and agents; and various other approaches.

In addition, the theoretical backgrounds that make these approaches possible, such as information theory, discrete mathematics, category theory, graph theory, logic, dynamic programming, genetic algorithms, and PSO, are also discussed as individual topics.

Natural language processing is a technology that allows computers to handle the words humans use. It can take various forms, such as writing rules that describe how words behave, or solving problems based on statistical models of how words are actually used. As shown in Handling the Meaning of Symbols, a philosophical and mathematical approach is required to handle the meaning of words. Various machine learning techniques are also applied, such as deep learning, probabilistic generative models, and topic models.
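As a tiny example of the statistical approach, a bag-of-words tf-idf weighting can be computed with the standard library alone. The two "documents" below are made up, and whitespace splitting stands in for real tokenization:

```python
import math
from collections import Counter

# Two tiny illustrative "documents".
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
]
tokenized = [d.split() for d in docs]        # naive whitespace tokenization

# Term frequency: how often each word occurs in each document.
tfs = [Counter(tokens) for tokens in tokenized]

# Inverse document frequency: words appearing in fewer documents weigh more.
vocab = {w for tokens in tokenized for w in tokens}
idf = {w: math.log(len(docs) / sum(w in tokens for tokens in tokenized))
       for w in vocab}

# "the" appears in every document, so its idf (and tf-idf) is zero;
# "cat" is distinctive to the first document and gets positive weight.
print(tfs[0]["the"] * idf["the"])  # 0.0
print(tfs[0]["cat"] * idf["cat"])  # log(2) ≈ 0.693
```

This weighting of "how words are actually used" is the starting point for the statistical models, topic models, and word embeddings discussed above.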
Life Tips & miscellaneous notes

Other topics include problem-solving methods, thinking methods, and experimental design, including Fermi estimation, inference patterns for hypothesis testing based on scientific thinking, and the design of experiments for causal inference based on KPI, KGI, OKR, and other problem-analysis methodologies.

Furthermore, in Zen, Artificial Intelligence, Machine Learning, and Life Tips, we discuss the relationship between Zen and artificial intelligence, focusing on Zen philosophy, Buddhist learning and joyful living, Zen thinking and hints for living through Zen-like living, and “artificial incompetence” (simple chatbots) in Zen and Buddhism.

Finally, in Travel, History, Sports, and Art, we discuss travel, history, sports, and art in Japan and abroad.
