Overview of Linear Algebra, Libraries, and Reference Books

Linear Algebra and Machine Learning

Linear algebra is the field of mathematics that uses vectors and matrices to analyze linear relationships, and it is one of the most important mathematical foundations of machine learning. It is used above all to make computation over large amounts of data efficient, but it also appears in many other ways, as shown below.

  • Feature Representation: In machine learning, data is represented as features. Features are typically treated as vectors, and the operations of linear algebra are used to represent and process the data mathematically through these feature vectors.
  • Model Parameter Estimation: Machine learning models are built by estimating optimal parameters from the data. This parameter estimation is usually formulated as an optimization problem, such as least squares or the maximum likelihood estimation described in “Overview of Maximum Likelihood Estimation and Algorithms and Their Implementations”, and for simple models it is often solved using the matrix operations of linear algebra (see the least-squares sketch after this list).
  • Data preprocessing: In machine learning, data must be preprocessed and transformed into a format suitable for the model. For example, linear algebra operations are used in data normalization, scaling, and dimensionality reduction.
  • Linear models: Linear algebra also provides the theoretical basis for linear models (e.g., linear regression and logistic regression). Before turning to complex models, linear models are often used because they are computationally cheap and the effect of each parameter is easy to interpret; the concepts of linear algebra play a central role in this modeling.
  • Matrix Computation: Machine learning relies heavily on matrix computation to process large data sets efficiently. For example, the theory of linear algebra is used to streamline the matrix products and sums that appear in deep learning models and in the probability calculations of Bayesian inference.
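
As a concrete illustration of the parameter-estimation bullet above, the following is a minimal sketch of ordinary least squares with NumPy. The data, the "true" weights, and all variable names are invented for this example; it solves the normal equations X^T X w = X^T y as a linear system.

```python
# A minimal least-squares sketch with NumPy (illustrative data only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])            # invented "true" parameters
y = X @ true_w + 0.1 * rng.normal(size=100)    # noisy linear targets

# Normal equations: solve (X^T X) w = X^T y with a matrix routine
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)                                       # close to [2.0, -1.0, 0.5]
```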

This section provides an overview of linear algebra, of libraries for it in various languages, and of reference books.

What is Linear Algebra?

Linear Algebra is a branch of algebra that studies theories centered on linear spaces and linear transformations, together with their applications: matrices, determinants, and systems of linear equations.

A linear space is an abstract mathematical concept defined in linear algebra: a space (an abstract setting with dimensions, i.e. a coordinate system, in which geometric objects such as points, lines, surfaces, and solids live) built from two sets, vectors and scalars, whose operations satisfy certain conditions. Simply put, it is a set of vector objects equipped with two operations, addition and scalar multiplication, that satisfy the following conditions (written out formally in the LaTeX sketch after the list).

  • Closure under addition: for any two vectors in the linear space, their sum also lies in the linear space.
  • Closure under scalar multiplication: for any vector in the linear space, its product with a scalar also lies in the linear space.
  • Associative law for addition: vector addition is associative.
  • Commutative law for addition: vector addition is commutative.
  • Existence of a zero vector: a zero vector (the vector all of whose elements are zero) exists in the linear space.
  • Existence of inverse vectors: for any vector in the linear space, its additive inverse also exists in the linear space.
  • Distributive laws: scalar multiplication distributes over vector addition and over scalar addition.
  • Associative law for scalar multiplication: multiplying by one scalar and then another is the same as multiplying by their product.
  • Existence of the unit scalar: the scalar 1 leaves every vector unchanged under scalar multiplication.
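
Written out formally, the axioms above read as follows, using V for the set of vectors, F for the scalars, and u, v, w and a, b for arbitrary elements of each (a standard statement of the vector space axioms):

```latex
\begin{align*}
& u + v \in V, \qquad a v \in V && \text{(closure)}\\
& (u + v) + w = u + (v + w) && \text{(associativity of addition)}\\
& u + v = v + u && \text{(commutativity of addition)}\\
& \exists\, 0 \in V : v + 0 = v && \text{(zero vector)}\\
& \exists\, (-v) \in V : v + (-v) = 0 && \text{(additive inverse)}\\
& a(u + v) = a u + a v, \qquad (a + b) v = a v + b v && \text{(distributivity)}\\
& a(b v) = (a b) v && \text{(associativity of scaling)}\\
& 1\, v = v && \text{(unit scalar)}
\end{align*}
```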

For more information on these axiomatic treatments of sets, see “Overview of Set Theory and Reference Books” and “Making Logic Work, Part 3: Another Look at Logic, Reading Notes”.

Linear spaces provide the basic framework for linear algebra, including vector arithmetic, linear equation solving, and the concepts of eigenvalues and eigenvectors, and have a wide range of applications in many scientific fields, including mathematics, physics, and engineering.

Another concept that occupies an important place in linear algebra is that of linear transformations.

A linear transformation is a mapping (function) from one linear space to another. Simply put, it is the operation of mapping a vector to another vector.

Linear transformations are characterized by the fact that they preserve the properties of a linear space described above: they preserve linearity and the structure of the vector space. Using them, geometric transformations (rotation, scaling, and, via homogeneous coordinates, translation) can be applied in signal processing, image processing, economics, physics, and other fields. By representing linear transformations as matrices, it becomes possible to compose and invert transformations, to compute eigenvalues efficiently, and to solve complex polynomial and algebraic equations; a small sketch of this matrix representation follows below.

These have applications not only in the aforementioned machine learning but also in a wide range of fields such as physics, engineering, cryptography, and computer science.
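
As a small sketch of linear transformations represented by matrices (all values invented for the example), the following rotates and scales a 2D vector with NumPy and shows that composing the two transformations is a single matrix product:

```python
# A minimal sketch: 2D linear transformations as matrices with NumPy.
import numpy as np

theta = np.pi / 4                                  # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # rotation matrix
S = np.diag([2.0, 0.5])                            # scale x by 2, y by 0.5

v = np.array([1.0, 0.0])

# Composition of transformations is a matrix product:
# applying S after R equals applying the single matrix S @ R.
assert np.allclose(S @ (R @ v), (S @ R) @ v)
print((S @ R) @ v)
```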

Libraries for linear algebra

In order to perform such linear algebra on computers, libraries for linear algebra have been developed for various programming languages. The following is a representative list, with a short usage sketch after it.

  • Python : Libraries for linear algebra in Python include “NumPy” and “SciPy” for general scientific computing, “scikit-learn” for machine learning, and “TensorFlow” and “PyTorch” for the fast matrix operations used in deep learning.
  • R : Libraries for linear algebra in R include “Matrix” for efficient matrix operations, “LinAlg” for basic linear algebra functions, “MASS” for linear algebra functions used in statistics, and “caret” and “glmnet” for linear algebra functions used in machine learning.
  • C : The C language requires low-level programming, but it is used especially for large data sets because of its fast computation. Libraries for linear algebra include “GSL (GNU Scientific Library)”, a scientific computing library; “LAPACK (Linear Algebra PACKage)”, a high-performance numerical linear algebra library; “Eigen”, a matrix computation library (strictly speaking a C++ library); and “BLAS (Basic Linear Algebra Subprograms)”, the standard low-level matrix and vector routines.
  • Java : There are many libraries for handling linear algebra efficiently. Specifically, “Apache Commons Math” provides math-related functions, while “JAMA (Java Matrix library)”, “jblas”, and “EJML (Efficient Java Matrix Library)” compute with matrices efficiently. C libraries can also be called directly via Java’s JNI interface.
  • Clojure : Clojure is a LISP that runs on the JVM, so the Java libraries mentioned above can be used without modification, and C libraries can be used via Java’s JNI interface. Furthermore, using the flexibility of the language, such as its macros, Python and R libraries can also be used, as described in “Clojure-Python Collaboration and Machine Learning” and “Statistical Learning with Clojure-R Collaboration”. Clojure-native options include “core.matrix” and “clatrix” for matrices and “incanter” for statistics and general linear algebra.
  • JavaScript : Linear algebra libraries are also available in JavaScript for web applications and graphics that run in the browser. Specifically, “math.js” is a general mathematical library, and “gl-matrix” is a matrix library for graphics.
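
As a quick usage sketch for the Python entries above, here are a few common operations from NumPy’s linalg module (the matrix and vector are invented for the example):

```python
# A minimal sketch of common linear algebra operations with NumPy.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)             # solve the linear system Ax = b
det = np.linalg.det(A)                # determinant
inv = np.linalg.inv(A)                # inverse matrix
eigvals, eigvecs = np.linalg.eig(A)   # eigenvalues and eigenvectors

print(x, det, eigvals)
```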
Reference Books

The following is a list of reference books on linear algebra.

First, “Linear Algebra in High School Mathematics” by Jun Takeuchi.

Introduction
	Chapter 1: Matrices are Tools for Equation Solving
		What does "linear" mean?
			The mathematics that deals with matrices is called "linear algebra"
			Linear
				Linear algebra
					The relationship between variables is "linear"
						Nonlinear (not linear)
							Quadratic and cubic functions can also be converted to linear form
		Crane-and-turtle problems (tsurukame-zan) in matrix form
			The world of crane-and-turtle problems (linear expressions)
				Expressions using matrices
				Calculation Rules
					Calculation Expressions
					Putting it all together
					Another way to write
						Definition
					a_ij is called an element or component
					The terms a11, a22, a33, ... are the "diagonal elements"
						The remaining terms are the "off-diagonal elements"
						The sum of the diagonal elements is the "trace"
		Solving Ordinary Equations: Elimination
		How to solve equations using the augmented matrix
			How to solve equations using matrices?
			Matrix A with the column vector b appended forms the augmented matrix
				Example
				Augmented matrix
			Basic Row Transformations
				Multiply a row by a number (or divide)
				Adding one row to another
				Adding a multiple of one row to another row
				Interchanging two rows
			Step A
				Step B
					Step C
						Step D
							Final Step
			"Basic column transformations" allowed when solving the equation using the augmented matrix
				Interchanging two columns (but not the rightmost column)
		Submatrix: a small matrix within a matrix
			Submatrix
				Submatrix: a submatrix contained within a matrix
			Identity matrix
				1's along the diagonal
				Denoted by the symbol E
		Rank of a matrix
			A number that determines whether there is exactly one solution, infinitely many solutions, or none
			Echelon (staircase) matrix
				The coefficient matrix A or the augmented matrix B can be transformed into an echelon matrix by repeating the basic row transformations
				Example
			Rank
				The "number of nonzero rows" of the echelon matrix
				Called the rank of the matrix
		Important relationship between rank and solutions (checked numerically in the sketch after this chapter's notes)
			Let A be the coefficient matrix and B the augmented matrix
			A solution exists
				rank(A) = rank(B)
			No solution exists
				rank(A) < rank(B); since the augmented matrix B is the matrix A with the column vector b appended, rank(A) > rank(B) cannot occur
		Relationship between the rank and the equation
		Is there one set of solutions? Or are there an infinite number of them?
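
The rank criterion in the Chapter 1 notes above is easy to check numerically; here is a minimal sketch with NumPy (the matrices are invented for the example):

```python
# A minimal sketch: rank of the coefficient matrix vs. the augmented matrix.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])              # singular coefficient matrix
b_consistent   = np.array([3.0, 6.0])   # lies in the column space of A
b_inconsistent = np.array([3.0, 7.0])   # does not

for b in (b_consistent, b_inconsistent):
    B = np.column_stack([A, b])         # augmented matrix [A | b]
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(B)
    print(rA, rB, "solvable" if rA == rB else "no solution")
```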
	Chapter 2: The Identity Matrix and Inverse Matrices
		Identity matrix
		Inverse matrices
		Inverse matrices and equations
		How to find the inverse matrix using row transformations
		Basic row transformations can be expressed as matrix multiplications
		Replacing basic row transformations with matrix multiplications
		Combining multiple basic row transformations into one matrix
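
Chapter 2's method of finding the inverse by row transformations can be sketched as a bare-bones Gauss-Jordan elimination in Python. This is an illustrative sketch only (no pivoting or singularity checks, and the example matrix is invented):

```python
# A minimal Gauss-Jordan sketch: row-reduce [A | I] to [I | A^{-1}].
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = A.shape[0]
M = np.hstack([A, np.eye(n)])        # augmented matrix [A | I]

for i in range(n):
    M[i] /= M[i, i]                  # scale the pivot row (assumes nonzero pivot)
    for j in range(n):
        if j != i:
            M[j] -= M[j, i] * M[i]   # eliminate this column in the other rows

A_inv = M[:, n:]                     # right half now holds the inverse
assert np.allclose(A @ A_inv, np.eye(n))
```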
	Chapter 3: Enter the Determinant
		Determinants are as important as matrices
		The determinant was discovered by a Japanese mathematician
		Young Takakazu Seki
		Sarrus's rule
		Properties of determinants
		The determinant of a product of matrices
		Mathematics books and the handing down of bequeathed problems in the Edo period
		The Sage of Arithmetic, Seki Takakazu
		What is the inverse of a regular (invertible) matrix?
		What is a cofactor matrix?
		The determinant of the transpose is the same as the original
		What is the condition for a matrix to be regular?
		Cramer's rule
		Leibniz and Cramer
		Conditions for non-trivial solutions
		An example of a non-trivial solution
		Resultants and determinants
		The resultant of quadratic equations
		Sylvester
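
Cramer's rule from Chapter 3, as a small NumPy sketch (the system is invented for the example): each unknown is a ratio of two determinants.

```python
# A minimal sketch of Cramer's rule for a 2x2 system Ax = b.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

detA = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                 # replace column i of A with b
    x[i] = np.linalg.det(Ai) / detA

assert np.allclose(A @ x, b)
print(x)                         # [1., 3.]
```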
	Chapter 4: Numerical Computation of Matrices
		Cramer's rule is rarely used in practice?
		Gaussian elimination
		Calculating Matrices with Spreadsheets
		Matrix Multiplication
		Calculating Gaussian Elimination
		Deriving Inverses Using Gaussian Elimination
		A Step in the World of Numerical Computing
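
Chapter 4's practical point, that elimination rather than Cramer's rule is the workhorse of numerical computation, can be illustrated with an LU-based solve; this sketch assumes SciPy is available and the system is invented for the example:

```python
# A minimal sketch: solving Ax = b by LU factorization (Gaussian elimination).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)           # one elimination, reusable for many right-hand sides
x = lu_solve((lu, piv), b)
assert np.allclose(A @ x, b)
print(x)                         # [1., 2.]
```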
	Chapter 5: The Curious Relationship between Space and Vectors
		Vectors and Scalars
		What does linear dependence mean in three dimensions?
		How to take another basis
		(Gram-)Schmidt orthogonalization
		Matrices that reveal whether vectors are linearly independent
		Vector spaces
		Equations and linear dependence
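
The (Gram-)Schmidt orthogonalization noted in Chapter 5, as a short classical sketch in Python (not the numerically preferred modified variant; the input vectors are invented and assumed linearly independent):

```python
# A minimal classical Gram-Schmidt sketch: orthonormalize the columns of V.
import numpy as np

def gram_schmidt(V):
    """Return a matrix whose columns are orthonormal and span the columns of V."""
    Q = np.zeros_like(V, dtype=float)
    for k in range(V.shape[1]):
        v = V[:, k].astype(float)
        for j in range(k):
            v -= (Q[:, j] @ V[:, k]) * Q[:, j]   # remove earlier directions
        Q[:, k] = v / np.linalg.norm(v)          # normalize (assumes independence)
    return Q

V = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q = gram_schmidt(V)
assert np.allclose(Q.T @ Q, np.eye(2))           # columns are orthonormal
```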
	Chapter 6: What is an Eigenvalue Problem?
		Eigenvalue Problem
		Examples of Eigenvalue Problems
		Diagonalization of Matrices
		Eigenvectors belonging to different eigenvalues are linearly independent
		When an eigenvalue of a matrix is a repeated root
		When a third-order square matrix has a repeated eigenvalue
		How to tell if diagonalization is possible
		Similar Matrices
		More interesting properties of similar matrices
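
Diagonalization, the core of Chapter 6, in a few NumPy lines (an invented matrix with distinct eigenvalues): A = P D P^(-1), with the eigenvectors as the columns of P.

```python
# A minimal sketch of diagonalizing a matrix: A = P D P^{-1}.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(eigvals)

assert np.allclose(A, P @ D @ np.linalg.inv(P))
# Powers become trivial on the diagonal: A^5 = P D^5 P^{-1}
assert np.allclose(np.linalg.matrix_power(A, 5),
                   P @ np.diag(eigvals**5) @ np.linalg.inv(P))
```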
	Chapter 7. Matrices with complex numbers
		What are complex numbers?
		How to display complex numbers in coordinates
		If two complex numbers are equal, their complex conjugates are also equal
		What is the inner product for complex vectors?
		Conjugate transpose matrix
		Hermitian matrices
		Eigenvalues of a Hermitian matrix are always real
		Eigenvectors belonging to different eigenvalues of a Hermitian matrix are orthogonal
		Hermitian matrices can be diagonalized using unitary matrices
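
The three Hermitian facts noted above (real eigenvalues, orthogonal eigenvectors, unitary diagonalization) can all be checked with NumPy's eigh routine; the example matrix is invented:

```python
# A minimal sketch: eigendecomposition of a Hermitian matrix.
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])          # H equals its conjugate transpose
assert np.allclose(H, H.conj().T)

eigvals, U = np.linalg.eigh(H)         # eigh is specialized for Hermitian input
print(eigvals)                         # eigenvalues come back real
assert np.allclose(U.conj().T @ U, np.eye(2))            # U is unitary
assert np.allclose(H, U @ np.diag(eigvals) @ U.conj().T) # unitary diagonalization
```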
	Chapter 8. Relation to Quantum Mechanics
		Matrices and Quantum Mechanics
		Schrodinger equation
		How to obtain physical quantities
		Bra-ket notation
		Examples of Solutions to the Schrodinger Equation
		Normalization Conditions
		Finding the energy of an electron
		Hermitian operators
		Eigenvalues of a Hermitian operator are real numbers
		Eigenfunctions belonging to different eigenvalues of a Hermitian operator are orthogonal
		Matrix representation of operators
		To Hermitian Matrices 

Next, “Introduction to Linear Algebra” (Gilbert Strang).

Chapter 1 Introduction to Vectors
		1.1 Vectors and Linear Combinations
			At the heart of linear algebra are two operations on vectors
				Adding vectors yields v + w
				Multiplying vectors by the numbers c and d yields cv and dw
		1.2 Length and inner product
		1.3 Matrices
	Chapter 2 Solving Linear Equations
		2.1 Vectors and linear equations
		2.2 Concept of elimination
		2.3 Elimination with Matrices
		2.4 Matrix operation rules
		2.5 Inverse matrices
		2.6 Elimination = Decomposition: A=LU
		2.7 Transposes and permutations
	Chapter 3 Vector Spaces and Subspaces
		3.1 Space of vectors
		3.2 Nullspace of A: Solving Ax=0
		3.3 Rank and the row reduced echelon form
		3.4 General solution for Ax=b
		3.5 Linear independence, basis, and dimension
		3.6 Dimensions of four subspaces
	Chapter 4 Orthogonality
		4.1 Orthogonality of four subspaces
		4.2 Projections
		4.3 Least-squares approximation
		4.4 Orthogonal bases and Gram-Schmidt method
	Chapter 5 Determinants
		5.1 Properties of determinants
		5.2 Permutations and cofactors
		5.3 Cramer's rule, inverse matrices, and volumes
	Chapter 6 Eigenvalues and Eigenvectors
		6.1 Introduction to eigenvalues
		6.2 Diagonalization of matrices
		6.3 Application to differential equations
		6.4 Symmetric matrices
		6.5 Positive definite matrices
		6.6 Similar matrices
		6.7 Singular value decomposition (SVD)
	Chapter 7. Linear transformations
		7.1 Concept of linear transformations
		7.2 Matrices of linear transformations
		7.3 Diagonalization and pseudo-inverse matrices
	Chapter 8 Applications
		8.1 Matrices in engineering
		8.2 Graphs and networks
		8.3 Markov matrices, population, and economics
		8.4 Linear programming
		8.5 Linear Algebra for Fourier Series and Functions
		8.6 Linear algebra for statistics and probability
		8.7 Computer graphics
	Chapter 9 Numerical Linear Algebra
		9.1 Gaussian elimination in practice
		9.2 Norms and condition numbers
		9.3 Iterative methods and preconditioning
	Chapter 10 Complex Vectors and Matrices
		10.1 Complex numbers
		10.2 Hermitian and unitary matrices
		10.3 Fast Fourier transforms
	Answers to key exercises
	Questions to help you review
	Glossary: Dictionary for Linear Algebra
	Matrix Decomposition
	Matlab educational program code
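
Since this book's Chapter 6 closes with the singular value decomposition, here is a minimal NumPy sketch of it (the matrix is invented for the example): any matrix factors as U S V^T with orthogonal U, V and nonnegative singular values.

```python
# A minimal sketch of the singular value decomposition A = U S V^T.
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)

S = np.diag(s)                        # singular values on the diagonal
assert np.allclose(A, U @ S @ Vt)
print(s)                              # singular values in decreasing order
```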

Finally, “Differential Equations and Linear Algebra” (Gilbert Strang).

INTRODUCTION
	Chapter 1: First-Order Ordinary Differential Equations
		1.1 Four examples: linear vs. nonlinear
		1.2 Required Calculus
		1.3 The exponentials e^t and e^(at)
		1.4 Four special solutions
		1.5 Real and complex sinusoids
		1.6 Growth and decay models
		1.7 Logistic equations
		1.8 Differential Equations in Variable Separated and Complete Forms
	Chapter 2 Second-Order Differential Equations
		2.1 Second-order derivatives in science and engineering
		2.2 Important facts about complex numbers
		2.3 Constant coefficients A, B, and C
		2.4 Forced oscillations and exponential response
		2.5 Electric circuits and mechanical systems
		2.6 Solutions of second-order equations
		2.7 Laplace transform Y(s) and F(s)
	Chapter 3 Graphical and Numerical Methods
		3.1 Nonlinear differential equation y'=f(t,y)
		3.2 Sources, sinks, saddle points, and spirals
		3.3 Linearization and stability in 2 and 3 dimensions
		3.4 Basic Euler method
		3.5 More accurate Runge-Kutta method
	Chapter 4: Linear Systems and Inverses
		4.1 Two views of simultaneous linear equations
		4.2 Solving simultaneous linear equations by elimination
		4.3 Multiplication of matrices
		4.4 Inverse matrices
		4.5 Symmetric and orthogonal matrices
	Chapter 5. Vector spaces and subspaces
		5.1 Column space of a matrix
		5.2 Nullspace of A: Solving Av=0
		5.3 General solutions for Av=b
		5.4 Linear independence, basis and dimension
		5.5 Four basic subspaces
		5.6 Graphs and networks
	Chapter 6 Eigenvalues and Eigenvectors
		6.1 Introduction to eigenvalues
		6.2 Diagonalization of matrices
		6.3 Linear differential equation y'=Ay
		6.4 Exponential function of a matrix
		6.5 Second-order ordinary differential equations and symmetric matrices
	Chapter 7 Applied Mathematics and ATA
		7.1 Least squares and projections
		7.2 Positive definite matrices and SVD
		7.3 Replacing initial conditions with boundary conditions
		7.4 Laplace equation and ATA
		7.5 Networks and the graph Laplacian
	Chapter 8 Fourier and Laplace Transforms
		8.1 Fourier Series
		8.2 The fast Fourier transform
		8.3 Heat conduction equation
		8.4 Wave equation
		8.5 Laplace transform
		8.6 Convolution (Fourier and Laplace)
	Decomposition of matrices
	Properties of determinants
	Linear Algebra at a Glance
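
Tying the two halves of this last book together, here is a minimal sketch of solving the linear system y' = Ay (Chapter 6) with the matrix exponential y(t) = e^(At) y(0); it assumes SciPy is available, and the matrix is invented for the example:

```python
# A minimal sketch: solving y' = Ay via the matrix exponential y(t) = expm(A t) y0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # harmonic oscillator in first-order form
y0 = np.array([1.0, 0.0])

for t in (0.0, np.pi / 2, np.pi):
    y = expm(A * t) @ y0             # exact solution of the linear ODE
    print(t, y)                      # traces a circle: (cos t, -sin t)
```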
