線形代数:Linear Algebra

アルゴリズム:Algorithms

Protected: Information Geometry of Positive Definite Matrices (2) From Gaussian Graphical Models to Convex Optimization

Information geometry of positive definite matrices utilized in digital transformation, artificial intelligence, and machine learning tasks: from Gaussian graphical models to convex optimization (chordal graphs, triangulated graphs, dual coordinates, Pythagorean theorem, information geometry, geodesics, sample variance-covariance matrix, maximum likelihood estimation, divergence, knot space, Riemannian metric, multivariate Gaussian distribution, Kullback-Leibler divergence, dual connections, Euclidean geometry, strictly convex functions, free energy)
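The Kullback-Leibler divergence between multivariate Gaussian distributions, central to this entry, has a closed form. A minimal sketch (the function name and the example covariances are my own illustration, not taken from the protected article):

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) between multivariate Gaussians in closed form."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

S = np.array([[2.0, 0.3], [0.3, 1.0]])
kl_self = gaussian_kl(np.zeros(2), S, np.zeros(2), S)         # identical distributions
kl_other = gaussian_kl(np.zeros(2), S, np.ones(2), np.eye(2))
```

The divergence vanishes only when the two distributions coincide, which is the asymmetric "distance" that the dually flat geometry of Gaussian models is built on.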
アルゴリズム:Algorithms

Protected: Policies for Stochastic Bandit Problems - Theoretical Limits and the ε-Greedy Method

Theoretical limits, the ε-greedy method, the UCB method, regret lower bounds for consistent policies, and the KL divergence as approaches to stochastic bandit problems utilized in digital transformation, artificial intelligence, and machine learning tasks
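As an illustrative sketch of the ε-greedy method discussed in this entry (arm means, the Gaussian reward model, and all parameters are my own assumptions, not from the article):

```python
import random

def eps_greedy_bandit(means, n_rounds=5000, eps=0.1, noise_sd=0.1, seed=0):
    """epsilon-greedy for a stochastic bandit with Gaussian rewards:
    explore uniformly with probability eps, otherwise pull the empirically best arm."""
    rng = random.Random(seed)
    counts = [0] * len(means)
    values = [0.0] * len(means)
    for _ in range(n_rounds):
        if rng.random() < eps:
            a = rng.randrange(len(means))          # explore
        else:
            a = max(range(len(means)), key=lambda i: values[i])  # exploit
        reward = means[a] + rng.gauss(0.0, noise_sd)
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean update
    return counts

counts = eps_greedy_bandit([0.2, 0.5, 0.8])
```

After enough rounds the best arm dominates the pull counts, while the constant ε keeps forcing a linear amount of exploration, which is exactly why the regret lower bounds discussed in the article favor schedules like UCB.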
微分積分:Calculus

Protected: Complexity of Hypothesis Sets in Statistical Mathematics Theory

Complexity of hypothesis sets in statistical mathematical theory used in digital transformation, artificial intelligence, and machine learning tasks (Rademacher complexity, VC dimension, growth function, uniform law of large numbers, decision stumps, sets of linear discriminators, sets of linear functions, the Cauchy-Schwarz inequality, Jensen's inequality, Massart's lemma, Talagrand's lemma, empirical Rademacher complexity, Sauer's lemma, Radon's theorem)
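The empirical Rademacher complexity mentioned in this entry can be estimated by Monte Carlo for a finite hypothesis set. A minimal sketch (the setup with two constant hypotheses is my own toy example, not from the article):

```python
import numpy as np

def empirical_rademacher(H, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    E_sigma[ sup_h (1/n) sum_i sigma_i h(x_i) ] for a finite hypothesis set.
    H is an (m, n) array: m hypotheses evaluated on a fixed sample of n points."""
    rng = np.random.default_rng(seed)
    m, n = H.shape
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # random sign vectors
    sup_vals = (sigma @ H.T / n).max(axis=1)            # sup over hypotheses per draw
    return sup_vals.mean()

# two constant hypotheses (+1 and -1) on a sample of 50 points:
# the complexity reduces to E|mean(sigma)|, roughly sqrt(2 / (pi * n))
H = np.vstack([np.ones(50), -np.ones(50)])
r = empirical_rademacher(H)
```

A richer hypothesis class produces a larger value, which is the quantity that plugs into the uniform-law generalization bounds via Massart's lemma.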
アルゴリズム:Algorithms

Protected: Stochastic Optimization and Online Optimization Overview

Stochastic optimization and online optimization used in digital transformation, artificial intelligence, and machine learning tasks (expected error, regret, minimax optimality, strongly convex loss functions, stochastic gradient descent, the stochastic dual averaging method, AdaGrad, online stochastic optimization, batch stochastic optimization)
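A minimal sketch of the AdaGrad update covered in this entry, which divides each coordinate's step by the root of its accumulated squared gradients (the badly scaled quadratic objective and all parameters are my own illustrative choices):

```python
import numpy as np

def adagrad(grad, x0, lr=0.5, steps=200, eps=1e-8):
    """AdaGrad: per-coordinate step sizes adapted from accumulated squared gradients."""
    x = np.array(x0, dtype=float)
    g2 = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        g2 += g * g
        x -= lr * g / (np.sqrt(g2) + eps)
    return x

# minimize a badly scaled quadratic f(x) = sum_i s_i * (x_i - c_i)^2
c = np.array([3.0, -1.0])
s = np.array([10.0, 0.1])
grad_f = lambda x: 2.0 * s * (x - c)
x_star = adagrad(grad_f, [0.0, 0.0])
```

The per-coordinate normalization lets both the steep and the flat direction make progress with a single learning rate, the property that makes AdaGrad attractive for the online settings the article surveys.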
アルゴリズム:Algorithms

Protected: Unconstrained Optimization for Continuous Optimization in Machine Learning

Unconstrained optimization for continuous optimization in machine learning, used in digital transformation, artificial intelligence, and machine learning tasks (machine epsilon, stopping conditions without scaling, stopping conditions with scaling, Taylor's theorem, stopping conditions for optimization algorithms, the Hessian matrix)
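The scaled stopping condition and machine epsilon mentioned in this entry can be combined in a few lines. A sketch under my own assumptions (1-D gradient descent, tolerance set to the square root of machine epsilon; none of this is from the protected article):

```python
import math
import sys

def minimize_1d(grad, x0, lr=0.2, max_iter=10000):
    """Gradient descent with a scaled stopping condition:
    stop when |grad(x)| <= tol * max(1, |x|), tol tied to machine epsilon."""
    tol = math.sqrt(sys.float_info.epsilon)
    x = x0
    for i in range(max_iter):
        g = grad(x)
        if abs(g) <= tol * max(1.0, abs(x)):   # scaling guards against |x| >> 1
            return x, i
        x -= lr * g
    return x, max_iter

# f(x) = (x - 2)^2, so grad f(x) = 2 (x - 2)
x_min, iters = minimize_1d(lambda x: 2.0 * (x - 2.0), 10.0)
```

Without the `max(1, |x|)` factor, an unscaled tolerance either stops too early for large iterates or never triggers for tiny ones, which is the pitfall the article's stopping-condition discussion addresses.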
アルゴリズム:Algorithms

Protected: Unsupervised Learning with Gaussian Processes (1) Overview and Algorithm of Gaussian Process Latent Variable Models

Overview and algorithms of unsupervised learning using Gaussian process latent variable models (GPLVM), an application of probabilistic generative models used in digital transformation, artificial intelligence, and machine learning (Bayesian Gaussian process latent variable models, Bayesian GPLVM)
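The GPLVM objective is the GP log marginal likelihood of the observations, maximized over the latent positions. A minimal sketch that only evaluates the objective (kernel choice, data, and hyperparameters are my own assumptions, not from the article):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """RBF (Gaussian) kernel matrix over latent points X of shape (n, q)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def gplvm_objective(Y, X, noise=0.1):
    """GP log marginal likelihood of observed Y (n, d) given latents X (n, q);
    GPLVM maximizes this over X (here we only evaluate it)."""
    n, d = Y.shape
    K = rbf_kernel(X) + noise * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    Kinv = np.linalg.inv(K)
    return -0.5 * (d * logdet + np.trace(Y @ Y.T @ Kinv) + n * d * np.log(2 * np.pi))

X = np.linspace(-2.0, 2.0, 10).reshape(-1, 1)   # 1-D latent positions
Y = np.hstack([np.sin(X), np.cos(X)])           # 2-D observations
ll = gplvm_objective(Y, X)
```

In a full GPLVM this scalar would be fed to a gradient-based optimizer over `X` (and the kernel hyperparameters); the Bayesian GPLVM of the article instead integrates over `X` variationally.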
アルゴリズム:Algorithms

Protected: Implementation of model-free reinforcement learning in Python (2) Monte Carlo and TD methods

Python implementations of model-free reinforcement learning methods such as the Monte Carlo and TD methods (Q-learning, value-based methods, Monte Carlo methods, neural networks, the ε-greedy method, the TD(λ) method, multi-step learning, Rainbow, A3C/A2C, DDPG, APE-X DQN)
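A minimal sketch of the tabular Q-learning (TD) update listed in this entry, on a toy deterministic chain environment of my own invention (not the environment used in the protected article):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy on a deterministic chain:
    actions 0 (left) / 1 (right); reward 1 on reaching the rightmost state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # ties default to "right"
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # TD(0) target
            s = s2
            if r > 0:
                break
    return Q

Q = q_learning_chain()
```

The bootstrapped `r + gamma * max(Q[s2])` target is what distinguishes TD methods from Monte Carlo methods, which would wait for the full episode return before updating.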
グラフ理論:Graph Theory

Protected: Information Geometry of Positive Definite Matrices (1) Introduction of dual geometric structure

Introduction of dual geometric structures as information geometry for positive definite matrices utilized in digital transformation, artificial intelligence, and machine learning tasks (Riemannian metric, tangent vector space, semidefinite programming problems, self-equilibrium, the Levi-Civita connection, Riemannian geometry, geodesics, Euclidean geometry, ∇-geodesics, tangent vectors, tensor quantities, dual flatness, the set of positive definite matrices)
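The geodesics on the set of positive definite matrices mentioned in this entry have a closed form under the affine-invariant Riemannian metric. A sketch via eigendecomposition (the example matrices are my own; the article may use different coordinates or connections):

```python
import numpy as np

def sqrtm_pd(M):
    """Symmetric square root of a positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def powm_pd(M, t):
    """Matrix power M**t for symmetric positive definite M."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** t) @ V.T

def pd_geodesic(A, B, t):
    """Point at time t on the geodesic from A to B in the positive definite cone
    under the affine-invariant Riemannian metric:
    gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah = sqrtm_pd(A)
    Ahi = np.linalg.inv(Ah)
    return Ah @ powm_pd(Ahi @ B @ Ahi, t) @ Ah

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])
mid = pd_geodesic(A, B, 0.5)  # the Riemannian (geometric) mean of A and B
```

Unlike the straight line (1-t)A + tB of Euclidean geometry, this curve stays inside the cone for all t and is invariant under congruence transformations, which is the structural point of the dual geometry.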
バンディッド問題:Bandit Problems

Protected: Fundamentals of Stochastic Bandit Problems

Basics of stochastic bandit problems utilized in digital transformation, artificial intelligence, and machine learning tasks (the large deviation principle with examples for the Bernoulli distribution, the Chernoff-Hoeffding inequality, Sanov's theorem, Hoeffding's inequality, Kullback-Leibler divergence, probability mass functions, tail probabilities, probability approximation by the central limit theorem).
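Hoeffding's inequality bounds the tail probability of a sample mean, and the bound is easy to check by simulation. A sketch with my own choice of parameters (n, p, and ε are not from the article):

```python
import math
import random

def tail_vs_hoeffding(n=200, p=0.5, eps=0.1, trials=20000, seed=0):
    """Empirical tail probability P(sample mean - p >= eps) for n Bernoulli(p) draws,
    compared with the Hoeffding bound exp(-2 n eps^2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if mean - p >= eps:
            hits += 1
    return hits / trials, math.exp(-2.0 * n * eps * eps)

emp, bound = tail_vs_hoeffding()
```

The empirical frequency sits below exp(-2nε²), and the gap to the true tail is what the sharper Chernoff-Hoeffding (KL-based) bound closes in the large-deviation analysis of bandit algorithms.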
微分積分:Calculus

Protected: Fundamentals of Convex Analysis as a Basis for Continuous Optimization in Machine Learning

Basics of convex analysis as a foundation for continuous optimization utilized in digital transformation, artificial intelligence, and machine learning tasks (subgradients, subdifferentials, conjugate functions, closed proper convex functions, strongly convex functions, upper and lower bounds on function values, the Hessian matrix, epigraphs, Taylor's theorem, relative interior, affine hulls, continuity, convex envelopes, convex functions, convex sets)
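The subgradient mentioned in this entry generalizes the gradient to non-differentiable convex functions, and the resulting subgradient method is a few lines. A sketch on f(x) = |x| with my own step-size schedule (not from the article):

```python
def subgradient_descent_abs(x0=5.0, steps=200):
    """Subgradient method on f(x) = |x| with diminishing steps 1/sqrt(k).
    f is not differentiable at 0; any g in [-1, 1] is a subgradient there."""
    x = x0
    best = abs(x)
    for k in range(1, steps + 1):
        g = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # 0 is a valid subgradient at x = 0
        x -= g / k ** 0.5
        best = min(best, abs(x))  # track best value: subgradient steps need not descend
    return best

best = subgradient_descent_abs()
```

Because a subgradient step can increase the objective, convergence is stated for the best iterate under a diminishing step size, one of the basic facts this convex-analysis material supports.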