幾何学:Geometry

アルゴリズム:Algorithms

Protected: Gauss-Newton and natural gradient methods as continuous optimization for machine learning

Gauss-Newton and natural gradient methods as continuous optimization for machine learning, utilized in digital transformation, artificial intelligence, and machine learning tasks (Sherman-Morrison formula, rank-one update, Fisher information matrix, regularity conditions, estimation error, online learning, natural gradient method, Newton's method, search direction, steepest descent method, statistical asymptotic theory, parameter space, geometric structure, Hessian matrix, positive definiteness, Hellinger distance, Schwarz inequality, Euclidean distance, statistics, Levenberg-Marquardt method, Gauss-Newton method, Wolfe conditions)
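As a rough illustration of the topic of this protected post (not its actual content), the following minimal sketch applies a natural gradient step to logistic regression on synthetic data: the Fisher information matrix, rather than the raw gradient direction, defines the search direction.

```python
# Minimal sketch, assuming synthetic logistic-regression data: a natural gradient
# update in which the Fisher information matrix plays the role of the metric.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

w = np.zeros(3)
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ w))                # predicted probabilities
    grad = X.T @ (p - y) / len(y)                   # gradient of the negative log-likelihood
    fisher = (X.T * (p * (1 - p))) @ X / len(y)     # Fisher information matrix
    # natural gradient direction F^{-1} grad (small ridge term keeps F invertible)
    w -= np.linalg.solve(fisher + 1e-8 * np.eye(3), grad)

print("estimated parameters:", w)
```

For logistic regression the Fisher information coincides with the Hessian of the negative log-likelihood, so this step also matches a Newton step; for general models the two differ.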
アルゴリズム:Algorithms

Protected: Application of Neural Networks to Reinforcement Learning – Value Function Approximation, which implements value evaluation as a parameterized function.

Application of neural networks to reinforcement learning, used for digital transformation, artificial intelligence, and machine learning tasks: examples of implementing value evaluation as a parameterized function (CartPole, Q-table, TD error, parameter update, Q-Learning, MLPRegressor, Python)
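As a hedged sketch of the idea behind this entry (a toy 5-state chain MDP stands in for CartPole, which the article itself uses), the Q-table is replaced by scikit-learn's MLPRegressor and the TD target drives the parameter update.

```python
# Minimal sketch, assuming a toy chain MDP: Q-learning with an MLPRegressor
# as the value function approximator, updated from the TD target.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_states, n_actions, gamma = 5, 2, 0.9
q_net = MLPRegressor(hidden_layer_sizes=(16,))

def features(s, a):
    """One-hot encoding of a (state, action) pair."""
    x = np.zeros(n_states * n_actions)
    x[s * n_actions + a] = 1.0
    return x

q_net.partial_fit([features(0, 0)], [0.0])   # initialize the network

rng = np.random.default_rng(0)
s = 0
for step in range(2000):
    # epsilon-greedy action selection over the approximated Q-values
    if rng.random() < 0.1:
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax([q_net.predict([features(s, b)])[0] for b in range(n_actions)]))
    # chain dynamics: action 1 moves right, action 0 resets; reward at the end
    s_next = min(s + 1, n_states - 1) if a == 1 else 0
    r = 1.0 if s_next == n_states - 1 else 0.0
    target = r + gamma * max(q_net.predict([features(s_next, b)])[0] for b in range(n_actions))
    q_net.partial_fit([features(s, a)], [target])   # TD-target-driven parameter update
    s = 0 if s_next == n_states - 1 else s_next

print([q_net.predict([features(s_, 1)])[0] for s_ in range(n_states)])
```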
アルゴリズム:Algorithms

Protected: Batch Stochastic Optimization – Stochastic Dual Coordinate Descent

Stochastic dual coordinate descent algorithms as batch-type stochastic optimization utilized in digital transformation, artificial intelligence, and machine learning tasks (Nesterov's acceleration method, SDCA, mini-batch, computation time, batch proximal gradient method, optimal solution, operator norm, maximum eigenvalue, Fenchel duality theorem, primal problem, dual problem, proximal mapping, smoothed hinge loss, online stochastic optimization, elastic net regularization, ridge regularization, logistic loss, block coordinate descent method, batch-type stochastic optimization)
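As a rough sketch of the SDCA idea (using ridge-regularized squared loss on synthetic data, not the article's smoothed hinge loss setting), one dual coordinate is updated per step with its closed-form maximizer while the primal vector is kept consistent with the dual variables.

```python
# Minimal sketch, assuming ridge-regularized squared loss and synthetic data:
# stochastic dual coordinate ascent with a closed-form per-coordinate update.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 100, 5, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

alpha = np.zeros(n)          # dual variables, one per example
w = np.zeros(d)              # primal vector, w = X^T alpha / (lam * n)
for _ in range(20 * n):      # several passes over the data
    i = rng.integers(n)
    # closed-form dual update for the squared loss (1/2)(x_i.w - y_i)^2
    delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
    alpha[i] += delta
    w += delta * X[i] / (lam * n)

print("primal objective:", 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w)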
アルゴリズム:Algorithms

Overview of meta-heuristics and reference books

Overview: Meta-heuristics are algorithms used to solve optimization problems. An optimization problem is on...
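As one concrete instance of the meta-heuristics surveyed in this post (a sketch, not taken from the post itself), simulated annealing accepts a random neighbour either when it improves the objective or, with a temperature-controlled probability, when it does not, which lets the search escape local minima.

```python
# Minimal sketch of simulated annealing on a toy multimodal function.
import math
import random

def objective(x):
    return x * x + 10 * math.sin(x)   # toy objective with several local minima

random.seed(0)
x, temp = 10.0, 5.0
for step in range(2000):
    candidate = x + random.uniform(-1.0, 1.0)
    delta = objective(candidate) - objective(x)
    # accept improvements always, worsening moves with probability exp(-delta/temp)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.999                      # cooling schedule

print("approximate minimizer:", x, "objective:", objective(x))
```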
アルゴリズム:Algorithms

Protected: Big Data and Bayesian Learning – The Importance of Small Data Learning

Big Data and Bayesian Learning for Digital Transformation (DX), Artificial Intelligence (AI), and Machine Learning (ML) Tasks - Importance of Small Data Learning
アルゴリズム:Algorithms

Geometric approach to data

Geometric approaches to data utilized in digital transformation, artificial intelligence, and machine learning tasks (physics, quantum information, online prediction, Bregman divergence, Fisher information matrix, Bethe free energy function, Gaussian graphical models, semidefinite programming problems, positive definite symmetric matrices, probability distributions, dual problems, topology, soft geometry, quantum information geometry, Wasserstein geometry, Ruppeiner geometry, statistical geometry)
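As a small illustration of one object named in this entry (a sketch under my own choice of generator functions, not from the post), the Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q> reduces to half the squared Euclidean distance for F(x) = ||x||^2 / 2 and to the KL divergence for the negative entropy, which is the bridge to information geometry.

```python
# Minimal sketch of the Bregman divergence with two standard generators.
import numpy as np

def bregman(F, grad_F, p, q):
    return F(p) - F(q) - grad_F(q) @ (p - q)

sq = lambda x: 0.5 * x @ x
grad_sq = lambda x: x
neg_entropy = lambda x: np.sum(x * np.log(x))
grad_neg_entropy = lambda x: np.log(x) + 1.0

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
print(bregman(sq, grad_sq, p, q))                    # 0.5 * ||p - q||^2
print(bregman(neg_entropy, grad_neg_entropy, p, q))  # KL(p || q) for distributions
```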
アルゴリズム:Algorithms

Topological handling of data using topological data analysis

Topological handling of data using topological data analysis, utilized for digital transformation, artificial intelligence, and machine learning tasks (application to character recognition, application to clustering, R, TDA, barcode plots, persistence plots, Python, scikit-tda, Death - Birth, analysis of noisy data, alpha complex, Vietoris-Rips complex, Čech complex, topological data analysis, protein analysis, sensor data analysis, natural language processing, soft geometry, hard geometry, information geometry, Euclidean spaces)
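As a small worked example of the scikit-tda workflow mentioned here (a sketch assuming the ripser and persim packages from the scikit-tda suite are installed), noisy points sampled from a circle yield one long-lived 1-dimensional feature whose Death - Birth span stands out in the persistence plot.

```python
# Minimal sketch: Vietoris-Rips persistent homology of a noisy circle.
import numpy as np
from ripser import ripser          # persistent homology computation
from persim import plot_diagrams   # persistence-diagram plotting

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

diagrams = ripser(points)['dgms']  # [H0 diagram, H1 diagram]
plot_diagrams(diagrams, show=True)
```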
アルゴリズム:Algorithms

Protected: Information Geometry of Positive Definite Matrices (3) Calculation Procedure and Curvature

Calculation procedures and curvature in the information geometry of positive definite matrices, utilized in digital transformation, artificial intelligence, and machine learning tasks
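As a hedged sketch related to this entry (two standard computations on the positive definite cone, not the article's own procedure), the affine-invariant geodesic between SPD matrices is gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}, with geodesic distance ||log(A^{-1/2} B A^{-1/2})||_F.

```python
# Minimal sketch: geodesic and geodesic distance on the manifold of
# positive definite matrices under the affine-invariant metric.
import numpy as np
from scipy.linalg import sqrtm, logm, fractional_matrix_power, inv

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])

A_half = sqrtm(A)
A_half_inv = inv(A_half)
M = A_half_inv @ B @ A_half_inv            # whitened version of B

def geodesic(t):
    """Point at parameter t on the geodesic from A (t=0) to B (t=1)."""
    return A_half @ fractional_matrix_power(M, t) @ A_half

distance = np.linalg.norm(logm(M), 'fro')  # affine-invariant geodesic distance
print("midpoint:\n", np.real(geodesic(0.5)))
print("distance:", float(np.real(distance)))
```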
アルゴリズム:Algorithms

Protected: Policies for Stochastic Bandit Problems – Likelihood-based policies (UCB and MED policies)

Policies for stochastic bandit problems: likelihood-based UCB and MED policies (Indexed Minimum Empirical Divergence policy, KL-UCB policy, DMED policy, regret upper bound, Bernoulli distribution, large deviation principle, Deterministic Minimum Empirical Divergence policy, Newton's method, KL divergence, Pinsker's inequality, Hoeffding's inequality, Chernoff-Hoeffding inequality, Upper Confidence Bound)
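As a small sketch of the simplest policy in this family (basic UCB1 on synthetic Bernoulli arms, not the article's KL-UCB or DMED derivation), each arm's index is its empirical mean plus a confidence bonus, and the arm with the largest upper confidence bound is pulled.

```python
# Minimal sketch: UCB1 policy for a 3-armed Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7]
counts = np.zeros(3)
sums = np.zeros(3)

for t in range(1, 5001):
    if t <= 3:
        arm = t - 1                                     # pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_means[arm])      # Bernoulli reward
    counts[arm] += 1
    sums[arm] += reward

print("pull counts per arm:", counts)                   # most pulls go to the best arm
```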
アルゴリズム:Algorithms

Protected: Information Geometry of Positive Definite Matrices (2) From Gaussian Graphical Models to Convex Optimization

Information geometry of positive definite matrices utilized in digital transformation, artificial intelligence, and machine learning tasks: from Gaussian graphical models to convex optimization (chordal graphs, triangulated graphs, dual coordinates, Pythagorean theorem, information geometry, geodesics, sample variance-covariance matrix, maximum likelihood estimation, divergence, knot space, Riemannian metric, multivariate Gaussian distribution, Kullback-Leibler information measure, dual connection, Euclidean geometry, strictly convex functions, free energy)
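As a hedged illustration of the Gaussian-graphical-model-to-convex-optimization connection (using scikit-learn's GraphicalLasso rather than the article's own procedure), the penalized maximum likelihood estimate of the precision matrix is computed from the sample covariance, and zeros in the estimated precision matrix correspond to missing edges in the graph.

```python
# Minimal sketch: penalized maximum likelihood estimation of a Gaussian
# graphical model's precision matrix as a convex optimization problem.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# true precision matrix of a 3-node chain graph (no direct edge between nodes 0 and 2)
precision = np.array([[2.0, 0.6, 0.0],
                      [0.6, 2.0, 0.6],
                      [0.0, 0.6, 2.0]])
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(3), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
print("estimated precision matrix:\n", np.round(model.precision_, 2))
```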