Calculus

Algorithms

Protected: Kernel functions as the basis of kernel methods in statistical mathematics theory.

Kernel functions (Gaussian kernels, polynomial kernels, linear kernels, regression functions, linear models, regression problems, discriminant problems) as the basis for kernel methods in statistical mathematics theory, used in digital transformation, artificial intelligence, and machine learning tasks.
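As a minimal sketch of the kernels named above (the function names and default parameters are my own illustrative choices, not taken from the protected article):

```python
import numpy as np

def linear_kernel(x, y):
    # k(x, y) = x^T y
    return float(np.dot(x, y))

def polynomial_kernel(x, y, degree=3, c=1.0):
    # k(x, y) = (x^T y + c)^d
    return float((np.dot(x, y) + c) ** degree)

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))
```

A Gaussian kernel always evaluates to 1 when its two arguments coincide, which is a quick sanity check for any implementation.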
Algorithms

Protected: Basics of gradient methods (line search, coordinate descent, steepest descent, and error backpropagation)

Fundamentals of gradient methods utilized in digital transformation, artificial intelligence, and machine learning tasks (line search, coordinate descent, steepest descent, error backpropagation, stochastic optimization, multilayer perceptron, AdaBoost, boosting, Wolfe condition, Zoutendijk condition, Armijo condition, backtracking methods, Goldstein condition, strong Wolfe condition)
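A minimal sketch of steepest descent with a backtracking line search under the Armijo condition (one of the conditions listed above); the function names and constants are illustrative assumptions, not the article's code:

```python
import numpy as np

def steepest_descent_armijo(f, grad, x0, c1=1e-4, beta=0.5,
                            tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # approximate stationarity
            break
        t = 1.0
        # Armijo (sufficient decrease) condition along the steepest
        # descent direction -g: f(x - t g) <= f(x) - c1 * t * ||g||^2
        while f(x - t * g) > f(x) - c1 * t * np.dot(g, g):
            t *= beta                 # backtracking: shrink the step
        x = x - t * g
    return x
```

On a smooth convex quadratic this converges to the unique minimizer.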
Algorithms

Protected: Machine Learning with Bayesian Inference – Mixture Models, Data Generation Processes, and Posterior Distributions

Mixture models, data generation processes, and posterior distributions (graphical models, Poisson distribution, Gaussian distribution, Dirichlet distribution, categorical distribution) in machine learning with Bayesian inference, used in digital transformation, artificial intelligence, and machine learning tasks
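As an illustrative sketch (not the article's code) of a mixture model's generation process and the posterior over the latent component: a two-component Poisson mixture draws a component label from a categorical distribution, then a count from that component's Poisson; Bayes' rule then gives the responsibility p(z | x):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # Poisson probability mass: lam^x e^{-lam} / x!
    return lam ** x * exp(-lam) / factorial(x)

def responsibility(x, pi, lam):
    # posterior over the latent component z given one observation x:
    # p(z=k | x) proportional to pi_k * Poisson(x; lam_k)
    w = [p * poisson_pmf(x, l) for p, l in zip(pi, lam)]
    s = sum(w)
    return [v / s for v in w]
```

An observed count near one rate or the other pushes the posterior mass toward that component, which is the basic mechanism behind the mixture-model inference the post covers.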
Python

Protected: The Application of Neural Networks to Reinforcement Learning (1) Overview

Overview of the application of neural networks to reinforcement learning utilized in digital transformation, artificial intelligence, and machine learning tasks (Agent, Epsilon-Greedy method, Trainer, Observer, Logger, Stochastic Gradient Descent (SGD), Adaptive Moment Estimation (Adam), Optimizer, Error Backpropagation, Gradient, Activation Function, Batch Method, Value Function, Strategy)
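A minimal sketch of the Epsilon-Greedy agent mentioned above, using an incremental-mean value estimate; the class and attribute names are my own illustrative assumptions:

```python
import random

class EpsilonGreedyAgent:
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.q = [0.0] * n_actions   # estimated action values
        self.n = [0] * n_actions     # visit counts

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))              # explore
        return max(range(len(self.q)), key=lambda a: self.q[a])  # exploit

    def update(self, action, reward):
        # incremental mean: q += (r - q) / n
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]
```

With epsilon set to 0 the agent is purely greedy, which makes the exploit path easy to verify in isolation.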
Algorithms

Machine Learning by Ensemble Methods – Fundamentals and Algorithms Reading Notes

Fundamentals and algorithms in machine learning with ensemble methods used in digital transformation, artificial intelligence, and machine learning tasks (class-imbalanced learning, cost-sensitive learning, active learning, semi-supervised learning, similarity-based methods, clustering ensemble methods, graph-based methods, relabeling-based methods, transformation-based methods, clustering, optimization-based pruning, ensemble pruning, combination methods, bagging, boosting)
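A minimal sketch of bagging (bootstrap aggregating), one of the combination methods listed above, using a toy 1-nearest-neighbour base learner; all names here are illustrative assumptions, not from the book:

```python
import random
from collections import Counter

def one_nn(sample):
    # toy base learner: 1-nearest neighbour on (feature, label) pairs
    def predict(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return predict

def bagging_predict(train, learn, x, n_estimators=25, seed=0):
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        # bootstrap: resample the training set with replacement
        boot = [train[rng.randrange(len(train))] for _ in range(len(train))]
        votes.append(learn(boot)(x))
    return Counter(votes).most_common(1)[0][0]  # majority vote
```

Each base learner sees a perturbed bootstrap sample; the majority vote averages out their individual variance, which is the core idea of bagging.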
Algorithms

Protected: Information Geometry of Positive Definite Matrices (3) Calculation Procedure and Curvature

Calculation procedures and curvature of positive definite matrices in information geometry, utilized in digital transformation, artificial intelligence, and machine learning tasks
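As one concrete calculation procedure from this geometry (my own sketch, not the article's): the affine-invariant geodesic distance between positive definite matrices A and B is the Frobenius norm of log(A^{-1/2} B A^{-1/2}), computable via eigendecomposition:

```python
import numpy as np

def pd_geodesic_distance(A, B):
    # affine-invariant distance: || log(A^{-1/2} B A^{-1/2}) ||_F
    w, V = np.linalg.eigh(A)                       # A = V diag(w) V^T
    A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T      # A^{-1/2}
    M = A_inv_sqrt @ B @ A_inv_sqrt                # symmetric PD
    ev = np.linalg.eigvalsh(M)
    return float(np.sqrt(np.sum(np.log(ev) ** 2)))
```

The distance from any PD matrix to itself is 0, and between the identity and diag(1, 4) it reduces to log 4.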
Algorithms

Protected: Policies for Stochastic Bandit Problems – Likelihood-Based Policies (UCB and MED policies)

Likelihood-based UCB and MED policies for stochastic bandit problems (Indexed Minimum Empirical Divergence (IMED) policy, KL-UCB policy, DMED policy, regret upper bound, Bernoulli distribution, large deviation principle, Deterministic Minimum Empirical Divergence policy, Newton's method, KL divergence, Pinsker's inequality, Hoeffding's inequality, Chernoff-Hoeffding inequality, Upper Confidence Bound)
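As a minimal sketch of the Upper Confidence Bound idea behind these policies (this is the classic UCB1 index, a simpler relative of the KL-UCB policy the post covers; names are my own):

```python
import math

def ucb1_select(counts, sums, t):
    # play every arm once before using the index
    for a, n in enumerate(counts):
        if n == 0:
            return a
    # UCB1 index: empirical mean + sqrt(2 ln t / n_a);
    # the bonus term is an optimism-under-uncertainty confidence radius
    return max(range(len(counts)),
               key=lambda a: sums[a] / counts[a]
                             + math.sqrt(2.0 * math.log(t) / counts[a]))
```

Arms with few pulls get a large confidence bonus, so the policy keeps exploring them until their empirical means are trustworthy.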
Algorithms

Protected: Overview of Discriminant Adaptive Losses in Statistical Mathematics Theory

Overview of discriminant adaptive losses in statistical mathematics theory (ramp loss, convex margin losses, nonconvex Φ-margin losses, discriminant adaptivity, robust support vector machines, discriminant adaptivity theorems, L2-support vector machines, squared hinge loss, logistic loss, hinge loss, boosting, exponential loss, discriminant adaptivity theorem for convex margin losses, Bayes rules, prediction Φ-loss, prediction discriminant error, monotonically nonincreasing convex function, empirical Φ-loss, empirical discriminant error)
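The margin losses listed above are all functions of the margin m = y f(x); as an illustrative sketch (my own definitions, with the ramp loss taken as the hinge loss clipped at 1, one common convention):

```python
import math

# margin losses Phi(m), evaluated at the margin m = y * f(x)
def hinge(m):         return max(0.0, 1.0 - m)
def squared_hinge(m): return max(0.0, 1.0 - m) ** 2      # L2-SVM loss
def logistic(m):      return math.log(1.0 + math.exp(-m))
def exponential(m):   return math.exp(-m)                # boosting loss
def ramp(m):          return min(1.0, hinge(m))          # clipped hinge, nonconvex
```

The convex losses (hinge, squared hinge, logistic, exponential) are monotonically nonincreasing in the margin; the ramp loss sacrifices convexity for robustness to outliers, which is why it appears alongside robust support vector machines.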
Algorithms

Protected: Online Stochastic Optimization and Stochastic Gradient Descent for Machine Learning

Stochastic optimization and stochastic gradient descent methods for machine learning, used in digital transformation (DX), artificial intelligence (AI), and machine learning (ML) tasks
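A minimal sketch of online stochastic gradient descent on a 1-D least-squares problem, updating from one streamed sample at a time (the function name, learning rate, and toy data are illustrative assumptions):

```python
import random

def sgd_linear_fit(stream, lr=0.05):
    # online SGD: minimize (w*x + b - y)^2 one sample at a time
    w, b = 0.0, 0.0
    for x, y in stream:
        err = (w * x + b) - y
        w -= lr * err * x   # stochastic gradient w.r.t. w
        b -= lr * err       # stochastic gradient w.r.t. b
    return w, b

rng = random.Random(0)
# noiseless stream drawn from y = 2x + 1
stream = [(x, 2.0 * x + 1.0)
          for x in (rng.uniform(-1.0, 1.0) for _ in range(5000))]
w, b = sgd_linear_fit(stream)
```

Because each update uses only the current sample, the same loop works whether the data arrives as a fixed batch or as an unbounded online stream.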
Algorithms

Protected: Optimality conditions and algorithm stopping conditions in machine learning

Optimality conditions and algorithm stopping conditions in machine learning used in digital transformation, artificial intelligence, and machine learning tasks (scaling, influence, machine epsilon, algorithm stopping conditions, iterative methods, convex optimal solutions, constrained optimization problems, global optimal solutions, local optimal solutions, convex functions, second-order sufficient conditions, second-order necessary conditions, first-order necessary conditions)
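As an illustrative sketch of how such stopping conditions combine in an iterative method (the thresholds and the relative machine-epsilon test are my own assumed choices):

```python
import sys

def should_stop(grad_norm, f_prev, f_curr, iteration,
                grad_tol=1e-6, max_iter=1000):
    # approximate first-order necessary condition: gradient near zero
    if grad_norm < grad_tol:
        return True
    # progress stalled at the level of machine epsilon (relative test)
    eps = sys.float_info.epsilon
    if abs(f_curr - f_prev) <= eps * (abs(f_prev) + 1.0):
        return True
    # safety cap on the iteration count
    return iteration >= max_iter
```

The relative form of the stall test matters: comparing the raw decrease against an absolute tolerance would behave differently on badly scaled objectives, which is why scaling appears in the list above.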