Linear Algebra

Algorithms

Protected: Basics of Gradient Methods (Line Search, Coordinate Descent, Steepest Descent, and Error Backpropagation)

Fundamentals of gradient methods used in digital transformation, artificial intelligence, and machine learning tasks (line search, coordinate descent, steepest descent, error backpropagation, stochastic optimization, multilayer perceptron, AdaBoost, boosting, Wolfe condition, Zoutendijk condition, Armijo condition, backtracking methods, Goldstein condition, strong Wolfe condition)
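As a concrete illustration of the Armijo condition and backtracking named above, here is a minimal sketch of a backtracking line search driving steepest descent. The function, step parameters, and test problem are illustrative assumptions, not taken from the post itself.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Backtracking: shrink the step alpha until the Armijo (sufficient
    decrease) condition f(x + a*d) <= f(x) + c*a*grad(x)^T d holds."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ d  # directional derivative; negative for a descent direction
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Steepest descent on the toy objective f(x) = x^T x (assumed example)
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([3.0, -2.0])
for _ in range(50):
    d = -grad(x)
    x = x + backtracking_line_search(f, grad, x, d) * d
```

With this quadratic the Armijo test first accepts alpha = 0.5, which is the exact minimizing step, so the iterates reach the optimum immediately.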
Algorithms

Protected: Machine Learning with Bayesian Inference – Mixture Models, Data Generation Process and Posterior Distribution

Mixture models, data generation processes, and posterior distributions (graphical models, Poisson distribution, Gaussian distribution, Dirichlet distribution, categorical distribution) in machine learning with Bayesian inference, as used in digital transformation, artificial intelligence, and machine learning tasks
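The data generation process of such a mixture model can be sketched as ancestral sampling: draw mixture weights from a Dirichlet prior, a latent assignment from a categorical distribution, then an observation from the assigned component. The component count, rates, and hyperparameters below are assumed for illustration, using a Poisson mixture as in the keyword list.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 3, 1000
alpha = np.ones(K)                  # Dirichlet hyperparameter (assumed symmetric)
pi = rng.dirichlet(alpha)           # mixture weights ~ Dirichlet(alpha)
lam = np.array([2.0, 10.0, 30.0])   # per-component Poisson rates (assumed)

z = rng.choice(K, size=N, p=pi)     # latent assignments ~ Categorical(pi)
x = rng.poisson(lam[z])             # observations ~ Poisson(lambda_{z_n})
```

Posterior inference then inverts this process, computing p(z, pi, lambda | x) from the sampled observations.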
Algorithms

Machine Learning by Ensemble Methods – Fundamentals and Algorithms Reading Notes

Fundamentals and algorithms in machine learning with ensemble methods, used in digital transformation, artificial intelligence, and machine learning tasks (class-imbalance learning, cost-sensitive learning, active learning, semi-supervised learning, similarity-based methods, clustering ensemble methods, graph-based methods, relabeling-based methods, transformation-based methods, clustering, optimization-based pruning, ensemble pruning, combination methods, bagging, boosting)
Algorithms

Protected: Information Geometry of Positive Definite Matrices (3) Calculation Procedure and Curvature

Computation procedures and curvature in the information geometry of positive definite matrices, utilized in digital transformation, artificial intelligence, and machine learning tasks
Algorithms

Protected: Policies for Stochastic Bandit Problems: Likelihood-Based Policies (UCB and MED Policies)

Likelihood-based UCB and MED policies for stochastic bandit problems (Indexed Minimum Empirical Divergence policy, KL-UCB policy, DMED policy, regret upper bound, Bernoulli distribution, large deviation principle, Deterministic Minimum Empirical Divergence policy, Newton's method, KL divergence, Pinsker's inequality, Hoeffding's inequality, Chernoff-Hoeffding inequality, Upper Confidence Bound)
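The Upper Confidence Bound idea in this entry can be sketched with the classic UCB1 rule on Bernoulli arms: play the arm maximizing empirical mean plus an exploration bonus sqrt(2 ln t / n_i). The arm means, horizon, and bonus constant below are illustrative assumptions (KL-UCB and MED replace this bonus with likelihood-based quantities).

```python
import math
import random

def ucb1(arm_means, horizon, rng=random.Random(0)):
    """UCB policy on Bernoulli arms: empirical mean + sqrt(2 ln t / n_i) bonus."""
    K = len(arm_means)
    counts = [0] * K     # pulls per arm
    sums = [0.0] * K     # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= K:
            i = t - 1    # initialization: play each arm once
        else:
            i = max(range(K), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[i] else 0.0  # Bernoulli reward
        counts[i] += 1
        sums[i] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

Suboptimal arms are pulled only O(log T / gap^2) times, which is what the regret upper bounds in the post quantify.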
Algorithms

Protected: Overview of Discriminant Conformal Losses in Statistical Mathematics Theory

Overview of discriminant conformal losses in statistical mathematics theory (ramp loss, convex margin losses, nonconvex Φ-margin losses, discriminant conformity, robust support vector machines, discriminant conformity theorems, L2-support vector machines, squared hinge loss, logistic loss, hinge loss, boosting, exponential loss, discriminant conformity theorem for convex margin losses, Bayes rule, expected Φ-loss, expected discriminant error, monotonically nonincreasing convex function, empirical Φ-loss, empirical discriminant error)
Algorithms

Protected: Online Stochastic Optimization and Stochastic Gradient Descent for Machine Learning

Online stochastic optimization and stochastic gradient descent methods for machine learning, used in digital transformation (DX), artificial intelligence (AI), and machine learning (ML) tasks
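The core of stochastic gradient descent can be sketched in a few lines: instead of the full-batch gradient, each update uses the gradient of the loss on a single randomly drawn example. The linear-regression objective, learning rate, and epoch count below are illustrative assumptions.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=20, seed=0):
    """Plain SGD on squared error: for each shuffled example i,
    w <- w - lr * (x_i @ w - y_i) * x_i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):   # one pass in random order per epoch
            err = X[i] @ w - y[i]      # residual on this single example
            w -= lr * err * X[i]       # stochastic gradient step
    return w

# Noiseless synthetic data (assumed), so SGD should recover w_true closely
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
w_true = np.array([1.5, -0.5])
y = X @ w_true
w = sgd_linear_regression(X, y)
```

Each step costs O(d) regardless of n, which is what makes the online/stochastic setting attractive at scale.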
Algorithms

Protected: Optimality conditions and algorithm stopping conditions in machine learning

Optimality conditions and algorithm stopping conditions in machine learning, used in digital transformation, artificial intelligence, and machine learning tasks (scaling, influence, machine epsilon, algorithm stopping conditions, iterative methods, convex optimal solutions, constrained optimization problems, global optimal solutions, local optimal solutions, convex functions, second-order sufficient conditions, second-order necessary conditions, first-order necessary conditions)
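Typical stopping conditions of the kind this entry covers can be sketched as follows: stop an iterative method when a scaled gradient norm (the first-order optimality measure) is small, or when the objective stagnates at a tolerance tied to machine epsilon. The solver, tolerances, and test problem are illustrative assumptions.

```python
import numpy as np

def gradient_descent(f, grad, x0, lr=0.1, gtol=1e-6, ftol=None, max_iter=10_000):
    """Gradient descent with two stopping conditions:
    (1) scaled gradient norm below gtol (approximate first-order condition);
    (2) relative change in f below ftol, defaulted from machine epsilon."""
    if ftol is None:
        ftol = np.sqrt(np.finfo(float).eps)  # can't resolve f changes below ~sqrt(eps)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= gtol * max(1.0, np.linalg.norm(x)):
            break  # first-order necessary condition approximately met
        x = x - lr * g
        fx_new = f(x)
        if abs(fx - fx_new) <= ftol * max(1.0, abs(fx)):
            break  # objective has stagnated to within rounding accuracy
        fx = fx_new
    return x, k

# Assumed convex test problem: f(x) = x^T x, minimized at the origin
x_opt, iters = gradient_descent(lambda x: x @ x, lambda x: 2 * x, [4.0, 1.0])
```

Scaling both tests by max(1, |x|) or max(1, |f|) keeps the criteria meaningful whether the iterates are large or near zero.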
Algorithms

Protected: Unsupervised Learning with Gaussian Processes (2) Extension of Gaussian Process Latent Variable Model

Extensions of Gaussian process latent variable models as unsupervised learning with Gaussian processes, an application of stochastic generative models utilized in digital transformation, artificial intelligence, and machine learning tasks (infinite warped mixture models, Gaussian process dynamical models, Poisson point processes, log Gaussian Cox processes, latent Gaussian processes, elliptical slice sampling)
Clojure

Protected: Stochastic gradient descent implementation using Clojure and Hadoop

Stochastic gradient descent implementation using Clojure and Hadoop for digital transformation, artificial intelligence, and machine learning tasks (mini-batch, Mapper, Reducer, Parkour, Tesser, batch gradient descent, join-step partitioning, uberjar, Java, stochastic gradient descent, Hadoop cluster, Hadoop Distributed File System, HDFS)