機械学習:Machine Learning

アルゴリズム:Algorithms

Protected: Kernel functions as the basis of kernel methods in statistical mathematics theory

Kernel functions (Gaussian kernels, polynomial kernels, linear kernels, regression functions, linear models, regression problems, discriminant problems) as the basis of kernel methods in statistical mathematics theory, used in digital transformation, artificial intelligence, and machine learning tasks
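As a rough illustration of the kernels listed here, the following is a minimal NumPy sketch of Gram-matrix construction and kernel ridge regression; the data, hyperparameters, and regularization constant are made up for illustration.

```python
import numpy as np

# Common kernel functions (assumed standard forms; hyperparameters are illustrative).
def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, degree=3, c=1.0):
    return (x @ y + c) ** degree

def gaussian_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def gram_matrix(X, kernel):
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j])
    return K

# Kernel ridge regression: alpha = (K + lam I)^(-1) y, f(x) = sum_i alpha_i k(x_i, x)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
K = gram_matrix(X, gaussian_kernel)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)
f_at_x0 = np.array([gaussian_kernel(x_i, X[0]) for x_i in X]) @ alpha  # prediction at X[0]
print(f_at_x0, y[0])
```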
アルゴリズム:Algorithms

Protected: Basics of Gradient Methods (line search, coordinate descent, steepest descent, and error backpropagation)

Fundamentals of gradient methods utilized in digital transformation, artificial intelligence, and machine learning tasks (line search, coordinate descent, steepest descent, error backpropagation, stochastic optimization, multilayer perceptron, AdaBoost, boosting, Wolfe condition, Zoutendijk condition, Armijo condition, backtracking methods, Goldstein condition, strong Wolfe condition)
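To make the ingredients above concrete, here is a minimal sketch of steepest descent combined with a backtracking line search that enforces the Armijo (sufficient decrease) condition; the quadratic objective and all step-size constants are illustrative.

```python
import numpy as np

A = np.array([[3.0, 0.0], [0.0, 1.0]])  # illustrative SPD matrix

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def backtracking(x, d, alpha=1.0, rho=0.5, c=1e-4):
    # Shrink the step until the Armijo condition f(x+ad) <= f(x) + c*a*grad(x)^T d holds.
    while f(x + alpha * d) > f(x) + c * alpha * (grad_f(x) @ d):
        alpha *= rho
    return alpha

x = np.array([2.0, -1.5])
for _ in range(100):
    d = -grad_f(x)                       # steepest-descent direction
    if np.linalg.norm(d) < 1e-8:
        break
    x = x + backtracking(x, d) * d
print(x)  # converges toward the minimizer [0, 0]
```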
アルゴリズム:Algorithms

Protected: Machine Learning with Bayesian Inference – Mixture Models, Data Generation Process and Posterior Distribution

Mixture models, data generation processes, and posterior distributions (graphical models, Poisson distribution, Gaussian distribution, Dirichlet distribution, categorical distribution) in machine learning with Bayesian inference, used in digital transformation, artificial intelligence, and machine learning tasks
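As a small sketch of the generative process and posterior inference mentioned here, the following Gibbs sampler for a two-component Poisson mixture uses Dirichlet and Gamma conjugate priors; the component count, priors, and true parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative process: pi ~ Dirichlet(alpha), z_n ~ Categorical(pi), x_n ~ Poisson(lambda_{z_n})
K, N = 2, 200
alpha = np.ones(K)
a, b = 1.0, 1.0                          # Gamma prior on the Poisson rates
pi_true, lam_true = np.array([0.4, 0.6]), np.array([3.0, 10.0])
z_true = rng.choice(K, size=N, p=pi_true)
x = rng.poisson(lam_true[z_true])

# Gibbs sampling: alternate z | pi, lambda and pi, lambda | z (conjugate updates).
pi = np.full(K, 1.0 / K)
lam = rng.gamma(a, 1.0 / b, size=K)
for it in range(200):
    logp = np.log(pi) + x[:, None] * np.log(lam) - lam      # unnormalized categorical posterior
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p_n) for p_n in p])
    counts = np.bincount(z, minlength=K)
    pi = rng.dirichlet(alpha + counts)                       # Dirichlet posterior for pi
    lam = rng.gamma(a + np.bincount(z, weights=x, minlength=K),
                    1.0 / (b + counts))                      # Gamma posterior for each rate
print(np.sort(lam))  # should end up near [3, 10]
```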
推論技術:Inference Technology

Protected: Explainable Artificial Intelligence (9) Model-independent interpretation (ALE plot)

The ALE (Accumulated Local Effects) plot is a model-agnostic, post-hoc interpretation method in explainable machine learning, used for digital transformation (DX), artificial intelligence (AI), and machine learning (ML) tasks.
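A minimal first-order ALE computation for one feature of a black-box model might look like the sketch below; the quadratic model, uniform data, and bin count are illustrative assumptions rather than the article's own example.

```python
import numpy as np

def model(X):
    # Illustrative black-box predictor with an interaction term.
    return X[:, 0] ** 2 + X[:, 0] * X[:, 1]

def ale_1d(predict, X, feature, n_bins=10):
    # Bin edges from quantiles of the feature of interest.
    z = np.quantile(X[:, feature], np.linspace(0, 1, n_bins + 1))
    effects = []
    for lo, hi in zip(z[:-1], z[1:]):
        in_bin = (X[:, feature] >= lo) & (X[:, feature] <= hi)
        if not in_bin.any():
            effects.append(0.0)
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature], X_hi[:, feature] = lo, hi
        # Local effect: average prediction change across the bin.
        effects.append(np.mean(predict(X_hi) - predict(X_lo)))
    ale = np.cumsum(effects)             # accumulate the local effects
    return z[1:], ale - ale.mean()       # center so the average effect is zero

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
grid, ale = ale_1d(model, X, feature=0)
print(grid, ale)
```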
Clojure

Protected: Large-scale Machine Learning with Apache Spark and MLlib

Large-scale machine learning with Apache Spark and MLlib for digital transformation, artificial intelligence, and machine learning tasks (predicted values, RMSE, factor matrices, rank, latent features, neighborhoods, sum-of-squares error, Mahout, ALS, Scala, RDD, alternating least squares, stochastic gradient descent, persistence, caching, Flambo, Clojure, Java)
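The post itself works through Clojure and Flambo; as a rough sketch of the same alternating-least-squares idea, here is a minimal PySpark example (the toy ratings, column names, rank, and regularization are illustrative).

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("als-sketch").getOrCreate()

# Tiny illustrative (user, item, rating) table.
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 0, 5.0), (1, 2, 1.0), (2, 1, 3.0), (2, 2, 4.0)],
    ["user", "item", "rating"],
)

# Alternating least squares over a rank-r factor model: R ~ U V^T.
als = ALS(rank=2, maxIter=10, regParam=0.1,
          userCol="user", itemCol="item", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)

predictions = model.transform(ratings)
rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                           predictionCol="prediction").evaluate(predictions)
print("RMSE:", rmse)
spark.stop()
```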
Python

Protected: The Application of Neural Networks to Reinforcement Learning (1) Overview

Overview of the application of neural networks to reinforcement learning utilized in digital transformation, artificial intelligence, and machine learning tasks (Agent, Epsilon-Greedy method, Trainer, Observer, Logger, Stochastic Gradient Descent, SGD, Adaptive Moment Estimation, Adam, Optimizer, Error Backpropagation, Backpropagation, Gradient, Activation Function, Batch Method, Value Function, Strategy)
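To give a feel for how these pieces fit together, the following is a minimal sketch of an epsilon-greedy agent whose action-value function is a tiny one-hidden-layer network trained by SGD with manual backpropagation; the toy environment, reward rule, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, hidden = 4, 2, 8
W1 = rng.normal(0, 0.1, (n_states, hidden))
W2 = rng.normal(0, 0.1, (hidden, n_actions))

def q_values(s_onehot):
    h = np.tanh(s_onehot @ W1)           # hidden activation
    return h, h @ W2                     # Q(s, .) for all actions

def epsilon_greedy(q, epsilon=0.1):
    return rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q))

lr = 0.05
for step in range(2000):
    s = rng.integers(n_states)
    s_vec = np.eye(n_states)[s]
    h, q = q_values(s_vec)
    a = epsilon_greedy(q)
    reward = 1.0 if a == s % n_actions else 0.0   # toy reward rule
    target = reward                               # one-step terminal toy transition
    # Backpropagate the squared TD error through both layers (plain SGD update).
    dq = np.zeros(n_actions); dq[a] = q[a] - target
    W2 -= lr * np.outer(h, dq)
    W1 -= lr * np.outer(s_vec, (W2 @ dq) * (1 - h ** 2))
```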
アルゴリズム:Algorithms

Machine Learning by Ensemble Methods – Fundamentals and Algorithms Reading Notes

Fundamentals and algorithms in machine learning with ensemble methods used in digital transformation, artificial intelligence, and machine learning tasks (class-imbalance learning, cost-sensitive learning, active learning, semi-supervised learning, similarity-based methods, clustering ensemble methods, graph-based methods, relabeling-based methods, transformation-based methods, clustering, optimization-based pruning, ensemble pruning, combination methods, bagging, boosting)
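For two of the combination strategies listed here, bagging and boosting, a minimal scikit-learn sketch might look like this; the synthetic data and estimator settings are illustrative, not the book's examples.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic binary classification problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: bootstrap-resampled trees combined by voting.
bagging = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                            n_estimators=50, random_state=0)
# Boosting (AdaBoost): reweighted stumps combined by weighted voting.
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    model.fit(X_tr, y_tr)
    print(name, model.score(X_te, y_te))
```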
アルゴリズム:Algorithms

Protected: Information Geometry of Positive Definite Matrices (3) Calculation Procedure and Curvature

Computation procedures and curvature in the information geometry of positive definite matrices, utilized in digital transformation, artificial intelligence, and machine learning tasks
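As a small computational sketch of this geometry, the following computes the geodesic between two positive definite matrices and the affine-invariant Riemannian distance along it; the two example matrices are illustrative, and this shows only one of the possible geometric structures on the SPD cone.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm, sqrtm

# Illustrative symmetric positive definite matrices.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.3], [-0.3, 3.0]])

A_half = sqrtm(A)
A_inv_half = np.linalg.inv(A_half)
M = A_inv_half @ B @ A_inv_half          # A^{-1/2} B A^{-1/2}

def geodesic(t):
    # gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}, with gamma(0)=A, gamma(1)=B.
    return (A_half @ fractional_matrix_power(M, t) @ A_half).real

# Affine-invariant distance: Frobenius norm of log(A^{-1/2} B A^{-1/2}).
distance = np.linalg.norm(logm(M), "fro")
print(geodesic(0.5))
print(distance)
```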
アルゴリズム:Algorithms

Protected: Measures for Stochastic Bandit Problems – Likelihood-based Measures (UCB and MED measures)

Measures for stochastic bandit problems: likelihood-based UCB and MED measures (Indexed Minimum Empirical Divergence (IMED) policy, KL-UCB measures, DMED measures, regret upper bound, Bernoulli distribution, large deviation principle, Deterministic Minimum Empirical Divergence policy, Newton's method, KL divergence, Pinsker's inequality, Hoeffding's inequality, Chernoff-Hoeffding inequality, Upper Confidence Bound)
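As a baseline for the index policies above, here is a minimal UCB1 sketch for a Bernoulli bandit; the arm means and horizon are illustrative, and the KL-UCB / MED variants discussed in the post would replace the square-root confidence bound with a KL-divergence-based index.

```python
import numpy as np

rng = np.random.default_rng(0)

p_true = np.array([0.3, 0.5, 0.7])       # illustrative Bernoulli arm means
K, T = len(p_true), 5000
counts = np.zeros(K)
means = np.zeros(K)

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                       # play each arm once first
    else:
        ucb = means + np.sqrt(2 * np.log(t) / counts)   # Upper Confidence Bound index
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < p_true[arm])
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # incremental mean update

print(counts)  # most pulls should concentrate on the best arm (mean 0.7)
```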
アルゴリズム:Algorithms

Protected: Overview of Discriminant Adaptive Losses in Statistical Mathematics Theory

Overview of Discriminant Adaptive Losses in Statistical Mathematics Theory (Ramp Loss, Convex Margin Losses, Nonconvex Φ-Margin Losses, Discriminant Adaptivity, Robust Support Vector Machines, Discriminant Adaptivity Theorems, L2-Support Vector Machines, Squared Hinge Loss, Logistic Loss, Hinge Loss, Boosting, Exponential Loss, Discriminant Adaptivity Theorem for Convex Margin Losses, Bayes Rule, Prediction Φ-Loss, Prediction Discriminant Error, Monotonically Nonincreasing Convex Function, Empirical Φ-Loss, Empirical Discriminant Error)
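To make the surrogate losses named above concrete, the following writes each as a function of the margin m = y f(x); the ramp-loss clipping level is an illustrative choice, and the exponential loss is the one used by AdaBoost.

```python
import numpy as np

def hinge(m):
    return np.maximum(0.0, 1.0 - m)

def squared_hinge(m):
    return np.maximum(0.0, 1.0 - m) ** 2

def logistic(m):
    return np.log1p(np.exp(-m))

def exponential(m):
    return np.exp(-m)

def ramp(m, s=-1.0):
    # Non-convex clipped hinge: bounded above by 1 - s, hence robust to outliers.
    return np.minimum(hinge(m), 1.0 - s)

margins = np.linspace(-2, 2, 5)
for loss in (hinge, squared_hinge, logistic, exponential, ramp):
    print(loss.__name__, np.round(loss(margins), 3))
```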