Gradient Descent

Algorithms

Protected: Sparse learning based on group L1-norm regularization

Sparse machine learning based on group L1-norm regularization for digital transformation, artificial intelligence, and machine learning tasks (relative duality gap, dual problem, gradient descent, augmented Lagrangian function, dual augmented Lagrangian method, Hessian, L1-norm regularization, group L1-norm regularization, dual norm, empirical error minimization problem, proximal operator, Nesterov's acceleration method, proximal gradient method, iteratively reweighted shrinkage method, variational representation, number of nonzero groups, kernel-weighted regularization term, concave conjugate, reproducing kernel Hilbert space, support vector machine, kernel weights, multiple kernel learning, basis kernel functions, EEG signals, MEG signals, voxels, electric dipoles, neurons, multi-task learning)
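The entry above lists the proximal operator and the proximal gradient method for the group L1 norm. As a minimal illustrative sketch (not taken from the protected article; function names and the tiny example are my own), the group-L1 proximal operator is block soft-thresholding, and one proximal gradient iteration applies it after an ordinary gradient step:

```python
import math

def group_l1_prox(v, groups, t):
    """Proximal operator of t * sum_g ||v_g||_2 (block soft-thresholding).

    groups is a list of index lists; each group is scaled toward zero,
    and groups whose norm is below t are zeroed out entirely.
    """
    out = list(v)
    for g in groups:
        norm = math.sqrt(sum(v[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - t / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * v[i]
    return out

def prox_gradient_step(w, grad, step, lam, groups):
    """One proximal gradient iteration: gradient move, then group prox."""
    moved = [wi - step * gi for wi, gi in zip(w, grad)]
    return group_l1_prox(moved, groups, step * lam)

# A group with small norm is driven exactly to zero -- the source of sparsity.
print(group_l1_prox([3.0, 4.0, 0.5], [[0, 1], [2]], 1.0))
```

Zeroing whole groups at once is what distinguishes the group L1 norm from the plain L1 norm, which zeros individual coordinates.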
Algorithms

Protected: Overview of nu-Support Vector Machines by Statistical Mathematics Theory

Overview of nu-support vector machines from statistical mathematics theory, utilized in digital transformation, artificial intelligence, and machine learning tasks (kernel functions, boundedness, empirical margin discriminant error, models without bias terms, reproducing kernel Hilbert spaces, prediction discriminant error, uniform bounds, statistical consistency, C-support vector machines, correspondence, degrees of freedom of statistical models, dual problem, gradient descent, minimum distance problem, discriminant bounds, geometric interpretation, binary discrimination, empirical discriminant error, regularization parameter, minimax theorem, Gram matrix, Lagrangian function).
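The entry mentions C-support vector machines, models without bias terms, gradient descent, and the regularization parameter. As a hedged sketch of how those pieces fit together (my own minimal example, not the article's derivation; the ν-SVM itself replaces C with a margin-fraction parameter ν, which this sketch does not implement), subgradient descent on the bias-free C-SVM primal objective looks like this:

```python
def hinge_svm_subgradient(X, y, lam=0.01, lr=0.1, epochs=300):
    """Subgradient descent on (lam/2)||w||^2 + mean hinge loss, no bias term."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        g = [lam * wi for wi in w]  # gradient of the L2 regularizer
        for x, t in zip(X, y):
            # Points inside the margin contribute a hinge-loss subgradient
            if t * sum(wi * xi for wi, xi in zip(w, x)) < 1:
                for j in range(len(w)):
                    g[j] -= t * x[j] / n
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

X = [[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]]
y = [1, 1, -1, -1]
w = hinge_svm_subgradient(X, y)
```

Here `lam` plays the role of the regularization parameter (inverse of C); in the ν-formulation one would instead constrain the fraction of margin errors and support vectors directly.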
Clojure

Protected: Clojure implementation of distributed computation processing (map-reduce) used in Hadoop

Clojure implementation of distributed computation processing (map-reduce) used in Hadoop for digital transformation, artificial intelligence, and machine learning tasks (Tesser, reducer functions, fold, cost function, gradient descent method, feature extraction, feature-scales function, feature scaling, gradient descent learning rate, gradient descent update rule, iterative algorithms, multiple regression, correlation matrix, fuse, commutativity, linear regression, covariance, Hadoop, parallel fold)
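The entry's core idea is that a gradient can be computed as a parallel fold: map each data chunk to a partial gradient, then merge the partials with a commutative, associative combine step, as Tesser's `fold` does over Hadoop. A minimal language-neutral sketch of that shape (in Python rather than Clojure, with illustrative function names of my own) for linear-regression squared error:

```python
from functools import reduce

def chunks(seq, size):
    """Split the dataset into chunks, standing in for distributed partitions."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def partial_grad(chunk, w):
    # Map step: per-chunk gradient sum of 0.5*(w*x - y)^2, plus a count
    g = sum((w * x - y) * x for x, y in chunk)
    return (g, len(chunk))

def combine(a, b):
    # Reduce step: commutative, associative merge of partial results,
    # which is what lets the fold run in parallel across chunks
    return (a[0] + b[0], a[1] + b[1])

def distributed_gradient(data, w, chunk_size=2):
    parts = [partial_grad(c, w) for c in chunks(data, chunk_size)]
    g, n = reduce(combine, parts)
    return g / n  # mean gradient, ready for the update rule w -= lr * g

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
print(distributed_gradient(data, 0.0))  # → -15.0
```

Because `combine` is commutative and associative, chunks can be reduced in any order or on any machine, which is the property Tesser's `fuse` and `fold` rely on.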
Algorithms

Protected: Overview of Gaussian Processes (4): Hyperparameter Estimation and Generalization of Gaussian Process Regression

Hyperparameter estimation by gradient descent for Gaussian process regression as a stochastic generative model, utilized in digital transformation, artificial intelligence, and machine learning tasks (SCG method, L-BFGS method, global solution via MCMC)
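As a minimal sketch of what "hyperparameter estimation by gradient descent" means here (my own toy example, not the article's: a two-point dataset so the marginal likelihood has a closed form, and a finite-difference gradient standing in for the analytic derivative that SCG or L-BFGS would use):

```python
import math

def rbf(x1, x2, ell):
    # RBF kernel with length-scale ell; signal variance fixed at 1 for brevity
    return math.exp(-((x1 - x2) ** 2) / (2.0 * ell ** 2))

def nll(x, y, ell, noise=0.1):
    """Negative log marginal likelihood for a 2-point dataset, closed form."""
    a = 1.0 + noise           # diagonal entry k(x_i, x_i) + noise
    b = rbf(x[0], x[1], ell)  # off-diagonal entry
    det = a * a - b * b
    # y^T K^{-1} y via the 2x2 inverse
    quad = (a * y[0] ** 2 - 2.0 * b * y[0] * y[1] + a * y[1] ** 2) / det
    return 0.5 * quad + 0.5 * math.log(det)

def fit_length_scale(x, y, ell=0.3, lr=0.05, steps=200, h=1e-5):
    """Gradient descent on the negative log marginal likelihood."""
    for _ in range(steps):
        g = (nll(x, y, ell + h) - nll(x, y, ell - h)) / (2.0 * h)
        ell = max(ell - lr * g, 1e-3)  # keep the length-scale positive
    return ell

x, y = [0.0, 1.0], [1.0, 1.0]
ell_hat = fit_length_scale(x, y)
```

With two equal targets, the marginal likelihood favors a longer length-scale (more correlation between the points), so gradient descent increases `ell` from its starting value.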
Calculus

"This Is a Good Introduction to Deep Learning" (Machine Learning Startup Series): Reading Notes

Overview of deep learning for digital transformation and artificial intelligence tasks, including machine learning, gradient descent, regularization, error backpropagation, autoencoders, convolutional neural networks, recurrent neural networks, Boltzmann machines, and reinforcement learning.
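Gradient descent and error backpropagation, which recur throughout these entries, can be shown together in a miniature form (a sketch of my own, not from the book: a one-hidden-unit tanh network trained by stochastic gradient descent, with the backward pass written out as explicit chain-rule steps):

```python
import math

def loss(w, v, data):
    # Total squared error; the 0.5 factor matches the gradients below
    return sum(0.5 * (v * math.tanh(w * x) - t) ** 2 for x, t in data)

def train_step(w, v, x, t, lr):
    # Forward pass: y = v * tanh(w * x)
    h = math.tanh(w * x)
    y = v * h
    # Backward pass (backpropagation in miniature, via the chain rule)
    e = y - t                       # dL/dy for L = 0.5*(y - t)^2
    dv = e * h                      # dL/dv
    dw = e * v * (1.0 - h * h) * x  # dL/dw, using tanh'(z) = 1 - tanh(z)^2
    # Gradient descent update rule: parameter -= learning rate * gradient
    return w - lr * dw, v - lr * dv

data = [(-1.0, -0.5), (1.0, 0.5), (2.0, 0.9)]
w0, v0 = 0.5, 0.5
w, v = w0, v0
for _ in range(500):
    for x, t in data:
        w, v = train_step(w, v, x, t, lr=0.1)
```

The same two ingredients, a forward pass and a chain-rule backward pass feeding the update rule, scale up to the convolutional and recurrent networks the notes cover.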