人工知能:Artificial Intelligence

アルゴリズム:Algorithms

Protected: Sparse Machine Learning with Overlapping Sparse Regularization

Sparse machine learning with overlapping sparse regularization used in digital transformation, artificial intelligence, and machine learning tasks (primal problem, dual problem, relative duality gap, dual norm, Moreau's theorem, augmented Lagrangian, alternating direction method of multipliers, stopping conditions, group L1 norm with overlapping groups, prox operator, Lagrange multiplier vector, linear constraints, constrained minimization problems, multilinear ranks of tensors, convex relaxation, overlapping trace norm, substitution matrices, regularization methods, auxiliary variables, elastic net regularization, penalty terms, Tucker decomposition, higher-order singular value decomposition, factor matrices, singular value decomposition, wavelet transform, total variation, noise removal, compressed sensing, anisotropic total variation, tensor decomposition, elastic net)
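Since the keywords above center on the prox operator of the group L1 norm and its use inside ADMM, here is a minimal sketch (not the article's code) of the block soft-thresholding step that ADMM applies to the per-group auxiliary copies when groups overlap; the function name prox_group_l1, the group indexing, and the toy vector are illustrative assumptions.

import numpy as np

def prox_group_l1(v, groups, lam):
    # Block soft-thresholding: prox of lam * sum_g ||v_g||_2.
    # Groups are treated as disjoint blocks here; with overlapping groups,
    # ADMM first copies the shared coordinates into per-group auxiliary variables.
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[g] = scale * v[g]
    return out

# Toy usage: the small-norm block [2, 3] is zeroed out, the other block is only shrunk.
v = np.array([0.5, -2.0, 0.3, 0.1])
print(prox_group_l1(v, [[0, 1], [2, 3]], lam=1.0))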
機械学習:Machine Learning

Linear Algebra Overview, Libraries, and Reference Books

Linear Algebra and Machine Learning. Linear algebra is a field of mathematics that uses vectors and matrices to...
アルゴリズム:Algorithms

Protected: Optimization for the main problem in machine learning

Optimization of the main problem in machine learning used in digital transformation, artificial intelligence, and machine learning tasks (barrier function method, penalty function method, globally optimal solution, eigenvalues of the Hessian matrix, feasible region, unconstrained optimization problems, line search, Lagrange multipliers in the optimality conditions, interior points, active constraint method)
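To make the penalty function method named above concrete, here is a minimal sketch (our own one-dimensional toy example, not the article's): minimizing (x - 2)^2 subject to x <= 1 by minimizing a quadratically penalized objective with an increasing penalty weight mu.

def penalized_grad(x, mu):
    # Gradient of the penalized objective (x - 2)^2 + mu * max(0, x - 1)^2
    return 2.0 * (x - 2.0) + 2.0 * mu * max(0.0, x - 1.0)

x = 0.0
for mu in [1.0, 10.0, 100.0, 1000.0]:
    step = 1.0 / (2.0 + 2.0 * mu)      # safe step size (1/L) for this piecewise-quadratic objective
    for _ in range(5000):              # plain gradient descent, no line search
        x -= step * penalized_grad(x, mu)
    print(f"mu={mu:7.1f}  x={x:.4f}")  # minimizers approach the constrained optimum x = 1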
アルゴリズム:Algorithms

Protected: Applied Bayesian inference in non-negative matrix factorization: model construction and inference

Model building and inference for non-negative matrix factorization as an application of Bayesian inference, used in digital transformation, artificial intelligence, and machine learning tasks (Poisson distribution, latent variables, gamma distribution, approximate posterior distribution, variational inference, spectrogram of organ performance data, missing value interpolation, restoration of high-frequency components, super-resolution, graphical models, hyperparameters, modeling, auxiliary variables, linear dimensionality reduction, recommendation algorithms, speech data, fast Fourier transform, natural language processing)
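The article builds a full gamma-Poisson model and infers it variationally; as a rough point of reference only (our assumption, not the article's derivation), the sketch below shows the non-Bayesian counterpart: NMF under a Poisson / generalized-KL objective fitted with the classical multiplicative updates on random toy count data.

import numpy as np

rng = np.random.default_rng(0)
V = rng.poisson(lam=3.0, size=(40, 30)).astype(float) + 1e-9   # toy nonnegative count data
K = 5                                   # number of latent components (assumed)
W = rng.random((40, K)) + 0.1           # nonnegative factors
H = rng.random((K, 30)) + 0.1

for _ in range(200):
    WH = W @ H
    H *= (W.T @ (V / WH)) / W.sum(axis=0, keepdims=True).T     # multiplicative update for H
    WH = W @ H
    W *= ((V / WH) @ H.T) / H.sum(axis=1, keepdims=True).T     # multiplicative update for W

WH = W @ H
print(np.sum(V * np.log(V / WH) - V + WH))   # generalized KL divergence, decreases over the updates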
アルゴリズム:Algorithms

Protected: Implementation of two approaches to improve environment awareness, a weak point of deep reinforcement learning

Implementation of two approaches to improve environment awareness, a weak point of deep reinforcement learning, used in digital transformation, artificial intelligence, and machine learning tasks (inverse prediction, constraints, representation learning, imitation learning, reconstruction, prediction, World Models, transition function, reward function, weaknesses of representation learning, VAE, Vision Model, RNN, Memory RNN, Monte Carlo methods, TD search, Monte Carlo tree search, model-based learning, Dyna, deep reinforcement learning)
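Among the keywords above, Dyna is the most compact to illustrate; below is a minimal tabular Dyna-Q sketch (our own toy example on an assumed five-state chain environment, not the article's implementation): each real transition updates Q directly, is stored in a learned model, and the model is then replayed for a few planning updates.

import random

# Toy deterministic chain: states 0..4, actions 0 (left) / 1 (right);
# reward 1 only on reaching state 4, which ends the episode.
GOAL = 4
def step(s, a):
    s2 = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (0, 1)}
model = {}                                  # (state, action) -> (reward, next state, done)
alpha, gamma, eps, n_plan = 0.1, 0.95, 0.1, 20

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice((0, 1)) if random.random() < eps else max((0, 1), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # direct reinforcement learning: one Q-learning update on real experience
        target = r if done else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        # model learning: remember the observed transition
        model[(s, a)] = (r, s2, done)
        # planning: replay remembered transitions to refine Q without further acting
        for _ in range(n_plan):
            ps, pa = random.choice(list(model))
            pr, ps2, pdone = model[(ps, pa)]
            ptarget = pr if pdone else pr + gamma * max(Q[(ps2, 0)], Q[(ps2, 1)])
            Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
        s = s2

print(max(Q[(0, 0)], Q[(0, 1)]))            # close to gamma**3 for the 4-step optimal path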
アルゴリズム:Algorithms

Protected: Regression analysis using Clojure (2) Multiple regression model

アルゴリズム:Algorithms

Protected: Optimal arm bandit and Bayes optimality when the player's candidate actions are numerous or continuous (1)

Optimal arm bandit and Bayes optimality when the player's candidate actions are numerous or continuous (linear kernel, linear bandit, covariance function, Matérn kernel, Gaussian kernel, positive definite kernel function, block matrix, matrix inversion formula, prior joint probability density, Gaussian process, Lipschitz continuity, Euclidean norm, simple regret, black-box optimization, best arm identification, regret, cross-validation, leave-one-out cross-validation, continuous-armed bandit)
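As a quick reference for the Gaussian and Matérn kernels in the list, here is a minimal sketch (our own illustration, with assumed length-scale parameters) that builds a Gaussian-process covariance matrix over a handful of candidate arm locations.

import numpy as np

def gaussian_kernel(x, y, ell=1.0):
    # Squared-exponential (Gaussian) kernel with length scale ell
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * ell ** 2))

def matern32_kernel(x, y, ell=1.0):
    # Matérn kernel with smoothness parameter nu = 3/2
    d = np.linalg.norm(x - y)
    c = np.sqrt(3.0) * d / ell
    return (1.0 + c) * np.exp(-c)

# Covariance (Gram) matrices of a Gaussian-process prior over 5 candidate arm locations
X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
K_gauss = np.array([[gaussian_kernel(a, b) for b in X] for a in X])
K_matern = np.array([[matern32_kernel(a, b) for b in X] for a in X])
print(np.round(K_gauss, 3))
print(np.round(K_matern, 3))   # both are symmetric positive definite kernel matrices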
アルゴリズム:Algorithms

Protected: Sparse machine learning based on trace-norm regularization

Sparse machine learning based on trace norm regularization for digital transformation, artificial intelligence, and machine learning tasks (PROPACK, random projection, singular value decomposition, low rank, sparse matrices, update formula for the proximal gradient method, collaborative filtering, singular value solvers, trace norm, prox operator, regularization parameter, singular values, singular vectors, accelerated proximal gradient method, learning problems with trace norm regularization, positive semidefinite matrices, square root of a matrix, Frobenius norm, squared Frobenius norm regularization, trace norm minimization, binary classification problems, multi-task learning, group L1 norm, recommendation systems)
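Since the list centers on the prox operator of the trace norm inside the (accelerated) proximal gradient method, here is a minimal sketch (our own toy example, not the article's code) of singular value soft-thresholding, which is that prox operator; the threshold lam and the toy matrix are assumptions.

import numpy as np

def prox_trace_norm(X, lam):
    # Singular value soft-thresholding: the prox operator of lam * ||X||_tr,
    # applied at every iteration of the (accelerated) proximal gradient method.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

# Toy usage: soft-thresholding the singular values of a noisy low-rank matrix
# pushes the small ones to zero, so the result typically has reduced rank.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
noisy = low_rank + 0.05 * rng.standard_normal((6, 5))
print(np.linalg.matrix_rank(noisy), np.linalg.matrix_rank(prox_trace_norm(noisy, lam=0.5)))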
アルゴリズム:Algorithms

Protected: Optimality conditions for inequality-constrained optimization problems in machine learning

Optimality conditions for inequality-constrained optimization problems in machine learning used in digital transformation, artificial intelligence, and machine learning tasks (dual problems, strong duality, Lagrangian functions, linear programming problems, Slater's condition, primal-dual interior point method, weak duality, first-order sufficient conditions for convex optimization, second-order sufficient conditions, KKT conditions, stopping conditions, first-order optimality conditions, active constraints, Karush-Kuhn-Tucker, locally optimal solutions)
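For reference (standard textbook form, not quoted from the protected article), the Karush-Kuhn-Tucker conditions named above, for $\min_x f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$ with Lagrangian $L(x,\lambda,\mu) = f(x) + \sum_i \lambda_i g_i(x) + \sum_j \mu_j h_j(x)$, read:

\begin{aligned}
&\nabla_x L(x^\ast,\lambda^\ast,\mu^\ast) = 0 && \text{(stationarity)}\\
&g_i(x^\ast) \le 0,\quad h_j(x^\ast) = 0 && \text{(primal feasibility)}\\
&\lambda_i^\ast \ge 0 && \text{(dual feasibility)}\\
&\lambda_i^\ast\, g_i(x^\ast) = 0 && \text{(complementary slackness)}
\end{aligned}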
アルゴリズム:Algorithms

Protected: Fundamentals of convex analysis in stochastic optimization (1): convex functions, subdifferentials, and dual functions

Convex functions, subdifferentials, and dual functions (convex functions, conjugate functions, Young-Fenchel inequality, subdifferentials, Legendre transform, subgradients, L1 norm, relative interior, affine hull, affine sets, closure, epigraph, convex hull, smooth convex functions, strictly convex functions, proper closed convex functions, closed convex functions, effective domain, convex sets) as fundamentals of convex analysis in stochastic optimization used for digital transformation, artificial intelligence, and machine learning tasks.
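For orientation only (standard definitions, not quoted from the protected article), the conjugate function, the Young-Fenchel inequality, and the subdifferential named above are:

\begin{aligned}
&f^\ast(y) = \sup_x \,\{\langle y, x\rangle - f(x)\} && \text{(conjugate / Legendre-Fenchel transform)}\\
&f(x) + f^\ast(y) \ge \langle x, y\rangle && \text{(Young-Fenchel inequality)}\\
&\partial f(x) = \{\, g : f(z) \ge f(x) + \langle g,\, z - x\rangle \ \ \forall z \,\} && \text{(subdifferential)}
\end{aligned}

For example, for the absolute value f(x) = |x| on the real line (the one-dimensional L1 norm), the subdifferential at 0 is the whole interval [-1, 1], which is why L1 regularization can produce exact zeros.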