機械学習:Machine Learning

アルゴリズム:Algorithms

Protected: Distributed processing of on-line stochastic optimization

Distributed online stochastic optimization for digital transformation, artificial intelligence, and machine learning tasks (expected error, step size, epoch, strongly convex expected error, SGD, Lipschitz continuity, gamma-smoothness, alpha-strong convexity, Hogwild!, parallelization, label propagation method, propagation on graphs, sparse feature vectors, asynchronous distributed SGD, mini-batch methods, stochastic optimization methods, variance of gradients, unbiased estimators, SVRG, mini-batch parallelization of gradient methods, Nesterov's acceleration method, parallelized SGD)
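As a quick illustration of the mini-batch SGD idea this entry covers (a minimal sketch, not code from the protected article; the function name and data are illustrative):

```python
import random

def sgd_minibatch(X, y, lr=0.05, batch_size=2, epochs=500, seed=0):
    """Plain mini-batch SGD for least-squares regression (illustrative sketch).

    Each mini-batch gradient is an unbiased estimate of the full gradient,
    which is the property the parallelized/asynchronous variants exploit.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        idx = list(range(n))
        rng.shuffle(idx)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # gradient of (1/|B|) * sum over B of (w.x_i - y_i)^2
            grad = [0.0] * d
            for i in batch:
                err = sum(w[j] * X[i][j] for j in range(d)) - y[i]
                for j in range(d):
                    grad[j] += 2.0 * err * X[i][j] / len(batch)
            for j in range(d):
                w[j] -= lr * grad[j]
    return w

# Fit y = 2*x + 1 using a bias feature; w should approach [2, 1]
X = [[x, 1.0] for x in [0.0, 1.0, 2.0, 3.0]]
y = [1.0, 3.0, 5.0, 7.0]
w = sgd_minibatch(X, y)
```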
アルゴリズム:Algorithms

Protected: Conjugate gradient and nonlinear conjugate gradient methods as continuous optimization in machine learning

Conjugate gradient methods as continuous optimization in machine learning for digital transformation, artificial intelligence, and machine learning tasks (momentum method, nonlinear conjugate gradient method, search direction, inertia term, Polak-Ribière method, line search, Wolfe condition, Dai-Yuan method, strong Wolfe condition, Fletcher-Reeves method, global convergence, Newton's method, steepest descent method, Hessian matrix, convex quadratic function, conjugate gradient method, minimum eigenvalue, maximum eigenvalue, affine subspace, conjugate direction method, coordinate descent method)
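On a convex quadratic, the conjugate gradient method with exact line search takes the form below (a minimal linear-CG sketch, not the article's own code; the 2x2 system is illustrative):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=50):
    """Linear CG for A x = b with A symmetric positive definite (sketch).

    Equivalent to minimizing the convex quadratic (1/2) x'Ax - b'x;
    converges in at most n steps in exact arithmetic.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual b - A x (x starts at 0)
    p = r[:]                  # first search direction = steepest descent
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))  # exact line search
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        beta = rs_new / rs_old  # Fletcher-Reeves-style conjugacy update
        p = [r[i] + beta * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # solves the 2x2 system in at most 2 steps
```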
アルゴリズム:Algorithms

Protected: Theory of Noisy L1-Norm Minimization as Machine Learning Based on Sparsity (2)

Theory of noisy L1-norm minimization as machine learning based on sparsity for digital transformation, artificial intelligence, and machine learning tasks (numerical examples, heat maps, artificial data, restricted strong convexity, restricted isometry property, k-sparse vectors, norm independence, subdifferential, convex functions, regression coefficient vector, orthogonal complement)
アルゴリズム:Algorithms

Theory and algorithms of various reinforcement learning techniques and their implementation in Python

Theory and algorithms of various reinforcement learning techniques used for digital transformation, artificial intelligence, and machine learning tasks, and their implementation in Python (reinforcement learning, online learning, online prediction, deep learning, Python, algorithms, theory, implementation)
推論技術:Inference Technology

Protected: Explainable Artificial Intelligence (11) Model-Independent Interpretation (Permutation Feature Importance)

Permutation Feature Importance is a post-hoc, model-agnostic interpretation method that can be used to explain models in digital transformation (DX), artificial intelligence (AI), and machine learning (ML) settings.
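The idea can be sketched in a few lines (a minimal illustration, not the article's code; the toy model, `predict`, and `r2` metric are made up for the example): shuffle one feature column at a time and measure how much the model's score drops.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic permutation feature importance (illustrative sketch).

    Importance of feature j = drop in the metric when column j is shuffled,
    breaking its link to the target while preserving its marginal distribution.
    """
    rng = random.Random(seed)
    base = metric(y, predict(X))
    d = len(X[0])
    importances = []
    for j in range(d):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - metric(y, predict(Xp)))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model: predictions depend only on feature 0, so feature 1 scores ~0
def predict(X):
    return [3.0 * row[0] for row in X]

def r2(y, yhat):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

X = [[float(i), float(i % 3)] for i in range(30)]
y = [3.0 * row[0] for row in X]
imp = permutation_importance(predict, X, y, r2)
```

Because the toy model ignores feature 1 entirely, shuffling that column changes nothing and its importance is exactly zero, while shuffling feature 0 destroys the fit.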
Python

Protected: Applying Neural Networks to Reinforcement Learning Deep Q-Network Applying Deep Learning to Value Assessment

Application of neural networks to reinforcement learning for digital transformation, artificial intelligence, and machine learning tasks: Deep Q-Network, applying deep learning to value assessment (Prioritized Replay, Multi-step Learning, Distributional RL, Noisy Nets, Double DQN, Dueling Network, Rainbow, GPU, Epsilon-Greedy method, Optimizer, Reward Clipping, Fixed Target Q-Network, Experience Replay, Average Experience Replay, Mean Squared Error, TD Error, PyGame Learning Environment, PLE, OpenAI Gym, CNN)
Clojure

Protected: Network analysis in GraphX Pregel using Clojure

Network analysis in GraphX Pregel using Clojure for digital transformation, artificial intelligence, and machine learning tasks (label propagation, Twitter data, community analysis, graph structure analysis, community size, community detection, algorithms, maximum connected components, triangle counting, Glittering, Google, Königsberg bridge problem, Euler path)
アルゴリズム:Algorithms

Protected: An example of machine learning by Bayesian inference: inference by Gibbs sampling of a Poisson mixture model

Examples of machine learning with Bayesian inference utilized for digital transformation, artificial intelligence, and machine learning tasks: inference by Gibbs sampling of a Poisson mixture model (algorithm, sampling of unobserved variables, Dirichlet distribution, gamma distribution, conditional distribution, categorical distribution, posterior distribution, joint distribution, hyperparameters, knowledge models, data generating processes, latent variables)
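The Gibbs sampler for a Poisson mixture alternates between the conditional distributions listed above; a minimal sketch follows (not the article's code; the Gamma/Dirichlet hyperparameters `a0`, `b0`, `alpha0` and the toy data are assumptions for the example):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method for drawing a Poisson(lam) variate (fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def gibbs_poisson_mixture(x, K=2, iters=200, a0=1.0, b0=1.0, alpha0=1.0, seed=0):
    """Plain (non-collapsed) Gibbs sampler for a K-component Poisson mixture.

    Assumed priors for this sketch: lambda_k ~ Gamma(a0, rate=b0),
    pi ~ Dirichlet(alpha0, ..., alpha0). Alternates sampling the latent
    assignments z, the rates lambda, and the mixing weights pi.
    """
    rng = random.Random(seed)
    lam = [rng.gammavariate(a0, 1.0 / b0) for _ in range(K)]
    pi = [1.0 / K] * K
    z = []
    for _ in range(iters):
        # 1. sample each z_i from its conditional (a categorical distribution)
        z = []
        for xi in x:
            logp = [math.log(pi[k]) + xi * math.log(lam[k]) - lam[k] for k in range(K)]
            m = max(logp)
            w = [math.exp(lp - m) for lp in logp]
            u, acc = rng.random() * sum(w), 0.0
            for k, wk in enumerate(w):
                acc += wk
                if u <= acc:
                    break
            z.append(k)
        # 2. sample lambda_k from its Gamma posterior (conjugacy)
        for k in range(K):
            nk = sum(1 for zi in z if zi == k)
            sk = sum(xi for xi, zi in zip(x, z) if zi == k)
            lam[k] = rng.gammavariate(a0 + sk, 1.0 / (b0 + nk))
        # 3. sample pi from its Dirichlet posterior (via normalized Gamma draws)
        g = [rng.gammavariate(alpha0 + sum(1 for zi in z if zi == k), 1.0)
             for k in range(K)]
        pi = [gk / sum(g) for gk in g]
    return sorted(lam), z

# Toy data: two well-separated Poisson components with rates 2 and 15
rng = random.Random(1)
data = ([sample_poisson(2.0, rng) for _ in range(40)]
        + [sample_poisson(15.0, rng) for _ in range(40)])
lam_est, assignments = gibbs_poisson_mixture(data)
```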
アルゴリズム:Algorithms

Protected: Hedge Algorithm and Exp3 Policy in the Adversarial Bandit Problem

Hedge algorithm and Exp3 policy in the adversarial bandit problem utilized in digital transformation, artificial intelligence, and machine learning tasks (pseudo-regret upper bound, expected cumulative reward, optimal parameters, expected regret, multi-armed bandit problem, Hedge algorithm, experts, reward version of the Hedge algorithm, boosting, Freund, Schapire, pseudo-code, online learning, PAC learning, query learning)
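The Exp3 policy can be sketched as follows (a minimal illustration, not the article's code; the reward sequence and exploration rate `gamma` are made up for the example). The key step is the importance-weighted estimate reward/probability, which keeps the reward estimator unbiased even though only the chosen arm is observed:

```python
import math
import random

def exp3(rewards, gamma=0.1, seed=0):
    """Exp3 for the adversarial multi-armed bandit (illustrative sketch).

    rewards: per-round reward vectors in [0, 1]; the learner sees only
    the chosen arm's reward. Returns the total reward collected.
    """
    rng = random.Random(seed)
    K = len(rewards[0])
    weights = [1.0] * K
    total = 0.0
    for r in rewards:
        ws = sum(weights)
        # mix exponential weights with uniform exploration
        probs = [(1 - gamma) * w / ws + gamma / K for w in weights]
        u, acc, a = rng.random(), 0.0, 0
        for a, p in enumerate(probs):
            acc += p
            if u <= acc:
                break
        total += r[a]
        est = r[a] / probs[a]                    # unbiased reward estimate
        weights[a] *= math.exp(gamma * est / K)  # exponential weight update
    return total

# Arm 1 is best in every round of this toy (oblivious) adversarial sequence,
# so Exp3 should clearly beat uniform play (~0.37 reward per round)
T = 2000
seq = [[0.2, 0.8, 0.1] for _ in range(T)]
reward = exp3(seq)
```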
アルゴリズム:Algorithms

Protected: Representer Theorems and Rademacher Complexity as the Basis for Kernel Methods in Statistical Mathematics Theory

Representer theorems and Rademacher complexity as a basis for kernel methods in statistical mathematics theory used in digital transformation, artificial intelligence, and machine learning tasks (Gram matrices, hypothesis sets, discrimination error bounds, overfitting, margin loss, discriminant functions, positive semidefiniteness, universal kernels, reproducing kernel Hilbert space, predictive discrimination error, L1 norm, Gaussian kernel, exponential kernel, binomial kernel, compact sets, empirical Rademacher complexity, Rademacher complexity, representer theorem)