線形代数:Linear Algebra

アルゴリズム:Algorithms

Protected: Quasi-Newton Methods as Sequential Optimization in Machine Learning (2) Quasi-Newton Methods with Memory Restriction

Quasi-Newton method with memory restriction (sparse clique factorization, chordal graph, sparsity, secant condition, sparse Hessian matrix, DFP formula, BFGS formula, KL divergence, quasi-Newton method, maximal clique, positive definite matrix, positive definite matrix completion, graph triangulation, complete subgraph, clique, Hessian matrix, tridiagonal matrix, Hestenes-Stiefel method, L-BFGS method)
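As a minimal illustration of the limited-memory idea behind the L-BFGS method listed above (my own sketch, not the article's code; function names and the Armijo constants are assumptions), the two-loop recursion builds the search direction from only the last m curvature pairs (s, y) instead of storing a full Hessian approximation:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: approximate -H^{-1} @ grad from the last m (s, y) pairs."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # scale by the standard initial-Hessian heuristic
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q

def lbfgs(f, grad_f, x0, m=5, iters=100, tol=1e-8):
    """L-BFGS with Armijo backtracking; keeps at most m curvature pairs."""
    x = x0.astype(float)
    g = grad_f(x)
    s_list, y_list = [], []
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, s_list, y_list)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:  # curvature (secant) condition keeps the update stable
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x
```

The memory restriction is exactly the `pop(0)` step: only the newest m pairs survive, so storage is O(m·n) rather than O(n²).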
アルゴリズム:Algorithms

Protected: An example of machine learning by Bayesian inference: inference by collapsed Gibbs sampling of a Poisson mixture model

Inference by collapsed Gibbs sampling of Poisson mixture models as an example of machine learning by Bayesian inference utilized in digital transformation, artificial intelligence, and machine learning tasks (variational inference, Gibbs sampling, evaluation on artificial data, algorithms, prior distribution, gamma distribution, Bayes' theorem, Dirichlet distribution, categorical distribution, graphical models)
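A compact sketch of what "collapsed" means here (my own illustration under standard conjugacy assumptions, not the article's code): with a Gamma(a, b) prior on each Poisson rate, the rates can be integrated out, so each point's cluster assignment is resampled from a negative-binomial predictive weighted by the current cluster counts:

```python
import math
import numpy as np

def log_nb_pred(x, S, n, a, b):
    """Log predictive p(x | cluster with count n, sum S) after collapsing the
    Gamma(a, b) rate: negative binomial with r = a + S, p = (b + n) / (b + n + 1)."""
    r = a + S
    return (math.lgamma(r + x) - math.lgamma(r) - math.lgamma(x + 1)
            + r * (math.log(b + n) - math.log(b + n + 1))
            - x * math.log(b + n + 1))

def collapsed_gibbs(x, K=2, a=1.0, b=1.0, alpha=1.0, sweeps=50, seed=0):
    """Collapsed Gibbs sampling for a K-component Poisson mixture."""
    rng = np.random.default_rng(seed)
    N = len(x)
    z = rng.integers(K, size=N)
    counts = np.bincount(z, minlength=K)
    sums = np.array([x[z == k].sum() for k in range(K)], dtype=float)
    for _ in range(sweeps):
        for i in range(N):
            counts[z[i]] -= 1; sums[z[i]] -= x[i]   # remove point i
            logw = np.array([math.log(counts[k] + alpha)
                             + log_nb_pred(x[i], sums[k], counts[k], a, b)
                             for k in range(K)])
            w = np.exp(logw - logw.max()); w /= w.sum()
            z[i] = rng.choice(K, p=w)               # resample assignment
            counts[z[i]] += 1; sums[z[i]] += x[i]
    return z
```

Because the rates are marginalized, only assignments are sampled, which typically mixes faster than uncollapsed Gibbs.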
アルゴリズム:Algorithms

Protected: Applying Neural Networks to Reinforcement Learning Applying Deep Learning to Strategy: Advantage Actor Critic (A2C)

Application of Neural Networks to Reinforcement Learning for Digital Transformation, Artificial Intelligence, and Machine Learning tasks Implementation of Advantage Actor Critic (A2C), applying deep learning to strategies (policy gradient method, Q-learning, Gumbel-Max trick, A3C (Asynchronous Advantage Actor Critic))
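The core quantity A2C adds over plain policy gradient is the advantage A_t = R_t - V(s_t). A minimal NumPy sketch of the loss computation (my own illustration of the standard formulation, not the article's implementation; the entropy weight `beta` is an assumed hyperparameter):

```python
import numpy as np

def a2c_losses(rewards, values, log_probs, entropies, gamma=0.99, beta=0.01):
    """Actor and critic losses for one rollout.
    Advantage A_t = R_t - V(s_t), with R_t the discounted return-to-go."""
    T = len(rewards)
    returns = np.zeros(T)
    R = 0.0
    for t in reversed(range(T)):           # accumulate discounted returns backwards
        R = rewards[t] + gamma * R
        returns[t] = R
    adv = returns - values                 # advantage estimate
    actor_loss = -(log_probs * adv).mean() - beta * entropies.mean()
    critic_loss = (adv ** 2).mean()        # value regression target is R_t
    return actor_loss, critic_loss
```

In a real A2C setup these losses would be backpropagated through the shared actor-critic network; the entropy bonus discourages premature policy collapse.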
Clojure

Protected: Implementation of recommendation algorithm using Clojure/Mahout

Implementation of recommendation algorithms using Clojure/Mahout for digital transformation, artificial intelligence, and machine learning tasks (information retrieval statistics, precision, recall, DCG (Discounted Cumulative Gain), IDCG (Ideal Discounted Cumulative Gain), fall-out, F-measure, harmonic mean, RMSE, k-nearest neighbor method, Pearson correlation, Spearman's rank correlation coefficient, Pearson correlation similarity, similarity measures, Jaccard distance, Euclidean distance, cosine distance, pairwise differences, item-based, user-based)
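The evaluation metrics named above (DCG, IDCG, precision/recall, F-measure) are easy to pin down in a few lines. A Python sketch of the standard definitions (illustration only; the article's own code is in Clojure/Mahout):

```python
import math

def dcg(rels):
    """Discounted cumulative gain of relevance grades in ranked order."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(rels):
    """DCG normalized by IDCG, the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0

def precision_recall_f(retrieved, relevant):
    """Precision, recall, and their harmonic mean (F-measure)."""
    tp = len(set(retrieved) & set(relevant))
    p = tp / len(retrieved) if retrieved else 0.0
    r = tp / len(relevant) if relevant else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```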
アルゴリズム:Algorithms

Protected: Optimal arm identification and A/B testing in the bandit problem (1)

Optimal arm identification and A/B testing in bandit problems for digital transformation, artificial intelligence, and machine learning tasks (Hoeffding's inequality, optimal arm identification, sample complexity, regret minimization, cumulative regret minimization, cumulative reward maximization, ε-optimal arm identification, simple regret minimization, ε-best arm identification, KL-UCB strategy, KL divergence, A/B testing of the normal distribution, fixed confidence)
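Hoeffding's inequality directly gives a sample-complexity bound used in fixed-confidence arm identification: for rewards in [0, 1], pulling each arm n ≥ ln(2/δ) / (2ε²) times makes each empirical mean ε-accurate with probability at least 1 − δ. A one-function sketch (my own illustration of this standard bound):

```python
import math

def hoeffding_n(eps, delta):
    """Pulls per arm so that |empirical mean - true mean| <= eps
    holds with probability >= 1 - delta, for rewards bounded in [0, 1]."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))
```

For an A/B test this is applied per variant; halving ε quadruples the required sample size, while tightening δ costs only logarithmically.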
アルゴリズム:Algorithms

Protected: Overview of nu-Support Vector Machines by Statistical Mathematics Theory

Overview of nu-support vector machines by statistical mathematics theory utilized in digital transformation, artificial intelligence, and machine learning tasks (kernel functions, boundedness, empirical margin discriminant error, models without bias terms, reproducing kernel Hilbert spaces, prediction discriminant error, uniform bounds, statistical consistency, C-support vector machines, correspondence, statistical model degrees of freedom, dual problem, gradient descent, minimum distance problem, discriminant bounds, geometric interpretation, binary discrimination, empirical discriminant error, regularization parameter, minimax theorem, Gram matrix, Lagrangian function)
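The dual problem of the nu-SVM is posed over a Gram matrix of a positive definite kernel, which is what makes the reproducing-kernel machinery apply. A small sketch (illustration only, not the article's derivation) of building an RBF Gram matrix and checking the positive semidefiniteness that the theory relies on:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) of the RBF kernel.
    Any such kernel Gram matrix is symmetric positive semidefinite."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)
```

Positive semidefiniteness of K is what guarantees the dual objective is concave, so the minimax/Lagrangian arguments in the article go through.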
アルゴリズム:Algorithms

Protected: Stochastic coordinate descent as a distributed process for batch stochastic optimization

Stochastic coordinate descent as a distributed process for batch stochastic optimization utilized in digital transformation, artificial intelligence, and machine learning tasks (COCOA, convergence rate, SDCA, γf-smooth, approximate solution of subproblems, stochastic coordinate descent, parallel stochastic coordinate descent, parallel computing process, Communication-Efficient Coordinate Ascent, dual coordinate descent)
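The building block that CoCoA and SDCA distribute is plain dual coordinate ascent: optimize one dual variable exactly per step while maintaining the primal vector w. A single-machine sketch for the linear SVM dual (my own illustration under the standard no-bias formulation; not the distributed CoCoA variant the article covers):

```python
import numpy as np

def dual_cd_svm(X, y, C=1.0, epochs=50, seed=0):
    """Dual coordinate ascent for the linear SVM (hinge loss):
    each step solves exactly for one alpha_i, clipped to [0, C]."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                      # primal vector w = sum_i alpha_i y_i x_i
    qii = (X ** 2).sum(axis=1)           # diagonal of the dual Hessian
    for _ in range(epochs):
        for i in rng.permutation(n):     # random coordinate order per epoch
            g = y[i] * (w @ X[i]) - 1.0  # partial derivative of the dual
            a_new = np.clip(alpha[i] - g / qii[i], 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w, alpha
```

In the distributed setting, each worker runs steps like these on its local coordinates and only the aggregated w-updates are communicated, which is the communication-efficiency point of CoCoA.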
アルゴリズム:Algorithms

Protected: Quasi-Newton Method as Sequential Optimization in Machine Learning (1) Algorithm Overview

Quasi-Newton methods as sequential optimization in machine learning for digital transformation, artificial intelligence, and machine learning tasks (BFGS formula, Lagrange multipliers, optimality conditions, convex optimization problems, KL divergence minimization, equality-constrained optimization problems, DFP formula, positive definite matrices, geometric structures, secant conditions, update rules for quasi-Newton methods, Hessian matrices, optimization algorithms, search directions, Newton's method)
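The BFGS update named above has a short closed form, and its two defining properties are checkable directly: the new approximation satisfies the secant condition B_new s = y, and it stays positive definite whenever the curvature condition s·y > 0 holds. A sketch (illustration only, not the article's derivation):

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of the Hessian approximation B, where
    s = x_{k+1} - x_k and y = grad_{k+1} - grad_k.
    Satisfies the secant condition B_new @ s = y, and preserves
    positive definiteness when s @ y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```

The DFP formula is the same rank-two construction applied to the inverse approximation; the article's KL-divergence view explains why these two are the natural dual pair.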
アルゴリズム:Algorithms

Protected: Example of Machine Learning with Bayesian Inference: Variational Inference for Poisson Mixture Models

Examples of machine learning with Bayesian inference utilized for digital transformation, artificial intelligence, and machine learning tasks: variational inference for Poisson mixture models (Gibbs sampling, variational inference, algorithm, ELBO, computation, variational inference algorithm, latent variable parameters, posterior distribution, Dirichlet distribution, gamma distribution)
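Two of the update steps inside such a variational algorithm are simple enough to write out: the gamma-Poisson conjugate update for each rate, and the responsibility update for the latent assignments given the current variational expectations. A sketch under standard mean-field assumptions (my own illustration; the expectation arguments are hypothetical inputs, not the article's variable names):

```python
import math

def gamma_poisson_posterior(a, b, x, r=None):
    """Gamma(a, b) prior on a Poisson rate -> Gamma(a + sum r_i x_i, b + sum r_i).
    r are per-point responsibilities; r = None means hard (weight-1) assignment."""
    if r is None:
        r = [1.0] * len(x)
    return a + sum(ri * xi for ri, xi in zip(r, x)), b + sum(r)

def poisson_responsibilities(x_n, e_lam, e_loglam, e_logpi):
    """Mean-field update q(z_n = k) ∝ exp(E[log pi_k] + x_n E[log lambda_k] - E[lambda_k]),
    computed stably via the log-sum-exp trick."""
    logits = [lp + x_n * ll - l for l, ll, lp in zip(e_lam, e_loglam, e_logpi)]
    m = max(logits)
    w = [math.exp(v - m) for v in logits]
    s = sum(w)
    return [v / s for v in w]
```

Iterating these two updates (plus the Dirichlet update for the mixing weights) monotonically increases the ELBO.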
アルゴリズム:Algorithms

Protected: Application of Neural Networks to Reinforcement Learning: Policy Gradient, which implements a policy as a parameterized function

Application of Neural Networks to Reinforcement Learning for Digital Transformation, Artificial Intelligence, and Machine Learning tasks Policy Gradient to implement strategies as parameterized functions (discounted present value, strategy update, TensorFlow, Keras, CartPole, ACER (Actor Critic with Experience Replay), Off-Policy Actor Critic, behavior policy, Deterministic Policy Gradient (DPG), DDPG, Experience Replay, Bellman equation, policy gradient method, action history)
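The idea of "a policy as a parameterized function" can be shown in its smallest form without neural networks: a softmax policy over bandit arms, trained by the REINFORCE score-function gradient with a running baseline. A NumPy sketch (my own toy illustration under Gaussian-reward assumptions, not the article's TensorFlow/Keras CartPole implementation):

```python
import numpy as np

def reinforce_bandit(means, steps=3000, lr=0.1, seed=0):
    """REINFORCE on a Gaussian bandit: theta parameterizes a softmax policy,
    updated by (reward - baseline) * grad log pi(action)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(means))
    baseline = 0.0
    for _ in range(steps):
        p = np.exp(theta - theta.max()); p /= p.sum()  # softmax policy
        a = rng.choice(len(means), p=p)
        r = rng.normal(means[a], 0.1)
        grad = -p; grad[a] += 1.0        # grad of log softmax at the chosen arm
        theta += lr * (r - baseline) * grad
        baseline += 0.05 * (r - baseline)  # running-average baseline
    return theta

theta = reinforce_bandit([0.1, 0.9, 0.3])
```

Replacing `theta` with network weights and the softmax with a network output layer gives exactly the Policy Gradient setup the article implements on CartPole.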