確率・統計:Probability and Statistics

アルゴリズム:Algorithms

Protected: TRPO/PPO and DPG/DDPG, improvements of the Policy Gradient method in reinforcement learning

TRPO/PPO and DPG/DDPG, improvements of Policy Gradient methods in reinforcement learning, used for digital transformation, artificial intelligence, and machine learning tasks (Pendulum, Actor Critic, SequentialMemory, Adam, keras-rl, TD error, Deep Deterministic Policy Gradient, Deterministic Policy Gradient, Advantage Actor Critic, A2C, A3C, Proximal Policy Optimization, Trust Region Policy Optimization, Python)
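As a rough sketch of one ingredient named above, the clipped surrogate objective of Proximal Policy Optimization can be computed as follows; the function name and NumPy formulation are my own illustration, not the post's code:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from PPO, negated so that lower is better.

    ratio: pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: advantage estimates (e.g. derived from TD errors)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximizes the elementwise minimum of the two surrogates;
    # we return the negative mean as a loss to minimize
    return -np.mean(np.minimum(unclipped, clipped))
```

The clipping keeps the new policy from moving too far from the old one per update, which is the "trust region" idea TRPO enforces with a KL constraint.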
Clojure

Protected: A recommendation system based on a measure of similarity between text documents, using k-means in Clojure

Recommendation systems using measures of similarity between text documents with k-means in Clojure, leveraged for digital transformation, artificial intelligence, and machine learning tasks (Slope One recommendations, top-rating calculations, weighted ratings, average difference between paired items, Weighted Slope One, user-based recommendations, collaborative filtering, item-based recommendations, movie recommendation data)
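A minimal illustration of one document-similarity measure of the kind such a recommender clusters with k-means; shown in Python rather than Clojure for brevity, with whitespace tokenization and the function name assumed for the sketch:

```python
from collections import Counter
import math

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two documents treated as
    bag-of-words term-frequency vectors."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta)          # shared-term products
    norm_a = math.sqrt(sum(v * v for v in ta.values()))
    norm_b = math.sqrt(sum(v * v for v in tb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

k-means would group such vectors into clusters, and recommendations are then drawn from documents in the same cluster.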
アルゴリズム:Algorithms

Protected: Optimization methods for L1-norm regularization for sparse learning models

Optimization methods for L1-norm regularization for sparse learning models for use in digital transformation, artificial intelligence, and machine learning tasks (proximal gradient method, forward-backward splitting, iterative shrinkage-thresholding (IST), accelerated proximal gradient method, algorithm, prox operator, regularization term, differentiable, squared error function, logistic loss function, iterative weighted shrinkage method, convex conjugate, Hessian matrix, maximum eigenvalue, twice differentiable, soft-threshold function, L1 norm, L2 norm, ridge regularization term, η-trick)
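The soft-threshold function, i.e. the prox operator of the L1 norm, and one IST step for the lasso can be sketched as follows (a minimal NumPy illustration, not the post's derivation; `eta` is the step size, bounded by the inverse of the maximum eigenvalue of the Hessian):

```python
import numpy as np

def soft_threshold(v, lam):
    """Prox operator of lam * ||.||_1: shrink each entry toward 0 by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista_step(w, X, y, lam, eta):
    """One iterative shrinkage-thresholding (IST) step for the lasso:
    a gradient step on the squared error, followed by the L1 prox."""
    grad = X.T @ (X @ w - y)
    return soft_threshold(w - eta * grad, eta * lam)
```

Iterating `ista_step` drives small coefficients exactly to zero, which is what makes the learned model sparse.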
アルゴリズム:Algorithms

Protected: Optimal arm identification and A/B testing in the bandit problem_2

Optimal arm identification and A/B testing in bandit problems utilized in digital transformation, artificial intelligence, and machine learning tasks (sequential deletion policy, false positive rate, fixed confidence, fixed budget, LUCB policy, UCB policy, optimal arm, score-based method, LCB, algorithm, cumulative reward maximization, optimal arm identification policy, ε-optimal arm identification)
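A hedged sketch of the LUCB-style stopping test mentioned above, assuming rewards in [0, 1] and the standard sqrt(2 log t / n) confidence width; the function names and constant are illustrative:

```python
import math

def ucb(mean, n_pulls, t):
    """Upper confidence bound index for an arm at round t."""
    return mean + math.sqrt(2.0 * math.log(t) / n_pulls)

def lcb(mean, n_pulls, t):
    """Lower confidence bound index for an arm at round t."""
    return mean - math.sqrt(2.0 * math.log(t) / n_pulls)

def lucb_stop(means, pulls, t):
    """Fixed-confidence stopping check: the empirically best arm's LCB
    exceeds every other arm's UCB, so it is declared optimal."""
    best = max(range(len(means)), key=lambda i: means[i])
    return all(lcb(means[best], pulls[best], t) >= ucb(means[i], pulls[i], t)
               for i in range(len(means)) if i != best)
```

When the intervals still overlap, the LUCB policy keeps sampling the best arm and its strongest challenger rather than stopping.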
アルゴリズム:Algorithms

Protected: Statistical and Mathematical Theory for Boosting

Statistical and mathematical theory for boosting, used for digital transformation, artificial intelligence, and machine learning tasks (generalized linear model, modified Newton method, log-likelihood, weighted least squares method, boosting, coordinate descent method, iteratively reweighted least squares (IRLS) method, weighted empirical discriminant error, parameter update rule, Hessian matrix, Newton method, link function, logistic loss, boosting algorithm, LogitBoost, exponential loss, convex margin loss, AdaBoost, weak hypothesis, empirical margin loss, nonlinear optimization)
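As one concrete instance of the convex margin-loss machinery listed above, a single AdaBoost round under the exponential loss can be sketched as follows (function name and interface are my own illustration):

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost update. Given current example weights and a boolean list
    marking whether the weak hypothesis classified each example correctly,
    return (alpha, new_weights)."""
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    alpha = 0.5 * math.log((1.0 - err) / err)   # weight of the weak hypothesis
    # misclassified examples are up-weighted by exp(alpha),
    # correctly classified ones down-weighted by exp(-alpha)
    new_w = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    z = sum(new_w)
    return alpha, [w / z for w in new_w]
```

After normalization the misclassified examples carry exactly half the total weight, which forces the next weak hypothesis to focus on them.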
アルゴリズム:Algorithms

Protected: Quasi-Newton Methods as Sequential Optimization in Machine Learning (2): Quasi-Newton Methods with Memory Restriction

Quasi-Newton methods with memory restriction (sparse clique factorization, chordal graph, sparsity, secant condition, sparse Hessian matrix, DFP formula, BFGS formula, KL divergence, quasi-Newton method, maximal clique, positive definite matrix, positive definite matrix completion, graph triangulation, complete subgraph, clique, Hessian matrix, tridiagonal matrix, Hestenes-Stiefel method, L-BFGS method)
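The L-BFGS method mentioned above builds its search direction from a short history of curvature pairs via the two-loop recursion; a minimal sketch, with the function name and memory layout assumed for illustration:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: approximate -H^{-1} grad from the stored
    curvature pairs s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k (oldest first)."""
    q = grad.astype(float).copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):  # newest first
        a = rho * np.dot(s, q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # initial Hessian scaling gamma = s'y / y'y from the newest pair
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + (a - beta) * s
    return -r
```

Only the last m pairs are kept (the memory restriction), so storage is O(mn) instead of the O(n²) a dense BFGS Hessian approximation would need.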
アルゴリズム:Algorithms

Protected: An example of machine learning by Bayesian inference: inference by collapsed Gibbs sampling of a Poisson mixture model

Inference by collapsed Gibbs sampling of Poisson mixture models as an example of machine learning by Bayesian inference, utilized in digital transformation, artificial intelligence, and machine learning tasks (variational inference, Gibbs sampling, evaluation on artificial data, algorithms, prior distribution, gamma distribution, Bayes' theorem, Dirichlet distribution, categorical distribution, graphical models)
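For intuition, the categorical distribution a (non-collapsed) Gibbs sweep draws each latent cluster assignment from looks like the following; the collapsed sampler integrates the gamma and Dirichlet parameters out, replacing the Poisson and weight terms with posterior-predictive ones. Function names are illustrative:

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) for a Poisson distribution with rate lam."""
    return lam ** x * math.exp(-lam) / math.factorial(x)

def assignment_probs(x, lams, weights):
    """Posterior probability of each cluster for one count observation x,
    given current mixture weights and Poisson rates (Bayes' theorem applied
    per observation)."""
    unnorm = [w * poisson_pmf(x, lam) for w, lam in zip(weights, lams)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Drawing the assignment from this categorical distribution, then resampling rates from their gamma posteriors, is one full Gibbs sweep.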
アルゴリズム:Algorithms

Protected: Applying Neural Networks to Reinforcement Learning, Applying Deep Learning to Strategy: Advantage Actor Critic (A2C)

Application of neural networks to reinforcement learning for digital transformation, artificial intelligence, and machine learning tasks: implementation of Advantage Actor Critic (A2C), which applies deep learning to the strategy (Policy Gradient method, Q-learning, Gumbel-Max trick, A3C (Asynchronous Advantage Actor Critic))
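The Gumbel-Max trick listed above is a standard way to sample a discrete action from policy logits; a minimal NumPy sketch (function name is mine):

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    """Draw one categorical sample via the Gumbel-Max trick:
    argmax(logits + Gumbel noise) is distributed as softmax(logits)."""
    gumbel = -np.log(-np.log(rng.uniform(size=len(logits))))  # Gumbel(0, 1) noise
    return int(np.argmax(np.asarray(logits) + gumbel))
```

Because sampling reduces to an argmax over perturbed logits, the same machinery extends to differentiable relaxations (Gumbel-Softmax) used when training policies end to end.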
Clojure

Protected: Implementation of recommendation algorithm using Clojure/Mahout

Implementation of recommendation algorithms using Clojure/Mahout for digital transformation, artificial intelligence, and machine learning tasks (information retrieval statistics, precision, recall, DCG, IDCG, Ideal Discounted Cumulative Gain, Discounted Cumulative Gain, fall-out, F-measure, harmonic mean, RMSE, k-nearest neighbor method, Pearson correlation, Spearman's rank correlation coefficient, Pearson correlation similarity, similarity measures, Jaccard distance, Euclidean distance, cosine distance, pairwise differences, item-based, user-based)
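Among the retrieval statistics listed, DCG, IDCG, and their ratio (NDCG) can be illustrated compactly; shown in Python rather than Clojure for brevity, with function names and the log2(rank + 2) discount as the standard convention:

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: each item's relevance discounted
    by log2 of its 1-based rank plus one."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: DCG divided by the Ideal DCG (IDCG),
    the DCG of the best possible ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

An NDCG of 1.0 means the recommender ranked items in exactly the ideal relevance order.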
アルゴリズム:Algorithms

Protected: Optimal arm identification and A/B testing in the bandit problem_1

Optimal arm identification and A/B testing in bandit problems for digital transformation, artificial intelligence, and machine learning tasks (Hoeffding's inequality, optimal arm identification, sample complexity, regret minimization, cumulative regret minimization, cumulative reward maximization, ε-optimal arm identification, simple regret minimization, ε-best arm identification, KL-UCB policy, KL divergence, A/B testing of the normal distribution, fixed confidence)
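Hoeffding's inequality underlies the fixed-confidence guarantees above; a sketch of the resulting two-sided confidence radius for [0, 1]-bounded rewards (the function name is illustrative):

```python
import math

def hoeffding_radius(n, delta):
    """Radius r such that, for n i.i.d. rewards in [0, 1], Hoeffding's
    inequality gives P(|empirical mean - true mean| >= r) <= delta."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))
```

Inverting this radius for a target gap gives the sample complexity of an A/B test: halving the radius requires four times as many samples.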