Optimization

Algorithms

Protected: Optimization methods for L1-norm regularization in sparse learning models

Optimization methods for L1-norm regularization in sparse learning models for use in digital transformation, artificial intelligence, and machine learning tasks (proximal gradient method, forward-backward splitting, iterative shrinkage-thresholding (IST), accelerated proximal gradient method, algorithm, proximal operator, regularization term, differentiable, squared error function, logistic loss function, iteratively weighted shrinkage method, convex conjugate, Hessian matrix, maximum eigenvalue, twice differentiable, soft-thresholding function, L1 norm, L2 norm, ridge regularization term, η-trick)
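
As a pointer to the listed techniques, here is a minimal sketch, not the article's protected code, of the iterative shrinkage-thresholding (IST) method for an L1-regularized squared error objective; the soft-thresholding function serves as the proximal operator, and the step size 1/L (with L taken as the maximum eigenvalue of A^T A) and all variable names are illustrative assumptions.

import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant (max eigenvalue of A^T A)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the differentiable squared-error part
        x = soft_threshold(x - grad / L, lam / L)   # prox step on the L1 regularization term
    return x

# toy usage with a random sparse regression problem
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=50)
print(ista(A, y, lam=0.1)[:10])
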
Algorithms

Protected: Optimal arm identification and A/B testing in the bandit problem_2

Optimal arm identification and A/B testing in bandit problems utilized in digital transformation, artificial intelligence, and machine learning tasks (successive elimination policy, false positive rate, fixed confidence, fixed budget, LUCB policy, UCB policy, optimal arm, score-based method, LCB, algorithm, cumulative reward maximization, optimal arm identification policy, ε-optimal arm identification)
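
The following is a minimal sketch, under assumed Hoeffding-style confidence bounds, of a fixed-confidence LUCB-style best-arm identification policy; the stopping rule, the confidence radius, and the function names are illustrative choices, not the article's protected implementation.

import numpy as np

def lucb_best_arm(pull, n_arms, delta=0.05, max_pulls=100_000):
    """LUCB-style fixed-confidence best-arm identification with Hoeffding bounds (illustrative)."""
    counts = np.zeros(n_arms, dtype=int)
    sums = np.zeros(n_arms)
    for a in range(n_arms):                          # initialize: pull every arm once
        sums[a] += pull(a); counts[a] += 1
    t = n_arms
    while t < max_pulls:
        means = sums / counts
        rad = np.sqrt(np.log(4 * n_arms * t ** 2 / delta) / (2 * counts))  # confidence radius
        best = int(np.argmax(means))                 # empirically best arm
        ucb = means + rad
        ucb[best] = -np.inf
        challenger = int(np.argmax(ucb))             # highest UCB among the remaining arms
        # stop when the best arm's LCB exceeds the challenger's UCB
        if means[best] - rad[best] >= means[challenger] + rad[challenger]:
            return best
        for a in (best, challenger):                 # otherwise pull both candidate arms
            sums[a] += pull(a); counts[a] += 1; t += 1
    return int(np.argmax(sums / counts))

# toy usage: Bernoulli arms with means 0.5, 0.6, 0.7
rng = np.random.default_rng(1)
mu = [0.5, 0.6, 0.7]
print(lucb_best_arm(lambda a: float(rng.random() < mu[a]), n_arms=3))
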
Algorithms

Protected: Statistical Mathematical Theory for Boosting

Statistical and mathematical theory of boosting used for digital transformation, artificial intelligence, and machine learning tasks (generalized linear model, modified Newton method, log likelihood, weighted least squares method, boosting, coordinate descent method, iteratively reweighted least squares method, IRLS method, weighted empirical discriminant error, parameter update rule, Hessian matrix, Newton method, link function, logistic loss, boosting algorithm, LogitBoost, exponential loss, convex margin loss, AdaBoost, weak hypothesis, empirical margin loss, nonlinear optimization)
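
As a small illustration of one of the listed building blocks, here is a sketch of the iteratively reweighted least squares (IRLS) method for a logistic (generalized linear) model, where each Newton step is written as a weighted least squares solve; the tolerance and variable names are assumptions of this sketch.

import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-8):
    """Fit logistic regression by IRLS: each Newton step is a weighted least squares solve."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))              # mean via the logit link
        w = p * (1.0 - p)                           # Hessian weights
        z = eta + (y - p) / np.maximum(w, 1e-12)    # working response
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (w * z))  # weighted least squares step
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# toy usage on synthetic data
rng = np.random.default_rng(2)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ np.array([0.5, 1.0, -2.0]))))).astype(float)
print(irls_logistic(X, y))
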
Algorithms

Protected: Quasi-Newton Methods as Sequential Optimization in Machine Learning (2): Quasi-Newton Methods with Memory Restriction

Quasi-Newton method with memory restriction (sparse clique factorization, chordal graph, sparsity, secant condition, sparse Hessian matrix, DFP formula, BFGS formula, KL divergence, quasi-Newton method, maximal clique, positive definite matrix, positive definite matrix completion, graph triangulation, complete subgraph, clique, Hessian matrix, tridiagonal matrix, Hestenes-Stiefel method, L-BFGS method)
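
The following is an illustrative sketch, not the article's protected code, of the limited-memory quasi-Newton (L-BFGS) idea named in the list: the two-loop recursion applies the inverse Hessian approximation using only the last m secant pairs, and the backtracking line search and parameter values are assumptions of this sketch.

import numpy as np

def two_loop(g, s_list, y_list):
    """L-BFGS two-loop recursion: multiply the gradient by the limited-memory inverse Hessian."""
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q); alphas.append(a); q -= a * y
    if s_list:                                   # initial scaling H0 = gamma * I from the latest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        q += (a - rho * (y @ q)) * s
    return q

def lbfgs(f, grad, x0, m=5, n_iter=100, tol=1e-8):
    """Limited-memory BFGS with backtracking (Armijo) line search; a sketch, not a tuned solver."""
    x, g = x0.copy(), grad(x0)
    s_list, y_list = [], []
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -two_loop(g, s_list, y_list)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                        # keep the pair only if the secant curvature is positive
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:                  # memory restriction: keep only the last m pairs
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# toy usage on a quadratic; the minimizer is A^{-1} b
A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
print(lbfgs(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(3)))
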
Algorithms

Protected: An example of machine learning by Bayesian inference: inference by collapsed Gibbs sampling of a Poisson mixture model

Inference by collapsed Gibbs sampling of Poisson mixture models as an example of machine learning by Bayesian inference utilized in digital transformation, artificial intelligence, and machine learning tasks (variational inference, Gibbs sampling, evaluation on artificial data, algorithms, prior distribution, gamma distribution, Bayes' theorem, Dirichlet distribution, categorical distribution, graphical models)
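
As a rough illustration, here is a minimal sketch of one way to run collapsed Gibbs sampling for a Poisson mixture, with Gamma priors on the rates and a symmetric Dirichlet prior on the mixing weights integrated out so that only the assignments are sampled; the hyperparameter values and function names are assumptions, not the article's protected implementation.

import numpy as np
from scipy.special import gammaln

def collapsed_gibbs_poisson_mixture(x, K=2, a=1.0, b=1.0, alpha=1.0, n_sweeps=200, seed=0):
    """Collapsed Gibbs sampling for a Poisson mixture: rates (Gamma(a, b)) and weights
    (Dirichlet(alpha)) are integrated out; only the cluster assignments z are resampled."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z = rng.integers(K, size=n)
    counts = np.bincount(z, minlength=K)              # n_k
    sums = np.bincount(z, weights=x, minlength=K)     # sum of x in cluster k
    for _ in range(n_sweeps):
        for i in range(n):
            k_old = z[i]
            counts[k_old] -= 1; sums[k_old] -= x[i]   # remove x_i from its current cluster
            a_k = a + sums; b_k = b + counts          # posterior Gamma parameters without x_i
            # log predictive: negative binomial from Gamma-Poisson conjugacy, plus the weight term
            logp = (np.log(counts + alpha)
                    + gammaln(a_k + x[i]) - gammaln(a_k) - gammaln(x[i] + 1)
                    + a_k * np.log(b_k / (b_k + 1.0)) - x[i] * np.log(b_k + 1.0))
            p = np.exp(logp - logp.max()); p /= p.sum()
            k_new = rng.choice(K, p=p)
            z[i] = k_new
            counts[k_new] += 1; sums[k_new] += x[i]
    return z

# toy usage: two Poisson components with rates 3 and 15
rng = np.random.default_rng(1)
x = np.concatenate([rng.poisson(3, 100), rng.poisson(15, 100)]).astype(float)
print(np.bincount(collapsed_gibbs_poisson_mixture(x, K=2), minlength=2))
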
Symbolic Logic

Overview of the Knowledge Graph and summary of related presentations at the International Semantic Web Conference (ISWC)

Overview of knowledge graphs used for digital transformation, artificial intelligence, and machine learning tasks and summary of related presentations at the International Semantic Web Conference (ISWC) (natural language processing, reasoning techniques, data analytics, robotics, IoT, search engine, inference engine, entity extraction, entity linking, relational learning, deep learning, fusion of logic and probability, relationship extraction, topic models, chatbots, question answering, Semantic Web technologies, knowledge information processing, RDF store, SPARQL, ontology matching, database technologies)
Algorithms

Protected: Applying Neural Networks to Reinforcement Learning: Applying Deep Learning to Strategy with Advantage Actor-Critic (A2C)

Application of neural networks to reinforcement learning for digital transformation, artificial intelligence, and machine learning tasks: implementation of Advantage Actor-Critic (A2C), applying deep learning to strategies (policy gradient method, Q-learning, Gumbel-Max trick, A3C (Asynchronous Advantage Actor-Critic))
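
The snippet below is a minimal sketch of the Advantage Actor-Critic (A2C) objective only: a policy-gradient term weighted by the advantage, plus a value regression term and an entropy bonus. The network architecture, loss coefficients, and the use of PyTorch are illustrative assumptions rather than the article's protected implementation.

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared-body actor-critic network: a policy head (logits) and a value head."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)

    def forward(self, obs):
        h = self.body(obs)
        return self.policy(h), self.value(h).squeeze(-1)

def a2c_loss(model, obs, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """A2C objective: policy gradient weighted by the advantage, value regression, entropy bonus."""
    logits, values = model(obs)
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.detach()               # advantage estimate A = R - V(s)
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = (returns - values).pow(2).mean()        # critic regression target
    entropy = dist.entropy().mean()                      # encourages exploration
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

# toy usage with random transitions (obs_dim=4, 2 actions)
model = ActorCritic(4, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(8, 4); actions = torch.randint(0, 2, (8,)); returns = torch.randn(8)
loss = a2c_loss(model, obs, actions, returns)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
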
Clojure

Protected: Implementation of recommendation algorithm using Clojure/Mahout

Implementation of recommendation algorithms using Clojure/Mahout for digital transformation, artificial intelligence, and machine learning tasks (information retrieval statistics, precision, recall, DCG, Discounted Cumulative Gain, IDCG, Ideal Discounted Cumulative Gain, fall-out, F-measure, harmonic mean, RMSE, k-nearest neighbor method, Pearson correlation similarity, Spearman's rank correlation coefficient, similarity measures, Jaccard distance, Euclidean distance, cosine distance, pairwise differences, item-based, user-based)
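
Although the article itself works in Clojure/Mahout, the following Python sketch illustrates a few of the listed information retrieval statistics (precision, recall, DCG/IDCG, RMSE); the function names and the log2 position discount are conventional but chosen here as assumptions.

import math

def precision_recall(recommended, relevant):
    """Precision and recall of a recommended list against the set of relevant items."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)

def dcg(relevances):
    """Discounted Cumulative Gain with a log2 position discount."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: DCG divided by the Ideal DCG (IDCG) of the same relevances."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def rmse(predicted, actual):
    """Root mean squared error between predicted and observed ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# toy usage
print(precision_recall(["a", "b", "c"], {"a", "c", "d", "e"}))   # (0.666..., 0.5)
print(ndcg([3, 1, 2]))
print(rmse([4.0, 3.5], [5.0, 3.0]))
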
Algorithms

Protected: Optimal arm identification and A/B testing in the bandit problem_1

Optimal arm identification and A/B testing in bandit problems for digital transformation, artificial intelligence, and machine learning tasks (Hoeffding's inequality, optimal arm identification, sample complexity, regret minimization, cumulative regret minimization, cumulative reward maximization, ε-optimal arm identification, simple regret minimization, ε-best arm identification, KL-UCB strategy, KL divergence, A/B testing of the normal distribution, fixed confidence)
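
As a small companion to the listed terms, here is a sketch of how Hoeffding's inequality yields a sample-complexity estimate for ε-optimal arm identification under fixed confidence (sample every arm equally, then apply a union bound); the specific constants follow that naive argument and are assumptions of this sketch.

import math

def naive_sample_size(n_arms, eps, delta):
    """Per-arm pulls so that the empirically best arm is eps-optimal w.p. >= 1 - delta,
    via Hoeffding's inequality (each mean within eps/2) and a union bound over the arms."""
    return math.ceil((2.0 / eps ** 2) * math.log(2 * n_arms / delta))

def hoeffding_radius(n, delta):
    """Half-width of a (1 - delta) Hoeffding confidence interval for a mean of n samples in [0, 1]."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# toy usage: 2-arm A/B test, eps = 0.05, delta = 0.05
print(naive_sample_size(2, 0.05, 0.05))   # pulls needed per arm
print(hoeffding_radius(1000, 0.05))       # CI half-width after 1000 samples
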
Algorithms

Protected: Overview of nu-Support Vector Machines by Statistical Mathematics Theory

Overview of nu-support vector machines by statistical mathematics theory utilized in digital transformation, artificial intelligence, and machine learning tasks (kernel functions, boundedness, empirical margin discriminant error, models without bias terms, reproducing kernel Hilbert spaces, prediction discriminant error, uniform bounds, statistical consistency, C-support vector machines, correspondence, statistical model degrees of freedom, dual problem, gradient descent, minimum distance problem, discriminant bounds, geometric interpretation, binary discrimination, empirical discriminant error, regularization parameter, minimax theorem, Gram matrix, Lagrangian function).
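
The following is a minimal usage sketch of a nu-support vector machine via scikit-learn's NuSVC, where nu upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors; the synthetic dataset and the chosen nu and kernel are illustrative assumptions, not the article's setup.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVC

# illustrative binary classification data
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# nu controls the trade-off: it bounds the margin-error fraction and the support-vector fraction
clf = NuSVC(nu=0.3, kernel="rbf", gamma="scale")
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
print("support vectors per class:", clf.n_support_)
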