Set theory

IoT Technology

Overview and implementation of image recognition systems

Image Recognition System Overview: An image recognition system is a technology in which a computer analy...
Algorithms

Protected: Explainable Machine Learning (17) Counterfactual Explanations

Explanation of machine learning results by counterfactual explanations, as utilized in digital transformation, artificial intelligence, and machine learning tasks (Anchor, Growing Spheres algorithm, Python, Alibi, categorical features, Rashomon effect, LIME, fully connected neural networks, counterfactual generation algorithms, Euclidean distance, median absolute deviation, Nelder-Mead method, causal semantics, causes).
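
The excerpt above names the core ingredients of counterfactual search: a distance to the original input weighted by the median absolute deviation (MAD), and a derivative-free minimization. As a minimal self-contained sketch (the greedy coordinate search, the toy logistic model, and all function names here are illustrative assumptions, not Alibi's API or the post's own method), a counterfactual can be found by minimizing a prediction loss plus a MAD-weighted L1 distance:

```python
import math

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def mad(column):
    # median absolute deviation, used to scale per-feature distances
    m = median(column)
    return median([abs(v - m) for v in column]) or 1.0

def counterfactual(x, f, data, target=1.0, lam=0.01, step=0.05, max_iter=500):
    """Greedy coordinate search for x' minimizing
    (f(x') - target)**2 + lam * MAD-weighted L1 distance to x."""
    mads = [mad([row[j] for row in data]) for j in range(len(x))]

    def loss(z):
        dist = sum(abs(z[k] - x[k]) / mads[k] for k in range(len(x)))
        return (f(z) - target) ** 2 + lam * dist

    xc = list(x)
    for _ in range(max_iter):
        improved = False
        for j in range(len(xc)):
            for d in (step, -step):
                cand = list(xc)
                cand[j] += d
                if loss(cand) < loss(xc) - 1e-12:  # accept strict improvements
                    xc = cand
                    improved = True
        if not improved:
            break
    return xc
```

A gradient-free method such as Nelder-Mead (mentioned in the excerpt) would play the same role as the coordinate search here.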
Algorithms

Protected: Sparse learning based on group L1-norm regularization

Sparse machine learning based on group L1-norm regularization for digital transformation, artificial intelligence, and machine learning tasks (relative duality gap, dual problem, gradient descent, augmented Lagrangian function, dual augmented Lagrangian method, Hessian, L1-norm regularization, group L1-norm regularization, dual norm, empirical error minimization problem, proximal operator, Nesterov's acceleration method, proximal gradient method, iterative reweighted shrinkage method, variational representation, number of nonzero groups, kernel-weighted regularization term, concave conjugate, reproducing kernel Hilbert space, support vector machine, kernel weights, multiple kernel learning, basis kernel functions, EEG signals, MEG signals, voxels, electric dipoles, neurons, multi-task learning).
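
The key computational step behind the methods in this excerpt is the proximal operator of the group-L1 penalty, which zeroes out whole groups of coefficients at once. A hedged sketch (the function name and toy values are illustrative, not from the post) of this block soft-thresholding:

```python
import math

def group_prox(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 (group-L1 penalty):
    each group is shrunk by max(0, 1 - lam/||v_g||), so groups whose
    norm falls below lam are set exactly to zero."""
    out = list(v)
    for g in groups:
        norm = math.sqrt(sum(v[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * v[i]
    return out
```

A proximal gradient method (also listed above) alternates a gradient step on the smooth empirical error with exactly this operator.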
Algorithms

Protected: Dual Augmented Lagrangian and Dual Alternating Direction Method of Multipliers as Optimization Methods for L1-Norm Regularization

Optimization methods for L1-norm regularization in sparse learning utilized in digital transformation, artificial intelligence, and machine learning tasks (FISTA, SpaRSA, OWLQN, DL methods, L1 norm, tuning, algorithms, DADMM, IRS, Lagrange multipliers, proximal point method, alternating direction method of multipliers, gradient ascent method, augmented Lagrangian method, Gauss-Seidel method, systems of simultaneous linear equations, constrained norm minimization problem, Cholesky decomposition, dual augmented Lagrangian method, relative duality gap, soft-threshold function, Hessian matrix).
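
Two items in this excerpt, the soft-threshold function and the alternating direction method of multipliers, fit together naturally: ADMM's z-update for an L1 penalty is exactly a soft threshold. A minimal sketch on a scalar lasso problem (the splitting and all names are a generic textbook formulation, assumed rather than taken from the post):

```python
def soft_threshold(x, t):
    """Soft-threshold function: the proximal operator of t * |.|."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def admm_scalar_lasso(a, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*(x - a)**2 + lam*|z| subject to x = z.
    Alternates a quadratic x-minimization, a soft-threshold z-update,
    and a (scaled) Lagrange multiplier ascent step."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # x-update: quadratic min
        z = soft_threshold(x + u, lam / rho)   # z-update: prox of lam|.|
        u += x - z                             # dual (multiplier) update
    return z
```

The iterates converge to the closed-form lasso solution soft_threshold(a, lam), which gives a quick correctness check.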
Algorithms

Protected: Quasi-Newton Methods as Sequential Optimization in Machine Learning (2) Limited-Memory Quasi-Newton Methods

Limited-memory quasi-Newton methods (sparse clique factorization, chordal graphs, sparsity, secant condition, sparse Hessian matrix, DFP formula, BFGS formula, KL divergence, quasi-Newton method, maximal clique, positive definite matrix, positive definite matrix completion, graph triangulation, complete subgraph, clique, Hessian matrix, tridiagonal matrix, Hestenes-Stiefel method, L-BFGS method)
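
The L-BFGS method named in this excerpt never stores the inverse-Hessian approximation explicitly; it applies it to a gradient via the two-loop recursion over the last m curvature pairs (s, y). A self-contained sketch (the unit step size and the helper names are simplifying assumptions; a practical implementation adds a line search):

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def two_loop(grad, S, Y):
    """L-BFGS two-loop recursion: apply the implicit inverse-Hessian
    approximation built from the stored (s, y) pairs to grad."""
    q = list(grad)
    rho = [1.0 / dot(y, s) for s, y in zip(S, Y)]
    alphas = [0.0] * len(S)
    for k in range(len(S) - 1, -1, -1):
        alphas[k] = rho[k] * dot(S[k], q)
        q = [qi - alphas[k] * yi for qi, yi in zip(q, Y[k])]
    # initial scaling H0 = gamma * I from the most recent pair
    gamma = dot(S[-1], Y[-1]) / dot(Y[-1], Y[-1]) if S else 1.0
    r = [gamma * qi for qi in q]
    for k in range(len(S)):
        beta = rho[k] * dot(Y[k], r)
        r = [ri + si * (alphas[k] - beta) for ri, si in zip(r, S[k])]
    return r

def lbfgs(grad_f, x0, m=5, iters=50, tol=1e-10):
    x = list(x0)
    g = grad_f(x)
    S, Y = [], []
    for _ in range(iters):
        if max(abs(gi) for gi in g) < tol:
            break
        d = two_loop(g, S, Y)
        x_new = [xi - di for xi, di in zip(x, d)]  # unit step for simplicity
        g_new = grad_f(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        if dot(s, y) > 1e-12:  # keep the curvature condition positive
            S.append(s)
            Y.append(y)
            if len(S) > m:
                S.pop(0)
                Y.pop(0)
        x, g = x_new, g_new
    return x
```

Only the m most recent pairs are kept, which is the "limited memory" in the title.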
Algorithms

Protected: Optimal arm identification and A/B testing in the bandit problem (1)

Optimal arm identification and A/B testing in bandit problems for digital transformation, artificial intelligence, and machine learning tasks (Hoeffding's inequality, optimal arm identification, sample complexity, regret minimization, cumulative regret minimization, cumulative reward maximization, ε-optimal arm identification, simple regret minimization, ε-best arm identification, KL-UCB strategy, KL divergence, A/B testing with normal distributions, fixed confidence).
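
Fixed-confidence best-arm identification, as in this excerpt, is often illustrated with successive elimination: pull every surviving arm once per round and drop any arm whose Hoeffding-style upper confidence bound falls below the leader's lower bound. A toy sketch (the exact radius formula and helper names are a standard textbook choice, assumed rather than taken from the post):

```python
import math
import random

def bernoulli_arm(p, rng):
    return lambda: 1.0 if rng.random() < p else 0.0

def successive_elimination(arms, delta=0.05, max_rounds=5000):
    """Fixed-confidence best-arm identification via successive elimination
    with a Hoeffding-style confidence radius."""
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    active = list(range(k))
    for _ in range(max_rounds):
        if len(active) == 1:
            break
        for i in active:
            sums[i] += arms[i]()
            counts[i] += 1
        rad = {i: math.sqrt(math.log(4 * k * counts[i] ** 2 / delta)
                            / (2 * counts[i])) for i in active}
        mean = {i: sums[i] / counts[i] for i in active}
        leader = max(active, key=lambda i: mean[i])
        # keep only arms that could still plausibly beat the leader
        active = [i for i in active
                  if mean[i] + rad[i] >= mean[leader] - rad[leader]]
    return max(active, key=lambda i: sums[i] / counts[i])
```

An A/B test maps onto this with k = 2 arms; the KL-UCB strategy in the excerpt replaces the Hoeffding radius with a tighter KL-divergence-based bound.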
Algorithms

Protected: Overview of nu-Support Vector Machines by Statistical Mathematics Theory

Overview of nu-support vector machines by statistical mathematics theory utilized in digital transformation, artificial intelligence, and machine learning tasks (kernel functions, boundedness, empirical margin discriminant error, models without bias terms, reproducing kernel Hilbert spaces, prediction discriminant error, uniform bounds, statistical consistency, correspondence with C-support vector machines, statistical model degrees of freedom, dual problem, gradient descent, minimum distance problem, discriminant bounds, geometric interpretation, binary discrimination, empirical discriminant error, regularization parameter, minimax theorem, Gram matrix, Lagrangian function).
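
The excerpt notes the correspondence between nu-SVMs and C-SVMs: every nu-SVM solution coincides with a C-SVM solution for some C, with nu bounding the fraction of margin errors from above and the fraction of support vectors from below. As a hedged illustration of the C-SVM side only (a subgradient solver for the primal hinge-loss objective; the training routine, data, and parameters are illustrative assumptions, not the post's derivation):

```python
def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=2000):
    """Subgradient descent on the primal C-SVM objective
    0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = list(w)  # gradient of the regularizer 0.5*||w||^2
        gb = 0.0
        for xi, yi in zip(X, y):
            # points inside the margin contribute a hinge subgradient
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) < 1:
                for j in range(d):
                    gw[j] -= C * yi * xi[j]
                gb -= C * yi
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

The nu formulation replaces C by nu and adds a margin variable to the objective; the geometry (minimum distance problem, dual problem) is what the protected post develops.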
Algorithms

Protected: Quasi-Newton Method as Sequential Optimization in Machine Learning (1) Algorithm Overview

Quasi-Newton methods as sequential optimization in machine learning for digital transformation, artificial intelligence, and machine learning tasks (BFGS formula, Lagrange multipliers, optimality conditions, convex optimization problems, KL divergence minimization, equality-constrained optimization problems, DFP formula, positive definite matrices, geometric structures, secant condition, update rules for quasi-Newton methods, Hessian matrices, optimization algorithms, search directions, Newton's method).
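
The BFGS update rule and secant condition from this excerpt are easy to check numerically: the updated inverse-Hessian approximation H+ = (I - ρ s yᵀ) H (I - ρ y sᵀ) + ρ s sᵀ with ρ = 1/(yᵀs) satisfies H+ y = s exactly. A minimal sketch (plain-list matrix code, names assumed for illustration):

```python
def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H from a step
    s = x_new - x and gradient difference y = g_new - g:
    H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T."""
    n = len(s)
    rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
    V = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j] for j in range(n)]
         for i in range(n)]
    # H V^T, then V (H V^T) + rho s s^T
    HVt = [[sum(H[i][k] * V[j][k] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return [[sum(V[i][k] * HVt[k][j] for k in range(n)) + rho * s[i] * s[j]
             for j in range(n)] for i in range(n)]
```

The secant condition H+ y = s is precisely what makes the approximation mimic the true inverse Hessian along the most recent step, and positive yᵀs keeps H+ positive definite.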
Clojure

Protected: Network analysis with PageRank using Clojure Glittering

Network analysis with PageRank (label propagation, Twitter user group analysis, influencers, communities, community graphs, accounts, followers, damping factor, PageRank algorithm) using Clojure Glittering for digital transformation, artificial intelligence, and machine learning tasks.
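
The post itself works in Clojure with Glittering, but the PageRank iteration with a damping factor it relies on can be sketched in a few lines of Python (the graph representation and function name here are illustrative assumptions, not Glittering's API):

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank. links: dict node -> list of out-neighbors.
    d is the damping factor; (1-d) is the teleport probability."""
    nodes = list(links)
    n = len(nodes)
    r = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = d * r[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:  # dangling node: spread its rank evenly
                for v in nodes:
                    nxt[v] += d * r[u] / n
        r = nxt
    return r
```

On a follower graph, the nodes that accumulate rank are the influencers the excerpt mentions.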
Algorithms

Protected: Distributed processing of online stochastic optimization

Distributed online stochastic optimization for digital transformation, artificial intelligence, and machine learning tasks (expected error, step size, epoch, strongly convex expected error, SGD, Lipschitz continuity, gamma-smoothness, alpha-strong convexity, Hogwild!, parallelization, label propagation method, propagation on graphs, sparse feature vectors, asynchronous distributed SGD, mini-batch methods, stochastic optimization methods, variance of gradients, unbiased estimators, SVRG, mini-batch parallelization of gradient methods, Nesterov's acceleration method, parallelized SGD).
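
The mini-batch methods and unbiased gradient estimators in this excerpt come down to one idea: the mean gradient over a random mini-batch is an unbiased estimate of the full gradient, with variance shrinking as the batch grows, and the per-sample terms can be computed in parallel. A serial sketch on a least-squares problem (the solver, data, and parameters are illustrative, not the post's experiments):

```python
import random

def minibatch_sgd(X, y, lr=0.1, batch=4, steps=500, seed=0):
    """Mini-batch SGD for least squares. The averaged mini-batch gradient
    is an unbiased estimator of the full gradient; in distributed settings
    the per-sample terms are what gets parallelized across workers."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        idx = rng.sample(range(len(X)), batch)  # sample a mini-batch
        g = [0.0] * d
        for i in idx:
            err = sum(w[j] * X[i][j] for j in range(d)) - y[i]
            for j in range(d):
                g[j] += 2.0 * err * X[i][j] / batch
        for j in range(d):
            w[j] -= lr * g[j]
    return w
```

Hogwild!-style asynchronous SGD drops even the per-step synchronization, relying on the sparse feature vectors mentioned above to keep concurrent updates from colliding.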