微分積分:Calculus

アルゴリズム:Algorithms

Protected: Implementation of model-free reinforcement learning in Python (2): Monte Carlo and TD methods

Python implementations of model-free reinforcement learning, covering Monte Carlo methods, TD methods, Q-learning, value-based methods, the epsilon-greedy method, TD(λ), multi-step learning, Rainbow, A3C/A2C, DDPG, and Ape-X DQN
バンディット問題:Bandit Problems

Protected: Fundamentals of Stochastic Bandit Problems

Basics of stochastic bandit problems utilized in digital transformation, artificial intelligence, and machine learning tasks (the large deviation principle with examples for the Bernoulli distribution, the Chernoff-Hoeffding inequality, Sanov's theorem, Hoeffding's inequality, Kullback-Leibler divergence, probability mass functions, tail probabilities, and probability approximation by the central limit theorem).
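Hoeffding's inequality, central to the bandit analysis above, bounds the tail probability of a sample mean. A minimal empirical check for a Bernoulli(p) mean, with sample size and trial count chosen only for illustration:

```python
import math
import random

def hoeffding_bound(n, t):
    # Hoeffding: P(|mean - p| >= t) <= 2 * exp(-2 * n * t^2).
    return 2.0 * math.exp(-2.0 * n * t * t)

def empirical_tail(p, n, t, trials=2000, seed=0):
    # Estimate the tail probability by Monte Carlo simulation.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(1 for _ in range(n) if rng.random() < p) / n
        if abs(mean - p) >= t:
            hits += 1
    return hits / trials

p, n, t = 0.5, 100, 0.1
# The observed tail frequency should sit below the Hoeffding bound.
assert empirical_tail(p, n, t) <= hoeffding_bound(n, t)
```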
微分積分:Calculus

Protected: Fundamentals of Convex Analysis as a Basic Matter for Sequential Optimization in Machine Learning

Basics of convex analysis as a foundation for continuous optimization utilized in digital transformation, artificial intelligence, and machine learning tasks (subgradients, the subdifferential, conjugate functions, closed proper convex functions, strongly convex functions, upper and lower bounds on function values, the Hessian matrix, epigraphs, Taylor's theorem, the relative interior, the affine hull, continuity, convex envelopes, convex functions, and convex sets)
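The subgradient notion above can be made concrete with the subgradient method on the nondifferentiable convex function f(x) = |x − 3|, whose subdifferential at the kink is [−1, 1]. The diminishing 1/k step size and all constants here are illustrative assumptions:

```python
def subgradient(x, target=3.0):
    # Any element of the subdifferential of |x - target| will do.
    if x > target:
        return 1.0
    if x < target:
        return -1.0
    return 0.0  # 0 lies in the subdifferential [-1, 1] at the kink.

x = 0.0
for k in range(1, 2000):
    # Classic diminishing step size guarantees convergence for subgradient methods.
    x -= (1.0 / k) * subgradient(x)

# Iterates approach the minimizer x* = 3 despite nondifferentiability there.
assert abs(x - 3.0) < 0.1
```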
アルゴリズム:Algorithms

Protected: Basic Framework of Statistical Mathematics Theory

Basic framework of statistical mathematics theory used in digital transformation, artificial intelligence, and machine learning tasks (regularization, approximation and estimation errors, Hoeffding's inequality, prediction discriminant error, statistical consistency, learning algorithms, performance evaluation, ROC curves, AUC, Bayes rules, Bayes error, prediction loss, and empirical loss)
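Among the performance-evaluation topics above, AUC has a simple rank-statistic form: the probability that a randomly chosen positive example scores above a randomly chosen negative one. A minimal sketch, with toy score lists as illustrative assumptions:

```python
def auc(pos_scores, neg_scores):
    # Mann-Whitney form of AUC: pairwise comparisons of positives vs. negatives,
    # counting ties as half a win.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated scores give AUC = 1.0; fully reversed scores give 0.0.
assert auc([0.9, 0.8], [0.1, 0.2]) == 1.0
assert auc([0.1, 0.2], [0.9, 0.8]) == 0.0
```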
アルゴリズム:Algorithms

Protected: Supervised learning and regularization

Overview of supervised learning (regression and discrimination) and regularization (ridge regularization, L1 regularization, bridge regularization, elastic net regularization, SCAD, group regularization, generalized fused regularization, and trace norm regularization) as the basis of machine learning optimization methods used for digital transformation, artificial intelligence, and machine learning tasks
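To illustrate the shrinkage effect these regularizers share, here is ridge (L2) regularization for one-feature least squares, where the closed form w = x'y / (x'x + λ) pulls the estimate toward zero. The data values are illustrative assumptions:

```python
def ridge_1d(xs, ys, lam):
    # Closed-form ridge solution for a single feature without intercept:
    # w = (sum x*y) / (sum x^2 + lambda).
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # exactly y = 2x

w_ols = ridge_1d(xs, ys, 0.0)    # lambda = 0 recovers ordinary least squares
w_ridge = ridge_1d(xs, ys, 10.0)  # positive lambda shrinks the slope

assert abs(w_ols - 2.0) < 1e-9
assert 0.0 < w_ridge < w_ols
```

L1 and the other penalties listed above differ in how they shrink (L1 can set coefficients exactly to zero), but the bias-variance trade-off controlled by λ is the same idea.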
アルゴリズム:Algorithms

Protected: Spatial statistics of Gaussian processes, with application to Bayesian optimization

Spatial statistics of Gaussian processes as an application of stochastic generative models used in digital transformation, artificial intelligence, and machine learning tasks (ARD, Matern kernels), and tools for Bayesian optimization (GPyOpt, GPflow, GPyTorch)
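The covariance structure underlying these Gaussian-process tools is set by a kernel. A minimal sketch of the RBF (squared-exponential) kernel as a prior covariance, with the lengthscale and input points as illustrative choices:

```python
import math

def rbf(x1, x2, lengthscale=1.0):
    # Squared-exponential kernel: correlation decays with squared distance.
    return math.exp(-0.5 * (x1 - x2) ** 2 / lengthscale ** 2)

xs = [0.0, 0.5, 1.0]
K = [[rbf(a, b) for b in xs] for a in xs]

# A valid covariance matrix is symmetric with unit diagonal here,
# and correlation falls off as points move apart.
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(3) for j in range(3))
assert all(abs(K[i][i] - 1.0) < 1e-12 for i in range(3))
assert K[0][1] > K[0][2]
```

Matern kernels and ARD variants (a separate lengthscale per input dimension) replace `rbf` but plug into the same Gram-matrix construction.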
Clojure

Hierarchical Temporal Memory and Clojure

Deep learning with hierarchical temporal memory and sparse distributed representation with Clojure for digital transformation (DX), artificial intelligence (AI), and machine learning (ML) tasks
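The sparse distributed representations (SDRs) mentioned above are high-dimensional binary vectors with few active bits, compared by overlap. A minimal sketch in Python (the article itself works in Clojure); the bit indices are illustrative assumptions:

```python
def overlap(sdr1, sdr2):
    # SDRs are represented here as sets of active bit indices;
    # similarity is simply the count of shared active bits.
    return len(sdr1 & sdr2)

a = {2, 5, 11, 17, 23}    # active bits of one SDR
b = {2, 5, 11, 40, 41}    # a partially overlapping SDR
c = {60, 61, 62, 63, 64}  # an unrelated SDR

# Similar inputs share many active bits; unrelated ones share few or none.
assert overlap(a, b) == 3
assert overlap(a, c) == 0
```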
Clojure

Network analysis using Clojure (1) Breadth-first/depth-first search, shortest path search, minimum spanning tree, subgraphs and connected components

Network analysis using Clojure/loop for digital transformation, artificial intelligence, and machine learning tasks: breadth-first/depth-first search, shortest path search, minimum spanning trees, subgraphs, and connected components
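The breadth-first shortest-path search above can be sketched as follows, shown here in Python rather than the article's Clojure; the sample graph is an illustrative assumption:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # BFS explores nodes in order of hop count, so the first path that
    # reaches the goal is a shortest path in an unweighted graph.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable from start

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert bfs_shortest_path(graph, "a", "d") == ["a", "b", "d"]
```

Swapping the queue for a stack (`pop()` instead of `popleft()`) turns this into depth-first search, which no longer guarantees shortest paths.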
Symbolic Logic

Integration of logic and rules with probability/machine learning

Integration of logic and rules with machine learning (inductive logic programming, statistical relational learning, knowledge-based model building, Bayesian nets, probabilistic logic learning, hidden Markov models) used for digital transformation, artificial intelligence, and machine learning tasks.
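Among the probabilistic models listed above, the hidden Markov model admits a compact likelihood computation via the forward algorithm, which sums over hidden-state paths. The two-state parameters below are illustrative assumptions:

```python
def forward(obs, init, trans, emit):
    # Forward algorithm: alpha[s] accumulates the probability of the
    # observations so far with the chain currently in state s.
    n_states = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            emit[s][o] * sum(alpha[r] * trans[r][s] for r in range(n_states))
            for s in range(n_states)
        ]
    return sum(alpha)  # total likelihood of the observation sequence

init = [0.6, 0.4]                      # initial state distribution
trans = [[0.7, 0.3], [0.4, 0.6]]       # trans[from][to]
emit = [[0.9, 0.1], [0.2, 0.8]]        # emit[state][symbol]

p = forward([0, 1], init, trans, emit)
assert 0.0 < p < 1.0  # a proper probability for a nontrivial sequence
```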
アルゴリズム:Algorithms

Fundamentals of Continuous Optimization – Calculus and Linear Algebra

Fundamentals of Continuous Optimization - Calculus and Linear Algebra (Taylor's theorem, Hessian matrix, Landau symbols, Lipschitz continuity, Lipschitz constants, the implicit function theorem, Jacobian matrix, diagonal matrices, eigenvalues, nonnegative definite matrices, positive definite matrices, subspaces, projection, rank-1 updates, the natural gradient method, quasi-Newton methods, the Sherman-Morrison formula, norms, the Euclidean norm, p-norms, the Schwarz inequality, the Hölder inequality, functions on matrix spaces)
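The Sherman-Morrison formula mentioned above lets quasi-Newton methods maintain an inverse under a rank-1 update: (A + uvᵀ)⁻¹ = A⁻¹ − (A⁻¹uvᵀA⁻¹)/(1 + vᵀA⁻¹u). A minimal numerical check on a 2×2 example (the concrete matrix and vectors are illustrative assumptions):

```python
def inv2(m):
    # Direct inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

A = [[2.0, 0.0], [0.0, 3.0]]
u, v = [1.0, 2.0], [1.0, 1.0]

Ainv = inv2(A)
Au = matvec(Ainv, u)                          # A^{-1} u
AinvT = [[Ainv[0][0], Ainv[1][0]],
         [Ainv[0][1], Ainv[1][1]]]
vA = matvec(AinvT, v)                         # v^T A^{-1} as a row vector
denom = 1.0 + vA[0] * u[0] + vA[1] * u[1]     # 1 + v^T A^{-1} u
SM = [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(2)]
      for i in range(2)]

# Compare against the directly inverted updated matrix A + u v^T.
B = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
Binv = inv2(B)
assert all(abs(SM[i][j] - Binv[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```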