Hadoop

Clojure

Protected: Network Analysis Using Clojure (2): Computing Triangles in a Graph Using Glittering

Triangle-based network analysis of graphs using Clojure/Glittering for digital transformation, artificial intelligence, and machine learning tasks (GraphX, Pregel API, Twitter dataset, custom triangle count algorithm, message send function, message merge function, outer join, RDD, vertex attributes, Apache Spark, Sparkling, MLlib, Glittering, triangle counting, edge-cut strategy, random-vertex-cut strategy, social networks, graph-parallel computing functions, Hadoop, data-parallel systems, Resilient Distributed Graph (RDG), Hama, Giraph)
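
Since the full Glittering/Pregel walk-through sits in the protected post, here is a minimal plain-Clojure sketch of the idea behind the custom triangle count: for every edge, count the common neighbours of its two endpoints, then divide by three because each triangle is seen once per edge. The edge-list representation and helper names are assumptions made for this illustration, not the Glittering API.

```clojure
;; Plain-Clojure sketch of triangle counting by neighbour-set intersection.
;; Illustrative only; the article itself uses Glittering's Pregel-style
;; message send/merge functions on a Spark GraphX graph.
(require '[clojure.set :as set])

(defn adjacency
  "Builds a map of vertex -> set of neighbours from an undirected edge list."
  [edges]
  (reduce (fn [adj [a b]]
            (-> adj
                (update a (fnil conj #{}) b)
                (update b (fnil conj #{}) a)))
          {}
          edges))

(defn triangle-count
  "Counts triangles in an undirected graph given as an edge list.
   Each triangle is seen once per edge, i.e. three times, so divide by 3."
  [edges]
  (let [adj (adjacency edges)]
    (/ (reduce (fn [acc [a b]]
                 (+ acc (count (set/intersection (adj a) (adj b)))))
               0
               edges)
       3)))

;; Example: a square with one diagonal contains two triangles.
(triangle-count [[1 2] [2 3] [3 4] [4 1] [1 3]])
;; => 2
```
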
Clojure

Protected: Stochastic gradient descent implementation using Clojure and Hadoop

Stochastic gradient descent implementation using Clojure and Hadoop for digital transformation, artificial intelligence, and machine learning tasks (mini-batch, Mapper, Reducer, Parkour, Tesser, batch gradient descent, stochastic gradient descent, join-step partitioning, uberjar, Java, Hadoop cluster, Hadoop Distributed File System (HDFS))
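
As a rough, hedged illustration of the update rule that the article distributes over a Hadoop cluster with Parkour and Tesser, the sketch below runs mini-batch stochastic gradient descent for linear regression in a single process; the [feature-vector target] data layout and function names are assumptions for this example, not the article's Mapper/Reducer code.

```clojure
;; Single-process sketch of mini-batch stochastic gradient descent for
;; linear regression. In the distributed version each mapper computes a
;; partial gradient over its split and the reducer combines them.
(defn predict
  "Dot product of coefficients and a feature vector (bias term included)."
  [coefs xs]
  (reduce + (map * coefs xs)))

(defn gradient
  "Average gradient of the squared-error cost over one mini-batch of
   [feature-vector target] pairs."
  [coefs batch]
  (let [n    (count batch)
        sums (reduce (fn [acc [xs y]]
                       (let [err (- (predict coefs xs) y)]
                         (mapv + acc (map #(* err %) xs))))
                     (vec (repeat (count coefs) 0.0))
                     batch)]
    (mapv #(/ % n) sums)))

(defn sgd-step
  "One mini-batch update: coefs <- coefs - alpha * gradient."
  [alpha coefs batch]
  (mapv - coefs (map #(* alpha %) (gradient coefs batch))))

(defn sgd
  "Runs mini-batch SGD, shuffling and re-batching the data each epoch."
  [alpha batch-size epochs data init-coefs]
  (reduce (fn [coefs _]
            (reduce (partial sgd-step alpha)
                    coefs
                    (partition-all batch-size (shuffle data))))
          init-coefs
          (range epochs)))
```
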
Clojure

Protected: Clojure implementation of distributed computation processing (map-reduce) used in Hadoop

Clojure implementation of the distributed computation model (map-reduce) used in Hadoop for digital transformation, artificial intelligence, and machine learning tasks (Tesser, Reducer function, fold, cost function, gradient descent, feature extraction, feature-scales function, feature scaling, gradient descent learning rate, gradient descent update rule, iterative algorithms, multiple regression, correlation matrix, fuse, commutativity, linear regression, co-reduction, covariance, Hadoop, parallel fold)
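
As a small, hedged example of the fold style this article builds on, the sketch below uses Tesser's combinators to run a commutative, parallel-friendly fold over pre-chunked data; the toy records and the :weight key are assumptions made for this illustration, and running a fold like this over HDFS data (for example via the tesser.hadoop module) is left to the article itself.

```clojure
;; Hedged sketch of a commutative, parallel fold with Tesser.
;; The records and the :weight key are made up for illustration.
(require '[tesser.core :as t]
         '[tesser.math :as m])

(def records
  [{:weight 1.0} {:weight 2.0} {:weight 3.0} {:weight 4.0}])

;; Threading combinators builds a fold description; nothing runs until
;; t/tesser is handed a sequence of chunks, which are reduced in parallel
;; and then combined, so the reduction must be associative and commutative.
(def mean-weight
  (->> (t/map :weight)
       (t/filter number?)
       (m/mean)))

(t/tesser (partition-all 2 records) mean-weight)
;; => 2.5
```
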
IoT Technology

Protected: Apache Spark’s processing model for distributed data processing

Apache Spark's processing model for distributed data processing, used for digital transformation, artificial intelligence, and machine learning tasks (Executor, Task, Scheduler, Driver Program, Master Node, Worker Node, Spark Standalone, Mesos, Hadoop, HDFS, YARN, Partitions, RDD, Transformations, Actions, Resilient Distributed Dataset)
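
As a hedged sketch of the model summarised above (lazy transformations on partitioned RDDs, actions that make the driver schedule tasks on executors), the snippet below uses the Sparkling wrapper in local mode; the master setting, app name, and toy data are assumptions for illustration.

```clojure
;; Hedged sketch of Spark's RDD model via Sparkling: transformations
;; (filter, map) only describe the computation; an action (reduce)
;; makes the driver schedule tasks on the executors and return a value.
(require '[sparkling.conf :as conf]
         '[sparkling.core :as spark])

(def sc
  ;; Local mode for illustration; a real deployment would point the
  ;; master at Spark Standalone, Mesos, or YARN.
  (spark/spark-context
   (-> (conf/spark-conf)
       (conf/master "local[2]")
       (conf/app-name "rdd-model-sketch"))))

(def numbers (spark/parallelize sc (range 1 101)))  ; RDD split into partitions

(def evens   (spark/filter even? numbers))          ; transformation: lazy
(def squares (spark/map (fn [n] (* n n)) evens))    ; transformation: lazy

(spark/reduce + squares)                            ; action: triggers execution
;; => sum of the squares of the even numbers from 2 to 100
```
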