K-means clustering is a type of unsupervised learning, which is used when you have unlabeled data (i.e., data without defined categories or groups). The goal of this algorithm is to find groups in the data, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of K groups based on the features that are provided. Data points are clustered based on feature similarity. The results of the K-means clustering algorithm are:

  • The centroids of the K clusters, which can be used to label new data
  • Labels for the training data (each data point is assigned to a single cluster)
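
To make the assign/update iteration concrete, here is a minimal, self-contained sketch of the textbook K-means loop in plain Java. It is illustrative only, not the Harp implementation this tutorial runs, and it clusters random points rather than the MNIST subset used below.

import java.util.Arrays;
import java.util.Random;

public class KMeansSketch {
  public static void main(String[] args) {
    Random rnd = new Random(0);
    int n = 1000, d = 2, k = 3, iterations = 10;

    // Random data points (the tutorial itself uses an MNIST subset).
    double[][] points = new double[n][d];
    for (double[] p : points)
      for (int j = 0; j < d; j++) p[j] = rnd.nextDouble();

    // Initialize centroids from the first k points.
    double[][] centroids = new double[k][d];
    for (int c = 0; c < k; c++) centroids[c] = points[c].clone();

    int[] labels = new int[n];
    for (int iter = 0; iter < iterations; iter++) {
      // Assignment step: each point goes to its nearest centroid.
      for (int i = 0; i < n; i++) {
        double best = Double.MAX_VALUE;
        for (int c = 0; c < k; c++) {
          double dist = 0;
          for (int j = 0; j < d; j++) {
            double diff = points[i][j] - centroids[c][j];
            dist += diff * diff;
          }
          if (dist < best) { best = dist; labels[i] = c; }
        }
      }
      // Update step: each centroid becomes the mean of its assigned points.
      double[][] sums = new double[k][d];
      int[] counts = new int[k];
      for (int i = 0; i < n; i++) {
        counts[labels[i]]++;
        for (int j = 0; j < d; j++) sums[labels[i]][j] += points[i][j];
      }
      for (int c = 0; c < k; c++)
        if (counts[c] > 0)
          for (int j = 0; j < d; j++) centroids[c][j] = sums[c][j] / counts[c];
    }
    System.out.println("Final centroids: " + Arrays.deepToString(centroids));
  }
}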


Dataset Description

The dataset used is a subset of MNIST with 6,000 examples selected.

MapReduce Background

MapReduce is a software framework for easily writing applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. A MapReduce job usually splits the input dataset into independent chunks that are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file system. The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks. Typically the compute nodes and the storage nodes are the same; that is, the MapReduce framework and the Hadoop Distributed File System run on the same set of nodes. This configuration allows the framework to schedule tasks effectively on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster. The MapReduce framework consists of a single master ResourceManager, one slave NodeManager per cluster node, and one MRAppMaster per application. Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of the appropriate interfaces and/or abstract classes. These, and other job parameters, comprise the job configuration.
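
To make the map and reduce interfaces concrete, here is the classic word-count example written against Hadoop's standard Mapper and Reducer classes. It is background for the paragraph above, not part of this tutorial's K-means code.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {
  // Map: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce: the framework groups values by key; sum the counts per word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) sum += val.get();
      result.set(sum);
      context.write(key, result);
    }
  }
}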

Compile

To compile and run the example, you first need to install Harp and Hadoop:

Select the Maven profile that matches your Hadoop version, e.g. hadoop-2.6.0. The supported Hadoop versions are 2.6.0, 2.7.5, and 2.9.0.

cd $HARP_ROOT_DIR
mvn clean package -Phadoop-2.6.0
cd $HARP_ROOT_DIR/contrib/target
cp contrib-0.1.0.jar $HADOOP_HOME
cd $HADOOP_HOME

Run

hadoop jar contrib-0.1.0.jar edu.iu.kmeans.common.KmeansMapCollective <numOfDataPoints> <num of Centroids> <size of vector> <number of map tasks> <number of iteration> <workDir> <localDir> <communication operation>
  • <numOfDataPoints>: the number of data points to generate randomly
  • <num of Centroids>: the number of centroids to cluster the data into
  • <size of vector>: the dimensionality of each data point
  • <number of map tasks>: the number of map tasks
  • <number of iteration>: the number of iterations to run
  • <workDir>: the root directory for this run in HDFS
  • <localDir>: Harp K-means first generates files containing the data points in a local directory; this argument sets that directory
  • <communication operation>: how centroids are synchronized across workers (a conceptual sketch follows this list); one of:
    • [allreduce]: use the allreduce operation to synchronize centroids
    • [regroup-allgather]: use the regroup and allgather operations to synchronize centroids
    • [broadcast-reduce]: use the broadcast and reduce operations to synchronize centroids
    • [push-pull]: use the push and pull operations to synchronize centroids
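
All four options reach the same end state by different communication patterns. As a conceptual sketch only, the following plain-Java snippet simulates what allreduce achieves for K-means: every worker contributes its partial centroid sums and counts, and every worker ends up with the identical combined result. The class and method names here are hypothetical illustrations, not Harp's API, and the exchange is simulated in one process rather than over the network.

import java.util.List;

// Conceptual sketch (hypothetical names, not Harp's API): allreduce combines
// per-worker partial results and gives every worker the same combined copy.
public class AllreduceSketch {
  /** Per-worker partial state: sums[c][j] and counts[c] for each centroid c. */
  static class Partial {
    double[][] sums;
    int[] counts;
    Partial(int k, int d) { sums = new double[k][d]; counts = new int[k]; }
  }

  /** Element-wise combination of all workers' partials; in a real run this
      exchange happens over the network, here it is simulated in one process. */
  static Partial allreduce(List<Partial> perWorker, int k, int d) {
    Partial total = new Partial(k, d);
    for (Partial p : perWorker) {
      for (int c = 0; c < k; c++) {
        total.counts[c] += p.counts[c];
        for (int j = 0; j < d; j++) total.sums[c][j] += p.sums[c][j];
      }
    }
    return total; // every worker receives this same combined state
  }

  /** After allreduce, each worker recomputes identical new centroids. */
  static double[][] newCentroids(Partial total, int k, int d) {
    double[][] centroids = new double[k][d];
    for (int c = 0; c < k; c++)
      if (total.counts[c] > 0)
        for (int j = 0; j < d; j++)
          centroids[c][j] = total.sums[c][j] / total.counts[c];
    return centroids;
  }
}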

For example:

hadoop jar contrib-0.1.0.jar edu.iu.kmeans.common.KmeansMapCollective 1000 10 10 2 10 /kmeans /tmp/kmeans allreduce

Fetch the results:

hdfs dfs -ls /
hdfs dfs -cat /kmeans/centroids/*