Java MapReduce


You can compile and run a MapReduce program written in Java on our Hadoop cluster. Below is an example of how to do so. You can get the WordCount.java file from HDFS:

hdfs dfs -get /var/examples/WordCount.java
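You do not need to modify the file to follow along, but if you are curious what it contains, WordCount.java follows the standard Hadoop mapper/reducer pattern. The copy in HDFS may differ in its details; a minimal version, based on the standard Apache Hadoop WordCount example, looks roughly like this:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every word in the input split
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as combiner): sum the counts for each word
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input path in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output path in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}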


Then run the following:

javac -cp `hadoop classpath` WordCount.java
jar cf wc.jar WordCount*.class
hadoop jar wc.jar WordCount \
/var/examples/romeojuliet.txt /user/<your_uniqname>/wc-output


To view the output:

hdfs dfs -cat /user/<your_uniqname>/wc-output/part-r-00000

mrjob


Another way to run Hadoop jobs is through mrjob. mrjob is useful for testing a job on smaller data on another system (such as your laptop) and then running the same code, unchanged, on something larger, like a Hadoop cluster. To run an mrjob job on your laptop, simply remove “-r hadoop” from the command in the example below; a local run is shown at the end of this section.

A classic example is a word count, taken from the official mrjob documentation here.

Save this file as mrjob_test.py.

"""The classic MapReduce job: count the frequency of words.
"""
from mrjob.job import MRJob
import re

WORD_RE = re.compile(r"[\w']+")


class MRWordFreqCount(MRJob):

    def mapper(self, _, line):
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner(self, word, counts):
        yield (word, sum(counts))

    def reducer(self, word, counts):
        yield (word, sum(counts))


if __name__ == '__main__':
    MRWordFreqCount.run()

Then, run the following command:

python mrjob_test.py -r hadoop /etc/motd

You should see output containing the word counts for the file /etc/motd. You can also try this with any other text file you have.
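For example, to run the same job locally on your laptop instead of on the cluster, drop the -r hadoop option:

python mrjob_test.py /etc/motd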

Beeline


Beeline is an alternative to using the Hive CLI. While the Hive CLI connects directly to HDFS and the Hive Metastore, Beeline connects to HiveServer2. More information comparing the two can be found here.

To launch Hive using Beeline, run the command below. All Hive queries can be run normally once connected through Beeline.

beeline -u 'jdbc:hive2://fladoop-nn02.arc-ts.umich.edu:2181,fladoop-nn01.arc-ts.umich.edu:2181,fladoop-rm01.arc-ts.umich.edu:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2'

# Set your queue in the CLI
set tez.queue.name=<your_queue>;
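Once connected, you can run the same statements you would in the Hive CLI, including the queries from the Hive section below. For example, a quick check that the connection works:

SHOW DATABASES;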

Streaming (Other Programming Methods)


It is also possible to write a job in any programming language, such as Python or C, that operates on tab-separated key-value pairs. The same ngrams example used in the Hive and Pig sections can also be written in Python and submitted as a Hadoop job using Hadoop Streaming. Submitting a job with Hadoop Streaming requires writing a mapper and a reducer. The mapper reads input line by line and generates key-value pairs, which the reducer then “reduces” into the final result. In our case, the mapper reads each line and, if the ngram on that line appeared in only a single volume, outputs the year as the key and a ‘1’ as the value. The Python code to do this is:

(Save this file as map.py)

#!/usr/bin/env python2.7
"""Mapper: for each ngram that appears in only one volume, emit (year, 1)."""
import fileinput

for line in fileinput.input():
    # Each input line is: ngram <TAB> year <TAB> count <TAB> volumes
    arr = line.split("\t")
    try:
        if int(arr[3]) == 1:
            print("\t".join([arr[1], '1']))
    except (IndexError, ValueError):
        # Skip malformed lines
        pass


Once the mapper has done this, the reducer merely needs to sum the values for each key:

(Save this file as red.py)

#!/usr/bin/env python2.7
"""Reducer: sum the mapper's counts for each year."""
import fileinput

data = dict()

for line in fileinput.input():
    # Each input line is: year <TAB> 1
    arr = line.split("\t")
    if arr[0] not in data:
        data[arr[0]] = int(arr[1])
    else:
        data[arr[0]] = data[arr[0]] + int(arr[1])

for key in data:
    print("\t".join([key, str(data[key])]))
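Because Hadoop Streaming just passes tab-separated lines over standard input and output, you can sanity-check the two scripts by hand before submitting a job. Below is a rough local test using a few made-up ngram records (each record is ngram, year, count, volumes; the sort step stands in for Hadoop's shuffle):

printf 'foo\t1950\t7\t1\nbar\t1950\t3\t2\nbaz\t1951\t4\t1\n' \
 | python2.7 map.py | sort | python2.7 red.py

With these made-up records, the output should be one line per year: 1950 and 1951, each with a count of 1.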


Submit the streaming job by running the command below:

yarn jar $HADOOP_STREAMING \
 -Dmapreduce.job.queuename=<your_queue> \
 -input /var/ngrams/data \
 -output ngrams-out \
 -mapper map.py \
 -reducer red.py \
 -file map.py \
 -file red.py \
 -numReduceTasks 10


To view the last few lines of the output:

hdfs dfs -cat ngrams-out/* | tail -5

When you are done, remove the output directory:

hdfs dfs -rm -r -skipTrash /user/<your_uniqname>/ngrams-out

Pig


Pig is similar to Hive and can perform the same analysis (the single-volume ngram count shown in the Hive section). The Pig code is a little longer because of its design. However, writing long Pig code is generally easier than writing multiple SQL queries that chain together, since Pig’s language, Pig Latin, allows for variables and other high-level constructs.

# Open the interactive pig console
pig -Dtez.job.queuename=<your_queue>

# Load the data
ngrams = LOAD '/var/ngrams' USING PigStorage('\t')
    AS (ngram:chararray, year:int, count:long, volumes:long);

# Look at the schema of the ngrams variable
describe ngrams;

# Count the total number of rows (should be 1430731493)
ngrp = GROUP ngrams ALL;
count = FOREACH ngrp GENERATE COUNT(ngrams);
DUMP count;

# Select the number of words, by year, that have only appeared in a single volume
one_volume = FILTER ngrams BY volumes == 1;
by_year = GROUP one_volume BY year;
yearly_count = FOREACH by_year GENERATE group, COUNT(one_volume);
DUMP yearly_count;
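If you would rather not type these statements interactively, you can also save them to a file and pass it to pig (the file name ngrams.pig is just an example):

pig -Dtez.queue.name=<your_queue> ngrams.pig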


More information on Pig can be found on the Apache website.

Hive


Below is a short tutorial demonstrating Hive. The tutorial uses the Google NGrams dataset, which is available in HDFS in /var/ngrams.

# Open the interactive hive console
hive --hiveconf tez.queue.name=<your_queue>

# Create a table with the Google NGrams data in /var/ngrams
CREATE EXTERNAL TABLE ngrams_<your_uniqname>(ngram STRING, year INT, count BIGINT, volumes BIGINT)
     ROW FORMAT DELIMITED
     FIELDS TERMINATED BY '\t'
     STORED AS TEXTFILE
     LOCATION '/var/ngrams';

# Look at the schema of the table
DESCRIBE ngrams_<your_uniqname>;

# Count the total number of rows (should be 1430731493)
SELECT COUNT(*) FROM ngrams_<your_uniqname>;

# Select the number of words, by year, that have only appeared in a single volume
SELECT year, COUNT(ngram) FROM ngrams_<your_uniqname>
    WHERE volumes = 1
    GROUP BY year;

# Optional: delete your ngrams table
DROP table ngrams_<your_uniqname>;

# Exit the Hive console
QUIT;


More information can be found on the Apache website.