Chukwa usage with Hadoop 2 (2.4.0) and HBase 0.98

If you want to use Chukwa with Hadoop 2.4.0 and HBase 0.98, you need to exchange some jars, because the trunk version (at the time of writing) is built against older versions of Hadoop and HBase.

To be sure I had the newest version, I cloned the sources from the GitHub repository

https://github.com/apache/chukwa

using the git command
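For example:

# clone the repository and enter the working copy
git clone https://github.com/apache/chukwa.git
cd chukwa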

In the local chukwa directory the build is easily done with Maven.
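A typical invocation (the exact flags may differ):

# build the distribution tarball; tests are skipped to speed things up
mvn clean package -DskipTests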

After a successful Maven build the fresh Chukwa tarball is available in the target subdirectory.

Now we can untar the tarball in a place of our choice and configure Chukwa as described in the tutorial. The HBase schema file has some small typos which I corrected in a fork:

https://github.com/woopi/chukwa/commit/c962186667700e04fe0ed5c040322ee77f3042da
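The schema is then loaded into HBase, for example like this (the path is where the schema file lives inside the untarred Chukwa directory; adjust it if your layout differs):

hbase shell < $CHUKWA_HOME/etc/chukwa/hbase.schema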

Before starting Chukwa we have to exchange some jars. First, move aside the jars that were built against the older Hadoop/HBase versions.
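For example (the lib directory and the jar name patterns are assumptions; list the directory first and move aside whatever carries the old version numbers):

# $CHUKWA_HOME/share/chukwa/lib is an assumption -- use the lib directory of your Chukwa installation
cd $CHUKWA_HOME/share/chukwa/lib
mkdir -p disabled
mv hadoop-*.jar hbase-*.jar disabled/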

Then copy the matching jars from the Hadoop and/or HBase distribution into the same Chukwa lib directory.
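For example (version numbers and source paths are illustrative; adjust them to your installation):

cp $HADOOP_HOME/share/hadoop/common/hadoop-common-2.4.0.jar   $CHUKWA_HOME/share/chukwa/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.4.0.jar $CHUKWA_HOME/share/chukwa/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar       $CHUKWA_HOME/share/chukwa/lib/
cp $HBASE_HOME/lib/hbase-client-0.98.*.jar                    $CHUKWA_HOME/share/chukwa/lib/
cp $HBASE_HOME/lib/hbase-common-0.98.*.jar                    $CHUKWA_HOME/share/chukwa/lib/
cp $HBASE_HOME/lib/hbase-protocol-0.98.*.jar                  $CHUKWA_HOME/share/chukwa/lib/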

Now I started a Chukwa collector on my namenode and an agent on a Linux box. After some minutes the first log data can be seen in HBase.
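For a quick check, scan one of the tables created by the Chukwa HBase schema (the table name SystemMetrics is taken from the schema file; adjust it if your schema differs):

# show a couple of rows of collected system metrics
echo "scan 'SystemMetrics', {LIMIT => 2}" | hbase shell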

If writing to HDFS instead, the sink files show up in the collector's output directory.
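For example (/chukwa/logs is a common default for the sink directory; check the collector configuration if yours differs):

hdfs dfs -ls /chukwa/logs
# the collector writes *.chukwa sink files here and renames them to *.done once they are closed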

 

Apache Hadoop 2.4.0 binary build for 64-bit Debian Linux

At the time of writing this blog, the Apache Hadoop builds from the Apache website are compiled for 32-bit platforms. If you use them on a 64-bit platform (with a 64-bit Java) you might get error messages regarding the shared library

libhadoop.so.1.0.0
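On a 64-bit box this typically shows up as the well-known native-code loader warning (the exact wording can vary between commands and versions):

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable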

I have now compiled Hadoop 2.4.0 on Debian 7.4 (wheezy) 64-bit.

With this build the error message did not appear again.

The error message is a bit misleading, but some other blogs, as well as a small hint on the documentation page

Native library documentation

directed me to compiling Hadoop from the source tarball. The build was straightforward with the command

mvn package -Pdist,native -DskipTests=true -Dtar

after installing the necessary packages that had been missing on my box.
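On Debian 7 the usual suspects for the native build are roughly the following (the exact list depends on what is already installed):

# build tool chain and native dependencies
sudo apt-get install build-essential cmake zlib1g-dev libssl-dev pkg-config
# Hadoop 2.4.0 needs protoc 2.5.0; the wheezy package is older,
# so protobuf 2.5.0 may have to be built from source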

For those who want to start directly with a tarball compiled for 64-bit platforms, here is my Hadoop 2.4.0 bundle:

hadoop-2.4.0-64bit.tar.gz

Lambda Architecture vs. Java 8 Lambdas

The Lambda buzzword currently appears quite often in IT publications. Some readers may get confused and put an article into the wrong context, because there are two completely different topics that share the term Lambda (λ):

1. Java SE 8 Lambdas

Lambdas, introduced with the new Java SE 8, target functional programming aspects.

http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html

With the introduction of lambdas, Java now has the ability to handle functions as method parameters. Other programming languages belonging to the class of functional languages are Haskell, Clojure, and Lisp.

Here is an example along the lines of the one on the Java SE documentation page:

Call a method and pass functions as parameters.
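A condensed sketch of that idea (class and method names are illustrative, not taken verbatim from the tutorial):

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class LambdaDemo {

    // A method that takes functions as parameters:
    // a Predicate to select elements and a Consumer to act on them.
    static <T> void processElements(List<T> source,
                                    Predicate<T> tester,
                                    Consumer<T> action) {
        for (T t : source) {
            if (tester.test(t)) {
                action.accept(t);
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);

        // The functions are passed as lambda expressions:
        // keep the even numbers and print each of them.
        processElements(numbers,
                        n -> n % 2 == 0,
                        n -> System.out.println(n));
    }
}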

 

2. Lambda Architecture (Big Data)

Lambda Architecture was introduced by Nathan Marz. Roughly speaking, it describes a design in the big data area which combines a batch layer of data processing (with higher latency) with a speed layer that uses stream processing tools like Storm to produce real-time views. The user gets data combined from both layers and therefore sees current data in (near) real time. Views of the batch layer, which typically uses Hadoop's MapReduce to aggregate/transform the raw input data, can be served by ElephantDB. The layer between the batch layer and the user is called the serving layer.
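As a toy illustration (not taken from the book), assume a simple pageview-count use case; the serving layer answers a query by merging the batch view with the real-time view:

import java.util.HashMap;
import java.util.Map;

public class LambdaQuery {

    // pageview counts computed by the batch layer: complete history up to the
    // last batch run, but hours old
    static Map<String, Long> batchView = new HashMap<>();

    // pageview counts computed by the speed layer: only the events that
    // arrived after the last batch run
    static Map<String, Long> realtimeView = new HashMap<>();

    // the serving layer combines both views to answer a query
    static long pageviews(String url) {
        return batchView.getOrDefault(url, 0L) + realtimeView.getOrDefault(url, 0L);
    }

    public static void main(String[] args) {
        batchView.put("/index.html", 1000L);   // result of the last MapReduce run
        realtimeView.put("/index.html", 42L);  // counted by Storm since that run
        System.out.println(pageviews("/index.html"));  // prints 1042
    }
}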

Have a look at Nathan Marz's book, chapter 1, section 1.7 "Summary of the Lambda Architecture", for more information:

http://www.manning.com/marz/BDmeapch1.pdf

[Figure: Lambda Architecture overview]

Hadoop WebHDFS usage in combination with HAR (Hadoop Archives) from PHP

Hadoop is not very efficient at storing a lot of small files.

If you nevertheless need to store and access many small files, you can use HAR to work around the small-file problem. Here are the steps I did to get access to the archived files from PHP.

1. Copy the files from the local filesystem to HDFS
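In this example the files are the Hadoop API javadoc shipped with the distribution (the local source path is an assumption):

hdfs dfs -mkdir -p /tmp/har
hdfs dfs -put share/doc/hadoop/api /tmp/har/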

2. Create a hadoop archive

Let Hadoop create ONE single HAR file named hadoop-api.har from the whole HDFS directory /tmp/har/.
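With /tmp/har as the parent path and /har as the output directory (the location used in the listings below), the command is:

hadoop archive -archiveName hadoop-api.har -p /tmp/har /har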

This command will start a MapReduce job that creates the HAR file without deleting the original small files in HDFS.

3. Delete the small files from HDFS
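Once the archive has been verified, the original directory can be removed:

hdfs dfs -rm -r /tmp/har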

4. HAR file content (HAR filesystem)
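Through the har:// scheme the archive can be browsed like a normal directory tree:

hdfs dfs -ls -R har:///har/hadoop-api.har | head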

5. HAR file content (HDFS filesystem)

The HAR file is not really just one file but a directory containing a couple of files. Let's have a look at it with raw HDFS commands.
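For example:

hdfs dfs -ls /har/hadoop-api.har
# expected entries: _index, _masterindex and one or more part-* files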

6. Structure of the HAR file index (how to access the individual files)

hdfs dfs -cat /har/hadoop-api.har/_index
...snip...
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fio%2Fserializer%2Fclass-use%2FJavaSerialization.html file part-0 17439924 4592 1401786436896+420+hadoop+supergroup
%2Fapi%2Fsrc-html%2Forg%2Fapache%2Fhadoop%2Frecord%2Fmeta%2FUtils.html file part-0 86374093 9779 1401786547239+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fhttp%2Flib%2FStaticUserWebFilter.StaticUserFilter.html file part-0 12578713 14088 1401786409718+420+hadoop+supergroup
%2Fapi%2Fsrc-html%2Forg%2Fapache%2Fhadoop%2Ffs%2Fftp%2FFTPFileSystem.html file part-0 33753102 56570 1401786511587+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Frecord%2Fcompiler%2Fclass-use%2FConsts.html file part-0 23911123 4493 1401786471791+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fsecurity%2Fproto%2Fclass-use%2FSecurityProtos.GetDelegationTokenResponseProto.html file part-0 27203013 22194 1401786486455+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fha%2FFenceMethod.html file part-0 9642822 11698 1401786398308+420+hadoop+supergroup

=>  %2Fapi%2Forg%2Fapache%2Fhadoop%2Fservice%2Fclass-use%2FService.html file part-0 27995653 13268 1401786490938+420+hadoop+supergroup
                                                                                    --------
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fha%2Fproto%2Fclass-use%2FHAServiceProtocolProtos.MonitorHealthRequestProto.html file part-0 11807109 26499 1401786404845+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Ffs%2Fpermission%2Fclass-use%2FAccessControlException.html file part-0 8820869 4647 1401786392428+420+hadoop+supergroup
%2Fapi%2Fsrc-html%2Forg%2Fapache%2Fhadoop%2Fipc%2Fprotobuf%2FRpcHeaderProtos.RpcRequestHeaderProto.OperationProto.html file part-0 76175228 529335 1401786535978+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Ffs%2FHardLink.LinkStats.html file part-0 6370459 15816 1401786381184+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fio%2FMapFile.Reader.html file part-0 13487270 42673 1401786413666+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fio%2FRawComparator.html file part-0 13752188 12048 1401786415087+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Frecord%2Fcompiler%2Fclass-use%2FJByte.html file part-0 23924635 4482 1401786472012+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Futil%2Fclass-use%2FShell.OSType.html file part-0 29576158 7120 1401786501833+420+hadoop+supergroup
%2Fapi%2Fsrc-html%2Forg%2Fapache%2Fhadoop%2Fha%2Fproto%2FHAServiceProtocolProtos.TransitionToActiveRequestProtoOrBuilder.html file part-0 46601135 515881 1401786515905+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fnet%2Funix%2Fpackage-summary.html file part-0 23093614 4293 1401786467275+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fipc%2Fprotobuf%2Fclass-use%2FRpcHeaderProtos.RpcResponseHeaderProtoOrBuilder.html file part-0 20622979 7652 1401786447960+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Fio%2Fretry%2Fclass-use%2FAtMostOnce.html file part-0 17192339 4502 1401786434497+420+hadoop+supergroup
%2Fapi%2Forg%2Fapache%2Fhadoop%2Ftools%2Fproto%2Fclass-use%2FGetUserMappingsProtocolProtos.GetUserMappingsProtocolService.html file part-0 28531720 7322 1401786494023+420+hadoop+supergroup
...snip...

Each row of the index file contains several space-separated columns:

  • The URL-encoded path of the entry inside the HAR file
  • The type of the entry, i.e. file or dir
  • The HDFS part file which contains the content (e.g. part-0)
  • The offset of the content inside that part file
  • The length of the file in bytes
  • The metadata of the file: modification time, permission (decimal), owner and group, joined with +

7. Example access using curl
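Using the highlighted index row above (Service.html at offset 27995653 with length 13268), the file can be read directly from the part file via the WebHDFS OPEN operation; host, port and user name are placeholders:

curl -L "http://namenode:50070/webhdfs/v1/har/hadoop-api.har/part-0?op=OPEN&user.name=hadoop&offset=27995653&length=13268"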

8. PHP access to WebHDFS

In a simple PHP script the HAR index file is loaded, parsed and used to construct the URL to download the content of the file (inside the HAR), where the local/relative path is appended to the PHP script URL.

The part behind index.php [/api/org/apache/hadoop/ha/proto/class-use/HAServiceProtocolProtos.MonitorHealthRequestProto.html] is an example HTML file contained in the HAR archive.

The PHP script does, in essence, the following.
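A minimal sketch (host name, port and the Hadoop user are placeholders; error handling is kept to a minimum):

<?php
// Serve a file from inside a HAR archive via WebHDFS.
// NameNode address, user and HAR path are assumptions -- adjust to your cluster.
$nameNode = 'http://namenode:50070';
$harPath  = '/har/hadoop-api.har';
$user     = 'hadoop';

// Relative path of the requested file inside the HAR,
// e.g. /api/org/apache/hadoop/ha/FenceMethod.html
$file = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '';

// 1. Load the HAR index via WebHDFS (file_get_contents follows the redirect to the datanode).
$indexUrl = $nameNode . '/webhdfs/v1' . $harPath . '/_index?op=OPEN&user.name=' . $user;
$index = file_get_contents($indexUrl);

// 2. Find the index row for the requested file.
//    Paths in _index are stored with the slashes URL-encoded (%2F).
$encoded = str_replace('/', '%2F', $file);
foreach (explode("\n", $index) as $line) {
    $cols = explode(' ', trim($line));
    if (count($cols) >= 5 && $cols[0] === $encoded && $cols[1] === 'file') {
        $part   = $cols[2];  // e.g. part-0
        $offset = $cols[3];
        $length = $cols[4];

        // 3. Redirect the client to the matching byte range of the part file.
        header('Location: ' . $nameNode . '/webhdfs/v1' . $harPath . '/' . $part
             . '?op=OPEN&user.name=' . $user
             . '&offset=' . $offset . '&length=' . $length);
        exit;
    }
}

http_response_code(404);
echo 'not found in HAR: ' . htmlspecialchars($file);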

 9. Remarks

It would be great if the WebHDFS implementation allowed direct access to a HAR filesystem.
