Wednesday, December 10, 2014

Installing Ambari from scratch using Vagrant. Step by Step

As a developer, it is a pain to configure Hadoop from scratch every time.
I wanted a process that is easily reproducible.
So, at a high level, the steps involved are:

Make sure you have Vagrant installed.
Config: 3 nodes, 3.4 GB RAM, 1 core each
1 master, 2 slaves
Create a Vagrantfile (make sure your host machine has more than 10 GB of RAM)
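
A minimal Vagrantfile for this layout might look like the following sketch. The box name and private-network IPs are assumptions; swap in whatever base box and addresses fit your environment:

```ruby
# Sketch of a 3-node Vagrantfile: 1 master + 2 slaves, 3.4 GB / 1 core each.
# "centos65" and the 192.168.56.x IPs are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "centos65"
  (1..3).each do |i|
    config.vm.define "hadoop#{i}" do |node|
      node.vm.hostname = "hadoop#{i}"
      node.vm.network "private_network", ip: "192.168.56.10#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 3400   # ~3.4 GB per node
        vb.cpus   = 1
      end
    end
  end
end
```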

After creating this file, all you need to do is run 'vagrant up'.

Step 1: FQDN
Edit /etc/hosts on each node:
vi /etc/hosts

Prepend it with entries for hadoop1, hadoop2, and hadoop3.
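
The entries might look like this (the IPs and the example.com domain are assumptions; use whatever you assigned in your Vagrantfile, with the FQDN before the short name):

```
192.168.56.101  hadoop1.example.com  hadoop1
192.168.56.102  hadoop2.example.com  hadoop2
192.168.56.103  hadoop3.example.com  hadoop3
```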

(On all three nodes.)
You should be able to SSH between nodes using these hostnames, and 'hostname -f' should return the FQDN of each machine.

Install NTP on all nodes and make sure ntpd is running, so the cluster clocks stay in sync (Ambari expects this).
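
Assuming CentOS/RHEL guests (Ambari's usual target; on Ubuntu substitute apt-get and the ntp service), the commands would be along these lines:

```shell
# Run on every node: install NTP, start the daemon, and enable it at boot.
sudo yum install -y ntp
sudo service ntpd start
sudo chkconfig ntpd on
```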

Use the script below to prepare your cluster
./ hosts.txt

And voilà. Just run the commands below.

PS: Make sure you add entries for these hosts to the Windows hosts file as well, and check that each hostname is pingable from Windows.



Tuesday, November 11, 2014

MapR SingleNode in Ubuntu

Creating a single-node instance of MapR in Ubuntu.

Saturday, November 1, 2014

Using Pig to Load and Store data from HBase

Let's first store data from HDFS into our HBase table. For this we will be using the class HBaseStorage:

public HBaseStorage(String columnList) throws org.apache.commons.cli.ParseException, IOException

Warning: Make sure that your PIG_CLASSPATH refers to all the library files of HBase, Hadoop, and ZooKeeper. Doing this will save you countless hours of debugging.

Let's create an HBase table named testtable for our data.

Make sure that the first column of your relation is the row key when inserting into an HBase table.

>> cd $HBASE_HOME/bin
>> ./hbase shell

This will take you to your HBase shell

>> create 'testtable','cf'
>> list 'testtable'
>> scan 'testtable'

Now let's fire up the grunt shell and type in the following commands.
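
The grunt-shell commands might look like this sketch (the input path and the name/age columns are hypothetical; the point is that the first field of the relation becomes the HBase row key):

```pig
-- Load tab-separated data from HDFS: rowkey, name, age (path is an assumption)
raw = LOAD '/data/input.txt' USING PigStorage('\t')
      AS (id:chararray, name:chararray, age:chararray);

-- Store into HBase: first field = row key, the rest map to the listed columns
STORE raw INTO 'hbase://testtable'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:age');

-- Load it back; '-loadKey true' returns the row key as the first field
data = LOAD 'hbase://testtable'
       USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
             'cf:name cf:age', '-loadKey true')
       AS (id:chararray, name:chararray, age:chararray);
DUMP data;
```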

And ta-da...

Pig Casting and Schema Management

Pig is quite flexible when a schema needs to be manipulated.

Consider a data set loaded without explicit types.

Suppose we needed to define a schema after some processing; we could cast the columns to their data types.
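
A small sketch of what that looks like (the file scores.csv and its name/score fields are hypothetical):

```pig
-- Load with no schema: every field defaults to bytearray
raw = LOAD 'scores.csv' USING PigStorage(',');

-- After some processing, cast each positional field to a typed, named column
typed = FOREACH raw GENERATE (chararray)$0 AS name, (int)$1 AS score;

-- DESCRIBE now shows: typed: {name: chararray, score: int}
DESCRIBE typed;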

That's all for today, folks.


Friday, October 31, 2014

Sum in Pig

This is a simple Pig script I wrote to understand the concepts of Pig.
Well, here it goes.

Today we will see how to do a simple sum operation in Pig.

Consider this as my input data

The first example: a plain sum.
The second example: group, then sum.
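
Both examples might be sketched as follows (the input file and its dept/amount fields are hypothetical):

```pig
-- Tab-separated input: dept, amount
sales = LOAD 'data.txt' USING PigStorage('\t') AS (dept:chararray, amount:int);

-- Example 1: sum over the whole relation — GROUP ALL puts every row in one bag
all_rows = GROUP sales ALL;
total = FOREACH all_rows GENERATE SUM(sales.amount);

-- Example 2: group by a key, then sum per group
by_dept = GROUP sales BY dept;
dept_totals = FOREACH by_dept GENERATE group AS dept, SUM(sales.amount) AS total;
DUMP dept_totals;
```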

Wednesday, October 29, 2014

Installing R with RStudio in Ubuntu

Hello people,
Let's start the installation by opening your terminal

and typing in the following commands.

sudo apt-get install r-base
sudo apt-get install gdebi-core
sudo apt-get install libapparmor1
wget http://download2.rstudio.org/rstudio-server-0.98.1085-amd64.deb
sudo gdebi rstudio-server-0.98.1085-amd64.deb

And here you go with a brand new RStudio Server (browse to http://localhost:8787).

Apache Pig Day 1

Hello People,
I am increasingly spending more time working on Pig (thank god).
This experience has been very valuable, as it has increased my knowledge of data.
Pig is an open source project.
The language for developing Pig scripts is called Pig Latin.
It is easier to write a Pig script than MapReduce code (it saves developer time).
Pig is a dataflow language, meaning it lets users describe how data flows through a series of transformations.
A Pig Latin script describes a DAG (directed acyclic graph) of operations.
It was developed at Yahoo!.
The aim of the language is to find a sweet spot between SQL and shell scripts.
To sum it up, Pig lets you build data pipelines and run ETL workloads.

Friday, July 25, 2014

TO DO List

This list is in order of importance...
a) Data Structures and Algorithms
b) Machine Learning
c) Dev Ops (Linux)
d) Start a youtube channel
e) Spring Framework Learn

I guess this will take a lifetime to master.

Currently preparing for Hadoop Admin Certification.

Monday, June 16, 2014


I will be posting a series of posts on Pig.
I personally feel amazed at the simplicity and the power of this language.

Pig Cheat Sheet:


Tuesday, April 29, 2014

Crawling - Scrapy

What is Scrapy?

Scrapy is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.


OS Ubuntu

Installing Dependencies 
sudo apt-get install build-essential libssl-dev libffi-dev python-dev

Install scrapy
sudo pip install Scrapy

The above commands will install Scrapy.
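
To smoke-test the install, you can scaffold and run a spider with Scrapy's CLI (the project and spider names here are hypothetical):

```shell
# Create a project skeleton, generate a spider, and run it
scrapy startproject demo
cd demo
scrapy genspider quotes quotes.toscrape.com
scrapy crawl quotes
```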

Monday, April 28, 2014


I am taking the Algorithms course on Coursera.

Wednesday, April 23, 2014

Python week 4

Week 4 of coursera.
I've been busy with odd jobs and just not able to finish the assignments and be done with them.
I hope to complete all video lectures and assignments today.

Sunday, April 20, 2014

Julia Meetup

I had an amazing experience organizing the first Julia meetup at InMobi with Abhijit and Kiran.
Gave my first formal open source talk and it felt great.
Link to my slides -

Friday, April 18, 2014

Distributed Cache - Pig

I had been trying to use the distributed cache in Pig.
After a lot of trial and error, behold: SUCCESS!
Let's get to the meat.

Let's go through the steps.
a) Create an Eval UDF.
b) Register the file(s) for the distributed cache by overriding getCacheFiles().
c) Initialize your data structure from the file(s) shipped in step b.
d) Finally, apply your logic to the data.
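
A minimal sketch of such a UDF, assuming Pig's EvalFunc API (the class name, HDFS path, and tab-separated lookup format are all hypothetical):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class LookupUDF extends EvalFunc<String> {
    private Map<String, String> lookup;

    @Override
    public List<String> getCacheFiles() {
        // "hdfs-path#symlink": Pig ships the file to every task node,
        // where it appears in the working directory as ./lookup
        return Collections.singletonList("/data/lookup.txt#lookup");
    }

    @Override
    public String exec(Tuple input) throws IOException {
        if (lookup == null) {  // lazy init: read the cached file once per task
            lookup = new HashMap<>();
            try (BufferedReader r = new BufferedReader(new FileReader("./lookup"))) {
                String line;
                while ((line = r.readLine()) != null) {
                    String[] parts = line.split("\t");
                    lookup.put(parts[0], parts[1]);
                }
            }
        }
        return lookup.get((String) input.get(0));  // step d: apply the logic
    }
}
```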

Saturday, April 12, 2014

Python Week 3

Week 3 was easy. I also managed to score a whopping 92% in the test.
I am enjoying the mini assignments. Hope to complete everything.


Tuesday, April 8, 2014

Python Week 2

I completed the mini-project; however, I forgot to take my weekly quiz :( . I was mad at myself for this; after long research I found that I would be losing around ~2% from my final score.


Wednesday, March 26, 2014

Learning Python

Just for the record, I am a big fan of Python. The main reason: my frustration with Java.
I am taking the interactive python course in Coursera.
Just finished week 0.
Wish me luck. I want to complete at least 1 MOOC fully.
Will post weekly updates on how it goes.


I have always been excited by the NoSQL hype. End result: HBase certification.
I took up the cloudera certification.

My thoughts:
It was a good investment of time and money.
It really exposes you to the BigData/NoSQL space.
It improved my overall understanding of the NoSQL/BigData ecosystem.

Now off to prepare for the Cloudera Admin Program.


Wednesday, February 26, 2014

Using Sublime Text - JULIA (Ubuntu)

Installing Sublime Text 3

sudo add-apt-repository ppa:webupd8team/sublime-text-3
sudo apt-get update
sudo apt-get install sublime-text-installer

Run Julia

And then follow the steps on this site: