
Issues & Solutions: HDFS High Availability for Hadoop 2.X

Here I discuss a few errors/issues I ran into during the Hadoop HA setup.
Error 1)
When I started the ResourceManager from the active NameNode in Hadoop HA:
root@master:/opt/hadoop-2.2.0# sbin/ start resourcemanager

Problem binding to [master:8025] Cannot assign requested address
Solution:
Check your /etc/hosts file. If you have multiple entries for the same IP/localhost, delete the duplicates and make sure there is only one valid entry per host. I simply removed all the other entries for the IP '' from /etc/hosts on the standby node.
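For illustration, a cleaned-up /etc/hosts with exactly one entry per host might look like the following (the addresses and hostnames here are made up):

```
127.0.0.1      localhost
192.168.1.10   master
192.168.1.11   standby
```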
Error 2)
Once I had configured Hadoop HA and started the Hadoop cluster: root@master[bin]# hdfs namenode -format

FATAL namenode.NameNode:Exception in namenode join

org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown: Call From standby/ to master:8485 failed on connection exception:
Solution:
start all t…

Hadoop High Availability - Daemons overview

Below are a few concepts I came across while setting up Hadoop cluster High Availability.

Role of StandBy Node

  The Standby simply acts as a slave, maintaining enough state to provide a fast failover if necessary.

  In order to provide a fast failover, the Standby node must keep its state synchronized with the Active node. Both nodes communicate with a group of separate daemons called "JournalNodes" (JNs). When any namespace modification is performed by the Active node, it durably logs a record of the modification to a majority of these JNs.

 The Standby node is capable of reading the edits from the JNs, and is constantly watching them for changes to the edit log. As the Standby Node sees the edits, it applies them to its own namespace.

  In the event of a failover, the Standby will ensure that it has read all of the edits from the JournalNodes before promoting itself to the Active state. This ensures that the namespace state is fully synchronized before a failover …
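The majority-write rule described above can be sketched as follows. This is a toy model, not Hadoop's real classes: an edit is treated as durable once a majority of JournalNodes have logged it.

```python
# Toy model of the quorum rule the Active NameNode uses with JournalNodes:
# an edit is considered durable once a majority of JNs have logged it.
# The dict-based "JournalNode" and function names are illustrative only.

def majority(n):
    """Smallest number of acknowledgements that forms a quorum among n JNs."""
    return n // 2 + 1

def log_edit(journal_nodes, record):
    """Append `record` to every reachable JN; succeed only on a quorum."""
    acks = 0
    for jn in journal_nodes:
        if jn["up"]:
            jn["edits"].append(record)
            acks += 1
    return acks >= majority(len(journal_nodes))

# With 2 of 3 JNs reachable, a write still reaches a majority and succeeds;
# with only 1 of 3 reachable, it fails, and the Active cannot proceed.
jns = [{"up": True, "edits": []},
       {"up": True, "edits": []},
       {"up": False, "edits": []}]
ok = log_edit(jns, "mkdir /data")
```

This is why JN quorums are deployed in odd sizes (3, 5, ...): the cluster tolerates the loss of a minority of JournalNodes.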

Headroom Miscalculation can lead to Deadlock in Hadoop 2

What is headroom? It is defined as "the maximum resources an application can get". The headroom calculation is used mainly for reducer preemption. What happens if the headroom calculation is wrong?

In our production clusters, we observed that some MR jobs got stuck in a deadlock state. The scenario usually arises when some map tasks fail and the application master needs to reschedule them. Due to the headroom miscalculation, the application master thinks there are enough resources to relaunch the failed map tasks, so it does not preempt any reduce tasks. In fact, however, all the resources are occupied by the reducers. As a result, the reducers wait for all map outputs before they can finish, while the failed map tasks wait for the reducers to finish and release resources so that they get a chance to rerun.
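The deadlock condition can be captured in a small toy model. All numbers and names here are made up for illustration; "slots" stands in for whatever resource (memory, containers) YARN actually tracks.

```python
# Toy model of the headroom-miscalculation deadlock described above.

cluster_slots = 10
reducer_slots_in_use = 10            # reducers currently hold every slot
failed_maps_to_rerun = 2

reported_headroom = 2                # what the buggy calculation claims is free
actual_headroom = cluster_slots - reducer_slots_in_use   # 0 slots truly free

# The MR ApplicationMaster preempts reducers only when the headroom looks
# too small to rerun the failed maps.
am_preempts_reducers = reported_headroom < failed_maps_to_rerun

# No preemption plus no real capacity: maps wait on reducers for slots,
# reducers wait on map output -- a deadlock.
deadlocked = (not am_preempts_reducers) and (actual_headroom < failed_maps_to_rerun)
```

With a correct headroom of 0, `am_preempts_reducers` would be true and the cycle would be broken by preempting reducers.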

This issue is discussed in YARN-1198 and its sub JIRAs YARN-1857, YARN-1680, and YARN-2008.

This issue partly comes from the fact that the headroom is calculated in …

How to configure Hadoop HA for a running Hadoop 2 cluster

Sometimes we need to add HA to a running cluster. Taking a Hadoop 2.4.1 cluster as an example, I will list the steps to set up journal-quorum-based NameNode HA and ResourceManager HA.

Backup the HDFS Edit Files and fsimage Files
Before we do the upgrade, we need to back up the HDFS edits files and fsimage files. It is possible to use the following curl command to download the fsimage file from a running cluster, where the missing host in the URL is the NameNode IP.

$ curl -o fsimage.test ""
But be aware that the fsimage file only contains the metadata as of the last NameNode startup, since the NameNode merges the fsimage and edits files only during startup, as stated in the Hadoop documentation:

The NameNode stores modifications to the file system as a log appended to a native file system file, edits. When a NameNode starts up, it reads HDFS state from an image file, fsimage, and then applies edits from the edits log file. It …
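The quoted behavior is why the downloaded checkpoint can be stale; a minimal sketch (the dict-based "namespace" and operation tuples are purely illustrative):

```python
# Why a downloaded fsimage can be stale: the live namespace is the
# checkpoint (fsimage) plus every edit logged since the last startup,
# and the two are merged only when the NameNode starts.

def load_namespace(fsimage, edits):
    """Replay the edit log on top of the checkpoint, as a NameNode does at startup."""
    namespace = dict(fsimage)
    for op, path, value in edits:
        if op == "create":
            namespace[path] = value
        elif op == "delete":
            namespace.pop(path, None)
    return namespace

checkpoint = {"/data/a": "blk_1"}            # state at last startup
edits = [("create", "/data/b", "blk_2"),     # changes logged since then
         ("delete", "/data/a", None)]

live = load_namespace(checkpoint, edits)
# `checkpoint` (what the curl download returns) no longer matches `live`,
# so back up the edits files alongside the fsimage.
```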

How to configure replication factor and block size for HDFS?

Hadoop Distributed File System (HDFS) stores files as data blocks and distributes these blocks across the entire cluster. As HDFS was designed to be fault-tolerant and to run on commodity hardware, blocks are replicated a number of times to ensure high data availability. The replication factor is a property set in the HDFS configuration file that allows you to adjust the global replication factor for the entire cluster. For each block stored in HDFS, there will be n – 1 duplicated blocks distributed across the cluster. For example, if the replication factor is set to 3 (the default value in HDFS), there will be one original block and two replicas.

Open the hdfs-site.xml file. This file is usually found in the conf/ folder of the Hadoop installation directory. Change or add the following property to
hdfs-site.xml: <property> <name>dfs.replication</name> <value>3</value> <description>Block Replication</description> <pro…
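The title also mentions block size, which is controlled by a separate property in the same file. In Hadoop 2.x the property is dfs.blocksize, shown here with its default value of 128 MB as an illustration:

```xml
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
  <description>HDFS block size in bytes (128 MB, the Hadoop 2.x default)</description>
</property>
```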

Hadoop Solr Tutorial - Installation and Configuration

Apache Solr is an open source text search server based on the Apache Lucene search libraries. Solr provides full-text search, hit highlighting, and near real-time indexing.
It has an extremely scalable search infrastructure that provides replication, load-balanced search queries, and automatic failover. Solr can ingest the data to be indexed from various sources, including a database. You can use the HTTP/XML and JSON APIs provided by Apache Solr and write application code in any programming language.
The Apache Solr search server is written in Java, so it needs a servlet container in the backend to run. By default, when you install Apache Solr, it comes with Jetty as the servlet container, which you can use to run the examples.
In real life, however, you will want to install Apache Solr with a more robust servlet container such as Tomcat. This article explains how to install Solr with Tomcat.

Create a Solr account
On your system, c…

HIVE Sorting and Join

Sorting and Aggregating
Sorting data in Hive can be achieved by use of a standard ORDER BY clause, but there is a catch. ORDER BY produces a result that is totally sorted, as expected, but to do so it sets the number of reducers to one, making it very inefficient for large datasets. (Hopefully, a future release of Hive will employ the techniques described in Total Sort to support efficient parallel sorting.) When a globally sorted result is not required—and in many cases it isn’t—you can use Hive’s nonstandard extension, SORT BY, instead. SORT BY produces a sorted file per reducer. In some cases, you want to control which reducer a particular row goes to, typically so you can perform some subsequent aggregation. This is what Hive’s DISTRIBUTE BY clause does. Here’s an example to sort the weather dataset by year and temperature, in such a way as to ensure that all the rows for a given …
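A sketch of the DISTRIBUTE BY / SORT BY pattern, using a hypothetical records(year, temperature) table (not necessarily the exact query the excerpt was leading to):

```sql
-- Route all rows for a given year to the same reducer,
-- then sort within each reducer by year and temperature.
FROM records
SELECT year, temperature
DISTRIBUTE BY year
SORT BY year ASC, temperature DESC;
```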

Log Parsing through Hadoop, Hive & Python

One of the primary analyses done on web access logs is cohort analysis, where one needs to pull the user access date/time along with other dimensions like user, IP, geo data, etc. Here I will be using Hadoop/Hive/Python to pull date and IP data from an access log into Hadoop and run some queries. The example illustrates Hadoop (version 0.20.1) streaming, SerDe, and Hive’s (version 0.40) plugin custom mapper (get_access_log_ip).

The steps below load a few thousand rows into a target table (dw_log_ip_test – data warehouse access log) from “access_log_2010_01_25”, then extract the date from a format like DD/Mon/YYYY:HH:MM:SS -0800 into ‘DD/Mon/YYYY’, along with the remote IP address, through a Python streaming script.
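The date-and-IP extraction step can be sketched as a minimal Hadoop Streaming mapper. The tab-separated field layout (request_date, remote_ip, ...) mirrors the table created in Step 1 below; the actual script the post uses may differ.

```python
#!/usr/bin/env python
# Minimal streaming-mapper sketch: emit "DD/Mon/YYYY<TAB>remote_ip" for
# each input row, dropping the time-of-day and timezone offset.
import sys

def day_of(request_date):
    """'25/Jan/2010:13:14:15 -0800' -> '25/Jan/2010'"""
    return request_date.split(":", 1)[0]

def main():
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 2:        # skip malformed rows
            continue
        print(day_of(fields[0]) + "\t" + fields[1])

if __name__ == "__main__":
    main()
```

In Hive this would be wired in with a TRANSFORM ... USING clause pointing at the script.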

Step 1: First create a table for the access log (access_log_2010_01_25) and then load data into it.

hive> CREATE TABLE access_log_2010_01_25 ( request_date STRING, remote_ip STRING, method STRING, request STRING, protocol STRING, user STRING, status STRING, size ST…


Steps to install MySQL
Run the command sudo apt-get install mysql-server and give an appropriate username and password.

Using Sqoop to perform an import to Hadoop from MySQL
Download mysql-connector-java-5.1.28-bin.jar and move it to /usr/lib/sqoop/lib using the command
user@ubuntu:~$ sudo cp mysql-connector-java-5.1.28-bin.jar /usr/lib/sqoop/lib/
Log in to MySQL using the command
user@ubuntu:~$ mysql -u root -p
Log in to the secure shell using the command
user@ubuntu:~$ ssh localhost
Start Hadoop using the command
user@ubuntu:~$ bin/hadoop
Run the command
user@ubuntu:~$ sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root --password abc --table employees -m 1
This command imports the employees table from the sqoop database of MySQL to HDFS.

Error points
Check whether Hadoop is in safe mode using the command
user@ubuntu:~$ hadoop dfsadmin -safemode get
If you are getting "safemode is on", run the command
user@ubuntu:~$ hadoop dfsadmin -safemode leave
and run the import command again.


This section covers the installation settings of Hive on a standalone system as well as on a system existing as a node in a cluster.
INTRODUCTION
Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis. Apache Hive supports analysis of large datasets stored in Hadoop’s HDFS and compatible file systems such as the Amazon S3 filesystem. It provides an SQL-like language called HiveQL (Hive Query Language) while maintaining full support for map/reduce.

Hive Installation
Installing Hive: Browse to the link: Click the apache-hive-0.13.0-bin.tar.gz, save it, and extract it.
Commands:
user@ubuntu:~$ cd /usr/lib/
user@ubuntu:~$ sudo mkdir hive
user@ubuntu:~$ cd Downloads
user@ubuntu:~$ sudo mv apache-hive-0.13.0-bin /usr/lib/hive

Setting the Hive environment variable:
Commands:
user@ubuntu:~$ cd
user@ubuntu:~$ sudo gedit ~/.bashrc
Copy and paste the following lines at the end of the file:
# Set HIVE…