Hadoop Operations and Cluster Management Cookbook

Credits

About the Author

About the Reviewers

www.PacktPub.com

Preface

Big Data and Hadoop

Introduction

Defining a Big Data problem

Building a Hadoop-based Big Data platform

Choosing from Hadoop alternatives

Preparing for Hadoop Installation

Introduction

Choosing hardware for cluster nodes

Designing the cluster network

Configuring the cluster administrator machine

Creating the kickstart file and boot media

Installing the Linux operating system

Installing Java and other tools

Configuring SSH

Configuring a Hadoop Cluster

Introduction

Choosing a Hadoop version

Configuring Hadoop in pseudo-distributed mode

Configuring Hadoop in fully distributed mode

Validating Hadoop installation

Configuring ZooKeeper

Installing HBase

Installing Hive

Installing Pig

Installing Mahout

Managing a Hadoop Cluster

Introduction

Managing the HDFS cluster

Configuring SecondaryNameNode

Managing the MapReduce cluster

Managing TaskTracker

Decommissioning DataNode

Replacing a slave node

Managing MapReduce jobs

Checking job history from the web UI

Importing data to HDFS

Manipulating files on HDFS

Configuring the HDFS quota

Configuring the Capacity Scheduler

Configuring the Fair Scheduler

Configuring Hadoop daemon logging

Configuring Hadoop audit logging

Upgrading Hadoop

Hardening a Hadoop Cluster

Introduction

Configuring service-level authentication

Configuring job authorization with ACL

Securing a Hadoop cluster with Kerberos

Configuring web UI authentication

Recovering from NameNode failure

Configuring NameNode high availability

Configuring HDFS federation

Monitoring a Hadoop Cluster

Introduction

Monitoring a Hadoop cluster with JMX

Monitoring a Hadoop cluster with Ganglia

Monitoring a Hadoop cluster with Nagios

Monitoring a Hadoop cluster with Ambari

Monitoring a Hadoop cluster with Chukwa

Tuning a Hadoop Cluster for Best Performance

Introduction

Benchmarking and profiling a Hadoop cluster

Analyzing job history with Rumen

Benchmarking a Hadoop cluster with GridMix

Using Hadoop Vaidya to identify performance problems

Balancing data blocks for a Hadoop cluster

Choosing a proper block size

Using compression for input and output

Configuring speculative execution

Setting a proper number of map and reduce slots for the TaskTracker

Tuning the JobTracker configuration

Tuning the TaskTracker configuration

Tuning shuffle, merge, and sort parameters

Configuring memory for a Hadoop cluster

Setting a proper number of parallel copies

Tuning JVM parameters

Configuring JVM reuse

Configuring the reducer initialization time

Building a Hadoop Cluster with Amazon EC2 and S3

Introduction

Registering with Amazon Web Services (AWS)

Managing AWS security credentials

Preparing a local machine for EC2 connection

Creating an Amazon Machine Image (AMI)

Using S3 to host data

Configuring a Hadoop cluster with the new AMI

Index