Big Data and Hadoop - Big-data-chennai

Hadoop Developer - HDFS, MapReduce, Pig, Hive, HBase, Oozie, Flume, Sqoop


A Hadoop developer's responsibilities involve the actual coding and programming of Hadoop applications. Ideally the candidate should have at least two years of experience as a programmer. The Hadoop developer role is analogous to that of a software developer or application developer, but in the Big Data domain.

Hadoop Developer Job Description

A Hadoop developer is a consultant with prior experience in designing and building applications in the Hadoop ecosystem. Most job portals define the Hadoop developer job description as "a person comfortable explaining design concepts to customers as well as capable of managing a team of developers."

Hadoop Developer Roles and Responsibilities

Defining job flows
Managing and reviewing Hadoop log files
Managing Hadoop jobs using a scheduler
Providing cluster coordination services through ZooKeeper
Supporting MapReduce programs running on the Hadoop cluster

Skills Required

On most job sites, such as Monster, Dice, and Glassdoor, the Hadoop developer job description lists requirements for these specific skills:

Ability to write MapReduce jobs
Experience in writing Pig Latin scripts
Hands-on experience with HiveQL
Familiarity with data-loading tools such as Flume and Sqoop
Knowledge of workflow schedulers such as Oozie

Key Features

3 Projects with 6 Unique Data Sets Included

Free Java, Linux, and SQL training included

Free e-books, video material, and software provided at the end of the class

Course Curriculum

Lesson 1: Introduction to Hadoop

- Understand what Hadoop is
- Understand what Big Data is
- Learn about other open source software related to Hadoop
- Understand how Big Data solutions can work on the Cloud

Lesson 2: Hadoop Architecture

- Understand the main Hadoop components
- Learn how HDFS works
- List data access patterns for which HDFS is designed
- Describe how data is stored in an HDFS cluster
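To make the HDFS storage model above concrete, here is a minimal sketch in plain Java of how a file maps to blocks and replicas. The numbers are standard HDFS defaults (128 MB block size, replication factor 3); the class itself is illustrative arithmetic, not the real HDFS API.

```java
// Illustrative sketch of HDFS block accounting; not the real HDFS API.
public class HdfsBlockMath {
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // 128 MB default block size
    static final int REPLICATION = 3;                   // default replication factor

    // Number of blocks a file of the given size occupies (the last block may be partial).
    static long blockCount(long fileSizeBytes) {
        return (fileSizeBytes + BLOCK_SIZE - 1) / BLOCK_SIZE; // ceiling division
    }

    // Total block replicas stored across the cluster for that file.
    static long replicaCount(long fileSizeBytes) {
        return blockCount(fileSizeBytes) * REPLICATION;
    }

    public static void main(String[] args) {
        long oneGb = 1024L * 1024 * 1024;
        System.out.println(blockCount(oneGb));   // 8 blocks
        System.out.println(replicaCount(oneGb)); // 24 replicas
    }
}
```

So a 1 GB file occupies 8 blocks, and the cluster as a whole stores 24 block replicas for it.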

Lesson 3: Introduction to MapReduce

- Understand the concepts of map and reduce operations
- Describe how Hadoop executes a MapReduce job
- List MapReduce fault tolerance and scheduling features
- List MapReduce fundamental data types
- Describe a MapReduce data flow
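The map and reduce data flow above can be sketched in plain Java with the classic word-count example. A real job would use Hadoop's Mapper and Reducer classes; this self-contained simulation only shows the phases: map emits (word, 1) pairs, and grouping plus summing by key stands in for the shuffle and reduce phases.

```java
import java.util.*;
import java.util.stream.*;

// Plain-Java simulation of the MapReduce word-count data flow; a real job
// would use Hadoop's Mapper/Reducer classes, but the phases are the same.
public class WordCountFlow {
    // Map phase: each input line is turned into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle + reduce phases, collapsed for illustration: pairs are grouped
    // by key and the values for each key are summed.
    static Map<String, Integer> run(List<String> lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines)
            for (Map.Entry<String, Integer> kv : map(line))
                counts.merge(kv.getKey(), kv.getValue(), Integer::sum); // reduce: sum values
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("big data", "big hadoop data", "data")));
        // {big=2, data=3, hadoop=1}
    }
}
```

In a real cluster the shuffle moves each key's pairs to one reducer over the network; here the merge into a single map plays both roles.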

Lesson 4: Pig

- Installing and running Pig
- Grunt and Pig's data model
- Pig Latin
- Developing and testing Pig Latin scripts
- Writing evaluation, load, and store functions

Lesson 5: Hive

- Hive architecture
- Running Hive
- Comparison with traditional databases (schema on read versus schema on write; updates, transactions, and indexes)
- HiveQL (data types, operators, and functions)
- Tables (managed and external tables; partitions and buckets; storage formats; importing data; altering and dropping tables)
- Querying data (sorting and aggregating; MapReduce scripts; joins, subqueries, and views; map-side and reduce-side joins to optimize queries)
- User-defined functions
- Appending data to an existing Hive table
- Custom map/reduce in Hive

Lesson 6: HBase

- Client API: basics
- Client API: advanced features
- Client API: administrative features
- Available clients and architecture
- MapReduce integration
- Advanced usage
- Advanced indexing

Lesson 7: ZooKeeper

- Data model
- Sessions and states
- Building applications with ZooKeeper (ZooKeeper in production)

Lesson 8: Sqoop

- Database imports
- Working with imported data
- Importing large objects
- Performing exports
- Exports: a deeper look

Required skill set:

Basics of Java (classes, objects, inheritance, methods, etc.) and basics of SQL. Candidates from a non-Java background can also use Pig and Hive, languages developed precisely so that non-Java developers can work with Hadoop.
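The Java prerequisite amounts to being comfortable with code like the following small sketch of classes, objects, inheritance, and method overriding. The class names are invented for this illustration.

```java
// Minimal illustration of the Java prerequisites: classes, objects,
// inheritance, and method overriding. Class names are invented for this example.
public class JavaBasics {
    static class Animal {
        protected String name;
        Animal(String name) { this.name = name; }
        String speak() { return name + " makes a sound"; }
    }

    static class Dog extends Animal {       // inheritance
        Dog(String name) { super(name); }
        @Override
        String speak() {                    // method overriding
            return name + " barks";
        }
    }

    public static void main(String[] args) {
        Animal a = new Dog("Rex");          // polymorphism: Animal reference, Dog object
        System.out.println(a.speak());      // prints "Rex barks"
    }
}
```

If this reads naturally, the Java side of the prerequisites is covered; the SQL side is at a similar "basic SELECT and JOIN" level.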

Free Linux training
Free Java training
Free SQL training
Free basic networking training
