Sqoop Install Configure Run Hello World

Sqoop is a tool designed to transfer data between Hadoop and relational databases or mainframes. You can use Sqoop to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or from a mainframe, into the Hadoop Distributed File System (HDFS), transform the data with Hadoop MapReduce, and then export it back into an RDBMS.

For databases, Sqoop reads the table row by row into HDFS; for mainframe datasets, it reads records from each dataset into HDFS. The output of this import process is a set of files containing a copy of the imported table or datasets. Because the import runs in parallel, the output is split across multiple files. These files may be delimited text (for example, with commas or tabs separating each field) or binary Avro files or SequenceFiles containing serialized record data.
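
The output format and field delimiter are chosen at import time. A minimal sketch of both options (flags as documented for the Sqoop 1.4.x import tool, shown against the "employee" table created later in this post; each run needs a fresh target directory):

$ sqoop import --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee -m 1 --fields-terminated-by '\t' ## tab-delimited text instead of the default commas
$ sqoop import --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee -m 1 --as-sequencefile ## binary SequenceFile output instead of text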

A by-product of the import process is a generated Java class that encapsulates one row of the imported table.
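
The same class can also be produced without running an import, using Sqoop's codegen tool. A minimal sketch (run against the "testing" database created later in this post; by default the generated employee.java is written to the current working directory):

$ sqoop codegen --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee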

Versions we used

$ java -version ## from jdk-8u73-linux-x64.gz
java version "1.8.0_73"
$ hadoop version
Hadoop 2.6.3
$ mysql --user=xxx --password=xxx
Server version: 5.1.73 Source distribution

Prerequisites: Install & Configure MySQL

# sudo yum install mysql-server
# sudo service mysqld start
$ mysqladmin -u root password root
$ mysql --user=root --password=root
mysql> CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'localhost';
mysql> select * from mysql.user;
$ mysql --user=newuser --password=password --host=localhost
mysql> show databases;
| information_schema |
| mysql |
| test |

Prerequisites: Check Hadoop Cluster

$ hadoop version
Hadoop 2.6.3
$ start-dfs.sh
$ start-yarn.sh
$ mr-jobhistory-daemon.sh start historyserver
$ jps
1901 NameNode
2190 SecondaryNameNode
2026 DataNode
2333 ResourceManager
2432 NodeManager
2779 JobHistoryServer
$
$ mkdir ~/input
$ cp ~/hadoop2/etc/hadoop/* ~/input/.
$ hdfs dfs -mkdir /input
$ hdfs dfs -put ~/input/* /input
$ hdfs dfs -rm -r /output
$ hadoop jar /home/hadoop/hadoop-2.6.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.3.jar grep /input /output 'dfs[a-z.]+'
16/02/28 22:18:19 INFO mapreduce.Job: map 0% reduce 0%
16/02/28 22:19:48 INFO mapreduce.Job: map 100% reduce 100%

$ hdfs dfs -cat /output/part-r-00000 | head -3
1 dfsadmin
1 dfs.server.namenode.
1 dfs.replication
$

Download Sqoop

Download sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz from
http://sqoop.apache.org/

Install & Configure

$ tar -xvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
$ ln -s sqoop-1.4.6.bin__hadoop-2.0.4-alpha sqoop
$ cd /home/hadoop/sqoop/conf
$ cp sqoop-env-template.sh sqoop-env.sh
$ vi sqoop-env.sh
export HADOOP_MAPRED_HOME=/home/hadoop/hadoop2
export HADOOP_COMMON_HOME=/home/hadoop/hadoop2
$ vi ~/.bash_profile
export SQOOP_HOME=/home/hadoop/sqoop
export PATH=$SQOOP_HOME/bin:$PATH
$ . ~/.bash_profile

Install & Configure MySQL Connector

http://dev.mysql.com/downloads/connector/j/
mysql-connector-java-5.1.38.tar.gz
$ cd /home/hadoop/sqoop/lib
$ tar -tvf mysql-connector-java-5.1.38.tar.gz | grep bin.jar
-rw-r--r-- 0/0 983914 2015-12-02 03:02 mysql-connector-java-5.1.38/mysql-connector-java-5.1.38-bin.jar
$ tar -xvf mysql-connector-java-5.1.38.tar.gz mysql-connector-java-5.1.38/mysql-connector-java-5.1.38-bin.jar
$ find mysql-connector-java-5.1.38/mysql-connector-java-5.1.38-bin.jar
$ mv mysql-connector-java-5.1.38/mysql-connector-java-5.1.38-bin.jar /home/hadoop/sqoop/lib/.
$ rmdir mysql-connector-java-5.1.38
$ ls -l /home/hadoop/sqoop/lib/mysql-connector-java-5.1.38-bin.jar
-rw-r--r-- 1 hadoop hadoop 983914 Dec 2 03:02 /home/hadoop/sqoop/lib/mysql-connector-java-5.1.38-bin.jar
$

Create Test Data in MySQL to Load Using Sqoop

$ mysql --user=newuser --password=password --host=localhost
mysql> create database testing;
mysql> use testing;
mysql> create table employee(firstname varchar(255), lastname varchar(255));
mysql> insert into employee values ('pathik','paul');
mysql> insert into employee values ('ashok','sharma');
mysql> insert into employee values ('imon','paul');
mysql> insert into employee values ('ella','paul');
mysql> select * from employee;
mysql> create table employee2 as select * from employee;
mysql> select * from employee2;
mysql> create table employee3(firstname varchar(255), lastname varchar(255));

We now have two populated tables, "employee" and "employee2", in the database "testing", plus an empty "employee3" to use as the export target later.

Import One Table

$ sqoop/bin/sqoop-version
Sqoop 1.4.6

$ sqoop/bin/sqoop import --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee
16/03/21 16:59:19 ERROR tool.ImportTool: Error during import: No primary key could be found for table employee. Please specify one with --split-by or perform a sequential import with '-m 1'.

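The import fails because "employee" has no primary key, so Sqoop cannot decide how to split the rows across parallel mappers. The fix used below is a single sequential mapper (-m 1); the alternative is to name a split column explicitly (a sketch only, assuming a hypothetical integer id column that our toy table lacks):

$ sqoop/bin/sqoop import --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee --split-by id
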
$ sqoop/bin/sqoop import --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee -m 1
16/03/21 17:02:04 INFO mapreduce.Job: map 0% reduce 0%
16/03/21 17:02:10 INFO mapreduce.Job: map 100% reduce 0%
$ hdfs dfs -ls -R /user/hadoop/employee
-rw-r--r-- 1 hadoop supergroup 0 2016-03-21 17:02 /user/hadoop/employee/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 45 2016-03-21 17:02 /user/hadoop/employee/part-m-00000

$ hdfs dfs -cat /user/hadoop/employee/part-m-00000
pathik,paul
ashok,sharma
imon,paul
ella,paul
$
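
By default the imported files land under /user/<user>/<table> in HDFS. Both the destination and the rows imported can be overridden; a sketch using documented import flags (the target path is illustrative):

$ sqoop/bin/sqoop import --connect jdbc:mysql://localhost/testing --username newuser --password password --table employee -m 1 --target-dir /user/hadoop/employee_pauls --where "lastname = 'paul'"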

Import All Tables

$ hdfs dfs -rm -r /user/hadoop/employee
$ sqoop/bin/sqoop import-all-tables --connect jdbc:mysql://localhost/testing --username=newuser --password=password -m 1
$ hdfs dfs -ls -R /user/hadoop
drwxr-xr-x - hadoop supergroup 0 2016-01-16 12:34 /user/hadoop/employee
-rw-r--r-- 1 hadoop supergroup 0 2016-01-16 12:34 /user/hadoop/employee/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 48 2016-01-16 12:34 /user/hadoop/employee/part-m-00000
drwxr-xr-x - hadoop supergroup 0 2016-01-16 12:34 /user/hadoop/employee2
-rw-r--r-- 1 hadoop supergroup 0 2016-01-16 12:34 /user/hadoop/employee2/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 48 2016-01-16 12:34 /user/hadoop/employee2/part-m-00000
$
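
Note that import-all-tables applies the same no-primary-key rule to every table, hence -m 1 again. Individual tables can be skipped with --exclude-tables; a sketch:

$ sqoop/bin/sqoop import-all-tables --connect jdbc:mysql://localhost/testing --username=newuser --password=password -m 1 --exclude-tables employee2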

Sqoop to Export to RDBMS

$ sqoop/bin/sqoop export --connect jdbc:mysql://localhost/testing --username=newuser --password=password --table employee3 -m 1 --export-dir /user/hadoop/employee

$ mysql --user=newuser --password=password
mysql> use testing;
mysql> select * from employee3;
+-----------+----------+
| firstname | lastname |
+-----------+----------+
| pathik    | paul     |
| ashok     | sharma   |
| imon      | paul     |
| ella      | paul     |
+-----------+----------+
mysql>
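
The export tool requires the target table ("employee3" here) to exist in advance, and by default turns each HDFS record into an INSERT. It can UPDATE existing rows instead; a sketch (documented --update-key flag, with a key column chosen purely for illustration):

$ sqoop/bin/sqoop export --connect jdbc:mysql://localhost/testing --username=newuser --password=password --table employee3 -m 1 --export-dir /user/hadoop/employee --update-key firstname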

Sqoop Eval and other Commands

$ sqoop/bin/sqoop eval --connect jdbc:mysql://localhost/testing --username newuser --password password --query "select * from employee limit 2"
-----------------------------------------------
| firstname | lastname |
-----------------------------------------------
| pathik    | paul     |
| ashok     | sharma   |
-----------------------------------------------
$
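
eval simply executes the supplied SQL against the source database and prints the result, so it also works for quick DML; a sketch:

$ sqoop/bin/sqoop eval --connect jdbc:mysql://localhost/testing --username newuser --password password --query "insert into employee3 values ('test','row')"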

$ sqoop/bin/sqoop list-databases --connect jdbc:mysql://localhost/ --username newuser --password password
information_schema
mysql
test
testing
$

$ sqoop/bin/sqoop list-tables --connect jdbc:mysql://localhost/testing --username newuser --password password
employee
employee2
employee3
$

Sqoop on Cloudera 5.5

Create Test Data in MySQL to Load Using Sqoop (Cloudera 5.5)

$ mysql --user=retail_dba --password=cloudera --host=quickstart
mysql> use retail_db;
mysql> create table employee(firstname varchar(255), lastname varchar(255));
mysql> insert into employee values ('pathik','paul');
mysql> insert into employee values ('ashok','sharma');
mysql> insert into employee values ('imon','paul');
mysql> insert into employee values ('ella','paul');
mysql> select * from employee;
mysql> create table employee2 as select * from employee;
mysql> select * from employee2;
mysql> create table employee3(firstname varchar(255), lastname varchar(255));

We now have two populated tables, "employee" and "employee2", in the database "retail_db", plus an empty "employee3" for the export step.

Import One Table (Cloudera 5.5)

$ sqoop version
Sqoop 1.4.6-cdh5.5.0
$

$ sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table employee
16/03/21 16:59:19 ERROR tool.ImportTool: Error during import: No primary key could be found for table employee. Please specify one with --split-by or perform a sequential import with '-m 1'.

$ sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --table employee -m 1
16/03/21 17:02:04 INFO mapreduce.Job: map 0% reduce 0%
16/03/21 17:02:10 INFO mapreduce.Job: map 100% reduce 0%

$ hadoop fs -ls -R /user/cloudera
drwxr-xr-x - cloudera cloudera 0 2016-03-23 19:26 /user/cloudera/employee
-rw-r--r-- 1 cloudera cloudera 0 2016-03-23 19:26 /user/cloudera/employee/_SUCCESS
-rw-r--r-- 1 cloudera cloudera 45 2016-03-23 19:26 /user/cloudera/employee/part-m-00000
$

$ hadoop fs -cat /user/cloudera/employee/part-m-00000
pathik,paul
ashok,sharma
imon,paul
ella,paul
$

Sqoop to Export to RDBMS (Cloudera 5.5)

$ sqoop export --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table employee3 -m 1 --export-dir /user/cloudera/employee

$ mysql --user=retail_dba --password=cloudera
mysql> use retail_db;
mysql> select * from employee3;
+-----------+----------+
| firstname | lastname |
+-----------+----------+
| pathik    | paul     |
| ashok     | sharma   |
| imon      | paul     |
| ella      | paul     |
+-----------+----------+
mysql>

Sqoop Eval and other Commands (Same as above)

$ sqoop eval --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera --query "select * from employee limit 2"
-----------------------------------------------
| firstname | lastname |
-----------------------------------------------
| pathik    | paul     |
| ashok     | sharma   |
-----------------------------------------------
$

$ sqoop list-databases --connect jdbc:mysql://quickstart:3306/ --username retail_dba --password cloudera
information_schema
retail_db
$

$ sqoop list-tables --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba --password cloudera | grep emp
employee
employee2
employee3
$
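
A final note on hygiene: passing --password on the command line leaves the password in the shell history and process list. Sqoop 1.4.x also accepts -P to prompt for it interactively (or --password-file to read it from a protected file); a sketch:

$ sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username retail_dba -P --table employee -m 1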

Ref: http://www.tutorialspoint.com/sqoop

Author: Ankur Raj and Pathik Paul
