Hadoop is not a new name in the Big Data industry and has become an industry standard. Hadoop is open source software, written in Java, that is widely used to process large amounts of data across the nodes/computers of a cluster.
Get the most out of your data with CDH, the industry’s leading modern data management platform. Built entirely on open standards, CDH features a suite of innovative open source technologies to store, process, discover, model, serve, secure, and govern all types of data, cost effectively, at petabyte scale.
Today, huge amounts of data are produced at varying speed and in great variety, which is why we need Hadoop for parallel processing. Nearly every big company uses big data technology: Amazon, Adobe, Facebook, Yahoo, and Google, to name a few. See the complete list of companies and websites using Hadoop.
This post only shares Hadoop practice material for your self-study; we are not going to discuss Hadoop in detail in this article.
Currently there are three major third-party distributors providing customized Hadoop distributions, viz. Cloudera, Hortonworks, and MapR.
If you are interested in learning Hadoop, there are lots of resources available online. Whether you are preparing for Cloudera Hadoop certification or learning just for fun, you should try their demo QuickStart VM.
The Cloudera QuickStart VMs can be downloaded for VMware, VirtualBox, and KVM, and all require a 64-bit host operating system. This means you can run this sample Hadoop cluster only if you have a 64-bit OS and your computer supports the virtualization feature.
Note: Use this demo Hadoop VM only for learning purposes; it should not be used as a starting point for your cluster servers.
In this Cloudera Hadoop virtual machine (VM), you can test everything: CDH, Cloudera Manager, Cloudera Impala, and Cloudera Search.
Prerequisites for using Cloudera Hadoop Cluster VM
You must meet some requirements to use this Hadoop cluster VM from Cloudera. The requirements are given below.
1. The host computer must run a 64-bit OS.
2. To use a VMware VM, you must use a player compatible with WorkStation 8.x or higher.
3. The RAM requirement varies per environment, but a minimum of 4 GB is required.
Just go to the above link, fill in a few simple details, and get a direct download link. In an upcoming tutorial we will show how to use this VM.
Hopefully you will take advantage of this awesome FREE Cloudera Hadoop cluster VM, and it will surely help you learn Hadoop. You can also download Apache Hadoop as a tarball from the official Apache Hadoop project website and install it on your own server.
The purpose of this post is to provide instructions on getting started with the Cloudera Quickstart VM and to cover some of the main things to know about it: where to find certain configuration files, how to set up a few things that will make your life easier, and more.
The Cloudera Quickstart VM is a virtual machine that comes with a pseudo-distributed version of Hadoop preinstalled, along with the main services offered by Cloudera, most notably Cloudera Manager and Impala.
- Make sure your computer is set up to allow virtualization. This can be enabled in your BIOS on startup.
- To use the Cloudera Manager, you will need to allocate 10GB of RAM and 2 virtual CPU cores to your VM.
- The Cloudera Manager comes disabled by default; all the Hadoop daemons are started on boot and run just fine without it, so you don’t absolutely need the Cloudera Manager.
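If you are not sure whether virtualization is enabled, one quick check on a Linux host (a sketch; the CPU flag is vmx on Intel and svm on AMD) is:

```shell
# Count CPU flags that indicate hardware virtualization support.
# A result of 0 means the feature is absent or disabled in the BIOS.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```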
Latest Quickstart VM
Importing into VirtualBox
- Download the Quickstart VM with the above links
- Open VirtualBox
- Click on File -> Import Appliance
- Select the Quickstart VM you just downloaded
- Click Continue
- Optional: Double click on the name, and change it to whatever you want.
- Click Import
- Wait for the machine to import; when it is done, it will be listed in the window, ready to start up
Recommended VirtualBox Configurations
- Right click on the VirtualMachine and click Settings
- Set up the VM to allow you to copy and paste between that machine and your local one
- Click on General -> Advanced
- Set Shared Clipboard to Bidirectional
- Set up port forwarding from host port 2222 to guest port 22 to allow SSH to the machine
- Click on Network -> Advanced -> Port Forwarding
- Add a new entry
- Name: 2222
- Host Port: 2222
- Guest Port: 22
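If you prefer the command line, the same forwarding rule can be added with VBoxManage; a sketch, assuming the VM is named "Cloudera Quickstart" and its first adapter is in NAT mode:

```shell
# Rule "2222": forward host TCP port 2222 to guest port 22.
# Replace "Cloudera Quickstart" with the name you gave the VM on import.
VBoxManage modifyvm "Cloudera Quickstart" --natpf1 "2222,tcp,,2222,,22"
```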
SSH’ing to the Machine
Default SSH Credentials: cloudera/cloudera
Host to connect to: localhost
Because of the Recommended VirtualBox Configuration above, we’re forwarding connections from port 2222 to 22. So you would want to use port 2222 to connect.
- Open a command line terminal
- Use the ssh command to login
- Enter the password
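With the port forwarding from earlier in place, the login command looks like this (password: cloudera):

```shell
# Host port 2222 is forwarded to the VM's SSH port 22.
ssh -p 2222 cloudera@localhost
```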
- Open putty
- Set localhost as the Host Name
- Set 2222 as the port
- Connection Type: SSH
- Click open
- Enter the password
Setup password-less SSH (Optional)
- Generate a public and private key locally
- You can follow these instructions:
- Login to the machine with the instructions above
- Create the ~/.ssh directory
- Create the file ~/.ssh/authorized_keys
- Open the file
- Add your public key to the authorized_keys file
- Save the authorized_keys file
- Change the permissions of ~/.ssh
- Change the permissions of ~/.ssh/authorized_keys
- Change the permissions of the home directory: chmod 740 /home/cloudera/
- Now if you try SSH’ing to the machine, you shouldn’t have to provide the password
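Put together, the steps above might look like the following sketch; run the first command on your local machine and the rest inside the VM after logging in as cloudera:

```shell
# On your local machine: generate a key pair (if you don't already have one).
ssh-keygen -t rsa

# Inside the VM:
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
# Paste the contents of your local ~/.ssh/id_rsa.pub into
# ~/.ssh/authorized_keys, then tighten permissions so sshd accepts the file.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 740 /home/cloudera/
```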
Copying Files to the VM
- Open a command line terminal
- Use the following command:
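Given the 2222 port forward, a file can be copied with scp, which takes the port as a capital -P; myfile.txt is a placeholder for your own file:

```shell
# Copy a local file into the cloudera user's home directory on the VM.
scp -P 2222 myfile.txt cloudera@localhost:/home/cloudera/
```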
FileZilla or another FTP App
- Open your desired FTP Application
- Create a new connection
- Host: localhost
- Username: cloudera
- Password: cloudera
- Port: 2222 (the forwarded SSH port set up earlier)
Configure Apache Spark to Connect to Hive
If you’re intending to use Apache Spark, you will probably also want to connect to Hive through SparkSQL so you can interact with that relational store. To do this, you need to include the hive-site.xml file in the Spark configuration so Spark knows how to talk to Hive. If you don’t, the app will still run, but you won’t be able to view the tables you have in Hive and you won’t be able to store data in them.
- SSH into the Machine
- Login as root
- Create a symlink to Link the hive-site.xml in the spark conf directory
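The symlink might be created as follows; the paths are an assumption based on the usual CDH package layout, so verify them on your VM:

```shell
# Let Spark pick up Hive's configuration by linking hive-site.xml
# into Spark's conf directory (run as root).
ln -s /etc/hive/conf/hive-site.xml /etc/spark/conf/hive-site.xml
```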
Configure the Apache Spark History Server to allow you to view previously run Spark jobs
If you’re intending to use Apache Spark, you may end up trying to view past runs via the Apache Spark History Server. There is a small issue right off the bat with the Quickstart VM where you can’t view past runs, because of a permissions issue with the applicationHistory directory in HDFS (/user/spark/applicationHistory): the spark user is not able to read the contents of the directory. You can follow these steps to fix this:
- SSH into the Machine
- Login as hdfs user
- Run “$ sudo su” to log in as root, then “$ su hdfs”
- Change the permissions of the applicationHistory directory under the spark home directory in hdfs
- Now when you visit the Apache Spark History Server, you will see any past jobs that have run
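The permission change in the steps above is a single HDFS chmod; the exact mode is a judgment call, but making the directory world-readable and writable with the sticky bit set could look like:

```shell
# Run as the hdfs user: open up the Spark job history directory
# so the spark user can read and write its contents.
hdfs dfs -chmod -R 1777 /user/spark/applicationHistory
```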
Using Beeline to connect to Hive
Beeline is a new command line shell that is supported by HiveServer2. It is recommended to use this over the normal hive shell since it supports better security and functionality.
Starting Shell with beeline Command
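On the Quickstart VM, the command is simply:

```shell
beeline
```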
This will start the beeline shell.
Note: If you were to run a command such as “show tables” to list the Hive tables in the currently selected database at this point, you would get the following error:
No current connection
This is because you haven’t actually connected to HiveServer2 yet, so you can’t run Hive commands.
To connect you can run the following command. This will prompt you for credentials.
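Inside the shell, the connect command for the VM’s local HiveServer2 (default port 10000) looks like this; you will be prompted for a username and password:

```
!connect jdbc:hive2://localhost:10000/default
```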
To avoid having to enter credentials each time, you can include the username and password in the connect statement like so:
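With the default Quickstart credentials (cloudera/cloudera), beeline’s !connect also accepts the username and password as positional arguments:

```
!connect jdbc:hive2://localhost:10000/default cloudera cloudera
```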
Starting Shell with beeline Command and arguments
Instead of having to use the connect command upon starting the beeline shell, you can automatically connect to the HiveServer2 using command line arguments:
$ beeline -u jdbc:hive2://localhost:10000/default -n cloudera -p cloudera
Shutting down the Shell
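To leave the beeline shell cleanly, closing the JDBC connection, use its quit command rather than Ctrl-C:

```
!quit
```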