Hive setup is quite straightforward and can be completed in just a few steps. Below is a brief description of how to set up Hive. The prerequisites are:
1. Java 6
2. Hadoop 2.x installed, as explained in the section Hadoop Setup
3. Cygwin, in the case of a Windows OS
To install Hive:
1. Download the Hive tar from http://hive.apache.org/releases.html
2. $ tar -xzvf hive-x.y.z.tar.gz
3. Extract into /Users/nitin/software/hive-a.b.c
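Steps 2 and 3 can be sketched end to end. The tarball below is a stand-in built locally purely to demonstrate the tar flags; in practice you would use the hive-x.y.z.tar.gz archive downloaded from the release page.

```shell
# Build a stand-in tarball (hive-0.13.0 is a placeholder name,
# not a real download) just to demonstrate the extract step.
mkdir -p hive-0.13.0/bin && touch hive-0.13.0/bin/hive
tar -czf hive-0.13.0.tar.gz hive-0.13.0
rm -rf hive-0.13.0

# -x extract, -z gunzip, -v verbose listing, -f archive file
tar -xzvf hive-0.13.0.tar.gz
ls hive-0.13.0/bin
```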
Set the following environment variables:
1. $ export HIVE_HOME=/Users/application/apache-hive-0.13.0
2. $ export PATH=$HIVE_HOME/bin:$PATH
3. Add these exports to your bash profile so you do not have to set them again and again
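Step 3 can be sketched as follows; the HIVE_HOME path is the example path used above and should be adjusted to your actual install directory.

```shell
# Persist the Hive environment variables in the bash profile.
# The path below is this guide's example install location.
cat >> ~/.bash_profile <<'EOF'
export HIVE_HOME=/Users/application/apache-hive-0.13.0
export PATH=$HIVE_HOME/bin:$PATH
EOF

# Load the profile into the current session and confirm the variable is set
. ~/.bash_profile
echo "$HIVE_HOME"
```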
4. Launch the Hive shell as shown below
Hive can be launched with the hive command, or you can use the -e option to run a query inline from the command line.
By default, Hive ships with a database named default.
hive> show databases;
OK
default

Or:

$ hive -e 'show databases'
OK
default
HiveQL is not case sensitive, so 'SHOW DATABASES' works as well.
The command below displays the list of tables in the current database, which is 'default':
hive> show tables;
OK
A Hive table’s data is stored in the HDFS filesystem and is associated with a schema stored as metadata in the local Metastore.
Tables are of two basic types:
1. Managed table: Hive manages the data stored in HDFS, and if the table is dropped, its data is also removed from HDFS.
2. External table: an external table maps to data that already exists in the HDFS filesystem, and if the table is dropped, the data remains available in HDFS.
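The distinction can be illustrated with two DDL statements. This is a HiveQL sketch; the table names, columns, and the /tmp/staff location are hypothetical examples, not taken from this guide.

```sql
-- Managed table: Hive owns the data; DROP TABLE deletes it from HDFS.
CREATE TABLE staff_managed (name STRING, address STRING);

-- External table: Hive only tracks the schema; DROP TABLE leaves
-- the files under /tmp/staff in HDFS untouched.
CREATE EXTERNAL TABLE staff_external (name STRING, address STRING)
LOCATION '/tmp/staff';
```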
Create a file employee.txt containing comma-separated values:

name1,address1
name2,address2

Copy it to HDFS and create an external table over it:

$ hadoop fs -copyFromLocal employee.txt /tmp/employee/

CREATE EXTERNAL TABLE IF NOT EXISTS employee (NAME STRING, ADDRESS STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/tmp/employee';

hive> select * from employee;
OK
name1 address1
name2 address2
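The employee.txt file used above can be generated from the shell before the copyFromLocal step; a minimal sketch using the two sample rows from this guide:

```shell
# Write the two comma-delimited sample rows used by the external table.
printf 'name1,address1\nname2,address2\n' > employee.txt
cat employee.txt
```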