
Hadoop 2.0 Certification exam for Pig and Hive Developer Questions and Answers

Question 5

You have user profile records in your OLTP database that you want to join with web logs you have already ingested into the Hadoop file system. How will you obtain these user records?

Options:

A.

HDFS command

B.

Pig LOAD command

C.

Sqoop import

D.

Hive LOAD DATA command

E.

Ingest with Flume agents

F.

Ingest with Hadoop Streaming
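For context on the Sqoop option: Sqoop imports relational tables into HDFS over JDBC, writing delimited part files that map-side joins can then consume. A toy Python sketch of the pattern it automates (read table rows, emit delimited records), using an in-memory sqlite3 database as a stand-in for the OLTP source; the table name and columns are made up for illustration:

```python
import os
import sqlite3
import tempfile

# Stand-in for the relational source (Sqoop would reach it over JDBC).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user_profiles (id INTEGER, name TEXT)")
db.executemany("INSERT INTO user_profiles VALUES (?, ?)",
               [(1, "alice"), (2, "bob")])

# What `sqoop import` automates at scale: read the table and write
# delimited records to part files (in HDFS, via parallel map tasks).
out_path = os.path.join(tempfile.mkdtemp(), "part-m-00000")
with open(out_path, "w") as out:
    for row in db.execute("SELECT id, name FROM user_profiles ORDER BY id"):
        out.write(",".join(str(col) for col in row) + "\n")
```

The resulting file holds one comma-delimited record per row, ready to sit alongside the ingested web logs.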

Question 6

In Hadoop 2.0, which TWO of the following processes work together to provide automatic failover of the NameNode? Choose 2 answers

Options:

A.

ZKFailoverController

B.

ZooKeeper

C.

QuorumManager

D.

JournalNode
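For context: in HDFS HA, each NameNode runs a ZKFailoverController (ZKFC) that competes for an ephemeral lock znode in ZooKeeper; when the active NameNode's session lapses, ZooKeeper drops its ephemeral znode and the standby's ZKFC acquires the lock and triggers failover. A minimal sketch of that ephemeral-lock election, simulated entirely in-process rather than against a real ZooKeeper ensemble (class names and the znode path are illustrative):

```python
class FakeZooKeeper:
    """In-process stand-in for a ZooKeeper ensemble: ephemeral znodes
    disappear when their owning session expires."""
    def __init__(self):
        self.znodes = {}  # path -> owning session

    def create_ephemeral(self, path, session):
        if path in self.znodes:
            return False              # lock already held by another session
        self.znodes[path] = session
        return True

    def expire_session(self, session):
        # ZooKeeper deletes an expired session's ephemeral znodes.
        self.znodes = {p: s for p, s in self.znodes.items() if s != session}


class FakeZKFC:
    """Stand-in for the ZKFailoverController beside each NameNode."""
    LOCK = "/hadoop-ha/mycluster/ActiveStandbyElectorLock"

    def __init__(self, name, zk):
        self.name, self.zk, self.active = name, zk, False

    def try_become_active(self):
        self.active = self.zk.create_ephemeral(self.LOCK, self.name)
        return self.active


zk = FakeZooKeeper()
nn1, nn2 = FakeZKFC("nn1", zk), FakeZKFC("nn2", zk)
nn1.try_become_active()   # nn1 wins the lock -> active
nn2.try_become_active()   # nn2 loses -> standby
zk.expire_session("nn1")  # active NameNode's ZooKeeper session dies
nn2.try_become_active()   # standby's ZKFC grabs the lock -> failover
```

The real ZKFC also fences the old active before the standby takes over, a step this sketch omits.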

Question 7

All keys used for intermediate output from mappers must:

Options:

A.

Implement a splittable compression algorithm.

B.

Be a subclass of FileInputFormat.

C.

Implement WritableComparable.

D.

Override isSplitable.

E.

Implement a comparator for speedy sorting.
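For context on the WritableComparable option: intermediate keys must be both serializable (so they can cross the network during the shuffle) and totally ordered (so the framework can sort them before the reduce). Hadoop's actual interface is Java's org.apache.hadoop.io.WritableComparable, with write, readFields, and compareTo; the contract can be sketched in Python for illustration:

```python
import io
import struct
from functools import total_ordering

@total_ordering
class IntKey:
    """Sketch of the WritableComparable contract:
    binary-serializable and totally ordered."""
    def __init__(self, value=0):
        self.value = value

    # Analogue of Hadoop's write(DataOutput): emit a compact binary form.
    def write(self, out):
        out.write(struct.pack(">i", self.value))

    # Analogue of readFields(DataInput): reconstruct the key from bytes.
    def read_fields(self, inp):
        (self.value,) = struct.unpack(">i", inp.read(4))

    # Analogue of compareTo: defines the sort order of intermediate keys.
    def __eq__(self, other): return self.value == other.value
    def __lt__(self, other): return self.value < other.value

# Round-trip one key through its binary form, then sort a batch of keys
# the way the framework sorts map output before the reduce phase.
buf = io.BytesIO()
IntKey(7).write(buf)
buf.seek(0)
k = IntKey(); k.read_fields(buf)
sorted_values = [x.value for x in sorted([IntKey(3), IntKey(1), k])]
```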

Question 8

You have just executed a MapReduce job. Where is intermediate data written to after being emitted from the Mapper’s map method?

Options:

A.

Intermediate data is streamed across the network from the Mapper to the Reducer and is never written to disk.

B.

Into in-memory buffers on the TaskTracker node running the Mapper that spill over and are written into HDFS.

C.

Into in-memory buffers that spill over to the local file system of the TaskTracker node running the Mapper.

D.

Into in-memory buffers that spill over to the local file system (outside HDFS) of the TaskTracker node running the Reducer.

E.

Into in-memory buffers on the TaskTracker node running the Reducer that spill over and are written into HDFS.
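For context: map output is collected into an in-memory buffer (sized by mapreduce.task.io.sort.mb), and when the buffer passes a fill threshold it is sorted and spilled to the local disk of the node running the map task, not to HDFS. A toy sketch of that spill behavior, with an illustrative record-count limit standing in for the byte-sized buffer:

```python
import os
import tempfile

BUFFER_LIMIT = 4                 # stand-in for mapreduce.task.io.sort.mb
spill_dir = tempfile.mkdtemp()   # local filesystem, not HDFS
buffer, spills = [], []

def spill():
    """Sort the buffered records by key and write them to a local spill file."""
    path = os.path.join(spill_dir, f"spill{len(spills)}.out")
    with open(path, "w") as f:
        for k, v in sorted(buffer):
            f.write(f"{k}\t{v}\n")
    spills.append(path)
    buffer.clear()

def emit(key, value):
    """Collect one map output record; spill when the buffer fills."""
    buffer.append((key, value))
    if len(buffer) >= BUFFER_LIMIT:
        spill()

for word in ["the", "cat", "sat", "on", "the", "mat"]:
    emit(word, 1)
if buffer:
    spill()   # final flush; a real task then merges spills for the reducers
```

With six records and a limit of four, this produces two sorted spill files on local disk, mirroring why the shuffle can re-serve map output without touching HDFS.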