Apache Hadoop | Running MapReduce Jobs

After setting up your environment and starting the HDFS and YARN daemons, we can begin running MapReduce jobs on our local machine. We need to compile our code, produce a JAR file, move our inputs into HDFS, and run a MapReduce job on Hadoop (a rough sketch of these commands appears at the end of this section).

Step 1 - Configure extra environment variables

As a preface, it is best to set up some extra environment variables to make running jobs from the CLI quicker and easier. You can name these environment variables anything you want, but we will name them HADOOP_CP and HDFS_LOC so they do not conflict with other environment variables.

Open the Start Menu, type 'environment', and press Enter. A System Properties window should open. Click the Environment Variables button near the bottom right.

HADOOP_CP environment variable

This variable is used to compile your Java files. Backticks (e.g. `some command here`), which Unix shells use to run a command inline and substitute its output, do not work on Windows, so we store the expanded Hadoop classpath in the HADOOP_CP variable instead.
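
As a minimal sketch of how HADOOP_CP might be set and used from the Command Prompt (WordCount.java is a hypothetical job source file, not a name from this guide):

    :: Print the classpath Hadoop needs for compilation; copy the output
    :: into the HADOOP_CP variable using the dialog described above
    hadoop classpath

    :: In a new Command Prompt (so the new variable is picked up),
    :: compile a job against that classpath
    javac -classpath "%HADOOP_CP%" WordCount.java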
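
The remaining steps from the overview (packaging a JAR, moving inputs into HDFS, running the job) follow the same command-line pattern. As a rough sketch, where wordcount.jar, WordCount, input.txt, and the /user/hadoop/... paths are all placeholder names, not fixed values:

    :: Package the compiled classes into a JAR
    jar cf wordcount.jar WordCount*.class

    :: Copy the input files from the local disk into HDFS
    hdfs dfs -mkdir -p /user/hadoop/input
    hdfs dfs -put input.txt /user/hadoop/input

    :: Submit the job to YARN, naming the main class and the
    :: HDFS input and output paths
    hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output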