This article walks through an example analysis of MapReduce in Hadoop; readers interested in the topic may find the WordCount walkthrough below a useful reference.
MapReduce design philosophy
The processing flow of MapReduce's "Hello World" (Word Count)

MapReduce split size:
- max.split = 200M
- min.split = 50M
- block size = 128M
- split size = max(min.split, min(max.split, block)) = max(50M, min(200M, 128M)) = 128M
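As a quick check, this rule can be reproduced in a few lines of Java; a minimal sketch, assuming the three values above (the class name SplitSizeDemo and the property names in the comments are for illustration only):

// SplitSizeDemo.java
public class SplitSizeDemo {
    public static void main(String[] args) {
        // Values taken from the list above, not read from a live Hadoop configuration.
        long maxSplit  = 200L * 1024 * 1024; // e.g. mapreduce.input.fileinputformat.split.maxsize
        long minSplit  = 50L * 1024 * 1024;  // e.g. mapreduce.input.fileinputformat.split.minsize
        long blockSize = 128L * 1024 * 1024; // e.g. dfs.blocksize

        // The same rule FileInputFormat applies when computing splits:
        long splitSize = Math.max(minSplit, Math.min(maxSplit, blockSize));
        System.out.println((splitSize / (1024 * 1024)) + "M"); // prints 128M
    }
}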
Mapper

Reducer

Shuffle (the most complex stage)
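Between map and reduce, the shuffle partitions each map output record by key, sorts and spills it to local disk, merges the spills, and lets the reduce side fetch its partition. A minimal sketch of the partitioning step, equivalent in behavior to Hadoop's default HashPartitioner (the class name WordPartitioner is made up for illustration):

// WordPartitioner.java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // The same word always lands on the same reduce task, which is what lets
        // the reducer see all counts for a key; masking the sign bit keeps the
        // result non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

It could be wired in with job.setPartitionerClass(WordPartitioner.class), though for this example the default partitioner behaves identically.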

Appendix: WordCount, the "Hello World" of MapReduce
// WCJob.java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.StringUtils;

/**
 * MapReduce "Hello World" program.
 *
 * WCJob
 * @since V1.0.0
 * Created by SET on 2016-09-11 11:35:15
 * @see
 */
public class WCJob {

    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.set("fs.defaultFS", "hdfs://master:8020");
        // Note: the property name is dot-separated ("yarn.resourcemanager.hostname").
        config.set("yarn.resourcemanager.hostname", "slave2");

        FileSystem fs = FileSystem.newInstance(config);

        // Job.getInstance() replaces the deprecated new Job(Configuration) constructor.
        Job job = Job.getInstance(config);
        job.setJobName("word count");
        job.setJarByClass(WCJob.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setMapperClass(WCMapper.class);
        job.setReducerClass(WCReducer.class);
        // The reducer doubles as a combiner: summing counts is associative and
        // commutative, so partial sums on the map side are safe.
        job.setCombinerClass(WCReducer.class);

        FileInputFormat.addInputPath(job, new Path("/user/wc/wc"));

        // The output directory must not exist, so remove any leftover from a previous run.
        Path outputpath = new Path("/user/wc/output");
        if (fs.exists(outputpath)) {
            fs.delete(outputpath, true);
        }
        FileOutputFormat.setOutputPath(job, outputpath);

        boolean flag = job.waitForCompletion(true);
        if (flag) {
            System.out.println("Job success!");
        }
    }

    private static class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            /*
             * Input format: "hadoop hello world"
             * map receives one line at a time and splits it into words,
             * emitting (word, 1) for each.
             */
            String[] strs = StringUtils.split(value.toString(), ' ');
            for (String word : strs) {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }

    private static class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            // All values for the same word arrive together after the shuffle; sum them.
            int sum = 0;
            for (IntWritable intWritable : values) {
                sum += intWritable.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
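To try the job, one plausible workflow (the jar name and sample data are assumptions, not from the original article): package WCJob into a jar, upload a text file to /user/wc/wc, and submit it with hadoop jar wc.jar WCJob. For an input containing the lines "hadoop hello world" and "hello hadoop", the output file /user/wc/output/part-r-00000 would hold tab-separated counts sorted by key, roughly: hadoop 2, hello 2, world 1.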