org.apache.hadoop.examples.pi
Apache Hadoop MapReduce Examples
This package consists of a map/reduce application,
distbbp,
which computes exact binary digits of the mathematical constant π.
distbbp is designed for computing the nth bit of π,
for large n, say n > 100,000,000.
For computing the lower bits of π, consider using bbp.
The distbbp Program
The main class is DistBbp
and the actual computation is done by DistSum jobs.
The steps for launching the jobs are:
1. Initialize parameters.
2. Create a list of sums.
3. Read computed values from the given local directory.
4. Remove the computed values from the sums.
5. Partition the remaining sums into computation jobs.
6. Submit the computation jobs to a cluster and then wait for the results.
7. Write job outputs to the given local directory.
8. Combine the job outputs and print the π bits.
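The following is a minimal, single-machine sketch of this launch-and-resume flow.
It is not the actual DistBbp/DistSum code: the Sum record, the computeSum
placeholder, and the computed.txt checkpoint file are illustrative assumptions,
and the cluster submission in step 6 is replaced by a local loop.

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class LaunchSketch {
  /** A named piece of work; the real program partitions BBP-style series. */
  record Sum(String name, long from, long to) {}

  public static void main(String[] args) throws IOException {
    Path localDir = Paths.get(args.length > 0 ? args[0] : "local/output");
    Files.createDirectories(localDir);
    Path checkpoint = localDir.resolve("computed.txt");   // hypothetical file name

    // Steps 1-2: initialize parameters and create the list of sums.
    List<Sum> sums = new ArrayList<>();
    for (int i = 0; i < 4; i++)
      sums.add(new Sum("sum-" + i, i * 1000L, (i + 1) * 1000L));

    // Steps 3-4: read previously computed values and drop them from the work list.
    Map<String, Double> done = new HashMap<>();
    if (Files.exists(checkpoint))
      for (String line : Files.readAllLines(checkpoint)) {
        String[] parts = line.split("\t");
        done.put(parts[0], Double.parseDouble(parts[1]));
      }
    sums.removeIf(s -> done.containsKey(s.name()));

    // Steps 5-7: partition the remaining sums into jobs, run them, and write
    // each result to the local directory (sequentially here, instead of
    // submitting DistSum jobs to a cluster).
    try (BufferedWriter out = Files.newBufferedWriter(checkpoint,
        StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
      for (Sum s : sums) {
        double value = computeSum(s);            // stand-in for a DistSum job
        done.put(s.name(), value);
        out.write(s.name() + "\t" + value + "\n");
      }
    }

    // Step 8: combine the job outputs and print the result.
    double total = done.values().stream().mapToDouble(Double::doubleValue).sum();
    System.out.println("combined value = " + total);
  }

  /** Placeholder computation; the real jobs evaluate series terms exactly. */
  static double computeSum(Sum s) {
    double v = 0;
    for (long k = s.from(); k < s.to(); k++) v += 1.0 / (k + 1);
    return v;
  }
}

Because previously computed values are reloaded in steps 3-4, the real program
can be resumed after a failure by re-running it, as described under Command Line
Usages below.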
The Bits of π
The table below shows the results computed by distbbp.
- Row 0 to Row 7
  - They were computed by a single machine.
  - A single run of Row 7 took several seconds.
- Row 8 to Row 14
  - They were computed by a 7600-task-capacity cluster.
  - A single run of Row 14 took 27 hours.
  - The computations in Row 13 and Row 14 were completed on May 20, 2009.
    It seems that the corresponding bits were never computed before.
- The first part of Row 15 (6216B06)
  - The first 30% of the computation was done in idle cycles of some
    clusters spread over 20 days.
  - The remaining 70% was finished over a weekend on Hammer,
    a 30,000-task-capacity cluster, which was also used for the
    petabyte sort benchmark.
  - The log files are available here.
  - The result was posted in this YDN blog.
- The second part of Row 15 (D3611)
  - The starting position is 1,000,000,000,000,053, for 20 bits in total.
  - Two computations, at positions n and n+4, were performed.
  - A single computation was divided into 14,000 jobs with 7,000,000 tasks
    in total.  It took 208 years of CPU time, or 12 days on a
    7600-task-capacity cluster.
  - The log files are available here.
  - The computations were completed on June 30, 2009.  The last bit, the
    1,000,000,000,000,072nd bit, is probably the highest-position (i.e.,
    least significant) bit of π ever computed.
Row  Position n             π bits (in hex) starting at n
  0  1                      243F6A8885A3*
  1  11                     FDAA22168C23
  2  101                    3707344A409
  3  1,001                  574E69A458F
  4  10,001                 44EC5716F2B
  5  100,001                944F7A204
  6  1,000,001              6FFFA4103
  7  10,000,001             6CFDD54E3
  8  100,000,001            A306CFA7
  9  1,000,000,001          3E08FF2B
 10  10,000,000,001         0A8BD8C0
 11  100,000,000,001        B2238C1
 12  1,000,000,000,001      0FEE563
 13  10,000,000,000,001     896DC3
 14  100,000,000,000,001    C216EC
 15  1,000,000,000,000,001  6216B06 ... D3611
* By representing π in decimal, hexadecimal and binary, we have
    π = 3.1415926535 8979323846 2643383279 ...
      = 3.243F6A8885 A308D31319 8A2E037073 ...
      = 11.0010010000 1111110110 1010100010 ...
The first ten bits of π are 0010010000.
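For readers curious how bits deep inside π can be obtained without computing all
the preceding ones, here is a minimal, single-machine Java sketch of BBP-type
hex-digit extraction.  It is not the bbp or distbbp implementation: the class and
method names are illustrative, and double-precision arithmetic limits it to
modest positions, which is why distbbp performs an exact, distributed computation
for large n.

import java.util.Locale;

public class BbpSketch {
  /** Fractional part of sum_{k >= 0} 16^(d-k) / (8k + j). */
  static double series(int j, long d) {
    double s = 0;
    for (long k = 0; k <= d; k++) {            // 16^(d-k) >= 1: reduce modulo (8k+j)
      long m = 8 * k + j;
      s += (double) powMod(16, d - k, m) / m;
      s -= Math.floor(s);                      // keep only the fractional part
    }
    for (long k = d + 1; k <= d + 100; k++)    // 16^(d-k) < 1: the tail converges quickly
      s += Math.pow(16, d - k) / (8 * k + j);
    return s - Math.floor(s);
  }

  /** b^e mod m by repeated squaring; assumes m is small enough that b*b fits in a long. */
  static long powMod(long b, long e, long m) {
    long r = 1 % m;
    b %= m;
    while (e > 0) {
      if ((e & 1) == 1) r = r * b % m;
      b = b * b % m;
      e >>= 1;
    }
    return r;
  }

  public static void main(String[] args) {
    long d = args.length > 0 ? Long.parseLong(args[0]) : 0;  // hex digits skipped after "3."
    double x = 4 * series(1, d) - 2 * series(4, d) - series(5, d) - series(6, d);
    x -= Math.floor(x);
    StringBuilder hex = new StringBuilder();
    for (int i = 0; i < 8; i++) {              // read off a few hex digits
      x *= 16;
      int digit = (int) x;
      hex.append(Integer.toHexString(digit).toUpperCase(Locale.ROOT));
      x -= digit;
    }
    System.out.println("pi hex digits starting at position " + (d + 1) + ": " + hex);
  }
}

Run with d = 0, it prints 243F6A88, matching Row 0 of the table above.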
Command Line Usages
The command line format is:
$ hadoop org.apache.hadoop.examples.pi.DistBbp \
<b> <nThreads> <nJobs> <type> <nPart> <remoteDir> <localDir>
And the parameters are:

  <b>          The number of bits to skip, i.e. compute the (b+1)th position.
  <nThreads>   The number of working threads.
  <nJobs>      The number of jobs per sum.
  <type>       'm' for map side job, 'r' for reduce side job, 'x' for mix type.
  <nPart>      The number of parts per job.
  <remoteDir>  Remote directory for submitting jobs.
  <localDir>   Local directory for storing output files.
Note that it may take a long time to finish all the jobs when <b> is large.
If the program is killed in the middle of the execution, the same command with
a different <remoteDir> can be used to resume the execution. For example, suppose
we use the following command to compute the (10^15+57)th bit of π.
$ hadoop org.apache.hadoop.examples.pi.DistBbp \
1,000,000,000,000,056 20 1000 x 500 remote/a local/output
It uses 20 threads to submit jobs so that there are at most 20 concurrent jobs.
Each sum (there are 14 sums in total) is partitioned into 1000 jobs.
The jobs will be executed as map-side or reduce-side jobs. Each job has 500 parts.
The remote directory for the jobs is remote/a and the local directory
for storing output is local/output. Depending on the cluster configuration,
it may take many days to finish the entire execution. If the execution is killed,
we may resume it by
$ hadoop org.apache.hadoop.examples.pi.DistBbp \
1,000,000,000,000,056 20 1000 x 500 remote/b local/output