Benchmark Programs

Here are some different programs for performing benchmarking.

Note: It is important to recognize that benchmarks between systems may be misleading. Benchmarks should primarily be used to determine differences in performance for different software configurations on the same hardware system.

Unix Bench

The URLs for UnixBench are as follows:

OLD site: http://www.tux.org/pub/tux/benchmarks/System/unixbench/

NEW site: http://code.google.com/p/byte-unixbench/

UnixBench contains 9 kinds of tests:

  1. Dhrystone 2 using register variables
  2. Double-Precision Whetstone
  3. Execl Throughput
  4. File Copy
  5. Pipe Throughput
  6. Pipe-based Context Switching
  7. Process Creation
  8. Shell Script
  9. System Call Overhead
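
If you want a quick sketch of how it is typically run natively (this assumes UnixBench 5.x unpacked on the target, with perl and a native compiler available; the directory name comes from the 5.x archive):

cd UnixBench                      # top-level directory of the unpacked archive
make                              # build the test binaries
./Run                             # run the full benchmark index
./Run dhry2reg whetstone-double   # or run selected tests only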

lmbench

The LMBench home page is at http://www.bitmover.com/lmbench/ and at http://lmbench.sourceforge.net/
The SourceForge project page is at http://sourceforge.net/projects/lmbench

Instructions for lmbench-3.0-a9

(Adjust CC and OS according to your needs.)

cd lmbench-3.0-a9/src
make CC=arm-linux-gcc OS=arm-linux TARGET=linux
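
Before moving anything to the target, it can help to sanity-check the cross-build; lat_syscall is one of the standard lmbench binaries (the expected output is an assumption based on an ARM toolchain):

file ../bin/arm-linux/lat_syscall   # should report an ARM ELF executable, not x86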

Make the whole lmbench-3.0-a9 directory accessible on the target, e.g. by copying it over or mounting it via NFS. Make sure the benchmark scripts can write the configuration file and the results; also unpack the tarball used during the benchmark, in case tar is not available on the target:

chmod a+w ../bin/arm-linux ../results
tar xf webpage-lm.tar
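
If you go the NFS route, a typical mount on the target looks like this (the host IP, export path, and mount point are assumptions; -o nolock is often needed with BusyBox mount):

mkdir -p /mnt/lmbench
mount -t nfs -o nolock 192.168.1.10:/work/lmbench-3.0-a9 /mnt/lmbench
cd /mnt/lmbench/src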

To run the benchmark on the target:

cd lmbench-3.0-a9/src
hostname foo    # make sure the hostname is set; the scripts use it to name config and result files
OS=arm-linux ../scripts/config-run
OS=arm-linux ../scripts/results

This worked for me on a target using BusyBox v1.10.2 ash.
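
When lmbench saves results it increments the trailing ".0" of the file name until it finds an unused name, so repeated runs do not overwrite each other and can be scripted, e.g. (spelled out without bash brace expansion, which ash does not support):

for i in 1 2 3 4 5; do OS=arm-linux ../scripts/results; done   # five runs, five result files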

The results are written into lmbench-3.0-a9/results/; a new file is created for each run of ../scripts/results. You can copy the results back to your PC and run various summary postprocessing scripts from lmbench, e.g.

../scripts/getsummary ../results/arm-linux/*
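
Other postprocessing scripts ship alongside getsummary in the same scripts/ directory; for example (getpercent appears in the lmbench-3 tree, though availability may vary by version):

../scripts/getpercent ../results/arm-linux/*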

Wishlist

A list of benchmark results would be useful:

* Comparing performance of different FFT implementations on Beagleboard-XM: http://pmeerw.dyndns.org/blog/programming/arm_fft.html