Exception when running a benchmark
0 votes
/ 11 July 2011

I was just trying to write a simple benchmark -

import com.google.caliper.Runner;
import com.google.caliper.SimpleBenchmark;
import com.google.common.primitives.Ints;

public class IntegerCompareBenchmark extends SimpleBenchmark {

    private Integer left;
    private Integer right;

    @Override protected void setUp() {
        left = 100;
        right = 200;
    }

    // Compare via Guava's Ints.compare (unboxes the Integers).
    public int timeIntsCompare(int reps) {
        int val = 0;
        for (int i = 0; i < reps; i++) {
            val += Ints.compare(left, right);
        }
        return val;
    }

    // Compare via Integer.compareTo.
    public int timeIntegerCompare(int reps) {
        int val = 0;
        for (int i = 0; i < reps; i++) {
            val += left.compareTo(right);
        }
        return val;
    }

    public static void main(String[] args) throws Exception {
        Runner.main(IntegerCompareBenchmark.class, args);
    }
}

but it fails with the following exception -

Failed to execute java -cp com.google.caliper.InProcessRunner --warmupMillis 3000 --runMillis 1000 --measurementType TIME --marker //ZxJ/ -Dbenchmark=IntsCompare com.poc.IntegerCompareBenchmark
starting Scenario{vm=java, trial=0, benchmark=IntsCompare}
[caliper] [starting warmup]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
[caliper] [starting measured section]
[caliper] [done measured section]
Error: Doing 2x as much work didn't take 2x as much time! Is the JIT optimizing away the body of your benchmark?

Usage: Runner [OPTIONS...] <benchmark>

  <benchmark>: a benchmark class or suite

OPTIONS

  -D<param>=<value>: fix a benchmark parameter to a given value.
        Multiple values can be supplied by separating them with the
        delimiter specified in the --delimiter argument.

        For example: "-Dfoo=bar,baz,bat"

        "benchmark" is a special parameter that can be used to specify
        which benchmark methods to run. For example, if a benchmark has
        the method "timeFoo", it can be run alone by using
        "-Dbenchmark=Foo". "benchmark" also accepts a delimiter
        separated list of methods to run.

  -J<param>=<value>: set a JVM argument to the given value.
        Multiple values can be supplied by separating them with the
        delimiter specified in the --delimiter argument.

        For example: "-JmemoryMax=-Xmx32M,-Xmx512M"

  --delimiter <delimiter>: character or string to use as a delimiter
        for parameter and vm values.
        Default: ","

  --warmupMillis <millis>: duration to warmup each benchmark

  --runMillis <millis>: duration to execute each benchmark

  --captureVmLog: record the VM's just-in-time compiler and GC logs.
        This may slow down or break benchmark display tools.

  --measureMemory: measure the number of allocations done and the amount of
        memory used by invocations of the benchmark.
        Default: off

  --vm <vm>: executable to test benchmark on. Multiple VMs may be passed
        in as a list separated by the delimiter specified in the
        --delimiter argument.

  --timeUnit <unit>: unit of time to use for result. Depends on the units
        defined in the benchmark's getTimeUnitNames() method, if defined.
        Default Options: ns, us, ms, s

  --instanceUnit <unit>: unit to use for allocation instances result.
        Depends on the units defined in the benchmark's
        getInstanceUnitNames() method, if defined.
        Default Options: instances, K instances, M instances, B instances

  --memoryUnit <unit>: unit to use for allocation memory size result.
        Depends on the units defined in the benchmark's
        getMemoryUnitNames() method, if defined.
        Default Options: B, KB, MB, GB

  --saveResults <file/dir>: write results to this file or directory

  --printScore: if present, also display an aggregate score for this run,
        where higher is better. This number has no particular meaning,
        but can be compared to scores from other runs that use the exact
        same arguments.

  --uploadResults <file/dir>: upload this file or directory of files
        to the web app. This argument ends Caliper early and is thus
        incompatible with all other arguments.

  --debug: run without measurement for use with debugger or profiling.

  --debug-reps: fixed number of reps to run with --debug.
        Default: "1000"
[caliper] [scenarios finished]

An exception was thrown from the benchmark code.
com.google.caliper.ConfigurationException: Failed to execute java -cp <class path files> com.google.caliper.InProcessRunner --warmupMillis 3000 --runMillis 1000 --measurementType TIME --marker //ZxJ/ -Dbenchmark=IntsCompare com.poc.IntegerCompareBenchmark
    at com.google.caliper.Runner.measure(Runner.java:309)
    at com.google.caliper.Runner.runScenario(Runner.java:229)
    at com.google.caliper.Runner.runOutOfProcess(Runner.java:378)
    at com.google.caliper.Runner.run(Runner.java:97)
    at com.google.caliper.Runner.main(Runner.java:423)
    at com.google.caliper.Runner.main(Runner.java:440)
    at com.poc.IntegerCompareBenchmark.main(IntegerCompareBenchmark.java:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)

Process finished with exit code 1

My other benchmark works fine... Am I doing something wrong here?

1 Answer

2 votes
/ 13 July 2011

It looks like your VM's JIT has optimized the body of your loop away to nothing. Consider rotating the left and right values (perhaps based on val?) so the comparison is not a loop-invariant constant.
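
A minimal sketch of what that could look like, assuming the intent is simply to make the compared values vary from one iteration to the next; the exact mixing based on val is illustrative, not something prescribed by the answer:

    public int timeIntsCompare(int reps) {
        int val = 0;
        Integer l = left;
        Integer r = right;
        for (int i = 0; i < reps; i++) {
            val += Ints.compare(l, r);
            // Swap the operands and nudge one of them with the running result,
            // so each iteration compares something the JIT cannot predict.
            Integer tmp = l;
            l = r;
            r = tmp + (val & 1);
        }
        return val;
    }

Because val feeds back into the operands, the comparison can no longer be hoisted out of the loop or folded into a single multiplication, so doing 2x the reps should again take roughly 2x the time.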

...