
org.optaplanner.benchmark.impl.report.benchmarkReport.html.ftl

<#-- @ftlvariable name="benchmarkReport" type="org.optaplanner.benchmark.impl.report.BenchmarkReport" -->
<#-- @ftlvariable name="reportHelper" type="org.optaplanner.benchmark.impl.report.ReportHelper" -->



    
    
    ${benchmarkReport.plannerBenchmarkResult.name} Planner benchmark report
    
    
    
    
    
    

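<#-- Renders the ranking badge of a solver benchmark (the favorite solver gets a distinct badge) plus a warning badge:
     "F" when any of its single benchmarks failed, "!" when any solution is uninitialized or any score is infeasible.
     Invoked from the summary tables below as <@addSolverBenchmarkBadges solverBenchmarkResult=.../>. -->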
<#macro addSolverBenchmarkBadges solverBenchmarkResult>
    <#if solverBenchmarkResult.favorite>
        ${solverBenchmarkResult.ranking}
    <#elseif solverBenchmarkResult.ranking??>
        ${solverBenchmarkResult.ranking}
    </#if>
    <#if solverBenchmarkResult.hasAnyFailure()>
        F
    <#elseif solverBenchmarkResult.hasAnyUninitializedSolution()>
        !
    <#elseif solverBenchmarkResult.hasAnyInfeasibleScore()>
        !
    </#if>
</#macro>

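<#-- Renders an "F" warning badge when any single benchmark of this problem failed. -->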
<#macro addProblemBenchmarkBadges problemBenchmarkResult>
    <#if problemBenchmarkResult.hasAnyFailure()>
        F
    </#if>
</#macro>

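<#-- Renders the ranking badge of a single solver/problem result (the winner gets a distinct badge) plus a warning badge:
     "F" on failure, "!" when the solution is uninitialized or the score is infeasible. -->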
<#macro addSolverProblemBenchmarkResultBadges solverProblemBenchmarkResult>
    <#if solverProblemBenchmarkResult.winner>
        ${solverProblemBenchmarkResult.ranking}
    <#elseif solverProblemBenchmarkResult.ranking??>
        ${solverProblemBenchmarkResult.ranking}
    </#if>
    <#if solverProblemBenchmarkResult.hasAnyFailure()>
        F
    <#elseif !solverProblemBenchmarkResult.initialized>
        !
    <#elseif !solverProblemBenchmarkResult.scoreFeasible>
        !
    </#if>
</#macro>

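<#-- Renders one chart per score level (e.g. hard and soft) of the given chart file list;
     the idPrefix (e.g. "summary_bestScore") and the scoreLevelIndex distinguish the charts of the different summaries and levels. -->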
<#macro addScoreLevelChartList chartFileList idPrefix>
    <#assign scoreLevelIndex = 0>
    <#list chartFileList as chartFile>
        <#assign scoreLevelIndex = scoreLevelIndex + 1>
    </#list>
</#macro>

Benchmark report

<#if benchmarkReport.plannerBenchmarkResult.hasAnyFailure()>
    ${benchmarkReport.plannerBenchmarkResult.failureCount} benchmarks have failed!
</#if>
<#list benchmarkReport.warningList as warning>
    ${warning}
</#list>

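<#-- Result summary section: aggregate charts and tables that compare every solver configuration on every problem. -->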
Result summary

Best score summary

Useful for visualizing the best solver configuration.

<@addScoreLevelChartList chartFileList=benchmarkReport.bestScoreSummaryChartFileList idPrefix="summary_bestScore" />
<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>
    class="favoriteSolverBenchmark">
    <#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
        <#if !solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)??>
        <#else>
            <#assign singleBenchmarkResult = solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)>
            <#if !singleBenchmarkResult.hasAllSuccess()>
            <#else>
                <#if solverBenchmarkResult.subSingleCount lte 1>
                <#else>
Solver Total Average Standard Deviation Problem
${problemBenchmarkResult.name}
${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>
${solverBenchmarkResult.totalScore!""}
${solverBenchmarkResult.averageScore!""}
${solverBenchmarkResult.standardDeviationString!""}
Failed
${singleBenchmarkResult.averageScore!""} <@addSolverProblemBenchmarkResultBadges solverProblemBenchmarkResult=singleBenchmarkResult/>

Best score scalability summary

Useful for visualizing the scalability of each solver configuration.

<@addScoreLevelChartList chartFileList=benchmarkReport.bestScoreScalabilitySummaryChartFileList idPrefix="summary_bestScoreScalability" />

Best score distribution summary

Useful for visualizing the reliability of each solver configuration.

<#assign maximumSubSingleCount = benchmarkReport.plannerBenchmarkResult.getMaximumSubSingleCount()>

Maximum subSingle count: ${maximumSubSingleCount!""}

<#if maximumSubSingleCount lte 1>
    The benchmarker did not run multiple subSingles, so there is no distribution and therefore no reliability indication.
</#if>
<@addScoreLevelChartList chartFileList=benchmarkReport.bestScoreDistributionSummaryChartFileList idPrefix="summary_bestScoreDistribution" />

Winning score difference summary

Useful for zooming in on the results of the best score summary.

<@addScoreLevelChartList chartFileList=benchmarkReport.winningScoreDifferenceSummaryChartFileList idPrefix="summary_winningScoreDifference" />
<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>
    class="favoriteSolverBenchmark">
    <#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
        <#if !solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)??>
        <#else>
            <#assign singleBenchmarkResult = solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)>
            <#if !singleBenchmarkResult.hasAllSuccess()>
            <#else>
Solver Total Average Problem
${problemBenchmarkResult.name}
${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>
${solverBenchmarkResult.totalWinningScoreDifference!""}
${solverBenchmarkResult.averageWinningScoreDifference!""}
Failed
${singleBenchmarkResult.winningScoreDifference} <@addSolverProblemBenchmarkResultBadges solverProblemBenchmarkResult=singleBenchmarkResult/>

Worst score difference percentage summary (ROI)

Useful for visualizing the return on investment (ROI) to decision makers.

<@addScoreLevelChartList chartFileList=benchmarkReport.worstScoreDifferencePercentageSummaryChartFileList idPrefix="summary_worstScoreDifferencePercentage" />
<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>
    class="favoriteSolverBenchmark">
    <#if !solverBenchmarkResult.averageWorstScoreDifferencePercentage??>
    <#else>
    <#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
        <#if !solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)??>
        <#else>
            <#assign singleBenchmarkResult = solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)>
            <#if !singleBenchmarkResult.hasAllSuccess()>
            <#else>
Solver Average Problem
${problemBenchmarkResult.name}
${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>
${solverBenchmarkResult.averageWorstScoreDifferencePercentage.toString(.locale)}
Failed
${singleBenchmarkResult.worstScoreDifferencePercentage.toString(.locale)} <@addSolverProblemBenchmarkResultBadges solverProblemBenchmarkResult=singleBenchmarkResult/>

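<#-- Performance summary section: score calculation speed and time spent, per solver configuration and per problem. -->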
Performance summary

Score calculation speed summary

Useful for comparing different score calculators and/or constraint implementations (presuming that the solver configurations do not differ otherwise). Also useful to measure the scalability cost of an extra constraint.

<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>
    class="favoriteSolverBenchmark">
    <#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
        <#if !solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)??>
        <#else>
            <#assign singleBenchmarkResult = solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)>
            <#if !singleBenchmarkResult.hasAllSuccess()>
            <#else>
                <#if solverBenchmarkResult.subSingleCount lte 1>
                <#else>
Solver Average Problem
${problemBenchmarkResult.name}
Problem scale ${benchmarkReport.plannerBenchmarkResult.averageProblemScale!""}
${problemBenchmarkResult.problemScale!""}
${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>
${solverBenchmarkResult.averageScoreCalculationSpeed!""}/s
Failed
${singleBenchmarkResult.scoreCalculationSpeed}/s

Worst score calculation speed difference percentage

Useful for comparing different score calculators and/or constraint implementations (presuming that the solver configurations do not differ otherwise). Also useful to measure the scalability cost of an extra constraint.

<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>
    class="favoriteSolverBenchmark">
    <#if solverBenchmarkResult.averageWorstScoreCalculationSpeedDifferencePercentage??>
    <#else>
    <#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
        <#if !solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)??>
        <#else>
            <#assign singleBenchmarkResult = solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)>
            <#if !singleBenchmarkResult.hasAllSuccess()>
            <#else>
Solver Average Problem
${problemBenchmarkResult.name}
${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>
${solverBenchmarkResult.averageWorstScoreCalculationSpeedDifferencePercentage?string["0.00%"]!""}
Failed
${singleBenchmarkResult.worstScoreCalculationSpeedDifferencePercentage?string["0.00%"]!""}

Time spent summary

Useful for visualizing the performance of construction heuristics (presuming that no other solver phases are configured).

<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>
    class="favoriteSolverBenchmark">
    <#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>
        <#if !solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)??>
        <#else>
            <#assign singleBenchmarkResult = solverBenchmarkResult.findSingleBenchmark(problemBenchmarkResult)>
            <#if !singleBenchmarkResult.hasAllSuccess()>
            <#else>
                <#if solverBenchmarkResult.subSingleCount lte 1>
                <#else>
Solver Average Problem
${problemBenchmarkResult.name}
Problem scale ${benchmarkReport.plannerBenchmarkResult.averageProblemScale!""}
${problemBenchmarkResult.problemScale!""}
${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>
${solverBenchmarkResult.averageTimeMillisSpent!""}
Failed
${singleBenchmarkResult.timeMillisSpent}

Time spent scalability summary

Useful for extrapolating the scalability of construction heuristics (presuming that no other solver phases are configured).

Best score per time spent summary

Useful for visualizing trade-off between the best score versus the time spent for construction heuristics (presuming that no other solver phases are configured).

<@addScoreLevelChartList chartFileList=benchmarkReport.bestScorePerTimeSpentSummaryChartFileList idPrefix="summary_bestScorePerTimeSpent" />
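<#-- Per-problem detail section: dataset statistics (entity count, variable count, problem scale, loading time,
     memory usage) followed by the problem statistic graphs and, for non-aggregated reports, their CSV files. -->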
<#list benchmarkReport.plannerBenchmarkResult.unifiedProblemBenchmarkResultList as problemBenchmarkResult>

${problemBenchmarkResult.name}

<#if problemBenchmarkResult.hasAnyFailure()>
    ${problemBenchmarkResult.failureCount} benchmarks have failed!
</#if>
Entity count: ${problemBenchmarkResult.entityCount!""}
Variable count: ${problemBenchmarkResult.variableCount!""}
Maximum value count: ${problemBenchmarkResult.maximumValueCount!""}
Problem scale: ${problemBenchmarkResult.problemScale!""}
<#if problemBenchmarkResult.inputSolutionLoadingTimeMillisSpent??>
    Time spent to load the inputSolution:
    <#if problemBenchmarkResult.inputSolutionLoadingTimeMillisSpent lt 1>
        < 1 ms
    <#else>
        ${problemBenchmarkResult.inputSolutionLoadingTimeMillisSpent} ms
    </#if>
</#if>
<#if problemBenchmarkResult.averageUsedMemoryAfterInputSolution??>
    Memory usage after loading the inputSolution (before creating the Solver): ${problemBenchmarkResult.averageUsedMemoryAfterInputSolution?string.number} bytes on average.
</#if>

<#if problemBenchmarkResult.hasAnySuccess() && problemBenchmarkResult.hasAnyStatistic()>
    <#if problemBenchmarkResult.getMaximumSubSingleCount() gt 1>
        Only the median sub single run of each solver is shown in the statistics below.
    </#if>

<#assign firstRow = true>
<#list problemBenchmarkResult.problemStatisticList as problemStatistic>
    <#list problemStatistic.warningList as warning>
        ${warning}
    </#list>

<#if problemStatistic.graphFileList?? && problemStatistic.graphFileList?size != 0>
    <#if problemStatistic.problemStatisticType.hasScoreLevels()>
        <#assign scoreLevelIndex = 0>
        <#list problemStatistic.graphFileList as graphFile>
            <#assign scoreLevelIndex = scoreLevelIndex + 1>
        </#list>
    <#else>
    </#if>
<#else>
    Graph unavailable (statistic unavailable for this solver configuration or benchmark failed).
</#if>

<#if !benchmarkReport.plannerBenchmarkResult.aggregation> CSV files per solver:
<#list problemStatistic.subSingleStatisticList as subSingleStatistic>
<#assign firstRow = false> <#list problemBenchmarkResult.extractSingleStatisticTypeList() as singleStatisticType>
<#list problemBenchmarkResult.extractPureSubSingleStatisticList(singleStatisticType) as pureSubSingleStatistic>

${pureSubSingleStatistic.subSingleBenchmarkResult.singleBenchmarkResult.solverBenchmarkResult.name}

<#if pureSubSingleStatistic.graphFileList?? && pureSubSingleStatistic.graphFileList?size != 0>
    <#if singleStatisticType.hasScoreLevels()>
        <#assign scoreLevelIndex = 0>
        <#list pureSubSingleStatistic.graphFileList as graphFile>
            <#assign scoreLevelIndex = scoreLevelIndex + 1>
        </#list>
    <#else>
    </#if>
<#else>
    Graph unavailable (statistic unavailable for this solver configuration or benchmark failed).
</#if>

<#if !benchmarkReport.plannerBenchmarkResult.aggregation> CSV file:
<#assign firstRow = false>
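<#-- Per-solver detail section: each solver benchmark's name, badges, failure count and its solver configuration XML. -->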
<#list benchmarkReport.plannerBenchmarkResult.solverBenchmarkResultList as solverBenchmarkResult>

${solverBenchmarkResult.name} <@addSolverBenchmarkBadges solverBenchmarkResult=solverBenchmarkResult/>

<#if solverBenchmarkResult.hasAnyFailure()>
    ${solverBenchmarkResult.failureCount} benchmarks have failed!
</#if>
${solverBenchmarkResult.solverConfigAsHtmlEscapedXml}
</#list>
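<#-- Benchmark information table. A value falls back to "Differs" when it is not available,
     typically because this report aggregates benchmark runs whose values are not identical. -->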
Name ${benchmarkReport.plannerBenchmarkResult.name}
Aggregation ${benchmarkReport.plannerBenchmarkResult.aggregation?string}
Failure count ${benchmarkReport.plannerBenchmarkResult.failureCount}
Starting timestamp ${(benchmarkReport.plannerBenchmarkResult.startingTimestampAsMediumString)!"Differs"}
Warm up time spent <#if benchmarkReport.plannerBenchmarkResult.warmUpTimeMillisSpentLimit??>${benchmarkReport.plannerBenchmarkResult.warmUpTimeMillisSpentLimit} ms<#else>Differs</#if>
Parallel benchmark count / available processors <#if benchmarkReport.plannerBenchmarkResult.parallelBenchmarkCount?? && benchmarkReport.plannerBenchmarkResult.availableProcessors??>${benchmarkReport.plannerBenchmarkResult.parallelBenchmarkCount} / ${benchmarkReport.plannerBenchmarkResult.availableProcessors}<#else>Differs</#if>
Benchmark time spent <#if benchmarkReport.plannerBenchmarkResult.benchmarkTimeMillisSpent??>${benchmarkReport.plannerBenchmarkResult.benchmarkTimeMillisSpent} ms<#else>Differs</#if>
Environment mode ${benchmarkReport.plannerBenchmarkResult.environmentMode!"Differs"}
Logging level org.optaplanner.core ${benchmarkReport.plannerBenchmarkResult.loggingLevelOptaPlannerCore!"Differs"}
Logging level org.drools.core ${benchmarkReport.plannerBenchmarkResult.loggingLevelDroolsCore!"Differs"}
Solver ranking class ${benchmarkReport.solverRankingClassSimpleName!"Unknown"}
VM max memory (as in -Xmx but lower) <#if (benchmarkReport.plannerBenchmarkResult.maxMemory?string.number)??>${benchmarkReport.plannerBenchmarkResult.maxMemory?string.number} bytes<#else>Differs</#if>
OptaPlanner version ${benchmarkReport.plannerBenchmarkResult.optaPlannerVersion!"Differs"}
Java version ${benchmarkReport.plannerBenchmarkResult.javaVersion!"Differs"}
Java VM ${benchmarkReport.plannerBenchmarkResult.javaVM!"Differs"}
Operating system ${benchmarkReport.plannerBenchmarkResult.operatingSystem!"Differs"}
Report locale ${benchmarkReport.locale!"Unknown"}
Report timezone ${benchmarkReport.timezoneId!"Unknown"}



