VRPTW solutions

This repository contains:

  • A collection of VRPTW benchmark solutions for open source tools.

  • Code for solution generation.

  • Code to verify / compare solutions.

  • A comparison with reference solutions.

Python code

  • benchmark.py - parsing the benchmark files and generating solution files.

  • optimize_or.py - generating solutions using or-tools.

  • optimize.py - generating solutions using continuous optimization.

Motivation

As part of the documentation of the optimization library fcmaes we have shown that continuous optimization is a valid choice for optimizing route planning when faced with non-standard requirements, see:

  • Multi-UAV - Multi-UAV Task Assignment.

  • Noisy TSP - Solving the noisy Traveling Salesman Problem.

  • Scheduling - Solving a complex scheduling problem, part of the GTOC11 competition.

How far behind is continuous optimization for a typical standard problem that is well covered by specialized libraries? To evaluate this question we chose the 100-customer instances of Solomon’s VRPTW benchmark problems from 1987.

The goal is not to replace specific methods like or-tools with generic continuous optimization. Instead we investigate which specific continuous optimizer works best for VRPTW. This optimizer could then be applied to non-standard variations of the VRPTW problem that are not covered by the specialized tools. Keep in mind: the only thing we need is a fitness function; there are no "incremental changes" or "specific gene representations" as usually required by other heuristic methods. There is not much code to be written; compare optimize.py with optimize_or.py. Not only does or-tools require more code, you also have to learn its problem-specific API.
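To make the fitness-function-only idea concrete, here is a minimal sketch, not the actual code in optimize.py, of how a continuous decision vector can be decoded into VRPTW routes: customers are visited in argsort order, a new vehicle starts whenever capacity would be exceeded, and time window violations are penalized. All parameter names and the penalty weight are illustrative assumptions.

[source,python]
----
import numpy as np

# Sketch only: decode a continuous vector x (one entry per customer) into
# routes and return a penalized total distance. demand, tw_open, tw_close,
# service and the 1000.0 penalty weight are illustrative assumptions.
def fitness(x, dist, demand, capacity, tw_open, tw_close, service):
    order = np.argsort(x)                    # continuous vector -> visiting order
    total, load, t, last = 0.0, 0.0, 0.0, 0  # start at the depot (index 0)
    for c in order + 1:                      # customers are 1..n, depot is 0
        if load + demand[c] > capacity:      # capacity exceeded: new vehicle
            total += dist[last, 0]           # return to the depot
            load, t, last = 0.0, 0.0, 0
        total += dist[last, c]
        t = max(t + dist[last, c], tw_open[c])  # wait if arriving early
        if t > tw_close[c]:                  # penalize time window violations
            total += 1000.0 * (t - tw_close[c])
        t += service[c]
        load += demand[c]
        last = c
    return total + dist[last, 0]             # final return to the depot
----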

During our tests we indeed found a combination of algorithms never used together before: a sequence of CR-FM-NES and BiteOpt. We found that CR-FM-NES converges very fast but then always gets caught in a local minimum. BiteOpt, on the other hand, converges more slowly but is better at escaping local minima. Applying both in sequence is therefore a natural choice.
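As a sketch of how such a sequence can be set up, the snippet below uses the fcmaes parallel retry together with its optimizer wrappers. The wrapper names (Crfmnes_cpp, Bite_cpp, Sequence), the evaluation budgets and the retry count are assumptions based on the fcmaes optimizer module; check the fcmaes documentation for the exact signatures.

[source,python]
----
import numpy as np
from scipy.optimize import Bounds
from fcmaes import retry
from fcmaes.optimizer import Crfmnes_cpp, Bite_cpp, Sequence

def fitness(x):  # stand-in; use a VRPTW fitness like the sketch above
    return float(np.sum(x * x))

dim = 100                                  # one decision variable per customer
bounds = Bounds([0.0] * dim, [1.0] * dim)

# CR-FM-NES first (fast convergence), then BiteOpt (escapes local minima).
optimizer = Sequence([Crfmnes_cpp(50000), Bite_cpp(100000)])

# Parallel retry runs independent optimizations on the available cores.
result = retry.minimize(fitness, bounds, optimizer=optimizer, num_retries=32)
print(result.fun, result.x)
----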

The Solomon Benchmark

There are two different objectives for Solomon’s VRPTW benchmark:

  • Minimizing the overall distance / time serving all customers: solomon.

  • A hierarchical objective minimizing the number of vehicles, with the distance as secondary objective: sintef. A scalarized version of this objective is sketched after this list.
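A common way to feed the hierarchical objective to an optimizer that minimizes a single scalar is to weight the vehicle count so that saving one vehicle always outweighs any distance improvement. This is a generic sketch, not necessarily how the code in this repository implements it; the weight W is an assumption.

[source,python]
----
# W must exceed any achievable total distance; 1.0e5 is an illustrative
# assumption, not a value taken from this repository.
W = 1.0e5

def hierarchical_objective(n_vehicles: int, distance: float) -> float:
    # Saving a vehicle always beats any distance improvement.
    return W * n_vehicles + distance
----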

For the distance objective we found reference solutions at galgos, but some of them didn’t pass our validation. These solutions assume rounding of the distances, which makes them incompatible with the interpretation of the problem used here (see the sketch below). For both objective variants there are no solution sets available for existing open source tools. In this GitHub repository we want to collect these, starting with or-tools and continuous optimization. Creation of the solution sets should be reproducible, so we added the code which computes them. Feel free to create a PR if you find an improvement or want to add another open source tool.
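To illustrate why rounded distances are incompatible with the full-precision interpretation used here, the sketch below compares exact Euclidean distances with truncation to one decimal, a common convention in the VRPTW literature; whether the galgos solutions use exactly this convention is an assumption.

[source,python]
----
import math

def dist_exact(a, b):
    # Full double precision Euclidean distance, as used in this repository.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dist_truncated(a, b):
    # Truncation to one decimal: an assumed rounding convention. Tiny per-edge
    # differences accumulate, so route lengths and even time window
    # feasibility can differ between the two interpretations.
    return math.floor(dist_exact(a, b) * 10.0) / 10.0

print(dist_exact((0, 0), (1, 1)))      # 1.4142135623730951
print(dist_truncated((0, 0), (1, 1)))  # 1.4
----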

We applied the hierarchical objective only to or-tools, to compare its results with the sintef reference solutions. The single distance objective was applied both to or-tools and to continuous optimization, with the or-tools result as reference. There are only small differences from the solomon reference results.

Summary of the Results

See results for details.

Summary:

  • or-tools almost reproduces the reference solutions if given enough time (3 hours).

  • or-tools solves the clustered problems quite fast (< 1 minute, single-threaded).

  • or-tools sometimes has problems with the hierarchical objective if the number of vehicles is very small.

  • for the single (distance) objective the new or-tools results are better than all results reported in Duda2019, with the exception of RC1, where Duda2019 reports an average of 1348.51 for jsprit while the or-tools result is 1360.90. So we can use the or-tools single objective results as the open-source-tool reference for Solomon’s benchmark.

  • Applying continuous optimization results in an error rate of about 0.2% for the clustered problem instances and about 3% for the random problem instances when about 7 minutes of wall time is invested per problem on an AMD 5950x CPU utilizing all 16 cores / 32 threads.