I’ve just attended a presentation by Thorsten Koch entitled “Which Mixed Integer Programs could a million CPUs solve?” at CPAIOR 2011. As any presentation on a challenging research topic should, it left more doubts than answers. Let’s go through part of the rationale behind that.

As many people have already noticed, the clock frequency of individual processors is no longer increasing due to limitations of the current technology. Instead, our computers are getting more and more cores. While the performance improvement is still noticeable for a standard user, who would otherwise have many different applications handled by the same processor, having more cores does not help a single piece of software unless it is designed to take advantage of them.
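To make that last point concrete, here is a minimal sketch (using a hypothetical prime-counting workload, not any solver’s code) of why extra cores do nothing on their own: the program only speeds up once the work is explicitly partitioned across them.

```python
# A toy illustration: more cores help only if the work is split.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def serial(limit):
    # One core does everything, no matter how many are available.
    return count_primes((0, limit))

def parallel(limit, workers=4):
    # Only by partitioning the range do the extra cores get used.
    step = limit // workers
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

The serial version is oblivious to the number of cores; the parallel version gets a speedup only because someone redesigned it to hand out chunks of the range, which is exactly the redesign most existing software has not had.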

In the case of optimization applications, this can be even more dramatic, since solvers like CPLEX do a lot of processing at the root node. Koch suggests that algorithms like the interior point method will take over part of the simplex share in the future, as is already happening with parallel algorithms for matrix factorization. Hence, it seems that algorithm design researchers will have an increased budget in the forthcoming years.


That’s an interesting subject. By “a million CPUs” you mean “a million cores”, aka many-core, aka GPU. See the articles below for the simplex method, interior point method and Cholesky decomposition for linear programs on GPU:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.5966&rep=rep1&type=pdf
http://portal.acm.org/citation.cfm?id=1587768

Many OR problems don’t lend themselves well to massive parallelism, because they don’t exhibit the structural regularity necessary to distribute the load across many, many CPUs (either shared- or distributed-memory). Branch-and-bound trees are too dynamic and take time to ramp up to the point where there is enough work for the whole system. Generating lots of work quickly often means that much of the work is redundant anyway.

Exploiting sparsity in linear algebra is also problematic. Decomposing problems into small, dense subproblems is difficult, and even then, the subproblems are related hierarchically. Interior-point methods offer advantages over simplex because the sparsity structure is at least static across iterations.

Metaheuristics and high-throughput algorithms are another matter, though.
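The ramp-up issue in the comment above can be illustrated with a toy branch-and-bound for the 0/1 knapsack problem (a sketch, not any production solver): the frontier of live nodes starts at a single root node, so early in the search there is simply not enough work to keep a million workers busy.

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """Toy best-first branch-and-bound for 0/1 knapsack.

    The frontier begins with one node and only grows as nodes are
    branched on, which is why parallel B&B takes time to ramp up.
    """
    n = len(values)
    # Sort items by value density for the fractional (LP) bound.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, value, room):
        # Optimistic bound: fill remaining room greedily, fractionally.
        b = value
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                b += values[i]
            else:
                b += values[i] * room / weights[i]
                break
        return b

    best = 0
    # Frontier of live nodes: (-bound, depth, value, remaining room).
    frontier = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while frontier:
        neg_b, idx, value, room = heapq.heappop(frontier)
        if -neg_b <= best:          # prune: bound cannot beat incumbent
            continue
        if idx == n:
            continue
        i = order[idx]
        # Branch: include item i (if it fits), then exclude it.
        if weights[i] <= room:
            v, r = value + values[i], room - weights[i]
            best = max(best, v)     # new incumbent, tightens pruning
            heapq.heappush(frontier, (-bound(idx + 1, v, r), idx + 1, v, r))
        heapq.heappush(frontier,
                       (-bound(idx + 1, value, room), idx + 1, value, room))
    return best
```

Note how dynamic the workload is: the frontier grows and shrinks as nodes are branched and pruned, and pruning depends on the globally shared incumbent, both of which make a static division of the tree across many processors hard.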

Matthew, there are some works that address these problems:

Large Graph Algorithms for Massively Multithreaded Architectures
http://www.citeulike.org/user/laurobeltrao/article/9077445

An Asynchronous Multithreaded Algorithm for the Maximum Network Flow Problem with Nonblocking Global Relabeling Heuristic
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5557863&con=yes&userType=inst

Exploring the Limits of GPUs With Parallel Graph Algorithms
http://arxiv4.library.cornell.edu/abs/1002.4482