Primal and dual valuation of our natural resources using O.R.

There are many ways to illustrate the importance of O.R. in protecting our environment, many of them dealing with optimization problems aimed directly at reducing the costs of preventing its destruction, and so on. However, what about the environmental impacts of our patterns of consumption? Shall we change our way of living dramatically, or rather find a balance between what we want and what we can take from our environment? Maybe O.R. can help us with that.

Roughly speaking, Operations Research (O.R.) deals mostly with finding the best way of doing something subject to many different kinds of restrictions. Thus, one can indirectly consider the protection of the environment while solving a wide range of optimization problems related to the daily needs of our society. To that end, I see two possibilities for taking the environment into account:

  • pricing natural resources appropriately and subtracting that cost from the profit of the operation;
  • limiting their use so that we do not steal the share that belongs to future generations.

I’ve already written about the first possibility in my post about optimizing public policies for urban planning. My fiancée and I devised a model to account for the environmental costs of subsidizing low-income housing units in different parts of the city with respect to the daily commute to work. However, finding the right data to run the model turned out to be our biggest problem. When one penalizes the environmental impact of an operation in its profit – for instance, using the value of carbon credits – that impact becomes a cost in the problem. However, sometimes we might not have the data to price it. But if the consumption of a given natural resource is limited by a constraint instead of being penalized in the profit, it is still possible to figure out the economic importance of that resource through duality. Therefore, let’s take a look at the second possibility as an alternative way of finding the price of natural resources – and of avoiding their excessive use.

The concept of duality in linear programming allows us to associate costs with our constraints. Suppose that our optimization problem is about finding the amount of goods of each kind to produce in order to maximize the profit they generate, subject to the limited resources available. The dual of this problem consists of finding the price of each unit of our limited resources that defines the minimum price at which it is more worthwhile to sell them rather than process them, subject to the profit that each finished good would give us. The relationship between those two problems is quite strong: if a resource is not used up to its limit, its dual cost is zero – meaning that it has no economic importance according to the model. Therefore, duality can help us figure out how much a limited resource is worth (if it is worth anything) and thus provides a way of valuing resources according to their scarcity and importance.
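To make this concrete, here is a minimal sketch of the production problem and its dual using SciPy’s linprog. The numbers, and the choice of “wood” and “labor” as the limited resources, are made up for illustration only; any LP solver would do.

```python
# A minimal sketch of primal/dual resource valuation (illustrative numbers only).
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])          # profit per unit of goods 1 and 2
A = np.array([[1.0, 2.0],         # wood used per unit of each good
              [3.0, 1.0]])        # labor used per unit of each good
b = np.array([14.0, 18.0])        # wood and labor available

# Primal: maximize c.x  s.t.  A.x <= b, x >= 0  (linprog minimizes, so negate c)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual: minimize b.y  s.t.  A^T.y >= c, y >= 0  (prices of the resources)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2, method="highs")

print("production plan:   ", primal.x)   # optimal amount of each good
print("maximum profit:    ", -primal.fun)
print("resource prices:   ", dual.x)     # shadow price of wood and labor
print("value of resources:", dual.fun)   # equals the profit (strong duality)
```

In this toy instance both resources are fully used at the optimum, so both get a positive price and the minimum value of the resources matches the maximum profit; had one resource been left over, its price would have come out as zero, just as described above.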

As a matter of fact, the more you understand the relationship between primal and dual problems, the easier it becomes to talk face-to-face with economists. Indeed, this topic has a lot to do with the 1975 Nobel Prize in Economics. If you want to know more about that, the prize lectures by Kantorovich and Koopmans are a good starting point.

This post was written for September’s INFORMS blog challenge: O.R. and the Environment.

Which problems could a million CPUs solve? (More about CPAIOR)

I’ve just attended a presentation by Thorsten Koch entitled “Which Mixed Integer Programs could a million CPUs solve?” at CPAIOR 2011. Like any presentation on a challenging research topic, it left more questions than answers at the end. Let’s go through part of the rationale behind that.

As many people have already noticed, the clock frequency of individual processors is no longer increasing due to limitations of the current technology. Instead, our computers are getting more and more cores. While the performance improvement is still noticeable for a standard user, whose many applications would otherwise be handled by the same processor, having more cores does not help a single piece of software if it is not designed to take advantage of them.

In the case of optimization applications, that can be even more dramatic, since solvers like CPLEX do a lot of processing at the root node. Koch suggests that algorithms like the interior point method may take over part of the simplex method’s share in the future, as they can exploit parallel algorithms for matrix factorization. Hence, it seems that algorithm design researchers will have an increased budget in the forthcoming years.

First impressions from CPAIOR 2011

CPAIOR is an interesting environment for gathering researchers from diverse but close areas, ranging from mathematical programming to artificial intelligence. That is reflected in the four master classes, all of them related to search: the first about mixed-integer programming (MIP), which I missed for arriving late; then constraint programming (CP) with Gilles Pesant, which I caught from the middle; satisfiability (SAT) with Marijn Heule; and now A* with Nathan Sturtevant. The latter two speakers were invited to bring something different to the conference.

Although SAT can be described as a proper subset of CP, research on it has a very different focus, since it is concerned with only one type of constraint (Boolean clauses) and a two-valued domain (true and false). With that narrower focus, the SAT community has been presenting outstanding results in recent years. In fact, there are people who envy how fast SAT solvers have been progressing compared to CP ones. However, that is the price CP pays for being more general.

Nathan Sturtevant presented an interesting animated example that explains why Dijkstra’s shortest-path algorithm can be very slow in certain cases, making the case for A* search.
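For readers who have not seen the two side by side, here is a minimal sketch of my own (not Sturtevant’s example) of A* on a small grid, using the Manhattan distance as the heuristic; setting h to zero turns it into Dijkstra’s algorithm, which expands far more cells before reaching the goal.

```python
# A minimal A* sketch on a 4-connected grid (my own toy example).
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' marks walls; start/goal: (row, col)."""
    def h(cell):  # Manhattan distance; return 0 here to get Dijkstra
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]         # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g                           # length of a shortest path
        if g > best_g.get(cell, float("inf")):
            continue                           # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                # goal unreachable

grid = ["....#....",
        "....#....",
        "....#....",
        "........."]
print(astar(grid, (0, 0), (0, 8)))             # 14: the wall forces a detour
```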

PS: Lunch and coffee breaks are very tasty. I can’t wait for the barbecue and the gala dinner.

 

Revisiting operations research elements – part II: results, research… and failure!

This post resumes the discussion about key concepts of operations research started previously in this blog. However, instead of focusing only on the meaning of terms, this time I’d like to discuss which results are worth publishing and the unacknowledged importance of failure.

Are you the “brand new champion”?

“I never knew a soul who ever took a licking.
My friends have all been champions at everything.”
(Fernando Pessoa, Portuguese poet)

For a while, I thought that the idea described by Fernando Pessoa explained what happens in O.R.: an article is only worth publishing if it has noticeable results. However, which results are we talking about? In “Testing Heuristics: We Have It All Wrong”, John Hooker raises many concerns about how results-driven approaches have been turning research into plain development. He summarized most of them with the maxim “The tail wags the dog as problems begin to design algorithms”. In fact, such concerns were already widespread at that time. In the article “Generating Hard Satisfiability Problems”, Bart Selman and others warned that bad benchmarks may lead to wrong conclusions about how hard a problem is and, conversely, about how good an algorithm for that problem is. Moreover, theoretical results such as the No Free Lunch theorems state that there is no overall best algorithm and that there are always weaknesses to be acknowledged. Otherwise, it would not make sense to talk about a subject called meta-learning, which aims at selecting the most appropriate algorithm to solve each problem instance.

I remember hearing about the importance of reporting negative results in experimental physics classes, but that had faded from my mind after my sophomore and junior years. The big picture suggested by Selman’s group and by Hooker was that many approaches in the area strive to be crowned the best way of solving a given problem at the cost of biasing the tests, intentionally or not, towards situations that favor what is being proposed. Besides, hardware differences, coding abilities and several other issues might influence what is being presented. Keeping in mind the metaphor I used in the previous post, one must not confuse the power of an idea with the performance of a piece of source code on a given benchmark set and a given machine architecture. For instance, if someone is proposing an algorithm that explores fewer branches during search, it would be better to measure the number of branches instead of the running time, since a bad choice of data structures might hinder a fair comparison of different techniques. Hence, one should strive to write an article that recognizes when its solution fails as well as when it succeeds, while reducing the factors that perturb its results.
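As a toy illustration of my own (not taken from any of the articles above), here is how one might report an implementation-independent metric – the number of branches explored by a backtracking search for n-queens – instead of, or alongside, the running time:

```python
# A minimal sketch: count branches explored by a backtracking n-queens search.
def solve_n_queens(n):
    branches = 0                    # candidate placements tried during search

    def safe(cols, col):
        row = len(cols)             # row where the new queen would be placed
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def backtrack(cols):
        nonlocal branches
        if len(cols) == n:
            return 1                # one complete placement found
        count = 0
        for col in range(n):
            branches += 1           # metric independent of hardware and coding
            if safe(cols, col):
                count += backtrack(cols + [col])
        return count

    return backtrack([]), branches

for n in (6, 8):
    solutions, branches = solve_n_queens(n)
    print(f"n={n}: {solutions} solutions, {branches} branches explored")
```

The branch count stays the same no matter how clumsy the data structures or how fast the machine, which is exactly the point of reporting it.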

After all, operations research is research, right?

What would you answer if asked that? I let it go unanswered last time. Most often, O.R. does involve scientific research: if the problem being solved were simple enough to skip that part, it would be unnecessary to have someone thinking about it. Since I was asked that while working on a long-term project, that was certainly the case. Notwithstanding, even if there is no scientific research involved, O.R. is still a kind of research: one wants to find the most appropriate way of performing certain operations. Therefore, some of the subtleties and misconceived concepts mentioned previously come from the differences between scientific research and operations research: both involve a problem, solutions, and the occasional failure. Indeed, the word research in O.R. is the same one used to denote research in terms like R&D in Brazil (pesquisa), Portugal (investigação) and France (recherche). Not surprisingly, the list of ambiguous terms is far from complete. For instance, programming means both planning something and the act of producing computer code, as stressed by Lustig and Puget in the article “Program Does Not Equal Program: Constraint Programming and Its Relationship to Mathematical Programming”, in which they explain why it means both things for mathematical programming but only the latter for constraint programming.

Yet another conclusion

I have decided to place the most important conclusions next to the most confusing term of the previous post – solution – but I can repeat them once more. Being aware of the differences between common O.R. terminology and what researchers adopt elsewhere is important to be more effective when spreading the word about what you did beyond your closest peers. In these two posts, I aimed at sharing some thoughts that I would have been glad to hear earlier. To succeed with that, I had the valuable help of my fiancée Sabrina, who revised this text so that I would not commit the same mistakes I was pointing out. It might still not be completely accurate or clear-cut, since I’m just a graduate student striving to figure out my area.