Revisiting operations research elements – part II: results, research… and failure!

This post resumes the discussion of key concepts of operations research started previously on this blog. However, instead of focusing only on the meaning of terms, this time I’d like to discuss which results are worth publishing and the unacknowledged importance of failure.

Are you the “brand new champion”?

“I never knew a soul who ever took a licking.
My friends have all been champions at everything.”
(Fernando Pessoa, Portuguese poet)

For a while, I thought that the idea described by Fernando Pessoa explained what happens in O.R.: an article is only worth publishing if it has noticeable results. However, which results are we talking about? In “Testing Heuristics: We Have It All Wrong”, John Hooker raises many concerns about how results-driven approaches have been turning research into plain development. He summarized most of them with the maxim “The tail wags the dog as problems begin to design algorithms”. In fact, such concerns were already widespread at that time. In the article “Generating Hard Satisfiability Problems”, Bart Selman and others warned that bad benchmarks may lead to wrong conclusions about how hard a problem is and, conversely, about how good an algorithm for that problem is. Moreover, theoretical results such as the No Free Lunch theorems state that there is no best overall algorithm and that there are always weaknesses to be acknowledged. Otherwise, it would not make sense to talk about a subject called meta-learning, which aims at selecting the most appropriate algorithm for each problem instance.
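
To make the benchmark concern a bit more concrete, here is a minimal sketch (my own illustration, not code from the papers cited above) of generating random 3-SAT instances. Instances with a clause-to-variable ratio near the phase transition (around 4.26) tend to be much harder than under- or over-constrained ones, so a benchmark drawn only from the easy regions can flatter a solver. The function name and parameters are illustrative assumptions.

    import random

    def random_3sat(num_vars, ratio=4.26, seed=0):
        """Generate a random 3-SAT instance with the given clause/variable ratio.

        Ratios near ~4.26 typically yield the hard instances; ratios well below
        or above tend to give easy formulas, which is why testing only on those
        regions can mislead.
        """
        rng = random.Random(seed)
        num_clauses = int(ratio * num_vars)
        clauses = []
        for _ in range(num_clauses):
            chosen = rng.sample(range(1, num_vars + 1), 3)  # three distinct variables
            clause = [v if rng.random() < 0.5 else -v for v in chosen]  # random polarity
            clauses.append(clause)
        return clauses

    print(random_3sat(20)[:3])  # signed-literal (DIMACS-like) clauses, e.g. [[-5, 12, 3], ...]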

I remember hearing about the importance of reporting bad results in experimental physics classes, but that was mostly lost on me after my junior and sophomore years. The big picture suggested by Selman’s group and by Hooker is that many approaches in the area strive to be claimed the best for a given problem at the cost of biasing the tests, intentionally or not, towards situations that favor what is being proposed. Besides, hardware differences, coding abilities and several other issues might influence what is being presented. Recalling the metaphor I used in the previous post, one must not confuse the power of an idea with the performance of a piece of source code on a given benchmark set and a given machine architecture. For instance, if someone is proposing an algorithm that explores fewer branches during search, it is better to measure the number of branches instead of the running time, since a bad choice of data structures might hinder an adequate comparison of the different techniques. Hence, one must write an article that recognizes when its solution fails along with when it succeeds, while striving to diminish perturbation factors in its results.
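
To make that last point concrete, here is a minimal sketch (my own illustration, not from the post) of a toy backtracking search instrumented to count branches rather than to time itself; the subset-sum problem, the function name and the pruning rule are all illustrative assumptions.

    def search(values, target, i=0, partial=0, stats=None):
        """Backtracking subset-sum search that reports how many branches it explored."""
        if stats is None:
            stats = {"branches": 0}
        if partial == target:
            return True, stats
        if i == len(values) or partial > target:  # prune (assumes non-negative values)
            return False, stats
        stats["branches"] += 1  # count the branching decision, not the wall-clock time
        # Branch 1: take values[i]; Branch 2: skip it.
        found, _ = search(values, target, i + 1, partial + values[i], stats)
        if found:
            return True, stats
        return search(values, target, i + 1, partial, stats)

    found, stats = search([3, 34, 4, 12, 5, 2], 9)
    print(found, stats["branches"])  # compare branch counts across algorithms, not seconds

Reporting the branch count keeps the comparison closer to the idea being claimed, while running time mixes in data structures, compilers and hardware.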

After all, operations research is research, right?

What would you answer if asked that? I let it go unanswered last time. Most often, O.R. does involve scientific research: if the problem being solved were simple enough to skip that part, it would be unnecessary to have someone thinking about it. Since I was asked that while working on a long-term project, that was certainly the case. Nevertheless, even when there is no scientific research involved, O.R. is still a sort of research: one wants to find the most appropriate way of performing certain operations. Therefore, some of the subtleties and misconceived concepts mentioned previously come from the differences between scientific research and operations research: research involves a problem, solutions, and it may sometimes fail. Indeed, the word research in O.R. is the same one used to denote research in terms like R&D in Brazil (pesquisa), Portugal (investigação) and France (recherche). Not surprisingly, the list of ambiguous terms is far from complete. For instance, programming means both planning something and the act of producing computer code, as stressed by Lustig and Puget in the article “Program Does Not Equal Program: Constraint Programming and Its Relationship to Mathematical Programming”, in which they explain why it means both things for mathematical programming but just the latter for constraint programming.

Yet another conclusion

I have decided to place the most important conclusions along with the most confusing term of the previous post – solution, but I can repeat them once more. Being aware of the differences between the common O.R. terminology and what researchers adopt elsewhere is important to be more effective when spreading the word about what you did beyond your close peers. In these two posts, I aimed at sharing some thoughts that I would have been glad to hear earlier. To succeed with that, I had the valuable help of my fiancée Sabrina, who revised this text so that I would not commit the same mistakes I was pointing out. It might still not be completely accurate or clear-cut, since I’m just a graduate student striving to figure out my area.
