When Brazil excels for real (or floating point): International Olympiads in Informatics

One gold and three bronze medals out of four competitors: that would be routine for some countries, but it meant a lot to Brazil at the 2011 edition of IOI. It was the country's best result ever, achieved after more than a decade of continuous hard work by many people, including some professors and colleagues of mine from the University of Campinas – Unicamp – and the University of São Paulo – USP – but also from many other places. Like some friends of mine, I was prouder of that result than I would be of a World Cup title.

I had the opportunity to participate in the training for selecting the Brazilian competitors for the 2003 IOI and, despite scoring very badly in that selection contest, I left it motivated to keep studying and practicing. In the years that followed, I tried my best in the South American and the Southwest European ICPC contests and achieved a humble result of three bronze medals. But the best part of it was that I learned a lot during those five years, and so did most of my colleagues who went in the same direction, building a network of professionals who recommend one another for interesting jobs.

I do not think it is very common for IT undergrads to follow the path towards an OR specialization, but it happens more often among those who engage in programming contests that value algorithm design and implementation skills. Such contests represent a great opportunity to leverage the area in Brazil, since the training required by the new generations can be supported by a number of professors whose doctoral studies abroad were sponsored some decades ago. However far we are from devising strategic plans to excel, good ideas here and there (even if decades old) and the passionate effort of great individuals are playing an important role in the development of our country.

Offshore Resources Scheduling and the Brazilian Symposium on Operations Research

On August 17th, I will present an article about what I’ve been working on for my M.Sc. thesis at the Brazilian Symposium on Operations Research (abbreviated SBPO in Brazil). The article is authored by me and some colleagues from work. We are tackling the problem of scheduling offshore resources to develop oil wells with Constraint Programming (CP). It took a great effort to present so much about the problem and how to solve it in a way that is accessible to a broader audience (I hope we have managed to do that). There are four other papers competing for the best paper award. We will try our best there. Anyway, I’m glad about the nomination.

The 2011 SBPO will be held in Ubatuba, a beach town half-way between São Paulo and Rio de Janeiro.

When the Network becomes Social: Small World Graphs and O.R.

Mathematicians have been studying graphs for a long while. Sociologists found out that some of them explain how we interact. Indeed, social networks just make that connection more evident to everyone. In the middle of all that, some researchers have been wondering about the following question: can we make optimal decisions based only on our local information in a social network?

A world of lines and dots…

Dots and lines connecting pairs of dots – that’s a graph (though we usually say vertices and edges – or nodes and arcs – when talking about them). Mathematicians study graphs because they are structures capable of modeling many kinds of relationships among entities. Sometimes they wonder whether a property found in a certain graph implies another one, contributing theorems to Graph Theory. Other times they want to leverage those properties when designing an algorithm that manipulates certain types of graphs, as in Combinatorial Optimization algorithms. As a matter of fact, that is not an isolated case – many researchers handling real-world problems aim at designing algorithms with outstanding performance on the most common instances they expect to solve.

… and the world of people!

Many people have already heard about the “six degrees of separation” principle, which states that – on average – you can reach any person in the world through a chain of six people who know one another. Such a “magical number” emerged from experiments by Stanley Milgram and others during the 1960s, in which they asked a person in the U.S. to deliver a letter to another person by handing it to someone he/she knew and supposed to be closer to that person. Theoretical results point to something similar: for a random graph, the average shortest distance between pairs of vertices is proportional to the logarithm of the number of vertices, which means a very slow pace of growth as graphs get bigger and bigger. However, that is not true for every graph. Instead, people started looking at a more specific class of graphs called Small World Graphs, which are supposed to be representative of a number of real situations.
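To get a feel for that logarithmic growth, here is a minimal sketch I put together (an illustration, not code from the studies above): it builds Erdős–Rényi-style random graphs and estimates the average shortest-path distance by running breadth-first search from a few sampled vertices.

```python
import random
from collections import deque

def random_graph(n, avg_degree, seed=0):
    """Random graph on n vertices where each possible edge exists
    independently with probability avg_degree / (n - 1)."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def avg_distance(adj, samples=30, seed=1):
    """Estimate the average shortest-path distance by BFS from a few
    sampled source vertices, ignoring unreachable pairs."""
    rng = random.Random(seed)
    n = len(adj)
    total, count = 0, 0
    for _ in range(samples):
        src = rng.randrange(n)
        dist = [-1] * n
        dist[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        reachable = [d for d in dist if d > 0]
        total += sum(reachable)
        count += len(reachable)
    return total / count
```

Quadrupling the number of vertices while keeping the average degree fixed should increase the estimated average distance only by a small additive amount, which is exactly the logarithmic behavior described above.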

Small World Graphs to be explored everywhere

Small World Graphs can be thought of as a combination of lattices (grids of edges) and clusters or quasi-clusters (groups in which almost all edges exist among vertices) with a small average degree (number of edges per vertex). The former property ensures that the graph is connected, so it is possible to find a path between any pair of vertices. The latter has to do with the fact that two vertices sharing an edge with a third one are more likely to share an edge with each other. Think about it: you might know some people from your university and almost everyone from your department, whereas each of your colleagues is likely to have long-range connections with researchers sharing a common interest worldwide; all of that together means that you don’t need many steps to reach most of the researchers in the world. The same holds for airports: your local airport might be connected to a number of airports in other cities of your country, and some of them are connected to airports worldwide, in such a way that you can attend your meetings anywhere without worrying too much about how to get there. And if you do need to think about it, you will probably come up with a very good answer, won’t you?
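A minimal sketch of that lattice-plus-shortcuts combination is the Watts–Strogatz construction: start from a ring lattice and rewire a small fraction of its edges to random endpoints. The function below is my own toy version (assuming k much smaller than n), not code from any of the papers mentioned here.

```python
import random

def watts_strogatz(n, k, beta, seed=0):
    """Ring lattice on n vertices, each connected to its k nearest
    neighbours (k even), where each lattice edge is rewired to a
    random endpoint with probability beta. Assumes k << n so that
    rewiring always finds a free endpoint."""
    rng = random.Random(seed)
    adj = {u: set() for u in range(n)}
    for u in range(n):                      # build the ring lattice
        for step in range(1, k // 2 + 1):
            v = (u + step) % n
            adj[u].add(v)
            adj[v].add(u)
    for u in range(n):                      # rewire each lattice edge once
        for step in range(1, k // 2 + 1):
            v = (u + step) % n
            if v in adj[u] and rng.random() < beta:
                w = rng.randrange(n)
                while w == u or w in adj[u]:  # avoid self-loops and duplicates
                    w = rng.randrange(n)
                adj[u].discard(v)
                adj[v].discard(u)
                adj[u].add(w)
                adj[w].add(u)
    return adj
```

With a small beta, the graph keeps the high clustering of the lattice while the few rewired long-range edges collapse the average distance – exactly the small-world regime described above.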

Do we always have optimal answers from local network information?

That’s the question that Jon Kleinberg tries to answer in the article “The Small-World Phenomenon: An Algorithmic Perspective”. He claims to have found the class of graphs for which such local information ensures an optimal decision. To be honest, I haven’t read the entire paper yet (these are really busy times for me), but it sounds really interesting and I leave that task to the curious reader (let me know how it goes).
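For the curious reader, the flavor of the setting can be sketched like this: on a lattice where long-range contacts are drawn with probability proportional to d^-2, greedy forwarding – always passing the message to the known contact closest to the target – finds short paths using local information only. The toy version below is my own simplification: it resamples the long-range contact at every step instead of fixing one contact per node as in the actual model.

```python
import random

def greedy_route(n, source, target, seed=0):
    """Greedy routing on an n x n lattice. Each node knows its grid
    neighbours plus one long-range contact sampled with probability
    proportional to (Manhattan distance)^-2. Returns the hop count."""
    rng = random.Random(seed)

    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def long_range_contact(u):
        # sample a contact v with Pr[v] proportional to dist(u, v)^-2
        nodes = [(x, y) for x in range(n) for y in range(n) if (x, y) != u]
        weights = [dist(u, v) ** -2 for v in nodes]
        return rng.choices(nodes, weights=weights)[0]

    hops, current = 0, source
    while current != target:
        x, y = current
        known = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < n and 0 <= y + dy < n]
        known.append(long_range_contact(current))
        # greedy decision: forward to the known node closest to the target
        current = min(known, key=lambda v: dist(v, target))
        hops += 1
    return hops
```

Since some grid neighbour always gets strictly closer to the target, the routing terminates; the interesting part, which Kleinberg proves, is that the d^-2 distribution is the one exponent for which the expected hop count is polylogarithmic.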

This post was prompted by the INFORMS Blog Challenge of July: OR and Social Networking. You can check all the submitted entries by August 4th.

The model as a spell and the solver as a wand: O.R. magic for a muggles’ world

Who cares about O.R. magic?

When I once told my sister that my former job was to put more fridges on each truck to save delivery trips (something that many of my colleagues consider a joyful job), she couldn't have been less interested. Maybe I should have tried magic metaphors instead, describing models as spells, solvers as wands and programming contests as Quidditch games for students. Apart from those interested in profits and costs, operations research practice sounds really boring to the general audience.

Who believes in O.R. magic?

We are surrounded by optimization problems that usually go overlooked. As a result, tackling one of them might look like plain witchcraft to an outsider: how come costs were reduced by 5% or profit raised by 20% just like that? Of course, such witchcraft may need to compete with quack consultants selling a single system supposedly capable of solving whatever problem the client has. Apart from a share of executives and engineers, O.R. seems to be hovering between unfamiliarity and suspicion for many, which means a lot of lost opportunities.

How to bring them in or back to OuR magic?

Paul Rubin had many insights about that: he presented a very sound analysis about “hitting muggles” on his blog, targeting high-level executives, business students and small organizations. Indeed, I’ve been in training classes at Petrobras along with many young economists who have just been hired; most know little about operations research but are very interested in it. I hope they enjoy the O.R. lectures to be held.

Nevertheless, I would like to advocate a holistic education about O.R. for engineers and IT professionals. O.R. spans fields so diverse that practitioners of some are not fully aware of the existence of others that would suit their needs. Moreover, complex software systems are very likely to require O.R. at some point, but system analysts and system architects might not be aware of that. In both cases, an interesting application – if not ignored – might be approached with the wrong spell or wand! No matter how much I believe in magic, I know that I’m a muggle sometimes.

Some concluding remarks about CPAIOR

CPAIOR 2011 is coming to an end. Despite not being the first international conference I have ever been to, it was the most interesting so far. Among the reasons for that is the fact that it is very focused compared to other O.R. meetings. I had the opportunity to meet many people I only knew as authors of papers I had read. I also met many young researchers like me, some of them as excited as I am about working in the industry with O.R. Some of those people were impressed by how far away I had come from. As a matter of fact, with the internet (and free access to articles) it is possible to be a researcher anywhere in the world, even if your country does not have a tradition in the topic you work on. However, talking to experienced people in such environments saves a lot of time and helps you get further.

As for the organization, the Zuse Institute staff did a great job at everything. They managed to have something going on every night after the conference presentations. I wish I could attend the informal meeting after the conference today, but I had already bought tickets to leave Berlin in advance.


Which problems could a million CPUs solve? (More about CPAIOR)

I’ve just attended a presentation by Thorsten Koch entitled “Which Mixed Integer Programs could a million CPUs solve?” at CPAIOR 2011. As any presentation of a challenging research topic would, it left more questions than answers in the end. Let’s go through part of the rationale behind it.

As many people have already noticed, the clock frequencies of individual processors are no longer increasing due to physical restrictions of the current technology. Instead, our computers are getting more and more cores. While the performance improvement is still noticeable for a standard user, who would otherwise have many different applications handled by the same processor, having more cores does not help a single piece of software if it is not designed to take advantage of them.

In the case of optimization applications, that can be even more dramatic, since solvers like CPLEX do a lot of processing at the root node, which is essentially sequential. Koch suggests that algorithms like the Interior Point Method may take over part of Simplex’s share in the future, as has happened with parallel algorithms for matrix factorization. Hence, it seems that algorithm design researchers will have an increased budget in the forthcoming years.

First impressions from CPAIOR 2011

CPAIOR is an interesting environment for gathering researchers from diverse but close areas, ranging from mathematical programming to artificial intelligence. That is reflected in the four master classes organized, all of them related to search: the first about mixed-integer programming (MIP), which I missed for being late; then constraint programming (CP) with Gilles Pesant (I arrived in the middle of it); satisfiability (SAT) with Marijn Heule; and now A* with Nathan Sturtevant. The two latter speakers were invited for the sake of bringing something different to the conference.

Although SAT can be described as a proper subset of CP, research on that topic has a very different focus, since it is concerned with only one type of constraint (Boolean clauses) and a two-valued domain (true and false). With such a strict focus, the SAT community has been presenting outstanding results in recent years. In fact, there are people who envy how fast SAT solvers have been progressing compared to CP ones. However, that’s the price of being a generalist.

Nathan Sturtevant presented an interesting animated example that explains why Dijkstra’s shortest path algorithm can be very slow in certain cases, endorsing A* search.
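I don’t have his animation at hand, but the effect is easy to reproduce: the sketch below (my own illustrative grid setting, not Sturtevant’s example) uses a single best-first skeleton where Dijkstra is simply A* with a zero heuristic. On an open grid, the Manhattan heuristic steers the search towards the goal, while Dijkstra expands nodes in every direction.

```python
import heapq

def grid_search(width, height, start, goal, heuristic):
    """Best-first search on a 4-connected grid with unit edge costs.
    With a zero heuristic this is Dijkstra's algorithm; with an
    admissible heuristic such as Manhattan distance, it is A*.
    Returns (shortest path cost, number of expanded nodes)."""
    # queue entries are (f, -g, node): ties on f prefer deeper nodes
    frontier = [(heuristic(start, goal), 0, start)]
    cost = {start: 0}
    expanded = 0
    while frontier:
        _, neg_g, node = heapq.heappop(frontier)
        g = -neg_g
        if g > cost[node]:
            continue  # stale queue entry, already improved
        expanded += 1
        if node == goal:
            return g, expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and g + 1 < cost.get(nxt, float("inf"))):
                cost[nxt] = g + 1
                heapq.heappush(
                    frontier, (g + 1 + heuristic(nxt, goal), -(g + 1), nxt))
    return None, expanded

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

Running both on a 50 x 50 empty grid from corner to corner, the two searches find the same optimal cost, but Dijkstra expands nearly every cell while A* expands only a thin band along the diagonal.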

PS: Lunch and coffee breaks are very tasty. I can’t wait for the barbecue and the gala dinner.


Constraint Programming and Adaptiveness at CPAIOR

On May 25th, I will present an extended abstract at CPAIOR, which will be held in Berlin next week. It is about characterizing adaptive search methods for constraint programming. I have had the support of many colleagues along the way, who will be acknowledged at the end of the presentation. I’m also excited about the opportunity to meet other people with the same interests there, including those I only know as authors of papers I’ve read.

My presentation topic has nothing to do with my master’s thesis. The idea came up during the last summer (of the Southern hemisphere), right in time to submit a short paper to CPAIOR at the end of January. Despite the rejection, two of the reviewers suggested that a longer version would be worth a try. So I took one step back and submitted an extended abstract, since the CPAIOR environment seems to be the most appropriate one to discuss the topic. Besides, all 2011 master classes will be about search. I hope to post next week about how it went. Click here to check the conference program.

How Analytics makes Operations Research the next big thing

Engineers enjoy laughing at buzzwords that they don’t sell. Despite that, some buzzwords represent important paradigm shifts. They might not propose any technical novelty, but they do contribute to empowering our methodologies by valuing the presence of certain skilled specialists in large-scale projects. Analytics is one of them: it represents the application of IT to support business decision processes. This post aims at showing that its existence can help leverage O.R. practice in the industry.

The O.R. filet: there is no such thing as a free lunch!

Who has never wondered about that dream job in which all you have to do is what you do for fun? Suppose that you are an O.R. analyst hired by a company that provides you with perfect data and a well-defined problem that you know how to tackle. And that’s not all: they do not underestimate the amount of effort that the project will demand from you and your co-workers. In such a perfect world, you just pick that traveling salesman or bin packing problem with those idealized instances and spend some time experimenting with your favorite techniques until you are satisfied with the results. You would probably finish your work very early and have the rest of the day to share a beer and French fries with your friends at the bar (if that happens in São Paulo).

Back to real life realm: from problem solver to problem finder

Unfortunately, there is a huge gap between being hired and possessing that well-defined problem and that perfect data. That is, if you manage to reach that point at all. If companies had all of that figured out already, they would probably have gone further with an in-house approach to their decision problems, in which case the benefit of an external OR consultant’s work would often be quite modest. Hence, one must keep in mind that the work is not only about solving an optimization problem but rather about helping the company understand what the problem is and how to collect data to solve it properly.

Some interesting discussions about those issues have been raised recently by a couple of OR professionals, Patricia and Robert Randall, on their blog Reflections on Operations Research. They have a blog post about data cleanup and two other posts about understanding what the right solution for the client’s problem is (by the way, I’m waiting for the promised sequel – check the first and the second posts).

And then the O.R. team becomes the Analytics division…

What I described above reflects the change that is going on in industry, including my workplace. The O.R. team is no longer called in once someone “magically” finds an optimization problem that must be tackled within an IT project. By “magic”, I mean that someone working on a project knew about O.R. by chance and decided to invite an O.R. analyst to check it. Instead, new projects are supposed to pass through a preliminary assessment of the need for an O.R. approach. The analytics professional comes into the scene to complement the team of software architects, software engineers, data modelers, project managers and stakeholders of any non-ordinary project. The role of that professional is to understand how the system can be used to support business decision-making and to define whether statistics, data mining or operations research tools are required to accomplish that. Such an assessment prevents things from passing uncaught or misunderstood and, of course, creates lots of interesting opportunities for O.R. professionals, both during the assessment and later in the project development phase. As a matter of fact, we have plenty of people ready for the job, as I mentioned last month in a post about the O.R. job market in Brazil.

A gain-gain scenario: let’s spread the word about Analytics to empower O.R.!

An Analytics assessment of strategic projects would endorse a broader application of Operations Research, which usually means maximizing profits and reducing costs. Moreover, there is a huge workforce available for the demand that such a paradigm shift would create, including me and probably you. So let’s make that happen!

This post is my contribution to the INFORMS’ blog challenge of May: O.R. and Analytics. The INFORMS’ blog challenge consists of a monthly topic about O.R. that is proposed at the INFORMS’ blog. If you happen to write about the topic of the month, send an e-mail to them to get your post mentioned.

Revisiting operations research elements – part II: results, research… and failure!

This post resumes the discussion about key concepts of operations research started previously in this blog. However, instead of focusing only on the meaning of terms, this time I’d like to discuss which results are worth publishing and the unacknowledged importance of failure.

Are you the “brand new champion”?

“I never knew a soul who ever took a licking.
My friends have all been champions at everything.”
(Fernando Pessoa, Portuguese poet)

For a while, I thought that the idea described by Fernando Pessoa explained what happens in O.R.: an article is only worth publishing if it has noticeable results. However, which results are we talking about? In “Testing Heuristics: We Have It All Wrong”, John Hooker raises many concerns about how results-driven approaches have been transforming research into plain development. He summarized most of them with the maxim “The tail wags the dog as problems begin to design algorithms”. In fact, such concerns were spreading widely by that time. In the article “Generating Hard Satisfiability Problems”, Bart Selman and others warned that bad benchmarks may lead to wrong conclusions about how hard a problem is and, conversely, about how good an algorithm for such a problem is. Moreover, theoretical results such as the No Free Lunch theorems state that there is no best overall algorithm and that there are always weaknesses to be acknowledged. Otherwise, it would not make sense to talk about a subject called meta-learning, which aims at selecting the most appropriate algorithm to solve each problem instance.

I remember hearing about the importance of reporting bad results in experimental physics classes, but that was mostly lost in my mind after my junior and sophomore years. The big picture suggested by Selman’s group and Hooker is that many approaches in the area strive to be claimed the best one to solve a given problem at the cost of biasing the tests towards situations that are better suited to what is being proposed, intentionally or not. Besides, hardware differences, coding abilities and several other issues might influence what is being presented. Recalling the metaphor I used in the previous post, one must not confuse the power of an idea with the performance of a source code run against a given benchmark set on a given machine architecture. For instance, if someone is proposing an algorithm that explores fewer branches during search, it would be better to measure the number of branches instead of the running time, since a bad choice of data structures might hinder an adequate comparison of the different techniques. Hence, one must strive to write an article that recognizes when its solution fails along with when it succeeds, whilst diminishing perturbation factors in its results.
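A minimal sketch of what that looks like in practice (a toy backtracking solver for the n-queens problem, of my own choosing, not any algorithm from the papers above): the solver reports the number of branches explored alongside the wall time, so two implementations can be compared on algorithmic effort even if their data structures differ in raw speed.

```python
import time

def solve_queens(n):
    """Count all n-queens solutions by backtracking, reporting the
    number of branches explored and the elapsed wall time. Branches
    are a machine-independent measure of search effort; wall time
    also reflects hardware and data-structure choices."""
    branches = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal branches
        if row == n:
            return 1
        found = 0
        for col in range(n):
            if col in cols or (row + col) in diag1 or (row - col) in diag2:
                continue  # conflict: prune without branching
            branches += 1
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            found += place(row + 1)
            cols.remove(col); diag1.remove(row + col); diag2.remove(row - col)
        return found

    start = time.perf_counter()
    solutions = place(0)
    elapsed = time.perf_counter() - start
    return solutions, branches, elapsed
```

Two competing pruning rules can then be compared by their branch counts, which stay the same across machines, while the timings would vary with the hardware and the implementation.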

After all, operations research is research, right?

What would you answer if asked that? I let it go unanswered last time. Most often, O.R. does involve scientific research: if the problem being solved were simple enough to skip that part, it would be unnecessary to have someone thinking about it. Since I was asked that while working on a long-term project, it was quite true in my case. Notwithstanding, even if there is no scientific research involved, O.R. is still a sort of research: one wants to find the most appropriate way of performing certain operations. Therefore, some of the subtleties and misconceived concepts mentioned previously come from the differences between scientific research and operations research: research involves a problem, solutions, and it may fail sometimes. Indeed, the word research in O.R. is the same one used to denote research in terms like R&D in Brazil (pesquisa), Portugal (investigação) and France (recherche). Not surprisingly, the list of ambiguous terms is far from complete. For instance, programming means both planning something and the act of producing computer code, as stressed by Lustig and Puget in the article “Program Does Not Equal Program: Constraint Programming and Its Relationship to Mathematical Programming”, in which they explain why it means both things for mathematical programming but just the latter for constraint programming.

Yet another conclusion

I decided to place the most important conclusions along with the most confusing term of the previous post – solution – but I can repeat them once more. Being aware of the differences between the common O.R. terminology and what researchers adopt elsewhere is important to be more effective when spreading the word about what you did beyond your close peers. In these two posts, I aimed at sharing some thoughts that I would have been glad to hear earlier. To succeed at that, I had the valuable help of my fiancée Sabrina, who revised this text so that I would not commit the same mistakes I was pointing out. It might still not be completely accurate or clear-cut, since I’m just a graduate student striving to figure out my area.