A glimpse at winning the INFORMS Undergraduate Operations Research Prize: Kayla Cummings (2018) and Wes Gurnee (2020)

This fall I had the pleasure of hosting two winners of the INFORMS Undergraduate Operations Research Prize in my Prescriptive Analytics course at Bucknell University (ANOP 370). These talks are now available online.

Kayla Cummings (2018 winner) talked about vaccine pricing: https://mediaspace.bucknell.edu/media/CDC+as+a+Strategic+Agent+in+Public+Sector+Vaccine+Pricing+-+Kayla+Cummings%2C+MIT%2C+09+30+2021/1_pmu631vu/185503823

Wes Gurnee (2020 winner) talked about fairness-optimized political districts: https://mediaspace.bucknell.edu/media/Fairmandering+Generating+Fairness+optimized+Political+Districts+-+Wes+Gurnee%2C+MIT%2C+11+11+2021/1_yu6gcqsm/185503823

Cummings has since extended her work to also analyze the pricing of COVID-19 vaccines. 

I found it very interesting when she mentioned that the hardest part of this extension was finding the right data: it took them six months, yet it only shows up in the appendix of the paper.

You can find more comments on her talk in the thread that I posted on Twitter during her presentation.

Gurnee emphasized the benefits of decompositions in optimization and of leveraging symmetries in how districts are partitioned in order to make the algorithm scalable.

I also liked the comments about how challenging it is to sell ideas like this to organizations.

You can find more comments on his talk in the thread that I posted on Twitter during his presentation.

Both of them are currently pursuing their PhDs at the Massachusetts Institute of Technology’s Operations Research Center.

Whoever hires them when they finish will be very lucky: it was a pleasure to learn about their work when judging the award, and a blast to bring them in to talk about it now!

Carolyn Mooney’s talk at Bucknell — “Decisions as Code: Systems Thinking, Optimization, and Computer Science”

During her recent visit to Bucknell, Carolyn Mooney from nextmv made an excellent case for rethinking the software used for automating decisions.

Her talk is now available online at the following link:
Decisions as Code: Systems Thinking, Optimization, and Computer Science

Here are some of my takeaways from her talk:

  • Current tools require a long development cycle with a multi-disciplinary team that includes optimization experts, who may not be among the first hires of a startup.
  • These tools may be difficult to integrate due to the misalignment with contemporary software engineering practices.
  • The outcomes of these tools may be hard to interpret and test, in particular if they are not fully reproducible.
  • It is perhaps lost on optimization experts that the word “constraint” carries a very negative connotation for non-experts.
  • Representing optimization problems in conventional matrix notation creates yet another barrier for developers who are not optimization experts; it is more intuitive to model states and transitions through decision diagrams instead.
  • I love the idea of using “decision engineer” as the title for a software engineer who understands optimization. In fact, Matt Turck claims that a single trend is encapsulated by big data (2013), machine learning and artificial intelligence (2017), and automation (2020).
  • All of the points above are even more crucial given that most data science projects never make it into production: they end up developed but not used in practice.

The following Twitter thread contains my notes during the talk.

When do you need mathematical optimization instead of machine learning?

(Also posted on LinkedIn)

Here are some points that I have shared with my Prescriptive Analytics students at Bucknell University based on a piece by Gurobi Optimization CEO Ed Rothberg for Forbes:

1) Many applications of machine learning tend to be consumer-facing, whereas mathematical optimization is usually applied within businesses to seamlessly optimize their processes without consumers being aware of it.

2) Mathematical optimization can be used to develop a “digital twin” of some organizational processes in situations where the number of possible solutions could be very large and subject to changes, especially in situations where patterns from historical data could be disrupted while the process in the organization remains the same.

3) The initial time and effort required from stakeholders to build a mathematical optimization model can be greater than for a machine learning model, since optimization models require a deeper understanding of the business processes involved.

4) Machine learning can leverage what is called “big data” to learn from the past and make predictions about the future, but it is vulnerable to model drift: the predictions become less accurate if, for example, there are changes in the patterns that were previously observed in the data.

My neural network is a piecewise linear regression, but which one? (A glimpse of our AAAI-20 paper on Empirical Bounds)

One of the most beautiful things about the theory of computation is, in my opinion, how you start from something as simple and intuitive as a finite-state machine and then, with a few more bells and whistles, end up with something as complex and powerful as a Turing machine. When someone mockingly says that machine learning or deep learning is just regression, often implying merely linear regression, they are missing a jump in complexity similar to the one from finite-state machines to Turing machines.

In this post, we will look at rectifier networks: one of the simplest yet most expressive types of artificial neural network. A rectifier network is made of Rectified Linear Units, or ReLUs. Each ReLU defines a linear function on its inputs, which is then composed with a non-linear function taking the maximum between 0 and the value of that linear function. If the output is positive, we say that the input has activated the ReLU. At the microscopic level, one would be correct to think that adjusting the coefficients of the linear function associated with each ReLU during training is akin to linear regression (even if puzzled that the negative outputs produced when a ReLU is not activated are thrown away).


However, what happens at the macroscopic level when many ReLUs are put together is very different. If we were to combine units defining just linear functions, we would get a single linear function as a result. But as we turn negative outputs into zeroes, we obtain different linear functions for different inputs, and consequently the neural network models a piecewise linear function instead. That alone implies a big jump in complexity. Intuitively, we may guess that having a function with more pieces would make it easier to fit the training data. According to our work so far, that seems correct.
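To make this concrete, here is a minimal sketch of a tiny rectifier network with one-dimensional input. The weights are illustrative (not taken from any paper); sweeping the input shows that the activation pattern is constant inside each linear region, so the network computes a piecewise linear function:

```python
import numpy as np

# A tiny rectifier network with scalar input: one hidden layer of 3 ReLUs
# followed by a linear output. Weights are illustrative only.
W1 = np.array([1.0, -1.0, 2.0])   # hidden weights
b1 = np.array([0.0, 1.0, -1.0])   # hidden biases
w2 = np.array([1.0, 1.0, 1.0])    # output weights

def activation_pattern(x):
    """Which ReLUs are activated (pre-activation > 0) at input x."""
    return tuple((W1 * x + b1 > 0).astype(int))

def forward(x):
    h = np.maximum(0.0, W1 * x + b1)  # ReLU: max(0, linear function)
    return float(w2 @ h)

# Sweep the input: the activation pattern only changes at breakpoints
# where some pre-activation crosses zero, so the network is piecewise
# linear with one linear piece per pattern.
xs = np.linspace(-3, 3, 601)
patterns = {activation_pattern(x) for x in xs}
print(len(patterns))  # prints 4: the breakpoints are x = 0, 0.5, and 1
```

With these weights the three breakpoints split the interval into four pieces, so four distinct activation patterns (linear regions) are observed.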



Here is what we found out in prior work published at ICML 2018:

  1. The maximum number of these pieces, which are also called linear regions, may grow exponentially with the depth of the neural network. By extending prior results along the same lines, in which a zigzagging function is modeled with the network, we showed that a neural network with one-dimensional input and L layers having n ReLUs each can define a maximum of (n+1)^L linear regions instead of n^L.
  2. Surprisingly, however, too much depth may also hurt the number of linear regions. If we distribute 60 ReLUs uniformly among 1 to 6 layers, the best upper bound on the number of linear regions is attained by different depths depending on the input dimension. Because this upper bound is tight for a single layer, a shallow network may define more linear regions if the input dimension is sufficiently large.
  3. If no layer in a small neural network trained on the MNIST dataset is too narrow (3 units or less), we found that the accuracy of the network relates to the number of linear regions. However, it takes a long time to count linear regions even in small neural networks (for example, sometimes almost a week with just 32 ReLUs).

In the paper that we are presenting at AAAI 2020, we describe a fast estimate on the number of linear regions (i.e., pieces of the piecewise linear function) that could be used to measure larger neural networks. We can map inputs to outputs of a rectifier network using a mixed-integer linear programming (MILP) formulation that also includes binary (0-1) variables denoting which units are activated or not by different values for the input. Every linear region corresponds to a different vector of binary values denoting which units are activated, so we just need to count all binary vectors that are feasible. The formulation below connects inputs to the output of a single ReLU.
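The original figure is not reproduced here. As a sketch, a standard big-M formulation connecting the input x to the output y of a single ReLU with weights w, bias b, and a binary activation variable z looks as follows, where M is assumed to be a valid bound on the magnitude of the pre-activation (the paper's exact formulation may differ):

```latex
% z = 1 if the unit is activated, z = 0 otherwise;
% M bounds |w^T x + b| over the input domain.
\begin{align*}
  y &\ge w^\top x + b          \\
  y &\le w^\top x + b + M(1-z) \\
  y &\le M z                   \\
  y &\ge 0, \quad z \in \{0,1\}
\end{align*}
```

When z = 1, the constraints force y = wᵀx + b (and hence a nonnegative pre-activation); when z = 0, they force y = 0 and wᵀx + b ≤ 0.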


What makes counting so slow, besides the fact that the number of linear regions grows very fast with the size of the neural network, is that there is little research on counting solutions of MILP formulations. And that is justifiable: research and development in operations research (OR) methodologies such as MILP focus on finding a good (and preferably optimal) solution as fast as possible. Meanwhile, counting solutions of propositional satisfiability (SAT) formulas, which involve only Boolean variables, is a widely explored topic in artificial intelligence (AI). For example, we can test multiple times how many additional clauses it would take to make a SAT formula unsatisfiable and then draw conclusions about the total number of solutions. There is no reason why the same methods could not be applied to counting solutions over the binary variables of MILP formulations.


In other words, we started with an AI problem of counting linear regions, moved to an OR problem of counting solutions of an MILP formulation, and came back to AI to use approximate model counting methods that are typically applied to SAT formulas. However, there is something special about MILP solvers that can make approximate counting more efficient: callback functions. Most of the SAT literature is based on the assumption that testing the satisfiability of a formula with every set of additional clauses requires solving that formula from scratch. With a lazy cut callback in an MILP solver, we can add more constraints as soon as a new solution is found. Consequently, we reduce the number of times that we have to prove that a formulation is infeasible, which is computationally very expensive.


By adding random parity (XOR) constraints involving a few binary variables to iteratively restrict the number of solutions of the MILP formulation, we obtained probabilistic lower bounds on the number of linear regions that are very similar in shape to the actual figures. On the left of the figure below, we are analyzing 10 networks trained on the MNIST dataset for every possible combination of 22 ReLUs distributed in two hidden layers followed by 10 ReLUs of output. The black and red curves on top are upper bounds, the black dots are averages of the exact number of linear regions, and the line charts below are lower bounds with probability 95%. On the right of the figure below, we see that the probabilistic lower bounds are orders of magnitude faster to calculate if the actual number of linear regions is large (at least 1 million each).
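The following toy sketch illustrates why random XOR constraints yield a count estimate. It uses pure Python with brute-force enumeration instead of an MILP solver, and simple random parities instead of the pairwise-independent hash families used in the model counting literature: each parity constraint halves the feasible set in expectation, so the number of constraints the set survives approximates the base-2 logarithm of its size.

```python
import itertools
import random

random.seed(0)  # deterministic for illustration

def random_xor(n):
    """A random parity constraint: a subset of variables and a target bit."""
    subset = [i for i in range(n) if random.random() < 0.5]
    return subset, random.randrange(2)

def satisfies(vec, xor):
    subset, parity = xor
    return sum(vec[i] for i in subset) % 2 == parity

def estimate_log2_count(solutions, n, trials=30):
    """Median number of random XOR constraints that can be added while at
    least one solution survives; each XOR halves the feasible set in
    expectation, so this approximates log2(len(solutions))."""
    results = []
    for _ in range(trials):
        surviving = list(solutions)
        m = 0
        while surviving:
            surviving = [v for v in surviving if satisfies(v, random_xor(n))]
            if surviving:
                m += 1
        results.append(m)
    results.sort()
    return results[len(results) // 2]  # median over trials

# Toy "feasible set": all 8-bit vectors with exactly three ones, C(8,3) = 56.
n = 8
solutions = [v for v in itertools.product((0, 1), repeat=n) if sum(v) == 3]
estimate = estimate_log2_count(solutions, n)
print(estimate)  # a rough estimate of log2(56) ≈ 5.8
```

In our setting the "solutions" are the feasible activation vectors of the MILP formulation, and the feasibility tests are delegated to the solver rather than enumerated.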


If you are curious about our next steps, take a look at our upcoming CPAIOR 2020 paper, where we look at this topic from a different perspective by asking the following: do we need a trained neural network to be as wide or as deep as it is to do what it does? If a network is trained with L1 regularization, the answer is probably not! We use MILP to identify units that can be removed or merged in a neural network without affecting the outputs that are produced. In other words, we obtain a lossless compression.

Where to find our work at AAAI 2020:

  • Spotlight Presentation: Sunday, 11:15 – 12:30, Clinton (Technical Session 9: Constraint Satisfaction and Optimization)
  • Poster Presentation: Sunday, 7:30 – 9:30, Americas Halls I/II

Links to our papers:

  1. Bounding and Counting Linear Regions of Deep Neural Networks (ICML 2018)
  2. Empirical Bounds on Linear Regions of Deep Rectifier Networks (AAAI 2020)
  3. Lossless Compression of Deep Neural Networks (CPAIOR 2020)

All papers are joint work with Srikumar Ramalingam. Christian Tjandraatmadja is a co-author in the ICML 2018 paper. Abhinav Kumar is a co-author in the CPAIOR 2020 paper.


Summer 2020 schools on algorithms, data science, machine learning, networks, optimization, transportation, and other relevant topics in operations research

This post covers relevant schools happening between April of 2020 and September of 2020. If you know of other schools that are not listed here, please reach out to me. Like in previous semesters, I will keep updating this post and I may add some schools with past deadlines as reference for readers looking for schools in the next years.

Previous posts: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, Winter 2018 / 2019, Summer 2019, and Winter 2019 / 2020.

Spring School on Mathematical Statistics
March 30 – April 3     (deadline: January 24)
Leipzig, Germany

LMS Research School: Graph Packing
April 19-25     (deadline: January 31)
Eastbourne, England
* Included on January 23

NATCOR Heuristics and Stochastic Algorithms
April 20-24     (deadline not posted)
Nottingham, England

ISCO 2020 Spring School: Data Science, Machine Learning and Optimization
May 2-3     (deadline not posted)
Montreal, QC, Canada

Complex Networks: Theory, Methods, and Applications
May 18-21     (deadline: February 23)
Como, Italy

Mathematical Modelling, Numerical Analysis and Scientific Computing
May 24-29     (deadline: April 30)
Kacov, Czech Republic

Machine Learning Crash Course (MLCC 2020)
May 25-29     (deadline: April 3)
Oslo, Norway
* Included on February 28

Simons Institute Workshop: Statistics in the Big Data Era
May 27-29     (deadline: February 1st for travel support)
Berkeley, CA, USA

Column Generation 2020
May 31 – June 3     (deadline not posted)
Sainte-Adèle, QC, Canada

Summer School on Modern Optimization for Transportation
June 1-5     (deadline not posted)
Frejus, France

NATCOR Convex Optimization
June 1-5     (deadline not posted)
Edinburgh, Scotland

Risk Measurement and Control: Fintech and Digital Banking
June 3-6     (deadline not posted)
Rome, Italy

IPCO (Integer Programming and Combinatorial Optimization) Summer School
June 6-7     (deadline not posted)
London, England

Structural Graph Theory
June 7-12     (deadline not posted)
Murol, France
* Included on February 3 by suggestion of Aurélie Lagoutte

ICAPS-ICRA Summer School on Plan-Based Control for Robotic Agents
June 8-12     (deadline: March 31)
Paris, France

Zaragoza Logistics Center PhD Summer Academy
June 8-19     (deadline: June 1st)
Zaragoza, Spain

Summer School in Logic and Formal Epistemology
June 8-26     (deadline: March 13)
Pittsburgh, PA, United States
* Included on February 22

Hausdorff School Algorithmic Data Analysis
June 15-19     (deadline: March 29)
Bonn, Germany
* Included on February 22

Research school in computational complexity
June 15-19     (deadline not posted)
Paris, France
* Included on February 22 by suggestion of Ludmila Glinskih

Simulation Summer School (S3)
June 21     (deadline: February 15)
State College, PA, United States
* Included on January 27

DTU CEE Summer School 2020: Advanced Optimization, Learning, and Game‐Theoretic Models in Energy Systems
June 21-26     (deadline: January 31)
Copenhagen, Denmark

3rd International Summer School on Artificial Intelligence and Games
June 22-26     (deadline: March 1st for early registration)
Copenhagen, Denmark

Swedish Summer School in Computer Science (S3CS 2020): The Method of Moments in Computer Science and Beyond & Polyhedral Techniques in Combinatorial Optimization
June 28 – July 4     (deadline: February 11)
Stockholm, Sweden

Machine Learning Summer School – Germany
June 28 – July 10    (deadline: February 11)
Tubingen, Germany

Regularization Methods for Machine Learning (RegML)
June 29 – July 3    (deadline: March 20)
Genova, Italy

Data Science Summer School (DS3)
June 29 – July 3    (deadline not posted)
Palaiseau, France

Tsinghua University 2020 Deep Learning Summer School
June 29 – July 12    (deadline: April 14)
Beijing, China

International School of Mathematics “Guido Stampacchia”: Graph Theory, Algorithms and Applications
July 1-8     (deadline: April 10)
Erice, Italy

Special Interest Group on Genetic and Evolutionary Computation (SIGEVO) Summer School
July 5-9     (deadline: April 3)
Cancun, Mexico
* Included on January 21 by suggestion of Juergen Branke

4th Summer School on Cognitive Robotics
July 6-10     (deadline not posted)
Brisbane, Australia
* Included on January 23 by suggestion of Philip Kilby

Eastern European Machine Learning Summer School: Deep Learning and Reinforcement Learning
July 6-11     (deadline: March 27)
Krakow, Poland

Gdańsk Summer School on Algorithms for Discrete Optimization and Deep Learning
July 6-12     (deadline: April 30 for early registration)
Gdańsk, Poland
* Included on January 23 by suggestion of Georg Anegg

EURO PhD Summer Schools on Multiple Criteria Decision Aiding / Making (MCDA / MCDM)
July 6-17     (deadline: February 1st)
Ankara, Turkey

Bocconi Summer School in Advanced Statistics and Probability
July 6-17     (deadline: March 31)
Como, Italy

EADM Summer school on Learning and Decision Making
July 8-15     (deadline not posted)
Barcelona, Spain
* Included on February 22

EURO PhD School on Data Driven Decision Making and Optimization
July 10-19     (deadline: January 15)
Seville, Spain

3rd Advanced Course on Data Science & Machine Learning (ACDL 2020)
July 13-17    (deadline: March 31)
Siena, Italy
* Included on February 28

EURO PhD School on Sustainable Supply Chains
July 19-23     (deadline: January 20)
Lisbon, Portugal

Latin-American Summer School in Operational Research (ELAVIO)
July 19-24     (deadline: February 29)
Arequipa, Peru
* Included on January 20 by suggestion of Rodrigo Linfati

Gene Golub SIAM Summer School 2020: Theory and Practice of Deep Learning
July 20-31     (deadline: February 1)
Muizenberg, South Africa

Argonne Training Program on Extreme-Scale Computing
July 26 – August 7     (deadline: March 2)
St. Charles, IL, United States

4th International Summer School on Deep Learning (DeepLearn 2020)
July 27-31     (deadline: January 26 for early registration)
Leon, Mexico

Metaheuristics Summer School: Learning and Optimization from Big Data
July 27-31     (deadline: March 5)
Catania, Italy

4th Modelling Symposium: Introducing Deep Neural Networks
July 27-31     (deadline: March 27)
Magdeburg, Germany

Advanced Methods in Operations Research for Logistics and Transportation
July 27-31     (deadline not posted)
Bogota, Colombia

Digital Transformation of Mobility Systems – OR Models and Methods
July 27-31     (deadline: March 31)
Munich, Germany
* Included on February 28 by suggestion of Layla Martin

Deep Learning and Reinforcement Learning (DLRL) Summer School 2020
July 29 – August 6     (deadline not posted)
Montreal, QC, Canada

Machine Learning Summer School – Indonesia
August 3-9     (deadline: April 30)
Bandung, Indonesia

The Cornell, Maryland, Max Planck Pre-doctoral Research School 2020
August 4-9     (deadline: February 15)
Saarbrucken, Germany
* Included on January 27 by suggestion of Alex Efremov

Oxford Machine Learning School
August 17-22     (deadline: April)
Oxford, England
* Included on January 27

Prague Summer School on Discrete Mathematics
August 24-28     (deadline: March 15)
Prague, Czech Republic

Simons Institute Workshop: Probability, Geometry, and Computation in High Dimensions Boot Camp
August 24-28     (deadline not posted)
Berkeley, CA, USA

Simons Institute Workshop: Theory of Reinforcement Learning Boot Camp
August 31 – September 4     (deadline not posted)
Berkeley, CA, USA

Summer School on Machine Learning and Big Data with Quantum Computing (SMBQ 2020)
September 7-8     (deadline not posted)
Porto, Portugal
* Included on February 28

Combinatorial Optimization at Work (CO@Work)
September 14-26     (deadline not posted)
Berlin, Germany

NATCOR Forecasting and Predictive Analytics
September 21-25     (deadline not posted)
Lancaster, England

Simons Institute Workshop: Deep Reinforcement Learning
September 28 – October 2     (deadline not posted)
Berkeley, CA, USA

A Word Cloud to Remember Shabbir Ahmed

(Also posted in the INFORMS 2019 blog.)

The optimization community had some big losses this year with the passing of giants such as Shabbir Ahmed and Egon Balas. I only met Shabbir in person in some of the conferences that I attended during my doctoral years. I still remember how much time he spent talking to me the first time that I presented a poster at the MIP workshop. He was always accessible, engaging, and willing to offer some advice.

When a big conference is about to start, Shabbir comes to my mind because of his word clouds of abstracts. For example, precisely two years ago he observed that “model” was used more often than “data” in the INFORMS 2017 abstracts:

Thanks to some help from Mary Leszczynski and WordArt.com, I came up with a word cloud for talk titles at INFORMS 2019. This year, “model” showed up 363 times and “data” showed up 420 times. However, if we also account for “models”, the combined tally goes up to 551. Therefore, Shabbir’s observation that model > data also holds for INFORMS 2019.
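The tallying itself is straightforward. Here is a sketch with made-up titles (not the actual INFORMS 2019 program) of how such counts can be computed:

```python
from collections import Counter
import re

# Illustrative only: a few invented talk titles.
titles = [
    "A data-driven model for scheduling",
    "Robust models for data analytics",
    "Learning a model from streaming data",
]

# Lowercase each title, split it into words, and count occurrences.
words = Counter(w for t in titles for w in re.findall(r"[a-z]+", t.lower()))
tally = words["model"] + words["models"]  # combine singular and plural
print(words["data"], tally)  # prints: 3 3
```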


Winter 2019 / 2020 schools on AI, big data, discrete math, machine learning, networks, optimization, and other relevant topics in operations research

This post covers relevant schools happening between October of 2019 and March of 2020. If you know of other schools that are not listed here, please reach out to me. Like in previous semesters, I will keep updating this post and I may add some schools with past deadlines as reference for readers looking for schools in the next years.

Previous posts: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, Winter 2018 / 2019, and Summer 2019.

PS: For schools after March, check the Summer 2020 post.

Fall school of the thematic Einstein semester Algebraic Geometry: Varieties, Polyhedra, Computation
September 30 – October 4     (deadline was September 1st)
Berlin, Germany

The Autumn school on Machine Learning
October 3-11     (deadline: September 30)
Tbilisi, Georgia

Simulation Modelling for Business Research
October 7-10     (deadline was September 8)
Kiel, Germany

InfraTrain (Infrastructure Research and Policy Training) Autumn School 2019 – Sustainable Infrastructure Modeling: Numerical Models and Data Analysis
October 7-11     (deadline was August 31st)
Berlin, Germany
* Included on September 19 by suggestion of Sauleh Siddiqui

5th School on Belief Functions and Their Applications
October 27-31     (deadline was September 15)
Siena, Italy

Winter School on Computational Data Science and Optimization
(Part of the Focus Program on Data Science and Optimization)
November 4-8     (deadline: November 8)
Toronto, ON, Canada
* Included on September 19 by suggestion of Swati Gupta

Khipu: Latin American Meeting in Artificial Intelligence
November 11-15     (deadline was June 28)
Montevideo, Uruguay

Autumn school on Advanced Branch-Cut-and-Price (BCP) Tools: VRPSolver and Coluna
November 21-22     (no deadline posted)
Paris, France
* Included on September 20

Second Nepal AI Winter School 2019
December 10-20     (deadline: September 28)
Pokhara, Nepal

15th Summer School on Discrete Math
January 6-10     (deadline: October 15)
Valparaiso, Chile

Winter School on Machine Learning – WISMAL 2020
January 10-12     (deadline: December 13 for early registration)
Las Palmas de Gran Canaria, Spain

Low-Rank Models 2020: Optimization, approximation, and applications in data science
January 12-17     (deadline: October 1st)
Villars-sur-Ollon, Switzerland

6th International Winter School on Big Data
January 13-17     (deadline: October 2 for early registration)
Ancona, Italy

Winter School on Optimization and Operations Research
January 19-24     (deadline: December 16 for early registration)
Zinal, Switzerland

9th Winter School on Network Optimization
January 20-24     (deadline: October 31st – submit CVs to NetOpt2020@fc.ul.pt)
Estoril, Portugal

Early Summer Schools

Column Generation 2020
May 31 – June 3     (no deadline posted)
Sainte-Adèle, QC, Canada
* Included on September 19 by suggestion of Claudio Contardo

Summer 2019 schools on data analytics, discrete math, machine learning, networks, optimization, and other relevant topics in operations research

This post covers relevant schools happening between April and September of 2019. If you know of other schools that are not listed here, please reach out to me. Like in previous semesters, I will keep updating this post and I may add some schools with past deadlines as reference for readers looking for schools in the next years.

Previous posts: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, and Winter 2018 / 2019.

PS: For schools after September, check the Winter 2019 / 2020 post.

NATCOR Stochastic Modeling
April 1-5     (deadline not posted)
Lancaster, England

The Data Incubator Data Science Fellowship
April 1 – May 24     (deadline not posted)
New York / San Francisco Bay Area / Boston / Washington DC, USA, or online

Rising Stars in Computational and Data Science
(workshop for female graduate students and postdocs)
April 9-10     (deadline: January 22)
Austin, TX, USA

Complex networks: theory, methods, and applications
May 13-17     (deadline: February 17)
Como, Italy

Numerical Analysis Summer School 2019: A Numerical Introduction to Optimal Transport
May 13-17     (deadline: March 15)
Paris, France

IPCO (Integer Programming and Combinatorial Optimization) Summer School
May 20-21     (deadline not posted)
Ann Arbor, MI, USA

2019 Midwest Big Data Summer School for Early Career Researchers
May 20-23     (deadline: March 20 for early registration)
Ames, IA, USA
* Included on February 19

2nd International Summer School on Artificial Intelligence and Games
May 27-31     (deadline not posted)
New York, NY, USA

Simons Institute Workshop: Deep Learning Boot Camp
(preannouncement; details to follow)
May 28-31     (deadline not posted)
Berkeley, CA, USA

Quantum Computing Summer School Fellowship
June 3 – August 9     (deadline: January 18)
Los Alamos, NM, USA

Advanced Process Optimization: Optimization in Biochemical Processes and Practical Optimization Techniques & Tools
June 3-14     (deadline: May 30)
Lyngby, Denmark
* Included on March 11 by suggestion of Richard Oberdieck

Applied Machine Learning Summer Research Fellowship
June 6 – (10-12 weeks)     (deadline: January 3 for first round of applications)
Los Alamos, NM, USA

Numerical Computations: Theory and Algorithms (NUMTA)
June 15-21     (deadline: March 31)
Crotone, Italy

DTU CEE Summer School 2019: Data-Driven Analytics and Optimization for Energy Systems
June 16-21     (deadline: March 15)
Copenhagen, Denmark
* Included on February 19

Summer Institute in Computational Social Science
June 16-29     (deadline: February 20)
Princeton, NJ, USA

Machine Learning Crash Course (MLCC 2019)
June 17-21     (deadline: April 19)
Genova, Italy
* Included on February 19

Finite Geometry & Friends
June 17-21     (deadline not posted)
Brussels, Belgium
* Included on February 19

Summer School on Behavioural Operational Research (BOR)
June 17-21     (deadline: May 15)
Nijmegen, The Netherlands
* Included on April 22

Summer School on Geometric and Algebraic Combinatorics
June 17-28     (deadline: February 27)
Paris, France
* Included on February 19

Gene Golub SIAM Summer School 2019: High Performance Data Analytics
June 17-30     (deadline: February 8)
Aussois, France

17th Annual Wolfram Summer School
June 23 – July 12     (deadline: May 25)
Waltham, MA, USA

Applied Bayesian Statistics Summer School on “Bayesian Demography”
June 24-28     (deadline not posted)
Como, Italy

Eötvös Loránd University Summer School in Mathematics: On the crossroads of topology, geometry and algebra
June 24-28     (deadline: April 30 for early registration)
Budapest, Hungary

Mathematical Optimization of Systems Impacted by Rare, High-Impact Random Events
June 24-28     (deadline not posted)
Providence, RI, USA

Data Science Summer School (DS3)
June 24-28    (deadline: April 26)
Palaiseau, France
* Included on February 19

RISIS Summer School on Data Science for studying Science, Technology and Innovation
June 24-28     (deadline: May 26)
Glasgow, Scotland
* Included on April 30

Summer School on Computer Science
(preannouncement; details to follow)
June 24-30     (deadline not posted)
Novosibirsk, Russia

International School of Mathematics “Guido Stampacchia”: Advances in Nonsmooth Analysis and Optimization (NAO 2019)
June 24 – July 1     (deadline: May 31)
Erice, Italy
* Included on February 19

Institute for Mathematics and its Applications (IMA) Math-to-Industry Boot Camp IV
June 24 – August 2     (deadline: February 28)
Minneapolis, MN, USA
* Included on February 19

1st MINOA PhD school: Mixed-Integer Nonlinear Optimization meets Data Science
June 25-28     (deadline not posted)
Ischia, Italy
* Included on January 16 by suggestion of Andrea Lodi

9th PhD School in Discrete Mathematics
June 30 – July 6     (deadline: May 31)
Rogla, Slovenia

Swedish Summer School in Computer Science (S3CS) 2019: Information Theory in Computer Science and Spectral Graph Theory
June 30 – July 6     (deadline: March 8)
Stockholm, Sweden
* Included on February 19 by suggestion of Ludmila Glinskih

2nd Summer School on Optimization, Big Data and Applications (OBA)
June 30 – July 6     (deadline: March 15)
Veroli, Italy

Summer School on Randomness and Learning in Non-Linear Algebra
July 1-4     (deadline not posted)
Leipzig, Germany
* Included on April 22

Association for Constraint Programming (ACP) Summer School
(preannouncement; details to follow)
July 1-5     (deadline not posted)
Vienna, Austria

Latin-American Summer School in Operations Research (ELAVIO)
(preannouncement; details to follow)
July 1-5     (deadline not posted)
Lleida, Spain

NATCOR Simulation
July 1-5     (deadline not posted)
Leicestershire, England

Random Trees and Graphs Summer School
July 1-5     (deadline: May 5)
Marseille, France

Future of Computing Summer School
July 1-5     (deadline not posted)
Porto, Portugal
* Included on February 19

International Summer School on Deep Learning 2019
July 1-5     (deadline: February 28)
Gdansk, Poland
* Included on February 19

Advanced Course on AI (ACAI): AI for Multi-Agent Worlds
July 1-5     (deadline: May 15 for early registration)
Chania, Greece
* Included on March 4

Eastern European Machine Learning Summer School
July 1-6     (deadline: March 29)
Bucharest, Romania

Reinforcement Learning Summer SCOOL (RLSS)
July 1-12     (deadline: March 15)
Lille, France
* Included on February 19

Satisfiability, Satisfiability Modulo Theories, and Automated Reasoning (SAT/SMT/AR) Summer School
July 3-6     (deadline not posted)
Lisbon, Portugal
* Included on February 19

Vision Understanding and Machine Intelligence (VISUM) Summer School
July 4-12     (deadline: March 22)
Porto, Portugal
* Included on February 19

Applications of Machine Learning and Intelligent Optimization to Tourism and Hospitality: Summer School and Workshop
July 5-9     (deadline: February 28 for regular registration)
Trento, Italy

Gdańsk Summer School of Advanced Science on Algorithms for Discrete Optimization
July 6-12     (deadline: April 30 for early registration)
Gdansk, Poland
* Included on February 19

Southeast Asia Machine Learning School (SEA MLS)
July 8-12     (deadline was April 20)
Depok, Indonesia
* Included on April 22

Nice Summer School on Random Walks and Complex Networks
July 8-19     (deadline: May 1)
Nice, France

Random Graphs and Complex Networks: Structure and Function
July 8-19     (deadline: March 27)
Como, Italy

Lisbon Machine Learning School (LxMLS)
July 11-18     (deadline: March 31)
Lisbon, Portugal

Tsinghua University 2019 Deep Learning Summer School
July 13-26     (deadline: April 15)
Beijing, China
* Included on February 19

2nd Advanced Course on Data Science & Machine Learning (ACDL 2019)
July 15-19     (deadline: March 31 for early registration)
Siena, Italy

Simons Institute Workshop: Frontiers of Deep Learning
(preannouncement; details to follow)
July 15-19     (deadline not posted)
Berkeley, CA, USA

Machine Learning Summer School 2019 – London
July 15-26     (deadline: January 31)
London, England

Choice-Based Optimization (English description here)
July 22-25     (deadline: June 17)
Hamburg, Germany
* Included on March 15 by suggestion of Sven Müller

ICSP (International Conference on Stochastic Programming) PhD School
(preannouncement; details to follow)
July 22-26     (deadline not posted)
Trondheim, Norway

3rd International Summer School on Deep Learning (DeepLearn 2019)
July 22-26     (deadline: March 2 for early registration)
Warsaw, Poland
* Included on February 19

AI Summer School 2019
July 22-26     (deadline: June 3)
* Included on May 28

Kempten International Summer School 2019: Data Science for Everyone
July 22-30     (deadline: April 30)
Kempten, Germany

Deep Learning and Reinforcement Learning (DLRL) Summer School 2019
July 24 – August 2     (deadline: February 15)
Edmonton, AB, Canada

Argonne Training Program on Extreme-Scale Computing
July 28 – August 9     (deadline: March 4)
St. Charles, IL, United States

5th Algorithmic and Enumerative Combinatorics Summer School 2019
July 29 – August 2     (deadline: June 15)
Hagenberg, Austria

ICCOPT (International Conference on Continuous Optimization) Summer School: Large Scale and PDE Constrained Optimization; Optimization and Machine Learning
August 3-4     (deadline not posted)
Berlin, Germany

Machine Learning Research School 2019
August 4-11     (deadline: May 31)
Bangkok, Thailand
* Included on May 28

Foundation of Data Science (FDS) Summer School 2019
August 5-8     (deadline: May 24)
Atlanta, GA, USA
* Included on March 4

Simons Institute Workshop: Emerging Challenges in Deep Learning
(preannouncement; details to follow)
August 5-9     (deadline not posted)
Berkeley, CA, USA

The Cornell, Maryland, Max Planck Pre-doctoral Research School 2019: Emerging Research Trends in Computer Science
August 6-11     (deadline: February 7)
Saarbrücken, Germany

Groups and Graphs, Designs and Dynamics (G2D2) Summer School
August 12-25     (deadline: March 1)
Yichang, China
* Included on February 19

Summer School on Deep Learning and Bayesian Methods (Deep|Bayes)
August 20-25     (deadline: April 15)
Moscow, Russia
* Included on February 19

Machine Learning Summer School 2019 – Moscow
August 26 – September 6     (deadline: April 5)
Moscow, Russia

EURO PhD Summer School on Operational Research for Value-based Health Care
September 1-8     (deadline: February 14)
Lisbon, Portugal

Model Guided Data Science
September 2-6     (deadline not posted)
Como, Italy

Gaussian Process and Uncertainty Quantification Summer School 2019
September 9-12     (deadline: July 1 for early registration)
Sheffield, England
* Included on May 28

NATCOR Combinatorial Optimization
September 9-13     (deadline not posted)
Southampton, England

Collective Intelligence and Big Data Revolution
(preannouncement; details to follow)
September 16-20     (deadline not posted)
Como, Italy

The 2018 Judith Liebman Awardees

(Originally posted in the INFORMS 2018 blog).

The student award reception had a special flavor for me because the three winners of the Judith Liebman Award are friends whom I have known for years, and I witnessed their hard work toward this recognition. This award is given to student volunteers who did excellent work on behalf of INFORMS student chapters and the Subdivisions Council.

Neda Mirzaeian arrived at Carnegie Mellon when I was starting my second term as president of the student chapter. Every time I asked for her help, she delivered beyond expectations. That started with our social events, then new initiatives, and she naturally rose to the presidency of our chapter. When she invited Margaret Brandeau for a talk at CMU last year, she also organized a panel entitled “Women in Academia” featuring Dr. Brandeau along with other female faculty from across campus.

Lauren Steimle was the president of the student chapter at the University of Michigan and also served as the student representative on the Subdivisions Council. As a chapter president, I enthusiastically voted for her after reading her position statement, which showed me that she had a clear vision and bright ideas to make INFORMS better for its student members.

Finally, Carlos Zetina was the driving force behind the creation of, and multiple initiatives developed by, the Montreal Operations Research Student Chapter. MORSC encompasses several universities in Montreal, has an impressive number of members and activities, and is affiliated with both INFORMS and CORS, the Canadian counterpart of INFORMS.

The little engine that could generate INFORMS 2018 talks

(Originally posted in the INFORMS 2018 blog).

There are many ways to contemplate data. Shabbir Ahmed just tweeted a word cloud based on the titles of talks at the INFORMS 2018 Annual Meeting in Phoenix:

In this post, I will use a random walk instead. I wrote a piece of code that traverses talk titles to compute how frequently each word appears first, appears last, or immediately follows another word. This Markovian model can easily generate random titles, but it is arguably not state-of-the-art, so why not use something more elaborate like an LSTM? I wanted something complex enough to make sense while simple enough to generate unexpected outcomes. Otherwise, it would be boring to read them!
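For readers curious about the mechanics, here is a minimal sketch of a first-order Markov title generator of the kind described above. The three seed titles are invented stand-ins, since the actual INFORMS talk data is not public; the real code may well differ.

```python
import random
from collections import defaultdict

# Stand-in data: in the real experiment this was the list of INFORMS 2018 talk titles.
titles = [
    "A Branch-and-bound Algorithm for Vehicle Routing",
    "Stochastic Programming for Energy Systems",
    "A Machine Learning Approach for Vehicle Routing",
]

START, END = "<s>", "</s>"  # sentinel tokens marking where titles begin and end
successors = defaultdict(list)  # word -> list of observed next words

# Traverse each title once, recording every (word, next word) transition;
# repeated transitions appear multiple times, so random.choice samples
# them in proportion to their observed frequency.
for title in titles:
    words = [START] + title.split() + [END]
    for current, nxt in zip(words, words[1:]):
        successors[current].append(nxt)

def generate_title(rng=random):
    """Random-walk the chain from START until END, collecting words."""
    word, out = START, []
    while True:
        word = rng.choice(successors[word])
        if word == END:
            return " ".join(out)
        out.append(word)

print(generate_title())
```

With these seeds the walk can, for instance, splice “A Machine Learning Approach for Energy Systems” out of fragments of different titles, which is exactly the kind of plausible-but-unexpected recombination shown below.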

I believe that I can roughly classify the results that I got into four categories, which in decreasing order of frequency are the following:

  1. Nonsense.
  2. Awkward or boring statements.
  3. Long titles mixing the most commonly used terms (see “The buzzword bingo”).
  4. Somewhat plausible but unexpected titles (see “The interesting talks you will not see at INFORMS”).

The interesting talks you will not see at INFORMS

These are the types of talks that I am looking for when I arrive every year:

  • A Spatial Branch-and-Bound Algorithm via Retrospective Study
  • A Vehicle Routing Problem Becomes My Dishwasher: Selective, Manufacturing
  • Active Surveillance Monitoring of Large SDPs
  • Ambulance Emergency Response by a Social Media Network Component
  • An Integrated Train Timetabling Using Social Networks
  • An Optimization Competition in Retail: Evidence from Conflict Constraints and Implications
  • Asymptotic Optimality and International Gold Price, a Dual Reoptimization
  • Expanding a Dual is Less Irritating: Inducing Fresh Produce Supply Chain Management Science
  • Grounding Frequent Flyers: A Constraint
  • Managing the Expected Utility Models for More Sustainable, Swimming and Energy Bilateral Ratings in Bike Sharing
  • Optimal Cooperative Game: A Machine Learning Teaching Analytics Modeling and Improvement Projects
  • Optimal Solutions Revisited: The Impact Of Nonlinear Programs for Resource Allocation in Dynamic Decentralized Customization With 3d Printing
  • Optimizing Faculty Summer Research on Alibaba
  • Person Name Detection Using the Mean Field ExperimentalEvidence from Bike-sharing Economy on Different Textual Components
  • Predicting and Working Harder: The Impact on Modularization Design for the Optimal Steady State of Repairable Spare Parts Networks
  • Risk-sharing Agreements for Strongly Convex Risk Management in Platforms
  • Robust Contingency Constrained Optimization of Autonomous Driving: Can Voluntary Time-of-use Tariffs be Recommended?
  • Stochastic Analysis of Semidefinite Optimization
  • Stochastic Programming of Relativity
  • Triple-bottom-line Approach to Identify Smoking Status

The buzzword bingo

When the conference is over and I get overwhelmed, this is roughly what I recollect:

  • A Branch-and-price Algorithm for Fault Diagnosis for Using Data Analytics to Preferred Boarding Patients with Reusable Products in Hospitals? Evidence from an Urban Function Selection Under Competition in a Newsvendor Analysis of Gaussians via Accelerated Minibatch Coordinate Descent
  • A Bilevel Framework for Liver Exchange Intelligence and Morel Hazard
  • A Minimum Cost of First-order Optimization for Stochastic Mixed-integer Recourse via Machine Learning
  • A New Algorithm for the FEMA National Airspace Operations Training System Performance in a Medical Knowledge Gradient Descent Method for Extending Drone Assisted Devices and Water Distribution Networks
  • A Novel, Depth, Mixed-integer Programming Approach to Speculate on Hospitals’ Risk Analysis in Hydropower Plants Using Probability Dominance Information Asymmetry and Supply Chain Choice Model for Resources in Humanitarian Supply Chain Performance Trade-offs when Customers
  • Appointment Scheduling of Wind Along the Use of Outliers: Market Sensing from Digital Gamification Systems for Mislabeled Classification for Machine Learning
  • Blockchain Adoption in Continuous-time Markov Decision Making Economic Assessment for New Product Perishability
  • Blockchain can Increase Value of Doubly Stochastic Gradient Descent: A Machine Learning
  • Conditional Gradient Method of Customer Churn Prediction in Online Experiments on Multi-agent Based on Retail: Evidence from Bike-sharing Systems via Uncertainty
  • Dynamic Integer Programming Approach to Robust PCA by Eliminating Payment Models
  • Dual Bounds for Reinforcement Learning Heuristics to AC Optimal Power and Firm Innovation
  • Efficient Computational General Equilibrium Models for the Traveling Salesman Problem for Network Flow Control of Hospital Waiting Time Delay in Matlab
  • Healthcare Plan for the Use of Renewable Electricity and How Frequent Flyers Choose Their Affiliates
  • Lift-and-project Lower Bounds for the Heterogeneous Marginal Price Optimization in Bangladesh: An Empirical Analysis of Innovation in the Service
  • Mothership and Efficiency Investment in Construction of the Gig Economy Workers
  • Optimal Service Plan Model for High Dimensional Covariates and Quality in What We Forget About Blockchain Technology
  • Queueing Design using the Strongest Influence of Personalized Advertising
  • Realizing the Participatory Exploratory Modelling the Hitchcock-Koopmans Problem Solving Generalizations of an Academic Science at the Boston Public Transportation Platform Selection using Discrete Probability Computation of B2C E-commerce Age of Gamification on Detour Distances
  • Second-order Decomposition for Large-scale P2P Ride-Hailing Networks
  • Social Media Network Using Bilevel Mixed-integer Recourse Strategies: Facilitate P2P Platform? Evidence from Ford

Wrapping up

It will be no surprise if you find two or three consecutive words above that you used in your talk. To the best of my ability, I tried to avoid examples that were too similar to any of the talks in the program. I hope that I managed to keep it that way.

Now, why did I call this piece of code an engine? Marketability! The following tweet is from a talk by Rama Ramakrishnan at a seminar in the MIT Operations Research Center:

Last but not least, INFORMS made my life a lot easier by sending me the data I needed. Special thanks to Mary Leszczynski!