My presentation philosophy

There are many things about preparing and presenting slides that I keep repeating to students, so I figured that I should put them down in writing. I do think that they help a lot. (And if I am doing it, why not share it more broadly?)

Tooling:

  • Use PowerPoint if you have it. It is the best and most flexible WYSIWYG tool (“what you see is what you get”).
  • Google Slides is an OK replacement, but alternating between PowerPoint and Google Slides for the sake of sharing files messes up formatting.
  • LaTeX is not WYSIWYG and will lock you into a very peculiar way of doing slides, which I believe is only really suited for mathematicians — or not even them.
  • Fancier tools that automatically create pleasing slides will make you look exactly like everyone else trying to be cool right now — and they will take away your freedom to customize things to the limit.

Templates:

  • It has been about a decade since widescreen (16:9) replaced the narrower 4:3 format. Use the widescreen format.
  • Templates should not crowd the screen with immovable and useless pieces.
  • You do not need more than one line for the slide title, so your title can sit higher by default.
  • You do not need logos and thick bars killing space.
  • Unless you want your name and affiliation showing on every slide (which may be useful in conference presentations), leave the bottom of the slide free to make better use of the space.
  • Add the slide number to help anyone who would like to ask a question at the end.
  • The first slide should have your paper title in a large font, your name in a slightly smaller (but still large) font close to the center, your affiliation in a smaller font right below your name, and, in a smaller font at the bottom, your coauthors and their affiliations.
  • Use official templates if convenient (for Iowa: https://brand.uiowa.edu/template-library), mix the official colors in a way that looks pleasing (for Iowa: https://brand.uiowa.edu/color), and observe how logos should be used — someone spent a lot of time thinking this through (for Iowa: https://brand.uiowa.edu/logo).

Content order and depth:

  • The number of people paying attention will decrease over time. If you are not the first talk in a session, you may need to do something to impress on them the need to pay attention again. It could be a provocative opening slide, a joke, or something else.
  • Start with why your subject is so exciting, and then tell them why it deserves more study.
  • If you don’t get to your contribution fast (within 5 minutes), you lose them.
  • If you get to your contribution before they understand the context, you lose them.
  • If you can mentally anticipate that someone might be thinking something like “that’s not the right way to do this”, address that out loud, so that you keep that person with you.
  • Make it easier for everyone to keep following you (easier is a comparison: you can always iterate one more time). Don’t go crazy with unnecessary details.
  • If you throw a big formulation on a slide, you just lost almost everyone.
  • If you throw a big table with results on a slide, you just lost almost everyone.
  • You may want to “take off” at some point and show something very technical and nuanced. Do that only if there are one or a few people in the audience worth impressing, even at the cost of alienating everyone else. You had better do it towards the end, since some people in the audience may “forgive you” and keep following along. (At INFORMS in 2012, I decided to prove a theorem in a session about applications because the chair was a CMU professor and I wanted very badly to get into CMU.)
  • You do not end by claiming that you won and that you beat everyone else who ever tried to solve the same problem: you end by showing that you found a promising way of approaching the problem, and that this helps you understand it better.
  • If you can also find the limitations of your work, such as where your approach loses to someone else’s, that communicates a lot more value.
  • Scholarship is about learning and exchanging. We win together as a community.
  • The last slide should let the audience get in touch with you and / or find more information about your work: a list of related papers (yours, not the whole literature on the topic), a QR code for a preprint or paper, a QR code for your LinkedIn profile, social media handles, a personal website, etc.
  • Stop at the slide with your papers and contact information and stay there. Leave it up for as long as you can, so that the audience can take a picture or open the QR code.

Slide content:

  • It is almost never a good idea to use a font smaller than 28 points. You are preparing slides, not writing a paper.
  • Think of every line and every bullet point as the thing that you should be talking about next. What you put in the lines and bullet points is just a very condensed version of what you are saying, which helps you cover what is important and lets people get the gist if they miss you for a short while.
  • You don’t want the audience to be reading the slides instead of paying attention to what you say, so use animations (in PowerPoint) to control when every little thing on the slide will appear — and in what order.
  • If you show an image, that takes some processing time too. Don’t put a cartoon with speech bubbles as the first thing to appear on a slide — unless the slide is meant as a break for you to slow down and drink some water, and the cartoon has a connection worth taking that risk for.
  • GIFs, memes, and popular culture props are awesome — if they help dramatize the point that you are trying to make. But be mindful of how they may end up stereotyping specific groups, or how your choices may consistently prioritize one group over others.
  • Plots may deserve their own presentation version, which may differ from how they look in your paper. A poor copy-paste from the paper reflects poorly on you.
  • When you show a plot, you need to tell your audience how to read it.
  • Do not present something that you have not prepared yourself, or at least thoroughly revised yourself, or at the very least practiced enough times to get the gist of it. Do not trust AI to do anything that you will not perfectly understand before using it.

Practicing it:

  • Practice your talk enough times that you know it by heart.
  • In the worst case, you can just repeat that memorization word for word. If you do, try to make it look as natural as you can: put some pauses and emotion into it.
  • A better approach is to break away from the memorized script by paying attention to the audience, improvising, and finding better ways of expressing the idea on the spot. In other words, make it look natural by making it actually natural. But it takes practice to make it sound as if you have not practiced it!

Remember George Orwell:

i. Never use a metaphor, simile or other figure of speech which you are used to seeing in print. [don’t do something just because other people are doing it — keep things simple and direct]
ii. Never use a long word where a short one will do. [make it easy to understand]
iii. If it is possible to cut a word out, always cut it out. [make it short and to the point]
iv. Never use the passive where you can use the active. [keep language simple to read]
v. Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent. [nobody cares for the complex terms, the long formulations, etc]
vi. Break any of these rules sooner than say anything outright barbarous. [memorize the script only to break away from it; and do not take anything that I am saying as unbreakable: follow your instincts and create your own style]

Summer 2026 schools on artificial intelligence, data science, machine learning, and operations research

The purpose of organizing this list is to help graduate students find a summer school to gain skills related to operations research. Why? Because students are often unaware of these opportunities, whereas faculty are constantly bombarded with such announcements. And I cannot think of a better way to serve my academic community than doing this.

Some opportunities are targeted at undergraduate students (they are marked). If you are not yet a PhD student but the topics of these schools interest you, read more about applying to do a PhD with me or with my colleagues at the University of Iowa.

For this post, a summer school is an event that matches a reasonable number of the following criteria: (1) typically lasting from a couple of days to a couple of weeks; (2) not-for-profit organization; (3) some of the presenters are academics and not local; (4) the event is relevant to students interested in operations research; (5) the main activity is not having the participants present papers; (6) registration fees are reasonably priced for a student audience; and (7) the event happens between March and September of 2026:

European Laboratory for Learning and Intelligent Systems (ELLIS) Winter School 2026: AI for Earth System, Hazards & Climate Extremes
March 16-20 (deadline was January 2)
Athens, Greece

Sustainable Life-Cycle of Intelligent Socio-Technical Systems (SAIL) Spring School: Resilience and AI
March 17-19 (deadline: January 5)
Paderborn, Germany

Generative Modeling Spring School
March 23-27 (deadline: January 23)
London, England
* Included on January 15

European Laboratory for Learning and Intelligent Systems (ELLIS) Winter School on Foundation Models (FoMo) 2026
March 24-27 (deadline was December 14)
Amsterdam, The Netherlands

Simons Institute Workshop: Theoretical Foundations: From the Early Days of Neural Networks to the Modern Deep Learning Era
April 9-10 (deadline not posted)
Berkeley, CA, United States

International Symposium on Combinatorial Optimization (ISCO) Spring School: Packing and Covering
May 4-5 (deadline: March 1 for early registration)
Kuşadası, Turkey

AI Plus Management Doctoral Consortium
May 7-8 (deadline: January 31)
London, England
* Included on January 15

Technical University of Denmark (DTU) Power and Energy Systems (PES) Summer School 2026
May 18-22 (deadline: January 30)
Copenhagen, Denmark

Complex Networks: Theory, Methods, and Applications (early announcement)
May 18-22 (deadline not posted)
Como, Italy

Simons Institute Workshop: The Role of TCS in Modern Machine Learning
May 26-29 (deadline not posted)
Berkeley, CA, United States

23rd International Summer School for Advanced Studies on Biometrics, Behavior and Vision: Human Interactions and Large Foundation Models
June 8-12 (deadline: February 15)
Alghero, Italy

LANL Quantum Computing Summer School Fellowship
June 8 – August 14 (deadline: January 11)
Los Alamos, NM, United States

IPCO (Integer Programming and Combinatorial Optimization) 2026 Summer School
June 15-16 (deadline not posted)
Padova, Italy

Artificial Intelligence and Games: 8th International Summer School
June 15-19 (deadline not posted)
Leiden, The Netherlands

NATCOR Optimization Under Uncertainty
June 15-19 (deadline not posted)
Edinburgh, Scotland

Machine Learning Crash Course 2026
June 15-19 (deadline: March 1)
Genova, Italy
* Included on January 16

Robotics, Perception and Learning (RPL) Summer School
June 21-26 (deadline: January 30)
Stockholm, Sweden

Numerical Computations: Theory and Algorithms (NUMTA) International Conference and Summer School
June 21-27 (deadline: April 1 for early registration)
Calabria, Italy

ICAPS (International Conference on Automated Planning and Scheduling) 2026 Summer School
June 22-25 (deadline not posted)
Dublin, Ireland

12th ScaDS.AI International Summer School on AI and Big Data: Neuro+Symbolic AI
June 22-26 (deadline not posted)
Leipzig, Germany
* Included on February 24 by suggestion of Filippo De Bortoli

PCMI 2026 Graduate Summer School: Knotted Surfaces in Four-Manifolds
June 28 – July 18 (deadline: January 15)
Park City, UT, United States

Machine Learning Summer School on Reliability and Safety (MLSS^RS 2026)
June – July (dates to be defined)
Kraków, Poland

24th Annual Wolfram Summer School
June – July (dates to be defined)
Waltham, MA, United States
[for undergraduate students]

2nd VeRoLog (EURO Working Group on Vehicle Routing and Logistics Optimization) PhD School (early announcement)
July 3-6 (deadline not posted)
Bath, England

EUROPT Conference on Advances in Continuous Optimization (EUROPT) Summer School 2026 (early announcement)
July 6-7 (deadline not posted)
Linz, Austria

The European Summer School on Artificial Intelligence (ESSAI)
July 6-10 (deadline not posted)
Vienna, Austria

Czech Summer School on Discrete Mathematics
July 6-10 (deadline: March 1)
Prague, Czech Republic
* Included on January 15

Applied Bayesian Statistics Summer School on Interpretable Bayesian Learning for Physical and Engineering Sciences (early announcement)
July 6-10 (deadline not posted)
Como, Italy

European Laboratory for Learning and Intelligent Systems (ELLIS) Summer School on Machine Learning for Healthcare and Biology
July 7-9 (deadline not posted)
Manchester, England

European Association for Data Science (EuADS) Summer School 2026: Data Science and Economics (early announcement)
July 8 – July 10 (deadline not posted)
Luxembourg

Bocconi & StatML Summer School in Advanced Statistics and Probability: 2026 Edition on Causality (early announcement)
July 8-17 (deadline not posted)
Como, Italy

Satisfiability (SAT), Satisfiability Modulo Theories (SMT), and Automated Reasoning (AR) Summer School
July 13-17 (deadline not posted)
Lisbon, Portugal

British Machine Vision Association (BMVA) Computer Vision Summer School
July 13-17 (deadline: May 31 for early registration)
Durham, England

13th International Summer School on Deep Learning (DeepLearn 2026)
July 20-24 (deadline: January 5 for early registration)
Orléans, France

Argonne Training Program on Extreme-Scale Computing (ATPESC 2026)
July 26 – August 7 (deadline not posted)
St. Charles, IL, United States

Gene Golub SIAM Summer School: Fault-Tolerant Algorithms in Quantum Computing
July 27 – August 7 (deadline: January 31)
Durham, NC, United States

Cambridge Ellis Unit Summer School 2026 (early announcement)
July (dates not defined)
Cambridge, England

International Conference on Bilevel Optimization (ICBO) 2026 (paywalled early announcement)
August 2 (deadline not posted)
Pittsburgh, PA, United States

ProbAI — Probabilistic AI School 2026
August 3-7 (deadline not posted)
Vilnius, Lithuania
* Included on January 15 by suggestion of Eliezer de Souza da Silva

NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) Summer School 2026
August 3-7 (deadline: February 9)
Boston, MA, United States
* Included on January 16

NATCOR Multiple Criteria Decision Making (MCDM)
September 7-11 (deadline not posted)
Portsmouth, England

EURO Summer/Winter Institute (ESWI) 2026: Data Driven Healthcare Under Uncertainty
September 20 – October 2
Aachen, Germany

Disclaimer

I do not vouch that all the information in this post is or will remain accurate, but I would appreciate it if you let me know when something is wrong. Moreover, if you know of other schools that are not listed here, please reach out to me ([first]-[last]@uiowa.edu). As in previous semesters, I will keep updating this post, and I may add some schools with past deadlines as a reference for readers looking for schools in future years. Here are the previous posts for reference: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, Winter 2018 / 2019, Summer 2019, Winter 2019 / 2020, Summer 2020, (pandemic hiatus), Summer 2025, and Winter 2025 / 2026.

Winter 2025 / 2026 schools on artificial intelligence, data science, machine learning, optimization, and other relevant topics in operations research

Following up on a recent post about Summer 2025 schools, here is a new list. The purpose of organizing this is to help graduate students find a summer school to gain skills related to operations research. However, some opportunities are targeted at undergraduate students (they are marked). For this post, a summer school is an event that matches a reasonable number of the following criteria: (1) typically lasting from a couple of days to a couple of weeks; (2) not-for-profit organization; (3) some of the presenters are academics and not local; (4) the event is relevant to students interested in operations research; (5) the main activity is not having the participants present papers; (6) registration fees are reasonably priced for a student audience; and (7) the event happens between October of 2025 and February of 2026. And here they are:

(PS: For schools after February 2026, check the Summer 2026 post)

7th School on Belief Functions and their Applications (BFTA 2025)
October 19-23 (deadline was September 15)
Granada, Spain

Winter School on Causality and Explainable AI
October 20-24 (deadline: October 1)
Paris, France

LATAM School of Artificial Intelligence 2025
October 27-30 (deadline not posted)
Lima, Peru

Optimisation and Planning Summer School 2025
November 3-7 (deadline was July 31)
Melbourne, Australia

Nordic Winter School on Advanced Stochastic Optimization
December 1-5 (deadline was September 15)
Trondheim, Norway

São Paulo School of Advanced Science in Systems Change and Sustainability
December 8-17 (deadline was August 15)
São Paulo, Brazil

6th Annual Nepal AI School
December 29 – January 8 (deadline: October 16)
Kathmandu, Nepal

Northern Lights Deep Learning Winter School 2026
January 5-9 (deadline: October 3)
Tromsø, Norway

21st Summer School on Discrete Math
January 5-9 (deadline: October 17)
Valparaíso, Chile

2nd Dolomites Winter School: Mean-field systems in finance, neurosciences and AI
January 11-16 (deadline was September 15)
Folgarida, Italy

Zinal Winter School on Data Science, Optimization and Operations Research
January 18-23 (deadline not posted)
Zinal, Switzerland

12th Winter School on Network Optimization
January 19-23 (deadline: October 15)
Estoril, Portugal

The Middle East and North Africa Machine Learning (MenaML) Winter School
January 24-29 (deadline was September 18)
Thuwal, Saudi Arabia

Swiss Winter School on Theoretical Computer Science
January 25-30 (deadline: October 24)
Zinal, Switzerland

MLSS Melbourne 2026: The Future of AI Beyond LLMs
February 2-13 (deadline was August 31)
Melbourne, Australia

51st International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM) PhD School: Introduction to Algorithms with Predictions
February 9 (deadline not posted)
Kraków, Poland

MBZUAI Machine Learning Winter School 2026: Representation Learning & GenAI
February 9-13 (deadline: October 10)
Abu Dhabi, United Arab Emirates

10th AIROYoung Workshop and Ph.D. School
February 9-13 (deadline: December 12)
Padova, Italy
* Included on September 30 by suggestion of Alice Raffaele

Disclaimer

I do not vouch that all the information in this post is or will remain accurate, but I would appreciate it if you let me know when something is wrong. Moreover, if you know of other schools that are not listed here, please reach out to me ([first]-[last]@uiowa.edu). As in previous semesters, I will keep updating this post, and I may add some schools with past deadlines as a reference for readers looking for schools in future years. Here are the previous posts for reference: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, Winter 2018 / 2019, Summer 2019, Winter 2019 / 2020, Summer 2020, and Summer 2025.

Summer 2025 schools on algorithms, machine learning, mathematics, optimization, and other relevant topics in operations research

After a 5-year hiatus and finding out that this was helpful to many people, I am reviving the list of schools. The purpose of organizing this is to help graduate students find a summer school to gain skills related to operations research. However, some opportunities are targeted at undergraduate students (they are marked). For this post, a summer school is an event that matches a reasonable number of the following criteria: (1) typically lasting from a couple of days to a couple of weeks; (2) not-for-profit organization; (3) some of the presenters are academics and not local; (4) the event is relevant to students interested in operations research; (5) the main activity is not having the participants present papers; (6) registration fees are reasonably priced for a student audience; and (7) the event happens between March and September of 2025. And here they are:

(PS: For schools after September 2025, check the Winter 2025 / 2026 post)

Spring School and Workshop on Variational Analysis and Optimization 2025
March 10-15 (deadline: February 16)
Hanoi, Vietnam

NATCOR Stochastic Modelling
March 31 – April 4 (deadline not posted)
Lancaster, England

GIAN Course on Variational Analysis and Structure in Descent Systems and Optimization
April 14-18 (deadline: February 16)
Varanasi, India

PhD Summer School on Artificial Intelligence and Law 2025
May 12-16 (deadline was March 3)
Tübingen, Germany
* Included on March 7

Complex Networks: Theory, Methods, and Applications
May 12-16 (deadline: February 16)
Como, Italy

DTU PES Summer School: Future Energy Systems: Leveraging Advanced Optimization, Data Sharing and AI for Market Design and Operation
May 18-23 (deadline was January 17)
Copenhagen, Denmark

2025 School on Column Generation
May 19-23 (deadline: March 3)
Paris, France

Modeling and Simulation with PDEs: Undergraduate Summer School
May 19-30 (deadline: February 14)
College Station, TX, United States
[for undergraduate students]

Healthcare Operational Research (HOpeR) Graduate School
May 26-30 (deadline: February 28)
Montreal, Canada
* Included on February 13

2025 Mixed Integer Programming (MIP) Workshop Summer School
June 2 (deadline not posted)
Minneapolis, MN, United States
* Included on March 27 by suggestion of Aleksandr Kazachkov

LANL Quantum Computing Summer School Fellowship
June 2 – August 8 (deadline was January 19)
Los Alamos, NM, United States

2025 Summer School on Machine Learning for Healthcare and Biology
June 3-5 (deadline: April 7)
Manchester, England
* Included on March 7

IPCO (Integer Programming and Combinatorial Optimization) Summer School
June 9-10 (deadline not posted)
Baltimore, MD, United States

PhD Summer School on Operations and Supply Chain Management
June 9-13 (deadline: March 23)
Liverpool, England
* Included on March 7

1st VeRoLog (EURO Working Group on Vehicle Routing and Logistics Optimization) PhD School
June 13-16 (deadline: February 28)
Trento, Italy
* Included on February 7 by suggestion of Alice Raffaele

Large Scale Optimization Summer School 2025
June 16-21 (deadline not posted)
Mumbai, India
* Included on March 27

17th Machine Learning and Advanced Statistics Summer School
June 16-27 (deadline: May 27 for early registration)
Madrid, Spain
* Included on May 24

23rd Annual Wolfram Summer School
June 22 – July 11 (deadline not posted)
Waltham, MA, United States
[for undergraduate students]

Artificial Intelligence and Games: 7th International Summer School
June 23-27 (deadline: March 1 for early registration)
Malmö, Sweden

PhD Course: Theoretical Foundations of Machine Learning (TFML)
June 23-27 (deadline: March 30)
Genoa, Italy
* Included on March 7

EURO PhD School on Optimization and Artificial Intelligence in Agriculture
June 26 – July 4 (deadline: February 18)
Lleida, Spain

EUROPT Summer School 2025
June 27-29
Southampton, England
* Included on September 22 (for future reference)

INFORMS Applied Probability Society (APS) Summer School
June 30 – July 3 (deadline not posted)
Atlanta, GA, United States
* Included on February 9 by suggestion of Siva Theja Maguluri

Summer School on Algorithms, Dynamics, and Information Flow in Networks (ADYN)
June 30 – July 4 (deadline not posted)
Dortmund, Germany
* Included on May 24

Machine Learning Summer School on Drug and Materials Discovery (MLSS^D 2025)
July 1-6 (deadline: April 19)
Kraków, Poland

PCMI 2025 Graduate Summer School: Extremal and Probabilistic Combinatorics
July 6-26 (deadline was January 15)
Park City, UT, United States

Hi! PARIS Summer School on AI & Data for Science, Business and Society
July 7-10 (deadline: June 27)
Palaiseau, France
* Included on May 24

NATCOR Simulation
July 7-11 (deadline not posted)
Southampton, England

AI for Health Summer School
July 7-11 (deadline: April 30 for early registration)
Lausanne, Switzerland
* Included on March 7

London Geometry and Machine Learning (LOGML) Summer School 2025
July 7-11 (deadline: April 6)
London, England
* Included on March 24

European Association for Data Science (EuADS) Summer School 2025: Automated Data Science
July 8 – July 11 (deadline was May 20)
Luxembourg
* Included on May 24

Cooperative AI Summer School 2025
July 9-13
Marlow, England
* Included on September 22 (for future reference)

Cambridge Ellis Unit Summer School on Probabilistic Machine Learning 2025
July 14-18 (deadline not posted)
Cambridge, England
* Included on March 7

LxMLS 2025 – 15th Lisbon Machine Learning Summer School
July 19-25 (deadline: April 28)
Lisbon, Portugal
* Included on March 7

International Conference on Continuous Optimization (ICCOPT 2025) Summer School
July 19-20 (deadline not posted)
Los Angeles, CA, United States

12th International Summer School on Deep Learning (DeepLearn 2025)
July 21-27 (deadline: February 21 for early registration)
Porto, Portugal

Argonne Training Program on Extreme-Scale Computing (ATPESC 2025)
July 27 – August 8 (deadline: March 5)
St. Charles, IL, United States

The Cornell, Maryland, Max Planck Pre-doctoral Research School in Computer Science
July 28 – August 2 (deadline: February 15)
Saarbrücken, Germany
[for undergraduate students]

Machine Learning Summer School (MLSS)
August 2-13 (deadline: March 20)
Arequipa, Peru

Brin Mathematics Research Center: Scientific Machine Learning
August 4-8 (deadline: March 1)
College Park, MD, United States
[for undergraduate students]

Laboratory for Optimization of Renewable Electric gRids (LORER) Summer School
August 4-8 (deadline: February 28)
Montreal, Canada
* Included on February 7 by suggestion of Cheng Guo

Trustworthy AI: Secure and Safe Foundation Models
August 4-8 (deadline not posted)
Saarbrücken, Germany
* Included on March 7

Satisfiability (SAT), Satisfiability Modulo Theories (SMT), and Automated Reasoning (AR) Summer School
August 6-8 (deadline: May 31)
St Andrews, Scotland
* Included on May 24

Simons Institute Workshop: Graph Learning Meets Theoretical Computer Science
August 11-15 (deadline not posted)
Berkeley, CA, United States

Gene Golub SIAM Summer School: Frontiers in Multidimensional Pattern Formation
August 11-26 (deadline: March 1)
Montreal, Canada

25th Max Planck Advanced Course on the Foundations of Computer Science (ADFOCS): Graph Decompositions and Efficient Algorithms
August 18-22 (deadline: July 14)
Saarbrücken, Germany
* Included on May 24

Summer School on Numerical Analysis
August 19-21 (deadline: July 1)
Jena, Germany

ACP Summer School 2025: Constraint Programming for Sustainable Development
August 25-29 (deadline: June 23)
Ouidah, Benin

2nd EURO PhD School: Data Science Meets Combinatorial Optimisation
August 25-29 (deadline not posted)
Eindhoven, The Netherlands
* Included on April 6

Summer School on Nonlinear Optimization and Combinatorics
August 28 – September 2 (deadline: June 20)
Braunschweig, Germany
* Included on May 24

NATCOR Combinatorial Optimization
September 1-5 (deadline not posted)
Bath, England

ELLIS Summer School: AI for Earth and Climate Sciences
September 1-5 (deadline: March 31 for first round of applications)
Jena, Germany
* Included on March 7

PRAIRIE / MIAI Artificial Intelligence Summer School (PAISS)
September 1-5
Grenoble, France
* Included on September 22 (for future reference)

CWI PhD School: Machine Learning and Optimization
September 2-4
Amsterdam, Netherlands
* Included on September 22 (for future reference)

Mediterranean Machine Learning Summer School
September 8-12 (deadline was March 28)
Split, Croatia
* Included on April 18

Summer School on Multimodal Foundation Models and Generative AI
September 8-12
Rabat, Morocco
* Included on September 22 (for future reference)

NATCOR Forecasting & Predictive Analytics
September 15-19 (deadline not posted)
Lancaster, England

Nordic PhD Course in Stochastic Programming 2025
September 22-26 (deadline not posted)
Bergen, Norway
* Included on April 2

Winter schools (early announcement)

ICAPS (International Conference on Automated Planning and Scheduling) Planning and Optimisation Summer School
November (details not posted yet)
Melbourne, Australia

Disclaimer

I do not vouch that all the information in this post is or will remain accurate, but I would appreciate it if you let me know when something is wrong. Moreover, if you know of other schools that are not listed here, please reach out to me ([first]-[last]@uiowa.edu). As in previous semesters, I will keep updating this post, and I may add some schools with past deadlines as a reference for readers looking for schools in future years. Here are the previous posts for reference: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, Winter 2018 / 2019, Summer 2019, Winter 2019 / 2020, and Summer 2020.

A glimpse at winning the INFORMS Undergraduate Operations Research Prize: Kayla Cummings (2018) and Wes Gurnee (2020)

This fall I had the pleasure of hosting two winners of the INFORMS Undergraduate Operations Research Prize in my Prescriptive Analytics course at Bucknell University (ANOP 370). These talks are now available online.

Kayla Cummings (2018 winner) talked about vaccine pricing: https://mediaspace.bucknell.edu/media/CDC+as+a+Strategic+Agent+in+Public+Sector+Vaccine+Pricing+-+Kayla+Cummings%2C+MIT%2C+09+30+2021/1_pmu631vu/185503823

Wes Gurnee (2020 winner) talked about fairness-optimized political districts: https://mediaspace.bucknell.edu/media/Fairmandering+Generating+Fairness+optimized+Political+Districts+-+Wes+Gurnee%2C+MIT%2C+11+11+2021/1_yu6gcqsm/185503823

Cummings has since extended her work to also analyze the pricing of COVID-19 vaccines. 

I found it very interesting when she mentioned that the hardest part of this extension was finding the right data, which took them 6 months but only shows up in the appendix of the paper.

You can find more comments on her talk in the thread that I posted on Twitter during her presentation.

Gurnee emphasized the benefits of decompositions in optimization and of leveraging symmetries in how districts are partitioned in order to make the algorithm scalable.

I also liked the comments about how challenging it is to sell ideas like this to organizations.

You can find more comments on his talk in the thread that I posted on Twitter during his presentation.

Both of them are currently pursuing their PhDs at the Massachusetts Institute of Technology’s Operations Research Center.

Whoever hires them when they finish will be very lucky: it was a pleasure to learn about their work when judging the award, and a blast to bring them to talk about it now!

Carolyn Mooney’s talk at Bucknell — “Decisions as Code: Systems Thinking, Optimization, and Computer Science”

During her recent visit to Bucknell, Carolyn Mooney from nextmv made an excellent case for rethinking the software used for automating decisions.

Her talk is now available online in the following link:
Decisions as Code: Systems Thinking, Optimization, and Computer Science

Here are some of my takeaways from her talk:

  • Current tools require a long development cycle with a multi-disciplinary team including experts who may not be among the first hires of a startup.
  • These tools may be difficult to integrate due to the misalignment with contemporary software engineering practices.
  • The outcomes of these tools may be hard to interpret and test, in particular if they are not fully reproducible.
  • Optimization experts may be oblivious to the fact that the word “constraint” carries a very negative connotation for non-experts.
  • Representing optimization problems in conventional matrix notation creates yet another barrier for developers who are not experts; it is more intuitive to instead model states and transitions through decision diagrams.
  • I love the idea of using “decision engineer” as the role of a software engineer who understands optimization. In fact, Matt Turck claims that a single trend is encapsulated by big data (2013), machine learning and artificial intelligence (2017), and automation (2020).
  • All of the points above are even more crucial given that most data science projects never make it into production: they are developed but end up not being used in practice by the companies.

The following Twitter thread contains my notes during the talk:

When do you need mathematical optimization instead of machine learning?

(Also posted on LinkedIn)

Here are some points that I have shared with my Prescriptive Analytics students at Bucknell University based on a piece by Gurobi Optimization CEO Ed Rothberg for Forbes:

1) Many applications of machine learning tend to be consumer-facing, whereas mathematical optimization is usually applied within businesses to seamlessly optimize their processes without consumers being aware of it.

2) Mathematical optimization can be used to develop a “digital twin” of some organizational processes in situations where the number of possible solutions could be very large and subject to changes, especially in situations where patterns from historical data could be disrupted while the process in the organization remains the same.

3) The initial time and effort required from stakeholders to build a mathematical optimization model can be greater than that of a machine learning model, since these models require a deeper understanding of the business processes involved.

4) Machine learning can leverage what is called “big data” to learn from the past and make predictions about the future, but it is vulnerable to model drift: the predictions become less accurate if, for example, there are changes in the patterns that were previously observed in the data.

My neural network is a piecewise linear regression, but which one? (A glimpse of our AAAI-20 paper on Empirical Bounds)

One of the most beautiful things about the theory of computation is, in my opinion, how you start from something as simple and intuitive as a finite-state machine and then – with a few more bells and whistles – end up with something as complex and powerful as a Turing machine. When someone mockingly says that machine learning or deep learning is just regression (often implying merely linear regression), they are missing a jump in complexity similar to the one from finite-state machines to Turing machines.

In this post, we will look at rectifier networks: one of the simplest yet most expressive types of artificial neural network. A rectifier network is made of Rectified Linear Units, or ReLUs, and each ReLU defines a linear function on its inputs that is then composed with a non-linear function taking the maximum of 0 and that linear function. If the output is positive, we say that the input has activated the ReLU. At the microscopic level, one would be correct to think that adjusting the coefficients of the linear function associated with each ReLU during training is akin to linear regression (and at the same time this person could be puzzled that the negative outputs produced when a ReLU is not activated are thrown away).
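
As a minimal sketch of that definition (plain Python, no ML library; the weights below are hypothetical), a single ReLU composes a linear function of the inputs with max(0, ·):

```python
# A single ReLU: a linear function of the inputs composed with max(0, .).
def relu_unit(weights, bias, x):
    pre_activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return max(0.0, pre_activation)  # the unit is "activated" when this is positive

# Activated for this input: 1*2.0 + (-2)*0.5 + 0.5 = 1.5 > 0.
print(relu_unit([1.0, -2.0], 0.5, [2.0, 0.5]))  # 1.5
# Not activated here: the negative pre-activation -1.5 is clipped to 0.
print(relu_unit([1.0, -2.0], 0.5, [0.0, 1.0]))  # 0.0
```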

[Figure: ReLU]

However, what happens at the macroscopic level when many ReLUs are put together is very different. If we were to combine units defining just linear functions, we would get a single linear function as a result. But as we turn negative outputs into zeroes, we obtain different linear functions for different inputs, and consequently the neural network models a piecewise linear function instead. That alone implies a big jump in complexity. Intuitively, we may guess that having a function with more pieces would make it easier to fit the training data. According to our work so far, that seems correct.
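
A toy illustration of that jump (one-dimensional input, hypothetical weights chosen only to show the effect): two ReLUs in a single layer already yield a function with three linear pieces, each piece determined by which units are activated.

```python
def relu(z):
    return max(0.0, z)

# A tiny network: two ReLUs on a scalar input, summed at the output.
def net(x):
    return relu(x - 1.0) + relu(-x - 1.0)

# The slope changes at the breakpoints x = -1 and x = 1:
# slope -1 for x < -1, slope 0 in between, slope +1 for x > 1.
for x in [-2.0, 0.0, 2.0]:
    print(x, net(x))  # -2.0 -> 1.0, 0.0 -> 0.0, 2.0 -> 1.0
```

Replacing `relu` with the identity would collapse everything back into one linear function; it is the clipping at zero that creates the pieces.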

[Figure: a piecewise linear function]

 

Here is what we found out in prior work published at ICML 2018:

  1. The maximum number of these pieces, which are also called linear regions, may grow exponentially in the depth of the neural network. By extending prior results along the same lines, in which a zigzagging function is modeled with the network, we have shown that a neural network with one-dimensional input and L layers having n ReLUs each can define a maximum of (n+1)^L linear regions instead of n^L.
    [Figure: lower bound]
  2. Surprisingly, however, too much depth may also hurt the number of linear regions. If we distribute 60 ReLUs uniformly among 1 to 6 layers, the best upper bound on the number of linear regions is attained by different depths depending on the size of the input. Because this upper bound is tight for a single layer, a shallow network may define more linear regions if the input is sufficiently large.
    [Figure: upper bounds]
  3. If no layer in a small neural network trained on the MNIST dataset is too narrow (3 units or less), we found that the accuracy of the network relates to the number of linear regions. However, it takes a long time to count linear regions even in small neural networks (for example, sometimes almost a week with just 32 ReLUs).
    [Figure: accuracy vs. number of linear regions]

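For the one-dimensional case in item 1, the improvement is easy to tabulate (a small script; the formulas n^L and (n+1)^L are the bounds discussed above for 1-D input):

```python
# Maximum number of linear regions for 1-D input, L layers of n ReLUs each:
# prior bound n^L versus the improved bound (n+1)^L.
def prior_bound(n, L):
    return n ** L

def improved_bound(n, L):
    return (n + 1) ** L

n = 10
for L in range(1, 5):
    print(L, prior_bound(n, L), improved_bound(n, L))
# With n = 10, at depth L = 3 the bound improves from 1000 to 1331.
```
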
In the paper that we are presenting at AAAI 2020, we describe a fast estimate of the number of linear regions (i.e., pieces of the piecewise linear function) that could be used to measure larger neural networks. We can map inputs to outputs of a rectifier network using a mixed-integer linear programming (MILP) formulation that also includes binary (0-1) variables denoting which units are activated or not by different values for the input. Every linear region corresponds to a different vector of binary values denoting which units are activated, so we just need to count all binary vectors that are feasible. The formulation below connects inputs to the output of a single ReLU.
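
One standard way to write such a formulation for a single ReLU y = max(0, wᵀx + b) introduces a binary variable z indicating activation and big-M bounds M⁺, M⁻ on the positive and negative range of the pre-activation (this is the usual big-M encoding; the exact form and constants in the paper may differ in details):

```latex
w^\top x + b = y - s, \qquad
0 \le y \le M^{+} z, \qquad
0 \le s \le M^{-} (1 - z), \qquad
z \in \{0, 1\}
```

When z = 1 the slack s is forced to zero and the output y equals the pre-activation; when z = 0 the output y is forced to zero and the slack absorbs the negative pre-activation.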

[Figure: MILP formulation of a single ReLU]

What makes counting so slow – besides the fact that the number of linear regions grows very fast with the size of the neural network – is that there is little research on counting solutions of MILP formulations. And that is justifiable: research and development in operations research (OR) methodologies such as MILP focus on finding a good (and preferably optimal) solution as fast as possible. Meanwhile, counting solutions of propositional satisfiability (SAT) formulas, which involve only Boolean variables, is a widely explored topic in artificial intelligence (AI). For example, we can test multiple times how many additional clauses it would take to make a SAT formula unsatisfiable and then draw conclusions about the total number of solutions based on that. There is no reason why the same methods could not be applied to count solutions over the binary variables of MILP formulations.

[Figure: approximate model counting for SAT]

In other words, we started with an AI problem of counting linear regions, moved to an OR problem of counting solutions of an MILP formulation, and came back to AI to use approximate model counting methods that are typically applied to SAT formulas. However, there is something special about MILP solvers that can make approximate counting more efficient: callback functions. Most of the SAT literature is based on the assumption that testing the satisfiability of a formula with every set of additional clauses requires solving that formula from scratch. With a lazy cut callback in an MILP solver, we can add more constraints as soon as a new solution is found. Consequently, we reduce the number of times that we have to prove that a formulation is infeasible, which is computationally very expensive.
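
The enumeration idea behind exact counting can be sketched solver-agnostically: repeatedly find a feasible activation vector, record it, and add a "no-good" cut excluding it. In the toy version below, one MILP solve is replaced by a scan over a hypothetical set of realizable patterns; a real implementation would add the cut inside the solver's lazy-cut callback instead of re-solving from scratch.

```python
# Feasibility "oracle": a stand-in for one MILP solve. In the real setting,
# this would ask the solver for any activation vector not yet excluded.
def find_feasible(feasible_set, excluded):
    for v in sorted(feasible_set):  # sorted only for determinism
        if v not in excluded:
            return v
    return None

def count_solutions(feasible_set):
    # Solve-and-cut loop: each found vector is excluded by a "no-good" cut
    # before the next solve; one final infeasibility proof ends the count.
    excluded, count = set(), 0
    while True:
        v = find_feasible(feasible_set, excluded)
        if v is None:
            return count
        count += 1
        excluded.add(v)  # the no-good cut

# Hypothetical realizable activation patterns of a tiny 3-unit network.
patterns = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)}
print(count_solutions(patterns))  # 4
```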

[Figure: approximate model counting with MILP callbacks]

By adding random parity (XOR) constraints involving a few binary variables to iteratively restrict the number of solutions of the MILP formulation, we obtained probabilistic lower bounds on the number of linear regions that are very similar in shape to the actual figures. On the left of the figure below, we are analyzing 10 networks trained on the MNIST dataset for every possible combination of 22 ReLUs distributed in two hidden layers followed by 10 ReLUs of output. The black and red curves on top are upper bounds, the black dots are averages of the exact number of linear regions, and the line charts below are lower bounds with probability 95%. On the right of the figure below, we see that the probabilistic lower bounds are orders of magnitude faster to calculate if the actual number of linear regions is large (at least 1 million each).
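
The parity-constraint idea can be sketched on an explicit solution set: each random XOR constraint keeps any given solution with probability 1/2, so if it takes about m constraints to eliminate every solution, the set plausibly has on the order of 2^m elements. This is a toy version with the solution set held in memory; in the paper the feasibility check is an MILP solve and the probabilistic bounds are derived rigorously.

```python
import random
from itertools import product

def random_xor_constraint(n_vars, k=3):
    # Parity constraint over k randomly chosen variables.
    return random.sample(range(n_vars), k), random.randrange(2)

def satisfies(vector, constraint):
    idx, parity = constraint
    return sum(vector[i] for i in idx) % 2 == parity

def surviving(solutions, constraints):
    return [v for v in solutions if all(satisfies(v, c) for c in constraints)]

random.seed(0)
n_vars = 8
# Hypothetical solution set: all 0-1 vectors of length 8 with at most 3 ones.
solutions = [v for v in product((0, 1), repeat=n_vars) if sum(v) <= 3]  # 93 vectors

# Increase the number of XOR constraints until no solution survives.
m = 0
current = solutions
while current and m < 20:
    m += 1
    current = surviving(solutions, [random_xor_constraint(n_vars) for _ in range(m)])
print(m)  # roughly log2(93), i.e., around 7, suggesting ~2^m solutions
```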

[Figure: experimental results]

If you are curious about our next steps, take a look at our upcoming CPAIOR 2020 paper, where we look at this topic from a different perspective by asking the following: do we need a trained neural network to be as wide or as deep as it is to do what it does? If a network is trained with L1 regularization, the answer is probably not! We use MILP to identify units that can be removed or merged in a neural network without affecting the outputs that are produced. In other words, we obtain a lossless compression.

Where to find our work at AAAI 2020:

  • Spotlight Presentation: Sunday, 11:15 – 12:30, Clinton (Technical Session 9: Constraint Satisfaction and Optimization)
  • Poster Presentation: Sunday, 7:30 – 9:30, Americas Halls I/II

Links to our papers:

  1. Bounding and Counting Linear Regions of Deep Neural Networks (ICML 2018)
  2. Empirical Bounds on Linear Regions of Deep Rectifier Networks (AAAI 2020)
  3. Lossless Compression of Deep Neural Networks (CPAIOR 2020)

All papers are joint work with Srikumar Ramalingam. Christian Tjandraatmadja is a co-author in the ICML 2018 paper. Abhinav Kumar is a co-author in the CPAIOR 2020 paper.

 

Summer 2020 schools on algorithms, data science, machine learning, networks, optimization, transportation, and other relevant topics in operations research

This post covers relevant schools happening between April of 2020 and September of 2020. If you know of other schools that are not listed here, please reach out to me. Like in previous semesters, I will keep updating this post and I may add some schools with past deadlines as reference for readers looking for schools in the next years.

Previous posts: Summer 2016, Winter 2016 / 2017, Summer 2017, Winter 2017 / 2018, Summer 2018, Winter 2018 / 2019, Summer 2019, and Winter 2019 / 2020.

Spring School on Mathematical Statistics
March 30 – April 3     (deadline: January 24)
Leipzig, Germany

LMS Research School: Graph Packing
April 19-25     (deadline: January 31)
Eastbourne, England
* Included on January 23

NATCOR Heuristics and Stochastic Algorithms
April 20-24     (deadline not posted)
Nottingham, England

ISCO 2020 Spring School: Data Science, Machine Learning and Optimization
May 2-3     (deadline not posted)
Montreal, QC, Canada

Complex Networks: Theory, Methods, and Applications
May 18-21     (deadline: February 23)
Como, Italy

Mathematical Modelling, Numerical Analysis and Scientific Computing
May 24-29     (deadline: April 30)
Kacov, Czech Republic

Machine Learning Crash Course (MLCC 2020)
May 25-29     (deadline: April 3)
Oslo, Norway
* Included on February 28

Simons Institute Workshop: Statistics in the Big Data Era
May 27-29     (deadline: February 1st for travel support)
Berkeley, CA, USA

Column Generation 2020
May 31 – June 3     (deadline not posted)
Sainte-Adèle, QC, Canada

Summer School on Modern Optimization for Transportation
June 1-5     (deadline not posted)
Frejus, France

NATCOR Convex Optimization
June 1-5     (deadline not posted)
Edinburgh, Scotland

Risk Measurement and Control: Fintech and Digital Banking
June 3-6     (deadline not posted)
Rome, Italy

IPCO (Integer Programming and Combinatorial Optimization) Summer School
June 6-7     (deadline not posted)
London, England

Structural Graph Theory
June 7-12     (deadline not posted)
Murol, France
* Included on February 3 by suggestion of Aurélie Lagoutte

ICAPS-ICRA Summer School on Plan-Based Control for Robotic Agents
June 8-12     (deadline: March 31)
Paris, France

Zaragoza Logistics Center PhD Summer Academy
June 8-19     (deadline: June 1st)
Zaragoza, Spain

Summer School in Logic and Formal Epistemology
June 8-26     (deadline: March 13)
Pittsburgh, PA, United States
* Included on February 22

Hausdorff School Algorithmic Data Analysis
June 15-19     (deadline: March 29)
Bonn, Germany
* Included on February 22

Research school in computational complexity
June 15-19     (deadline not posted)
Paris, France
* Included on February 22 by suggestion of Ludmila Glinskih

Simulation Summer School (S3)
June 21     (deadline: February 15)
State College, PA, United States
* Included on January 27

DTU CEE Summer School 2020: Advanced Optimization, Learning, and Game‐Theoretic Models in Energy Systems
June 21-26     (deadline: January 31)
Copenhagen, Denmark

3rd International Summer School on Artificial Intelligence and Games
June 22-26     (deadline: March 1st for early registration)
Copenhagen, Denmark

Swedish Summer School in Computer Science (S3CS 2020): The Method of Moments in Computer Science and Beyond & Polyhedral Techniques in Combinatorial Optimization
June 28 – July 4     (deadline: February 11)
Stockholm, Sweden

Machine Learning Summer School – Germany
June 28 – July 10    (deadline: February 11)
Tubingen, Germany

Regularization Methods for Machine Learning (RegML)
June 29 – July 3    (deadline: March 20)
Genova, Italy

Data Science Summer School (DS3)
June 29 – July 3    (deadline not posted)
Palaiseau, France

Tsinghua University 2020 Deep Learning Summer School
June 29 – July 12    (deadline: April 14)
Beijing, China

International School of Mathematics “Guido Stampacchia”: Graph Theory, Algorithms and Applications
July 1-8     (deadline: April 10)
Erice, Italy

Special Interest Group on Genetic and Evolutionary Computation (SIGEVO) Summer School
July 5-9     (deadline: April 3)
Cancun, Mexico
* Included on January 21 by suggestion of Juergen Branke

4th Summer School on Cognitive Robotics
July 6-10     (deadline not posted)
Brisbane, Australia
* Included on January 23 by suggestion of Philip Kilby

Eastern European Machine Learning Summer School: Deep Learning and Reinforcement Learning
July 6-11     (deadline: March 27)
Krakow, Poland

Gdańsk Summer School on Algorithms for Discrete Optimization and Deep Learning
July 6-12     (deadline: April 30 for early registration)
Gdańsk, Poland
* Included on January 23 by suggestion of Georg Anegg

EURO PhD Summer Schools on Multiple Criteria Decision Aiding / Making (MCDA / MCDM)
July 6-17     (deadline: February 1st)
Ankara, Turkey

Bocconi Summer School in Advanced Statistics and Probability
July 6-17     (deadline: March 31)
Como, Italy

EADM Summer school on Learning and Decision Making
July 8-15     (deadline not posted)
Barcelona, Spain
* Included on February 22

EURO PhD School on Data Driven Decision Making and Optimization
July 10-19     (deadline: January 15)
Seville, Spain

3rd Advanced Course on Data Science & Machine Learning (ACDL 2020)
July 13-17    (deadline: March 31)
Siena, Italy
* Included on February 28

EURO PhD School on Sustainable Supply Chains
July 19-23     (deadline: January 20)
Lisbon, Portugal

Latin-American Summer School in Operational Research (ELAVIO)
July 19-24     (deadline: February 29)
Arequipa, Peru
* Included on January 20 by suggestion of Rodrigo Linfati

Gene Golub SIAM Summer School 2020: Theory and Practice of Deep Learning
July 20-31     (deadline: February 1)
Muizenberg, South Africa

Argonne Training Program on Extreme-Scale Computing
July 26 – August 7     (deadline: March 2)
St. Charles, IL, United States

4th International Summer School on Deep Learning (DeepLearn 2020)
July 27-31     (deadline: January 26 for early registration)
Leon, Mexico

Metaheuristics Summer School: Learning and Optimization from Big Data
July 27-31     (deadline: March 5)
Catania, Italy

4th Modelling Symposium: Introducing Deep Neural Networks
July 27-31     (deadline: March 27)
Magdeburg, Germany

Advanced Methods in Operations Research for Logistics and Transportation
July 27-31     (deadline not posted)
Bogota, Colombia

Digital Transformation of Mobility Systems – OR Models and Methods
July 27-31     (deadline: March 31)
Munich, Germany
* Included on February 28 by suggestion of Layla Martin

Deep Learning and Reinforcement Learning (DLRL) Summer School 2020
July 29 – August 6     (deadline not posted)
Montreal, QC, Canada

Machine Learning Summer School – Indonesia
August 3-9     (deadline: April 30)
Bandung, Indonesia

The Cornell, Maryland, Max Planck Pre-doctoral Research School 2020
August 4-9     (deadline: February 15)
Saarbrucken, Germany
* Included on January 27 by suggestion of Alex Efremov

Oxford Machine Learning School
August 17-22     (deadline: April)
Oxford, England
* Included on January 27

Prague Summer School on Discrete Mathematics
August 24-28     (deadline: March 15)
Prague, Czech Republic

Simons Institute Workshop: Probability, Geometry, and Computation in High Dimensions Boot Camp
August 24-28     (deadline not posted)
Berkeley, CA, USA

Simons Institute Workshop: Theory of Reinforcement Learning Boot Camp
August 31 – September 4     (deadline not posted)
Berkeley, CA, USA

Summer School on Machine Learning and Big Data with Quantum Computing (SMBQ 2020)
September 7-8     (deadline not posted)
Porto, Portugal
* Included on February 28

Combinatorial Optimization at Work (CO@Work)
September 14-26     (deadline not posted)
Berlin, Germany

NATCOR Forecasting and Predictive Analytics
September 21-25     (deadline not posted)
Lancaster, England

Simons Institute Workshop: Deep Reinforcement Learning
September 28 – October 2     (deadline not posted)
Berkeley, CA, USA

A Word Cloud to Remember Shabbir Ahmed

(Also posted in the INFORMS 2019 blog.)

The optimization community had some big losses this year with the passing of giants such as Shabbir Ahmed and Egon Balas. I only met Shabbir in person in some of the conferences that I attended during my doctoral years. I still remember how much time he spent talking to me the first time that I presented a poster at the MIP workshop. He was always accessible, engaging, and willing to offer some advice.

When a big conference is about to start, Shabbir comes to my mind because of his word clouds of abstracts. For example, precisely two years ago he observed that “model” was used more often than “data” in the INFORMS 2017 abstracts:

Thanks to some help from Mary Leszczynski and WordArt.com, I came up with a word cloud for talk titles at INFORMS 2019. This year, “model” showed up 363 times and “data” showed up 420 times. However, if we also account for “models”, the tally for “model” goes up to 551. Therefore, Shabbir’s observation that model > data also holds for INFORMS 2019.

[Figure: word cloud of INFORMS 2019 talk titles]