Ridiculous Problems, Real Applications

March 1, 2019

Life is filled with compromises. Generally, we just don’t have the resources, be they time, money, or anything else, to do the things we want to do as well as we’d like. While I can’t solve that problem entirely or generally, I can help you intelligently trade time for improved solutions to problems that you may encounter on the regular. To see how, we’re going to walk through solutions to some decidedly ridiculous problems that you do not have, starting with efficiently ordering a stack of pancakes by size using a spatula.

Academics are interested in toy problems

In academic circles, that is called the pancake problem, and it is the subject of much research. Honestly, it is serious business. Bill Gates worked on it in the late 70s, along with Christos Papadimitriou. If you don’t recognize those names immediately, know that those two have arguably done more to influence how we think about computing, practically and theoretically respectively, than most living people.
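
In case the spatula framing sounds too silly to pin down, here is a minimal F# sketch of the problem; the names (`flip`, `isSorted`) and the list-of-ints representation are mine, not anything official. The only move you are allowed is flipping the top portion of the stack.

```fsharp
// A stack of pancakes, listed top-to-bottom by size. The only legal
// move is a "flip": slide a spatula under pancake k and reverse
// everything above it, i.e. reverse a prefix of the list.
let flip k (stack: int list) =
    List.rev (List.take k stack) @ List.skip k stack

// The goal is the sorted stack: smallest on top, largest on the bottom.
let isSorted (stack: int list) = stack = List.sort stack

// Example: flip 3 [3; 1; 2; 4] evaluates to [2; 1; 3; 4]
```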

The pancake problem is a classic example of what’s more commonly called a permutation problem.  As the name might suggest, a permutation problem is one of permuting the order of some list into a desirable state.  The sliding tile puzzle and the Rubik’s cube are examples of permutation problems you can go out and buy later today if you want. If you go out to dinner at Cracker Barrel, you’ll find peg solitaire at your table. Peg solitaire is also a permutation problem, or it can be solved as if it were one.

That last part, “can be solved as if it were one”, is the important bit. The fifteen puzzle and the pancake puzzle aren’t necessarily interesting in and of themselves, and that’s totally fine and reasonable; they don’t have to be. They provide an easy-to-describe, easy-to-control test bed for evaluating algorithms that work on more complicated problems: planning, scheduling, VLSI layout, constraint satisfaction, and logistics, for example.

What these problems have in common is that they can be viewed as combinatorial optimization problems. Wikipedia unhelpfully describes this as “finding an optimal object from a finite set of objects.” I would tell you that combinatorial optimization is the process of answering “of all possible solutions to the problem, which solutions are the best?” “Best” has a loose definition that varies from application to application. Maybe it means “costs least” to you in one setting and “is least likely to be interrupted by delays” in another. Maybe you care about multiple incomparable attributes, like finding the shortest over-road route that crosses the fewest bridges because you suffer from gephyrophobia.
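
To make “define best” concrete, here is a hypothetical F# sketch for that bridge-averse driver; the `Route` type and the cost function are invented purely for illustration.

```fsharp
// A made-up route type for the bridge-averse driver above.
type Route = { Miles: float; Bridges: int }

// "Best" is whatever your cost function says it is. Here: fewest
// bridges first, breaking ties by shortest distance.
let cost (r: Route) = (r.Bridges, r.Miles)

// Given some way of enumerating candidate routes, pick the one that
// minimizes the cost you defined.
let best (candidates: Route seq) = candidates |> Seq.minBy cost
```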

Combinatorial search is a general approach to automated problem solving

The beauty of combinatorial optimization is that it doesn’t really care how you define best, so long as you do. Armed with a notion of best and a means of enumerating every solution in the space of all possible solutions, combinatorial search will dutifully slog through that space until it finds an answer for you. It just might take a while. Consider the Rubik’s cube. There are roughly 43,000,000,000,000,000,000 arrangements of the standard 3 x 3 x 3 cube. For scale, there are about 2,365,200,000 seconds in a 75-year human lifetime. If you considered 20 billion positions a second, every second from birth to death, you would just barely get through every arrangement and so be guaranteed to find the optimal answer.
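
If you want to sanity-check that back-of-the-envelope math, the arithmetic (assuming a 75-year lifetime and no sleeping) works out roughly like this:

```fsharp
// Rubik's cube arrangements vs. one very dedicated lifetime.
let positions = 4.3252e19                                // ~43 quintillion arrangements
let lifetimeSeconds = 75.0 * 365.0 * 24.0 * 60.0 * 60.0  // = 2,365,200,000 seconds
let positionsPerSecond = positions / lifetimeSeconds
// positionsPerSecond comes out a bit over 18 billion, so 20 billion
// positions a second, every second, really would cover them all.
```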

We’ve been solving the cube with computers since the early 80s, so obviously you don’t have to consider every position to get a good answer. In fact, you don’t have to consider every position even if you want to guarantee that the solution you find is the best possible one. If you’re clever about it, you can get really great solutions while examining only a small portion of the space. You can even trade additional resources, like time and memory, for better solutions if you know the right techniques.

Why you should care about combinatorial search

I think that combinatorial search is a useful tool to have in your bag because problems like “find me the cheapest X in Y” come up not infrequently. I want you to be able to recognize combinatorial optimization problems when they arise, and I want you to know how to solve them when you find them. To that end, I’m writing a small series of blog posts that describe how to apply an increasingly complicated series of combinatorial search techniques to an increasingly complicated series of problems. The posts will be accompanied by F# implementations so you can get a sense of what these approaches actually look like in practice. By the end, we’ll have:

  • Implemented several kinds of search algorithms
  • Discussed evaluation techniques for search algorithms
  • Explored a few toy domains
  • Investigated one real domain

Learning Goals

I hope that by the end, you’ll have:

  • Understood what makes a problem a combinatorial optimization problem
  • Mastered a basic tree search technique
  • Got a feel for one or two advanced tree search techniques, especially for handling huge branching factors and extremely large trees
  • Learned how to make these searches run in parallel
  • Learned how to spread your effort evenly across the space of all solutions

Next time around, we’ll be looking at how to apply depth-first search to the pancake sorting problem I mentioned before. We’ll start with a very naive application of depth-first search (DFS) to the problem. Then we’re going to iterate on that implementation. We’ll actually iterate a bunch, because there’s a lot of ground to cover between the textbook implementation of DFS and one that’s good for huge problems. By the end, we’ll be able to order a comically large stack of imaginary pancakes.
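
As a rough preview of where that post will start (the names are mine, and the real version will be developed far more carefully), a naive depth-limited DFS over pancake flips can be as simple as this sketch:

```fsharp
// Naive depth-limited DFS over pancake flips: from the current stack,
// try every flip size and recurse, returning the first sequence of
// flip sizes that sorts the stack (not necessarily the shortest one).
let rec dfs depth (stack: int list) =
    if stack = List.sort stack then Some []
    elif depth = 0 then None
    else
        [2 .. List.length stack]
        |> List.tryPick (fun k ->
            let flipped = List.rev (List.take k stack) @ List.skip k stack
            dfs (depth - 1) flipped
            |> Option.map (fun flips -> k :: flips))

// dfs 5 [3; 1; 4; 2] returns Some [...], a list of flips that sorts the stack.
```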