Jordan Thayer, PhD

AI Practice Lead

Recent Articles

Using Pipelines to Lower Barriers To Entry in Machine Learning

Many people are keenly interested in machine learning, and with good reason. Machine learning is applicable to a wide variety of domains, including engineering, education, healthcare, and government. The broad applicability of machine learning is a double-edged sword: although an ever-increasing pool of people want to use machine learning, a decreasing portion of them […]
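The excerpt stops before any code, but the core idea can be sketched with scikit-learn’s Pipeline (an assumed choice of library and dataset; the full article may use different tooling):

```python
# A minimal sketch of the pipeline idea using scikit-learn.
# The dataset and model choices are illustrative, not from the article.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline bundles preprocessing and modeling into one object, so a
# newcomer can fit and evaluate with two calls instead of wiring each
# step together by hand.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```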
Read More

Five Things I Learned Working with Software Engineers

Introduction Hi, I’m Jordan Thayer, and I’m a research scientist by training. I got my PhD in 2012 from the University of New Hampshire, where I focused on Artificial Intelligence. My undergraduate degree was in Computer Science too; I got that from Rose-Hulman back in 2006. I’ve been programming things for about as long as […]
Read More

Accidental AI: 5 Everyday AI Problems

Introduction People often ask questions like “What is AI?” or “Is AI worth the hype?”. Both questions are non-trivial to answer, but let’s start with the first one: “What is AI?”. This is a perennial favorite at academic conferences on AI for a few reasons: Every AI researcher has to have an opinion, since it’s their […]
Read More

Planning for Randomizers

Introduction I’m a big fan of video games. I like small, well-defined boxes where I can get better at some task. I like measuring myself against my peers. As such, it’s probably no great surprise that I like speedrunning and randomizers. For the uninitiated, speedrunning is trying to beat some game as quickly as possible. […]
Read More

Distributing Depth First Search to the Masses

Last Time Last time we talked about techniques for exchanging processor (and developer) time for reduced wall-clock time in heuristic search. In other words, we talked about how to use multiple cores on a single machine to solve a problem faster. That worked pretty well, but we noticed that it couldn’t scale beyond the […]
Read More

Parallel Problem Solving

Last Time Previously, we looked at a technique for reducing the memory footprint of a heuristic search. We talked about why it was important to reduce the memory consumed by a search. Even if we move heaven and earth to reduce memory consumption, heuristic search is still prohibitively expensive in terms of time. Learning Goals This […]
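As a rough illustration of the multi-core idea (a sketch under assumptions; the toy tree and objective below are mine, not the article’s benchmark), split the search tree at the root and let each worker process run a sequential depth first search over its own subtree:

```python
# Hypothetical sketch: trade cores for wall-clock time by giving each
# root-level subtree of a toy binary tree to its own worker process.
from itertools import product
from multiprocessing import Pool

DEPTH = 20  # toy tree: a binary choice at each of DEPTH levels

def leaf_cost(choices):
    # Stand-in objective; a real search would score a tour, a schedule, etc.
    return sum(i for i, c in enumerate(choices) if c)

def dfs_best(prefix):
    """Sequential depth first search below `prefix`; returns best leaf cost."""
    if len(prefix) == DEPTH:
        return leaf_cost(prefix)
    return min(dfs_best(prefix + (c,)) for c in (0, 1))

if __name__ == "__main__":
    # Eight independent subtrees (all length-3 prefixes), one per task.
    prefixes = list(product((0, 1), repeat=3))
    with Pool() as pool:
        results = pool.map(dfs_best, prefixes)
    print(min(results))  # combine per-subtree answers into a global best
```

Because the subtrees share no state, the only coordination cost is combining the per-subtree answers at the end.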
Read More

Trying Deltas For A Change

Last Time Last time we took a look at how improved bounds computation and child ordering can improve the performance of heuristic search algorithms. In particular, we saw how those techniques improved the performance of depth first search (or depth first branch & bound if you prefer) when applied to the TSP. Even though we […]
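A hedged sketch of those two techniques on a made-up five-city instance (the distance matrix and function names are mine, not the article’s): keep an incumbent best tour, expand the nearest unvisited city first, and prune any partial tour that already costs at least as much as the incumbent.

```python
# Depth first branch & bound on a toy symmetric TSP, illustrating
# child ordering and bound-based pruning. The instance is made up.
import math

DIST = [  # symmetric distances between 5 cities
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(DIST)

def branch_and_bound(tour, cost, best):
    if len(tour) == N:  # complete tour: close the cycle back to the start
        return min(best, cost + DIST[tour[-1]][tour[0]])
    remaining = [c for c in range(N) if c not in tour]
    # Child ordering: visit the nearest unvisited city first, so good
    # incumbents appear early and prune more of the tree.
    remaining.sort(key=lambda c: DIST[tour[-1]][c])
    for city in remaining:
        child_cost = cost + DIST[tour[-1]][city]
        if child_cost < best:  # bound: skip branches that cannot improve
            best = branch_and_bound(tour + [city], child_cost, best)
    return best

print(branch_and_bound([0], 0, math.inf))  # optimal tour cost
```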
Read More

The Importance of Consuming Search Results, Pancakes

Last Time Last time we looked at depth first search and how it could be applied to a simple optimization problem, the pancake problem. We decomposed the pancake stacking problem into some components that are very important if you want to apply heuristic search: a goal test, states, and actions to move between states. We then looked […]
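Those three components map directly onto code. Here is a minimal sketch of that decomposition for the pancake problem (the function names are illustrative, not the article’s):

```python
# A state is a stack of pancake sizes (top of stack first); an action
# flips a prefix of the stack; the goal test checks for sorted order.

def is_goal(stack):
    """Goal test: pancakes sorted smallest-on-top."""
    return list(stack) == sorted(stack)

def flip(stack, k):
    """Action: reverse the top k pancakes, producing a successor state."""
    return stack[:k][::-1] + stack[k:]

def successors(stack):
    """All states one flip away (flipping a single pancake changes nothing)."""
    return [flip(stack, k) for k in range(2, len(stack) + 1)]

print(successors((3, 1, 2)))  # [(1, 3, 2), (2, 1, 3)]
```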
Read More

Flipping Flapjacks, Pruning Pancakes, and Depth First Steps

In the previous post in this series I spent some time trying to convince you that toy problems are worthy of your attention. In particular, I tried to sell you on the notion that the pancake problem was worthy of your attention. It isn’t necessarily because flipping flapjacks is in and of itself fascinating. It’s because […]
Read More