Wednesday, January 3, 2018

Optimization Algorithms

Many optimization algorithms fall into the general class of hill-climbing algorithms:  if you want to get to the highest elevation, start where you are, look around, and take a step uphill.  Then look around again – lather, rinse, repeat.  This is exactly the approach we talked about yesterday, making many small, incremental improvements.  It’s easy, it’s robust, and mostly it works.
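That loop can be sketched in a few lines of Python.  Everything here is invented for illustration – a one-dimensional “landscape” with a foothill and a taller mountain – but it shows the greedy step-uphill-until-stuck behavior:

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy hill climb: from x, repeatedly take whichever neighboring
    step most improves f; stop when no step goes uphill."""
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x  # no uphill step left: we're on a (possibly local) peak
        x = best
    return x

# A made-up landscape: a foothill peaking at x = -1 and a taller
# mountain peaking at x = 2.
def landscape(x):
    return -(x + 1) ** 2 + 1 if x < 0.5 else -(x - 2) ** 2 + 3

peak = hill_climb(landscape, x=-3.0)  # climbs to the foothill at x ≈ -1
```

Starting on the slopes of the foothill, the climber tops out at the modest peak and stops – which is exactly the trouble described next.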

The trouble with hill-climbing algorithms is that once you get to the top of your mountain, there you stay.  If you started on the slopes of the biggest mountain, then great, you’re at the highest summit!  But what if you started in the foothills?  You climb the mountain, and stand proudly on the top of a modest hill, with no way to get higher.

This is where a different class of algorithms comes into play, and this is why small, disruptive companies can get ahead of big companies with their teams of plodding hill-climbers topping their little foothills.  There are a number of algorithms that let you seek farther afield for higher peaks, such as jumping randomly to any point on the map and checking the elevation, or overlaying a grid and walking from intersection to intersection, mapping the elevation as you go.  The trouble with these algorithms is that you don’t know when you’re done, and you can spend a lot of time jumping around and checking more points.  If you’re short on time, and want to be pretty sure you get to the top of a relatively high mountain, what do you do?  Hold onto that thought….

Curiously, certain organisms called protists exhibit an interesting reproductive strategy: they switch between asexual and sexual reproduction depending on the environment.  When the environment is benign, the organism reproduces asexually – it’s fast and efficient, the chance that new offspring are well adapted to the environment is high, and the population of identical genes expands, with a small rate of change caused by random mutations here and there.  But what if the environment changes and becomes hostile?  In that case the organism switches modes and uses sexual reproduction, which mixes up the genetic makeup of the offspring, increasing the chance of finding new good matches (this is related to the Red Queen Hypothesis).  Switching modes is the useful insight here, and that’s the behavior to emulate.  Genetic algorithms in software today do this, with a mix of random mating combinations that drive large changes and random minor mutations that create small changes.
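A toy genetic algorithm along these lines – blend-style “mating” for large changes, small Gaussian noise for minor mutations, and a survival-of-the-fittest cull each generation – might look like this in Python (the fitness target and all the parameters are made up for illustration):

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def fitness(x):
    return -abs(x - 7)  # toy objective: closer to 7 is fitter

def crossover(a, b):
    # "Mating": blend two parents -- a large, recombining change.
    w = random.random()
    return w * a + (1 - w) * b

def mutate(x, rate=0.1):
    # A minor mutation: a small random nudge.
    return x + random.gauss(0, rate)

def evolve(population, generations=200, survivors=10, brood=40):
    for _ in range(generations):
        # The Darwinian culling step: only the fittest survive to mate.
        population = sorted(population, key=fitness, reverse=True)[:survivors]
        children = [mutate(crossover(random.choice(population),
                                     random.choice(population)))
                    for _ in range(brood)]
        population += children
    return max(population, key=fitness)

best = evolve([random.uniform(-100, 100) for _ in range(50)])
```

Note that most children each generation are discarded; the population homes in on the target only because the cull keeps the promising few.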

A few decades back, a Russian engineer working on optimizing microprocessor circuit routing hit upon a similar approach after talking with a mathematician who was modeling glass annealing behavior.  In glass annealing, reheating and slow cooling are used to heal micro-fractures in the glass and reduce entrained stress, with higher temperatures healing larger cracks but leaving more stress, and lower temperatures reducing stress but not resolving cracks as well.  The engineer devised a novel approach now called simulated annealing, in which a computer performs a number of simulation runs, first at “higher temperatures” – large changes in component layouts – and then through cycles of “lower temperature” – small changes and lower stress.  For each run, the computer performed a quick routing of the interconnections and weighed the result against previous tries, keeping the best.
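A bare-bones version of the idea, sketched in Python (the cooling schedule, step sizes, and bumpy test landscape are all invented for illustration): the “temperature” scales both how big a jump each trial makes and how willing we are to accept a worse result, and we always keep the best layout seen so far.

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def simulated_annealing(f, x, t_start=10.0, t_end=0.01, cooling=0.99):
    best = x
    t = t_start
    while t > t_end:
        # High temperature: big exploratory jumps; low temperature: refinements.
        candidate = x + random.gauss(0, t)
        delta = f(candidate) - f(x)
        # Always accept improvements; sometimes accept setbacks,
        # more readily while still hot.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = candidate
        if f(x) > f(best):
            best = x  # weigh against previous tries, keep the best
        t *= cooling
    return best

# A bumpy made-up landscape: many local peaks, global peak at x = 0.
def bumpy(x):
    return math.cos(3 * x) - 0.1 * x * x

best = simulated_annealing(bumpy, x=8.0)
```

Even starting far from any good peak, the early high-temperature jumps scatter widely enough to land near a tall one, and the cooling cycles then refine it.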

Both types of algorithms share a necessary intermediate step:  when there are large changes, most new offspring and most simulation runs yield really poor results, and only a few yield promising results, so there must be a culling step where only the best survive.  For genetics (and genetic algorithms) this is the Darwinian step, and for simulated annealing it’s a fitness function weighting step.  We will need to emulate this behavior as well.


Now let’s go back to our hill-climbing problem, and emulate the behavior we’ve just seen:  perhaps we can cast a rough grid or take a few random hops to sample the uncertain topography, and of those data points, pick the best one or top few.  Around those, take smaller hops and repeat.  Throw out the valleys, take the biggest peak so far, and shift modes to a hill climb.  Voila!  You know when you’re done, you’re at a high peak (probably the highest), and you didn’t spend a lot of time hopping around.
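Putting those three phases together – a scattering of random hops, a culling step, and a mode switch to hill climbing – a sketch might look like this in Python (the landscape, sample counts, and step sizes are invented for illustration):

```python
import math
import random

random.seed(2)  # fixed seed so the sketch is repeatable

def hill_climb(f, x, step=0.01, max_iters=5000):
    for _ in range(max_iters):
        up = max((x - step, x + step), key=f)
        if f(up) <= f(x):
            break
        x = up
    return x

def sample_then_climb(f, lo, hi, samples=50, keep=3):
    # Phase 1: random hops to sample the uncertain topography.
    points = [random.uniform(lo, hi) for _ in range(samples)]
    # Phase 2: throw out the valleys -- keep only the top few.
    best_few = sorted(points, key=f, reverse=True)[:keep]
    # Phase 3: shift modes and hill-climb from each survivor.
    return max((hill_climb(f, p) for p in best_few), key=f)

# A bumpy made-up landscape: many local peaks, global peak at x = 0.
def bumpy(x):
    return math.cos(3 * x) - 0.1 * x * x

best = sample_then_climb(bumpy, -10, 10)
```

The hopping phase has a fixed, known budget, so unlike pure random search you know exactly when you’re done.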

What’s the lesson for us, at work or as individuals?  If you want to make big changes, you have to accept that failure will be ever-present, since the environment is hostile to your experimentation, so your goal should be to come up with an approach that lets you experiment quickly and cheaply, creating and culling a lot of permutations, and then learning by taking the best and refining again.  Throwing many samples away, sometimes taking steps backward, and having to survive many improvement cycles are all hard for us to stomach, but that’s what we need to do.
This is exactly the general approach espoused by The Lean Startup and Little Bets, and earlier by Peter Drucker in The Effective Executive, though they do a more thorough job of explaining it (and made a lot more money) than I just did.

If you or your business is struggling in a harsh environment, maybe you should raise your annealing temperature a bit.  This is a thought we’ll revisit, but first we’ll take a little deeper look at Lean, and the precepts of Lean Startup.  Until then, ponder on the thought that few people and few companies really try to take chances at all, and fewer still do so with a clear vision and a decent plan.  
