The Front End: Climbing the Not-Highest Mountain

Aug. 2, 2017
Oftentimes, engineers push forward far enough to achieve elegant solutions that work at a local level, but don’t reach the global-optimum summit.

What comes to mind when you think about “optimizing” a design? Does it involve big leaps, or small tweaks, or both? Do you get your designs to be as good as they can be, or do you aim for only as good as they need to be to meet the “minimum” demands of your boss or your customer?

I’ve been there. We want to do our best, but we get institutional side-eye if we burn too much time or energy going beyond what’s necessary. We know satisficing isn’t satisfying, but sometimes we’re not given the choice.

Whatever the constraints, we still want to try for that highest peak of design performance. And I imagine it must be very pleasing, if you’re a mountain-climber, to stand at that peak, exactly the highest point of the mountain you’ve just conquered. In today’s thoroughly mapped world, there’s no surprise when you get to the top; you know how high up you are. Well, I suppose topographers have some way of defining height above sea level even when you’re so far from the sea that the local sea level would be a different distance from the center of the Earth.

But I digress. In the past, though, it must have been quite annoying to struggle to the summit of what you thought was the highest mountain around, only to stand at that peak and look up at a visibly higher peak just over yonder. You were climbing the not-highest mountain.

But that isn’t actually a very good analogy for the optimization issue I want to talk about. If the Earth’s topography is analogous to an error surface (the N-dimensional graph of how “good” your system is as a function of its parameters), we can cheat by using our binoculars (and neglecting the Earth’s curvature just for a moment) to look around for a higher peak nearby.

As designers, though, we don’t have peak-finding binoculars. We can’t “look across” our error surface and just “see” a better solution. It’s a more accurate metaphor to say that we’re rappelling into deep holes in the ground, not climbing up tall mountains. Given this, we’re all too likely to descend into the not-deepest hole, without access to tools for “seeing” whether or not there’s an even deeper hole elsewhere that we might have stumbled into (perhaps literally) if we had spent more time looking.

I think we need two phases to “optimize” our designs. First, a sequence of “big leaps” across the error surface—the landscape of our exploration—that give us a chance of finding the hole that contains the “deepest deepest point.” Then some cycles of “small tweaks,” where we attempt to zoom in on that deepest deepest point.
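
To make that idea concrete, here’s a minimal Python sketch of the two-phase search. Everything in it (the cost function, the sample count, the step sizes) is made up for illustration; it’s a sketch of the principle, not a recipe from any real design.

```python
import math
import random

def cost(x):
    # An illustrative "error surface" with several holes of different depths
    return 0.2 * x * x + math.sin(5.0 * x)

random.seed(1)

# Phase 1: big leaps -- sample widely and keep the most promising spot
candidates = [random.uniform(-10.0, 10.0) for _ in range(200)]
x = min(candidates, key=cost)

# Phase 2: small tweaks -- crude local descent, zooming in on the bottom of that hole
step = 0.1
while step > 1e-9:
    left, right = cost(x - step), cost(x + step)
    if min(left, right) < cost(x):
        x = x - step if left < right else x + step
    else:
        step *= 0.5          # nothing nearby is better; tighten the zoom

print(f"settled at x = {x:.4f}, cost = {cost(x):.4f}")
```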

The Hole Story

And finally I get to the key point of this piece: Far too many people start trying to find that metaphorical deepest deepest point of their design before they have any idea whether they are even in the right hole.

Over the years, I’ve seen many cases where significant time is spent fine-tuning some aspect of a system in blissful ignorance of flaws in the design that really require some more big leaps. You need to be able to sense the difference between a good solution that still needs some tweaking, and a bad solution. Once you have developed those antennae, you need to be able to stick up for what they tell you—even if, for a while, you get labeled as a contrarian by those who haven’t yet figured out that they are looking in the wrong hole.

Why this industry’s, nay our culture’s, obsession with fine-tuning? I believe that it’s because we’ve been inculcated with the belief that you can learn something from the small change in some y that results from a small change in some x. It’s what I call:

The Curse of the Calculus… buwahaha.

I still remember when I discovered calculus and what you could do with it. By calculus, though, I mean The Differential Calculus, and therein lies the rub: It’s only good for smooth functions—functions whose rates of change are well-behaved. If you make an idealized cup by rotating a parabola around its axis and then drop a ball into it, there’s no doubt where that ball is going to end up, even if it oscillates around its equilibrium point for a while. Functionally, that cup is smoothness incarnate.
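
If you want to watch the ball roll, here’s a throwaway sketch of plain gradient descent on just such a parabolic cup (the starting point and step size are arbitrary); on a surface this smooth, the method cannot help but find the one true bottom.

```python
# Gradient descent on an idealized parabolic cup: f(x) = x^2, so f'(x) = 2x.
x = 7.3            # drop the ball anywhere on the wall
rate = 0.1         # step size, chosen arbitrarily
for _ in range(100):
    x -= rate * 2.0 * x      # roll a little way downhill along the slope
print(x)           # effectively 0.0: the single, inevitable resting point
```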

But the real world is full of unsmooth things: discrete, degenerate, and otherwise non-differentiable things. So optimization methods, whether mental or mathematical, that rest on an assumption of differentiability often fail to locate the globally best answer. They simply give you the depth of the not-deepest hole you find yourself in, to exquisite precision. And you can be sure that if you take a small step away from the deepest point of that not-deepest hole in any direction, your solution will get a little poorer. So it feels like you’ve found the best solution—any small change makes it worse.
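
Here’s a contrived demonstration of that false contentment; the function and the starting point are mine, chosen purely for illustration. Started on the wrong side of the hump, gradient descent polishes the shallow hole to six decimal places and never learns that a deeper one exists next door.

```python
def f(x):
    # A made-up error surface with two holes: a deeper one near x = -1.3
    # and a shallower one near x = +1.1
    return x**4 - 3.0 * x**2 + x

def dfdx(x):
    return 4.0 * x**3 - 6.0 * x + 1.0

x = 2.0                        # start on the wrong side of the hump
for _ in range(10_000):
    x -= 0.01 * dfdx(x)        # ordinary gradient descent

print(f"converged to x = {x:.6f}, depth = {f(x):.6f}")   # the shallow hole, very precisely
print(f"the deeper hole, near x = -1.3, has depth {f(-1.3):.6f}")
```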

Staying Local

This point is called a local optimum. We go out of our way to modify our problems and our gradient-based routines, replacing discontinuities with elegant continuous (and differentiable) approximations. The result is a whole canon of optimization methods that are awesome at finding local optima, but completely useless at actually finding the best global solution. These modifications are rather like training wheels on a bicycle; they stop the algorithm from failing, but they can’t help you ride the bike in the right direction.
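
A familiar instance of those training wheels (my illustration, not a prescription from that canon): swap a hard step or an absolute value for a smooth stand-in so the gradient routine has something to chew on. It keeps the algorithm from falling over; it does nothing to point it at a better hole.

```python
import math

def hard_step(x):
    return 0.0 if x < 0.0 else 1.0          # gradient zero almost everywhere, undefined at 0

def smooth_step(x, sharpness=10.0):
    return 1.0 / (1.0 + math.exp(-sharpness * x))   # logistic stand-in, differentiable everywhere

def hard_abs(x):
    return abs(x)                            # kink at 0

def smooth_abs(x, eps=1e-3):
    return math.sqrt(x * x + eps * eps)      # the kink rounded off, differentiable at 0

for x in (-0.2, 0.0, 0.2):
    print(x, hard_step(x), round(smooth_step(x), 3), hard_abs(x), round(smooth_abs(x), 4))
```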

In a previous column I recounted an older colleague’s critique of the much-younger me: “Do you know that or are you just guessing?” Well, I only dimly realized it at the time, but it turns out that appropriately organized guesswork is jolly useful.

In my experience, the most successful routines for finding global optima in real-world, constrained, “dirty” design problems always use a great deal of what you and I would call guesswork. It’s usually dressed up with fancy nomenclature like “simulated annealing” and “genetic algorithm.” I’ve used both of these to solve thorny filter design challenges. They are, basically, just smarter ways of keeping track of a great many function evaluations whose usefulness you can’t really judge at the time. Guesswork, in fact.
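
For the curious, here’s a bare-bones simulated-annealing loop in Python. The cost function, the cooling schedule, and the size of each random hop are stand-ins I’ve invented for the example, not anything lifted from my filter work; the point is the accept-some-bad-moves-while-hot rule, which is exactly the organized guesswork that lets the search climb out of a not-deepest hole.

```python
import math
import random

def cost(x):
    # A stand-in "dirty" error surface with many holes of different depths
    return 0.1 * x * x + math.sin(3.0 * x) + 0.5 * math.cos(7.0 * x)

random.seed(42)
x = random.uniform(-10.0, 10.0)
best_x, best_c = x, cost(x)
temperature = 5.0

while temperature > 1e-3:
    trial = x + random.gauss(0.0, 1.0)      # a guess in the neighborhood
    delta = cost(trial) - cost(x)
    # Always accept improvements; sometimes accept a worse point while still "hot",
    # which is what lets the search hop out of a local hole
    if delta < 0.0 or random.random() < math.exp(-delta / temperature):
        x = trial
        if cost(x) < best_c:
            best_x, best_c = x, cost(x)
    temperature *= 0.999                    # cool down slowly

print(f"best found: x = {best_x:.4f}, cost = {best_c:.4f}")
```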

A cautionary coda to this already-cautionary tale, though. Just because you can solve a problem doesn’t mean that it’s worth the time and effort. One of the most challenging filter optimization tasks I ever attempted (thanks to a bunch of self-imposed constraints) finally yielded what I’d hoped for after many optimization sessions. But the thrill of the chase was not matched by the value of the catch; the design never went into production, because it didn’t solve any customer problem better than our other products. You need to know when to stop, but you also need to know whether to start in the first place.

Have you been tripped up by applying smooth thinking to a rough problem? Let me know!

About the Author

Kendall Castor-Perry | Senior MTS Architect, Programmable Systems Division

For nearly four decades, Kendall Castor-Perry has been chasing signals through electronic systems, wringing out the information they are hiding. He’s a world-class authority on filters and precision analog circuit engineering and a tireless champion of the needs of the customer. He has been widely published and syndicated, especially when sharing his extensive filtering knowledge as “The Filter Wizard.”  He holds a BA in Physics from Oxford and an MBA in MBA stuff from London Business School. Kendall is currently Senior MTS Architect in Cypress Semiconductor’s Programmable Systems Division, pushing on the performance:power:price boundaries constraining tomorrow’s critical sensor-processing systems.
