
The Biggest Loser?

A new season of NBC’s “The Biggest Loser” recently started. Have you seen this show? My wife, Cherie, loves it; she finds it inspirational to watch these folks go through such a tough ordeal in order to improve their health. I enjoy it as well, though my motives are completely different. Somehow, watching these folks literally work their butts off while someone screams at them makes me feel less self-conscious about my own physical shape and fitness, or lack thereof. Knowing that I haven’t gotten to that point yet allows me to justify why I’m sitting on the couch watching them while reaching for a handful of potato chips!

I do find it interesting how they measure gains and losses week in and week out. If you’ve watched from the beginning, you may have noticed that the current approach is different from the one used in the first season. When the show first started, the contestants were measured purely on the total amount of weight, in pounds, they lost that week. Now, instead, the biggest losers are determined by the percentage of weight lost each week. I believe the change was made to counter the assertion that judging purely by pounds favored those who were already the most overweight, since they had more weight to lose.

But it seems to me that the current approach doesn’t really solve the problem. Yes, it helps: someone who was 250 pounds and loses 5 now ties with someone who was 400 pounds and loses 8. But someone who is 400 pounds still has a lot more to lose over the long haul.

Let’s consider two theoretical contestants; I’ll call them Bob and Jillian. Let’s assume that they each have an ideal weight of 175 lbs, that Jillian starts the competition at 225 lbs, and that Bob starts at 275 lbs. Imagine that two-thirds of the way through the competition, they’ve both lost 22% of their body weight. That means that Bob, the heavier of the two, has lost 60.5 pounds and now weighs 214.5 lbs, while Jillian, the lighter of the two, has lost 49.5 lbs and now weighs 175.5 lbs. She has nowhere to go from here, while Bob still has room to lose another 39.5 lbs! As a result, it’s likely Bob will win in the long run, unless Jillian pushes her weight below her ideal! But in this fictional case, is Bob really the person who should be considered to have achieved the most? After all, the lighter of the two contestants reached her ideal weight much faster. Isn’t that what really counts?
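If you want to check the arithmetic, here is a minimal Python sketch using the made-up numbers above. It simply shows why an equal percentage loss leaves the two contestants in very different places relative to their ideal weight:

```python
# Toy calculation with the hypothetical numbers from the example above:
# an equal percentage loss does not mean an equal distance to the ideal weight.
IDEAL = 175.0  # assumed ideal weight for both contestants, in lbs

def after_loss(start_lbs, pct_lost):
    """Return (pounds lost, current weight, pounds still above ideal)."""
    lost = start_lbs * pct_lost
    current = start_lbs - lost
    return lost, current, max(current - IDEAL, 0.0)

for name, start in [("Jillian", 225.0), ("Bob", 275.0)]:
    lost, current, remaining = after_loss(start, 0.22)
    print(f"{name}: lost {lost:.1f} lbs, now {current:.1f} lbs, "
          f"{remaining:.1f} lbs above ideal")
```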

This same phenomenon creeps up from time to time in the world of physical verification when we talk about scaling. Let’s face it, scaling is hugely important for DRC runtimes these days. If you are designing a 32 nm or 28 nm chip with billions of devices, there is simply no way you are going to get reasonable runtimes without it.

As part of the effort to ensure the fastest total physical verification runtimes, Calibre continues to improve its scaling capability. If you are looking for the best Calibre runtimes on large designs, then “hyper remote” with remote data servers is the way to go. “Hyper remote” is a great concept. It combines Calibre’s strengths in true multi-threading with our existing distributed processing and initial hyperscaling concepts. In essence, it allows us to run multi-threaded processes on remote machines. By allowing the remotes to manage their own processes, it also lets us run many more tasks in parallel than traditional hyperscaling, thus improving scaling and cutting runtimes dramatically. Remote data servers additionally allow memory allocation to be shared across the memory of the remote machines, which greatly reduces the requirements for a “master” machine. The combination gives Calibre considerable improvements both in environments with lots of small (2-processor) machines and in environments with several large servers with many processors. As always, it’s just part of the standard Calibre licensing configuration, and it all comes as part of the support dollars spent on your Calibre investment.
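To make the general idea a little more concrete, here is a rough conceptual sketch in Python of the pattern being described: a master hands coarse batches of work to remote workers, and each remote runs its batch with its own threads. This is not Calibre code and not its actual interface; all of the names (RemoteWorker, run_check, the host names) are invented purely for illustration.

```python
# Conceptual sketch only: a "master" hands batches of checks to remote
# workers, and each worker manages its own threads locally. The names are
# invented for illustration; this is not how Calibre is invoked or built.
from concurrent.futures import ThreadPoolExecutor

def run_check(check_name):
    # Stand-in for one rule-check operation running on a remote machine.
    return f"{check_name}: done"

class RemoteWorker:
    """Pretend remote machine that schedules its own threads."""
    def __init__(self, host, threads):
        self.host, self.threads = host, threads

    def run_batch(self, checks):
        # Because each remote manages its own parallelism, the master does
        # not have to schedule every individual task itself.
        with ThreadPoolExecutor(max_workers=self.threads) as pool:
            return list(pool.map(run_check, checks))

checks = [f"RULE_{i}" for i in range(16)]
workers = [RemoteWorker("node1", 4), RemoteWorker("node2", 4)]

# Master: split the work coarsely across remotes; each remote then fans
# the work out to its own threads.
results = []
for i, worker in enumerate(workers):
    results += worker.run_batch(checks[i::len(workers)])
print(len(results), "checks completed")
```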

Calibre Scaling Improvements

But with all that in mind, we still realize that scaling is only a means to an end. The end goal is really the fastest turnaround times. Sometimes it’s easy to lose track of this goal, putting the emphasis not on runtimes but on scaling itself. This can be misleading. Consider the two scaling graphs below.

Scaling Example #1

Scaling Example #2

If you consider these two graphs out of context, you may conclude that the second graph represents the best solution for physical verification performance, because it seems to scale to more CPUs. But this may not be true due to some unstated assumptions.

First, scaling is always measured relative to some starting point, and those starting points are not necessarily comparable to one another. To illustrate, let’s go back to our Biggest Loser analogy. Recall our two contestants, Bob and Jillian. Let’s assume that two-thirds of the way through the season Jillian stopped losing weight, but Bob continued to lose another 20 lbs over the course of the season. You could plot “scaling” as a curve of their current weight per week. In doing so, you’d clearly see that the heavier contestant’s weight loss ‘scaled’ further. But could you then conclude that this contestant is somehow more fit? Of course not; Jillian weighs less!

The same is true for physical verification and scaling. Let’s consider the same original scaling graphs, but this time let’s plot them not as relative speed-up, but as actual runtimes. In doing so, some new information comes to light that can dramatically change the picture. Now we can see that the first graph’s curve stopped scaling earlier, but actually reached a faster total runtime in the end.
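The distinction is easy to demonstrate with a small, entirely made-up data set. In the sketch below, tool A has a lower single-CPU runtime but stops scaling around 8 CPUs, while tool B keeps scaling to 16 CPUs yet never catches up on wall-clock time. All of the numbers are hypothetical; the point is simply that a speed-up curve and a runtime curve tell different stories:

```python
# Hypothetical numbers only: tool A scales less but finishes sooner,
# because speed-up is always relative to that tool's own 1-CPU runtime.
single_cpu_hours = {"A": 10.0, "B": 30.0}   # assumed 1-CPU runtimes (hours)
speedup = {                                  # assumed measured speed-ups
    "A": {1: 1.0, 2: 1.9, 4: 3.5, 8: 5.0, 16: 5.2},
    "B": {1: 1.0, 2: 2.0, 4: 3.9, 8: 7.5, 16: 13.0},
}

for tool in ("A", "B"):
    for cpus, s in sorted(speedup[tool].items()):
        runtime = single_cpu_hours[tool] / s
        print(f"tool {tool}, {cpus:2d} CPUs: {s:4.1f}x speed-up, "
              f"{runtime:5.2f} h wall clock")
```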

Scaling in Context

Ah, but you might say, ‘the graph on the right still looks best, because it is continuing to scale, and by extrapolation it looks as though it will eventually be faster.’ Well, that’s a stretch, to say the least. Let’s reconsider our contestants. We all know that, eventually, every body reaches its own minimum healthy, sustainable weight. We know where Jillian’s minimum weight is; she already reached it at 175. But you can’t tell whether Bob will continue to lose more weight or whether he has already hit his minimum.

From this scenario, you can clearly see that for contestants on the Biggest Loser, it is to one’s advantage to come in weighing more. Quite frankly, any would-be contestant would be well served to binge eat as much as possible before coming onto the show, just so they would have more weight to lose over the course of the season, thereby improving their odds of staying above the dreaded yellow line!

Again, there is a similar analogy in the world of physical verification. Amdahl’s law clearly shows that all scaling solutions eventually reach a point of diminishing returns. In other words, like ideal body weight, every tool will eventually reach a runtime plateau, where adding more CPUs does not improve performance and may even start to slow the run down. This means that you cannot simply extrapolate a scaling curve. We illustrate this point by extending the previous scaling curves to more CPUs below; clearly, the extrapolation was not a safe bet.
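For those who like to see the math, Amdahl’s law says that if a fraction p of the work can be parallelized, the best possible speed-up on n CPUs is 1 / ((1 - p) + p/n). The short sketch below, with an assumed p of 0.95 chosen purely for illustration, shows how quickly the curve flattens:

```python
# Amdahl's law: with parallelizable fraction p, the best possible speed-up
# on n CPUs is 1 / ((1 - p) + p / n). The serial fraction caps the curve.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # assumed parallel fraction, chosen only for illustration
for n in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"{n:4d} CPUs -> {amdahl_speedup(p, n):5.2f}x "
          f"(hard limit as n grows: {1 / (1 - p):.0f}x)")
```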

Extended Scaling

Another common, related mistake is to conclude that for a larger design the second solution, the one that scales further, will be better, because the first one stopped scaling too early to deliver good returns. This is akin to saying that if Bob and Jillian left the show, only to both return the following year weighing 400 lbs each, Bob would now be better positioned to win. What this thinking fails to take into account is that the situation from the earlier season has completely changed and cannot be used to set an expectation.

The same is true for PV. One cannot assume that the scaling, and the point where that scaling stops, are consistent for a particular physical verification solution across all designs. I can’t speak for every tool, but for Calibre, that assumption is clearly not true. Calibre’s scaling depends on many things: the size of the design, the hierarchy of the design, the number of rules being run, the complexity of those rules, the interactions between them, the number and types of hardware resources used, and so on. In general, we can say that two designs on the same process with similar design styles, but with different design sizes, will not see runtimes increase in proportion to the design size. The increase should be considerably less, given Calibre’s handling of hierarchy and repetition.

For the producers of “The Biggest Loser,” this may all seem like a lot to digest! The point to remember is that scaling is just a means to an end. What are the real goals? For fitness, it is getting to the ideal target weight the fastest, not stretching out the time it takes to get there! For physical verification, the goals are two-fold. First and foremost is getting the fastest possible runtimes. The second, less obvious one, is getting there at the lowest cost, which generally means with the fewest CPUs and the least memory and disk resources.
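A quick, hypothetical cost comparison makes that second goal concrete. The numbers below are invented, but they show how a run that scales further, and even finishes a bit sooner, can still cost more in hardware terms:

```python
# Invented numbers: wall-clock time and hardware cost are separate goals.
runs = [
    (16, 2.0),   # (CPUs, wall-clock hours): stops scaling early, modest hardware
    (64, 1.8),   # scales further and finishes slightly sooner, but at what cost?
]
for cpus, hours in runs:
    print(f"{cpus:3d} CPUs: {hours:.1f} h wall clock, "
          f"{cpus * hours:6.1f} CPU-hours consumed")
```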

It’s for this reason that Calibre is not optimized just for scaling. It would be relatively easy to modify the Calibre architecture so that it scaled further. That would be akin to binge eating before going on the Biggest Loser show. Doing so, however, would likely increase the total runtime and memory usage. Instead, the focus for Calibre is first on reducing total CPU time, through a combination of continued engine improvements and optimizations and the introduction of new operations that simplify and speed up new process checking requirements. Below is an example of performance improvements due to engine optimization in Calibre over the past year.

Typical Calibre performance improvements release to release

With this approach, the total CPU time required to run a job is significantly reduced. That means that when scaling to multiple CPUs, there is less computation to be shared, so less hardware is required to reach the ideal performance goals. Or, to go back to our Biggest Loser analogy, it means that Calibre is doing what it needs to do to stay fit and trim from the beginning, instead of having to spend weeks on the treadmill with people screaming for improvement!

TTFN,

Ferg

DRC, Performance, Calibre, Runtime, Scaling, Physical Verification, PV


About John Ferguson

John Ferguson has spent the past 13 years focused on physical verification. As a lead technical marketing engineer, his time is dedicated to understanding new requirements and ensuring that the continued development of Calibre is properly focused and prioritized. This includes understanding the requirements of the latest process nodes to ensure that all checks can be accurately coded and implemented in Calibre, as well as ensuring that the use models and debugging information are presented in a manner that lets users work as efficiently as possible.
