Running a Sensitivity Analysis is one of my favorite mechatronic system simulations. I like SystemVision’s ability to analyze my design and tell me which parameters have the most effect on a performance metric of my choosing. It makes designing systems a bit easier when I know which parameters to tweak in order to influence a particular measurement.
Before joining the SystemVision team (in another life, with another simulator), I used a standard sensitivity analysis with the traditional flow: change a parameter, run a simulation, measure the results, then compare the measurement with the nominal results to determine the sensitivity. Straightforward and simple. But SystemVision lets me dig a little deeper into my system’s sensitivities with two additional sensitivity analyses, Tolerance-Based and Statistical, alongside its standard Relative Sensitivity analysis.
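Just to make the mechanics concrete, here’s a minimal Python sketch of that traditional flow. The run_simulation hook, the toy RC low-pass “simulator”, and the component values are hypothetical stand-ins for whatever a real simulator provides:

```python
import math

def relative_sensitivity(run_simulation, params, name, delta_pct=1.0):
    """Percent change in the measurement per 1% change in parameter `name`."""
    nominal = run_simulation(params)                  # nominal run

    perturbed = dict(params)                          # change one parameter...
    perturbed[name] = params[name] * (1 + delta_pct / 100.0)
    measured = run_simulation(perturbed)              # ...and re-run

    pct_change = 100.0 * (measured - nominal) / nominal
    return pct_change / delta_pct                     # normalize to per-1%

# Toy "simulator": the cutoff frequency of an RC low-pass filter.
def cutoff_hz(p):
    return 1.0 / (2.0 * math.pi * p["R"] * p["C"])

sens_R = relative_sensitivity(cutoff_hz, {"R": 1e3, "C": 1e-6}, "R")
print(f"sensitivity to R: {sens_R:.2f}")  # ~ -0.99: 1% more R, ~1% lower cutoff
```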
SystemVision’s Relative Sensitivity analysis is the standard simulation outlined above. The results tell me the percent change I can expect in a measured metric for a 1% change in a parameter’s value. So if the waveform analyzer returns a relative sensitivity measurement of 0.45 for a performance metric, the default interpretation is “a 0.45% change in the performance metric is expected for a 1% change in the parameter value”. More generally, a parameter change of N% will lead to a change of sens_result*N% in the measured value, where “sens_result” is the raw number returned by the sensitivity measurement.
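In other words, the raw number is a simple multiplier. A quick back-of-the-envelope check in Python, using the 0.45 example from above:

```python
# First-order interpretation: an N% parameter change predicts roughly a
# sens_result * N % change in the measured metric.
sens_result = 0.45
for n_pct in (1, 3, 5):
    print(f"{n_pct}% parameter change -> ~{sens_result * n_pct:.2f}% metric change")
```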
The Tolerance-Based Sensitivity analysis adds another layer of detail. If tolerance information is attached to a parameter, and the parameter is included in the sensitivity analysis, the tolerance information is factored into the sensitivity measurements. By combining the Relative Sensitivity and Tolerance-Based Sensitivity analysis results, I end up with some interesting insight into my system. It turns out that a parameter with a high relative sensitivity but a very tight tolerance may have very little effect on my system’s performance metric: even though the relative sensitivity is high, there isn’t much room for the parameter to vary within its tight tolerance. On the other hand, if a parameter has a low relative sensitivity but its tolerance is very loose, the parameter is more likely to have a measurable effect on my system: even though the relative sensitivity is low, the parameter has lots of room to vary within its loose tolerance.
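One rough way to see this trade-off (my own back-of-the-envelope approximation, not SystemVision’s actual computation) is to rank parameters by |relative sensitivity| * tolerance, a crude bound on the worst-case metric swing. The parts and numbers below are invented for illustration:

```python
parts = {
    #      (relative sensitivity, tolerance %)
    "R1": (0.90, 0.1),    # very sensitive, but a tight 0.1% part
    "C2": (0.10, 20.0),   # barely sensitive, but a loose 20% part
}

# Rank by approximate worst-case swing: |sensitivity| * tolerance.
ranked = sorted(parts.items(), key=lambda kv: abs(kv[1][0]) * kv[1][1], reverse=True)
for name, (sens, tol_pct) in ranked:
    print(f"{name}: up to ~{abs(sens) * tol_pct:.2f}% metric swing over its tolerance")
```

Despite its low sensitivity, the loose C2 dominates (~2.00%) while the tight R1 barely registers (~0.09%).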
Finally, the Statistical Sensitivity analysis might be the most interesting of all. It tells me which parameters contribute most to a performance measurement. In other words, it shows me what percentage of the performance metric’s variability is due to a specific parameter’s variation. When all of the percentages are added up, the total should be approximately 100%; the more runs I specify for the statistical analysis, the closer the total gets to 100%. Results are interpreted a bit differently than in either the Relative or Tolerance-Based sensitivity analyses. To reduce the variability of my design, I need to tighten tolerances on the parameters that have the most effect on my chosen performance metric. To reduce the cost of my design, I can loosen tolerances on the parameters that have little or no effect on that metric, since less precise components cost less money.
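For the curious, here’s a rough Monte Carlo sketch of the idea: vary one parameter at a time within its tolerance, attribute the resulting metric variance to that parameter, and express it as a share of the variance when everything varies at once. This is only my approximation of the concept, not SystemVision’s algorithm, and the RC model and tolerances are invented:

```python
import math, random, statistics

nominal = {"R": 1e3, "C": 1e-6}
tol = {"R": 0.01, "C": 0.10}        # 1% resistor, 10% capacitor
N = 20_000                          # more runs -> shares summing closer to 100%

def metric(p):                      # toy performance metric: RC cutoff frequency
    return 1.0 / (2.0 * math.pi * p["R"] * p["C"])

def sample(vary):
    """One run with only the parameters in `vary` perturbed within tolerance."""
    p = {}
    for k, v in nominal.items():
        if k in vary:
            v *= 1.0 + random.uniform(-tol[k], tol[k])
        p[k] = v
    return metric(p)

def variance(vary):
    return statistics.pvariance([sample(vary) for _ in range(N)])

total = variance({"R", "C"})        # everything varying at once
for name in nominal:
    share = 100.0 * variance({name}) / total
    print(f"{name}: ~{share:.0f}% of metric variability")
```

With these numbers the loose 10% capacitor accounts for nearly all of the variability, which is exactly the kind of parameter whose tolerance I’d tighten first.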
So what does all of this mean? First, and perhaps most important, if I only use the standard sensitivity analysis, I will miss important insights into my system: insights that will not only improve my system’s performance, but might also improve reliability and reduce overall manufacturing and maintenance costs. Second, no performance metric should be considered in isolation. Optimizing my system for a specific performance metric may have a negative effect on other metrics, so I need to make sure I understand the performance metric priorities for my system. Finally, and probably most obvious, a sensitivity analysis should not be used in isolation. It’s just one option in SystemVision’s toolbox for making sure my system works as it should.