Rolling out an enterprise system is, for many Capital electrical platform design software customers, a multi-stage process. One set of tasks in managing the distribution of value to individuals and the corporation is integration with the corporate IT infrastructure. This must be done mindful of the need to provide acceptable response times to users, conformance with security policy, appropriate data backup strategies, confidence in uptime and availability for production users, and provision of a trials/training environment.
Make management of the Capital IT infrastructure look effortless.
Actually that is just the highlights list of user expectations. There are many, many more tasks in IT management to deliver a solid Capital application to users at multiple workstations - probably on multiple physical sites and probably in multiple countries on different continents. Systems and wiring design and manufacturing is a global business, so the software that supports that business is also by necessity architected to be used worldwide in diverse computer infrastructures. The technologies of a military aerospace customer running Capital can be radically different from Mentor's Educational Services' virtual machine drop-in-and-go systems and data configurations, for example. But in all cases a common need, and usually the first one articulated, amounts to "the software must be quick enough" - no lags, no delays, no hiccups, stutters, freezes or lengthy waits.
That brings me to reveal what the "P" word is: Performance. My inclination has always been to taboo the word in meetings, and to have clients and colleagues find another way of describing their aspirations and observations. Because with "performance" it is all too easy to chat about acceptable/unacceptable without the discipline of measurement. And then people become entrenched or adamant in their opinions rather than working on the basis of fact. Complaints are good, because knowing what you are unhappy about is the first step towards being happy. And the path to being happy about the response time of a software application is knowing the timings: first the baseline or control timing, second the same operation with one piece of the environment changed, e.g. a workstation in Argentina versus the same specification of equipment in Poland. You may be surprised at how many debates about the speed of software begin without reference to any standard measurements.
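The baseline-then-comparison discipline described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration - the operation names, tolerance, and timing figures are my own assumptions, not Capital measurements:

```python
import time

def time_operation(operation, *args):
    """Wall-clock timing for a single benchmark operation, in seconds."""
    start = time.perf_counter()
    operation(*args)
    return time.perf_counter() - start

def compare_to_baseline(label, measured_s, baseline_s, tolerance=0.20):
    """Flag any timing more than `tolerance` (20% by default) slower than baseline."""
    ratio = measured_s / baseline_s
    status = "OK" if ratio <= 1.0 + tolerance else "INVESTIGATE"
    return f"{label}: {measured_s:.1f}s vs baseline {baseline_s:.1f}s ({ratio:.2f}x) {status}"

# A stand-in workload, just to show the timing call in use
elapsed = time_operation(sum, range(100_000))

# The same operation measured at two sites (figures are illustrative only)
print(compare_to_baseline("Open design - Buenos Aires", 12.4, 11.8))
print(compare_to_baseline("Open design - Warsaw", 25.1, 11.8))
```

The point is not the tooling but the habit: every "it's slow" claim gets converted into a labelled number against a recorded control.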
[caption id="attachment_2187" align="aligncenter" width="520" caption="Timing tests with representative data are good ways to understand how your system is behaving. Harness XC processing is a common benchmark choice."]
Of course our brilliant programmers and development experts designed a superfast code base! I can promise you it is very rare that any downturn in processing speed can be proved to be caused by inefficient logic in the programming. Very rare over the last six years, and it should be just as rare over the next six, because Capital has a stable code base. Think about it: how likely is it that a programmer would write a clever routine to wake up after six months on a given version, yawn and stretch, take a look at the user base, see who is logged on with a Polish name, and make their database enquiries run twice as slowly as those of their Argentinian coworkers doing the same work?
Fortunately, over the life of Capital a body of knowledge has evolved amongst the Mentor Graphics staff who support customers' deployments, covering not just how to handle issues like these, but how to help customers devise a set of timing tests representative of the loads expected and the usage patterns specific to their needs. What is going to be important to you, rather than to another company, can be advised by the Mentor Consulting experts (an intensive, paid-for, value-add engagement), the Application Engineer assigned to you (best-practice advice tailored to your particular circumstances), or the Customer Support Engineers (responding to service requests with highly specific answers). These people can help you avoid puzzlement and perplexity when someone comes to you with the general malaise of "it's too slow", help you ground the problem definition in the real world, and give you practical steps for removing the measurable issues.
So what sort of operations are representative for most customers? You do not need a big list when identifying a set of normal benchmark timings against which future measurements can be compared (new hardware, new software versions, amended WAN configurations, etc.). The right P-word here is "Plan": have a plan to measure these regularly. Tracking them via a simple spreadsheet is commonplace; be sure to publicize the results widely so stakeholders know there is someone watching over this aspect of getting return for the business from the software. It certainly does not have to be a lengthy list, but the more coverage, the better the insurance!
[caption id="attachment_2191" align="aligncenter" width="300" caption="Simple metrics like the time to return an answer from a part selection dialog in Capital Design like this one are all you need to put the debate about response time in the Capital application on a scientific footing"]
You cannot manage what you can't measure.
- Data crunching through Capital Manager and back to the database repository and its manager: significant processing options performed frequently, e.g. Harness Processing in Harness XC on a small, a medium, and a large example of your product data, or a representative approximation thereof.
- Pulling data across the network: open, close, and save tasks on your designs (e.g. schematic topology designs, formboard drawings). These are the most common things your users will do, so any change in these timings will be noticed first by the user community. Include logging in to the system in these tests. This exercises the parts of the system where data is buffered for local client work.
- Common editing and design operations: interrogation of library items (devices and other part definitions, and symbols) once you have opened the designs. This tells you what the normal user experience is going to be as they work day-to-day.
- Passing data from the application to other parts of the IT environment: perform some popular print or reporting functions.
- Capital data exchange: import and export of project information. Although this is a rare occurrence, these operations are probably the highest "stress" you can present to the system. Keep a data set constructed just for this purpose if you can.
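A tracking spreadsheet for a list like the one above can be as simple as a dated CSV file that grows by one row per benchmark run. The sketch below is hypothetical - the benchmark names, file name, and timing values are placeholders I have invented to illustrate the plan, not Capital output:

```python
import csv
import datetime
import pathlib

# Placeholder benchmark names, loosely modelled on the list above
BENCHMARKS = [
    "Harness XC processing (medium design)",
    "Open/save schematic design",
    "Part selection query",
    "Report generation",
    "Project export",
]

def record_run(csv_path, timings):
    """Append one dated row of benchmark timings (in seconds) to the tracking file."""
    path = pathlib.Path(csv_path)
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date"] + BENCHMARKS)  # header on first run
        writer.writerow([datetime.date.today().isoformat()] + list(timings))

# Example run with made-up timings; replace with real stopwatch measurements
record_run("capital_benchmarks.csv", [118.5, 9.7, 1.4, 22.3, 412.0])
```

One row per scheduled run is enough to plot trends and to answer "has it actually got slower?" with data rather than opinion.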
Whether the list of tests includes multi-user weighted tests, or accumulates results from different sites, is often up for grabs. It is your decision whether you want to go deeper and look for comprehensive results, or settle for an acceptable minimum.
My view is that the user community will thank you for depth and attention to detail. And you will thank yourself for anchoring any performance discussion in reality by providing factual, observable data.