Recently I was given the lead of a performance optimization project for a software product. That isn't anything extraordinary for a software architect, because a software architect has to know what is critical for a software system in a specific environment.
Some of my co-workers may smile a bit now: I always say that you shouldn't design your software around performance considerations by default. The code should be simple, understandable and correct. My thesis was and is "good code is fast code". With this new job I have the chance to prove this thesis.
So I started to plan my work and set out to improve the performance of the software in question. This blog post is an interim résumé of what I have learned so far.
Feedback from the users of the software
My first task was to visit the users and talk with them about where exactly they think the performance problems are. As expected, the feedback was at a high level of abstraction: they told me where in their processes the software seems to be slow. The feedback was, as expected, subjective and also colored by the customer's line of business. Not all customers use the software the same way or use the same set of features. But after analyzing log files and other resources I could, in many cases, confirm the customer feedback.
Feedback from the IT departments
I received feedback not only from the users but also from system administrators. And this feedback was sometimes a little scary: 100% CPU usage over several minutes, heavy use of RAM. The only thing a system administrator can do in such a situation is to ask the users whether he may reset IIS or reboot the server. At this point every software developer understands why performance is important. It isn't only about money; it is also about customer satisfaction.
When you have to improve the performance of your software, you need facts. Those facts can be log entries, code smells or findings from dynamic code analysis tools. The software I have to improve did have a log, but its components logged very little performance-specific information, so I was more or less blind. That is why I began to evaluate profiling software. In the end, two profilers remained on the shortlist: JetBrains dotTrace and Red Gate ANTS Profiler. I chose the Red Gate profiler because I found its UI and the presented information a bit better. So, the tools I currently use are:
- Several SQL statements to get information out of the database
- Execution plan in the SQL Management Studio
- Handmade log analyzer
- ANTS Performance Profiler
- ANTS Memory Profiler
- Trial version of the .Net Memory Profiler
- Microsoft Sysinternals tools
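The handmade log analyzer from the list above is essentially a script that aggregates operation timings out of the log. Here is a minimal sketch of the idea in Python; the log format, the operation names and the sample lines are invented for illustration and do not reflect the real (.NET) software:

```python
import re
from collections import defaultdict

# Hypothetical log format: "<date> <time> <level> <operation> took <n> ms"
LINE = re.compile(r"^\S+ \S+ \w+ (?P<op>\S+) took (?P<ms>\d+) ms$")

def slowest_operations(lines, top=3):
    """Aggregate per-operation durations and return the worst offenders."""
    totals = defaultdict(lambda: [0, 0])  # operation -> [call count, total ms]
    for line in lines:
        m = LINE.match(line)
        if m:
            stats = totals[m.group("op")]
            stats[0] += 1
            stats[1] += int(m.group("ms"))
    # Sort by total time spent in the operation, descending
    return sorted(
        ((op, count, total) for op, (count, total) in totals.items()),
        key=lambda t: t[2],
        reverse=True,
    )[:top]

sample = [
    "2015-03-02 10:15:30.123 INFO OrderService.Save took 4500 ms",
    "2015-03-02 10:15:35.001 INFO OrderService.Save took 3900 ms",
    "2015-03-02 10:15:36.200 INFO ReportService.Render took 250 ms",
]
print(slowest_operations(sample))
```

Even such a small script turns vague feedback ("saving feels slow") into a ranked list of candidates worth profiling.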
I used the Sysinternals tools and the trial version of the .Net Memory Profiler to understand why the software consumes so much memory. But I only discovered small things, which you could also find through a code review or a static code analysis tool.
After a week I asked myself whether I was doing the job right (efficiency) and whether I was doing the right things (effectiveness). So I looked for techniques for finding performance issues in software (books, blogs, etc.). Unfortunately, I haven't found any interesting sources so far.
One thing I learned is that you should never optimize code without facts. Too often the profiler produced surprising results; without those results (facts) I would have optimized the wrong part of the code. But sometimes the code is so obviously bad that you don't need a profiler to prove it could be faster. So far, my thesis "good code is fast code" seems to hold.
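The "facts first" rule also scales down to small decisions. Before rewriting a piece of code because it "must be faster the other way", you can simply time both variants on realistic input. A tiny illustrative sketch (the two string-building variants are just a stand-in example, not code from the project):

```python
import timeit

# Two ways to build a large string. Intuition alone won't tell you
# how big the difference is on your workload -- a measurement will.
def concat_in_loop(n):
    s = ""
    for i in range(n):
        s += str(i)
    return s

def join_generator(n):
    return "".join(str(i) for i in range(n))

n = 10_000
t_concat = timeit.timeit(lambda: concat_in_loop(n), number=50)
t_join = timeit.timeit(lambda: join_generator(n), number=50)
print(f"concat: {t_concat:.3f}s, join: {t_join:.3f}s")
```

Whatever the numbers turn out to be on your machine, you now argue with facts instead of gut feeling, which is exactly what the profiler gives you at a larger scale.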
During the discussions with the users and the customers I realized that no non-functional requirements had been specified. That is not a good thing, neither for the software company nor for the customer. In the end, the only thing that counts is customer satisfaction.
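One way to avoid this gap is to state a performance requirement as a measurable threshold and check it automatically. A hypothetical sketch of the idea (the operation, the 2-second budget and the helper name are all invented for illustration):

```python
import time

def check_response_time(operation, budget_seconds, runs=5):
    """Run an operation several times; report the worst duration and
    whether it stayed within the agreed budget."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        worst = max(worst, time.perf_counter() - start)
    return worst, worst <= budget_seconds

# Hypothetical requirement: "saving an order takes at most 2 seconds"
def save_order():
    time.sleep(0.01)  # stand-in for the real call

worst, ok = check_response_time(save_order, budget_seconds=2.0)
print(f"worst run: {worst:.3f}s, within budget: {ok}")
```

Once a requirement is written down like this, "the software is slow" becomes a verifiable statement instead of a matter of opinion.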
The next steps are to become more effective at finding performance issues and to define some preventive measures to increase code quality. That includes some teaching about YAGNI, DRY and KISS.