Archive for the ‘Software architecture’ Category

NDC 2014

June 16th, 2014

I attended this year's NDC (Norwegian Developers Conference) in Oslo. It was a very interesting conference, and my short summary is that it felt like a consolidation. JavaScript – now, as some people say, in its fourth generation (simple scripts, AJAX, MVC frameworks, SPAs) – is finally accepted as a language on a par with C# or Java. In the agile world, too, there is no hype anymore about Scrum or Kanban; the discussions were about how and when to use them.
One major topic that I saw in several sessions was maintainable code. Code has to be readable, clear and simple. This is something I have been trying to teach my team members for a few years now (yes, I was once a geek who tried more or less every new fancy framework).
For me, the highlights were the sessions "Seven ineffective coding habits of many programmers" by Kevlin Henney, "Code that fits your brain" by Adam Tornhill and "Beautiful builds" by Roy Osherove.

Here are the sessions I attended:

4.6.2014:

5.6.2014:

6.6.2014:

Besides Scott Hanselman's fun presentation about JavaScript, there was one more funny presentation at the end: "History of programming, Part 1". You can watch all recorded sessions on Vimeo.
After the conference I stayed in Oslo for the weekend. And yes, it is a really nice place to be, and the people are really friendly.



Are stale data evil?

February 27th, 2012

When you are a software engineer who builds software for enterprises like banks or insurance companies, it is normal to have huge databases (several gigabytes). Such systems have an operative application where users do the daily business of the company, and there are more informative (or strategic) parts of the system which management uses. At first glance there is no problem with these two views, but as you probably know, such companies usually have a data warehouse solution for the second, management-oriented part.

But what if your customer doesn't want a data warehouse solution? Or cannot afford one? Then you will probably add reports and search views to your application. In this blog post I describe some aspects to consider if you have to choose this approach.

Stale data as a requirement

Unfortunately the question "how old may the data on this report/search be?" is rarely asked. When the answer is "the report/search has to show the right data", you have to ask the customer again. The problem is that the data may already be stale right after the query, because somebody has changed it in the meantime.

In my experience there are only a few reports that need data that is as fresh as possible. But it is essential that you ask this question.

Isolate only as far as needed

Most searches or reports hit essential tables in your relational database, so it is important that those searches or reports do not affect your daily business. You may ask yourself how those queries could have any impact at all.

If you use Microsoft SQL Server, the default isolation level is "Read committed". If a query is not written carefully, it can lock a whole table (for example a shared lock on the whole table, which blocks any inserts or updates). If that happens, your users will notice it as waits while they try to save their data.

When you create a search or a report you always have to ask yourself which isolation level you will use. If you use dirty reads (isolation level "Read uncommitted"), you will probably never take any locks, but you have to deal with data that may be wrong: data can be rolled back, and the same query would then no longer return the rolled-back rows.
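
As a rough illustration, here is a minimal sketch of such a dirty-read report query with plain ADO.NET (the connection string, the Orders table and its columns are hypothetical examples, not from a real system):

    using System.Data;
    using System.Data.SqlClient;

    // Minimal sketch: run a report query with dirty reads so it takes no
    // shared locks that could block the operative application.
    // Connection string, table and columns are hypothetical examples.
    public static class ReportQueries
    {
        public static DataTable LoadOrderReport(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Read uncommitted: no shared locks, but the result may contain
                // data that is rolled back later (stale or even wrong data).
                using (var transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
                using (var command = connection.CreateCommand())
                {
                    command.Transaction = transaction;
                    command.CommandText = "SELECT OrderId, CustomerName, Total FROM Orders";

                    var result = new DataTable();
                    using (var reader = command.ExecuteReader())
                    {
                        result.Load(reader);
                    }

                    transaction.Commit();
                    return result;
                }
            }
        }
    }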

Conclusion

Stale or even wrong data on a search or an informational report is not necessarily a defect. Sometimes it is just good enough to fulfil the requirements and make the customer happy. And that is what it is all about.


Hunting performance issues

January 17th, 2011

Recently I took over the lead of a performance optimization project for a software product. That is nothing extraordinary for a software architect, because as a software architect you have to know what is critical for a software system in a specific environment.
Some of my co-workers may smile a bit now: I always say that you should not design your software for performance by default. The code should be simple, understandable and correct. My thesis was and is "good code is fast code". With this new job I have the chance to prove this thesis.
So I started to plan my work and tried to achieve a performance improvement of the software in question. This blog post is an interim résumé of what I have learned so far.

Feedback from the users of the software
My first task was to visit the users and talk with them about where exactly they think the performance problems are. As expected, the feedback was at a high level of abstraction: they told me where in their processes the software seems to be slow. The feedback was, as expected, subjective and also colored by the business of the customer; not all customers use the software the same way or use the same set of features. But after an analysis of log files and other resources I could, in many cases, confirm the customer feedback.

Feedback from the IT departments
I received feedback not only from the users but also from system administrators. And this feedback was sometimes a little bit scary: 100% CPU usage over several minutes, heavy use of RAM. The only thing a system administrator can do in such a situation is to ask the users whether he may reset IIS or reboot the server. At this point every software developer understands why performance is important. It is not only about money, it is also about customer satisfaction.

Tools
When you have to improve your software's performance, you need facts. Those facts can be log entries, code smells or findings from dynamic code analysis tools. The software I have to improve had a log, but the components did not log much performance-specific information, so I was more or less blind. This was the reason I began to evaluate profiling software. In the end there were two candidates: JetBrains dotTrace and Red Gate ANTS Profiler. I chose the Red Gate profiler because I found the UI and the presented information a bit better. So, the tools I currently use are:

I used the Sysinternals tools and the trial version of the .NET Memory Profiler to understand why the software consumes that much memory. But I discovered only small things, which you could also find in a code review or with a static code analysis tool.
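
To get more facts out of the application itself, performance-specific log entries already help a lot. Here is a minimal sketch of such timing instrumentation with a Stopwatch; the operation name and the Console output are hypothetical placeholders for the real logging infrastructure:

    using System;
    using System.Diagnostics;

    // Minimal sketch: measure how long an operation takes and write it to the
    // log, so the log contains performance facts and not only events.
    // Console.WriteLine stands in for the real logging framework.
    public static class Timed
    {
        public static T Measure<T>(string operationName, Func<T> operation)
        {
            var stopwatch = Stopwatch.StartNew();
            try
            {
                return operation();
            }
            finally
            {
                stopwatch.Stop();
                Console.WriteLine("{0} took {1} ms", operationName, stopwatch.ElapsedMilliseconds);
            }
        }
    }

    // Usage (hypothetical repository call):
    // var customers = Timed.Measure("LoadCustomers", () => repository.LoadCustomers());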

Methods
After a week I asked myself whether I was doing the job right (efficiency) and whether I was doing the right things (effectiveness). So I looked for techniques for finding performance issues in software (books, blogs, etc.). Unfortunately I have not found any interesting sources so far.

Conclusion
One thing I learned is that you should never optimize code without facts. Too often the profiler gave me surprising results; without those results (facts) I would have optimized the wrong part of the code. But sometimes the code is so obviously bad that you do not need a profiler to prove that it could be faster. So far, my thesis "good code is fast code" seems to hold.
During the discussions with the users and the customers I realized that no non-functional requirements had been specified. That is not a good thing, neither for the software company nor for the customer. In the end, the only thing that counts is customer satisfaction.
The next steps are to become more effective at finding performance issues and to define some preventive measures to increase code quality. That includes some teaching about YAGNI, DRY and KISS.


NHibernate day in Bologna

October 25th, 2010

During a whole day, several speakers talked about NHibernate and related topics. The conference was in Bologna and was very well prepared and organized.

You can watch the slides and the videos of the sessions here. I joined the following sessions:

Keynote

Simone Chiaretta opened the conference and explained in his keynote how the day was organized. But he couldn't resist making a little joke about us Swiss, who aren't part of the European Union, so basically we shouldn't have been there ;-).

The next graphic shows a summary of the attendees.

(Graphic: summary of the attendees)

There were actually around 150 attendees and two rooms with parallel sessions; Ayende held most of his sessions in the main room.

Links: Slides, Video

What’s new in NH 3.0

Oren Eini, aka Ayende, showed in this session what is new in NHibernate 3.0. For me there wasn't much news, partly because I had held two presentations about NHibernate at the .NET User Group Bern in August.

Highlights in this session:

  • New book: NHibernate 3.0 Cookbook by Jason Dentler
  • NHibernate is a mature technology: it is 7 years old
  • New LINQ provider in NHibernate (Ayende said he suffered writing the first implementation; it was no fun at all)
  • NHibernate for medium trust environments
  • QueryOver
  • Extensions in the Criteria API
  • Strongly typed configuration
  • Dependency between NHibernate and log4net removed
  • Over 170 issues fixed

After the presentation I asked Ayende whether lazy properties had not made it into NHibernate 3.0. He had simply forgotten to mention them, so he did it in his next session.

Links: Slides, Video

Loosely Coupled Complexity – Unleash the power of your domain model, using Event Sourcing and CQRS

This session by Alberto Brandolini was very interesting and entertaining. The topic was CQRS, currently a hot topic, so I took the chance to learn a bit more about it. There was a lot of content, but the presentation ran a bit out of time.

During his presentation Alberto asked a lot of interesting questions and showed with very nice slides (only the beamer sometimes had problems with the colors) how to transform a classical layered architecture into a CQRS architecture. The following two graphics show the transformation: on the left you see the traditional architecture and on the right the CQRS architecture.

(Diagrams: the traditional layered architecture and the CQRS architecture)

Highlights in this session:

  • Transform current architecture to a CQRS architecture
  • The Anemic Domain Model IS an anti-pattern
  • List of how to shoot yourself in the foot
  • Aggregates
  • Traditional DDD View of an architecture
  • Introduction into CQRS
  • Event Sourcing
  • Advice: Start small

Here is my favorite slide of this session:

(Slide image)

As usual, CQRS is not a silver bullet. You have to ask yourself whether it makes sense to run all queries through an object-relational mapper instead of going directly to the database. That does not mean going back to ASP or PHP, but simply using the tools for the purpose they were made for.
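
To make the command/query split a bit more concrete, here is my own hedged C# sketch of the idea (not Alberto's code; all type names are hypothetical): commands go through the domain model, while the query side reads a flat, denormalized model and may bypass the ORM completely.

    using System;
    using System.Collections.Generic;

    // Hedged sketch of the CQRS split; all names are hypothetical examples.

    // Command side: a command is handled through the domain model.
    public class PlaceOrderCommand
    {
        public Guid CustomerId { get; set; }
        public int Quantity { get; set; }
    }

    public class Order
    {
        public Guid Id { get; private set; }
        public Guid CustomerId { get; private set; }
        public int Quantity { get; private set; }

        public static Order Place(Guid customerId, int quantity)
        {
            if (quantity <= 0)
                throw new ArgumentException("Quantity must be positive.");
            return new Order { Id = Guid.NewGuid(), CustomerId = customerId, Quantity = quantity };
        }
    }

    public interface IOrderRepository
    {
        void Save(Order order);
    }

    public class PlaceOrderHandler
    {
        private readonly IOrderRepository repository;

        public PlaceOrderHandler(IOrderRepository repository)
        {
            this.repository = repository;
        }

        public void Handle(PlaceOrderCommand command)
        {
            var order = Order.Place(command.CustomerId, command.Quantity);
            repository.Save(order);
        }
    }

    // Query side: a flat read model without domain logic, typically filled
    // directly from the database (plain SQL or a view), not via the ORM.
    public class OrderListItem
    {
        public Guid OrderId { get; set; }
        public string CustomerName { get; set; }
        public int Quantity { get; set; }
    }

    public interface IOrderListQuery
    {
        IList<OrderListItem> FindOpenOrders();
    }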

Links: Slides, Video

NHibernate Hidden Gems

This session was also held by Ayende. He showed some NHibernate concepts by walking through code.

One concept was quite interesting: the Feel-the-Pain interceptor. During the development of a new application, the database server is often on the same machine as the application server or the web server. Because of that, the developer does not feel the pain of a huge number of queries to the database or of a query that returns a lot of data. In a production environment, however, the database server is normally not installed on the same server as the web or application server. To make the developer aware that the communication with the database server needs to be optimized, Ayende implemented an interceptor that simply puts the current thread to sleep for one or two seconds. Now the developer feels the pain when he tests the application.
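
The idea can be sketched roughly like this (my own hedged reconstruction, not Ayende's actual implementation): an NHibernate interceptor that delays every statement, so a chatty data access layer already hurts on the developer's machine.

    using System.Threading;
    using NHibernate;
    using NHibernate.SqlCommand;

    // Rough sketch of the Feel-the-Pain idea (not Ayende's actual code):
    // simulate the latency of a remote database server for every statement.
    public class FeelThePainInterceptor : EmptyInterceptor
    {
        public override SqlString OnPrepareStatement(SqlString sql)
        {
            Thread.Sleep(1000); // one second of pain per SQL statement
            return sql;
        }
    }

    // Registration, for example per session (sessionFactory is assumed to exist):
    // using (var session = sessionFactory.OpenSession(new FeelThePainInterceptor()))
    // {
    //     // every query in this session now feels like a slow network
    // }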

Highlights in this session:

  • Feel-the-Pain-Interceptor
  • Optimize startup times
  • Use of User Types
  • Use of Listeners
  • Use of Future, especially in the Criteria API (a short sketch follows this list)
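
For the Future item, here is a minimal hedged sketch (Customer and Order are hypothetical, already mapped entities): both criteria are only scheduled and go to the database together in a single round trip when the first result is enumerated.

    using System.Collections.Generic;
    using NHibernate;
    using NHibernate.Criterion;

    // Hypothetical, already mapped entities for the sketch.
    public class Customer { public virtual int Id { get; set; } public virtual string Name { get; set; } }
    public class Order { public virtual int Id { get; set; } public virtual string Status { get; set; } }

    public static class FutureQueries
    {
        public static void LoadDashboard(ISession session)
        {
            // Neither criteria is executed here; both are only scheduled.
            IEnumerable<Customer> customers = session.CreateCriteria<Customer>()
                .Future<Customer>();

            IEnumerable<Order> openOrders = session.CreateCriteria<Order>()
                .Add(Restrictions.Eq("Status", "Open"))
                .Future<Order>();

            // The first enumeration sends both queries in one round trip.
            foreach (var customer in customers)
            {
                // ... render customer ...
            }
        }
    }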

Links: Slides, Video

NHibernate Worst practices

This was a very interesting session with Ayende in which he discussed worst practices with NHibernate.

Highlights in this session:

  • Hiding NHibernate (by building additional layers on top of NHibernate)
  • Select N+1 problems (a short sketch follows this list)
  • If the database can't do it, NHibernate can't do it either
  • Micro management of the NHibernate session
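
To make the select N+1 item a bit more concrete, here is my own hedged sketch (hypothetical Blog and Post entities, using the NH 3.0 LINQ provider): the naive loop fires one extra query per blog, while eager fetching loads everything in a single query.

    using System.Collections.Generic;
    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;

    // Hypothetical, already mapped entities for the sketch.
    public class Blog
    {
        public virtual int Id { get; set; }
        public virtual IList<Post> Posts { get; set; }
    }

    public class Post
    {
        public virtual int Id { get; set; }
        public virtual string Title { get; set; }
    }

    public static class SelectNPlusOne
    {
        public static void Worst(ISession session)
        {
            // 1 query for the blogs, plus 1 lazy-load query per blog.
            foreach (var blog in session.Query<Blog>().ToList())
            {
                var postCount = blog.Posts.Count;
            }
        }

        public static void Better(ISession session)
        {
            // One single query that fetches the blogs together with their posts.
            var blogs = session.Query<Blog>()
                .FetchMany(b => b.Posts)
                .ToList();
        }
    }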

After Ayende finished his presentation, he asked the attendees what bad things they had done with NHibernate or what kind of worst practices they had observed.

Links: Slides

NHibernate Q&A round table

In this session Fabio Maulo joined us via Skype from Argentina (unfortunately audio only). It was funny because, as Ayende mentioned after the session, it was the first time that he and Fabio had talked with each other about NHibernate.


During this session the attendees could ask questions, which Fabio and Ayende tried to answer. Fabio speaks much better Italian or Spanish, so most of the English questions were answered by Ayende.

But Fabio mentioned the following several times: “Welcome to NHibernate, welcome to the world of options!”

One last thing: at the beginning of this session Ayende and Fabio announced the first beta of NHibernate 3.0.

Links: Video

RavenDB

Ayende presented the current version of RavenDB. It was very interesting to see a NoSQL database in action; one more item for my to-do list. With RavenDB there is no longer a need for an object-relational mapper, because objects can be stored directly in the database.
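
To give an impression of what that looks like, here is a minimal hedged sketch with the RavenDB client API of that time (the Conference class and the server URL are hypothetical examples):

    using Raven.Client.Document;

    // Minimal sketch: a plain object is stored directly as a JSON document,
    // no mapping layer needed. Class and URL are hypothetical examples.
    public class Conference
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public string City { get; set; }
    }

    public static class RavenDbExample
    {
        public static void Run()
        {
            using (var store = new DocumentStore { Url = "http://localhost:8080" })
            {
                store.Initialize();

                using (var session = store.OpenSession())
                {
                    session.Store(new Conference { Name = "NHibernate Day", City = "Bologna" });
                    session.SaveChanges();
                }
            }
        }
    }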

At the dinner after the conference I asked Ayende whether RavenDB is also designed to be shared between applications. He answered no, as I expected. If it were shared, you would probably end up needing a framework to map your model to the database's model after all. And something funny: maybe someday you would also need a person like a document database admin (DDA).

Links: Slides, Video

Conclusion

The conference location, Bologna, was easy for me to reach. I was surprised from how many European countries people came to this event. I wasn't the only geek from Switzerland; Kay Herzam was also present.


For me the networking was very important. On Friday I had dinner with a group of attendees, organizers and speakers, and during Saturday (lunch and dinner) I had the chance to talk with Ayende and Rob Ashton. I had a lot of fun and made a lot of contacts with Italian software engineers.

And one final note: the food in Bologna was exceptional (as expected), and Bologna also has a very nice old town.


Round-up of a data-centric architecture

April 11th, 2010

In my last big project we had to use a data-centric architecture. There was a learning curve to find out which architecture was the most appropriate one. The result is visible in the picture below:

(Diagram: architecture overview)

Let me explain the diagram. The data (or state) is managed by the database layer and by the common layer, which contains the .NET DataSet and DataTable classes (the logical representation of the physical tables in the database). This architecture uses the Table Module pattern for domain logic and the Table Data Gateway pattern for data access logic.

There is no need for DTOs; the state is loaded by the data access layer into the DataSet, which is transported through the upper layers. That means that all layers (except the database layer) know the common layer. Instead of using the DataSet, you could also realize this with plain objects (POCOs), but in a data-centric approach that leads to an anemic domain model.

The domain layer contains the core business logic (domain logic). When you use a data-centric architecture, you cannot program in a pure OO way: every method needs a pointer to the data. In this architecture there are two possibilities to solve this. One is to pass a pointer to the data structure (here the DataSet) together with a key to the DataRow (for example the primary key); the other is to fetch a DataRow once and pass it around. The problem with the second solution is that you can end up with a mess of several DataSets.
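
Here is a minimal sketch of such a Table Module method, with a pointer to the DataSet and a key to the row (table, relation and column names are hypothetical examples):

    using System.Data;

    // Minimal sketch of a Table Module: the class holds no state of its own;
    // every method receives the DataSet plus a key into the table.
    // Table, relation and column names are hypothetical examples.
    public class OrderTableModule
    {
        public decimal CalculateTotal(DataSet data, int orderId)
        {
            // Requires a primary key defined on the "Order" DataTable.
            DataRow order = data.Tables["Order"].Rows.Find(orderId);

            decimal total = 0m;
            foreach (DataRow position in order.GetChildRows("Order_OrderPosition"))
            {
                total += (decimal)position["Price"] * (int)position["Quantity"];
            }
            return total;
        }
    }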

One important lesson learned was the need for a service layer (Service Layer pattern), which we called the use case layer. This layer contains the services, here called Controllers. A Controller represents a use case, for example an activity in a workflow process or a simple CRUD window. The responsibility of a Controller is to control and coordinate what happens in a use case (a sketch follows the list below):

  • Prepare the initial data structure and load data for combo boxes
  • Coordinate loading or validation of data for additional AJAX calls
  • Coordinate the validation of the data structure
  • Delegate the persistence of the data structure
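
A hedged sketch of such a Controller for a simple CRUD use case (all names are hypothetical; the gateway and validator interfaces stand in for the real layers):

    using System.Data;

    // Hedged sketch of a use case Controller: it only coordinates; the domain
    // logic lives in the domain layer, persistence in the data access layer.
    // All interfaces and names are hypothetical examples.
    public interface IOrderDataGateway
    {
        DataSet LoadOrderWithLookups(int orderId); // order data plus combo box data
        void Save(DataSet data);
    }

    public interface IOrderValidator
    {
        bool Validate(DataSet data);
    }

    public class EditOrderController
    {
        private readonly IOrderDataGateway gateway;
        private readonly IOrderValidator validator;

        public EditOrderController(IOrderDataGateway gateway, IOrderValidator validator)
        {
            this.gateway = gateway;
            this.validator = validator;
        }

        public DataSet Initialize(int orderId)
        {
            // Prepare the initial data structure, including combo box data.
            return gateway.LoadOrderWithLookups(orderId);
        }

        public bool Save(DataSet data)
        {
            // Coordinate the validation, then delegate the persistence.
            if (!validator.Validate(data))
            {
                return false;
            }

            gateway.Save(data);
            return true;
        }
    }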

The layers report data source, workflow and remoting (which contains facades that realize the Remote Facade pattern) are purely technical layers. They don't contain any relevant logic; they just delegate to the use case layer.

Finally, the two layers report and web represent the user interface. The report layer talks only to the report data source layer and the common layer. The web layer talks only to the remoting and common layers, and asynchronously to the workflow layer. As web layer technology we used ASP.NET Web Forms.

Using silos

We used the term silo to define what you have to do to realize a use case. There were two major kinds of use cases: a simple CRUD dialog or a workflow activity (which also has a dialog).

The silo for a simple CRUD dialog contains: code-behind, View, Document, Facade interface, Facade, use case interface and a Controller.

The silo for a workflow activity contains: code-behind, View, Document, Facade interface, Facade, WFModule, use case interface and a Controller.

The big advantage of a silo is that it is clear what you have to do and where which kind of logic belongs. It is also clear that you shouldn't reference classes of another silo. This rule helps to minimize side effects.

But there is also a disadvantage: a silo generates boilerplate code (interfaces, WFModule, Facade), which leads to the anti-pattern of accidental complexity. To reduce this problem, you can use wizards in your IDE or code generation tools.

Make clear decisions where reuse should happen

The use of silos raises the question of where the reuse of logic happens. This was another important lesson learned: it has to be clear where reuse takes place. In this architecture it happens in the domain layer, in the methods of the BusinessObject and TableModule classes. Those methods are driven by the domain and have specific names, which simplifies reuse.

Conclusion

After understanding the architecture, we were quite productive. But I'm still a fan of simple and clean OO architectures, for several reasons: avoiding accidental complexity, encapsulation, information hiding and inheritance. Most of these are a problem with this architecture.

A nice side effect of this architecture is good support for testability: you can set up any state you want because there is no encapsulation. The problem of dependencies between classes still has to be solved with dependency injection and a mocking framework.

If you use a technical environment that gives you strong instruments for a data-centric architecture, you should consider using them. What is important for a data-centric architecture is that you define rules for where you place your logic and where reuse happens.
