Archive for the ‘.NET’ Category

Migrate a VSS repository to TFS

August 2nd, 2012

Recently I had to migrate parts of a Microsoft Visual SourceSafe 2005 repository to a Microsoft Team Foundation Server 2010 repository. In this blog post I show what I had to do and what the pitfalls were.

The tool

To migrate a repository you have at least two options: migrate only the latest snapshot or the whole history. Normally you prefer to migrate the whole history, so you don’t lose the advantages gained from a version control system.

To migrate a repository from Visual SourceSafe (VSS) with its complete history, there is a tool that ships with Visual Studio: vssconverter.exe. You can find it in the following directory: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE.

To migrate a repository, or just a part of it, from VSS to Team Foundation Server (TFS), you have to go through two steps: analyze and migrate.

There is quite good documentation about the process and the tool itself on MSDN.

Analyse step

In the analyse step the VSSConverter tool checks whether there are any problems and creates a user mapping file.

To start the analyse step, enter the following on the command line:

vssconverter.exe analyze analyzesettings.xml

The analyzesettings.xml file looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<SourceControlConverter>
     <ConverterSpecificSetting>
          <Source name="VSS">
               <VSSDatabase name="\\server\vss"/>
               <UserMap name="Usermap.xml"/>
          </Source>
          <ProjectMap>
               <Project Source="$/Project/Scripts/Func"/>
               <Project Source="$/Project/Scripts/Proc"/>
               <Project Source="$/Project/Scripts/Trig"/>
               <Project Source="$/Project/Scripts/View"/>
          </ProjectMap>
     </ConverterSpecificSetting>
     <Settings>
          <Output file="AnalysisReport.xml"/>
     </Settings>
</SourceControlConverter>

Running this command produces two files: Usermap.xml and AnalysisReport.xml. You can open AnalysisReport.xml to see whether there are any problems. The Usermap.xml file has to be modified before you can continue with the next step.
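If the analysis report is large, you can also list the reported issues with a small script. The following is only a hypothetical sketch: the element and attribute names used here (Issue, Severity, Message) are assumptions for illustration, not the documented VSSConverter report schema.

```python
# Hypothetical sketch: list the issues from an analysis report.
import xml.etree.ElementTree as ET

sample_report = """<?xml version="1.0"?>
<Report>
  <Issue Severity="Warning"><Message>Pinned file found</Message></Issue>
  <Issue Severity="Error"><Message>Truncated history entry</Message></Issue>
</Report>"""

# For the real file you would use ET.parse("AnalysisReport.xml") instead.
root = ET.fromstring(sample_report)
issues = [(i.get("Severity"), i.findtext("Message")) for i in root.iter("Issue")]
for severity, message in issues:
    print(f"{severity}: {message}")
```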

In the user mapping file (Usermap.xml) you map the VSS users to the users you use with TFS. This file looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<UserMappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <UserMap From="michael" To="DOMAIN\michael"/>
  <UserMap From="john" To="DOMAIN\john"/>
  <UserMap From="ADMIN" To="DOMAIN\michael"/>
</UserMappings>
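With many VSS users, writing this file by hand is tedious. As a sketch, a skeleton could be generated from the user list; the user names and the DOMAIN prefix below are placeholders, and special accounts such as ADMIN would still need manual adjustment.

```python
# Hypothetical helper: generate a Usermap.xml skeleton from VSS user names.
import xml.etree.ElementTree as ET

def build_usermap(vss_users, domain):
    root = ET.Element("UserMappings")
    for user in vss_users:
        # Map every VSS account to DOMAIN\<name>; edit the To side by hand
        # for accounts that don't follow this convention.
        ET.SubElement(root, "UserMap", From=user, To=f"{domain}\\{user.lower()}")
    return ET.tostring(root, encoding="unicode")

skeleton = build_usermap(["michael", "john"], "DOMAIN")
print(skeleton)
```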

I had some trouble here with the correct domain name. As a result, the user mapping didn’t work during the migration and all history entries were attributed to me. So I had to destroy the migrated items in the TFS repository with the following command line statement:

tf.exe destroy $/Project/Main/Source/Data/Project/SQL

After that, I corrected the Usermap.xml file and started the migration step again.

Migration step

For the migration step you need a migration settings file. Such a file looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<SourceControlConverter>
     <ConverterSpecificSetting>
          <Source name="VSS">
               <VSSDatabase name="\\server\vss"/>
               <UserMap name="Usermap.xml"/>
          </Source>
          <ProjectMap>
               <Project Source="$/Project/Scripts/Func" Destination="$/Project/Main/Source/Data/Project/SQL/Func"/>
               <Project Source="$/Project/Scripts/Proc" Destination="$/Project/Main/Source/Data/Project/SQL/Proc"/>
               <Project Source="$/Project/Scripts/Trig" Destination="$/Project/Main/Source/Data/Project/SQL/Trig"/>
               <Project Source="$/Project/Scripts/View" Destination="$/Project/Main/Source/Data/Project/SQL/View"/>
          </ProjectMap>
     </ConverterSpecificSetting>
     <Settings>
          <TeamFoundationServer name="tfs" port="8080" protocol="http" collection="tfs/DefaultCollection"/>
          <Output file="MigrationReport.xml"/>
     </Settings>
</SourceControlConverter>

This settings file looks quite similar to the analysis settings file, but in the ProjectMap section each Project element has a Destination attribute that defines the directory in the TFS repository to which the VSS data is migrated.

In the Settings section there is an important entry: TeamFoundationServer. For TFS 2010 you have to define the collection attribute; the migration won’t work without it.

You start the migration with the following command line statement:

vssconverter.exe migrate migratesettings.xml

As a result of this statement you will receive a MigrationReport.xml file, which you can view in a browser to check whether there were any problems. I also recommend having a look at the VSSConverter.log file; it contains valuable additional information.
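Scanning the log for error lines can also be scripted. This is a minimal sketch under the assumption of a simple line-oriented log format; the sample lines are invented, not real VSSConverter output.

```python
# Hypothetical sketch: surface the error lines from VSSConverter.log.
sample_log = """\
10:15:01 Info: Migrating $/Project/Scripts/Func
10:15:04 Error: Could not map user ADMIN
10:15:09 Info: Migration finished"""

# For the real file: lines = open("VSSConverter.log").read().splitlines()
error_lines = [line for line in sample_log.splitlines() if "Error" in line]
for line in error_lines:
    print(line)
```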

Categories: .NET, Good practices

TF.exe or maybe the most useful tool for TFS

January 31st, 2012

When you have to use TFS, there are moments when you miss some features in the UI tools of Visual Studio. In those situations the console tool tf.exe is very useful. In this short blog post I have summarized the commands I use the most.

Update an old version of a branch

It can happen that the production branch or an older feature branch isn’t up to date, because somebody forgot one or several merges from the main branch. To fix that, the following statement can be used:

tf.exe merge /recursive /force /version:T $/Project/Main/Source $/Project/Production/Source

Delete a branch permanently

After a while the number of merged feature branches increases. Normally you would delete the merged feature branches so they are no longer visible in Visual Studio. But in my case I also display deleted items, which means I still see all the deleted feature branches. To remove the old merged feature branches permanently, the following statement can be used:

tf.exe destroy $/Project/Development/FeatureX

Categories: .NET, Good practices

DataSet and deleted rows

January 31st, 2012

Yes, I know, the DataSet isn’t leading-edge technology, but like the company where I currently work, there are several companies that use the DataSet as their data access technology.

Recently, I had to migrate the DataSet subclasses of my current employer’s framework to .NET 4.0 and add LINQ support. After this migration I wrote some unit tests and found a surprising fact: if you use LINQ with a DataSet, you have to handle deleted rows yourself, which is not the behaviour you would expect.

First, here is a little setup to run the tests:

TestDataSet ds = new TestDataSet();
TestDataSet.BECRow row = ds.BEC.NewBECRow();
row.BecId = Guid.NewGuid();
row.RunDat = DateTime.Now;
row.MachineName = Environment.MachineName;
row.CrtDat = DateTime.Now;
row.CrtUsr = "pw";
ds.BEC.AddBECRow(row); // attach the new row to the table, otherwise the queries below won't see it

The old way

In the ages before LINQ, you got the desired rows out of a typed or untyped DataSet in the following way:

string strFilter = string.Format("{0} = '{1}'", ds.BEC.CrtUsrColumn.ColumnName, "pw");
TestDataSet.BECRow[] rowsBEC = (TestDataSet.BECRow[]) ds.BEC.Select(strFilter);
Console.WriteLine("Row count: {0}", rowsBEC.Length);

As you would expect, the result here is one row. The ugly parts are the magic filter string and the cast for the rowsBEC variable. But if you’re using DataSets, you are used to such code.

The new way?

With LINQ, the code could look much nicer:

var query = from r in ds.BEC
			where r.CrtUsr == "pw"
			select r;
Console.WriteLine("Row count: {0}", query.Count());

That’s better, and I thought I had the job done. But I was wrong, seriously wrong. This code behaves differently: you will also get the deleted rows (when no where clause exists) or, even worse, a DeletedRowInaccessibleException. This exception occurs when you try to access a property of a deleted row, which will happen here if there are deleted rows in the DataTable.

Why? Well, if you look at the code of the Select method (for example with the tool Reflector), you will see that there is an implicit filter on the current rows (added, modified and unchanged).
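The difference can be illustrated with a toy model; this is plain Python, not actual DataSet code, and only mimics the row-state filtering described above:

```python
# Toy model of the pitfall: Select() filters on the row state implicitly,
# while plain enumeration (like LINQ over the table) visits every row,
# including the deleted ones.
class Row:
    def __init__(self, user, state="Unchanged"):
        self.user = user
        self.state = state  # "Added" | "Modified" | "Unchanged" | "Deleted"

table = [Row("pw"), Row("pw", state="Deleted"), Row("other")]

def select(rows):
    """Mimics DataTable.Select(): only current rows are returned."""
    return [r for r in rows if r.state != "Deleted"]

# The pre-LINQ way: the deleted row is filtered out implicitly.
count_select = len([r for r in select(table) if r.user == "pw"])

# Plain enumeration: the deleted row is still visited and counted.
count_naive = len([r for r in table if r.user == "pw"])

print(count_select, count_naive)
```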

The solutions

So how can you fix this issue? There are at least two ways. First, the most obvious one:

var query = from r in (TestDataSet.BECRow[]) ds.BEC.Select()
			where r.CrtUsr == "pw"
			select r;

This solution works, but you have to do the ugly cast. Because we have a framework in which we subclass every DataSet class, we could fix that with an override in the generic base class.

If you don’t have a framework where you subclass all the DataSet classes, there is an alternative way:

var query = from r in ds.BEC
			 where r.RowState != DataRowState.Deleted && r.CrtUsr == "pw"
			 select r;

The order here is important: you have to check the RowState first, and only after that can you add the real where clause. The "&&" is also important, because the evaluation has to short-circuit if the row is deleted.


There are some pitfalls with DataSet and LINQ, because you wouldn’t expect this behaviour with deleted rows. It hardly makes sense: I have almost never seen a reason for business logic to deal with deleted rows. The behaviour is also unnatural because you think of the DataSet as an in-memory database, and in a database you don’t have to deal with deleted rows unless you implement a trigger.

Categories: .NET

Grooming your code base

September 1st, 2011

When you’re doing Test Driven Development (TDD), it’s built into the process: red, green, refactor. Refactoring doesn’t only mean improving your new code; it is also important to make your existing code nicer.

If you are a .NET developer, you should have the Visual Studio add-on ReSharper. With this tool you get a marker bar on the right-hand side of your editor. On top of this marker bar there is a little square which shows the worst item found (error or warning). In our company we recently introduced this tool, and we also introduced a new rule: "Check in only green code!"
This rule means that the square has to be green: no errors and no warnings are allowed in the current file.

Reality check

New tools and rules are nice, but we all know the pearl of wisdom: "A fool with a tool is still a fool." You aren’t immediately a better software developer when you do TDD or use ReSharper; you have to know how to use them.
All of us have a lot of legacy code in our code base. I mean really bad code and a remarkable collection of anti-patterns (god object, Swiss army knife, etc.), and ReSharper shows tons of warnings and errors. You can start to fix that code base, but let’s be honest: that would be a project of its own. So, should we just resign, or try to improve our code base?

Grooming your code base

If you know Scrum, then you also know the term grooming the product backlog. It is essential for making progress and doing the right things. We should have the same goal with our code base: we want to implement new features fast and make changes quickly without the risk of breaking current behaviour.
The word risk here is critical: don’t try to improve your code base all at once. It would be a huge task (or even a project of its own), and you might fix things that will change in the near future anyway. The right way is step by step, following the boy scout rule: "Leave the code better than you found it."
If you do TDD, try to groom your code base during the refactor step. If not, just clean up the code you touch and make it green (in the ReSharper marker bar). Don’t try to fix a whole class; just improve the methods you need.

Make the field a bit more green

In a classic brownfield project there will be mostly bad legacy code, so it is important to plan some bigger refactoring tasks to improve the code base and decrease the technical debt. If the code coverage is very low, those tasks are a good opportunity to increase the number of tests and the coverage. But be careful with those tests: they could document wrong behaviour as correct.


Jenkins and .Net

February 24th, 2011

This week I attended the first JUG event in Bern. The topic was Jenkins (a fork of Hudson). The presentation by Dr. Simon Wiest was very entertaining. He explained continuous integration and showed how easy it is to install, configure and run Jenkins.

.Net integration in Jenkins

Jenkins comes from the Java ecosystem, so it isn’t obvious to use it in a .Net environment. But one of the best things about Jenkins is that a lot of plugins exist, including plugins for .Net:

  • MSBuild support
  • NAnt support
  • MSTest support
  • NUnit support
  • NCover support
  • TFS support


First you have to download Jenkins from the Jenkins homepage. Then you type on the console:

java -jar jenkins.war

And that was the whole installation. Well, you could now install Jenkins as a Windows service, but that isn’t hard either: there is a command for it in the “Manage Jenkins” menu.

To use Jenkins in a .Net environment you have to install the needed plugins first. I installed the following plugins through the UI:

  • MSBuild
  • NUnit

After you have installed the plugins, you have to restart Jenkins. Don’t forget to check whether the installed plugins need any configuration.

Simple project setup

To show the configuration, I set up the same project “DokuDB” on Jenkins and on CruiseControl.NET.



The project “DokuDB” is a .Net 4.0 project with an NUnit test project. So the build servers have to perform the following steps:

  1. Get sources out of the SVN repository
  2. Build the application
  3. Run the unit tests
  4. Examine the results of the unit tests

Sample with Jenkins

You can configure the build of your project through XML or the UI. If you don’t have much experience with Jenkins, I recommend the UI way. The resulting XML (abridged) is the following:

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <scm class="hudson.scm.SubversionSCM">
    <browser class="hudson.scm.browsers.WebSVN"/>
    <workspaceUpdater class="hudson.scm.subversion.UpdateUpdater"/>
    <!-- repository locations omitted -->
  </scm>
  <triggers class="vector">
    <hudson.triggers.SCMTrigger>
      <spec>* * * * *</spec>
    </hudson.triggers.SCMTrigger>
  </triggers>
  <builders>
    <hudson.plugins.msbuild.MsBuildBuilder>
      <msBuildName>MSBuild 4.0</msBuildName>
      <!-- solution file and arguments omitted -->
    </hudson.plugins.msbuild.MsBuildBuilder>
    <hudson.tasks.BatchFile>
      <command>"c:\Program Files (x86)\NUnit 2.5.8\bin\net-2.0\nunit-console.exe" src/Doad/Doad.Test/bin/debug/Doad.Test.dll</command>
    </hudson.tasks.BatchFile>
  </builders>
</project>

By default, Jenkins doesn’t come with an application for the system tray on the developer machines with which the developers could observe the build state, but there are two tray apps available.

Sample with CruiseControl.NET

To configure your build in CruiseControl.NET you have to edit the ccnet.config file. CruiseControl.NET doesn’t have a UI to edit the configuration, but there is an application to validate the config file: CCValidator. The resulting config file (abridged) for the sample project looks like this:

<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
	<project name="DokuDB">
		<sourcecontrol type="svn">
			<executable>C:\Program Files (x86)\Subversion\bin\svn.exe</executable>
			<webUrlBuilder type="websvn">
				<!-- repository URL omitted -->
			</webUrlBuilder>
		</sourcecontrol>
		<tasks>
			<msbuild>
				<logger>C:\Program Files (x86)\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
				<!-- project file and targets omitted -->
			</msbuild>
			<exec executable="C:\Integration\Doad\src\Doad\packages\NUnit.\Tools\nunit-console.exe" buildArgs="C:\Integration\Doad\src\Doad\Doad.Test\bin\Debug\Doad.Test.dll /xml:doad-results.xml /nologo"/>
		</tasks>
		<publishers>
			<xmllogger />
		</publishers>
		<triggers>
			<intervalTrigger name="Continuous integration" seconds="30" initialSeconds="30" />
		</triggers>
	</project>
</cruisecontrol>

CruiseControl.NET has its own system tray application, CCTray.


Jenkins is really easy and fast to install and configure. Compared to CruiseControl.NET, the look and feel is much nicer.
All the .Net plugins I downloaded worked well except the TFS plugin, but that may have been a problem with the Team Foundation Server itself.

For the next (ALT.NET) project I am considering using Jenkins instead of CruiseControl.NET.

Dr. Simon Wiest also wrote a book about Hudson.

Categories: .NET, First experiences, New technology