
I was asked to write a post about my views on unit testing because it is a hot subject at the moment.

I’m very ambivalent about unit testing and always have been, for many different reasons. Although testing is very important, I often find unit testing to be a time-consuming liability, but it depends on the project.

Good for static classes

Over the years I’ve built many helper class libraries that are used in many projects and can be considered business-independent fundamental classes. All the classes in those libraries are static and therefore cannot be considered entities, but merely logical placeholders for static methods that work independently.

Unit testing such libraries is a must-have. The tests are simple to write and maintain, and unit testing is important because the libraries are used in a vast number of different projects.
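
With an xUnit framework such as NUnit, a test like that can be very small. Here is a rough sketch; the StringHelper class and its Reverse method are made-up examples of the kind of static helpers I’m talking about, not real library code.

// A sketch of a unit test for a hypothetical static helper class (NUnit syntax).
using NUnit.Framework;

[TestFixture]
public class StringHelperTests
{
    [Test]
    public void Reverse_ReturnsCharactersInOppositeOrder()
    {
        // StringHelper.Reverse is a hypothetical static method.
        Assert.AreEqual("cba", StringHelper.Reverse("abc"));
    }

    [Test]
    public void Reverse_EmptyString_ReturnsEmptyString()
    {
        Assert.AreEqual(string.Empty, StringHelper.Reverse(string.Empty));
    }
}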

Object orientation and business entities

A unit is the smallest testable part of an application, which means that for object-oriented applications and libraries it will always be a class or business entity. Of course you can test the individual methods, functions and constructors, but it doesn’t make much sense, since the business entity is to be considered a single unit and also works as such in real scenarios.

For me it makes more sense to conduct use-case testing on business entities. It works much like unit testing, but the tests are constructed so they exercise a complete flow of operations on the class. For instance, if I had a class called Dog, then I would write a use-case test that simulated how a real person would use the class: first Feed the Dog, then Pet it and then Walk it before it needs to Sleep.

For that purpose it doesn’t really matter whether you use a unit testing framework such as xUnit or write the test in a console application in the same Visual Studio solution. I like both approaches equally, but most often write a console application because I find it simpler.
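
To make the Dog example concrete, here is a rough sketch of what such a use-case test could look like as a console application. The Dog class and its members are purely hypothetical and only serve to illustrate testing a complete flow of operations rather than individual methods.

using System;

class Program
{
    static void Main()
    {
        // Simulate how a real person would use the class, in order.
        Dog dog = new Dog("Rex");   // hypothetical class
        dog.Feed();
        dog.Pet();
        dog.Walk();
        dog.Sleep();

        // Check the end state of the whole flow instead of each method in isolation.
        Console.WriteLine(dog.IsAsleep ? "Use-case passed" : "Use-case FAILED");
    }
}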

Never-fail-applications

Financial applications used by banks and government applications need to be tested extremely well. Those types of applications deal with people’s personal data and money and must never ever fail. The same goes for applications that ship on CDs or DVDs, unless they are updated automatically over the Internet.

There need to be automated tests running every day, and both unit testing and use-case testing are very important. The 80/20 rule does not apply here; coverage needs to be as close to 100% as possible, and each class needs to be use-case tested for dozens of scenarios.

To test this properly you need professional testers and QAs, because the test is as important as the application itself.

What about no tests

The most common thing is to have no test but the application itself. Admit it; you do this all the time as well. Is it a dumb idea to use the application to test itself and its dependent libraries instead of doing unit or use-case testing? No, it definitely is not. The end-user application/website is the ultimate test, but there is a serious problem with not having an automated test harness: if an error occurs, you can spend more time finding the problem than it takes to fix it. For smaller projects, though, I find it perfectly acceptable to test from the GUI without a test harness.

Also, it is very difficult to test GUIs automatically, so in most cases it is a good idea to use a test harness for every project in the application stack and then create a procedure for testing the actual GUI. In most cases that has to be done manually by humans.

Test driven development

TDD is one of the hyped methodologies that I never really understood. I can’t get my head around writing the test before the class it tests. In C# it doesn’t make sense to me, because if I write a test that calls a non-existing class and its methods and properties, then it will not compile. When I then write the actual class, I often compile to check for errors. That doesn’t work either, because the test references some methods I haven’t written yet and so cannot compile.

Besides, in most cases I don’t know 100% which properties and methods the class needs and which of them are public or private. Then I need to go back and change the test, and thereby lose the point of writing the test before the class.

User driven development

For smaller projects with few dependencies, user driven development is for me the best way to test. Consider a rather small project like BlogEngine.NET with one web project and one class library. The class library is only used by the web project and nowhere else. That means that the class library has no classes, methods or properties that are not used by the web project, so by testing the web project, the class library gets 100% code coverage.

If a new feature needs to be implemented in both the web project and the class library, I use the web project to test the class library. That’s because I’m as much a user as a developer of the project. If you are not the end user, then you should test as much as you can on the web project and then get a co-worker or the customer to test when you feel confident it works. It’s always a good idea to let the customer test as early in the process as possible.

Simple code is simple to test

If you always make sure your code is simple, clean and refactored, then the tests are equally simple to write. Complex code needs complex testing, and the maintenance of the tests will increase with every change to the code. Every time you change a little thing, you need to rewrite many tests if your code is complex. That goes for both unit testing and use-case testing.

Conclusion

I find unit testing to be good for very few types of projects – helper class libraries and never-fail-applications. User driven development with use-case testing is my absolute favourite. I don’t see a big difference between using an xUnit framework and using a console application, because you only get the benefit from a test project when something fails. For automatic testing, though, you need a testing framework that logs the results.

This is how I would test the different types of projects:

  • Helper class libraries – unit testing in xUnit or a console app by the 80/20 rule
  • Never-fail-applications – automatic unit and use-case testing written by QAs; full coverage
  • GUIs – test yourself, then have a co-worker test, and then let the customer test
  • Business object layer – use-case testing in xUnit or a console app by the 80/20 rule
  • Data access layer – unit testing in xUnit by the 80/20 rule

Again, it is very important to note that it all depends on the type and size of the project. If you don’t have a build or versioning server to handle automatic unit testing, then you have to do it manually. If you have the ability to automate unit testing, an xUnit framework is the way to go, but full code coverage is not always necessary and not all projects need unit testing.


Here we are in that part of the year where the sun is beginning to shine and the winter has ended. Officially the winter ended January 31st, but here in Denmark it has been known to snow in May on occasion; by June it becomes safe to say that the summer is kicking in. All the rain, thunderstorms and just plain gray and windy weather are (almost) over, and now it’s finally vacation time.

This year I’m heading south, down to the sunny country of Italy. I start in Milano for a few days and then hit the road in an Alfa Romeo 159. Along the route I plan to visit Le Cinque Terre, Pisa, Roma, Firenze, Rimini, Venice and Verona before heading back to Milano to fly home.

It means that I will not be active on this website until I get back. I really need to be totally offline and just enjoy the freedom of the open road.

While I’m gone, the rest of the BlogEngine.NET team is working very hard on the next release, which will probably be released shortly after my return. The 1.1 release will take BlogEngine.NET to a whole new level of performance, stability and security, and of course it will also have many new exciting features.

The next posts after my return will – amongst other things – be about unit testing, which I’m very ambivalent about; I’ll try to explain why, and what I do to test applications. This subject was requested by a reader/friend, and I just can’t say no to such requests.

Until then, arrivederci.


I’ve always been a little annoyed by the fact that ASP.NET websites send the version number as an HTTP header. For an ASP.NET 2.0 application this header is added automatically and you cannot remove it from code. This is what it looks like:

X-AspNet-Version => 2.0.50727

Why would it be necessary to send this information about your application to possible hackers? It doesn’t make sense. Maybe it’s because it allows for statistics to be collected about what versions people are using. Microsoft could then send a crawler to investigate all the websites in the Windows Live search database. I don’t have a problem with that; it’s the hackers I fear.

The other auto-injected header, X-Powered-By => ASP.NET, is fine with me. People can easily tell from the .aspx extension that you run ASP.NET anyway, so this is not a security issue, but it is still a little annoying that you cannot remove it from within your ASP.NET application. You have to remove it in IIS.

Then the other day I was playing around with the web.config and by accident noticed the httpRuntime tag and its enableVersionHeader attribute. For some reason I had never noticed it before. If the enableVersionHeader attribute is set to false, the X-AspNet-Version header will not be sent.

So, to get rid of the X-AspNet-Version HTTP header from the response, just copy this line into the web.config’s <system.web> section:

<httpRuntime enableVersionHeader="false" />

I think that if it had been such a big deal for me to get rid of it, I’d probably have done some more research and found this trick years ago. Anyway, I just thought I would share it with you.

To check the HTTP headers sent from your own site, you can use one of the many online tools like this one.
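
If you prefer to check from code instead of an online tool, here is a quick sketch using HttpWebRequest from .NET 2.0. Replace the URL with your own site.

using System;
using System.Net;

class HeaderCheck
{
    static void Main()
    {
        // Send a HEAD request and dump all response headers to the console.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.example.com/");
        request.Method = "HEAD";

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            foreach (string key in response.Headers.AllKeys)
            {
                Console.WriteLine(key + ": " + response.Headers[key]);
            }
        }
    }
}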


I’ve always been a big fan of using the ThreadPool for asynchronous execution, but in ASP.NET it is not the best approach for multi-threading. I’m not going to write about when threading is appropriate or about the impact of multi-core and dual-core machines, but simply point out that the ThreadPool is not the best choice for ASP.NET applications.

ThreadPool is easy

The reason I like the ThreadPool is that it is managed for me, it is very easy to use, and it only takes one line of code to execute a method on a pool thread by using the QueueUserWorkItem method.

System.Threading.ThreadPool.QueueUserWorkItem(SomeMethod);

private void SomeMethod(object stateInfo)
{
    // Execute something...
}

You can also send a parameter easily like so:

System.Threading.ThreadPool.QueueUserWorkItem(SomeMethod, variable);

That variable gets passed to the stateInfo parameter of SomeMethod, where you can cast it from object to whatever data type it is.

private void SomeMethod(object stateInfo)
{
    int number = (int)stateInfo;
    // Execute something...
}

The ThreadPool in ASP.NET

You can use the ThreadPool in exactly the same way in ASP.NET and it works just as you would expect. The problem is not in the ThreadPool itself but in what else ASP.NET uses it for at the same time. ASP.NET is multi-threaded by design and it uses the ThreadPool to serve pages and content mapped to the ASP.NET ISAPI filter.

If you also use the ThreadPool, then ASP.NET has fewer threads to utilize, and requests are put on hold until the pool returns a free thread. This might not be a problem for a low-traffic site, but more popular sites can get into trouble, and even low-traffic sites can get into trouble if they use the ThreadPool a lot.
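
If you want to see how crowded the pool actually is, you can ask it. This is just a small sketch; where you log the numbers is up to you.

int workerThreads;
int completionPortThreads;

// Returns how many pooled threads are currently free to pick up new work.
System.Threading.ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
System.Diagnostics.Trace.WriteLine("Available worker threads: " + workerThreads);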

Don’t use it

Whether you work on a low- or high-traffic site, there really is no reason to use the ThreadPool when you can create new threads almost as easily without disturbing the pool. Here is an example that uses an anonymous method as a delegate:

System.Threading.ThreadStart threadStart = delegate { SomeMethod(variable); };
System.Threading.Thread thread = new System.Threading.Thread(threadStart);
thread.IsBackground = true;
thread.Start();

That’s four lines instead of one, but that’s a low price to pay. You could always create a helper method to call whenever you want to start a new thread.

// Requires: using System.Threading;
public static void StartBackgroundThread(ThreadStart threadStart)
{
  if (threadStart != null)
  {
    // Background threads do not keep the application alive when it shuts down.
    Thread thread = new Thread(threadStart);
    thread.IsBackground = true;
    thread.Start();
  }
}

Then just call it in one line like so:

StartBackgroundThread(delegate { SomeMethod(variable); });

Word of caution

The ThreadPool is managed by the CLR, which provides a level of control that a normal thread doesn’t get. By using an un-pooled thread you also have the ability to do much more harm if you don’t know what you are doing.

If you aren’t careful, you can keep the AppPool from being recycled and the application from being stopped. For instance, if you set the IsBackground property to false, the thread will run in the foreground and can make it difficult for the application to recycle or restart. However, the example shown above does not do that, so don’t worry about damaging your application by using it.


Micro formats have existed for some years now, but they haven’t been useful for anything until now. If you don’t know what micro formats are, here is an explanation from Wikipedia:

A Microformat (sometimes abbreviated μF or uF) is a way of adding simple semantic meaning to human-readable content which is otherwise, from a machine's point of view, just plain text. They allow data items such as events, contact details or locations, on HTML (or XHTML) web pages, to be meaningfully detected and the information in them to be extracted by software, and indexed, searched for, saved or cross-referenced, so that it can be reused or combined.

The reason they have become useful is browser support. Both Firefox 3 and Internet Explorer 8 will support micro formats natively. If you cannot wait for the next version of Firefox or IE, you can download the Operator micro format extension for Firefox and start testing your site. I have done that and applied some micro formats to my blog, and now I’ll give you some examples of how to do it easily.

The nice thing about micro formats is that you don’t have to change your layout and stylesheet, because they are completely invisible to the human eye – they only exist in the (X)HTML for machines to read. Let’s take a look at three micro formats and how to implement them.

hCard

The hCard micro format is an XHTML representation of the vCard format that applications such as Outlook and iCal understand. By using hCard you can give your visitors an easy way to retrieve and store your contact information. Here is a screenshot of the hCard implementation in the Operator extension for Firefox.

It is implemented simply by adding classes to HTML tags. Here is one similar to the one I use on this blog.

<div class="vcard">
  <span class="fn">Mads Kristensen</span><br />
  Lead Developer at <span class="org">Traceworks</span>
  and long time <span class="nickname">.NET slave</span>.
 
  <div class="adr">
    I live in <span class="locality">Copenhagen</span>,
    <span class="country-name">Denmark</span>.
  </div> 
</div>

Remember that the vcard class name must be set on the surrounding container. There are many other class names that are supported by the hCard specifications, and you can find a list of them here.

xFolk

The xFolk micro format is for social bookmarking and can be used to let people add your posts and articles to services like del.icio.us, digg and dotnetkicks. That means you no longer need the social bookmarking links and icons on each blog post, because the browser can take care of it for the user. This is how my blog looks in the Operator extension.


xFolk is also one of the simplest micro formats to apply to a website, and it also just uses classes. Before I added xFolk to my blog, each post looked like this in the HTML:

<div class="post">
  <h1><a href="/page.aspx">Title of the post</a></h1>
  <p>The content of the post</p>
</div>

And here it is after.

<div class="post xfolkentry">
  <h1><a href="/page.aspx" class="taggedlink">Title of the post</a></h1>
  <p>The content of the post</p>
</div>

As you can see, I only added two additional classes – xfolkentry and taggedlink – to the existing mark-up. The xFolk format does support additional classes, but I think these two are enough for most websites. You can find the specifications here.

Rel-Tag

The Rel-Tag micro format is used to provide keywords to content in order to categorize it. If you have a blog post called “Why dogs are better than cats”, then you could provide two tags to that post – one called dog and one called cat. They should then point to a page of all your blog posts that have been tagged with the dog and cat tag respectively. It’s very similar to categories, but much more granular.

To turn the dog and cat links into tags, you simply add the rel attribute to the link:

<a href="/tag/dog.aspx" rel="tag">Dog</a>

The rel attribute is supported by the HTML and XHTML specifications, so don’t worry about breaking your standards-compliant webpage.

Where the hCard and xFolk micro formats can be applied to existing mark-up easily, you might have to change more to add tags. That’s because Rel-Tag does not use the text in the link tag but the href instead. It sees the tag as everything to the right of the last forward slash. That means that the link above has a tag called dog.aspx instead of Dog.

If you have control over your web server and can set up IIS to map all extensions to the ASP.NET engine, you can rewrite the URL so that the link can point to /tag/dog/ and thereby avoid the .aspx in the URL. However, if your website is hosted, you probably won’t have that option before IIS 7 is released with Longhorn Server.

If you have to use the .aspx extension, you still have a way to make it work. The prettiest is to use the PathInfo part of the URL, which looks like this: /tag.aspx/dog. The only problem is that when you use the PathInfo, you also break the relative root of your website. You can no longer use a link like “~/” to return to your home page, because it will send you to /tag.aspx/ instead. That’s why I have implemented it the other way on my blog: I use a URL parameter instead, which isn’t very pretty, but it does the trick. Now my tags have this URL: /tag.aspx?tag=/dog. The important part is to remember to add a forward slash before the name of the tag.
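
Here is a rough sketch of how the tag page could read the tag with either approach. It is just an illustration of the idea (a hypothetical tag.aspx code-behind), not the actual BlogEngine.NET code.

// Hypothetical code-behind for tag.aspx.
protected void Page_Load(object sender, EventArgs e)
{
    string tag = null;

    if (!string.IsNullOrEmpty(Request.PathInfo))
    {
        // URL like /tag.aspx/dog -> PathInfo is "/dog"
        tag = Request.PathInfo.TrimStart('/');
    }
    else if (Request.QueryString["tag"] != null)
    {
        // URL like /tag.aspx?tag=/dog -> strip the leading forward slash
        tag = Request.QueryString["tag"].TrimStart('/');
    }

    // Use 'tag' to look up and render the matching posts...
}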

Here is how the Operator extension sees my tags:


This was only an introduction to three of the many micro formats you can use. Other important micro formats are XFN, hCalendar and rel-nofollow, along with a few more. As a side note, I can inform you that BlogEngine.NET 1.1 will fully support hCard, xFolk, Rel-Tag, XFN and rel-nofollow.