
In the first part of the checklist, we looked at creating high quality websites from a client perspective and the tools that help us do that. In this part we look at the (free) tools that help us build quality into the server side of the website.

Code quality

Treat compiler warnings as errors

When you compile your solution in Visual Studio, it allows compiler warnings by default. A compiler warning occurs when there is a problem with the code that isn't severe enough to stop compilation, such as a variable that is declared but never used. These warnings should always be treated as errors, because letting them slide allows you to produce bad code. Keyvan has written a post about how to treat compiler warnings as errors.
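If you want to flip the switch directly in the project file instead, it is a single MSBuild property. A minimal sketch for a C# project, added to every build configuration it should apply to:

<PropertyGroup>
  <!-- Fail the build on any compiler warning -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>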

StyleCop

The StyleCop Visual Studio add-in analyses your C# code and validates it against a lot of rules. The purpose of the tool is to force you to build maintainable, well documented code using consistent syntax and naming conventions. I’ve found that most of the rules are for maintainability and consistency. After using StyleCop on my latest project I will never build a C# project again without it.
 
Some of the rules might seem strange at first glance, but when you give them a closer look you'll find that they actually make a lot of sense.
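To give a feel for it, here is a small illustration of the kind of code StyleCop is happy with: documentation on every element, fields accessed through this., and members in a fixed order. Treat it as an example of the style rather than a list of the exact rules:

/// <summary>
/// Represents a comment author on the blog.
/// </summary>
public class Author
{
  /// <summary>
  /// The display name of the author.
  /// </summary>
  private string name;

  /// <summary>
  /// Initializes a new instance of the Author class.
  /// </summary>
  /// <param name="name">The display name of the author.</param>
  public Author(string name)
  {
    this.name = name;
  }

  /// <summary>
  /// Gets the display name of the author.
  /// </summary>
  public string Name
  {
    get { return this.name; }
  }
}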

FxCop

This tool should be familiar to most .NET developers by now. It has existed for a long time and is now at version 1.36. FxCop doesn't analyze your C# code but the compiled MSIL code, so it can be used with any .NET language. Some of the rules overlap with StyleCop's, but FxCop also helps you write more robust methods that result in fewer errors.

If you use StyleCop and do proper unit testing, then you might not need FxCop, but it's always a good idea to run it on your assemblies, just in case. Here's a guide to using FxCop in website projects. If you own Visual Studio Team Edition, you already have FxCop built in.
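As an example of the robustness rules, FxCop flags public methods that use an argument without validating it first. A small before-and-after sketch (the type and method names are made up for the example):

public static class TitleFormatter
{
  // Flagged by FxCop: the argument is used without being validated
  public static string FormatTitle(string title)
  {
    return title.Trim().ToUpperInvariant();
  }

  // Passes the rule: the argument is validated before use
  public static string FormatTitleSafe(string title)
  {
    if (title == null)
    {
      throw new ArgumentNullException("title");
    }

    return title.Trim().ToUpperInvariant();
  }
}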

Security

Anti-Cross Site Scripting (XSS) Library

The Anti-XSS library by Microsoft is not just a fancy way to HTML encode text strings entered by users. It uses white-listing, which is much more secure than simply trusting any input and HTML encoding it in the response. It works with JavaScript, HTML elements and even HTML attributes.
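In practice you pick the encoding method that matches the context the value ends up in. A small sketch from a page's code-behind, assuming the static AntiXss class in the library's Microsoft.Security.Application namespace (the control names are made up for the example):

using Microsoft.Security.Application;

string name = Request.QueryString["name"];

// Value written into the body of an HTML element
nameLabel.Text = AntiXss.HtmlEncode(name);

// Value written into an HTML attribute
nameLink.Attributes["title"] = AntiXss.HtmlAttributeEncode(name);

// Value written into a JavaScript context
string visitor = AntiXss.JavaScriptEncode(name);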

Code Analysis Tool .NET (CAT.NET)

When your website relies on cookies, URL parameters or forms, it is open to attacks, because all three are very easy to forge and manipulate, even for robots. By using the CAT.NET add-in for Visual Studio you can easily analyze the places in your mark-up and code-behind that are vulnerable to those kinds of attacks. CAT.NET analyzes your code and tells you exactly what the problem is. It's easy to use and understand, and it lets you build more secure websites.
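The classic case is untrusted input flowing straight into the response. A minimal illustration of the kind of finding it reports (my own example, not CAT.NET's output):

protected void Page_Load(object sender, EventArgs e)
{
  // Flagged: the query string value flows unencoded into the response (reflected XSS)
  // Response.Write(Request.QueryString["q"]);

  // Fixed: encode the value before it reaches the response
  Response.Write(HttpUtility.HtmlEncode(Request.QueryString["q"]));
}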


Just the other day I was digging into various Web 2.0 APIs to see what the possibilities were. You know, just kicking back and having fun geek style. I quickly gave up.

For some reason, both Facebook and LinkedIn protect certain information about your friends and contacts in the name of privacy. If you log into your Facebook account, you can see the e-mail address of your friends if they have provided one on their profile. You cannot retrieve that e-mail through the API and the same goes for the phone number. You can get pictures, gender, age etc. but not the e-mail address.

LinkedIn does expose the e-mail address, but not addresses, phone numbers or any other information except the name, title and organization. The reason I wanted the e-mail address was that it is a great key for pairing up your Facebook friends with your LinkedIn contacts. Then I would be able to get all the information on people from the two networks and build a more complete profile of them.

I know that people might be reluctant to share their e-mail address on Facebook, but apparently a lot of the same people have no issue sharing it on LinkedIn. It doesn't make sense. And why do the LinkedIn vCards of your own contacts not contain information like country and zip code even though people have entered it? Why couldn't they just leave it up to the individual user to decide whether this information should be public? Privacy restrictions, that's why, and probably a lawsuit waiting to happen.

Now, there is some sense in keeping sensitive information private, but why is it only sensitive to the APIs and not when you're logged in on the websites? In other words, people can get access but machines cannot. It might be people who build the mash-ups, but machines have to execute them, and that's the problem.

It seems that the bigger the programmable web becomes, the bigger the issue of keeping information private becomes, which keeps us from easily doing some really cool stuff. I guess we could always go back to screen scraping as long as it's still possible, which by the way it is on both Facebook and LinkedIn, for now anyway, even though it is a clear violation of their terms of service.

So, I gave up my little venture, looked longingly at the moon from my window and dreamed of a world where privacy restrictions and lawsuits don't conflict with my geeky nature.


My last post about fighting comment spam resulted in a lot of e-mails from readers asking how to create their own spam fighting logic in BlogEngine.NET 1.3. So I decided to show a simple extension that listens for certain bad words and filters on those. If a comment contains one of the predefined words, it is considered spam.

The extension


using System;
using System.Collections.Specialized;
using System.ComponentModel;
using BlogEngine.Core;
// The [Extension] attribute comes from BlogEngine.NET; add the namespace that contains it in your version.

[Extension("Filters comments containing bad words", "1.0", "Mads Kristensen")]
public class BadWordFilter
{
  // Constructor
  public BadWordFilter()
  {
    // Add the event handler for the AddingComment event
    Post.AddingComment += new EventHandler<CancelEventArgs>(Post_AddingComment);
  }

  // The collection of bad words
  private static readonly StringCollection BAD_WORDS = AddBadWords();

  // Add bad words to the collection
  private static StringCollection AddBadWords()
  {
    StringCollection col = new StringCollection();
    col.Add("VIAGRA");
    col.Add("CASINO");
    col.Add("MORTAGE");

    return col;
  }

  // Handle the AddingComment event
  private void Post_AddingComment(object sender, CancelEventArgs e)
  {
    Comment comment = (Comment)sender;
    string body = comment.Content.ToUpperInvariant();

    // Search for bad words in the comment body
    foreach (string word in BAD_WORDS)
    {
      if (body.Contains(word))
      {
        // Cancel the comment and raise the SpamAttack event
        e.Cancel = true;
        Comment.OnSpamAttack();
        break;
      }
    }
  }
}

The problem with an extension that filters on bad words is that if you have a blog about medicine, then Viagra probably isn't a bad word. Therefore this type of spam fighting is left out of the release, but it is offered as a separate download where you can define your own bad words.

Download BadWordFilter.zip (743 bytes)


Today I hit the all-time record of comment spam with a staggering 367 attacks in just 21 minutes. They were all coming from the same IP address but with various comments that all had something to do with selling Christmas cards. I don't mind the occasional comment spam attack since none get through, but when they hit as hard as they did today I get annoyed, because they take up CPU cycles and bandwidth.

I needed a way to block these pesky intruders from leeching on my server and hopefully find a way to keep them from returning.

BlogEngine.NET 1.3 to the rescue

The next version of BlogEngine.NET, with the creative title of 1.3, is due before Christmas and exposes some new events for extension builders. One of them is called Comment.SpamAttack and is raised every time a spammer tries to add a comment.

So I wrote a small extension that listens to that event and collects the IP addresses of the clients making the spam requests. When the same IP address gets caught spamming comments three times, the extension clears the response and sends back a 404 HTTP status code. The idea is to trick the spammer (which is almost always a dumb robot) into believing the URL doesn't exist, so it stops trying and doesn't come back.
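The logic is roughly the following. This is only a sketch of the idea, not the actual BlackLister source; the event handler signature and the three-strikes threshold are assumptions for illustration:

using System;
using System.Collections.Generic;
using System.Web;
using BlogEngine.Core;

[Extension("Blocks IP addresses that repeatedly post comment spam", "1.0", "Example")]
public class BlackListSketch
{
  public BlackListSketch()
  {
    // Subscribe to the event raised when a comment is identified as spam
    Comment.SpamAttack += new EventHandler<EventArgs>(Comment_SpamAttack);
  }

  // Number of spam attempts per client IP address
  private static readonly Dictionary<string, int> Offenders = new Dictionary<string, int>();

  private void Comment_SpamAttack(object sender, EventArgs e)
  {
    HttpContext context = HttpContext.Current;
    string ip = context.Request.UserHostAddress;
    int count;

    lock (Offenders)
    {
      Offenders.TryGetValue(ip, out count);
      Offenders[ip] = ++count;
    }

    // Three strikes: pretend the URL doesn't exist so the robot gives up
    if (count >= 3)
    {
      context.Response.Clear();
      context.Response.StatusCode = 404;
      context.Response.End();
    }
  }
}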

This extension is only a few hours old so I don't have any statistics on its effect yet, but my spider sense tells me it will have a positive effect in fighting the spam attacks, both right now and in the long term.

You can also create extensions that listen to the Comment.AddingComment event, which is raised before a comment is saved. That gives you the possibility to do your own spam filtering, because you can cancel saving the comment and raise the Comment.SpamAttack event by calling the static Comment.OnSpamAttack() method.

I’ll test the extension thoroughly and if it behaves well, it will be included in the 1.3 release. You can also get a sneak peek at the extension by downloading the .cs file below:

BlackLister.zip (886 bytes)


I think by now most ASP.NET developers have come across some of the different provider models in ASP.NET 2.0: most likely the Membership, Roles and SiteMap providers, but also the custom provider model that you can use to create your own providers.

Still, I often come across custom authentication and authorization mechanisms instead of the Membership and Roles provider model. Typically, a business object called User has a Logon method that does the authentication and returns a Boolean to the login page, which sets a cookie and maybe a session variable and then redirects the user to the password-protected area of the website.

Then there's usually also some custom authorization mechanism that keeps certain logged-in users out of certain areas, such as admin pages. The menu is custom-built to show only the pages available to the current user, and the individual pages also make the checks themselves.

That is a common scenario and I see it very often. If you have such a scenario in your web application, consider converting your existing logic to use the provider model. By creating custom implementations of the Membership, Roles and SiteMap providers, you get all of this for free, and it is very simple to do. You can keep your existing code and build the providers on top of it.

The benefit is that these three providers work together, so the SiteMap automatically shows only the pages a user has permission to see based on her role. You also get the cookie management for free. But the best part is that you can easily switch to a new provider when needed without changing your existing code. I did that today when I installed BlogEngine.NET on our intranet at ZYB. It had to use the Active Directory Membership provider so that we could log on with our network credentials. It took me 5 minutes to make the transition from my custom XML Membership provider to the Active Directory one. There's a good guide at MSDN for using the Active Directory provider.
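The switch itself is just configuration. A sketch of what the web.config change looks like; the connection string and names here are examples, not the ones from the ZYB intranet:

<connectionStrings>
  <add name="ADConnection" connectionString="LDAP://example.local/DC=example,DC=local" />
</connectionStrings>
<system.web>
  <membership defaultProvider="ADMembershipProvider">
    <providers>
      <clear />
      <!-- Swapping the provider here requires no changes to the application code -->
      <add name="ADMembershipProvider"
           type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           connectionStringName="ADConnection"
           attributeMapUsername="sAMAccountName" />
    </providers>
  </membership>
</system.web>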

That flexibility comes free of charge with ASP.NET 2.0 and makes the lives of .NET web developers much easier and their applications more flexible and powerful. I have published my XML Membership provider and will wrap up the XML Roles provider soon and publish it as well.

When you first try the provider model, I promise that you’ll never go back.

Resources