Monday, February 28, 2005

How to Test Private and Protected methods

[This was originally posted at]

I published an article on CodeProject: How to Test Private and Protected methods.

This article explains some theory behind testing/not testing private methods, and then provides and walks through a downloadable code sample to demonstrate these testing techniques.

Sunday, February 27, 2005

Tips to Handle Config files for Testing and Deployment

[This was originally posted at]

.Net provides Config files to store configuration values for applications. Each application (an executable or web application) can have its own config file. Two common config files are:

  • Web.Config - used by an ASP.Net or Web Service project.
  • App.Config - used by executables like console apps or Windows Forms.

For example, a standard solution might include a web application (with a Web.Config) and a Console application to run unit tests (with an App.Config). There are challenges with config files for both testing and deployment:

  • Testing - you may want to localize the config file to the test environment. For example, the app.config may store the database connection string, and you may want to set this connection string to the test database.
  • Deployment - each environment should have its own config. When you deploy an app to a new environment, you also need to deploy the correct config.

This prompts three questions:

  • Should you include the config in the project file?
  • How do you get the latest version of the config?
  • How do you localize the config for a specific environment?

Let's look at each one of these:

Should you include the config in the project file?

Having a file in the physical directory and having the file listed in the project are two different things. In Visual Studio's Solution Explorer, if you click "Show All Files", you'll also see the files in the physical directory that aren't listed in the project file.

If you include the config in the project:

  • Pro: Easy to get the latest (just do a Get-Latest from VSS).
  • Pro: Can be used by NUnit if you step through your tests in Visual Studio.
  • Con: Hard for each developer to have their own individual copy.

If you exclude the config from the project:

  • Pro: Easier for the MSI Installer. If the config file isn't included, then deploying the MSI won't overwrite any existing config. This makes it very easy to redeploy the application without breaking the config.
  • Pro: Each developer keeps a local copy, and doing a get-latest won't overwrite your localized copy because VS.Net's source control features only update files that are part of the project.
  • Con: Config won't be available to NUnit or the TestDriven.Net plug-in (a free plug-in to let you run NUnit in Visual Studio).

How do you get the latest version of the config?

Two ways to get the latest version of a file:

  • Pull from Visual Studio (such as doing a "Get-Latest")
  • Push from VSS.

How do you localize the config for a specific environment?

Perhaps the best way to localize configs for your environment is to have a localized version in another folder, and then have a DOS script that can copy that localized version into the project folder.
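Such a script can be tiny. For example (the folder paths below are made up for illustration):

```
@echo off
rem Copy this developer's localized config over the copy in the project folder:
copy /Y "C:\LocalizedConfigs\App.config" "C:\Projects\MySolution\UnitTests\App.config"
```

Each developer keeps their own version of this script pointing at their own localized copy.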


Based on these questions, I've found the following parameters to work well.

Web.Config (used for the web application) - Exclude:

  • Why: Best for deployment - re-running the MSI installer doesn't override the config (because it's not included in the MSI installer). Also, you never run NUnit on the web app directly, so that isn't a problem.
  • How to get latest: Manually push from VSS.
  • Localize: Need to localize it the first time, but then do nothing afterwards because a get-latest won't override it.

App.Config (used for unit tests) - Include:

  • Why: Needed for NUnit.
  • How to get latest: Automatic when getting the project within VS.Net.
  • Localize: Run a DOS script to copy in your localized files.

In terms of keeping configs updated, I find the following process helpful:

  1. If a change is made to the config (which should be rare), then all devs will be notified
  2. Regardless of technique (whether include or exclude), a localized copy should still be stored on each dev machine
  3. Each developer is ultimately responsible for updating their local copy.

Wednesday, February 23, 2005

Tips with OOP - New, Virtual, and Abstract

[This was originally posted at]

As I prepare to teach a C# class, I've been looking at ways to explain and clarify certain Object Oriented Programming (OOP) concepts. I find that while a lot of people are familiar with the terms, the specifics always seem confusing. Two questions I often hear are:

  1. What is the difference between using virtual/override and new?
  2. What is the difference between virtual and abstract?

While there are certainly smarter people than myself who have thoroughly explained every inch of OOP, I'll give it my two cents anyway. (For those interested in other resources, I personally like Deitel and Microsoft's C# Spec).

What is the difference between using virtual and new?

The virtual keyword (used in the base class) lets a method be overridden (by using override in the derived class), such that if you declare an object of a base type but instantiate it as a derived type, then calling the virtual method will call the derived type's method, not the base type's. This is polymorphism.

The new keyword (used in the derived class) lets you hide the implementation of a base method, such that the method that runs is determined by the variable's declared type, not the instantiated type. There is no polymorphism here.

I think the best way to explain this is via a code sample that shows the different effects of using virtual vs. new. Say you have the following two classes:

    public class MyBase
    {
        public virtual string MyVirtualMethod()
        {
            return "Base";
        }

        public string MyNonVirtualMethod()
        {
            return "Base";
        }
    } //end of class

    public class MyDerived : MyBase
    {
        public override string MyVirtualMethod()
        {
            return "Derived";
        }

        public new string MyNonVirtualMethod()
        {
            return "Derived";
        }
    } //end of class

This has two classes, MyBase and MyDerived. For illustration, each class has only two methods. For comparison, MyBase has one method virtual and one not; likewise, MyDerived overrides the virtual method and hides the non-virtual one. The following code snippet shows two separate cases. Note that the virtual vs. new keywords affect the value of MyNonVirtualMethod based on whether the object is declared as type MyBase or MyDerived.

MyBase m1 = new MyDerived();
Console.WriteLine("MyVirtualMethod='" + m1.MyVirtualMethod() + "', MyNonVirtualMethod='" + m1.MyNonVirtualMethod() + "'");
//    Returns: MyVirtualMethod='Derived', MyNonVirtualMethod='Base'

MyDerived md = new MyDerived();
Console.WriteLine("MyVirtualMethod='" + md.MyVirtualMethod() + "', MyNonVirtualMethod='" + md.MyNonVirtualMethod() + "'");
//    Returns: MyVirtualMethod='Derived', MyNonVirtualMethod='Derived'

Note that in the first case, where we declare the object as type MyBase yet instantiate it as type MyDerived, MyNonVirtualMethod is resolved against the declared type. In the second case, it is resolved against the instantiated type.

These concepts are very common when declaring an array of base objects, and then cycling through the array and calling virtual members. A classic example is making an array of Shape objects, calling the GetArea() method on each one, and it correctly returning the area for that shape.
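To make that classic example concrete, here is a minimal sketch (the Circle and Square classes are hypothetical, not from any of the sources above):

```csharp
using System;

public abstract class Shape
{
    public abstract double GetArea();
}

public class Circle : Shape
{
    private double _radius;
    public Circle(double radius) { _radius = radius; }
    public override double GetArea() { return Math.PI * _radius * _radius; }
}

public class Square : Shape
{
    private double _side;
    public Square(double side) { _side = side; }
    public override double GetArea() { return _side * _side; }
}

public class ShapeDemo
{
    public static void Main()
    {
        // Each element is declared as the base type Shape, but GetArea()
        // dispatches to the instantiated type - this is the polymorphism:
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        foreach (Shape s in shapes)
            Console.WriteLine(s.GetArea());  // prints 3.14159..., then 4
    }
}
```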

What is the difference between virtual and abstract?

These two keywords are sometimes confused because both can be overridden. However, unlike virtual members, abstract members have no implementation. As Deitel puts it, "Abstract base classes are too generic to define real world objects" (C# How to Program, pg 392). The abstract keyword can be applied to either members or the class as a whole. Note that if a class has an abstract member, then that class must itself be abstract (there is no such thing as a "virtual" class).

In the following code snippet (from Deitel, pg. 394), the abstract class Shape has two virtual methods, Area and Volume, and one abstract property, Name. Note that both virtual members have a concrete implementation. If you create a derived class "Point" and call its Area method, it will return 0.

public abstract class Shape
{
    public virtual double Area()
    {
        return 0;
    }

    public virtual double Volume()
    {
        return 0;
    }

    public abstract string Name { get; }
}
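To round out the snippet, a derived class in the spirit of Deitel's Point example might look like this sketch (not his exact code):

```csharp
public class Point : Shape
{
    // Point must implement the abstract member Name; it simply
    // inherits the default Area and Volume implementations.
    public override string Name
    {
        get { return "Point"; }
    }
}

// new Point().Area() returns 0, and new Point().Name returns "Point".
```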

The following comparison highlights these differences:

  • Has implementation - Virtual: yes; Abstract: no.
  • Scope - Virtual: members only; Abstract: members and classes.
  • Can create an instance of - Virtual: not applicable (you can only create instances of classes, not members); Abstract: no, but you can create an instance of a class that derives from an abstract class, and you can declare a variable of the abstract type and instantiate it as a derived class.

Common Abstract classes in the .Net framework include System.Xml.XmlNode and System.IO.Stream. Perhaps the most common virtual method is System.Object.ToString().

Tuesday, February 22, 2005

Using HttpUtility.UrlEncode to Encode your QueryStrings

[This was originally posted at]

Perhaps the most popular way to pass data between web-pages is via querystrings. This is used to both pass data to a new pop-up window, as well as to navigate between pages. (Side note: A querystring is the part of the URL that occurs after the ?. So in http://localhost/myWeb?id=3&name=Tim, id=3&name=Tim is the querystring. The querystring provides name-value pairs in the form of ?name1=value1&name2=value2...)

While this works great for simple alpha-numerics, it can be a problem to pass special characters in the URL, especially in different browsers.

  • An ampersand would split the name-value pairs. For example, if you want to pass the value "A&B" as "id=A&B", then getting the querystring "id" will return just "A", and B will be interpreted as its own key - the value is truncated.
  • Apostrophes, greater than or less than signs may be interpreted as a cross-site scripting attack by some security plug-ins. As a result, these plug-ins may block the entire page.
  • Other special characters (like slash or space) may be lost or distorted when sending them into a url.

While some may argue that querystring values should only contain simple IDs, there are legitimate benefits to being able to pass special characters. For example:

  • Legacy Systems - The client's legacy system could include & or ' in the primary key.
  • Performance - You could be returning a value (such as a name like "O'reilly" or "Johnson & Sons") from a pop-up control. Just passing the id would require re-hitting the database. Therefore you could pass the name as well to help performance.

Fortunately there is a solution to handling special characters. .Net provides us the ability to Encode and Decode the URL using System.Web.HttpUtility.UrlEncode and HttpUtility.UrlDecode (note this is not HtmlEncode, which encodes html, and won't affect the &. We want Urls). This replaces problematic characters with URL-friendly equivalents.
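For example (UrlEncode and UrlDecode are the real .Net APIs; the page name is made up):

```csharp
using System;
using System.Web;  // requires a reference to System.Web.dll

public class EncodeDemo
{
    public static void Main()
    {
        string strName = "Johnson & Sons";

        // Encode the value before putting it in the querystring:
        string strUrl = "Detail.aspx?name=" + HttpUtility.UrlEncode(strName);
        Console.WriteLine(strUrl);  // Detail.aspx?name=Johnson+%26+Sons

        // Decode on the receiving page (Request.QueryString actually does
        // this for you in ASP.Net; shown explicitly here):
        Console.WriteLine(HttpUtility.UrlDecode("Johnson+%26+Sons"));  // Johnson & Sons
    }
}
```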

UrlEncode translates each problematic character; for example, a space (ASCII 32, hex 20) becomes "+". Alpha-numerics aren't affected, and a handful of other special characters aren't encoded either. [The original tables of encoded and unencoded characters are omitted here.]
Side note 1: You can see the full ASCII tables online.

Side note 2: You can generate these tables with a simple loop like so:

for (int i = 0; i < 128; i++)
{
    string s = ((char)i).ToString();
    Console.WriteLine(i.ToString() + " Char: [" + s + "], UrlEncode: [" + System.Web.HttpUtility.UrlEncode(s) + "]");
}

Essentially UrlEncode replaces many problematic characters with "%" + their ASCII Hex equivalents.

Most of these remaining special characters don't pose a problem, except for the apostrophe that can cause cross-site scripting warnings. For that, one solution is to replace it with a unique token value, such as "%27". Note that we pick a reasonable token - "27" is the ASCII Hex for the apostrophe, and it follows the pattern of other Encodings. We could then write our own Encode and Decode methods that first apply the UrlEncode, and then replace the apostrophe with the token value. These methods could be abstracted to their own utility class:

public static string UrlFullEncode(string strUrl)
{
    if (strUrl == null)
        return "";
    strUrl = System.Web.HttpUtility.UrlEncode(strUrl);
    return strUrl.Replace("'", _strApostropheEncoding);
}

private const string _strApostropheEncoding = "%27";

public static string UrlFullDecode(string strUrl)
{
    if (strUrl == null)
        return "";
    strUrl = strUrl.Replace(_strApostropheEncoding, "'");
    return System.Web.HttpUtility.UrlDecode(strUrl);
}


Monday, February 21, 2005

Making a MessageBox in JavaScript

[This was originally posted at]

This is classic - I lose track of how often I see people asking the practical question "How do I create a MessageBox in ASP.Net?" The basic problem is that MessageBoxes occur at the client-side, whereas ASP.Net runs on the server. Therefore you need to have ASP.Net call client-side script, like JavaScript, in order to create the MessageBox.

I had the opportunity to explain this in detail in an article at 4guysfromrolla, at: Although that article was a year ago, I still get regular emails about parts of it. Most people wanted to know how to have a button give a Yes/No prompt, and then trigger the appropriate server event. Eventually, to simplify it, I wrote a custom server control and uploaded it to the control gallery on, at: It's a free control, with a link to a demo.

[Update]: I posted the source code for this control:

The gist of it is you:

  1. Have ASP.Net create the JavaScript confirm or prompt using either Me.BtnDelete.Attributes.Add(...) or Page.RegisterStartupScript
  2. Put a runat=server hidden html control on the WebForm.
  3. Clicking a button on the JavaScript messagebox runs a client-side script that (1) sets the hidden control's value property and then (2) submits the form (to return flow to the server).
  4. The ASP.Net classes can access the hidden control's value property at the server.

Essentially, you go from Server to Client by registering the script, you go from client to server by submitting the form, and you make data accessible to both by storing it in a runat=server hidden control.
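A minimal C# sketch of those steps (the control names hdnConfirm and BtnDelete are hypothetical; the actual custom control differs):

```csharp
// Step 2: the WebForm contains a runat=server hidden html control:
//   <input type="hidden" id="hdnConfirm" runat="server" />
private void Page_Load(object sender, System.EventArgs e)
{
    // Step 1: wire the client-side confirm; step 3 happens in the
    // browser, storing the user's choice in the hidden control before
    // the form submits:
    BtnDelete.Attributes.Add("onclick",
        "document.getElementById('" + hdnConfirm.ClientID + "').value" +
        " = window.confirm('Are you sure you want to delete?');");
}

private void BtnDelete_Click(object sender, System.EventArgs e)
{
    // Step 4: back on the server, read the hidden control's value:
    if (hdnConfirm.Value == "true")
    {
        // perform the delete
    }
}
```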

Sunday, February 20, 2005

Using System.Diagnostics to run external processes

[This was originally posted at]

Sometimes you'll want to run an external program, like bat, exe, or vbs. I find this especially useful for testing and deployment - I'll want to reset IIS or run a VBScript that initializes the environment, or call osql (the command line for SQL Server) to run a database script. Fortunately .Net provides a great way to do this using the System.Diagnostics namespace.

The simplest approach is a single static call that takes the filename and command line arguments, like so:

System.Diagnostics.Process.Start("Simple.bat", "arg1 arg2");

This opens up a console window and runs the command. While it works for simple things, such as resetting IIS, there are two additional features we may want, especially for quick-running processes: (1) run the process in a hidden window, and (2) wait for the process to exit. We can do both with code like the following:

public static void RunHiddenConsole(string strFileName, string strArguments, bool blnWaitForExit)
{
    //run the process without showing a dialog window:
    ProcessStartInfo psi = new ProcessStartInfo();
    //psi.CreateNoWindow = true;
    psi.WindowStyle = ProcessWindowStyle.Hidden;
    psi.FileName = strFileName;
    psi.Arguments = strArguments;
    Process p = System.Diagnostics.Process.Start(psi);
    if (blnWaitForExit)
        p.WaitForExit();
} //end of method

To run the process in a hidden window, we create a ProcessStartInfo object and specify its window style to be Hidden. We can then pass this ProcessStartInfo into the Process.Start method. To wait for the exit, we create a process object from the Process.Start method, and use its WaitForExit method.

Note that this is very context-independent functionality, and is therefore great to put into your own utilities class that you can reuse from project to project.
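For example, test-setup code might call it like so (osql's -E, -d, and -i switches are real; the database and script names are made up):

```csharp
// Reset IIS, waiting for it to finish before the tests continue:
RunHiddenConsole("iisreset.exe", "", true);

// Run a database script through osql (trusted connection, given
// database, given input file), again waiting for exit:
RunHiddenConsole("osql.exe", "-E -d MyTestDb -i InitTestData.sql", true);
```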

Thursday, February 17, 2005

Validating external dependencies

[This was originally posted at]

In any project, there are always external dependencies that your components depend on. Perhaps two of the most common are database tables that you can only read from, and someone else's machine that you need to deploy to. I've found some tips for handling each of these cases.

Verifying that your data works for external datatables

Say that there are datatables that you use, but they are maintained by another department. For example, you may need to use an external table as lookup values in your own records, but someone may change the source table after you've already gotten the value from it. Even worse, those tables might not have any referential integrity on them. Although your component may work perfectly with all your unit-test data, it's very possible that their data could be invalid. Given that you don't have the resources to also test every way that their data could be invalid, it would be nice to run a quick diagnostic check.

One solution is to create a stored procedure to do this. You could check that lookup values still exist in the source table or that certain char/varchar fields don't contain invalid characters. A simple such SP with two rules might be:

    if exists(select * from TblCode where myCode is null)
        Print 'Error: TblCode has a null myCode.'

    if exists (select * from TblLocation where [description] like '%&%')
        Print 'Error: Location can not contain a & '

Verifying someone else's environment is set up correctly.

It seems like every developer I meet, including myself, has the problem of writing code that works in our own environment but fails elsewhere. While sometimes it's a legitimate bug in the code, sometimes it's an error in the other person's environment. For example, the other machine might be missing the correct prerequisite software or have a bad configuration file. I especially see this problem with unit tests, because all the developers want to be able to run all the unit tests on their machines. This means that your code will frequently be run on another's machine.

A solution I've found useful is for the team or testing lead to write a small set of diagnostic tests to check that the environment is set up correctly. These diagnostic tests could include:

  • Write "Hello World" to the command line to ensure that test engine is installed correctly
  • Access external data stores (such as the database or file system) to ensure that the system is configured correctly.

This way I find that if someone says the tests are failing, I can quickly diagnose if they have an easily-fixable environmental problem like bad config files.
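As a sketch, such diagnostic tests (written for NUnit; the "connectionString" appSetting key is hypothetical) might look like:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class EnvironmentDiagnostics
{
    [Test]
    public void TestEngineIsInstalled()
    {
        // If even this fails, the test engine itself isn't set up:
        Console.WriteLine("Hello World");
    }

    [Test]
    public void CanReachDatabase()
    {
        // Fails fast if the config file is missing or the
        // connection string points at the wrong database:
        string strConn = ConfigurationSettings.AppSettings["connectionString"];
        using (SqlConnection conn = new SqlConnection(strConn))
        {
            conn.Open();
        }
    }
}
```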

Tuesday, February 15, 2005

Remote Desktop

[This was originally posted at]

Remote Desktop (a.k.a. Terminal Services) allows you to remotely connect to another machine. Whenever you need to do tasks across machines, such as setting up a build process or logging into your desktop from work, RDT is great. While this tool has been out for several years, it now comes automatically included in Windows XP.

There is a feature that I found very useful - the ability to copy files between computers. It's one thing to be able to see another machine, and even manipulate data on it, but very quickly you'll want to transfer data between machines. Of course this could pose a security risk because a hacker who got remote access could now upload a dangerous file, but the risk seems worth it in most cases.

Below I outline the steps to configure RDT to allow file transfers.

First open up RDT from Start > All programs > Accessories > Communications > Remote Desktop Connection. You'll be prompted with a login window. You can enter either a computer name or IP address.

Select the Options button and go to Local Resources > Local Devices group > check "Disk Drives".

You may still be prompted about making the drives available; select 'OK'.

You can find out more information about Remote Desktop by clicking the Help button in the connection window, or searching on MSDN.

Sunday, February 13, 2005

Tips for making machine-independent code

[This was originally posted at]

The code you write as a developer will be run on other people's machines. So while it's great that it runs on your local machine, people (such as your manager and other developers) will come knocking on your door if your code doesn't also run on their machines. There are a couple of tips I've found useful for writing machine-independent code:

Never hard-code absolute file paths.

Physical paths change from machine to machine. The harddrive letter may change (your machine may be "C", but perhaps a build server will be something else like "D"). Developers, or even external tools, may also run the code in different directories. Hard-coding absolute paths in the code itself will therefore guarantee errors. There are a couple of tips to avoid hard-coding the paths:

  1. Use relative paths where appropriate.
  2. Use IO.Path.GetTempPath() as the temporary directory.
  3. When running from batch scripts, use the %cd% to get root directory as opposed to hard-coding the directory. Also, use environmental variables for common folders such as %windir% and %programFiles%.
  4. Make an appSetting in the app.config/web.config to store the directory.
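Tips 1, 2, and 4 might look like this in code (the "WorkingDirectory" appSetting key is hypothetical):

```csharp
using System;
using System.Configuration;
using System.IO;

public class PathDemo
{
    public static void Main()
    {
        // Tip 2: a temp directory that exists on every machine:
        string strTemp = Path.GetTempPath();

        // Tip 4: read the root directory from config instead of hard-coding:
        string strRoot = ConfigurationSettings.AppSettings["WorkingDirectory"];

        // Tip 1: build paths relative to those roots:
        Console.WriteLine(Path.Combine(strTemp, "myFile.txt"));
    }
}
```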

Write-File methods create the necessary directories.

Consider having your write-file method create the necessary directory. For example, if you're trying to write out to "C:\Projects\myFile.txt", but the "Projects" folder does not exist, the write will fail. There is a pro and con to this:

  • Pro: Your file paths will automatically be created if they haven't been initialized, preventing a crash.
  • Con: Could pose a security risk - you may want to lock down the application so that it does not have permission to create its own directories, since a hacker could possibly exploit this to create an unintended directory.

If you opt not to have your write-methods create their own directories, then at least have an initialization script that creates the appropriate directories.
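A write-file method along those lines might be sketched as:

```csharp
using System.IO;

public class FileUtilities
{
    public static void WriteFile(string strPath, string strContent)
    {
        // Create the parent directory if it doesn't already exist
        // (note the security trade-off discussed above):
        string strDir = Path.GetDirectoryName(strPath);
        if (strDir != null && strDir.Length > 0 && !Directory.Exists(strDir))
            Directory.CreateDirectory(strDir);

        using (StreamWriter writer = new StreamWriter(strPath, false))
        {
            writer.Write(strContent);
        }
    }
}
```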

Use Embedded Resources

I discussed the benefits of embedded resources in my previous post. The main benefit for machine independent code is that embedded resources are contained within the DLL, ensuring that you (1) don't need to worry about the file path of each resource, and (2) only need to transfer 1 file (the DLL) from the bin instead of all the source resource files. Two particularly useful methods here would be:

  • GetEmbeddedResourceContent - returns a string of the resource content, given the assembly and resource name.
  • CreateFileFromEmbeddedResource - makes a physical copy of the resource and returns the file path.

Make sure that your code runs on the development build

The build server contains the official version of your code. If your code isn't even flexible enough to run on the build server, then it is definitely "broken".

Thursday, February 10, 2005

Embedded Resources

[This was originally posted at]

Many non-trivial programming tasks require using data files such as xml, images, bat files, or sql scripts. There are common problems associated with using these files:

  1. They may not be included in the MSI during deployment. Merely adding a batch script to a project will not include it in the MSI's files, even if you add it in the file editor.
  2. They often need to be referenced by a physical file path, which may be difficult to get, or unintuitive if the base calling assembly is shadow-copied (this can happen if you test with NUnit).
  3. It can be hard to manage tons of individual files and keep their paths straight - especially if you're sending your code to someone else's machine.

What we'd really like is a way to package those files in the DLL. Fortunately .Net lets us do exactly that – make them embedded resources. This will embed the file within the DLL during compilation, ensuring that it is both included in the MSI and accessible wherever the DLL is copied to. Embedded resources do not appear as their own separate files. Therefore 100 images embedded into a single DLL shows up as only one convenient file.

The steps to embed a resource are simple: in Solution Explorer, select the file and set its Build Action to "Embedded Resource". This alone will solve the first problem of not being included in the MSI.

Once embedded, there are two ways that we'd like to be able to access the content: (1) A string of the direct content (no physical file needed). (2) A physical file path. We can handle both of these using the System.IO and System.Reflection namespaces.

Getting the direct content
The code below is a method that takes an assembly and the fully qualified resource name (with namespace), and returns a string of the content.

private static string GetEmbeddedResourceContent(Assembly asm, string strResourceName)
{
    string strContent = "";
    Stream strm = null;
    StreamReader reader = null;
    try
    {
        //get resource:
        string strName = asm.GetName().Name + "." + strResourceName;
        strm = asm.GetManifestResourceStream(strName);
        if (strm == null)
        {
            strContent = null;
        }
        else
        {
            //read contents of embedded file:
            reader = new StreamReader(strm);
            if (reader == null)
                strContent = null;
            else
                strContent = reader.ReadToEnd();
        } //end of if
    }
    finally
    {
        if (reader != null)
            reader.Close();
        if (strm != null)
            strm.Close();
    } //end of finally

    return strContent;
} //end of method

So within the assembly "MyAssembly", you could access the sql script "AddData.sql" in folder "Scripts" like so:

string strActual = GetEmbeddedResourceContent(typeof(MyAssembly.MyClass).Assembly, "Scripts.AddData.sql");

Accessing via a physical file
This may be useful if you needed the file for a batch script – for example if you wanted osql.exe to run a sql script. Because an embedded resource is not its own physical file, we need to first get the content (like above) and then copy this somewhere.
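A sketch of the CreateFileFromEmbeddedResource method mentioned earlier (writing to the temp directory is one possible approach, not necessarily the original's):

```csharp
using System.IO;
using System.Reflection;

public class ResourceUtilities
{
    public static string CreateFileFromEmbeddedResource(Assembly asm, string strResourceName)
    {
        // Get the content, using the same technique as above:
        string strName = asm.GetName().Name + "." + strResourceName;
        string strContent;
        using (Stream strm = asm.GetManifestResourceStream(strName))
        using (StreamReader reader = new StreamReader(strm))
        {
            strContent = reader.ReadToEnd();
        }

        // Copy the content to a physical file (e.g. for osql.exe
        // to run) and return that path:
        string strPath = Path.Combine(Path.GetTempPath(), strResourceName);
        using (StreamWriter writer = new StreamWriter(strPath, false))
        {
            writer.Write(strContent);
        }
        return strPath;
    }
}
```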

Embedded Resources are just one more useful technique. You'll find that when you need them, you really need them.