Monday, February 23, 2009

What makes a framework easy to experiment with?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/what_makes_a_framework_easy_to_experiment_with.htm]

I've been reading the excellent Framework Design Guidelines. One of the sections that stood out offers tips to make your framework easy for other developers to experiment with. Since most developers are experimental in nature, a framework must be "cooperative" in this way in order to be popular.

  • "Allow a developer to use it immediately, whether or not it does what the developer ultimately wants it to do or not" (23).
  • "Types used in advanced scenarios should be places in subnamespaces" (23).
  • "Provide simple overloads of constructors and methods" (24). A type that is hard to instantiate is hard to experiment with.
  • Give common scenarios recognizable names. For example, "MyClass.CreateFile" sounds more friendly than "MyClass.OpenInputOutputStream", so more developers would naturally experiment with the former.
  • "...a default should never result in a security hole or horribly performing code." (26)
  • "Do ensure that APIs are intuitive and can be successfully used in basic scenarios without reference documentation (27). I notice their emphasis - instead of writing tons of tutorials (which most devs won't even read), make the framework itself easy to use.
  • Make things strongly typed.
  • And the classic - A good framework makes it easy to do the right things, and hard to do the wrong things.
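
For instance, here's a quick hypothetical sketch (the class and method names are mine, not from the book) of the "simple overloads" and "recognizable names" tips in practice - the common scenario gets one friendly call, while the advanced knobs live in a richer overload:

    using System.IO;
    using System.Text;

    public static class LogFile
    {
        // Simple, discoverable entry point for the common scenario.
        public static StreamWriter Create(string path)
        {
            return Create(path, false /*append*/, 4096 /*bufferSize*/);
        }

        // Richer overload for developers who need more control.
        public static StreamWriter Create(string path, bool append, int bufferSize)
        {
            return new StreamWriter(path, append, Encoding.UTF8, bufferSize);
        }
    }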

They offer ADO.Net as an example of what is confusing (juggling all the different types just to hit the database).
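
For instance, even a trivial query walks you through several types, each with its own lifetime to manage (the table and column names below are made up):

    using System;
    using System.Data.SqlClient;

    class AdoNetJuggling
    {
        static void PrintEmployeeNames(string connectionString)
        {
            // Connection, command, and reader all have to be created, opened,
            // and disposed in the right order just to read one column.
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand("SELECT Name FROM Employees", connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }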

These are good points, and easy to overlook, but make sense when someone points them out to you.

Sunday, February 22, 2009

Killing the file handles, but not the process (from the command line)

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/killing_the_file_handles_but_not_the_process_from_the_comm.htm]

Most developers have come across that annoying error where you try to delete or rename an innocent file, only to be rebuffed with "Access denied, another process is accessing this file." For example, you'll get this type of error if you create a text file, open it in Microsoft Word, and then try to delete the file. This happens for all sorts of reasons - whether it's a process that didn't clean up after itself (perhaps from an unexpected crash), or something like IIS that locks certain website directories. While the Sysinternals Process Explorer GUI is great for finding what process is locking the file, what if you just want to find the handle and kill it - from the command line?

Note that there's a difference between the process and the handle that the process has on the file. For example, if you want to delete a website directory that IIS is locking, you could shut down or kill IIS (i.e. the process itself), but what if you want to leave IIS running? Killing just the handle would leave the process intact, while giving you control of the file again.

A useful tool for this is the (free) Sysinternals Handle.exe. From the command line, you can query all handles to a file, and then you can close those handles. For example, this will find all handles on the given directory:

handle.exe C:\MyFolder

And it returns something like this:

inetinfo.exe pid: 1696 3DC: C:\MyFolder\MySubfolder
inetinfo.exe pid: 1696 3E0: C:\MyFolder

You can then use this info to close the handles (one-by-one) without killing the process, by passing the "-c" switch with the handle id (in hex), the "-p" switch with the process id, and the "-y" switch to confirm the close:

handle.exe -c 3DC -p 1696 -y
handle.exe -c 3E0 -p 1696 -y

You could glue these two steps together into one by calling System.Diagnostics.Process to run the first command, parse its output, and then use that to create the parameters for the second command.
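
Here's a rough sketch of that glue code (it assumes handle.exe is on the path and that its output matches the format shown above, so treat it as a starting point rather than production code):

    using System;
    using System.Diagnostics;
    using System.Text.RegularExpressions;

    class HandleKiller
    {
        // Runs a command line and returns its standard output.
        static string RunCommand(string fileName, string arguments)
        {
            ProcessStartInfo startInfo = new ProcessStartInfo(fileName, arguments);
            startInfo.UseShellExecute = false;
            startInfo.RedirectStandardOutput = true;
            startInfo.CreateNoWindow = true;

            using (Process process = Process.Start(startInfo))
            {
                string output = process.StandardOutput.ReadToEnd();
                process.WaitForExit();
                return output;
            }
        }

        static void Main(string[] args)
        {
            string folder = args[0]; // e.g. C:\MyFolder

            // Step 1: list all handles on the folder.
            string listing = RunCommand("handle.exe", "\"" + folder + "\"");

            // Step 2: pull the process id and handle id (hex) out of lines like
            //   inetinfo.exe pid: 1696 3DC: C:\MyFolder\MySubfolder
            // and close each handle without killing the process.
            Regex pattern = new Regex(@"pid:\s*(?<pid>\d+)\s+(?<handle>[0-9A-Fa-f]+):");
            foreach (Match match in pattern.Matches(listing))
            {
                string pid = match.Groups["pid"].Value;
                string handleId = match.Groups["handle"].Value;
                RunCommand("handle.exe", "-c " + handleId + " -p " + pid + " -y");
            }
        }
    }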

I'm sure there are more elegant ways (perhaps direct C++ calls, or even some special method in the .Net Framework), and I'm all ears for any, but this approach will get the job done.

Tuesday, February 17, 2009

The difference between Count, Length, and Index

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/the_different_between_count_length_and_index.htm]

When dealing with arrays and collections (like List), there are three integer "things" that can mess people up: Count, Length, and Index.

  • Count - refers to collections. This simply gets the number of items in the collection.
  • Length - refers to arrays. To quote the docs: "Gets a 32-bit integer that represents the total number of elements in all the dimensions of the Array." (emphasis added). For a 1-d array, Count and Length seem similar. But for multi-dimensional arrays, the difference becomes apparent: a 3x2 array has a Length of 6. Because an array is allocated up front (as opposed to a collection that can grow or shrink), this conceptually makes sense. Length for an array doesn't change after declaration; Count for a List does (as you add or remove items).
  • Index - used by arrays, and some collections (like List), to indicate a specific item in the array or collection. Whereas Count and Length give the total number of items, Index is 0-based, so the last item is at index Count - 1 (or Length - 1).

This code snippet shows these in action:

    [TestMethod]
    public void TestMethod1()
    {
      //Length --> total number of items in the array
      //  acts like "Count" for a 1-d array
      string[] astr = new string[]{"a","b","c"};
      Assert.AreEqual(3, astr.Length);

      //  but very different for a 2-d array
      string[,] astr2 = new string[2, 3];
      Assert.AreEqual(6, astr2.Length); //Length = 2 * 3

      //Count --> 1-based
      List<string> lstr = new List<string>();
      lstr.Add("a");
      lstr.Add("b");
      Assert.AreEqual(2, lstr.Count);

      //Index --> 0-based
      Assert.AreEqual("a", astr[0]);
      Assert.AreEqual("b", lstr[1]);
     
    }

Monday, February 16, 2009

Real life: Taking down the Christmas lights and project failure

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/real_life_taking_down_the_christmas_lights_and_project_fail.htm]

In Chicago, we've had another cold winter. Finally, we got a reprieve as it got warm enough for the snow to melt. I looked outside at our 25 foot (?) pine tree, and saw my opportunity to take off the lights. I had done this before, and I figured it should be a quick "project".

However, as luck would have it, things did not go well. The winter wind must have pushed the lights closer to the tree trunk and entangled them in the branches. I had borrowed someone else's extension pole to put them up initially, and had since returned it, so I did not have the ideal tools. Regardless, I started out working on the bottom of the tree - close to the ground where I had the most maneuverability - and things seemed optimistic. However, as I worked on higher and higher branches, progress got slower. At first I thought I'd just "tough" it through, perhaps with the job taking 2-3 times longer than I thought, but things screeched to a halt at the 15-foot mark.

With the long strands of lights entangled in all the branches, I had no choice but to actually cut the lights. I thought that I had found my solution, so now it would just take 4-5 times longer than scheduled, and I'd need to throw away the lights. But, at the 20-foot mark, I didn't even have the tools with which to cut the lights. Now I was desperate - it would be a fire-hazard to have openly-cut electrical lights hanging on a tree 20 feet in the air. So, I broke down and drove to the hardware store. The guy found an old Christmas-light extension pole, complete with claw for grasping lights to pull them down (i.e. the right tool), and I returned home with new hope.

Finally, with this new tool, I could finish the job - behind schedule and with a mess of cut-up lights for garbage. It almost felt like a software project.

Sunday, February 15, 2009

Part of the problem testing UI technologies is that they change so frequently

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/part_of_the_problem_testing_ui_technologies_is_that_they_cha.htm]

Anyone who has actually tried to write automated tests knows that testing the UI is much harder than testing a backend class library. Besides presenting an obscure interface to test against (like parsing an HttpResponse), and juggling far more dependencies (like needing a live application, complete with database and HttpContext), another problem is that UI technologies change much more frequently than backend ones. For example, you could still call a C# .Net 1.0 library from 8 years ago. However, during that time, the web UI has gone from plain Html (request/response), to JS-enabled, to advanced JS with Ajax/JSON/JQuery (by now your test harness needs its own JS engine), to Silverlight (where the request/response model no longer even applies). By the time you've built a robust UI test framework, your UI could be using a different technology that your framework cannot handle.

This makes it much harder to write automated tests for the GUI - i.e. it's not the sort of thing you just add in hindsight. There seem to be a lot of projects out there that have designed themselves into a corner. A manager rushes the project to market with an un-testable design because they think they'll just whip up UI testing later, but then they find they've accrued an insurmountable technical debt, and they just can't have it. It's like spending years constructing the pyramid on the east side of the river, and then - oops - realizing they wanted it on the west side instead.

I think this is one of the reasons why good unit testing (and MVC) is becoming so popular in many circles.

Thursday, February 12, 2009

Should a good developer know all six StringBuilder overloads?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/should_a_good_developer_know_all_six_stringbuilder_overloads.htm]

Short answer: no, that's just trivia that you can look up.

Long answer: I'm a big fan of focusing on concepts as opposed to trivia. I don't really care whether a developer has all six constructor overloads for the StringBuilder class memorized - that's what reference guides are for.

However, I would expect that a decent developer could guess which constructors it should at least have, based on good design principles.

For example, I'd expect a basic empty constructor. However, since it's an object that grows in size, I'd also expect some way to initialize its anticipated size. Another simple case would be a way to initialize it with starting data.

In other words, I'd expect that a developer could guess that StringBuilder would at least have these three constructors:

  • StringBuilder() --> need an empty constructor for usability
  • StringBuilder(Int32) --> need a way to initialize the size
  • StringBuilder(String) --> need a way to initialize the starting string

I would see anticipating these constructor overloads as "conceptual" knowledge, not "trivia". I'm sure that a really smart developer could probably infer the other three overloads, but I think these first ones are good enough for average development.
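
For what it's worth, here's a quick snippet exercising those three overloads (the values are arbitrary):

    using System;
    using System.Text;

    class StringBuilderOverloads
    {
        static void Main()
        {
            StringBuilder empty = new StringBuilder();            // empty constructor
            StringBuilder sized = new StringBuilder(1024);        // pre-allocate the anticipated size
            StringBuilder seeded = new StringBuilder("Hello, ");  // initialize with starting data

            seeded.Append("world");
            Console.WriteLine(seeded.ToString()); // prints "Hello, world"
            Console.WriteLine(sized.Capacity);    // prints "1024"
            Console.WriteLine(empty.Length);      // prints "0"
        }
    }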

Tuesday, February 10, 2009

How slow build time hurts code quality

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_slow_build_time_hurts_code_quality.htm]

Every developer on a large-scale project has probably had to deal with slow compile times. For example, you change one line in a backend assembly, and you wait three whole minutes for the assembly to recompile. While I understand that there are some cases out there where this is just life (I've had smart people on enormous projects drool over the thought of "just" three minutes to compile), in general, this is very bad for application developers.

It's not just the three minutes lost waiting for the compiler. There are at least a few other big problems:

  1. It constantly ruins your train of thought. Imagine a train that needs to stop at every station - it's not just the time stopped (i.e. compiling), it's also the time accelerating and decelerating (i.e. getting back in the groove). If Visual Studio is effectively frozen for over a minute because it's compiling, most devs will distract themselves with something else - and remain distracted even once Visual Studio finishes.
  2. It discourages developers from writing unit tests. For example, in an ASP web project, you can compile just the single page you're on. So, rather than a developer putting code in a backend assembly (where you can test it), they'll put it in the codebehind (where you cannot test it) so that it compiles fast enough and they can move on to the next thing (this example assumes no MVC).
  3. It discourages developers from refactoring. If even removing a dead comment will cost you several minutes, developers simply won't clean up "working" code.

 In other words, given human nature - slow compile times don't just slow you down, they degrade your code quality.

While sometimes the machine is just slow, there are a lot of tips and pointers out there for tweaking Visual Studio to run faster.

Another big thing is to split up your assemblies. If you have a 5MB assembly, changing one line will require rebuilding all 5MB. However, if you split that up into five one-MB assemblies, changing a line requires rebuilding just that single assembly, sparing you from rebuilding the other 4MB. There's a balance between the number and size of assemblies, but it's a good thing to keep in mind when dealing with slow build times.
 

Wednesday, February 4, 2009

How do you know when a project is screwed?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_do_you_know_when_a_project_is_screwed.htm]

At the MSDN DevCon last month in Chicago, one of the groups in the community center had a very good discussion going - "How do you know when a project is screwed?"

 

A lot of good ideas floated around:

  • There is no test code

  • All code just has  "catch (Exception ex)" everywhere

  • There is no source control - really, some projects still just make zip-file backups, not even using the free SVN.

  • Reinventing basic framework methods

  • Lack of team innovation

I think that team chemistry and discipline are also very good indicators of project success. A team that hates each other will likely fail, even if they're all individual stars. So bad team chemistry is another good indicator of failure.

 

This sort of thing is very open-ended. I'm assuming that anyone who's been around long enough has seen a dying project, and there are a slew of different reasons.

Tuesday, February 3, 2009

Book: Working Effectively with Legacy Code

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_working_effectively_with_legacy_code.htm]

A while back I was working on some huge legacy security component. I found it quite challenging, especially the legacy code part of it. Afterwards I thought about writing a blog post on "tips for working with legacy code". While I never got around to that, I did recently finish reading Michael Feathers' excellent book Working Effectively with Legacy Code. His book is infinitely better than any measly blog post I could have come up with.

 

This book is awesome. I encounter people who effectively (and naively) say "just write it perfectly the first time." However, that misses the point. For example, many devs weren't even around when the system was first written - they're inheriting someone else's code. The author tackles the problems head-on with concise examples and clear guidance. The book has three parts: the first gives a general overview and explains why legacy code is really a problem, the second offers tons of practical ways to test difficult code, and the third explains how to break dependencies so that the code is no longer so tightly coupled.

 

Two main themes of the book are (A) you want to be able to somehow write unit tests for this code, and (B) tightly-coupled code makes that very difficult. For example, if you've got some "Employee" object, and its constructor requires a live database connection, singleton references, external configs, and web HttpContext access (like session state), you're somewhat screwed. He then proceeds to show how to salvage that situation by making low-risk changes that allow you to pull the code into test harnesses.
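
To make that concrete, here's a hypothetical sketch (the class and member names are mine, not the book's) of his "Parameterize Constructor" dependency-breaking technique:

    using System.Data;

    // The original class might new up its own dependencies inside the constructor:
    //
    //     public Employee()
    //     {
    //         _connection = new SqlConnection(/* config lookup */);         // live database
    //         _userName = (string)HttpContext.Current.Session["UserName"];  // web session state
    //     }
    //
    // which means a test can't even construct it. Parameterizing the constructor
    // moves those dependencies out, so a test harness can pass in fakes instead.
    public class Employee
    {
        private readonly IDbConnection _connection;
        private readonly string _userName;

        public Employee(IDbConnection connection, string userName)
        {
            _connection = connection;
            _userName = userName;
        }
    }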

 

You've got to love empathetic chapters like "it takes forever to make a change" and "I can't get this class into a test harness". I think this is a perfect book for anyone dealing with legacy code.

Sunday, February 1, 2009

What if you're a big fish in a small pond - life without a mentor?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/what_if_youre_a_big_fish_in_a_small_pond__life_without_a_m.htm]

In the ideal world, you'd work at a perfect company that surrounded you with wise mentors who could guide you past those seemingly insurmountable learning obstacles. Of course you'd work hard and take a stab at it first, but you'd know that an experienced guru would always be there to watch your back. But obviously, life isn't ideal, and many developers simply don't have that safety net of available mentors. Eager developers at small to mid-size companies, especially, may need to deal with being a big fish in a small pond. It's great that you can help your coworkers - but who is helping you? For example, I talk to a lot of guys in 5-person shops, and they're always the go-to guy, always the one setting the path.

 

I hear it can get tiring. Here are some ideas to deal with it:

  • Find others who are "bigger" - books, blogs, online forums, user groups. What's amazing about the internet (as opposed to 20 years ago) is that you can get access to all these brilliant minds out there.

  • Leverage your coworkers' niches - Chances are, even with a 5-person team, the total sum knowledge of the entire team is greater than yours, i.e. someone there knows things that you don't. Maybe you're a star UI developer, but the DBA can teach you a few tricks. While this may not take you deeper in your own niche, it will help you spread out and be more well-rounded.

  • Potentially leave your job - Sometimes you've outgrown your current job, and it may be time to "move on". For example, a lot of people go to industry leaders like Microsoft because they want the learning opportunities that a star company provides.

While always being the one breaking the ice, or drilling through rock to pave the path, may put you at a learning disadvantage, there is an advantage too: it forces you to proactively learn and to demonstrate leadership skills, and a lot of companies (and life situations) value that sort of thing.