Sunday, December 28, 2008

Book: Silverlight in Action

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_silverlight_in_action.htm]

I've been excited about Silverlight since I first heard about it over a year ago. Because I started looking at it as an alpha technology, there weren't even any books out yet. So, I used the quickstarts, Bill Reiss's game tutorials, Jesse Liberty's tutorials, and other people's blogs. Well, the books finally started coming out, and I got a copy of Chad Campbell and John Stockton's Silverlight in Action.

 

I liked it. While I got a lot of the background from the online Silverlight resources, the book was just thorough, filled in gaps, and had really good chapters on transforms, animation, and resources (they finally just "clicked" for me).

 

It reminded me of the "good old days" of learning a brand new technology, when you just huddled down with a book and a computer, and hour by hour kept learning something new. I think this book does a really good job of introducing developers to Silverlight.

Monday, December 22, 2008

New LCNUG website up

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/new_lcnug_website_up.htm]

Check out the new Lake County .Net User's Group website!

It has several new features, including a job board, forums, and other community-building features.

The LCNUG is only 7 months old, and it's been off to a great start. It's been exciting watching it flourish.

Next month (January), the topic will be on NHibernate.

Sunday, December 21, 2008

Super Mario in JavaScript (no kidding)

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/super_mario_in_javascript_no_kidding.htm]

I am impressed - someone has ported a trimmed-down Super Mario engine to JavaScript. Wow. I was impressed with Silverlight because it allowed 2D game development while letting you avoid writing so much JavaScript. But this - just wow; got to give credit where credit is due.

http://www.nihilogic.dk/labs/javascript-games/

Thursday, December 18, 2008

LCNUG: JavaScript - Beyond the Curly Braces

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/lcnug_javascript__beyond_the_curly_braces.htm]

Last night several people braved a potential snow storm to hear Sergio Pereira's presentation on advanced JavaScript. Sergio did a good job of explaining the language's advanced nuances without dwelling on the obvious stuff. It's a difficult balance to strike because everyone has already heard of JavaScript, but few people know its many obscurities.

 

Sergio blogs at: http://devlicio.us/blogs/sergio_pereira/

 

 

Tuesday, December 16, 2008

Why syntax is becoming less painful

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_syntax_is_becoming_less_painful.htm]

Unless they're language enthusiasts, most developers dislike "wasting" time trying to understand syntax. Syntax trivia makes for bad interview questions, developers dismiss problems with "that's just syntax", and it's generally considered a waste of mental energy to thrash over syntax.

 

Now, at least for the type of .Net application development that I often do, syntax is mostly pain-free, partly thanks to:

  • Powerful IDEs that include intellisense and compiler checking.

  • Billions of online examples that are a Google query away, such as complete reference guides, tutorials, forums, and fellow bloggers who encountered the exact same one-in-a-million bug.

  • Better-designed APIs. For example, both C# and VB.Net reuse the entire .Net framework, making syntax differences between them trivial.

  • Emergence of standards, like XML, HTML, and language conventions.

  • The deliberate attempt by designers and architects to reduce the need for syntax by wrapping interfaces with abstraction, using standards and patterns and reusable blocks, and leveraging config files.

  • More developers in the field whom you can ask syntax questions. For example, I can often ask clear-cut SQL syntax questions of our DBA, and he'll just nail them. ("What's the syntax for setting an index on a temp table...?")

  • More powerful hardware that lets you do all this. If you only have 4KB of memory, it's a challenge just to make something possible, and you're willing to throw easy-syntax overboard to get it to work. Your machine lacks the resources to "afford" an IDE, and every language construct is optimized for the limited resources as opposed to ease-of-learning.

However, it wasn't always this way. I remember as a kid back in the late 80's, with the TRS-80 and RSDOS, just staring blankly at the green and black television (we didn't have a monitor). There were a few "computer people" I could talk to (like my brothers), but it was nothing like today.

 

This comes to mind because I've been reading up on PowerShell and Silverlight, and it just works. I haven't had a major syntax problem yet. It feels like I'm just cruising down the highway, and that's a nice feeling.

 

Thursday, December 11, 2008

Total Cost of Ownership - a Story

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/total_cost_of_ownership__a_story.htm]

A company needs some software service that isn't their standard line of business, so they outsource it to you (and your company) and pay for the application. So far, so good.


But then, as the client is using your app, they're struggling with the employee search filter page. After getting frustrated for five minutes, they ask a coworker if they know how to work it, just to make sure it's not some obvious mistake. When neither one can figure it out, they decide it's an error (not the friendly name "bug", but an error, and to them a critical and fatal one).

So, they phone your company's customer service rep, who rushes around trying to find a fix, and inevitably has to get back to them. After a string of High-Priority emails between the customer service rep and several maintenance developers and documentation writers, the customer service rep is confident that this is an "issue" (perhaps an "error", but "issue" sounds less blamable). They apologize to the client and offer that it will be fixed in the next release - several months in the future. "No deal," the client says. "We need this now."

Your company's account manager (who oversees that account) gets involved, trying to work their diplomatic magic to make everyone happy. But the client insists: "We need this - the thing simply will not work for our business cases without it." So your company, being one that always puts the customer first, pulls in an executive who directs the issue to a developer who can make a fix right away, and hopefully get out a patch or hotfix.

The developer drops whatever they were doing, losing all the momentum they had built up for their current task, and re-mobilizes to hunt down this issue (which is in code they never even wrote). They coordinate with a product manager to make sure they understand the exact new functionality. After spending hours coming up to speed, and ensuring they can reproduce the bug, the developer finds a fix. It was one line that needed to be updated. Indeed, it often seems that these types of bugs are "just" one line.

The code is sent as an emergency item to QA, who redeploy their environment, confirm the fix, and do a little regression testing to make sure it didn't break anything else. After all this, they send it to IT, who stay late that night to make a special patch deployment.

The team's work pays off - thanks to hours and hours from the customer service rep, the account manager, the product manager, the developer, the QA tester, and IT - one line of code has been fixed in production, and the client is happy again. For some development shops, this is actually an optimistic scenario, as many places would just dismiss it as "a small issue" or make it wait until the next release instead of patching right away.

However, while it's great that the team was flexible enough to handle this change, think of the cost. Fixing that one-line bug after the production release could easily cost hundreds of times more than just doing it right the first time. If it took the whole team 16 person-hours, and the original developer could have caught it with an extra 5 minutes of unit testing, then this was a colossal waste of time. The problem is that many developers insist (and managers cave in) that "I just don't have time" to do that extra testing. From their short-sighted perspective, they've saved the company time because they got the feature done 5 minutes earlier. However, the total cost of ownership - not just the developer creating the initial feature, but everything else involved (like QA and maintenance) - is what really matters. If a sloppy developer can shortcut 5 minutes, only to waste the team hours later - who's really coming out ahead? Answer: your competition.
 

Wednesday, December 10, 2008

Book: C# 3.0 in a Nutshell

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_c_30_in_a_nutshell.htm]

I recently finished a casual read of Joseph & Ben Albahari's good book C# 3.0 In a NutShell (by "nutshell" they mean 800+ pages).

 

They do a good job of including two audiences - those new to programming (who need C# syntax and the basics of an array explained), and those familiar with C# who just want to see the new stuff. I found the chapters on Linq and Linq-to-Xml very instructive.

 

They also give a healthy chapter to each of the main parts of the framework. To be able to jump from language design to Linq to System.Net to Reflection to Xml to Serialization and beyond is an impressive feat. I think having that kind of broad knowledge gives the developer a powerful tool belt. Besides the basics (like xml), there are simply cool tools and processes that you can never create unless you understand reflection, diagnostics, networking, streams, and even threading.

Tuesday, December 9, 2008

Real life: Automation and teaching your kids

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/real_life_automation_and_teaching_your_kids.htm]

We have 2.5 kids. Cute little bunch - but very dependent on Mom and Dad. Every few minutes, they come back to us, needing something. Of course, our goal is to teach each kid to just take care of things - brush their own teeth, pick up their toys, eat their food, etc... And then, ideally they could string all these acts together and just take care of themselves for some time on end. Especially as the older one learns to do this, it frees us up to focus on other things - like the young one throwing toys at the wall. Especially with the second, and third, you realize that you don't scale - there are too many kids per parent - you need to make them self-sufficient, at least for a time.

 

This reminds me much of automated processes. Both ideally could be on autopilot. Both come back to me asking for help (my console app burped on a bad xml input file, my service didn't handle the machine restart, my build script crashed on spaces in a filename, etc...). While it's cute to initially have a pet application that you baby-sit and marvel at how well it churns through those calculations, soon you'll have a new pet application, and you'll need that first one to be self-sufficient.

 

One more similarity - you hope that one day they'll both make you rich. The kid will strike it big and take care of Mom & Dad in their old age, and those .Net applications will keep delivering business value and keep you employed. [just kidding]

 

Of course, the analogy breaks down because the kiddies are so much cuter, but you get the idea...

Tuesday, November 25, 2008

Why I'd rather be an optimist

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_id_rather_be_an_optimist.htm]

Some programmers take pride in calling themselves either an optimist or a pessimist. Both have their positive and negative stereotypes: The out-of-touch bubbly optimist vs. the encouraging can-do optimist; the always grumpy pessimist vs. the face-the-facts pessimist.

Part of it is that most producers tend to be optimists, as it takes a can-do attitude to carry out the positive act of creating. Most critics/complainers tend to be pessimists, negatively and constantly pointing to the shortcomings. However, these are not equal positions. The producer can produce without the critic, but the critic needs someone else to produce something so that they have something to criticize. In other words, the producer (usually the optimist) does not need the critic (usually the pessimist), but the critic needs the producer. It's like riding a sled down the hill vs. pulling it back up - they're not equal tasks; one is much easier. Likewise, a room full of complainers will never actually build something. While constructive feedback is great, a purely negative feedback loop isn't constructive. It also will get on coworkers' nerves.

Some pessimists think they're providing a great service by always pointing out the flaws, lest the optimists lose touch with reality. However, "reality" doesn't need any help in reminding developers about the facts of life. It's got the compiler, project plan, and end-users to do that.

Ideally, as with everything else in life, there would be a good balance. But, everything considered, if I had to choose one I'd rather be an optimist. It's been said that "The pessimists may be right, but the optimists have more fun along the way."
 

 

Monday, November 24, 2008

Random quotes that could apply to development

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/random_quotes_that_could_apply_to_development.htm]

Here's a list of seemingly-random quotes that could apply to development. I don't recall the sources for each of these. Every now and then, it's good to have a good quote handy.

Thursday, November 20, 2008

LCNUG: SharePoint Authentication: Inside the Hybrid Provider

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/lcnug_sharepoint_authentication_inside_the_hybrid_provider.htm]

At the LCNUG last night, Chris Domino presented on SharePoint. He's a SharePoint superstar, and it showed. The LCNUG is still a smaller group, so we could ask a lot of interactive questions. I especially liked a dialogue about document versioning history in SharePoint vs. saving Word docs as xml and then storing those in normal version control (like Subversion).

 

Overall, a very good night. I think it also marks the half-year point for the upstart LCNUG.

Tuesday, November 18, 2008

Bad Interview Question: What is your greatest weakness?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/bad_interview_question_what_is_your_greatest_weakness.htm]

For technical interviews, I think "What is your greatest weakness" is one of the dumbest interview questions ever. It's like the recruiter thinks they're being so smart and sneaky ("ah ha, this unsuspecting candidate will reveal all their shortcomings as I judge them"), but it really misses the point.

 

Problems with this question:

  • If the interviewer cannot determine your weaknesses from normal interview questions, are they really weaknesses? It is part of the recruiter's job to determine your weaknesses, and by directly asking you, they're essentially asking you to do their job for them.

  • Most people aren't even aware of their biggest weakness, especially because human nature leads us to see ourselves in the best possible light.

  • It's an abrasive question that puts the company in a bad light. People want to talk about their successes and how they'll add value, not their mistakes.

  • Does the interviewer actually expect the candidate to tell the complete truth about their own flaws?

  • To avoid ivory-tower syndrome (in my opinion), recruiters should not ask behavioral questions that they themselves are not prepared to answer.

  • It's a dead-end question that prompts the recruiter to pick the "least of the evils" - if you're just discussing your negative qualities, you're not going to look like a positive candidate.

  • If I knew my weaknesses, I'd already be fixing them.

  • It's very dull, and shows no creativity from the interviewer.

  • This sort of question isn't going to impress a candidate. Often, the interview really goes both ways, with many companies all trying to woo the star candidates.

Really lame replies:

  • "I work too hard", or "I try to hard". --> anyone who has been around can see straight through this.

  • "I do such a good job that it makes everyone else envious of how great I am."

  • "I don't really have any flaws." --> everyone has limits, which is very different than saying that you do have shortcomings, but they occur mostly in other, unrelated fields, or long ago when you first got started.

  • "I've never been in a position with enough influence to do any damage, so I'm not sure."

Possible good replies:

  • Humor: "You know, you could get a really good answer for that if you just talked to my wife." (or husband/spouse/significant-other )

  • Serious: "While I've made mistakes in other fields, like buying my first car, for years I've been putting most of my eggs into the software engineering basket precisely so I don't make huge costly mistakes anymore. But we can discuss my early car-buying experiences if you'd like."

  • Confident: "Let's have an interview and you tell me."

  • Change topic: "Here's how I've dealt with mistakes that the company has made, and turned them around to be profitable..."

  • Pick a small, past event: "When I first got started, 10 years ago, I used VSS for source control. The default single-user lock model really messed up our multi-team project. Now I'm happy that we always use Subversion."

  • Sarcasm: "My biggest mistake ever - like single-handedly causing the stock market crash of '87 - I don't do those kinds of things anymore." (you'd need more charisma than I have to pull it off)

  • Smart Aleck: (when you don't want the job) pick a topic that's way over the recruiter's head. I'm trying to think of an impressive example, but I'm sure the readers of this blog would just tell me how simple it was.

  • Smart Aleck (when you don't want the job): "What would you consider a failure? Perhaps you can tell me some of your greatest failures so I know what kind of things you're looking for."

At Paylocity, I don't think I've ever asked a candidate this. Instead, I'll ask "what areas of development do you enjoy", or "what areas do you wish you had an opportunity to learn more about". The goal is not to trick people with dumb gotcha questions, but (for me) to figure out where their passion is. I prefer a positive approach of "what are you passionate about" as opposed to a negative approach of "what are your flaws". At Paylocity, we don't ask people to do what they're bad at, we ask them to do what they're good at - i.e. play to their strengths, not their weaknesses. Also, the interview should be an indicator of what the company is about, and what kind of company goes around asking their employees "what do you suck at most"?

 

Sunday, November 16, 2008

Published book: A Crash Course in Reasoning

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/reason.htm]

Clear reasoning is essential for developers. So, I'm proud to have finished publishing a book "A Crash Course in Reasoning".

 

 

It's a fast read (170+ pages), in plain language. Really it's intended for high school and college students who are looking at basic reasoning skills (i.e. I have kids and I'd like them to eventually read it). However, developers may get a kick out of it too. It covers a broad spread of topics:

  • Chapter 1: Introduction

  • Chapter 2: Objective Reality

  • Chapter 3: Premises

  • Chapter 4: Logic

  • Chapter 5: Epistemology

  • Chapter 6: Communication

  • Chapter 7: Psychology

  • Chapter 8: Learning

  • Chapter 9: Conclusion

Official abstract:

Using images, tables, and constant examples, this book provides a practical and plain-English introduction to reasoning. We start by emphasizing the importance of objective reality. We then explore the different types of premises (definitions, observations, given facts, assumptions, and opinions) that can be built on this rock-solid foundation. From those premises, we can make logical inferences (both deductive and inductive) to arrive at conclusions. We strengthen this process by explaining several epistemological issues. Once the reasoning is clear in our own head, then we can communicate it to others. As we deal with other people, we’ll also need to deal with the psychological aspects that people inevitably bring with them. Finally, thinking clearly for ourselves and with others, we’ll want to apply everything we’ve discussed to increasing our learning and problem solving abilities.

This page will contain updates for the book.

Wednesday, November 12, 2008

MVC troubleshooting: If the controller doesn't have a controller factory...

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/mvc_troubleshooting_if_the_controller_doesnt_have_a_contro.htm]

I was toying around with the MVC pattern, and I kept getting this error while doing a tutorial:

 

'TaskList.Controllers.HomeController'. If the controller doesn't have a controller factory, ensure that it has a parameterless public constructor.
 

I'm no MVC expert, so I had to poke around a little. Ultimately, what seemed to fix it was making the database connection key in web.config match the key in the auto-generated dbml code.

 

The dbml code was failing on this line (in a file like "*.designer.cs"):

public TaskListDataContext() :
    base(global::System.Configuration.ConfigurationManager.ConnectionStrings["TaskListDBConnectionString"].ConnectionString, mappingSource)
{
    OnCreated();
}

I had modified my web.config, so there was no connection string entry named "TaskListDBConnectionString"; that lookup returned null, hence it failed. I needed to update web.config to something like this:

 


   

    name="TaskListDBConnectionString"

    connectionString="your_connection_here"

    providerName="System.Data.SqlClient"/>

 

To troubleshoot, I started stubbing out things one at a time - making an empty solution, downloading the final code, comparing the differences, etc...
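
In hindsight, a small guard around the connection string lookup would have surfaced the real problem much faster. Here's a rough C# sketch of the idea (my own helper, not part of the generated dbml code):

using System.Configuration;

public static class ConnectionStringHelper
{
    // Returns the named connection string, or throws a clear error if web.config is missing it.
    public static string GetRequired(string name)
    {
        ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings[name];
        if (settings == null)
        {
            throw new ConfigurationErrorsException(
                "Missing <connectionStrings> entry named '" + name + "' in web.config.");
        }
        return settings.ConnectionString;
    }
}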

 

Tuesday, November 11, 2008

"I prefer my way of doing it to your way of not doing it"

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/i_prefer_my_way_of_doing_it_to_your_way_of_not_doing_it.htm]

I once heard this quote - I forget the context or who said it - but I really like it. Software Engineering is full of second-guessers and Monday-morning quarterbacks. Many projects seem to have that pessimist who snubs the team's attempt to do something because it's not as good as that "perfect past project" they were on. There's something to be said for the developer who just gets it done - even if the code isn't the best.

Monday, November 10, 2008

More Xml Design Patterns

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/more_xml_design_patterns.htm]

 

Last month I mentioned some tips for structuring xml, which could be seen as the start of primitive Xml Design Patterns. Here are three more ideas.

 

Strongly-typed nodes

 

Say you have a node which can have several different groups of attributes. For example, you're making a test automation DSL, and you need to assert that various conditions are met. You may have a basic assert, a correct-page assert, and a text-search assert, all crammed into a single Assert node.

The schema for this becomes messy - you'll have one node (Assert) with all possible attributes. Your interpreter will need to infer which action to take based on the current attributes. Validating the xml file also becomes error-prone. It's doable, but messy. Compare that to this:

<Assert.Page pageName="myPage.aspx" />

<Assert.Text textToFind="EmpId123" isRegex="false" isCaseSensitive="false" />

This approach has essentially three nodes: Assert, Assert.Page, and Assert.Text. This effectively gives you "strongly-typed nodes". Your interpreter instantly knows which type of node, and therefore what action it expects and how to validate its attributes. Your schema becomes simpler because each node type has only the relevant attributes. This isn't always the best approach, but it can be a good one in certain cases.
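
To make the benefit concrete, here's a minimal C# sketch of what the interpreter side might look like, dispatching on the node name (the class and method names are mine, just for illustration):

using System;
using System.Xml.Linq;

public static class AssertInterpreter
{
    // Dispatch on the strongly-typed node name instead of inferring the action from attributes.
    public static void Run(XElement node)
    {
        switch (node.Name.LocalName)
        {
            case "Assert.Page":
                string pageName = (string)node.Attribute("pageName");
                Console.WriteLine("Assert that the current page is " + pageName);
                break;
            case "Assert.Text":
                string textToFind = (string)node.Attribute("textToFind");
                Console.WriteLine("Assert that the page contains " + textToFind);
                break;
            default:
                throw new NotSupportedException("Unknown assert node: " + node.Name.LocalName);
        }
    }
}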

 

 

Serializing complex objects to attribute strings

 

You can serialize almost any object to Xml, but the question is: what is the best way to do it? For example, you could serialize an array as a bunch of nodes:

   

<values>
    <value>12</value>
    <value>35.5</value>
    <value>40</value>
    <value>9</value>
</values>

   

However, this is somewhat bloated. A more concise way is to serialize it to a string, and then store the results in an attribute:

  • You could serialize an array as a CSV list: "1,2,3".

  • You could serialize name-value pairs: "name1=value1;name2=value2" (this is what MSBuild does).

  • You could even have a mini DSL, like the Silverlight path expressions: "M 10,100 C 10,300 300,-200 300,100".

 

The main thing this approach requires is an easy way to serialize and deserialize the object to a string. If you've got that, then this technique becomes very reasonable.
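
For instance, here's a quick C# sketch of round-tripping an int array through a CSV attribute string (the helper names are mine):

using System;
using System.Linq;

public static class CsvSerializer
{
    // new[] { 1, 2, 3 }  -->  "1,2,3"
    public static string ToCsv(int[] values)
    {
        return string.Join(",", values.Select(v => v.ToString()).ToArray());
    }

    // "1,2,3"  -->  new[] { 1, 2, 3 }
    public static int[] FromCsv(string csv)
    {
        return csv.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                  .Select(int.Parse)
                  .ToArray();
    }
}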

 

 

Abstract complex objects to inner content

 

Silverlight uses Xaml markup to describe the UI. Xaml allows a lot of style markup.

 

You can create a simple ellipse, shading the entire thing a solid blue:

<Ellipse Fill="Blue" />

However, you can also give it a more advanced two-color gradient shade by abstracting the "Fill" markup into the inner nodes of the ellipse. Specifying a single attribute is great when you can serialize all the necessary info to a single string; otherwise, you can leverage the more powerful inner xml nodes (per a Jesse Liberty tutorial):

<Ellipse>
  <Ellipse.Fill>
    <LinearGradientBrush>
      <GradientStop Color="Blue" Offset="0" />
      <GradientStop Color="White" Offset="1" />
    </LinearGradientBrush>
  </Ellipse.Fill>
</Ellipse>

This allows the best of both worlds - the single-line common case, and the more advanced case.

Thursday, October 30, 2008

LCNUG: Gorking the ASP.NET MVC Framework

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/lcnug_gorking_the_aspnet_mvc_framework.htm]

Yesterday, Derik Whittaker presented at the LCNUG about the MVC pattern. I had a last-minute family emergency come up, so I couldn't make it, but it looked good.

This is the fifth presentation at the LCNUG, which is now almost half a year old. It's been great to see the new group up and running.

Sunday, October 12, 2008

Real life: the leaking window

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/real_life_the_leaking_window.htm]

House repairs provide a lot of good software analogies. Once during a big rain storm, our window started leaking. It was a newly installed window, and it had never leaked before. Well, obviously a leaky window can become a huge problem if left unfixed. So, I went out sometime later and sprayed the window with the hose to try to get an idea of where the leak was (I was not, and still am not, a house repair expert), and to my great frustration - the window did not leak. Of course I wanted to reproduce the problem, narrow it down to the exact cause, and then make a quick fix - just like I would in a software project. I didn't want to rebuild the whole thing.

So, here is a "critical feature bug" (the leaky window), which occurred in "production" (during the actual rainstorm), but I cannot reproduce in my "development environment" (sunny day with my spraying the window with a hose). It was a non-reproducible bug. However, I couldn't just ignore it or look the other way, I needed to ensure that it didn't happen again (given that it's my house, I need to take "ownership" of the "project"). It's the kind of thing that drives a software project mad.

In this case, I just "blindly" resealed things - checked the siding, exterior frame, interior, etc... And the window hasn't leaked since. If it does end up leaking again, then I'll probably need to call an expert, much like how some doomed software projects sometimes call in a star consultant to troubleshoot their obscure bugs.

Thursday, October 9, 2008

Xml Design Patterns

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/xml_design_patterns.htm]

Xml is the de facto way to store data in files - whether it's build instructions in an MSBuild script, a Domain Specific Language that code-generates from an xml file, a config file, or something else.

 

So, your xml files are storing lots of mission-critical information. They're getting huge, being modified by tons of developers, and getting hard to maintain. It's time to refactor them. Here are several ideas to refactor your xml files (much of this is demonstrated in MSBuild):

  • Create a variables section. For example, MSBuild has a <PropertyGroup> section, where each inner node can define a new variable. You can then reference those variables elsewhere throughout your script like so: $(myVar).

  • Allow importing of files. Often you'll want to either split a big file into smaller ones, or reuse a single file many times (for example, put all your global variables into a separate file, and then re-import it everywhere).

  • Related to this, allow chunks of xml to be automatically included in other chunks with some sort of "include" syntax - in other words, ensure that the xml file itself is refactored (see the C# sketch after this list). MSBuild allows you to call other targets with a <CallTarget> task. Or, say you have a test script that's generated from xml, and every test needs to repeat your "login routine". Define the login routine in one xml snippet, and allow it to be re-included in all the other snippets.

  • Have the xml structured to automatically determine relationships. For example, if you have a parent-child relationship of data (here is the menu, here are all the pages accessible from that one menu), then consider using Xml's natural hierarchy (a parent node with a bunch of child nodes). You could also use the natural order of the xml nodes to determine the sequence in which the data appears (like the order of the pages).

  • Provide some sort of extension point or hook. In MSBuild, you can define your own custom tasks and dynamically load them.

  • Create special syntax for domain-specific concepts. For example, the MassDataHandler takes xml snippets and converts them to SQL insert statements to assist with database unit testing. A common problem when inserting SQL statements is handling the identity columns. In this case, if you prefix the value with an '@', the MassDataHandler automatically knows to go look up its identity value.

  • Create templates for default values, which other xml nodes can override. For example, the MassDataHandler allows you to run SQL inserts, one xml node per database row. But say five rows all have the same values for something (like they're all children of the same parent), then it provides the ability to have a "default row" which defines the default values. Other xml nodes can then override these default values.
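
As a rough illustration of the "include" idea above, here's a small C# sketch that expands Include nodes by splicing in the referenced file's contents (the element and attribute names are mine, not MSBuild's):

using System.IO;
using System.Linq;
using System.Xml.Linq;

public static class XmlIncluder
{
    // Recursively replaces <Include file="..." /> nodes with the root-level
    // children of the referenced xml file.
    public static XDocument Load(string path)
    {
        XDocument doc = XDocument.Load(path);
        string baseFolder = Path.GetDirectoryName(Path.GetFullPath(path));

        foreach (XElement include in doc.Descendants("Include").ToList())
        {
            string childPath = Path.Combine(baseFolder, (string)include.Attribute("file"));
            XDocument child = Load(childPath);   // expand nested includes too
            include.ReplaceWith(child.Root.Elements());
        }

        return doc;
    }
}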

I guess you could say there are design patterns for good xml. As Xml becomes more and more prevalent, I expect "Xml Design Patterns" to become more prevalent too.

Wednesday, October 8, 2008

Having Console apps write out Xml so you can parse their output

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/having_console_apps_write_out_xml_so_you_can_parse_their_out.htm]

Say you have an automatic process calling some executable, and you want to get information back from that executable to use in your own code. Ideally you'd be able to call a class library that you could just reference, but what if no such class library is available (it's a different development platform like Java vs. C#, or it's a black-box third-party component)? Is there any way to salvage the situation?

 

Some developers would stream out the console output to a string, and then write a bunch of ugly parse code (even with regular expressions), and get their results that way. For example, let's say you use Subversion for your source control, and you want to search the history log. For the sake of argument, assume there's no friendly C# API or web service call (it's a local desktop app you're calling), and you need to parse command output. (If anyone knows of such an API, please feel free to suggest it in the comments.) You can run a command like "svn log C:\Projects\MyProject", and it will give you back console output like so:

------------------------------------------------------------------------
r3004 | username1 | 2008-09-26 10:47:19 -0500 (Fri, 26 Sep 2008) | 2 lines

Solved world hunger
------------------------------------------------------------------------
r3000 | username1 | 2008-09-06 14:10:56 -0500 (Sat, 06 Sep 2008) | 2 lines

Invented nuclear fusion
 

Ok, you can parse all the information there, but it's very error prone. A much better way is if the console app provides an option to write out its output as a single XML string. For example, you can specify the "--xml" parameter in SVN to do just that:


       revision="3004">
    username1
    2008-09-26 10:47:19 -0500
    
      Solved world hunger
    

  
       revision="3000 ">
    username1
    2008-09-06 14:10:56 -0500
    
      Invented nuclear fusion
    

  

Obviously, that's much easier for a calling app to parse. If you need to bridge a cross-platform divide such that you can't just provide the backend class libraries, outputting to xml can be an easy fix.
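
For example, here's a minimal C# sketch of pulling those log entries out with Linq-to-Xml (assuming the element names shown above):

using System;
using System.Xml.Linq;

class SvnLogReader
{
    static void Main()
    {
        // Produced by something like: svn log C:\Projects\MyProject --xml > svnlog.xml
        XDocument doc = XDocument.Load("svnlog.xml");

        foreach (XElement entry in doc.Root.Elements("logentry"))
        {
            Console.WriteLine("r{0} | {1} | {2}",
                (string)entry.Attribute("revision"),
                (string)entry.Element("author"),
                (string)entry.Element("msg"));
        }
    }
}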

Monday, September 29, 2008

Death by a thousand cuts

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/death_by_a_thousand_cuts.htm]

A single paper cut won't kill you, but a thousand of them probably will. That's what people mean by "death by a thousand cuts". This is how many software projects die - usually it's a hundred little things that pile up, whose combined pain becomes unbearable. It's the runaway bug list, the brittle test script that gets abandoned because it's too hard to maintain, the endless tedious validation on a details page, the component with the fuzzy interface, the buggy deployment, all the little features that just don't work right, etc...

 

This is why continuous integration, unit tests, automation, proactive team communication, code generation, good architecture, etc... are so important. These are the techniques and tools to prevent all those little cuts. I think that many of the current hot methodologies and tools are designed to add value not by doing the impossible, but by making the routine common tasks "just work", so that they no longer cause developers pain or stress.

 

Sure, you can write a single list-detail page; any developer can "get it done". But what about writing 100 of them, then continually changing them with client demands, and doing it in half the scheduled time? Sure, you can technically write a real-time strategy web game in JavaScript, but it's infinitely easier to write it with Silverlight. Sure, you can manually regression test the entire app, but it's so much quicker to have sufficient unit tests that first check all the error-prone backend logic.

 

The whole argument shifts from "is it possible?" to "is it practical?" Then, hopefully, your project won't suffer death by a thousand cuts.

Sunday, September 28, 2008

Real life: How do you test a sump pump?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/real_life_how_do_you_test_a_sump_pump.htm]

Back when we moved into our home (a while ago), this was our sump pump.

As home-owners know, sump pumps are very important because without one, your basement gets flooded. That can cause tens of thousands of dollars of damage, as well as the potential loss of any personal items in your basement - for example, all your electronic equipment or books could get ruined due to water damage.

 

In other words, it's a big deal, i.e. an important "feature" of your house. So the natural question as a new home-owner is "how do I ensure that this mission-critical feature actually works?" Obviously I didn't want to wait for a real thunderstorm and power outage to find out if everything would be ok. While I had heard the buzzwords ("make sure your sump pump works and you have a battery backup in case of a power outage"), and I had been on previous "teams" (i.e. my parents' house, growing up as a kid) that had successfully solved this problem, when push came to shove, I was certainly no expert. For all I knew, this could be the best sump pump in the world, or a worthless piece of junk. However, I knew the previous owners of the house - they were great - so I assumed that the sump pump was great too, and everything would be okay.

 

Eventually, I figured out how to test it by contacting some "domain experts" (the previous house owners), who explained the different components to me. I then "mocked out" a power outage by simply unplugging the power plug (as opposed to waiting for a real power outage, or even turning off my house power). I then lifted the float to simulate water rising, and listened for a running motor. I checked a couple other things, and became confident that the feature did indeed "work as designed". I was told that it was actually a very good setup because it had a separate battery charger and two separate pipes out, so they were completely parallel systems (kudos to the previous owners).

 

The number of analogies between me needing to test my sump pump and a user needing to test a critical feature of a software project is staggering. It's a reminder to me of how real-life experiences help one understand software engineering.

 

Thursday, September 25, 2008

Writing non-thread-safe code

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/writing_nonthreadsafe_code.htm]

Multithreading is hard, and I'm certainly no expert at it. I like this post about Measuring how difficult a Threading bug is. Most application developers hover around phase 0 (totally reproducible) to 2 (solvable with locks). Application developers constantly hear that some piece of code isn't "thread safe". What exactly would that look like? How could you deterministically demonstrate non-thread-safe code - say, write a unit test that fails for a non-thread-safe object?

 

The trick is to run a unit test that opens up multiple threads, and then runs them in a loop 10,000 times to force the issue. The following code does something like that. Say we have a simple "Account" object with Credit and Debit instance methods, both sharing the same state (the _intBalance field). If you run the unit test "Threads_2_5", it opens up 2 threads, calling Credit in one and Debit in the other. Because Credit and Debit should cancel each other out, and they're "theoretically" called the same number of times, the final result should remain zero. But it's not - the test will fail (if it doesn't fail, increase the number of iterations to force more contention).

 

So, we have a reproducible multi-threading failure in our unit test. The Account object is not thread safe. However, we can apply the C# lock keyword to put a lock on the methods in question, which, at least for this simple case, fixes the problem. I've shown how to apply the lock keyword in the commented-out lines of the Account object. Two observations:

  1. Without the C# lock keyword, this gets essentially random errors as contention (thread count or loops) increases.

  2. Adding the lock keyword will prevent errors, but the code runs much slower.  This means it's also possible to abuse the lock keyword, perhaps adding it in places you don't need it, such that you get no benefit, but now your code runs slower. Tricky tricky.

Here's the code:

 

    // Requires: using System.Threading; and using Microsoft.VisualStudio.TestTools.UnitTesting;
    // (The Account class and the test members below live inside a [TestClass].)
    public class Account
    {
      //private Object thisLock = new Object();

      private int _intBalance = 0;

      public void Credit()
      {
        //lock (thisLock)
          _intBalance++;
      }

      public void Debit()
      {
        //lock (thisLock)
          _intBalance--;
      }

      public int CurrentValue
      {
        get
        {
          return _intBalance;
        }
      }
    }

    private Account _account = null;
    private int _intMaxLoops = -1;

    private void RunThreads2(int intLoopMagnitude)
    {
      _intMaxLoops = Convert.ToInt32(Math.Pow(10, intLoopMagnitude));
      _account = new Account();

      Assert.AreEqual(0, _account.CurrentValue);

      Thread t1 = new Thread(new ThreadStart(Run_Credit));
      Thread t2 = new Thread(new ThreadStart(Run_Debit));

      t1.Start();
      t2.Start();

      t1.Join();
      t2.Join();

      //We did an equal number of credits and debits --> should still be zero.
      Assert.AreEqual(0, _account.CurrentValue);
    }

    private void Run_Credit()
    {
      for (int i = 0; i < _intMaxLoops; i++)
      {
        _account.Credit();
      }
    }

    private void Run_Debit()
    {
      for (int i = 0; i < _intMaxLoops; i++)
      {
        _account.Debit();
      }
    }

 
    [TestMethod]
    public void Threads_2_5()
    {
      RunThreads2(5);
    }

 

Someone who knows multi-threading much better than I do (and who hasn't yet started their own blog, despite all my pleading) explained it well here:

With multithreading, it’s important to understand what’s happening at the assembly level.
For example, a statement like “count++” is actually “count = count + 1”. In assembly, this becomes something like:

1: mov eax, [esp + 10]  ; Load 'count' into a register
2: add eax, 1           ; Increment the register
3: mov [esp + 10], eax  ; Store the register in 'count'

With multithreading, both threads could run through this code at different rates. For example, start with 'count = 7'. Thread 'A' could have executed (1), loading '7' into 'eax'. Thread 'B' then executes (1), (2), (3), also loading '7' into 'eax', incrementing, and storing 'count = 8'. It then executes 3 more times, setting 'count = 11'. Thread 'A' finally starts running again, but its register is out of sync! It executes (2) and (3) and stores 'count = 8', wiping out the other thread's updates.


Using locks would prevent multiple threads from executing this code at the same time. So thread 'A' would make 'count = 8', thread 'B' would make 'count = 9', etc.
 

The problem with locks is what happens if you ever need multiple locks. If thread 'A' grabs lock (1) and thread 'B' grabs lock (2), then neither of them would ever be able to grab both locks. In SQL, this throws an exception on one of the threads, forcing it to release its lock and try again. SQL can do this because the database uses transactions to modify the data. If anything goes wrong, the transaction rolls the database back to the previous values. Normal C# code can't do this because there are no transactions. Modifying the data is immediate, and there is no undo. ;-)

Obviously there's a world more to multi-threading, but everyone has got to start somewhere.
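
As a small aside of my own (not from the quote above): for a simple counter like the Account balance, the Interlocked class is another option that avoids explicit locks:

using System.Threading;

public class InterlockedAccount
{
    private int _intBalance = 0;

    public void Credit()
    {
        // Atomic increment - no lock needed for this simple case.
        Interlocked.Increment(ref _intBalance);
    }

    public void Debit()
    {
        Interlocked.Decrement(ref _intBalance);
    }

    public int CurrentValue
    {
        get { return _intBalance; }
    }
}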

Sunday, September 21, 2008

Getting Categories for MSTest (just like NUnit)

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/getting_categories_for_mstest_just_like_nunit.htm]

Short answer: check out: http://www.codeplex.com/testlistgenerator.

 

Long answer: I previously discussed Automating Code Coverage from the Command Line. Another way to buff up your unit tests is to add categories to them. For example, suppose you had several long-running tests (perhaps unit testing database procedures), and you wanted the ability to run them separately, using categories, from the command line.
 

The free, six-year-old, open-source NUnit has category attributes by which you can easily filter tests. For a variety of reasons, MSTest - the core testing component of Microsoft's flagship development product - still does not have these after two releases (2005 and 2008). I've had the chance to ask people at MSDN events about this, and I've heard a lot of "reasons":

 

Each proposal below for why MSTest does not need categories is paired with the problem with that proposal:

Proposal: Just put all those tests in a separate DLL.
  • This does not handle putting a single test into two categories. Also, it's not feasible; often the tests we want to categorize cannot be so easily isolated.

Proposal: Just add the Description attribute to them and then sort by description in the VS GUI.
  • You still cannot filter by description from the command line - it requires running in a GUI.
  • This fails because it does an exact string comparison instead of a substring search. If you have Description("cat1, cat2") and Description("cat1"), you cannot just filter by "cat1" and get both tests; it would exclude the first because "cat1" does not match "cat1, cat2".
  • Conceptually, you shouldn't need to put "codes" in a "description" field. Descriptions imply verbose alpha-numeric text, not a lookup code that you can filter by.

Proposal: Just filter by any of MSTest's advanced filter GUI features - like class or namespace.
  • This requires opening the GUI, but we need to run from the command line.
  • Sometimes the tests we need to filter span across the standard MSTest groups.

Proposal: Just use MSTest test lists.
  • This is only available for the more advanced versions of VS.
  • This is hard to maintain. It requires you to continually update a global test list file (which always becomes a maintenance problem), as opposed to applying the attributes to each test individually.

Proposal: Well, you shouldn't really need categories; just run all your unit tests every time.
  • Of course your automated builds should eventually run all unit tests, but it's a big time-saver for a developer to occasionally run just a small subset of them.

 

So, if you want to easily get the equivalent of NUnit categories for your MSTest suite, check out http://www.codeplex.com/testlistgenerator.
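
For comparison, here's roughly what categories look like in NUnit (a sketch based on NUnit 2.x; the exact console switches may vary by version):

using NUnit.Framework;

[TestFixture]
public class EmployeeRepositoryTests
{
    [Test]
    [Category("Database")]
    public void SavingAnEmployee_WritesARow()
    {
        // ... long-running database test here ...
    }
}

// Then run only (or exclude) that category from the command line:
//   nunit-console MyTests.dll /include:Database
//   nunit-console MyTests.dll /exclude:Database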

Thursday, September 18, 2008

Automating Code Coverage from the Command Line

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/automating_code_coverage_from_the_command_line.htm]

We all know that unit tests are a good thing. One benefit of unit tests is that they allow you to perform code coverage. This works because, as each test runs the code, Visual Studio can keep track of which lines got run. The difficulty with code coverage is that you need to instrument your code so that you can track which lines are run. This is non-trivial. There have been open-source tools in the past to do this (like NCover). Then, starting with VS2005, Microsoft incorporated code coverage directly into Visual Studio.

 

VS's code coverage looks great for a marketing demo. But the big problem (to my knowledge) is that there's no easy way to run it from the command line. Obviously you want to incorporate coverage into your continuous build - perhaps even add a policy that requires at least x% coverage in order for the build to pass. This is a form of automated governance - i.e. how do you "encourage" developers to actually write unit tests? One way is to not even allow the build to accept code unless it has sufficient coverage. So the build fails if a unit test fails, and it also fails if the code has insufficient coverage.

 

So, how do you run code coverage from the command line? This article by joc helped a lot. Assuming that you're already familiar with MSTest and code coverage from the VS GUI, the gist is to:

  • In your VS solution, create a "*.testrunconfig" file, and specify which assemblies you want to instrument for code coverage.

  • Run MSTest from the command line (see the sample command after this list). This will create a "data.coverage" file in something like: TestResults\Me_ME2 2008-09-17 08_03_04\In\Me2\data.coverage

  • This data.coverage file is in a binary format. So, create a console app that will take this file, and automatically export it to a readable format.

    • Reference "Microsoft.VisualStudio.Coverage.Analysis.dll" from someplace like "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies" (this may only be available for certain flavors of VS)

    • Use the "CoverageInfo " and "CoverageInfoManager" classes to automatically export the results of a "*.trx" file and "data.coverage"

Now, within that console app, you can do something like so:

//using Microsoft.VisualStudio.CodeCoverage;

CoverageInfoManager.ExePath = strDataCoverageFile;
CoverageInfoManager.SymPath = strDataCoverageFile;
CoverageInfo cInfo = CoverageInfoManager.CreateInfoFromFile(strDataCoverageFile);
CoverageDS ds = cInfo.BuildDataSet(null);

This gives you a strongly-typed dataset, which you can then query for results, checking them against your policy. To fully see what this dataset looks like, you can also export it to xml. You can step through the namespace, class, and method data like so:


      //NAMESPACE
      foreach (CoverageDSPriv.NamespaceTableRow n in ds.NamespaceTable)
      {
        //CLASS
        foreach (CoverageDSPriv.ClassRow c in n.GetClassRows())
        {
          //METHOD
          foreach (CoverageDSPriv.MethodRow m in c.GetMethodRows())
          {
          }
        }
      }

 

You can then have your console app check for policy at each step of the way (classes need x% coverage, methods need y% coverage, etc...). Finally, you can have the MSBuild script that calls MSTest also call this coverage console app. That allows you to add code coverage to your automated builds.

 

[UPDATE 11/5/2008]

By popular request, here is the source code for the console app: 

 

[UPDATE 12/19/2008] - I modified the code to handle the error: "Error when querying coverage data: Invalid symbol data for file {0}"

 

Basically, we needed to put the data.coverage and the dll/pdb in the same directory.

 

using System;
 using System.Collections.Generic;
 using System.Linq;
 using System.Text;
 using Microsoft.VisualStudio.CodeCoverage;
 
 using System.IO;
 using System.Xml;
 
 //Helpful article: http://blogs.msdn.com/ms_joc/articles/495996.aspx
 //Need to reference "Microsoft.VisualStudio.Coverage.Analysis.dll" from "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies"
 
 
 namespace CodeCoverageHelper
 {
   class Program
   {
 
 
 
     static int Main(string[] args)
     {
       if (args.Length < 2)
       {
         Console.WriteLine("ERROR: Need two parameters:");
         Console.WriteLine(" 'data.coverage' file path, or root folder (does recursive search for *.coverage)");
         Console.WriteLine(" Policy xml file path");
         Console.WriteLine(" optional: display only errors ('0' [default] or '1')");
         Console.WriteLine("Examples:");
         Console.WriteLine(@" CodeCoverageHelper.exe C:\data.coverage C:\Policy.xml");
         Console.WriteLine(@" CodeCoverageHelper.exe C:\data.coverage C:\Policy.xml 1");
 
         return -1;
       }
 
       //string CoveragePath = @"C:\Tools\CodeCoverageHelper\Temp\data.coverage";
       //If CoveragePath is a file, then directly use that, else assume it's a folder and search the subdirectories
       string strDataCoverageFile = args[0];
       //string CoveragePath = args[0];
       if (!File.Exists(strDataCoverageFile))
       {
         //Need to march down to something like:
         // TestResults\TimS_TIMSTALL2 2008-09-15 13_52_28\In\TIMSTALL2\data.coverage
         Console.WriteLine("Passed in folder reference, searching for '*.coverage'");
         string[] astrFiles = Directory.GetFiles(strDataCoverageFile, "*.coverage", SearchOption.AllDirectories);
         if (astrFiles.Length == 0)
         {
           Console.WriteLine("ERROR: Could not find *.coverage file");
           return -1;
         }
         strDataCoverageFile = astrFiles[0];
       }
 
       string strXmlPath = args[1];
 
       Console.WriteLine("CoverageFile=" + strDataCoverageFile);
       Console.WriteLine("Policy Xml=" + strXmlPath);
 
 
       bool blnDisplayOnlyErrors = false;
       if (args.Length > 2)
       {
         blnDisplayOnlyErrors = (args[2] == "1");
       }
 
       int intReturnCode = 0;
       try
       {
         //Ensure that data.coverage and dll/pdb are in the same directory
         //Assume data.coverage in a folder like so:
         // C:\Temp\ApplicationBlocks10\TestResults\TimS_TIMSTALL2 2008-12-19 14_57_01\In\TIMSTALL2
         //Assume dll/pdb in a folder like so:
         // C:\Temp\ApplicationBlocks10\TestResults\TimS_TIMSTALL2 2008-12-19 14_57_01\Out
         string strBinFolder = Path.GetFullPath(Path.Combine(Path.GetDirectoryName(strDataCoverageFile), @"..\..\Out"));
         if (!Directory.Exists(strBinFolder))
           throw new ApplicationException( string.Format("Could not find the bin output folder at '{0}'", strBinFolder));
         //Now copy data coverage to ensure it exists in output folder.
         string strDataCoverageFile2 = Path.Combine(strBinFolder, Path.GetFileName(strDataCoverageFile));
         File.Copy(strDataCoverageFile, strDataCoverageFile2);
 
         Console.WriteLine("Bin path=" + strBinFolder);
         intReturnCode = Run(strDataCoverageFile2, strXmlPath, blnDisplayOnlyErrors);
       }
       catch (Exception ex)
       {
         Console.WriteLine("ERROR: " + ex.ToString());
         intReturnCode = -2;
       }
 
       Console.WriteLine("Done");
       Console.WriteLine(string.Format("ReturnCode: {0}", intReturnCode));
       return intReturnCode;
 
     }
 
     private static int Run(string strDataCoverageFile, string strXmlPath, bool blnDisplayOnlyErrors)
     {
       //Assume that datacoverage file and dlls/pdb are all in the same directory
       string strBinFolder = System.IO.Path.GetDirectoryName(strDataCoverageFile);
 
       CoverageInfoManager.ExePath = strBinFolder;
       CoverageInfoManager.SymPath = strBinFolder;
       CoverageInfo myCovInfo = CoverageInfoManager.CreateInfoFromFile(strDataCoverageFile);
       CoverageDS myCovDS = myCovInfo.BuildDataSet(null);
 
       //Clean up the file we copied.
       File.Delete(strDataCoverageFile);
 
       CoveragePolicy cPolicy = CoveragePolicy.CreateFromXmlFile(strXmlPath);
 
       //loop through and display results
       Console.WriteLine("Code coverage results. All measurements in Blocks, not LinesOfCode.");
 
       int TotalClassCount = myCovDS.Class.Count;
       int TotalMethodCount = myCovDS.Method.Count;
 
       Console.WriteLine();
 
       Console.WriteLine("Coverage Policy:");
       Console.WriteLine(string.Format(" Class min required percent: {0}%", cPolicy.MinRequiredClassCoveragePercent));
       Console.WriteLine(string.Format(" Method min required percent: {0}%", cPolicy.MinRequiredMethodCoveragePercent));
 
       Console.WriteLine("Covered / Not Covered / Percent Coverage");
       Console.WriteLine();
 
       string strTab1 = new string(' ', 2);
       string strTab2 = strTab1 + strTab1;
 
 
       int intClassFailureCount = 0;
       int intMethodFailureCount = 0;
 
       int Percent = 0;
       bool isValid = true;
       string strError = null;
       const string cErrorMsg = "[FAILED: TOO LOW] ";
 
       //NAMESPACE
       foreach (CoverageDSPriv.NamespaceTableRow n in myCovDS.NamespaceTable)
       {
         Console.WriteLine(string.Format("Namespace: {0}: {1} / {2} / {3}%",
           n.NamespaceName, n.BlocksCovered, n.BlocksNotCovered, GetPercentCoverage(n.BlocksCovered, n.BlocksNotCovered)));
 
         //CLASS
         foreach (CoverageDSPriv.ClassRow c in n.GetClassRows())
         {
           Percent = GetPercentCoverage(c.BlocksCovered, c.BlocksNotCovered);
           isValid = IsValidPolicy(Percent, cPolicy.MinRequiredClassCoveragePercent);
           strError = null;
           if (!isValid)
           {
             strError = cErrorMsg;
             intClassFailureCount++;
           }
 
           if (ShouldDisplay(blnDisplayOnlyErrors, isValid))
           {
             Console.WriteLine(string.Format(strTab1 + "{4}Class: {0}: {1} / {2} / {3}%",
               c.ClassName, c.BlocksCovered, c.BlocksNotCovered, Percent, strError));
           }
 
           //METHOD
           foreach (CoverageDSPriv.MethodRow m in c.GetMethodRows())
           {
             Percent = GetPercentCoverage(m.BlocksCovered, m.BlocksNotCovered);
             isValid = IsValidPolicy(Percent, cPolicy.MinRequiredMethodCoveragePercent);
             strError = null;
             if (!isValid)
             {
               strError = cErrorMsg;
               intMethodFailureCount++;
             }
 
             string strMethodName = m.MethodFullName;
             if (blnDisplayOnlyErrors)
             {
               //Need to print the full method name so we have full context
               strMethodName = c.ClassName + "." + strMethodName;
             }
 
             if (ShouldDisplay(blnDisplayOnlyErrors, isValid))
             {
               Console.WriteLine(string.Format(strTab2 + "{4}Method: {0}: {1} / {2} / {3}%",
                 strMethodName, m.BlocksCovered, m.BlocksNotCovered, Percent, strError));
             }
           }
         }
       }
 
       Console.WriteLine();
 
       //Summary results
       Console.WriteLine(string.Format("Total Namespaces: {0}", myCovDS.NamespaceTable.Count));
       Console.WriteLine(string.Format("Total Classes: {0}", TotalClassCount));
       Console.WriteLine(string.Format("Total Methods: {0}", TotalMethodCount));
       Console.WriteLine();
 
       int intReturnCode = 0;
       if (intClassFailureCount > 0)
       {
         Console.WriteLine(string.Format("Failed classes: {0} / {1}", intClassFailureCount, TotalClassCount));
         intReturnCode = 1;
       }
       if (intMethodFailureCount > 0)
       {
         Console.WriteLine(string.Format("Failed methods: {0} / {1}", intMethodFailureCount, TotalMethodCount));
         intReturnCode = 1;
       }
 
       return intReturnCode;
     }
 
     private static bool ShouldDisplay(bool blnDisplayOnlyErrors, bool isValid)
     {
       if (isValid)
       {
         //Is valid --> need to decide
         if (blnDisplayOnlyErrors)
           return false;
         else
           return true;
       }
       else
       {
         //Not valid --> always display
         return true;
       }
     }
 
     private static bool IsValidPolicy(int ActualPercent, int ExpectedPercent)
     {
       return (ActualPercent >= ExpectedPercent);
     }
 
    private static int GetPercentCoverage(uint uintCovered, uint uintNotCovered)
    {
      uint uintTotal = uintCovered + uintNotCovered;
      if (uintTotal == 0)
        return 0; //Guard against divide-by-zero when an item has no blocks at all
      return Convert.ToInt32(100.0 * (double)uintCovered / (double)uintTotal);
    }
   }
 }

 

Tuesday, September 16, 2008

Next LCNUG on Sept 25 about ASP.NET Dynamic Data

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/next_lcnug_on_sept_25_about_aspnet_dynamic_data.htm]

The next LCNUG is coming up (located at the College of Lake County, IL). Josh Heyse will present on ASP.Net Dynamic Data on September 25, 2008, at 6:30 PM.

 

This is a great place to meet other developers and learn from the community.

Monday, September 15, 2008

The rewards from struggling to learn something

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/the_rewards_from_struggling_to_learn_something.htm]

In the movie Kingdom of Heaven, a knight, Godfrey de Ibelin, is instructing his son, Balian, about a sacred oath. At the very end, Godfrey slaps his son in the face and says "and that is so you will remember it [the oath]." The thinking is that people remember hardships. We remember when we've been wronged, certainly when someone slaps us in the face, and when we struggle to figure something out. The same thing applies to overcoming technical problems. When the computer "slaps you" in the face, you remember it. Maybe it's spending 5 hours for one line of code, redoing a huge component, or just throwing up your hands in despair, only to be rescued by an explanation from another developer. There's something about the struggle that really makes a lesson sink in.

 

Sure, of course there's no benefit to making something unnecessarily complex. Sometimes a technology is too new and unstable, and it's not worth the struggle yet. However, most work will always require you to push through. Here are several benefits of pushing through the struggle to learn something:

  • It teaches you to be the one who breaks new ground, especially for new or unsolved problems. As every new wave of technology comes out, all the cutting-edge developers (the MVPs, authors, speakers, top bloggers, architects, etc...) are working overtime to digest it, expand their brains, and work through the kinks. It is a constant struggle.

  • It makes you truly understand the topic. If you just passively ask, you'll forget. If you actively figure it out yourself, re-deriving the algorithm or writing the code, then you'll get it.

  • It makes you appreciate how hard it is for those "experts" to have the answers. Sure, some people are just smart and tough concepts come naturally to them. But most of these people are burning the midnight oil. The only reason that your local star can explain off the top of her head why you'll have a multi-threading error in your instance method is because she probably spent hours (when no one was watching) suffering through multi-threading bugs herself. The expert may be able to answer your question in 10 seconds - but that's only because they happened to spend hours studying it beforehand. The coworker who goes around trivializing others' work ("Oh, that's just scaling out the database to multiple servers") is usually the one who's never had to do it themselves. They've seen others do it, so they know it's possible - but they think it's just another standard task.

  • It spares you from wasting other people's time. Someone who always just asks other people the moment an issue comes up is wasting their co-workers' time. If you haven't spent at least 1 minute googling it, you don't deserve to know it. Sure, you can confirm your findings with others, but at least take a stab yourself. It reminds me of a computer science class where a student posted to the forums asking "will 'int i = null' compile?" Obviously, they could have just figured that out for themselves by typing it in the IDE and trying - and it would have been a lot faster too.

  • It encourages you to extend grace to others. Developer estimates are notoriously inaccurate. Often this is because developers come across an unexpected problem. If you're that developer, and you get burned yourself, then you understand what it's like when it happens to others, and you can be empathetic in a way that builds the team chemistry.

  • It gives you experience with which to make your own code simpler for others. If you lost half your day tracking down a case-sensitivity bug in a string comparison, you'll be sure to write your own code to be case-insensitive where applicable. Suffering at the hands of a poorly written component can guide you on what to do better.

  • It's inevitable, so get used to it. The goal of a developer is to solve technical problems, and problems often entail a struggle.

Think of it like you're training for a marathon. Every struggle is a training session that builds up your perseverance, your endurance, and your respect for your coworkers.

 

I realize in today's "give it to me now" world, the whole point is to avoid struggling. And of course there's no benefit to struggling just for the sake of struggling. However, nothing in life - when you're actually the one doing it - is ever just simple. People who do practical house or yard work see this all the time. "It's just tearing down the wallpaper" becomes an eight-hour ordeal as you find yourself needing to steam the paper, tear it off shred by shred, scrape off the adhesive, and plaster in the gashes.

 

There's an old saying: "Anything worth having is worth fighting for." Without the guts to push through problem areas, you'll find yourself surrendering to every problem - and because the whole point of development is to constantly solve problems, that will forever block you from getting very far.

Monday, September 8, 2008

Code Generation templates act as your Documentation

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/code_generation_templates_act_as_your_documentation.htm]

On every project, there's an endless list of things you're "supposed" to do, like documentation. I've personally never met a developer who enjoys this, so it usually gets shuffled to the side. One way to simplify documentation is to code generate what you can, because then the code generation template becomes your documentation.

Here's an example: I once was on a project where we had a large amount of base SQL data - things like populating dropdowns, security info, and config settings. Initially this was all done with hard-coded SQL files using tons of insert and update statements. Of course, to explain how to write all those inserts and updates, which tables they should alter, and what values they should set, there were tons of tedious MS Word docs. Developers would constantly write failing SQL scripts, then recheck the documentation, fix their scripts, and keep trying until it worked. People always had questions like "Should the 'CodeId' column be set to the primary key from the 'RoleLookup' table?" Technically, it "worked" - it was just slow, brittle, and costing the business money.

Then, we tried migrating all those scripts over to xml files that got actively code generated into SQL scripts. All of a sudden, in order to actually write those templates, we needed to understand what the rules were. What tables, columns, and rows should be affected? What does an attribute like "active=true" translate into for a SQL script? So, this forced us to flesh out all the rules. As time went on, when developers had questions, they could see all the rules "documented" in the code of the generation template (the *.cst file in CodeSmith). People essentially would say: "What happens if I set the entityScope attribute to zero? Oh, the code generator template specifically checks for that and will create this special insert in TableX when that happens. Ah, I see..."
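
To make that concrete, here is a minimal sketch of the idea - not the actual CodeSmith template from that project (the XML shape, table name, and column names below are all hypothetical) - showing how the generator itself spells out the rules for turning a declarative XML row into a SQL insert:

      //using System.Text;
      //using System.Xml.Linq;
      //Hypothetical input: <seedData><lookup table="RoleLookup" code="Admin" active="true" /></seedData>
      public static string GenerateLookupSql(string xml)
      {
        StringBuilder sb = new StringBuilder();
        foreach (XElement row in XElement.Parse(xml).Elements("lookup"))
        {
          //The "rules" live right here: which table and columns get touched, and how
          //an attribute like active="true" translates into a column value.
          string table = (string)row.Attribute("table");
          string code = (string)row.Attribute("code");
          bool active = (bool?)row.Attribute("active") ?? true;

          sb.AppendLine(string.Format(
            "INSERT INTO {0} (Code, IsActive) VALUES ('{1}', {2})",
            table, code.Replace("'", "''"), active ? 1 : 0));
        }
        return sb.ToString();
      }

A developer wondering what active="true" actually means can read that template instead of a Word doc.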

Of course there are many other benefits of code generation, but I think this is an often overlooked one: generating the code from an xml file forces you to understand what you're actually doing. If you need to "explain" how to write code to a code generator, then anyone else can look at that explanation too. This is a similar principle to writing self-documenting code.

This also hit home after I wrote a UI page code generator that was 3000 lines long. I realized that in order for a developer to create a UI page for that project, they essentially needed to (manually) reproduce whatever that template was doing, which meant that they had to understand 3000 lines of trivia. Anyone with questions on producing basic UI pages could always "look up" how the code generator did it.

Sunday, September 7, 2008

Chicago - Codeapalooza was great

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/chicago__codeapalooza_was_great.htm]

I had the good fortune to be able to attend the Chicago Codeapalooza this last Saturday. It was an all-around success - about 30 speakers spread across five comprehensive tracks, hundreds of developers, vendor booths, good community interaction, and more. I'm proud that Paylocity could be a sponsor. And the event was free.

I've always thought that the big thing about user groups isn't necessarily the tracks (which were good), but meeting other developers in the community. The talent level is always humbling. Several companies were actively recruiting - much better to meet a recruiter in person at a professional event than as one of a thousand random resumes online.

Our demanding field requires continual education, and free, local, interactive events like this are one of the best ways to break the ice, followed up with your own personal study.

Tuesday, September 2, 2008

Collecting machine information using WMI

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/collecting_machine_information_using_wmi.htm]

Like most developers, I want a fast machine. And like most developers, despite my best efforts, I inevitably find myself coding on a machine that isn't quite fast "enough". One of the ways I'm trying to track this down is by making my own simple performance benchmark tool (if you know of a tool that works great for you, please feel free to suggest it). And as long as such a tool is running benchmarks, I'd like it to collect relevant machine information like memory, disk space, etc.

 

Thanks to Windows Management Instrumentation (WMI), this actually ended up being easier than I thought. There are a lot of different ways to call the API; here's one example (I saw this template somewhere else online, but cannot remember where):

 

      //Add a reference to the "System.Management" assembly, then import the namespace:
      //using System.Management;

      //A small helper that runs a WMI query and returns the first matching property value:
      private static object GetWmiProperty(string wmiQuery, string wmiPropertyName)
      {
        ManagementObjectSearcher mos =
          new ManagementObjectSearcher(wmiQuery);

        foreach (ManagementObject mob in mos.Get())
        {
          foreach (PropertyData pyd in mob.Properties)
          {
            if (pyd.Name == wmiPropertyName)
              return pyd.Value;  //May need to convert the type
          }
        }

        return null; //Property not found
      }

 

where "WMI_query" is your select query, and "WMI_Property_Name" is the name of the property you want to return. The gist is that you can write WMI queries (a simple string), much like SQL queries, to get machine information. Some sample queries could be:

  • "SELECT Free_Physical_Memory FROM Win32_OperatingSystem"

  • "SELECT Total_Physical_Memory FROM Win32_ComputerSystem"

  • "SELECT Free_Space,Size,Name from Win32_LogicalDisk where DriveType=3"

  • "SELECT Name FROM Win32_Processor" //I've seen this occasionally give problems, someone suggested I try the advice from here instead.

Here's a quick tutorial.

Here's the official MSDN reference for WMI queries. There are hundreds of things to query on, so a lot of it is knowing which query to write.

 

Note that these queries can be run from other scripting languages, like VBScript, so it's not just a C# thing. I'd expect this to be very popular with most IT departments.

 

It's also useful if you want to make a quick tool on your build servers to detect free disk space, have a scheduled task kick it off every night, and then log the results to a database. If you have a distributed build system (i.e. several servers are required for all your different builds), then it's a nice-to-have.
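
As a rough sketch of that idea (this version just appends to a text file rather than logging to a database, and the log path is made up):

      //Append a timestamped free-space reading for each local fixed disk to a log file.
      //using System.IO;
      //using System.Management;
      ManagementObjectSearcher disks = new ManagementObjectSearcher(
        "SELECT FreeSpace,Size,Name FROM Win32_LogicalDisk WHERE DriveType=3");

      foreach (ManagementObject disk in disks.Get())
      {
        string line = string.Format("{0}\t{1}\t{2} bytes free of {3}",
          DateTime.Now, disk["Name"], disk["FreeSpace"], disk["Size"]);
        File.AppendAllText(@"C:\BuildLogs\DiskSpace.log", line + Environment.NewLine);  //Hypothetical path
      }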

Sunday, August 31, 2008

Why I still read technical books

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_i_still_read_technical_books.htm]

Occasionally I hear a respected developer make the case against books - "Books are dead, use other online resources instead." I acknowledge that technical books do have several limitations:

  • They can be very linear, lacking the web's hyperlinks. Books can have cross references, but it's just not the same.

  • Books are possibly outdated - books take an average of 1-2 years to get published. Especially with technical books, where the rush is to be first and "catch the wave", books on newer topics may be inaccurate because they were written using the beta of the technology.

  • Books can contain information overload - you don't need every chapter in an ASP.Net book to get started (most developers never touch ASP.Net globalization).

  • As a book is ultimately printed paper, you can't get a dynamic interaction from a book - i.e. stepping through a source code demo or watching an animated demo.

  • Books physically take up space, and can sometimes be very heavy, such as when you're a consultant in the airport and you need to pack everything into a single carry-on.

  • Books can only be in one place at a time - so it can cause problems when you need it at home, but left it at the office.

  • This is lame - but sometimes you're on a project where management discourages reading technical books during office hours ("we don't want the client thinking you don't already know what you're doing, read up on that stuff at home") - however the same manager is totally comfortable with you browsing online technical websites.

However, I think everything considered, there is definitely a time and place to use books as a learning resource. Some of these weaknesses can be turned into a book's greatest strengths:

  • Books, usually a 200-600 page journey through a dozen chapters, are often more thorough than online tutorials or blog posts. They cover more topics and show the bigger picture. This gives you a more complete grasp of the topic, and inevitably makes you more confident about it.

  • Books are great for breaking the ice of a new technology because they are a step-by-step journey. You're not scrambling between misc blog posts or tutorials.

  • Especially for general topics (basic Xml, Html, C#, SQL, JS), where the entry-level knowledge is pretty stable, a book provides a good introduction and guide.

  • Because books are physical hardcopies, they're great when you just want to get away from staring at a laptop screen. Whether it's in an airplane (where there's practically not enough room to pull out a laptop), stepping outside, or even just taking advantage of 5 free minutes (good for reading a page, but barely enough time to even start up a laptop).

  • For some personality types, books provide a physical trophy: "wow, look at all those books that I've read through."

Note that books are not intended to be used as your only resource, but rather as one part of a comprehensive learning strategy. Even most tech books today are filled with online links to demos, tools, reference guides, and community groups. I benefit a lot from reading books; I think they're a great tool in one's "learning arsenal".