Thursday, January 29, 2009

LCNUG: NHibernate

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/lcnug_nhibernate.htm]

Yesterday, Robert Dusek and Hudson Akridge of GFX presented on NHibernate, a powerful object-relational mapping framework for persisting data. The meeting was a big success - we had our largest turnout, about 20 people - and there was a lot of good dialogue both before and after the presentation. We also announced progress on the new LCNUG website, including our flourishing job board (6 jobs from 3 different companies so far).

Tuesday, January 27, 2009

How to increase chances of being allowed to research

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_to_increase_chances_of_being_allowed_to_research.htm]

For any software project, there's always something new to research. Even if the flood of new technology suddenly froze, most projects would still struggle just to catch up to the existing technology. Since a lot of small to mid-size departments don't have dedicated research teams (or even research tasks), here are some ideas for subtly incorporating research items into your schedule.

  • Focus on small, concrete tasks that management cares about. For a web app, management probably cares more about researching jQuery or Silverlight than about the WinForms DataGrid, or something ambiguous like "incorporating web best practices" (what does that mean in practice?).

  • Emphasize the low-hanging fruit with the highest return - Not all research tasks are equal. A SQL static code analyzer (which benefits everyone on the team) may be far more profitable than some crusade to make sure no one uses Hungarian notation in C#.

  • Piggyback off of existing assignments. If you're implementing an ASPX page, it may be the time to investigate Ajax, jQuery, or even something smaller like JSON - you'd essentially have "the wind at your back." You could research an unrelated task, like hosting your build process on virtual machines, but you'd be doing it all alone, without the support of your current assignments.

  • Focus on just one or two things at a time. You could easily list 50 things to do - new technologies, tools, refactoring, open-source projects, controls to integrate, upgrading your framework, testing, automation, code generation, etc... If you juggle too much, it will all crash and you'll have nothing.

  • Finish what you start; actually deliver something - "A bird in the hand is worth two in the bush." For many departments, the thinking is that it's better to have a weak solution that's completed (and hence usable - i.e. you have something) than a powerful solution that's "still in progress" (and hence unusable - i.e. you have nothing). The workplace is ablaze with buzzwords. Anyone can spew forth buzzwords or suggest grandiose visions, but at the end of the day, management cares about things that are actually done.

  • Work incrementally - Management may not initially allot 4 weeks to research how Ajax benefits your web app, but you could spend a day here integrating it, a day there using an UpdatePanel, and another day later pulling in the Ajax Control Toolkit. Yes, it's slower, but it's better than nothing.

  • Establish a track record to "earn" bigger opportunities - As you gradually get research items actually completed, you'll become more credible, and will therefore probably be given more opportunity to research bigger tasks. For example, an unknown new-hire may be allowed to "explore" for a day, but a credible senior developer - who's already delivered many successful features - may be allowed to explore a research task for weeks.

Thursday, January 22, 2009

Real Life: Taking the fridge door off

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/real_life_taking_the_fridge_door_off.htm]

Fixing a normal squeaky door is relatively easy - just tap out the pin that joins the hinges, oil it, tap it back in, and... no more squeaky hinge. After the 20th such hinge, I got the hang of it. So, whenever my wife says the hallway doors are squeaking (translation: "please fix them"), I look forward to an easy task. However, the other day she mentioned that the fridge door was squeaking. Now that was an issue. Taking off one hinge at a time is easy for a hallway door - but fridge doors aren't built like hallway doors, so you need to take the entire door off. Taking off the entire fridge door is hard (at least for me). But, alas, there was no other way. So, I got out the necessary tools, took the entire fridge door off, oiled the hinge pin, and... it too stopped squeaking! The moral: just like in software development, people often need to take one step back before taking two steps forward. Maybe that means throwing away precious code, reading a long article instead of jumping to a quick fix, or writing a unit test harness. The current problem might require you to "take the fridge door off."

Monday, January 19, 2009

Ajax Issue: "The history state must be small enough to not make the url larger than 1024"

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/ajax_issue_the_history_state_must_be_small_enough_to_not_m.htm]

I ran into a really weird Ajax bug the other day. I was working on an ASP.Net website and got this cryptic error whenever I clicked a button that posted back from within an Ajax UpdatePanel.

Microsoft JScript runtime error: Sys.InvalidOperationException: The history state must be small enough to not make the url larger than 1024

Then, on re-clicking a tab, you'd get another Ajax error:

Microsoft JScript runtime error: Object doesn't support this property or method
At this line:

if (element.tagName && (element.tagName.toUpperCase() === "SCRIPT"))

In this specific case, element.tagName didn't have the toUpperCase() method. It had all worked before, so everything seemed strange. To make a long story short, it appeared to be a problem from installing Visual Studio 2008 SP1. We had installed the old Ajax toolkit, and the new service pack updated the System.Web.Extensions.dll assembly in the GAC, which produced different script resource files.

Old (which worked):

  • Assembly Version: 3.5.0.0
  • File Version: 3.5.21022.8
  • Location: C:\MyProject\System.Web.Extensions.dll

New (which did not work):

  • Assembly Version: 3.5.0.0
  • File Version: 3.5.30729.1
  • Location: C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Web.Extensions.dll


It worked on some machines and not others because some machines had SP1 installed and others did not. The new version is installed in the GAC, so the web app would always reference it. Note that both copies had the exact same assembly identity (assembly version 3.5.0.0) but different file versions.
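
As a quick sanity check, a small diagnostic like this (my own hypothetical sketch, not part of the original troubleshooting) shows exactly which copy of the assembly a process actually loads - the assembly version matches in both cases, but the file version and path give it away:

using System;
using System.Diagnostics;
using System.Reflection;

class AssemblyCheck
{
    static void Main()
    {
        // Locate System.Web.Extensions via a type it contains
        // (requires a project reference to System.Web.Extensions).
        Assembly asm = typeof(System.Web.UI.ScriptManager).Assembly;

        // Both copies report assembly version 3.5.0.0...
        Console.WriteLine("Assembly version: " + asm.GetName().Version);

        // ...but the file version and path reveal which copy was loaded.
        FileVersionInfo info = FileVersionInfo.GetVersionInfo(asm.Location);
        Console.WriteLine("File version: " + info.FileVersion);
        Console.WriteLine("Loaded from: " + asm.Location);
    }
}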

Eventually, to get things working, we just uninstalled the new one from the GAC. (I guess if we'd had sufficient time, we'd have figured out how to make the app play nice with the new DLL.) However, this was tricky because we couldn't just do a normal Windows uninstall. Running:

gacutil /uf System.Web.Extensions

will return:

Assembly could not be uninstalled because it is required by Windows Installer
Number of assemblies uninstalled = 1
Number of failures = 0

This blog post explained that this is essentially a bug, and that we can fix it from the registry:

http://blogs.msdn.com/alanshi/archive/2003/12/10/42690.aspx


In the registry, go to this key:

[HKLM\SOFTWARE\Classes\Installer\Assemblies\Global]

and find this item:

System.Web.Extensions,version="3.5.0.0",publicKeyToken="31bf3856ad364e35",processorArchitecture="MSIL",fileVersion="3.5.30729.1",culture="neutral"

and remove its data, so that in the "Edit Multi-String" dialog box, the "Value data" textbox is empty.

Now, you can remove it from the GAC by running:

gacutil /uf System.Web.Extensions

And confirm removal:

gacutil /l System.Web.Extensions

NOTE: you may need to copy System.Web.Extensions.dll to your web app's bin folder and recompile the solution in VS.


Lastly, to make all the other websites still work, re-install the old DLL:

gacutil /i C:\MyProject\System.Web.Extensions.dll

You should now be able to hit postbacks within Ajax update panels without errors.

Sunday, January 18, 2009

You're not a real dev unless you've read this book...

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/youre_not_a_real_dev_unless_youve_read_this_book.htm]

Every now and then some amazing book comes out. And then come the people who insist that "you're not a real developer" unless you've read that book. I was reminded of this while reading Amazon reviews for some of the hot books out there. While there are core competencies that every dev should know, there are also a lot of fringe topics, and multiple books on the same topic. And while a lot of these things are valuable, I think such an exclusive approach is damaging because it emphasizes not what you know and can apply ("can you write code with design patterns?"), but rather what you've read.

For example, of course the GoF design patterns book is phenomenal. However, is it really that bad if someone read the C# translation instead (Design Patterns in C# by Steven John Metsker), or even skipped the books and went with purely online tutorials? I'd expect a "senior developer" to know what a design pattern is, recognize the buzzwords, and know how to apply them. If they got to that point by a different path than "normal," I think that's okay. Part of the problem is that no one can read it all, so this mindset effectively encourages bluffing - developers buy classic books and display them on their bookshelf like trophies, and are afraid to let on about their shortcomings for fear of being rejected.

In short, I think it's far more effective to offer positive criteria like "developers on this team must be fluent in design patterns, automated testing, and writing clean code", as opposed to exclusive criteria like "you must have read book X".

Thursday, January 15, 2009

Tool: Survey Monkey

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/tool_survey_monkey.htm]

Surveys are a great way to gauge team consensus and get everyone's opinions - especially anonymous surveys. While you could use blank sheets of paper, another way is to offer a web survey. A great tool for that is Survey Monkey. I've taken several surveys with them before, but what I didn't realize is that you can set up a free account and start sending out real surveys right away - almost like setting up a free Hotmail account. Of course there are limits, and they sell packages for more advanced features and higher volume. However, for small, simple online surveys of 20 people, the free product works great.

You could ask your team all sorts of questions, like "Why don't we write more unit tests?", "Why do we break the build?", "What would it take to work twice as fast?", and the like.

Wednesday, January 14, 2009

Why good-intentioned devs might not write good unit tests

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_goodintentioned_devs_might_not_write_good_unit_tests.htm]

I'm a big fan of unit testing. A related question to "How many tests are sufficient?" is "Why don't we write good unit tests?" While I've seen some people attribute it to purely negative things like laziness, dumbness, or lack of care for code quality, I think that misses the mark. Sure, there are some devs who don't write tests for those reasons, but I think there are tons of other devs who are hard-working, smart, and do care about their work, yet still don't write good or sufficient unit tests. Calling these hard-working coworkers "dumb" isn't going to make anything better. Here are some reasons why a good-intentioned developer might not write tests.

  1. I think I already write sufficient unit tests for my code.

  2. I don't have time - the tests take too long to initially write.

  3. I don't have time - the tests take too long to maintain and/or they keep breaking.

  4. The unit tests don't really add value. It's just yet another buzzword. They don't actually catch the real errors. So it's not the best use of my time.

  5. It's so much faster to just (real quick) run through my feature manually because all the context is already there (the data, the web session, the integration with other features, etc...).

  6. My code isn't easily testable - unit tests are great for business logic in C#, but I write code other than C# (SQL, JS), or things that aren't business logic (like UI rendering), or my code is too complex for unit tests.

  7. My code isn't easily testable - there are too many dependencies and limits. For example, I can't even reference an ASP.Net CodeBehind in a unit test.

  8. The tests take too long to run (the full test suite takes about 10 minutes, even without the database tests it still takes 3 minutes).

  9. I write code that already works, so it doesn't require unit tests.

  10. My code is so simple that it doesn't need tests. For example, I'm not going to test every option in a switch-case.

  11. Sounds great, but I just don't know how to write tests for my code.

Note that I absolutely don't offer these as excuses, but rather as practical ideas to help understand a different perspective so you can improve things. For example, if someone is working on a 2-million-line project that takes 5 minutes just to compile, let alone run any sort of test, they might skip running the tests with an "I don't have time" mindset. Yes, I still think it's overall faster to write and run the tests, but at least this helps you understand their perspective so you can try to meet them halfway (perhaps improve their machine hardware, split up the solution, split up the tests, etc.). Or, if someone thinks that unit tests don't catch "real errors," then you can have a discussion with concrete examples. Either way, understanding someone's reasons for doing something will help bridge the gap.
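
For reasons #7 and #11 in particular, one practical half-step is to pull logic out of the CodeBehind into a plain class that a test project can reference. Here's a minimal sketch (the PriceFormatter class and its test are hypothetical names, just for illustration), using NUnit:

using NUnit.Framework;

// Hypothetical example: logic that used to live in a CodeBehind,
// extracted into a plain class with no dependency on the page,
// session, or request - so a test project can reference it.
public class PriceFormatter
{
    public string Format(decimal price)
    {
        return price < 0 ? "Invalid" : price.ToString("C");
    }
}

// The CodeBehind now just delegates:
//   lblPrice.Text = new PriceFormatter().Format(order.Total);

// And the extracted logic is trivially testable:
[TestFixture]
public class PriceFormatterTests
{
    [Test]
    public void Format_NegativePrice_ReturnsInvalid()
    {
        Assert.AreEqual("Invalid", new PriceFormatter().Format(-1m));
    }
}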

Tuesday, January 13, 2009

MSDN Dev Conference - Chicago

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/msdn_dev_conference__chicago.htm]

I was glad to attend the Microsoft Developer Conference yesterday in Chicago. Despite the snow, there were (I'm guessing) maybe 500 developers. It's always humbling going to these events and seeing so many top developers. Besides the usual star lineup of speakers, I also get a kick out of meeting other devs and architects in the same situation as I am. While the presentations are good, I see the main purpose of these events as networking and talking to real people face-to-face.

My key take-aways:

  • There is simply too much for any one person to learn. It doesn't matter how smart you are or how much time you think you have - you cannot "do it all" - so a critical skill for any dev lead or architect is delegating and empowering other people to innovate.

  • Team Foundation Server looks very interesting. Because we started our product back in .Net 1.0, before TFS existed, we wired up our process with an assortment of packages: CruiseControl, MSBuild (which replaced NAnt for us), an issue-tracking system, SVN, etc. So we've already got a lot of momentum there, but TFS looks very promising. I got to talk to several people, like Angela Binkowski, Paul Hacker, and others whose blogs I don't know (sorry), and they made a good case. I'll probably blog about this more later.

  • JQuery rocks.

  • There are a lot of smart developers in Chicago and the midwest region.

  • I'm hoping that some of these people will be future speakers at the LCNUG.

I was also excited to see the Lake County .Net Users Group get a huge plug after Ron Jacobs' keynote presentation.

Sunday, January 11, 2009

How many unit tests are sufficient?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_many_unit_tests_are_sufficient.htm]

By this time, every developer probably knows that, along with a hundred other tasks, they're also supposed to write unit tests. One common question from devs still new to testing is "How many tests are enough?" I think there are a couple of guidelines, some idealistic (disclaimer: I haven't personally implemented all the ideas below; it's more of a brain dump). My goal here isn't to say "as a developer, you need to add even more tasks to your already-full plate," but rather "here are practical ideas to help you know when you're done."

Automated techniques:

  1. Code coverage - Perhaps the most obvious, and most automatable, metric is code coverage. The team may agree that X% coverage is sufficient, and then have the automated build fail when new code doesn't meet that policy (see the sketch after this list).

  2. Test lines of code - Some developers aim for a certain ratio of test code to production "system-under-test" code; I've heard devs mention between 40% and 100% (one line of test code per line of production code). I'm personally not sure how well this works out, or whether it couldn't be captured more effectively by code coverage.
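
As a sketch of how such a build policy might be enforced (everything here - the report format, the XPath, the threshold - is hypothetical, since each coverage tool emits its own schema), a small console check can fail the build when coverage drops below the agreed number:

using System;
using System.Xml;

// Hypothetical build step: exit non-zero (failing the CI build) if
// overall coverage falls below the team's threshold. Adapt the XPath
// to whatever your coverage tool actually emits.
class CoverageGate
{
    static int Main(string[] args)
    {
        const double threshold = 70.0;          // team policy: 70%

        XmlDocument doc = new XmlDocument();
        doc.Load(args[0]);                      // e.g. coverage.xml
        XmlNode node = doc.SelectSingleNode("/coverage/@percent");
        double actual = double.Parse(node.Value);

        Console.WriteLine("Coverage: {0}% (threshold: {1}%)", actual, threshold);
        if (actual < threshold)
        {
            Console.Error.WriteLine("Build failed: coverage below threshold.");
            return 1;                           // CI treats non-zero as failure
        }
        return 0;
    }
}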

Manual "code smell" techniques:

  1. The logic is fleshed out - Unit tests are great for checking boundary conditions, different code paths, various inputs, etc. If you don't have enough tests to catch the basic logic, then you don't have enough tests (see the sketch after this list).

  2. The logic is documented - Related to the concept above, one of the benefits of unit tests is to document the code, especially the edge case conditions (what happens if this input array is null, what if I pass in a negative int, etc...). Ideally there are sufficient tests such that the common boundary cases are easily exposed and documented for someone who is reviewing the code.

  3. Will the tests catch errors? - Ideally, there is sufficient test coverage such that a test will fail if another developer "accidentally" breaks your code.

  4. Be able to write new tests - Ideally, there is enough testing infrastructure that you can write a new test for every logic error that arises, even if a component has little code coverage. Often, having to write even a single unit test forces you to think through the component enough that you could write more tests if you had to.
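
To make points 1 and 2 concrete, here's a minimal sketch (MathUtil and its tests are hypothetical) where the tests catch the basic logic and document the edge cases at the same time:

using NUnit.Framework;

// Hypothetical system under test.
public static class MathUtil
{
    public static int Sum(int[] values)
    {
        if (values == null) return 0;   // edge case, documented by a test below
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }
}

// The tests double as documentation: a reviewer can read the test names
// to learn how null, empty, and negative inputs behave.
[TestFixture]
public class MathUtilTests
{
    [Test]
    public void Sum_NullArray_ReturnsZero()
    {
        Assert.AreEqual(0, MathUtil.Sum(null));
    }

    [Test]
    public void Sum_EmptyArray_ReturnsZero()
    {
        Assert.AreEqual(0, MathUtil.Sum(new int[0]));
    }

    [Test]
    public void Sum_NegativeValues_AreIncluded()
    {
        Assert.AreEqual(-3, MathUtil.Sum(new[] { -1, -2 }));
    }
}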

Thursday, January 8, 2009

How to discourage a developer from working overtime

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/how_to_discourage_a_developer_from_working_overtime.htm]

A while back I pondered what it would take to motivate a developer to work overtime. I was thinking about the flipside of that - what would discourage a developer from working overtime?

  • Constantly change the feature on them - This can be like pulling the rug out from under their feet. I saw this all the time in consulting - for some projects, everything was "of absolute importance." People get burnt out and stop being motivated. After all, why waste my evening plowing through a feature if the whole thing is just going to be scrapped tomorrow at some executive's whim?
  • Assign boring tasks - This speaks for itself.
  • Provide slow hardware - Not having the proper tools to do your job is just demoralizing. Imagine your manager with a slow laptop - would they wait 60 seconds while their machine freezes when they try to send a single email, or wait 10 seconds every time they click a new cell in Excel? Of course not - they'd get furious about how such a slow machine prevents them from doing their work effectively. Same thing for developers: every time a laptop freezes while compiling, getting source code, or running tests, it demoralizes and frustrates the developer. Yes, savvy developers can optimize their machines, but at the end of the day, the .Net development environment has certain hardware needs. For example, it's just wasting a developer's time to ask them to work on a machine with only 1 GB of RAM, a 5400 RPM hard drive, or a 1 GHz processor. They'll sit idle throughout the day, constantly losing their rhythm. The manager saves a few hundred bucks, but both demoralizes the developer and diminishes the return on a $100,000 resource (total cost of the developer = salary + benefits + other stuff HR could tell you about). It's an absolutely clueless business model.
  • Never reward positive accomplishments - Management can offer "non-monetary" rewards like verbal affirmation, or allotting schedule time to pursue a promising research project.
  • Waste their time during the normal work day - If a developer already "wastes" time due to excess meetings, pointless issues, rework from an originally bad design, or waiting on a slow machine, why would they spend their own evening to "make that time up" - time that should never have been taken from them in the first place?
  • Assign them to a "sinking ship" project - Some projects are fundamentally screwed - the core architecture is hopelessly lost, or there's already a run-away bug list, or the spec is unstable (or even contradictory). There's little motivation to work on this kind of suicide project.
  • Have them do a task the hard way because the manager won't pay for the proper tools - For example, have a developer spend 100 hours writing an Ajax datagrid when you could buy third-party controls for much less. Or, have a developer scour through thousands of lines of database plumbing instead of using a code generator or ORM.

The irony of it all is that the rich get richer and the poor get poorer - i.e. A good environment will motivate the developers to work overtime (or at least be more productive during the day), hence getting farther ahead. Whereas a bad environment will constantly demoralize, frustrate, and slow down its developers, thus getting farther behind.

This is just a partial list - anything to add? What sort of things discourage you from working overtime?

Monday, January 5, 2009

Benefits of writing a new tool

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/benefits_of_writing_a_new_tool.htm]

I'm a big tool fan. I especially get a kick out of writing my own custom tools - when the tool I need doesn't exist yet. Here are some benefits of writing your own, new tools:

  • Very practical - A custom tool can fill a very practical niche and let you complete a task much faster than the alternative. Usually, if some tedious task takes more than an hour and I can write the tool in an hour, I'll write the tool to do it (see the sketch after this list).

  • You can consume it - There's something special about seeing your own code in action, especially when it's sparing you from a lot of tedious grunt work. I still get a kick out of running the MassDataHandler tool which makes it trivial to insert test SQL data.

  • Small scale - If you're working on large (legacy) applications, whipping out a small tool can be refreshing. Some custom "throw-away" tools can be as quick as an hour to write.

  • Huge variety - It's easy to get pigeonholed into a specific technology or application framework. Writing a tool lets you explore areas of .Net that you might never get the chance to look at otherwise. For example, some application developers might never touch reflection, diagnostics, threading (!), networking, or even streams, because their application framework (usually written by someone else) abstracts those all away.

  • You can share it with the world - Tools are usually isolated and self-contained (i.e. not glued to a huge framework), and hence easy to share. They are often business-independent, so you can open-source a development tool without revealing any business secrets.
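
As an example of how small such a tool can be, here's a sketch of a one-hour "throw-away" tool (hypothetical, but representative) that lists every TODO comment in a source tree:

using System;
using System.IO;

// A hypothetical hour-long tool: list every TODO comment in a source
// tree so nothing gets forgotten before a release.
class TodoFinder
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";

        foreach (string file in Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories))
        {
            string[] lines = File.ReadAllLines(file);
            for (int i = 0; i < lines.Length; i++)
            {
                if (lines[i].Contains("TODO"))
                    Console.WriteLine("{0}({1}): {2}", file, i + 1, lines[i].Trim());
            }
        }
    }
}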

Sunday, January 4, 2009

Developer Scrabble

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/developer_scrabble.htm]

My wife and I were playing a game of Scrabble the other day, and she was cleaning my clock. I kept staring at my random set of vowels and consonants, thinking, "Couldn't I please just use developer buzzwords and acronyms?" I'd bet the board would end up looking something like this:

[Image: a Scrabble board filled with developer acronyms]

Some random things I noticed:

  • Normal English words and developer buzzwords use very different letters. For example, normal English words always have vowels, and usually have lots of E's and A's, while having very few X's and Q's. Developer buzzwords seem the exact opposite (think SQL, LINQ, XML, XAML, AJAX, etc...).

  • Acronyms by their nature are short, so there are lots of 3- and 4-letter buzzwords.

  • Because developer buzzwords use the rare (i.e. high point-value) letters like X and Q, you could really rack up a high score (see the scoring sketch below).
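
Just for fun, here's a quick sketch that scores buzzwords using the standard Scrabble letter values (blank tiles and board multipliers ignored):

using System;
using System.Collections.Generic;

// Score developer buzzwords with standard Scrabble letter values.
class BuzzwordScrabble
{
    static readonly Dictionary<char, int> Points = new Dictionary<char, int>
    {
        {'A',1},{'B',3},{'C',3},{'D',2},{'E',1},{'F',4},{'G',2},{'H',4},
        {'I',1},{'J',8},{'K',5},{'L',1},{'M',3},{'N',1},{'O',1},{'P',3},
        {'Q',10},{'R',1},{'S',1},{'T',1},{'U',1},{'V',4},{'W',4},{'X',8},
        {'Y',4},{'Z',10}
    };

    static int Score(string word)
    {
        int total = 0;
        foreach (char c in word.ToUpper()) total += Points[c];
        return total;
    }

    static void Main()
    {
        // AJAX = 18, LINQ = 13, XAML = 13 -- versus, say, TESTER = 6.
        foreach (string word in new[] { "AJAX", "LINQ", "XAML", "TESTER" })
            Console.WriteLine("{0} = {1}", word, Score(word));
    }
}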