Sunday, May 31, 2009

What is the difference between an average dev and a star?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/what_is_the_difference_between_an_average_dev_and_a_star.htm]

Time and again, experience keeps re-affirming to me that a star developer is far more productive than several average developers combined. It raises the question - what is the difference between a novice, average, and star developer? The naive thinking is that they're all the same kind and just vary by degree - i.e. that the star produces more code at a faster rate, with fewer bugs. But there's so much more to it than that. The star developer is a fundamentally different kind. Here's a brain dump of some possible differences between star developers and average devs. (This also relates to the Dreyfus model of skill acquisition: http://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition)

A star developer:

  1. Mitigates risk
  2. Writes higher quality code
  3. Predicts potential problems, and doesn't design the app into a corner
  4. Addresses performance, security, maintenance
  5. Makes their teammates better
  6. Comes up to speed quicker
  7. Juggles multiple projects
  8. Troubleshoots and figures out answers themselves
  9. Does self correction
  10. Handles non-technical constraints (like a political monkey wrench)
  11. Can provide reliable estimates
  12. Interfaces with non-technical folks (Managers, Business Sponsors); understands the business side
  13. Throws away bad code; knows when to restart
  14. Creates reusable code that helps others
  15. Has influence beyond their immediate project
  16. Desires to grow
  17. Can work for longer periods without immediate and visible success
  18. Can compress the schedule, such as doubling productivity.
  19. Can coordinate other developers, even informally and not as a tech lead.
  20. Can set up and improve process
  21. Understands the concepts beyond syntax. Average devs crash when the technical road takes a curve with a new language; stars know the concepts and don't get thrown for a loop.
  22. Knows when the rules do not apply
  23. Knows where they want to go; can describe their career path.
  24. Is continually learning and improving.
  25. Can point to star developers and industry leaders. If you're developing ASP.Net and you've never even heard of Scott Guthrie, you're probably not a star.

Note that stars do not necessarily:

As a completely miscellaneous aside, I think of a real-time-strategy game like Age of Empires - what is the difference between an average unit and a star unit? Look at an average unit like a simple soldier - it has basic properties like health points, attack, and movement. While merely upgrading the strength of those properties (more health, more attack, etc...) is great, it still fundamentally misses other powerful abilities - the ability to heal itself, a ranged attack, the ability to swim (or fly!), the ability to collect resources or construct buildings, etc... For example, a thousand normal AoE soldiers - which have zero ranged attack and cannot swim - will never be able to hit an enemy across even a 1-block-wide river.

Thursday, May 28, 2009

BOOK: Software Estimation: Demystifying the Black Art

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_software_estimation_demystifying_the_black_ar.htm]

I personally have never met a developer who liked giving estimates. Indeed, it's an uphill battle. It's not just that developers don't like working on things which don't compile. It's also that amidst endless ambiguity, changing scope, and technical challenges, some boss pressures you into a specific date when a non-specific feature will be fully complete (with no bugs, of course). However, every aspiring tech lead must eventually fight this battle. Because software estimation is perceived as a "soft science", it's easy for devs to view it as some necessary tax to tolerate in order to get to the "real" work of coding. Therefore, many devs never try to improve their estimating skills, and hence there are a lot of bad techniques out there.

That's why I'm glad that superstar Steve McConnell dumped his wisdom into Software Estimation: Demystifying the Black Art. The book is filled with gems. Here are some of the key tips that I found practical:

  • "Distinguish between estimates, targets, and commitments".
  • Once you've given an estimate and schedule, control the project so that you meet the estimate (reduce scope, juggle staff, prioritize features, etc...). The schedule is not like a basketball that you toss in the air and hope that it makes the shot - rather it's like a football that you personally carry to the end zone.
  • "We are conditioned to believe that estimates expressed as narrow ranges are more accurate than estimates expressed as wider ranges." (pg. 18)
  • Steve emphasizes the "Cone of Uncertainty" - namely that initially the project has many unknowns, and hence the estimate has a much larger range. As the project progresses and more issues become known, the range shrinks. Ironically, it is at the beginning of the project (when everything is most unknown) that many bosses want a stable, singular estimate. Consider using phase-specific estimation techniques for different stages of the project - some techniques work better at the beginning with higher uncertainty, others work better near the end. Also, within the same project phase, consider using multiple estimation techniques to check for converging (and hence more reliable) estimates.
  • Always have the people who do the implementation work also provide the estimates.
  • Sometimes you can use group estimates. However, each developer should come up with their estimate separately, else the dominant person in the group will likely unduly influence everyone else (especially because most devs tend to be introverts). If the estimates differ greatly, discuss why, and iteratively re-estimate until they converge.
  • Collect historical data for past projects, which you can then use to assist with future estimates.
  • Count, Compute, Judge. If you can simply count the size somehow (lines of code, modules, etc...) - then that's best. If you cannot directly count, consider computing from things you can count. The point is you always want to avoid subjective judgments when there's a more objective alternative.
  • Try to estimate based off of objective quantities like historical data and lines of code, as opposed to subjective measurements like developer quality. The boss will heckle the subjective measurements: "Your estimate assumes only average developers, but we're paying for senior devs, so re-calculate with senior devs. Great, that saved us 1 month."
  • When the boss doesn't like the estimate, change inputs, NOT outputs. For example, don't just shave a month off your 5-month estimate because your boss will now be more receptive, rather change the inputs (such as reduce features) so that recalculating the estimate now results in 4 months.
  • Establish a standard estimation procedure for your department. By following an objective set of steps, you (the estimator) have a level of "protection" from all the business sponsors who want a lower estimate. For example, you might say "Our standard procedure, which historically returns estimates within a 20% accuracy, always adds 15% risk buffer for this type of application." Then you aren't the "bad guy" for adding the "extra" 15% (funny how anything that increases an estimate is seen as "extra", not "striving for accuracy").
  • When presenting your estimates (and schedules), don't even bother presenting nearly impossible scenarios. The boss will just assume that you'll be lucky and of course hit the most optimistic number that is mentioned.
  • Always give an honest estimate. Don't lowball the estimate just so that your boss accepts it - doing so will saddle you with an impossible schedule, which will also destroy your credibility ("you can't even hit your own estimate!"). If an honest estimate is too high, it is the business sponsor's decision to reject the project, not the developer's.
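Several of these tips reduce to simple arithmetic. As a hedged sketch, here is the classic three-point (PERT-style) expected-case formula, one technique in this family of estimation methods (the sample numbers below are invented):

```csharp
using System;

public static class Estimation
{
    // PERT-style expected case from a three-point estimate:
    // the most-likely case is weighted 4x against the best and worst cases,
    // which pulls the result toward "likely" while still respecting the tails.
    public static double ExpectedCase(double bestCase, double mostLikely, double worstCase)
    {
        return (bestCase + 4 * mostLikely + worstCase) / 6.0;
    }

    public static void Main()
    {
        // e.g. a feature estimated at best 2 weeks, most likely 4, worst 12
        Console.WriteLine(Estimation.ExpectedCase(2, 4, 12)); // 5
    }
}
```

Note how the wide worst case still drags the answer above the "most likely" guess - a small illustration of why narrow single-point estimates feel precise but hide risk.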

Lastly, consider checking out Construx Estimate.

Sunday, May 17, 2009

Tips to refactor Asp.Net codebehinds to a common base page.

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/tips_to_refactor_aspnet_codebehinds_to_a_common_base_page.htm]

In Asp.Net, it is common to have the aspx page inherit from a custom page as opposed to directly from System.Web.UI.Page. Usually the application architecture has at least one common "base page" that handles core architectural components. However, you may find that for larger apps, you want to extend your base-page hierarchy such that an entire subsystem gets its own base page (which in turn inherits from the global base page). While inheritance with normal C# classes (like in a ClassLibrary project) works great, there are some common hurdles when trying to refactor to a base page in ASP.Net codeBehinds.

Perhaps the biggest issue is the dependencies, namely:

  1. Your base page cannot directly reference any user controls. Assuming you put the base page in the App_Code folder, it gets compiled into its own separate DLL, and that DLL has no reference to the user-control DLL (the UC, however, does have a reference to App_Code).
  2. Your base page cannot directly reference the html controls instantiated in the derived-page's aspx file. For example, if your derived aspx page contains a hidden field or textbox, your base page cannot reference that.

There are ways to break these dependencies.

For problem #1 with the User Control references, you can make an interface, have that user control implement the interface, and then let the base page have a member variable of the interface type. The derived page could populate that variable. In other words:

  • Say you have UserControl "Address".
  • Your base page needs to call the "LookupZipCode" method
  • Create an interface IAddress with member "LookupZipCode". Have the UC implement this interface.
  • Have your base page contain an instance field of type IAddress. Your base page can then call this.
  • Have the derived page pass the UC reference to the base page (perhaps in the OnInit method), for example set base.IAddress = MyUserControl.
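The steps above can be sketched as follows. This is a minimal console sketch, not real ASP.Net code: the empty Page and UserControl classes stand in for System.Web.UI.Page and UserControl so the pattern is runnable, and names like IAddress, AddressControl, and CheckoutPage are the hypothetical ones from the example:

```csharp
using System;

// Stand-ins for System.Web.UI.Page / UserControl so this runs as a console sketch.
public class Page { }
public class UserControl { }

// The interface lives in App_Code, so both the base page and the UC can see it.
public interface IAddress
{
    string LookupZipCode();
}

// The user control (compiled into the UC assembly) implements the interface.
public class AddressControl : UserControl, IAddress
{
    public string LookupZipCode() { return "60601"; } // real lookup logic here
}

// The base page holds only an IAddress - it never references the concrete UC type.
public class BasePage : Page
{
    protected internal IAddress Address; // the derived page assigns this

    public string GetZip()
    {
        return Address == null ? null : Address.LookupZipCode();
    }
}

// The derived page CAN reference the UC, so it wires the two together
// (in real ASP.Net this assignment would typically happen in OnInit).
public class CheckoutPage : BasePage
{
    public CheckoutPage()
    {
        this.Address = new AddressControl();
    }
}

public static class Demo
{
    public static void Main()
    {
        Console.WriteLine(new CheckoutPage().GetZip()); // prints 60601
    }
}
```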

Another approach - say, if you need to cast to a specific control type - is to use a generic method. For example, create this kind of method in your base page (note that "T" would typically be a user control type, which your base page cannot reference directly, so the derived page supplies it):

public T HeaderControl<T>() where T : class
{
    return this.Master.FindControl("Header") as T;
}

For problem #2, with html control references, you could solve this in several ways. Keep in mind that unlike user controls, Html (or WebControls) are predefined in an external assembly - System.Web - and can hence be referenced in the base page. So the question becomes how to pass the reference from the derived to the base class?

  • Pass in references via method signature - for example the method that sets a hidden control expects an HtmlInputHidden parameter, as opposed to referencing "this.MyHiddenControl".
  • Make the base class have its own separate protected instance, and have the derived class set it (such as in the derived class's OnInit). For example, the derived class's OnInit method may contain a line like "base.BaseHiddenField = this.MyHiddenField". This is useful if the control is referenced in many base-class methods and you don't want to update all the signatures.
  • FindControl - You can always use the "Page.FindControl(string)" method to get a control given the string ID, but this is brittle and slower. Because you pass in a string, you don't get compile-time type checking. It also must search the entire control collection instead of just having a direct reference to the control.
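Here's a runnable console sketch of the first two approaches. The HtmlInputHidden class below is a stand-in for System.Web.UI.HtmlControls.HtmlInputHidden, and names like BaseHiddenField, MyHiddenField, and SetToken are hypothetical:

```csharp
using System;

// Console stand-in for System.Web.UI.HtmlControls.HtmlInputHidden.
public class HtmlInputHidden { public string Value; }

public class BasePage
{
    // Approach 2: the base class owns a protected reference; the derived class assigns it.
    protected internal HtmlInputHidden BaseHiddenField;

    // Approach 1: the control reference arrives through the method signature instead.
    public void SetToken(HtmlInputHidden hidden, string token)
    {
        hidden.Value = token;
    }

    // A base-class method that uses the shared reference set up by the derived page.
    public void StampTimestamp()
    {
        BaseHiddenField.Value = DateTime.UtcNow.ToString("o");
    }
}

public class DerivedPage : BasePage
{
    private HtmlInputHidden MyHiddenField = new HtmlInputHidden();

    public DerivedPage()
    {
        // In real ASP.Net this would happen in OnInit, once the aspx controls exist.
        base.BaseHiddenField = this.MyHiddenField;
    }

    public string Token
    {
        get { return MyHiddenField.Value; }
        set { SetToken(MyHiddenField, value); } // approach 1 in action
    }
}

public static class Demo
{
    public static void Main()
    {
        var page = new DerivedPage();
        page.StampTimestamp();          // approach 2: base method touches the control
        page.Token = "abc123";          // approach 1: reference passed via signature
        Console.WriteLine(page.Token);  // prints abc123
    }
}
```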

While there are many more ways to refactor an ASP.Net codeBehind, these two tricks are useful.

Thursday, May 14, 2009

10 Reasons to attend the Chicago Code Camp

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/10_reasons_to_attend_the_chicago_code_camp.htm]

The LCNUG and Alt.Net groups are hosting a Chicago Code Camp on Saturday, May 30th, at College of Lake County (CLC) in the Northwest suburbs. If you're a passionate developer looking to grow, this is exactly the kind of event you want to consider.

  1. 33 cutting-edge sessions across a wide spectrum of topics and developer communities.
  2. Presentations by industry leaders, MVPs, experienced developers, and tech leads.
  3. Network with perhaps 200 local developers, which always results in great discussions with your peers.
  4. It's free!
  5. Tons of swag.
  6. Get to ask face-to-face questions with real experts.
  7. A full day of sessions - so don't skip out just because you can only make the morning or afternoon.
  8. See what new technologies could add value to your development projects.
  9. Lots of hands-on code, not just fluffy power points about ivory tower theories.
  10. You will be a better developer for having gone.

We're also looking for volunteers. Consider helping out, it's a great way to get involved.

Wednesday, May 13, 2009

Troubleshooting Visual Studio Profiler with ASP.Net

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/troubleshooting_visual_studio_profiler_with_aspnet.htm]

Visual Studio 2008 (and I believe 2005) comes with a built-in .Net profiler. Profilers are especially useful for troubleshooting performance bottlenecks (related: CLR Profiler for older versions of Visual Studio; SQL Profiler). There are already good tutorials on it, so here I'm just offering misc tips and troubleshooting. Basically, you go to Analyze > "Launch Performance Wizard" and select all the defaults. You can use this with ASP.Net web projects too.

Error: When trying to start, I got "Value does not fall within the expected range".

Solution: You can set the config of a website to "Any CPU".

Error: VSP1351 : The monitor was unable to create one of its global objects

Other times I got this error:

Error VSP1351 : The monitor was unable to create one of its global objects (Local.vsperf.Admin). Verify that another monitor is not running, or that another application hasn't created an object of the same name.
Profiler exited
PRF0010: Launch Aborted - Unable to start vsperfmon.exe

Solution: You can use Process Explorer to see what is locking "vsperf.Admin", and then kill that process tree. For me, it was the aspnet_wp.exe process.

Error: PRF0010: Launch Aborted - Unable to start vsperfmon.exe

Sometimes I got this error:

PRF0010: Launch Aborted - Unable to start vsperfmon.exe
Error VSP1342 : Another instance of the Logger Engine is currently running.
Profiler exited

Solution: I restarted Visual Studio (yes, it's lame, but it appeared to work). I also ensured that all other instances of VS were closed.

Error: Sometimes clicking the stop button caused IE and VS to crash, without even collecting data

Solution: Just closing the IE browser directly (instead of using the stop button) worked.

Tuesday, May 12, 2009

Automated Code Governance

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/automated_code_governance.htm]

There are lots of ways for a tech-lead to encourage standardization. However, any policy that requires manual enforcement will continually face an uphill battle. The problems with a human enforcer are that:

  1. Enforcing policy is seen as being the "bad guy", and no-one wants to always be the bad guy
  2. The human will not have time - they'll be pulled onto other features
  3. The human will be accused of "ivory tower" antics that just slow down real work
  4. The human cannot possibly monitor everyone's code every day

The optimal way is to have an automated build policy as part of your continuous integration. This policy could check for many objective metrics, such as (DISCLAIMER: I haven't personally implemented all of these yet - it's just a brainstorm based on various research):

  • Code Coverage - Enforces developers to write unit tests by demanding that the tests provide X code coverage.
  • Code metrics (like NDepend) - Runs static metrics like line count (discouraging large methods that have multiple responsibilities) and cyclomatic complexity, plus dependency checks - excessive dependencies are often the #1 culprit preventing testability.
  • Code duplication (like Simian) - Encourages refactoring by checking for chunks of duplicate code. Ideally, this covers not just C#, but all languages like HTML, JS, and SQL.
  • Static code analysis (like FxCop) - Runs static rules to check for bad or risky code, kind of like compiler warnings on steroids.
  • Stored Procedure scans - Creates a test database, and runs all the stored procs to check their query execution plan for performance bottlenecks (like table or index scans), or too many dependencies.
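As one hedged sketch of how such an automated gate might plug into a build: the snippet below fails the build (non-zero exit code) when code coverage drops below a threshold. The 70% threshold is arbitrary, and in a real build the coverage number would be parsed from your coverage tool's report rather than passed on the command line:

```csharp
using System;

public static class CoverageGate
{
    // Returns 0 (pass) or 1 (fail) so the CI server can fail the build.
    public static int Check(double coveragePercent, double threshold)
    {
        if (coveragePercent < threshold)
        {
            Console.WriteLine("FAIL: coverage {0}% is below the {1}% policy", coveragePercent, threshold);
            return 1;
        }
        Console.WriteLine("PASS: coverage {0}%", coveragePercent);
        return 0;
    }

    public static int Main(string[] args)
    {
        // Hypothetical usage: CoverageGate.exe 85.0
        double coverage = double.Parse(args[0]);
        return Check(coverage, 70.0);
    }
}
```

The same shape works for any of the metrics above: compute a number, compare it to the policy, and return an exit code the build server understands.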

While policies sound cool, in the trenches, many devs view them as just a nuisance that slows down "real" work. Here are some problems to anticipate:

  • Devs don't want to do it - it's not fun to write high-quality code.
  • Devs complaining that they don't have time
  • Management pulling the rug out from under you (they don't have time, or they don't want to be the "bad" guy)
  • Makes build take too long

Given these types of problems, here are ideas to minimize any riots as you try to roll these out.

  • Set up policy first - without it failing the build yet, so everyone can see results for a few weeks.
  • Ensure that people can run all policy checks locally first, and verify that they pass locally.
  • Create an exclude list so any developer can register exceptions.
  • Grandfather all existing code by using this exclude list.
  • Minimize the scope of what is checked (start with just 1 core assembly, and gradually expand to others).
  • Roll out 1 policy at a time.
  • Ramp up your build servers. Consider a distributed build, such as using CruiseControl's project trigger feature.

Monday, May 11, 2009

Avoiding unnecessary server work in ASP.Net

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/avoiding_unnecessary_server_work_in_aspnet.htm]

One of the best ways to increase performance in ASP.Net pages is to avoid unnecessary server-side work. For example, every postback and callback re-triggers the entire Page's server life cycle - rerunning OnInit, OnLoad, OnPreRender, etc... However, a given server-side action often requires only a fraction of the original code to run - for example, clicking a button may not require you to repopulate all the dropdowns.

Here are some common checks (concept, example, and the details to check):

  • Is Postback - Clicking an ASP.Net button posts the page back. Check this.IsPostBack.
  • Is Callback - JavaScript triggers an ASP.Net 2.0 callback. Check this.IsCallback.
  • Is Redirecting - A method has already called Response.Redirect, but there is still heavy processing (now unnecessary) that would run. Check Response.IsRequestBeingRedirected.
  • Is Ajax update panel postback - A button within an Ajax update panel has been clicked, so don't redo server work outside of that update panel. Check with a helper like:

public static bool IsPartialPostback(System.Web.UI.Page p)
{
    System.Web.UI.ScriptManager scriptManager = System.Web.UI.ScriptManager.GetCurrent(p);
    return (scriptManager != null) && scriptManager.IsInAsyncPostBack;
}

  • Is Button click - The user clicks an exit button, so there's no need to repopulate dropdowns. Check Request["__EVENTTARGET"], which indicates which control (like a button) triggered the postback.
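The most common of these guards is the IsPostBack check. Here's a minimal console sketch of the idea - the PageStub class and its counter are stand-ins for a real System.Web.UI.Page and an expensive dropdown-populating call:

```csharp
using System;

// Minimal stand-in for the relevant bit of System.Web.UI.Page.
public class PageStub
{
    public bool IsPostBack;
    public int DropdownLoads;

    // The classic guard: only rebuild dropdowns on the initial GET;
    // ViewState carries the items across subsequent postbacks.
    public void OnLoad()
    {
        if (!IsPostBack)
        {
            DropdownLoads++; // stands in for an expensive DB-backed populate
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var page = new PageStub();
        page.OnLoad();               // initial GET: populate the dropdowns
        page.IsPostBack = true;
        page.OnLoad();               // button postback: skip the work
        page.OnLoad();               // another postback: skip again
        Console.WriteLine(page.DropdownLoads); // prints 1
    }
}
```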

 

Sunday, May 10, 2009

Yes-no questions that a non-technical recruiter can ask during an interview

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/yesno_questions_that_a_nontechnical_recruiter_can_ask_duri.htm]

When interviewing, many companies first filter developers through HR (such as an online resume screening or a phone call). The irony is that they want a technical star, but screen all candidates through non-technical HR folk. Sometimes people in such situations, pressed for time, resort to quick yes-no questions. The naive approach is to just ask "rate yourself 1-10" or "do you have over X years of experience?" While I think the best type of interview is one where technical people can ask technical questions (or even have the candidate write pseudo-code on the whiteboard), not every developer gets that opportunity. So if, for whatever reason, the recruiter is limited to yes-no questions, consider these kinds of questions:

  1. Do you program in your spare time? --> programming in your spare time implies that you enjoy programming, which implies that you're motivated and good at it.
  2. Do you have any Microsoft Certifications? --> implies basic Microsoft-specific education
  3. Do you have an engineering or CS degree? --> implies a basic general education
  4. Have you ever led a team of more than 3 people? --> implies basic leadership abilities.
  5. Have you ever programmed an application over X thousand lines-of-code? --> big apps present scalability and process problems that small apps never will.
  6. Are you an active member of any professional groups? --> implies motivation
  7. Have you ever been published (magazine, online journal, book)? --> implies good communication
  8. Do you have your own website or blog? --> implies personal motivation and innovation
  9. Do you contribute to any open-source projects? --> implies hands-on coding and working with others
  10. Since college, have you read more than three technical books? --> implies continual, pro-active education, as opposed to just re-actively reading scattered blog posts.

This is only a partial list, but you get the idea. Many developers can have "X years experience", yet never do a single thing on this list. This list focuses on what you have done, not how long you've sat in front of a computer.

Thursday, May 7, 2009

Technical Debt is like a steep hill, not a brick wall

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/technical_debt_is_like_a_steep_hill_not_a_brick_wall.htm]

Over time, bad software decisions (design, coding, process) get compounded, like interest on a loan, effectively presenting the team with a "technical debt" that yearns to be repaid. This debt weighs down the system, making it harder and harder to make changes or add new features. One way to measure technical debt is lines of code that require maintenance. Therefore you can reduce debt by reducing lines of code (codeGen, refactoring, automation, buying instead of building, etc...).

For example, a developer may copy and paste duplicate code a hundred times (like in HTML or SQL), or create thousands of lines of tedious plumbing code, or create an army of brittle JavaScript files, or write an entire app with no code coverage. Many of the best practices out there are designed to explicitly reduce technical debt.

One thing to note is that technical debt is like a steep hill that can go infinitely high. You can go fast bicycling on a flat road, but as that road turns into a steep hill that gets gradually steeper, you slow down. In software terms, because code may have been copied to 10 different places, making a single change requires 10 separate updates - so it takes longer.

I don't think technical debt is like a brick wall - where suddenly at a specific point you're blocked and you simply cannot go further. And that's part of the problem. As long as you're moving, albeit slower and slower, it's seductive to think that you can still keep making "enough" progress so you don't need to change yet. The stubborn leader can just keep pushing on: "longer hours, more developers..." There are two choices to make: keep going forward, or turn around. However, if you were to hit that brick wall, then it forces you to stop and reevaluate. It's easier in the sense that you now only have one choice - you must "turn around".

Monday, May 4, 2009

Cool Tool - NDepend for automated code metrics

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/cool_tool__ndepend_for_automated_code_metrics.htm]

Given human nature, and all the tedious things that go into coding, the coding standards that survive are usually the ones that you can automate with some external tool.

Code Metrics is one such type of governance. Two of the most popular metrics are line count and cyclomatic complexity. Perhaps the best tool on the market to assist with these (and much more) is NDepend.

For example, say you want to prevent developers from writing huge methods, or huge files. You could use the command line for NDepend to check for method-line-count, and then fail any check-in that doesn't meet this policy - just like you could fail a check-in that doesn't compile or breaks a unit test.
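As a hedged illustration, NDepend rules at the time could be expressed in its CQL (Code Query Language); a rule for the method-line-count policy above might look something like this (the 30-line threshold is an arbitrary example):

```
// Warn (or fail the build) when any method exceeds 30 lines of code.
WARN IF Count > 0 IN
SELECT METHODS WHERE NbLinesOfCode > 30
```

The build script then runs NDepend's console tool and fails the check-in when any such rule reports violations.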

Cyclomatic complexity is another - it measures (to simplify) the number of independent paths through a method: the more branching, the harder the method is to fully test. NDepend also measures dependencies, and the more dependencies a class has, the harder it is to instantiate in a test harness, so this is actually a big deal. You can reduce dependencies by leveraging interfaces and abstract classes, using dependency injection, or the like (whole books have been written about this).

This is just the tip of the iceberg. NDepend has dozens of these types of metrics.

Initially I tried to use Visual Studio's "code metrics" feature, but for some reason I cannot fathom, you cannot run it from the command line - which of course makes it useless for an automated build (which is the whole point). At least VS code coverage had undocumented techniques to work around this.

I realize there are open-source options for basic file line count, but I personally haven't come across any that can effectively do the other metrics like method line count and cyclomatic complexity.

Bottom line - if you're trying to enforce automated governance through your build, consider checking out NDepend. Yes, there's a license fee, but if it saves you even 10 hours by preventing issues and bad design, then it's paid for itself. Also, being aware of these types of metrics is the kind of thing that helps take a developer to the next level.

[Disclaimer - I haven't fully implemented this myself yet, it's still in a research phase, and I'm just sharing my findings]

Sunday, May 3, 2009

Chicago Code Camp sessions and agenda are up

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/chicago_code_camp_sessions_and_agenda_are_up.htm]

The Chicago Code Camp (at CLC in Grayslake, IL), which will be Saturday, May 30, has the speakers, sessions, and agenda up.

It's an all-star cast, including several MVPs.

There's a huge variety, including TDD everything (even TDD for JavaScript and the iPhone!), UI, backend, many non-.Net platforms, and much more!