Tuesday, January 22, 2013

Test Driven Development

OK, so I've mentioned test driven development (TDD) in several previous articles.  I have to admit here that I have not jumped in with both feet...yet.  My company has a lot of legacy code and I'm waiting for the start of the next project to jump in.  For those who are starting out like me, there is a very simple example available here:

Test Driven / First Development by Example

I would recommend reading the whole thing.

For a more complete article on the subject of TDD including testing the business layer and testing the interface layer, I would recommend this article:

Test-Driven Development in .NET

NUnit and DotNetMock are used in this article.  If you have a favorite article, please be sure to leave a comment.  I'll update this post with a complete list.

- I'm back!

I'm half-way through the book.  One item I would like to mention is Ninject, a dependency injection framework.  The book covers an example of using Ninject for mocking an object.  The Ninject website wiki ("Visit the Dojo", Documentation) doesn't seem to work.  I've tried it under IE and Chrome and it just sticks at "loading..."  I have not actually tried Ninject yet.  My first impression of their website is that it's not very active.  I've also never heard of Ninject before this book, but it might just be new.
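To illustrate the idea (this is my own sketch in Python, not the book's C# example): dependency injection just means an object's dependencies are handed to it from outside, which is what lets a test substitute a fake.  A container like Ninject automates this wiring; all the class and method names below are made up.

```python
# Illustrative sketch of constructor injection enabling a test double.

class SmtpMailer:
    """The real dependency -- something we don't want running in a unit test."""
    def send(self, to, body):
        raise RuntimeError("would really send mail")

class FakeMailer:
    """A fake that just records what was sent."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class OrderService:
    def __init__(self, mailer):
        self.mailer = mailer  # dependency is injected, not constructed here
    def place_order(self, customer):
        # ...business logic would go here...
        self.mailer.send(customer, "Your order has been placed.")

# In a unit test, inject the fake instead of the real mailer:
service = OrderService(FakeMailer())
service.place_order("bob@example.com")
assert service.mailer.sent == [("bob@example.com", "Your order has been placed.")]
```

In production code you'd hand `OrderService` the real `SmtpMailer`; the class under test never knows the difference.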

OK, on to another subject.  The book starts a sample project halfway through.  The author breaks down non-functional requirements and details how choices need to be made: .Net, PHP, J2EE, Ruby on Rails, etc., choices of server technologies, and on and on.  The authors must be assuming that this is the first project a company is performing and that the reader is totally new to developing software for companies.  To be thorough, it's a good idea to cover the basics.  However, I would assume that anybody reading this book is already knee-deep in alligators, and the reason they are reading about test driven development is that they are familiar with the downside of fixing a lot of bugs and fielding customer/employee complaints.  So I'm grinning when I read this part, because once a technology is purchased, it's pretty much set in stone.  Don't get me wrong, you could switch from PHP to .Net (or vice-versa), or you could build a second server (or server farm) with PHP right alongside your .Net environment, but why would you?  So for the beginning of chapter 6, I'm assuming that most people have this stuff already set in stone.  In other words: many times the best platform to build software on is the one that is already purchased and in place, not necessarily the best one for the job.

With that said, technically almost any web application can be built on either Unix or .Net, and any PC application can be written in C++, C#, Java, etc. and made to perform the same functionality.  Most decisions are driven by the technology available and at what price.  Assuming you are starting from scratch, keep in mind that any software you build will probably last three times longer than you anticipate, and you'll be stuck with whatever hardware it runs on.  So if you pigeon-hole yourself into a Unix-based environment right from the start, it'll be very difficult to get out after several thousand hours of development have gone into your business software.

Back to the book.  So far this is a very enjoyable book to read.  I personally believe it should be on the bookshelf of every company that builds software, and every developer with a desire to transition into test driven development should read it.  After I've read through this book, I plan to run through the sample applications and see how all this works in more detail.

Stay tuned.

- I'm back again!

The authors have started the sample project (half-way through the book).  They describe a directory layout that they use when creating projects and solutions.  I like the use of a libs directory to contain all the DLLs and code needed for 3rd-party objects.  I slapped my head when I read that, because my developers and I have had difficulty deciding on a directory to contain all of these.  It didn't occur to me to include them in the project and check them into the team server.  The advantage of doing this is that the correct versions of these libraries are always with the project they were used in, so future versions of projects with upgraded libraries will not break old projects.
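For reference, the layout looks roughly like this (my own reconstruction with made-up names, not copied from the book):

```
MySolution/
    MySolution.sln
    libs/                  <-- 3rd-party DLLs checked into source control
        NUnit/
        Ninject/
    src/
        MyApp/
    tests/
        MyApp.Tests/
```

Every project references its libraries out of its own libs folder, so checking out an old version of the project also checks out the exact library versions it was built against.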

The authors also talk about a library called NBehave.  The purpose of NBehave is to make assert commands easier to read.  I'll be testing this in the near future.  I plan to finish reading the book, then go back to the sample application and build it while re-reading the chapters where the project is built.  That way I can see what is going on.  A lot of complexity goes into mocking objects, and it gets difficult to follow the sample precisely.  I think I'll start another post when I get to that point.
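To show what "easier to read" means, here's a quick Python toy of the fluent-assertion idea (my own helper, not NBehave's actual API): wrap the value so the assertion reads like a sentence.

```python
# Toy fluent-assertion wrapper -- same checks as plain asserts, but readable.

class That:
    def __init__(self, actual):
        self.actual = actual
    def should_equal(self, expected):
        assert self.actual == expected, f"expected {expected!r}, got {self.actual!r}"
        return self  # return self so checks can chain
    def should_be_greater_than(self, other):
        assert self.actual > other, f"expected {self.actual!r} > {other!r}"
        return self

total = 42
# Plain assert:
assert total == 42
# Fluent style -- the same check, reading like a specification:
That(total).should_equal(42).should_be_greater_than(0)
```

The real payoff is in failure messages and in tests that non-programmers can skim.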

Unit Testing Book

Warning: Another Book Plug!

So I bought this book and so far it's really good.  The book starts with a short history lesson on programming, then moves into a little unit testing history and extreme programming history.  Then it gets right to the heart of the problem: unit testing.  The author describes what is and is not a unit test.  This is where I nod my head and think "yup, thought so."  You see, I've done some unit testing and I've read a lot of books on how to do test driven development.  What I couldn't do is apply it to my own development environment, because we use a lot of database access.  In fact, most of the methods in our web site either render output or read from the database.  In this book, testing this type of operation is called integration testing, because the test depends on an external resource (the database) rather than exercising one unit of code in isolation.

I've read through the basics of test driven development, which I've read about before without quite grasping the details.  This book breaks the process down to the details of why each step is performed in the order it is performed.  A toy example is used in the text so you don't have to spend a lot of time looking at code while reading.
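For anyone who hasn't seen the cycle, here's a toy red-green-refactor example of my own (in Python for brevity; the book's examples are in C#):

```python
# Step 1 ("red"): write the test first, against code that doesn't exist yet.
def test_add():
    assert add(2, 3) == 5   # fails with a NameError until add() is written

# Step 2 ("green"): write the simplest code that makes the test pass.
def add(a, b):
    return a + b

# Step 3 ("refactor"): clean up the code while the test stays green.
test_add()
```

The point of the ordering is that the test fails first, which proves the test can actually catch the bug it's meant to catch.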

I'm currently into mock objects (which are broken down into dummies, stubs and fakes).  So far the subject is very clear and easy to understand, and the samples are clear enough to understand just by looking at their code; no need to type them in and try them out.
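Here's my own quick Python sketch of that taxonomy (the class names and numbers are made up, not the book's):

```python
# Dummy: passed only to satisfy a signature, never actually used.
class DummyLogger:
    pass

# Stub: returns a canned answer so the code under test can proceed.
class StubRateLookup:
    def rate_for(self, state):
        return 0.25  # hard-coded, made-up tax rate

# Fake: a working but simplified implementation (in-memory, not a real DB).
class FakeDatabase:
    def __init__(self):
        self.rows = {}
    def save(self, key, value):
        self.rows[key] = value
    def load(self, key):
        return self.rows[key]

def total_with_tax(amount, rate_lookup, state):
    return amount * (1 + rate_lookup.rate_for(state))

# The stub lets us test the calculation without a real rate service:
assert total_with_tax(100, StubRateLookup(), "MI") == 125.0

# The fake lets us test save/load logic without a real database:
db = FakeDatabase()
db.save("order-1", 125.0)
assert db.load("order-1") == 125.0
```

A true mock additionally verifies *how* it was called (which methods, with which arguments); the stub and fake above only supply behavior.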

Stay tuned... I should have this book read by the end of the week.  I'll post some "real" findings when I finish the book.

Sunday, January 20, 2013

Unit Testing

If you're looking for real world information on C# unit testing, I have come across a few articles that are quite informative.  The first article is called On Writing Unit Tests for C# and it contains links to various tools as well as a very well written article on unit testing advice and experiences.

Visual Studio magazine has an article called Tips for Easier C# Unit Testing that is worth a look.  This article talks about making unit tests easier and more efficient.

Code project has a good article about unit testing called Advanced Unit Testing, Part I - Overview.  This article is very large and contains a content section.  Beginning with an introduction to unit testing, this article covers mock objects, NUnit and a case study.  The case study includes a description of a sample extreme programming project.

One last article I stumbled across involves several iterations to designing an application.  This is a Microsoft article at their .Net website called Build a Contact Manager program using MVC.  

I hope this directs you to what you're looking for.  I've waded through a lot of introduction to unit testing articles and while they explain the basics, it's difficult to see how to apply unit testing to a real-world application.  I have done some unit testing on software that my company currently maintains, but we are just getting started on converting legacy code into unit testable code.  I'll post back here on my adventures when some progress has been made.  For now, I'll just leave you with these articles to read and learn from.

Developer Metrics

In this blog post, I'm going to talk about developer metrics.  What I need to know is how much work a programmer can produce and how many man-hours I'm getting from each programmer per week.  The purpose of these metrics is to estimate how many man-hours it'll take to complete a project and predict when the project will be completed.

Collect raw information

I currently require my programmers to track their time and tasks (I don't track start and end times, just time spent).  At the end of the month I have a list of what was completed and how long it took, along with how many hours were dedicated to programming, how many to bug fixing, and how many to other tasks.  My programmers are only required to track time to the nearest hour, which gives a less accurate, but easier to collect, total.  Then I take each category of their time and divide it by the total hours they worked (their normal 40-hour-a-week schedule minus vacation time, personal time and holidays).  This gives me the average % of time spent on each category.  I use this only for predicting deadlines.
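Here's the arithmetic with made-up numbers for one programmer over one month:

```python
# Hypothetical month: 4 weeks at 40 hours, minus 8 hours of time off.
weeks = 4
scheduled = weeks * 40          # 160 scheduled hours
time_off = 8                    # vacation/personal/holiday hours
worked = scheduled - time_off   # 152 hours actually available

# Hours logged per category (made-up figures that sum to 152):
hours_by_category = {"development": 91, "bug fixing": 38, "other": 23}
assert sum(hours_by_category.values()) == worked

# Fraction of available time spent on each category:
percentages = {task: hours / worked for task, hours in hours_by_category.items()}
# development ~59.9%, bug fixing 25%, other ~15.1%
assert round(percentages["bug fixing"], 2) == 0.25
```

The "development" percentage is the number you'll reuse below for deadline math; the others just show where the rest of the hours go.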

In order to determine their production rate, I have to rate the tasks they've done by difficulty.  There are a hundred ways to categorize software tasks, all depending on how you divide them, so let's just skip the details and admit to ourselves that this part is really subjective.  The trick is to stick to the same "ruler" when measuring performance in the future.  Once I have a list of tasks with difficulty ratings, I can determine how fast each programmer performs each category of task (assuming I've given them at least one task from each category).  As time goes on, I average these together to get a more accurate picture.  Sometimes programmers improve (this is normal), and I have to adjust my numbers to match.  The point is, I need the production rate of each programmer in order to estimate development times, and I need these numbers to determine how many programmers a team needs to complete a project by a specified deadline.  In the examples below, I'm going to ignore the difficulty part and pretend that all tasks are equally difficult.

There is another method of estimating that is used by Menlo Innovations.  Their method requires the programmers assigned to the team to do their own estimates.  I have used this technique in the past and it works pretty well.  Most of the time I don't use this method because I know my programmers really well and I have a good feel for how long it takes them to complete a particular task.  My company is also a bit lax on deadlines.  I set a deadline to make sure that projects get done, but we're never under any serious pressure to meet it (other than my own self-imposed deadline).  If your company requires a hard deadline, then this method puts the pressure on the developers to estimate their own time and stick to it.  It also may require some negotiating between you and your programmers (otherwise they could, technically, estimate a crazy long amount of time just so they can slack off).

A product that I have been eyeing for quite some time is FogBugz.  This software has an estimating tool that can track performance of each programmer and adjust future estimates to reflect actual times that programmers took on previous projects.  They call this Evidence-Based Scheduling.  This product is a bit pricey for my current development schedule, but I'll be using it in the future if our development cycle gets too hot.

Keep your statistics

OK, so now you have a few numbers.  Let's say that you have 3 programmers and their numbers break down like this:

OK, first, I need to mention that we're going to pretend that the time dedicated to each task is just a function of each programmer's job in the company.  Your developers might spend time fixing bugs themselves, or you might have a dedicated team of programmers doing bug fixes.  In this example, I'm just trying to show that it's rare to get 100% utilization out of a human, and attempting to estimate time schedules with that expectation is going to result in great disappointment.  The "other" category might include things like office paperwork, meetings, etc.

Apply your statistics

Use each programmer's development percentage to calculate the daily rate in hours (i.e. devtime * 8).

Here is an example list of projects and their rough estimates:

Now we want to know how many work days it will actually take to complete these projects.  Just use the daily rate of each programmer to determine how many hours per day will be completed.  For this example, I'm going to assign all three of my programmers to the project (I'll assume the work can be neatly divided, or that I'm just after a rough estimate).  The daily hours of my 3 sample programmers total 15.2.  Divide each project's total hours by that number to get the total days:
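Sketched out in Python with hypothetical names and percentages (chosen so the daily hours total the 15.2 above):

```python
# Made-up development percentages for the three sample programmers:
dev_percent = {"Alice": 0.70, "Bob": 0.60, "Carol": 0.60}

# Daily rate in hours = development percentage * 8-hour day:
daily_rate = {name: p * 8 for name, p in dev_percent.items()}
team_hours_per_day = sum(daily_rate.values())     # 5.6 + 4.8 + 4.8
assert round(team_hours_per_day, 1) == 15.2

# A project estimated at 228 hours of development work:
project_estimate_hours = 228
days = project_estimate_hours / team_hours_per_day
# 228 / 15.2 = 15 work days for the whole team
```

Repeat the division for each project on the list to fill in the "days" column.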

At this point, you can just round your estimates to the nearest day.  To obtain a deadline date, you will need to drag out a calendar and "x" off the days from the starting day to the number of days in the above spreadsheet.
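The calendar "x-ing" can also be done with a few lines of code.  This simple Python sketch counts forward the estimated work days while skipping weekends (holidays are left out for brevity):

```python
from datetime import date, timedelta

def deadline(start, work_days):
    """Return the calendar date reached after `work_days` weekdays past `start`."""
    d = start
    remaining = work_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:      # Monday=0 .. Friday=4; skip Sat/Sun
            remaining -= 1
    return d

# 15 work days starting Monday 2013-01-21 lands exactly three weeks out:
assert deadline(date(2013, 1, 21), 15) == date(2013, 2, 11)
```

Subtracting company holidays from the count is an easy extension if you keep them in a list.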


The important fact to remember here is that we need raw numbers from the programming staff.  From these numbers, the daily production rate can be calculated, and the daily production rate can be used to estimate the days needed to complete each program on your list.  That estimate can be used to prioritize projects, to determine if your department needs more resources, or just to set the schedule of deployment dates for each project on your list.

I hope this information is helpful.  Drop me a comment if you have questions or other ideas.

Saturday, January 12, 2013

Keeping Up

I'm going to talk about keeping up with versions: versions of operating systems, developer tools, third-party tools, etc.  This is an ugly subject, because there is nothing more frustrating in the software development world than the continuous work of keeping your software compatible with more than one version of the OS it runs on.  If your software is used by a large number of people, chances are you will run into problems when a new OS is introduced.

So what do you do?  The first step is to know when a new OS is scheduled to be released.  If you can obtain a Beta or RC (Release Candidate) version to test your software on, you can get a jump on any potential version conflict problems.  In the Microsoft world, you will also have to keep an eye on .Net versions.  If your application is a web site, you'll need to keep tabs on browser versions.  Currently my developers are required to test our software on the current and previous versions of IE (currently versions 9 and 8, though IE 10 is also available on Windows 8), plus Firefox, Chrome, Opera and Safari.  We also test on the iPad/iPhone version of Safari for basic problems.

Third Party Libraries

Third party libraries are add-ons for Visual Studio (Linux developers have 3rd-party libraries too, but I'll stick to Microsoft Visual Studio).  These add-ons might have compatibility issues with newer browsers.  Many companies will purchase a library and expect it to be a one-time cost: the idea is that they own the library and can use it forever.  That is true, but here's where the ugly part comes in.  If you upgrade Visual Studio in the future, there is a good chance that some of your libraries will not work.  So now you're thinking, why upgrade Visual Studio?  You can get away with that for a while; I have skipped over versions regularly and that's OK.  But if you upgrade your OS, you'll eventually run into compatibility problems with Visual Studio, especially after it has reached its end of life.  You'll also run into problems if you upgrade to libraries that don't work on older versions of VS (assuming you add a new library to your project).

All of the scenarios I'm talking about can cause a cascade of required upgrades, and the longer you put off upgrading, the bigger the cascade will be.  You can keep your hardware and old OS for as long as possible, but eventually your hardware will break down and you'll be forced to buy a new PC (due to a lack of compatible upgrade parts).  Don't believe me?  Try to make a 5 1/4" floppy drive work on today's PCs.  Newer PCs will not run many of the older OSes because no drivers are written for the new hardware on the old OS.  This is the point where everything will need to be upgraded.

What are your options?

First option: Run your software as long as possible on the hardware you have, then upgrade/create new software before your old software goes obsolete.  The risk in this method is that a PC will fail before you can upgrade the software and you'll lose data or business due to down-time.

Second option: Upgrade the moment a new version is available.  While this sounds safe at first thought, I would not recommend this method.  There is a chance that a newer version is still buggy. 

Third option:  Upgrade as necessary. Be prepared to upgrade or make your software compatible with new versions, but it's not necessary to upgrade everything (OS, browser, VS, libraries). As I've mentioned before, I only upgrade Visual Studio every other version, unless the new version has features that I really need. Also, it's not necessary to upgrade to the latest OS, just make sure you have a machine with the latest OS to test your software on (in case your customer has a new PC with the new OS on it).  I only upgrade libraries as needed.

Fourth option: Don't upgrade; create new versions to match new OSes.  This is common in the software world.  A new "shinier" version is introduced when a new OS comes out, to match the user interface of the new OS.

Other upgrade problems

Paradigm shifts are common in the computer world, and predicting the new paradigm is difficult; fads come and go.  An example of a current paradigm is the proliferation of tablets.  Making your web-based software work on a tablet is a must at this point.  You can get by with a few tweaks just to make sure your web pages don't break on the most common tablets, or you can go full-speed-ahead and design a special tablet-friendly version of your web site.  There are also tablet apps, which are very popular; for apps you'll have to choose which tablets to target, depending on your budget.