Saturday, October 26, 2013

Software Testing

This past Wednesday (10/23/2013) I gave a tech talk at the University of Maryland.  The students there were awesome.  It's been a while since I had the chance to talk to a lot of young, energetic future computer scientists.  I still remember the days when I dreamed of putting my knowledge into practice.  Payam (my co-worker/boss) set up the whole shindig and we decided to ad-lib the actual talk.  I have so much material to draw from that I could have talked for hours on end.  I hope the students picked up a few hints that will help them avoid some of the errors I've made in the past.  Some day I'm going to encapsulate this information into one-hour lectures.  Maybe I'll get motivated and write a book!

I keep reading news about the new Affordable Care Act website, and I'm horrified by the level of amateurism that went into developing it.  Today I read that integration testing was done for only two weeks, just before release.  Yikes!  Integration testing should be set up before the modules are even started.  In fact, a testing environment can be designed and put into place as the requirements are nailed down.  The first integration test run should fail and list every requirement that did not pass.  As modules are completed, the tests are re-run and items start to pass.  That gives the software developers time to fix any issues that crop up along the way.  Of course, I'm assuming the integration tests done on the ACA website were automated and not performed by hand (double-yikes!).

DealerOn's system has been built around coding techniques that involved hand-testing only.  We recently hired a bona fide quality person, and she has coordinated the infrastructure changes necessary for automated testing.  I'm talking about automation beyond just unit testing.  We are currently using Telerik to perform some of these tasks.  She also uses scripts to manually test things that are not yet automated.  DealerOn is also in the process of completely revamping our staging system.  We are going to use tools to record HTTP transactions coming into our live site and replay them against our staging system to test for concurrency and performance bugs.  As a computer nerd, I'm really excited about this stuff!

Testing Your Software

Testing is hard.  It's more of an art than a science.  That doesn't stop me from treating it like a science.  I just understand that no matter how hard I try, I'm probably not going to be 100% successful.  I've found that experience is the best teacher in testing.  No matter how many books I've consumed, or how many classes I've taken, my actual disasters are what stick in my mind the most.  Probably because they were rather painful.  Story time: I remember a job where I worked with a handful of amateur programmers (some were newly minted computer scientists) on legacy software.  When one of the programmers completed a program that would be distributed to contractors (by mailing packs of floppies), he informed his boss that he needed someone else to do some beta testing.  His boss was kind enough to do the testing himself.  The first thing he did was install the program on his PC and run it.  He started entering letters in the text field used to enter bid prices.  CRASH!  The programmer in question said, "You're not supposed to do that!"  Of course.  That's just something he hadn't thought of when he coded the program.  The obvious and stupid stuff.  Users are not going to be computer scientists.

So what's the lesson here?  For every user input, you should always test the boundaries of what can be entered.  If you are expecting numeric input, make sure your software only accepts numeric input.  You might need to inform the user of an error (preferably in a nice way, like a red background in the text box, or a small message above or next to it).  You should also try not to make the user angry.  If your input screen detects an error on submit, provide gentle feedback and make absolutely sure you keep all their entered data intact.  I can't tell you how many times I've filled out a form on-line, submitted it, and gotten an error because my credit card number wasn't supposed to have dashes, only to find the whole data entry screen blanked out.  That forces me to re-enter all the information that was correct, and I could easily typo something else and cause a different error.  Angry customers are not something you need.
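To make the idea concrete, here's a small sketch (in Python rather than the C# we use at work, and with field and function names I've invented for illustration): the validator rejects bad input with a friendly message, tolerates harmless formatting like commas, and hands the user's input back untouched so the form can be re-rendered exactly as they left it.

```python
def validate_bid_form(fields):
    """Validate a form submission without discarding the user's input.

    Returns (errors, fields): errors maps field names to friendly
    messages, and fields is returned untouched so the form can be
    re-rendered with everything the user typed still in place.
    """
    errors = {}

    price = fields.get("bid_price", "").strip()
    # Strip harmless formatting (commas) instead of rejecting it --
    # don't punish the user for typing "1,250".
    cleaned = price.replace(",", "")
    try:
        if float(cleaned) < 0:
            errors["bid_price"] = "Bid price cannot be negative."
    except ValueError:
        errors["bid_price"] = "Please enter a numeric bid price."

    return errors, fields


# A letter in the numeric field produces an error message, not a crash,
# and the original input survives for re-display.
errors, fields = validate_bid_form({"bid_price": "abc"})
```

The same shape works for any field: validate, accumulate messages, and never throw away what the user already entered.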

Automated Testing

It's nice that today's technology includes automated testing.  There are free tools for unit/integration testing, such as NUnit.  Get familiar with these tools.  I normally use MSTest, which is built into the professional version of Visual Studio.  It's nearly identical to NUnit (and personally, I think NUnit has a leg up on some features).  My point is that every student should understand how to use unit tests, even if it's not taught at the university.  Unit testing is going to become much more important in the future of software development.  Here's why: one of the biggest advantages of unit tests is that they can be re-run at any time.  This means you can do regression testing without consuming a lot of time.  Regression testing is re-testing the existing features of your software to make sure a change elsewhere didn't break them.  It matters because, unlike in any other engineering practice devised by man, in software everything is related to everything, sometimes unintentionally.  In other words, when an engineer designs a jet, fixing a problem with the landing gear is not going to affect the performance of the jet engine.  In software, you can't guarantee or assume anything.
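The shape of such a test is nearly identical across frameworks.  MSTest and NUnit are C# tools; the sketch below uses Python's built-in unittest to show the same idea, and the price-formatting function under test is a hypothetical stand-in.

```python
import unittest


def format_price(cents):
    """Hypothetical function under test: format an integer cent
    amount as a dollar string."""
    dollars, rem = divmod(cents, 100)
    return f"${dollars}.{rem:02d}"


class FormatPriceTests(unittest.TestCase):
    # Each test is tiny and independent, so the whole suite can be
    # re-run after any change -- that cheap re-runnability is what
    # makes regression testing practical.
    def test_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_cents_are_zero_padded(self):
        self.assertEqual(format_price(1205), "$12.05")

    def test_under_a_dollar(self):
        self.assertEqual(format_price(99), "$0.99")


if __name__ == "__main__":
    unittest.main(argv=["format_price_tests"], exit=False, verbosity=0)
```

Swap `unittest.TestCase` for `[TestClass]`/`[TestMethod]` and you have the MSTest version; the discipline is the same either way.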

One other thing to remember with unit/integration testing: when you find and fix a bug, that means the bug got past your unit tests, which in turn means you're missing a unit test.  So create a test that will catch that bug in the future.  Don't assume the bug you just fixed won't magically reappear.  Yes, I have fixed the same bug over and over on systems where multiple developers are involved.  This happens when there are two or more conflicting requirements: fixing the software one way causes a bug in the other requirement, and re-fixing it for the alternate requirement brings the original bug back.  If you create a unit test, any other developer will see it (make sure it's properly documented as to what behavior is being tested) and realize they should not be breaking that requirement.  If a conflicting requirement is discovered, then a decision can be made to fix the problem properly.
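Here's what pinning a fixed bug with a test can look like, as a hedged sketch with an invented example: suppose a quantity parser once used a shortcut that silently treated a legitimate zero as "no value".  The test documents exactly which behavior is being protected, so the old shortcut can't quietly come back.

```python
def parse_quantity(text):
    """Hypothetical example: parse a quantity field from a form.

    Original bug: the code used `int(text) or None`, so a legitimate
    quantity of 0 was silently treated as "no value".  The fix below
    distinguishes empty input from an explicit zero.
    """
    text = text.strip()
    if not text:
        return None      # genuinely missing
    return int(text)     # "0" is a real value -- keep it


def test_zero_quantity_is_preserved():
    # Regression test pinning the fix.  The comment and the test name
    # together tell the next developer *why* this behavior matters,
    # so a conflicting "fix" gets flagged instead of shipped.
    assert parse_quantity("0") == 0
    assert parse_quantity("  ") is None
    assert parse_quantity("7") == 7


test_zero_quantity_is_preserved()
```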

Test Early

Testing should be done as early as possible.  It's easy to fall into the trap of just programming.  I've done it.  I still do it.  Try to plan your unit tests before you start to code.  If you can identify functionality early on, make unit tests to verify that functionality.  If you have a document listing each feature that must be supported, you should be able to create a set of unit or integration tests to satisfy each feature.  Initially, these tests will all fail.  As you complete each feature, you can watch your tests turn green and see how much you have completed and how much is left.  Add tests as you identify problems.  When all tests are green, you should be ready for final testing.  It's also easier to report your progress when you can show the percentage of tests left to complete.
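A minimal sketch of that workflow, with made-up feature names: each feature from the requirements document gets a check up front, unfinished features raise NotImplementedError (so their checks fail), and the pass count doubles as the progress report.

```python
# Hypothetical feature functions derived from a requirements document.
def feature_sort_inventory(items):
    raise NotImplementedError  # not started yet -- its check will fail


def feature_total_price(items):
    return sum(price for _, price in items)  # completed


# One check per required feature, written before coding begins.
CHECKS = [
    ("sort inventory",
     lambda: feature_sort_inventory([("b", 2), ("a", 1)]) == [("a", 1), ("b", 2)]),
    ("total price",
     lambda: feature_total_price([("a", 1), ("b", 2)]) == 3),
]


def progress():
    """Run every feature check and report how many are green."""
    passed = 0
    for name, check in CHECKS:
        try:
            ok = check()
        except NotImplementedError:
            ok = False  # feature not built yet
        if ok:
            passed += 1
    return passed, len(CHECKS)


passed, total = progress()
print(f"{passed}/{total} features complete ({100 * passed // total}%)")
```

As each feature is implemented, its check flips to green and the percentage climbs, which is exactly the progress number a manager wants to see.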

Unit tests are also a way to test tricky areas of your software.  Sometimes real data is too big to test effectively, especially when starting a project.  A unit test can be set up with a small portion of test data.  This can be used for the initial development of the software until it comes time to test on a bigger data set.  It can also save you development time if your program is written to read data from a remote system (think WCF or SOAP).  If I expect XML (or JSON) data from a remote site, I normally grab a one-record example from the remote site and put it into a file in the unit-test directory.  Then I feed that data into the first object that will receive it for processing.  I do this because there is no delay in feeding the data directly, whereas an actual SOAP or WCF connection has to wait for a response from a remote server.  Once the processing object is complete, I can do a full system test to make sure it works correctly with real data.  If I find a different set of data that breaks my code, I create a new unit test (notice how I always keep my original test) and feed that data into my object and fix the problem(s).  Eventually, I'll have a group of unit tests covering many possible input configurations.
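The fixture technique can be sketched like this (Python again, with a hypothetical record format; in practice the fixture would be a file in the unit-test directory rather than an inline string):

```python
import json

# Contents of a one-record fixture, captured once from the remote site
# and saved beside the tests.  Field names here are invented for the
# example; a real test would load this with open()/read().
FIXTURE_JSON = '{"vin": "1hgcm82633a004352", "price": "4500"}'


def process_listing(record):
    """Hypothetical processing object: the first code that would
    receive the payload after the SOAP/WCF plumbing hands it over."""
    return {
        "vin": record["vin"].upper(),   # normalize the VIN
        "price": int(record["price"]),  # remote site sends price as text
    }


# Feed the saved record straight into the processor -- no network
# round-trip, no waiting on the remote server, so the test runs in
# milliseconds and can be repeated on every build.
result = process_listing(json.loads(FIXTURE_JSON))
```

When a new record shape breaks the processor, save that record as another fixture and add another test; the original fixture stays, and the suite grows into coverage of all the input configurations you've actually seen.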


One of the things I've learned from my experience in the world of software engineering is that you must stay on top of things.  Education is very important.  I consume a lot of books.  I have also started taking on-line classes.  Udacity is one great source of educational material.  It's free, and they have subjects such as software testing (see course CS258).  If you want to learn testing from the very basics on up, this is an excellent course to view and interact with.