Saturday, March 30, 2013

Thoughts on Technical Support

I've probably mentioned this a dozen times or so, but I work for a small company.  That means that we don't have a dedicated help desk line.  Which means that I am help desk.  My programmers are help desk.  My network administrator is help desk.  Fortunately, we don't receive a ton of help desk calls in a day (sometimes we don't get a call for weeks).  What this really means is that I have to put the "help desk hat" on and be good at it.  

So when I stumbled across this blog entry, I was intrigued:


I have had to call help desk lines at companies such as AT&T.  One of my programmers, while dealing with AT&T's help desk, received the following explanation for why they hadn't sent a technician to fix our problem for the past week: "We are a very big company and we have a lot of customers to deal with".  Yikes!  I was so angry with that response that I vowed to switch our service as soon as humanly possible and never use AT&T again.

I have also used Dell service for years (in fact, I started in the early 90's).  I have had mostly great service from Dell, including many exceptional repairs.  I remember a laptop that ate my CD (the laptop belonged to a sales person in our office, and I was loading MS SQL Server onto it for a demo).  The Dell technician coordinated a repair time with our sales person, picking a slot near noon when he would be in a meeting and didn't need his laptop.  The technician arrived promptly and repaired the laptop in about 20 minutes by replacing the motherboard.  I have had a couple of mix-ups with Dell (one of our company laptops shipped in for repair was shipped to my home by mistake), but they have been very minor and few in number.

So I take my lessons from the service that I view as spectacular and try to mimic it as best I can.  I don't want to be the obnoxious help desk person (even if I'm not technically a help desk person), because that's the road to lost customers.  The article above is yet another piece of information that I will file away in my memory for future use, no matter where I work.

Company Efficiency Research

I'm on a company efficiency kick this week.  Looking back at my past blog posts, I noticed that I haven't been spending enough time on the work I perform as an IT manager.  Don't get me wrong, I like to program, but realistically, my company benefits the most from my non-programming job functions.

One of my tasks is researching technology to improve the efficiency of my company.  The point isn't to reduce the number of jobs the company creates, but to produce more product with the resources available.  An example of this is printing, which includes technologies like scanning, copying, faxing, networking, servers, etc.  My company produces a lot of paper output.  We have marketing materials and we have customer documents (which can be as large as 600 pages, double-sided).  I start by breaking down the problem into equipment, then I move on to subjects such as furniture arrangements.  For example:

- What is the cost difference for faster printers?
- Can we improve the network speed to queue a print job faster?  Or is it fast enough?
- Are the printers located in an area where the most people can reach the output the quickest?
- What about technologies like "follow-me printing?"
- Can we add more printers and will that be cost-effective?
- What type of binding is being used and can it be automated?
- Should we use pre-punched paper or purchase a punch unit for the printer/copier?
- What technologies are available for print queuing?  How will each affect the efficiency of printing?
- What are the bottlenecks associated with the current printing layout?

Never stop asking questions like these.  It usually only takes one small change to impact efficiency in a big way.

Network Administration Efficiency

Sometimes there are efficiency improvements to everyday tasks that IT people don't even think about.  It's the old story of the mechanic whose own car gets fixed last.  For a network administrator, it can be more efficient to purchase and set up a small number of identical machine types.  For example, you might decide to use one type of desktop for all employees.  It seems wasteful to spend $300 more per machine for individuals that don't need the power, but a mixed fleet causes problems when resources shift around and computers have to move from one person to another.

Another factor to consider is the lifetime of a class of hardware.  My company purchases desktop computers knowing that we will replace them in five years, even if they are still 100% operational and efficient enough for the person using them.  The reason is that after five years the chance of a desktop failure increases dramatically.  It's better to replace a good computer early and retire it than to scramble when a hard disk fails and deal with lost worker time, an unscheduled replacement and possible loss of data (which equates to a loss of employee work).

Website Efficiency

So you are hosting a website and you have an internet connection.  Be aware that the transmission speed of your internet connection can determine the speed of your website for outside users (your transmit speed is the customer's receive speed).  If you are sharing this internet connection with internal users, you should install a router that can dedicate a minimum bandwidth to the web server, or give the web server priority over employees who may be connected to streaming music or YouTube, for example (if you allow this in your company).

Obviously I could go on and on about efficiency.  It never ends.  Always look for improvements and never be afraid to try something new to improve efficiency.

Simulation

In a previous blog post I talked about improving the efficiency of a process.  One of the pieces I didn't dive into is how to obtain the raw data necessary to build the task chart presented.  Here's the chart I'm talking about:


The hourly rate of the employee is easy to obtain or estimate, since that should be readily available (though you might need to keep the diagram confidential so employees don't learn each other's pay, or use a fudge figure).  The time it takes to perform the task is the difficult number to obtain.  Many factors can feed into it, and it can vary from one person to another.  You'll need to collect enough data to get an average, and possibly the standard deviation or the maximum and minimum.  The maximum and minimum can be used to produce two outcomes for the product: the most it can cost to produce and the least it can cost to produce.

Another factor is that the products you're producing may not be identical to one another.  In this example, if the product is custom blueprints for houses, then the amount charged to the customer could vary by the square footage of the house, or it might just be a fixed amount.  If it's by square footage, then the flow chart should be broken into decimal hours per square foot.

You'll need to collect this data before you begin to put improvements into place.  The purpose is to analyze the before and after numbers and demonstrate productivity improvements.

To collect the data, you need to get buy-in from the manager of the employee doing the work.  Provide the employee with a worksheet to record start and stop times.  Explain to the employee that this is just an efficiency study and they are not to change the way they perform the job to skew the numbers one way or the other (it would be nice to do a blind study involving a surveyor who records the information without the knowledge of the employee, but that usually takes too much time and effort).  The time recorded must be as accurate as possible: leave out breaks, meetings, phone time, interruptions, etc.  Anything that is not attributable to the task is not included as part of the task time.

You might need a week or a month's worth of times from each person.  Once this task is completed it's necessary to sit down and collate the data into something workable.  Here's a sample of two drafters and their raw times:


Now we need to sum up the times for each drafter by task number:


OK, so now we have some numbers to work with.  It appears that drafter 1 performs the task in 2 to 2.75 hours, with an average of 2.25 hours.  Drafter 2 has an average of 2.16 hours.  If this work is by square foot, then we need to record the square footage of each task that the drafter performs so we can compute a per-square-foot average.

This is but a small example.  A real-world example should involve a week's worth of data, or possibly a month's worth.  With the above data, you can compute the total average of all drafters and use that number for your flow diagram.
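If you end up with a big pile of recorded times, a few lines of code can do the collating for you.  Here's a minimal C# sketch; the raw hours are hypothetical values I picked to match the averages above:

// collate recorded task times per drafter (hypothetical data)
using System;
using System.Collections.Generic;
using System.Linq;

class TimeStudy
{
    static void Main()
    {
        // raw task times in decimal hours, keyed by drafter
        var loTimes = new Dictionary<string, List<double>>
        {
            { "Drafter 1", new List<double> { 2.00, 2.25, 2.75, 2.00 } },
            { "Drafter 2", new List<double> { 2.00, 2.50, 2.00 } }
        };

        foreach (var loEntry in loTimes)
        {
            List<double> laHours = loEntry.Value;
            Console.WriteLine("{0}: min={1:0.00} max={2:0.00} avg={3:0.00}",
                loEntry.Key, laHours.Min(), laHours.Max(), laHours.Average());
        }
    }
}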

If you find any discrepancies in my numbers, feel free to note them in the comments section.


Efficiency

This post is about company efficiency.  The entire purpose of automation is to be able to produce more goods and services with fewer resources.  Most people seem to get wrapped up in computerizing something rather than automating it.

So what, exactly, am I talking about?  Imagine a small office environment full of people performing work to produce a product.  It takes a wide range of people to create the product.  Whether it's a service or a widget doesn't matter for this topic; the results are the same.  The first task is to document the process flow required to produce the end result.  Once a diagram is produced, some basic numbers can be applied to help analyze bottlenecks that can be made more efficient by automation.  You'll need to know how many man-hours (or minutes) each task takes (collect statistics on this; I'll expand on it in a later blog post).  You'll also need to know what the labor costs are.  The entire process should give you a cost per product delivered.

I'm going to make up an example using this diagram:



For this example, I'm going to use 4 simple steps, in linear order.  Many processes get complicated and involve parallel tasks.  You must break out each task for each person.  If a task can be performed by different people and their rates differ, take an average, or assume one person or the other.  I've made this very simple and obvious, but task 2 is the most costly task.  Focusing your energy on reducing this task can have the most impact on the cost of your product.  Let's assume that task 2 consists of a person creating an AutoCAD drawing.

Now it's time for the research phase.  Start thinking about the job that the person performing task 2 is doing.  Can you purchase a faster computer?  Does the speed of the network cost them productivity?  Do they use a file server and does that slow them down?  Is their monitor big enough?  Is their video card fast enough?

For each item I listed, we need to know how much it will cost and how much time it is expected to shave off task 2.  I'm going to use a rough example and include all of the items I mentioned.  Technically, you should determine whether an upgrade item's cost is too high for its benefit.  For this example, I'm going to ignore that detail and just show a complete list:



Side Note: You might be asking "how does a bigger monitor save time?"  A larger monitor with a higher resolution gives the user a bigger desktop and reduces scroll time.  

OK, so now we know that we can reduce task 2 by 0.85 hours at a cost of $9,250.  Now we need to know two things: when do we get a payback, and how much will this reduce our cost per product delivery?

The total cost per product delivery is equal to $102.50.  By reducing task 2 down to 1.15 hours, or $34.50, we reduce the cost per delivery to $77.00.  That saves us $25.50 per product, which answers one of our questions: how much will this reduce our cost per product delivery?  While this may sound spectacular, it might not be worth the cost.  So let's find out how many units we need to make to break even on the $9,250 that we need to invest:

Total Units = 9250 / 25.5
= 363 units (rounded up to the nearest unit).

How many hours will that take?

Total hours = 363 units x 3.65
= 1325 (rounded up)
or about 165 work days (roughly 7 to 8 months).
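If you'd like to sanity-check the arithmetic, here's the same calculation as a quick C# sketch (the 8-hour work day is my assumption):

// break-even calculation using the numbers from this example
double lnUpgradeCost = 9250.0;    // total cost of the upgrades
double lnSavingsPerUnit = 25.50;  // cost reduction per product delivered
double lnHoursPerUnit = 3.65;     // total task hours per product after upgrades

double lnUnits = Math.Ceiling(lnUpgradeCost / lnSavingsPerUnit);  // 363 units
double lnHours = Math.Ceiling(lnUnits * lnHoursPerUnit);          // 1,325 hours
double lnDays = Math.Floor(lnHours / 8.0);                        // 165 work days

Console.WriteLine(lnUnits + " units, " + lnHours + " hours, " + lnDays + " days");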

Is it worth the cost?  Since we know that the hardware will last at least 5 years, I would say that the decision to spend the money on the upgrades is a good investment.

Next

Next we should focus on the next most expensive task.  Shipping.  What is being shipped?  How long does it take to package an item?  Would a bar-coding system improve this job?  Are there cheaper companies to ship the product (and what are the potential trade-offs)?

Continue this process, repeating the same procedures I performed for task 2 above, until you have determined which processes can be improved and which ones don't need improvement.  When you have completed this and put the improvements in place, it's prudent to monitor the amount of time it takes each person to perform their tasks and gather statistics on the results.

Other Considerations

When you complete the flow diagram, you should also analyze the flow itself and see if there is a way to improve the flow.  Sometimes efficiency can be obtained by combining jobs together, or changing a job into something different.  Sometimes a job can be automated to make it faster.  This takes a bit of head-scratching.  Don't be afraid to try something radical.  

If the process above is a factory-type process involving multiple groups of people performing the same tasks, then this is an opportunity to purchase hardware and software for one group and compare results with the other, similar groups.  If the success is worth the price, then it can be rolled out at large scale, reducing the risk of purchasing equipment that might not have any real-world impact.



Saturday, March 23, 2013

Elementary AJAX

If you've never done AJAX before, or you want to know the nuts and bolts of how it works, this is going to be a simple AJAX setup.  Visual Studio has drop-in AJAX controls, but I'm going to cut out as much code as possible and show how this is done.  I'm also going to steer you clear of a couple of pitfalls.

The W3Schools website has a great introduction to AJAX that I would recommend if you are just starting out (AJAX Introduction).  Alternatively, you can just type in this code and experiment with it.

First, I'm going to create a Visual Studio project with one web page.  Create an "ASP.NET Empty Web Application" (I called mine "TestAjaxApp"), add a webform to the project (I called mine "Default.aspx").  Now clear all the text from the front page except the top line, so it should look like this:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="TestAjaxApp.Default" %>

We're also going to need a JavaScript file.  Create one in the project and call it "Default.js" to match the web page that it belongs to.  Then go to the code-behind page (called "Default.aspx.cs" if you used the same names I did) and type the following into the Page_Load method to make sure the javascript is loaded and to set up the submit button and a text box to view the return data:

Response.Cache.SetCacheability(HttpCacheability.NoCache);

if (Request.QueryString["psData"] != null)
{
     Response.Write("Success!");
     return;
}

Response.Write("<html>");
Response.Write("<head>");
Response.Write("<script type='text/javascript' language='javaScript' src='Default.js'></script>");
Response.Write("</head>");
Response.Write("<body>");

Response.Write("<input type='text' id='txtResult' value='' />");
Response.Write("<input type='submit' value='SendData' onclick='ButtonPressed(); return false;' />");

Response.Write("</body>");
Response.Write("</html>");


The "SetCacheability" command will prevent your browser from reading parameters from the local PC cache.  When this happens, your AJAX will only work one time, then every transmission back to your website will be the same.  If you forget this line and have to add it after running your code (and pulling your hair out trying to figure out what is going on), then you'll also have to manually clear the browser cache before it will work properly.

Just before the web page is "printed" there is a Request.QueryString check that catches a parameter called "psData".  This parameter is what the javascript will send back to the web site when the button is pressed.

OK, so now we need some javascript:

function AjaxCall(type, url, returnfunction)
{
    var xmlhttp = null;

    if (window.XMLHttpRequest)
    {// code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    }
    else
    {// code for IE6, IE5
        try
        {
            xmlhttp = new ActiveXObject('Msxml2.XMLHTTP');
        }
        catch (e)
        {
            try
            {
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            catch (e)
            {
                // AJAX is not supported by this browser
            }
        }
    }
    xmlhttp.open(type, url, true);
   
    if (returnfunction == null)
    {
        xmlhttp.onreadystatechange = function () { };
    }
    else
    {
        xmlhttp.onreadystatechange = function ()
        {
            if (xmlhttp.readyState == 4)
            {
                if (xmlhttp.status == 200)
                {
                    returnfunction(xmlhttp.responseText);
                }
            }
        };
    }

    xmlhttp.send();
}


function ButtonPressed()
{
    var lsURL = "Default.aspx?psData=5";
    AjaxCall('GET', lsURL, ButtonPressedReturn);
}


function ButtonPressedReturn(psReturnMessage)
{
     document.getElementById('txtResult').value = psReturnMessage;
}


OK, that ends the javascript.  The AjaxCall() function is just the code used to make an AJAX call itself.  The ButtonPressed() function sets up an AJAX call and uses a "GET" method to pass data back to the website. 

So let's start from the beginning:

1. The user clicks on the button called "SendData".

2. The javascript function named "ButtonPressed()" sends an AJAX message back to the webserver without the page being submitted (this is the whole magic of AJAX: no submit followed by a browser refresh).

3. The parameter "psData" containing the number 5 is passed to the code-behind page and picked up using the RequestQueryString variable.

4. The Request.QueryString branch sends the text "Success!" back to the browser.

5. The javascript return function called "ButtonPressedReturn" receives the text from the webserver in the variable called "psReturnMessage".

6. The javascript return function puts the text into the text box with an id of "txtResult".

Your code doesn't have to support a return message.  You can put null in place of the return function name and omit the return function.  Then you don't need to print back anything from the code behind page.  Your AJAX function would look something like this:

AjaxCall('GET', lsURL, null);


This technique can be used to send data back from the browser that you can store in your database.
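For example, here's a hypothetical version of the code-behind branch that saves the value instead of just echoing it back.  The "MyData" table, its "nValue" field and the connection string are made up for illustration; you'd substitute your own (you'll also need "using System.Data.SqlClient;" at the top of the file):

if (Request.QueryString["psData"] != null)
{
    string lsConnection = "...your connection string...";

    using (SqlConnection loConnection = new SqlConnection(lsConnection))
    using (SqlCommand loCommand = new SqlCommand(
        "INSERT INTO MyData (nValue) VALUES (@pnValue)", loConnection))
    {
        loCommand.Parameters.AddWithValue("@pnValue",
            Convert.ToDecimal(Request.QueryString["psData"]));
        loConnection.Open();
        loCommand.ExecuteNonQuery();
    }

    Response.Write("Saved!");
    return;
}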

Also, you can send multiple parameters using the "&" symbol, just like passing parameters to another web page:

var lsURL = "Default.aspx?psData=5&psData2=4";

In this example "psData" will contain a "5" and "psData2" will contain a "4".  You can also pass multiple parameters back to the web page from the web server by separating your data with a unique symbol.  I typically use the pipe ("|") symbol to separate multiple parameters, then I use a "split" command in javascript to get an array of data passed back:

Code Behind:

if (Request.QueryString["psData"] != null)
{
    Response.Write("Success!|Success2!");
    return;
}


JavaScript:

function ButtonPressedReturn(psReturnMessage)
{
    var laData = psReturnMessage.split("|");

    document.getElementById('txtResult').value = laData[0];
}


That's all for now. 

 

HTML Non-Breaking Spaces

One of the problems with HTML is that it will collapse extra spaces.  So if you create a pick-list of items and one of your options contains two spaces in a row (as an example), the two spaces will be reduced to one space.  Normally this is not a problem, unless you are using javascript or a submit to retrieve the data and you're expecting two spaces.  To overcome this problem, it's easy to convert each space into a non-breaking space by using the "&nbsp;" HTML entity.

Example:

<select name='lstMyStuff' >
<option>This contains two spaces&nbsp;&nbsp;in between.</option>
</select>
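If you're rendering the list from the code-behind, you can make the substitution as you write it out (the "lsItemText" variable here is hypothetical):

Response.Write("<option>" + lsItemText.Replace(" ", "&nbsp;") + "</option>");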


So now there's a tiny little problem with the submitted data: it will contain non-breaking spaces, which come back as character code 160 (not a regular space, which is character 32).  For reference, see this page from W3Schools:

HTML Entities

So now it's time to fix the data transmitted back.  The simple method is to use the string replace in C#:


string lsResult = lsSubmittedText.Replace((char)160,' ');


Notice the use of single quotes: we're replacing the char value 160 with a space char.

I hope this helps.  Good luck with your programming.

Sunday, March 3, 2013

Visio Automation

Microsoft Office products have the ability to be integrated with Visual Studio in what is called "Automation."  I view this technique as a quick and dirty capability that I use for extracting data from Word, Excel, Visio and other Office applications.  In this example I'm going to show a method to read some basic information from Visio.  MS Visio is a drawing tool.  It's different from a paint-like tool in that the objects drawn (circles, rectangles, lines, text, etc.) remain distinct objects that can be resized, rotated and modified after they have been positioned.  MS Visio is a great tool for doing flow charts and diagrams.  It can also be used to create simple maps or floor plans.

Let's say that you want to create a floor plan and you want to read the x,y coordinates of each line defined in a Visio diagram so you can store them in your program for use in an SVG viewer application.  First, you need to create a file with your floor plan.  Here's my sample floor plan:



OK, now we need to create a blank project in Visual Studio.  Just create a new forms application and drop a button on the form.  I titled my button "Read Plan", but you can make up any name you want, or just leave it at the default.  I also added a multi-line text box (use the text box control and set its Multiline property to true):

 

Remember, this is just a quick and dirty program to read the map data.  We're the only ones who will use this program.  If you are planning to build a program that will be used by other people, you'll need to clean this up and put in a file picker to choose the file to import.

OK, so now you need to include the using statement:

using Microsoft.Office.Interop.Visio;

Add the Microsoft.Office.Interop.Visio assembly to the references of the project.  Then use the following code to read the end points of the lines:

private void btnFloorPlan_Click(object sender, EventArgs e)
{
    // I stored the visio file in a temp folder on the C: drive
    string docPath = @"c:\temp\floor_plan.vsd";
    Microsoft.Office.Interop.Visio.Application app = new
        Microsoft.Office.Interop.Visio.Application();

    Document doc = app.Documents.Open(docPath);

    // read items from the first page
    Page page = doc.Pages[1];

    foreach (Shape shape in page.Shapes)
    {
        double lnStartX = shape.Cells["BeginX"].ResultIU;
        double lnStartY = shape.Cells["BeginY"].ResultIU;
        double lnEndX = shape.Cells["EndX"].ResultIU;
        double lnEndY = shape.Cells["EndY"].ResultIU;

        txtResults.Text +=
            lnStartX.ToString("0.00") + "," +
            lnStartY.ToString("0.00") + "," +
            lnEndX.ToString("0.00") + "," +
            lnEndY.ToString("0.00") + "\r\n";
    }

    app.Quit();
}


OK, so now we have some code and it opens the Visio file and reads each shape on page 1.  In my example, the Visio file only contains lines, so when I run it, I expect to read only lines and I don't need to verify which shapes are being read.  I just read the end points.  You'll notice when you run the app, that Visio starts up (and yes you need Visio installed to make this code work).  The last line, "app.Quit();" causes Visio to exit.  If you don't close the Visio application, then a new copy will start up if you run the program a second time.
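If your drawing could contain shapes other than lines, you might want to guard the loop.  Here's a sketch, assuming the shape's OneD property (which should be non-zero for 1-D shapes such as lines):

foreach (Shape shape in page.Shapes)
{
    // skip anything that isn't a 1-D shape; rectangles and other
    // 2-D shapes don't have meaningful begin/end points
    if (shape.OneD == 0)
        continue;

    double lnStartX = shape.Cells["BeginX"].ResultIU;
    // ... read the remaining cells as before ...
}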

Now you can run the program and you should get some numbers (not necessarily the same numbers displayed here):



Now you can select the text and copy to the clipboard.  I created a web application with a Default.aspx web page and deleted all the text (except the first line) from the front page:

 
 
 
Next, I wrote an empty method in the code behind:
 
 
protected void Page_Load(object sender, EventArgs e)
{
    RenderFloorPlan();
}

private void RenderFloorPlan()
{
}

Then I pasted in the text from the Visio reader application, formatted the information to fit into a list of strings and came up with this result:
 

private void RenderFloorPlan()
{
    List<string> laLines = new List<string>() {
        "2.00,10.00,6.00,10.00",
        "2.00,10.00,2.00,7.00",
        "6.00,10.00,6.00,7.00",
        "2.00,7.00,6.00,7.00",
        "3.00,10.00,3.00,9.50",
        "2.00,9.50,2.50,9.50",
        "2.75,9.50,3.25,9.50",
        "3.50,9.50,4.25,9.50",
        "4.00,9.50,4.00,10.00",
        "4.50,9.50,5.25,9.50",
        "5.00,9.50,5.00,10.00",
        "5.50,9.50,6.00,9.50",
        "2.00,8.00,2.25,8.00",
        "2.50,8.00,3.50,8.00",
        "3.75,8.00,4.00,8.00",
        "4.00,7.00,4.00,8.00"};

}

You can have the reader format its output to include the quotes and commas, which saves some hand-editing if you have a really big list.
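For example, the output line in the reader could emit ready-to-paste rows like this:

txtResults.Text +=
    "\"" +
    lnStartX.ToString("0.00") + "," +
    lnStartY.ToString("0.00") + "," +
    lnEndX.ToString("0.00") + "," +
    lnEndY.ToString("0.00") + "\",\r\n";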

Now we need to do something with this list.  First, I'm going to set up a generic SVG HTML output, right after the list defined above:

 

Response.Write("<!DOCTYPE html>");
Response.Write("<html>");
Response.Write("<body>");
Response.Write("<svg id='SVGObject' xmlns='http://www.w3.org/2000/svg' version='1.1'>");

// content goes here
        
Response.Write("</svg>");
Response.Write("</body>");
Response.Write("</html>");


Now we need to put something in that spot that says "content goes here".  That's where we loop through the list and print the lines.  I'm going to warn you up front that the coordinates read from the Visio drawing are not in the same scale as the SVG interface.  You'll need to play with a multiplier on each coordinate to get your diagram to fit the area of the screen that you want, so make sure you set this up so you can do that:

for (var i = 0; i < laLines.Count; i++)
{
    string[] lsLineCoords = laLines[i].Split(',');

    double lnStartX = Convert.ToDouble(lsLineCoords[0]) * 50;
    double lnStartY = Convert.ToDouble(lsLineCoords[1]) * 50;
    double lnEndX = Convert.ToDouble(lsLineCoords[2]) * 50;
    double lnEndY = Convert.ToDouble(lsLineCoords[3]) * 50;

    Response.Write("<line x1='" + lnStartX + "' y1='" + lnStartY +

        "' x2='" + lnEndX + "' y2='" + lnEndY +
        "' style='strokewidth:1px;stroke:black;' />");
}


Now run the program.  You should see something like this rendered in your browser:

Now you're probably wondering "why is it upside down?"  There's a reason for that and it's easy to fix.  First, Visio has an origin in the bottom left, so all coordinates go from bottom to top and left to right.  SVG has an origin in the top left corner.  So SVG's coordinates go from top to bottom and left to right.  All we need to do to fix this situation is subtract our "Y" coordinates from the height of the display area:

for (var i = 0; i < laLines.Count; i++)
{
     string[] lsLineCoords = laLines[i].Split(',');
     double lnStartX = Convert.ToDouble(lsLineCoords[0]) * 50;
     double lnStartY = 600 - Convert.ToDouble(lsLineCoords[1]) * 50;
     double lnEndX = Convert.ToDouble(lsLineCoords[2]) * 50;
     double lnEndY = 600 - Convert.ToDouble(lsLineCoords[3]) * 50;

     Response.Write("<line x1='" + lnStartX + "' y1='" + lnStartY +

         "' x2='" + lnEndX + "' y2='" + lnEndY +
         "' style='strokewidth:1px;stroke:black;' />");
}


I chose 600 just to make sure it fit on the browser.  You can add or subtract a constant from the x and y coordinates to shift the image on the screen.  Here's the final result:

Ta-da!

The whole purpose of this example is to show how to use some basic MS Visio automation in Visual Studio and then use the data read to render something in SVG to match the Visio image. 

If you find any errors or have something to add, please leave a comment.



Working for a Small Company

As I've probably mentioned a dozen times, I work for a small company.  First, let me define what I mean by small.  My company has just under 50 people (somewhere around 45 the last time I looked at the phone list).  I've worked for large corporations in the past (GM, Pirelli, the University of Michigan Flint campus).  I've also worked for smaller companies (about 10 people).  I like the small company environment.  It has one major advantage: you get to work on all aspects of IT.  It also has one disadvantage: you have to work on all aspects of IT.  OK, not everybody in my company is saddled with this burden; it's mostly my problem.  Fortunately, I like to get involved.  I also like to make forward progress, and forward progress means being able to control all aspects of the computing environment.

What in the world am I talking about?  When I worked for EDS/GM, I was hired as a contractor.  At the time I was pigeon-holed into the "Consistent Office Environment" (COE) deployment team.  I was hired as a Novell network person, which I understood to mean that I would be installing servers.  Unfortunately, it really meant I would just plug the server in.  The COE computing environment was a brilliant idea that was implemented in a not-so-brilliant way.  Every PC was configured identically, and every person had an identical model of PC.  Also, every Novell file server was configured identically.  Due to high turnover among contract personnel, EDS subdivided the jobs into tiny pieces so they could train a person in no time.  There was a group of people that set up the servers (installed Novell using a script).  There was a group of people who set up the users and rights on servers (they were called network security).  There was a group of PC installers.  There was a group of people in charge of network infrastructure (mostly hubs and distribution wiring).  Finally, there was a 3-tiered help desk group.  Sounds good, doesn't it?  Except each group of people did not have the permissions or authority to go outside their scope of work.  If a PC had a problem, it normally required 3 to 4 people to fix it.  Network security needed to check rights, a PC person needed to check the PC, the server group might need to check software installation problems, and it could require a network infrastructure person to fix wiring if the PC in question couldn't connect to a server at all.

Then there was help desk.  Oh yeah.  When a user called help desk, they entered level 1 help.  Level 1 help only asked whether the machine had power and whether the user had rebooted.  If those two criteria were met, then the person was passed along to level 2.  Level 2 help desk had a dozen or so scripts.  These guys were not very technical, but some were familiar with PC problems and they at least knew how to use a PC (so they could talk a person through a simple problem, like restoring a shortcut that they accidentally deleted).  Level 3 was the technical or PC repair people.  It wasn't really a help desk person per se; help desk would contact a person who would have to go to the PC and fix the problem.

Now back to the small company.  When a user has a problem, they call someone in IT.  In our case, we don't currently have a designated help desk person, so the call just comes to my desk.  I handle the problem myself, or I might get someone else to call the person back right away (or get up and help them, if it's someone in our office).  Sometimes they just walk over to my office and tell me their problem.  We don't currently have enough help desk calls to warrant a dedicated help desk person.  We've been in that position before, and back then we had a full-time help desk/office assistant.  He did odd jobs and was not a programmer, but he was able to help 90% of the people in the field who had issues with our software.

Another aspect of our company is that software developers have admin rights.  It's a little hazardous (think rogue employee), but it also allows us to fix things quickly.  We've had our share of mess-ups, but we're all qualified enough to be able to put systems back together when something goes south.  In our environment everybody gets qualified, because we built it.  From the ground up.  We bought the hardware and the operating systems, configured the servers, wrote the software and learned how to deploy it.  We all know every aspect of our own systems.  This is not normally achievable in a large company, due to the division of labor and the sheer size of the software a company can develop.

Our company employs more than one software developer, which I believe is necessary.  Why?  Because you need someone to bounce ideas off of.  It's not uncommon to get stuck and not be able to figure out why something in code doesn't work.  Sometimes it just takes a second set of eyes.  We also have contacts at other companies who specialize in networking solutions.  We employ one network person, and when he gets stuck, he can dial up an expert and we can pay to have him come out and solve a complex problem.

Not all small companies have the kind of IT infrastructure that my company has.  If you're looking for the kind of environment I'm talking about, look at the way the company uses computers.  Most companies with fewer than 50 people have a bunch of PCs and maybe a file server.  The business that my company does relies on an expert system that is custom designed around the company's operations.  It's a complex system and requires us to host our own website, email server and other equipment for software development.  It also requires an on-site staff due to the constant demand for more features and improvements.

Well, I think I've rambled enough for one day.  I hope you've enjoyed reading this blog.  If you have something to say, drop me a comment.

Database Design Ideas

Standardized Naming Conventions

When you design a new database it's a good idea to sit down and list the standards you want this database to follow.  To simplify the maintenance of the database that my company uses, we decided to follow a standard field naming convention.  Here's what we do (and you can make up your own rules to assist your project):

Rule 1: We use one letter at the beginning of the name to designate the field's data type.  This is very generic and we only use some basics.  We currently use "c" for character, "n" for decimal (we use decimal for all numbers and standardized on two types), "d" for datetime, "b" for blob (varbinary in MS SQL) and "m" for memo (text in MS SQL).  This rule was set in place in 1998, when Hungarian notation was still in common programming use.

Rule 2: We always make the primary key the first field, and the primary key name is a "c" followed by the table name.  OK, our primary keys are 12 characters, and we have a unique key generator object because we have remote databases that need to be able to merge with our master database.  You will probably design your database with an integer unique key, or use the MS SQL uniqueidentifier field type (I would recommend this).  The point of this rule is that the primary key is always named after its table, so a foreign key's name tells you exactly which table it connects to.

Rule 3: Every table contains a "dEdit" and a "cEditID" field.  The "dEdit" field is the date and time of the last update or insert.  The "cEditID" is the user id of the person's account that inserted or updated the record.  We also use "dCreate" and "cCreateID", but not on every table.  This is overkill, but maybe you'd like to maintain that information.
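To make the convention concrete, here's a hypothetical update that keeps the audit fields current.  The "Customer" table, its fields and the variables are made up for illustration, but they follow the rules above (note the "cCustomer" primary key named after its table):

// loConnection is assumed to be an open SqlConnection
using (SqlCommand loCommand = new SqlCommand(
    "UPDATE Customer " +
    "SET cName = @psName, dEdit = GETDATE(), cEditID = @psEditID " +
    "WHERE cCustomer = @psKey", loConnection))
{
    loCommand.Parameters.AddWithValue("@psName", lsName);
    loCommand.Parameters.AddWithValue("@psEditID", lsUserID);
    loCommand.Parameters.AddWithValue("@psKey", lsCustomerKey);
    loCommand.ExecuteNonQuery();
}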

Database Creation Tools

The reason we defined these rules is to make it easier to read the structure of tables and determine their purpose.  Also, and this is important, we designed and built tools to maintain our databases.  Our database definitions are stored in Excel files that can be read and used to generate a new database.  It's not necessary to store the definitions in Excel; you can always create a template database and use that to duplicate your database, or you can create the definitions in code and use that as the master template.  In our situation, we use Excel.

So now you'll need a set of rules that define when changes are made, who can make them and how they are made.  Here are our rules:

1. The operating database definitions are stored in the production database directory.  This is just a designated folder that contains our utility that can generate and fix databases.  Our tool is able to add/remove indexes, relational integrity constraints, primary keys, fields, tables, and triggers.  It can also just read a database, compare it with the spreadsheet and report what is mismatched (a bare-bones sketch of this comparison idea appears after this list).

2. For systems under development, a copy is kept in the designated development directory where the project design documents reside.  We have an entire system for numbering our development projects, and the project numbers match the directory structure in our development directory on the file server.  A better method (which we're converting to) is to create a subdirectory in your Visual Studio project called "DatabaseDefinitions" or some such name and copy the definitions there.  That way they'll be checked into your source control system (if you're using source control) and the proper database versions are always available for that version of the software.

3. Make a clean version of the development database first, then write the software to make it work properly.  This is pretty obvious, but sometimes overlooked.  Make sure all the indexes and relational integrity constraints are defined before starting any code.
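As promised above, here's a bare-bones sketch of the structure-comparison idea: check that each column the definition calls for actually exists in the live database.  In a real tool the expected list would come from the Excel definitions; everything here is illustrative:

using System;
using System.Data.SqlClient;

class StructureCheck
{
    static void Main()
    {
        // (table, column) pairs the definition says must exist
        var laExpected = new[] {
            new { Table = "Customer", Column = "cCustomer" },
            new { Table = "Customer", Column = "dEdit" },
            new { Table = "Customer", Column = "cEditID" }
        };

        string lsConnection = "...your connection string...";

        using (var loConnection = new SqlConnection(lsConnection))
        {
            loConnection.Open();

            foreach (var loField in laExpected)
            {
                using (var loCommand = new SqlCommand(
                    "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
                    "WHERE TABLE_NAME = @psTable AND COLUMN_NAME = @psColumn",
                    loConnection))
                {
                    loCommand.Parameters.AddWithValue("@psTable", loField.Table);
                    loCommand.Parameters.AddWithValue("@psColumn", loField.Column);

                    if ((int)loCommand.ExecuteScalar() == 0)
                        Console.WriteLine("Missing: {0}.{1}", loField.Table, loField.Column);
                }
            }
        }
    }
}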

Using Scripts

Another method, which is more commonly used, is to create scripts to generate your databases and fields.  I normally only use scripts for one-off, temporary setups.  My last use of scripts was to "Detach" all databases from one database server and another script to "Attach" all databases to another server; I used this to migrate from an old SQL server to a new one.  Technically, you can create scripts that build databases from scratch.  We don't use scripts for this purpose because we already designed a tool to verify a database structure, and that tool is easier to use than a script.  To generate a script for a database that you have set up in MS SQL, right-click on your database, select "Tasks", then "Generate Scripts".  This will bring up a wizard dialog box to lead you through the script-generating process.  You can "Script all objects" to produce a lengthy script that will generate a duplicate of this database.  If you need a duplicate of a database with a different name, you'll have to search and replace the database name in the script before running it.

I hope this article helps you think of ideas.  Good luck with your databases.