Wednesday, December 30, 2015

Upgrading the Computer

Summary

This is going to be a hardware blog post with some technology then and now mixed in.


The New Machine

I recently upgraded my computer.  It's been over six years since I assembled my last PC, so it was time.  One of the advantages of waiting so long is that I can spend less money and still get a significant increase in computing power over the older machine.  My real reason for waiting so long is that I've done this task dozens of times since the early 80's, when I purchased my first machine (which was a Macintosh, back when it was just called the Macintosh).  So I have many painful memories of upgrading hardware, hunting for the correct drivers and trying to figure out how to configure things and make them work.

I did some research on-line.  My first priority was hard drive speed.  I was one of the early adopters of the SSD.  To put it into perspective, I paid $700 for a 256GB SSD with a SATA interface.  That's original SATA, as in before SATA 2.  I purchased Windows 7 at the same time because Windows 7 had just come out (just to give an idea of how long ago that was).  I wanted to see what was available now, and I was very happy to discover that manufacturers are finally abandoning the hard drive interface.  Let's face it, an SSD is not a hard drive, it's a stick of memory.  The new SSDs use an interface called M.2, which mounts right on the motherboard and runs over the PCI-Express 3 bus.  No wires!  Yay!  I chose the new Samsung 950 Pro, which comes in two sizes: 256GB and 512GB.  I ended up buying the 512GB drive.


My old SSD runs at about 250MB/s; the new drive runs at about 2,500MB/s.  Yeah, it's 10 times faster.

So I started with the hard drive this time and built my computer around that component.  Usually, I start with the CPU.  My next task was to pick out a CPU and that was rough.  I've been out of the hardware business for too long, so I had to compare benchmark numbers against my existing PC to get my bearings.  I settled on the Intel Core i7-4790K, which is a 4GHz quad-core.  It's not cutting edge, but it's over 5 times faster than the Core 2 Duo I was using.

Next up was the motherboard, obviously.  Let's face it, once you pick out a CPU, you need to research a motherboard to match.  ASRock seems to be the common brand these days.  ASUS is still big and I've purchased a couple of their boards in the past; they go way back.  I stumbled onto ASRock because it was the first motherboard on NewEgg's website in the list of Intel motherboards that fit the socket of the CPU I was looking for.  Oh, and it had an M.2 socket (I'm betting they all do by now, and some have two slots).  My choice was the ASRock Fatal1ty Z97 Killer.



Next was memory.  Oh, there are so many choices.  I've bought a lot of G.Skill memory sticks in my lifetime.  I settled on a DDR3-2400 16GB kit (actually 2x8GB sticks).  I bought two kits, so I have 32GB of memory total, and at only $75 a kit, that's cheap.  I use a lot of memory because I do development work and I run multiple instances of SQL Server on my PC for development purposes.  I'm also researching virtual machine stuff like Docker.  So it'll be nice to have oodles of memory.



Now for the power supply.  One thing I learned a long time ago was not to skimp on the size of the power supply.  Overkill is good.  So I started looking on NewEgg for a power supply and I stumbled onto the new Corsair power supplies and discovered just how far that technology has come.  The RMx series is fully modular: the cords have plugs on both ends, so you don't have to install a metal box with a billion wires hanging out inside your computer.  Not to mention that most of those wires are never used.  I bought the 850 Watt model.



Nearing the finish line.  The video card.  Sigh, I've been out of it for so long, I'm not sure what is hot and what is not.  I can tell by the prices, but I had already sunk some cash into the hardware above and I was trying not to make this into a $3,000 purchase, so I settled on the EVGA GeForce GTX 960.  It was selling for about $200, which is really cheap for a graphics card.  It takes up two slot positions, which is pretty normal these days, and it came with 2GB of memory (they make a 4GB version as well).  My primary concern with a video card is that it can handle my 30" monitor, which requires a dual-link DVI connection (or I could adapt to HDMI).  The resolution of this monitor is 2560 x 1600, so it won't work with a cheap card.  If you're planning to build a gaming system, I would advise researching the video card first, then building your system around that.

The next component I bought is optional, but I have experience with CPUs and loud fans.  In fact, I assembled a Pentium 4 machine with a tiny jet engine attached to it (it was the Intel fan).  That prompted me to buy my first large and quiet copper fan.  So I also bought the Deepcool Gammaxx CPU cooler.  It is quiet!  





Oh and the blue lights are cool too!


The final piece was the case, and I ended up going generic.  I spent too much time trying to pick out a case and ended up choosing the Cooler Master HAF 912, which turned out to be a really nice case.


Assembly

The case arrived first so I had a box sitting around my computer room for a while.  Then the parts arrived, but UPS needed a signature so I had them hold at the distribution center.  I was happy to see that all the parts came in one large box.  Nothing worse than receiving all the parts, except the CPU (or some other critical part).  

The first thing to do is unpack the motherboard and get the case laid out.  I have done this so many times that I don't have to think about how to assemble a computer any more.  So I positioned the motherboard, identified which holes in the case would need posts and installed the extra posts.  It's important to put in all the posts because that gives the motherboard a very solid foundation, so it doesn't bend when you insert a card or plug in a power connector.  Then I attached the motherboard.

There is one step I always forget, and I curse myself every time I do it.  This time was no different.  I always forget to put the I/O shield on the back of the case before installing the motherboard.  Some of my older PCs went without that shield because I didn't want to take the machine back apart to install it.  This time I took the motherboard back out and put the shield in.  Then I re-attached the motherboard.


The next component I installed was the SSD.  The M.2 socket mounts the drive flush against the motherboard between the edge card connectors.  The socket area also has some screw holes and a screw that holds the drive down to the motherboard.  That's pretty nice, since it doesn't rely on the tightness of the socket to hold it in.  Here's what it looks like when it's mounted on a motherboard:


Next, I mounted the CPU.  This is a painful process.  I am not a fan of the LGA 1150 cooler mounting configuration.  The plastic posts that screw into the motherboard are difficult to get tight.  The Deepcool CPU cooler has several brackets to fit many types of CPU sockets, and after looking at the other options, the LGA 1150 bracket is the most annoying.  I do like the fact that processor companies have done away with pins on the CPU.  No chance of breaking any, not to mention that they have become so small you can't see them without a magnifying glass (or maybe my eyes are just getting bad... Nahh).

At this point, I would recommend hooking up all the connections from the front panel.  These include stuff like the on/off button, USB connectors, the HDD light and maybe the earphone and microphone jacks (or any other connections that are on the front panel of your case).  Don't do what I usually do: mount the video card and power supply first, then try to squeeze my hands between the two to plug in tiny connectors in a slot the width of a finger.

Once the front panel connections are completed, mount the memory, the video card and finally the power supply.  The power supply is usually last because once the wires are connected, it gets tight inside the case and you need to move wires around to get cards into their slots.

When the machine is powered up, you can connect a network cord and run the BIOS update.  This is sweet.  In the "good ole' days" we had to download the BIOS update software to another computer, put it on a bootable device (like a CD rom or floppy), then flash the BIOS.   


Installing Windows

For now, I'm re-installing my Windows 7 Ultimate edition on this PC.  I'll perform the Windows 10 upgrade later.  In the BIOS, the new SSD was recognized right away, but Windows 7 needs a driver.  This driver can be loaded from a USB stick, which beats the old method of using a floppy drive (ouch, I just revealed how "old" I am).  It took some effort to find the right drivers, but these worked:

Samsung windows 7 driver
 
Then the Windows install took another 5 minutes or so.

One of the problems with being the first person on the block to buy Windows on a disc is that the disc has no patches on it.  So the patch process took a couple of days and about 200 or so reboots (only exaggerating a little).  In the middle was an IE upgrade, and after I installed Visual Studio 2015 Community edition and MS SQL Server 2014, there were more updates.

I intended to just move my old 2TB hard drive over from the old machine for now.  It has photos and other bulk storage on it.  It didn't work with the new motherboard.  Not sure what the issue was, but I ended up purchasing a 4TB SATA-3 hard drive and installing it clean.  That worked right out of the box.  Copying from the old drive meant sharing the drive (with permissions) on the old machine, doing a bulk copy and paste, and letting that run for a few hours.

The new machine doesn't have the quick-boot option because it's not supported under Windows 7.  However, it still boots in about 2 to 3 seconds.  Very nice.  I also run CrashPlan, which I use for backup to cloud storage.  This is my "just in case the house burns down" backup system.  Like any backup software, it can kill the speed of a computer in no time.  On this machine, I can't tell if it's running or not.  That is due to two factors: super fast drives and lots of memory.


Performance Tests

PassMark makes a performance test that I've used on many machines.  This machine benchmarked at a total of 5426.  The CPU benchmarked a little slower than the baseline Intel i7-4790K results.  The 2D benchmarks are better than most of the graphics cards it compares with (there are a lot of tests).  The 3D tests came in at about half the speed of the higher-end GeForce cards, which I expected because this card cost about half as much.  The memory performance was better than all the other memory systems compared, except the DDR4 SDRAM PC4-17000 Crucial for the threaded tests.  Not sure if that's going to matter much.  The hard disk test was as expected.  The MBytes-transferred-per-second test blows away any other disk: it rated at 821, and the next fastest SSD was the Kingston SH100S3120G, which clocked in at 262.7.

My own real-world tests included SQL Server, which is blazing fast at querying data.  Photoshop pops up like it's a small utility program.  I opened 87 photos from a directory of JPEG files straight off my camera, ranging from 5MB to 8MB in size.  Photoshop opened them all without choking, within 30 seconds or so.  My old PC would have died after about 20 photos.  Visual Studio will compile and run a program so fast that I can't tell if it did anything.  So I put a breakpoint in the code and it just appeared at the breakpoint right as I hit the F5 key.


History

My first PC was a 386SX running at 16MHz.  I don't remember how much memory the machine had in it, but I do remember the hard drive was somewhere in the neighborhood of 160 Megabytes.  It was barely able to play Doom, and couldn't play Descent at all.

I skipped over the rest of the 386 line and went right to the 486DX2 running at 66MHz.  That machine was fast!  That was the first machine I installed Windows 95 on.  There were several hard drive, memory and video card upgrades during this time.  I remember upgrading the sound card to the Sound Blaster AWE32.  Now I just use on-board surround sound.

My next motherboard was a 90MHz Pentium.  I remember the 1.9 Gigabyte hard drive coming out about then.  I had to partition that thing into 4 drives because DOS couldn't handle such a large hard drive when they were first introduced.

Next came the 200MHz Pentium Pro.  That had a really cool giant chip.  The motherboard for that machine had a new 6-pin power connector on it and I was unable to use my existing power supply.

After that came a Pentium II (I think it was the 400MHz version).  This was the new slot-mounted cartridge with an integrated heat sink and fan.  I was never much of a fan of that configuration.  I ended up sitting out the Pentium III.  I upgraded to Windows XP on this machine.  40 Gigabyte hard drives came out around this time.  I remember thinking back to my first hard drive, a 40 Megabyte drive that I purchased for $900 in 1986 for my Macintosh.  I don't think I paid much more than $100 for the 40 Gig drive at the time.

The Pentium 4 arrived and I jumped on that CPU.  Mine ran at 1GHz.  I used this machine for a long time.

My last machine was a Dell Precision workstation.  It was configured with a Core 2 Duo and came with 2GB of memory, and as mentioned earlier, I added a 256GB SSD.  I also had to buy a video card to support the new monitor.  I finally ditched my old 22" tube for a flat screen and forked over $1,100 for the 30" Dell 3007WFP-HC monitor.

What comes next?  Something bigger, better and faster!

Monday, December 7, 2015

Side by Side Error

Summary

In this blog post I'm going to demonstrate how to generate an error that is difficult to find.


The Error

The error message that I'm going to show can be caused by a lot of different things.  The cause of this error will be the use of an "&" symbol inside the app.config file.  The error message you get will be:

The application has failed to start because its side-by-side configuration is incorrect.  Please see the application event log or use the command-line sxstrace.exe tool for more detail.

If you get this error, you'll need to go to the control panel and open up the Event Viewer.  Then look under the Application log for a side-by-side error message.  This message will probably point you in the right direction.  Now I'm going to show how to generate this error with a few lines of code.  First, I'm going to create a console program in C# with Visual Studio 2015:

using System;
using System.Configuration;

namespace SideBySideBlogPost
{
    class Program
    {
        static void Main(string[] args)
        {
            string temp = ConfigurationManager.AppSettings["test"];

            Console.WriteLine("test output");
            Console.ReadKey();
        }
    }
}


Then I'm going to create an app.config file and add the following inside the <configuration> section:

<appSettings>
    <add key="test" value="test output & more testing"/>
</appSettings>


If you are using Visual Studio 2015, you'll notice that you can't compile the program because the "&" symbol is not allowed.  So I removed the symbol, compiled the program, then edited the resulting app.config file in the bin/debug directory to put the "&" back in.  Then I ran the program.  Here's what the event viewer looks like:



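The real fix, of course, is to escape the ampersand in the XML, which both Visual Studio and the runtime will accept:

<add key="test" value="test output &amp; more testing"/>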
 

Saturday, December 5, 2015

Serializing Data

Summary

In this blog post I'm going to talk about some tricky problems with serializing and deserializing data.  In particular, I'm going to demonstrate a problem with the BinaryFormatter used in C# to turn an object into a byte array of data.


Using the BinaryFormatter Serializer

If you are serializing an object inside your project, storing the data someplace, then deserializing the same object in the same project, things will work as expected.  I'll show an example.  First, I'll define a simple object called AddressClass, which stores address information:

[Serializable]
public class AddressClass
{
    public string Address1 { get; set; }
    public string Address2 { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}


The first thing you'll notice is that there is a [Serializable] attribute.  This is needed in order for BinaryFormatter to serialize the object.  Next, I'll create an instance of this object in my console application and populate it with some dummy data:

// create an instance and put some dummy data into it.
var addressClass = new AddressClass
{
    Address1 = "123 Main st",
    City = "New York",
    State = "New York",
    Zip = "12345"
};


Now we're ready to serialize the data.  In this example, I'll just serialize this object into a byte array:

// serialize the object into a byte array
byte[] storedData;

using (var memoryStream = new MemoryStream())
{
    var binaryFormatter = new BinaryFormatter();
    binaryFormatter.Serialize(memoryStream, addressClass);

    storedData = memoryStream.ToArray();
}


There's nothing fancy going on here.  You can even use a compressor inside the code above to compress the data before saving it someplace (like SQL or Redis or transmitting over the wire).  Now, I'm going to just deserialize the data into a new object instance:

//deserialize the object
AddressClass newObject;
using (var memoryStream = new MemoryStream())
{
    var binaryFormatter = new BinaryFormatter();

    memoryStream.Write(storedData, 0, storedData.Length);
    memoryStream.Seek(0, SeekOrigin.Begin);

    newObject = (AddressClass)binaryFormatter.Deserialize(memoryStream);
}


If you put a break-point at the end of this code as it is, you can see the newObject contains the exact same data that the addressClass instance contained.  In order to make all the code above work in one program you'll have to include the following usings at the top:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;



Deserializing in a Different Program

Here's where the trouble starts.  Let's say that you have two different programs.  One program serializes the data and stores it someplace (or transmits it).  Then another program reads that data and deserializes it for its own use.  To simulate this, and avoid writing a bunch of code that would distract from this blog post, I'm going to dump the serialized data as an array of integers in a text file, copy that raw text into my second program as the preset contents of a byte array, and then copy the AddressClass definition and the deserialize code above into that second program.  This should deserialize the data and put it into the new object as before.  But that doesn't happen.  Here's the error that will occur:

Unable to find assembly 'SerializatingDataBlogPost, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.

This error occurs on this line:

newObject = (AddressClass)binaryFormatter.Deserialize(memoryStream);

Inside the serialized data is a reference to the assembly that was used to serialize the information.  If that assembly doesn't match, BinaryFormatter will not convert the serialized data back into the object, even though the class definition is identical.

If you dig around, you'll find numerous articles on how to get around this problem.  Using a custom SerializationBinder and overriding its BindToType method is one approach, as shown here:

Unable to find assembly with BinaryFormatter.Deserialize

And it goes down a rabbit hole from there.  
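For reference, here's roughly what that workaround looks like.  This is a minimal sketch that assumes the receiving project has its own copy of the AddressClass definition; the binder maps whatever assembly name is stored in the stream onto the local type:

using System;
using System.Runtime.Serialization;

// map the AddressClass type name in the stream onto the local definition,
// regardless of which assembly originally serialized it
public class AddressClassBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        if (typeName.EndsWith("AddressClass"))
        {
            return typeof(AddressClass);
        }

        // fall back to normal resolution for any other type
        return Type.GetType(string.Format("{0}, {1}", typeName, assemblyName));
    }
}

You would then assign it to the formatter before deserializing (binaryFormatter.Binder = new AddressClassBinder();).  It works, but the binder has to know about every type you serialize, which is one of the reasons I prefer the approach below.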


Another Solution

Another solution is to serialize the data into JSON format.  The Newtonsoft serializer is very good at serializing objects into JSON, and the JSON can then be deserialized back into the same class inside another assembly.  Use the NuGet manager to add Newtonsoft.Json to your project.  Then use the following code to serialize your addressClass object:

// serialize the object to JSON (JsonConvert.SerializeObject is static, so no serializer instance is needed)
string resultSet = JsonConvert.SerializeObject(addressClass);


This will convert your object into the following string:

{"Address1":"123 Main st","Address2":null,"City":"New York","State":"New York","Zip":"12345"}

Inside another project, you can deserialize the string above using this:

// deserialize object
AddressClass newObject;
newObject = JsonConvert.DeserializeObject<AddressClass>(resultSet);



Compressing the Data

JSON is a text format and it can take up a lot of space, so you can add a compressor to your code to reduce the amount of space that your serialized data takes up.  You can use the following methods to compress and decompress string data into byte array data:

private static byte[] Compress(string input)
{
    // UTF8 here and in Decompress so the round trip is lossless
    byte[] inputData = Encoding.UTF8.GetBytes(input);
    byte[] result;

    using (var memoryStream = new MemoryStream())
    {
        using (var zip = new GZipStream(memoryStream, CompressionMode.Compress))
        {
            zip.Write(inputData, 0, inputData.Length);
        }

        // the GZipStream must be disposed before the compressed bytes are read
        result = memoryStream.ToArray();
    }

    return result;
}

private static string Decompress(byte[] input)
{
    byte[] result;

    using (var outputMemoryStream = new MemoryStream())
    {
        using (var inputMemoryStream = new MemoryStream(input))
        {
            using (var zip = new GZipStream(inputMemoryStream, CompressionMode.Decompress))
            {
                zip.CopyTo(outputMemoryStream);
            }
        }

        result = outputMemoryStream.ToArray();
    }

    // use the same encoding that Compress used
    return Encoding.UTF8.GetString(result);
}


You'll need the following usings:

using System.IO;
using System.IO.Compression;
using System.Text;


Then you can pass your JSON text to the compressor like this:

// compress text
byte[] compressedResult = Compress(resultSet);


And you'll need to decompress back into JSON before deserializing:

// decompress text
string resultSet = Decompress(compressedResult);
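
Putting the pieces together, a minimal round trip using the methods above looks something like this (the first two lines would live in the sending program, the last two in the receiving program):

// serialize and compress
string json = JsonConvert.SerializeObject(addressClass);
byte[] compressed = Compress(json);

// decompress and deserialize
string restoredJson = Decompress(compressed);
AddressClass restored = JsonConvert.DeserializeObject<AddressClass>(restoredJson);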



Where to Get the Code

As usual, you can go to my GitHub account and download the projects by clicking here.



Sunday, November 15, 2015

The Cloud

This post is going to be a bit different from the usual.  I'm going to talk a little about cloud computing.  Specifically about Amazon's cloud and what you should consider if you are thinking about using a cloud-based system.  This article will be geared more toward the business end of cloud computing, though I'm going to describe some technical details up front.


Some History

When the cloud was first offered, I was a bit skeptical about using it.  At the time I was working for a company that hosted its own equipment.  Internet connection bandwidth was not like it is today (I think we had a 5 megabit connection back then).  The cloud was new, and virtualization was new and expensive.  There were a lot of questions about how to do it.  If I were to start up a new system for a company like that today, I'd recommend the cloud.

Any IT person who installs equipment for a data center today knows about cloud computing.  I have now worked for two companies that hosted their equipment at a major data center, using virtual hosts.  The advantages of hosting equipment at a data center versus providing your own facility are numerous.  Off the top of my head: cooling, backup power, data bandwidth to the internet, and physical security at the site.  The cloud provides additional benefits: you pay for equipment as you need it, and you avoid the delay of ordering new equipment.


Amazon Web Services (AWS)

I examined a few different cloud services and I'll blog about other services as I get time to gain some experience.  The reason I started with AWS is that they have a one-year free trial.  That is marketing genius right there!  First, they encourage software developers to sign up and learn their system.  That gets their foot in the door of companies that might start using cloud computing and abandon their physical data centers, all because they have developers on staff who already know the technology.  Second, a year is a lot of time to experiment.  A person can get really good at understanding the services, or they can attempt to build a new product on the service to see how it operates.

I signed up and it does require a credit card to complete the sign up.  That sends off a few alarms in the back of my head because technically, they could charge my card without me knowing it.  So the first thing I did was find out where I can review any charges.  I also noticed that there are warning messages that tell me when I'm attempting to setup a service that does not apply to the free tier (which means that I'll get charged).  The great unknown is what happens if I accidentally get a flood of traffic for a test application that I've posted?  I guess I'll find out, or hopefully not.

Anyway, here's what the billing screen looks like:


This is accessible from the drop-down menu above with your name on the account ("Frank DeCaire" for my account).

There are a lot of services on AWS and their control panel is rather large:


Where to start?  I started with Elastic Beanstalk.  Amazon uses the word "Elastic" in all kinds of services.  At first, I thought it was just a cute word they used to describe their product the way that Microsoft uses the word "Azure".  I began to read some documents on their services and the word "Elastic" refers to the fact that you can program your cloud to provision new servers or tear-down servers according to trigger points.  So you can cause more servers to be put on line if your load becomes too high.  Conversely you can automatically tear-down servers if the load gets too low (so you don't have to pay for servers you don't need during low volume times).  This is where the term "Elastic" comes in.  The number of servers you apply to your product is elastic.  

Back to Beanstalk.  The Elastic Beanstalk application has a web server and an optional database server.  So I clicked into the Beanstalk app and created an IIS server (there are several web server types to choose from).  Then I added a SQL Server Express database under RDS.  The database server required an id and password.  Once that was created, there is a configuration details screen containing a url under the section named Endpoint.  This is the connection url that can be used by SQL Server Management Studio.  Once connected, I was able to manipulate SQL Server the same as a local instance.  I created tables and inserted data to make sure it worked.


IIS

The IIS server control panel looks like this:



You can click on the blue link to pop-up the website url that points to this web server (or server farm).  I have intentionally obscured the id by replacing it with "abcdefgh", so the id above will not work.  You'll need to create your own account and a random id will be generated for your own server.

Next, you need to download the toolkit for Visual Studio (click here).  I installed it on VS 2015, so I know it works on the newest version of Visual Studio.  I also tested it on VS 2013.  There are a few gotchas that I ran into.  First, I ran into an error when attempting to deploy to AWS.  The error I received was that the URL validation failed ("Error during URL validation; check URL and try again").  This turned out to be a false error.  What I discovered was that there was a permissions problem with the AWS user I was deploying with.  This can be fixed in the Identity and Access Management (IAM) console.  I had a user created, but I did not assign a group to the user.  The IAM console is rather complex and requires some head-scratching.  Stack Overflow is where I found the best answer for troubleshooting this issue:

aws-error-error-during-url-validation-check-url-and-try-again

My next problem gave an error "The type initializer for 'Microsoft.Web.Deployment.DeploymentManager' threw an exception." which was just as cryptic.  As it turned out there are registry entries that SQL Server doesn't remove when uninstalling older versions of SQL Server that interfere with the deployment software in Visual Studio.  The keys are:

HKLM\Software\Microsoft\IIS Extensions\msdeploy\3\extensibility
HKLM\Software\Wow6432Node\Microsoft\IIS Extensions\msdeploy\3\extensibility


They both should be removed.  I also found that information from stack overflow:

Web deployment task failed. (The type initializer for 'Microsoft.Web.Deployment.DeploymentManager' threw an exception.)

At that point I was able to deploy my application and get a "Hello World" program running.  Once this capability is in place you can focus on the development process and not deal with configuration details until you need more capabilities.


Real World Application

Now that I have the basics down, I still need to test some of the other features of AWS (like their EC2 virtual servers).  However, I have enough knowledge to actually use AWS for a production system.  If you're analyzing this service as a migration target for an existing system, there are a lot of things you still need to consider.  The first thing you'll need to do is find out how much it'll cost to store the amount of data you already use.  How much web traffic are you using?  How many servers do you currently use?  These all go into the cost equation.  When you compute those costs, the total should be lower than what you are currently paying for your equipment, data connection and facility.  If not, then you should not move your system.

If you are contemplating a start-up, you'll have other factors to consider.  First and foremost, assuming you haven't created your software yet, you'll need to decide which web platform and database engine you'll use.  If you haven't worked at a company with a large database system, you might not realize how much licenses can cost when you need to scale out.  In the early stages of development, priority might be placed on how easy it is to get the site up and running.  This will haunt you in the long run if your user base grows.  I would seriously consider using free or open-source software where you can.  AWS has MySQL and Apache with Java, Python or PHP.  Ruby is another option.  If you lock yourself into IIS and SQL Server, you'll need to pay the extra licensing fees when your application outgrows the Express edition.  Once you have created thousands of stored procedures in SQL, you're locked in, with a re-development cost that is astronomical or license fees that are almost as bad.

Another factor to contemplate in a start-up is the cost of getting your business going.  If you have seed capital, then you're probably set for a fixed period of time.  If you are doing this on your own, then you're probably worried about how much it will cost until you get enough customers to cover your fees.  You'll need to compute this information ahead of time.  You need to ask yourself: "How many paying customers do I need in order to break even?"  If you are providing a two-tier website that has a free component (which is a great way to hook people) and a paid component that has powerful features, you'll need to figure out what the ratio of paid to free customers will be.  If you're conservative with your figures, you'll come out ahead.  I would start with a 5%/95% split and compute what you need.  That means you'll need to pay for 100% of your customers' data and bandwidth usage, but you'll only collect money from the 5% that are paying.  If you plan to sell advertisements, you'll need to factor that in as well.
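
To put some hypothetical numbers on that: if hosting works out to roughly $0.10 per active user per month and your paid tier is $5 per month, then with a 5%/95% split each paying customer has to cover their own hosting plus 19 free users, or about $2 of the $5 you collect.  That leaves roughly $3 per paying customer to put toward your fixed costs before you break even.  The actual numbers will be different for your product, but the arithmetic is the same.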

Now you're probably thinking "how do I know what these numbers are going to be?"  Well, that's where this free AWS service is handy.  If you're clever, you'll get your application up and running before you sign up for AWS, or if your application is expected to be small and easy to build, you can build it directly on AWS.  When you're ready to do some usage testing, you can put it on line and get it into the search engines.  At first you'll end up with 100% free users.  Your traffic should increase.  You'll have to take an educated guess at what to charge for the advanced features.  Too much, and nobody will see the value.  Too cheap and you'll go broke.  The ideal price point would be something that seems cheap for what the customer receives, but enough to cover costs and earn a profit.  What that price point is, depends on what your application does.

AWS has a system for taking credit card information and keeping track of accounting information.  You'll need this type of system in order to keep track of who has paid and what they have paid for.  This service is called DevPay.  The goal is to automate the process of collecting payment information, activating accounts and deactivating accounts.  That's a task that can overwhelm a person in no time if your product becomes successful.  Here's the basic information on DevPay:

What is Amazon DevPay?


Other Considerations

Once you launch your application and it becomes established, you'll need to consider your growth rate.  If your income is large enough, you can plan for new versions of your software according to how many developers you can keep on staff or contract.  In the cloud scenario, there is no need to pay for office space.  Technically, you can run the entire operation from your home.  Avoid adding the cost of an expensive facility until you really need it.  

Keep your eyes open on other cloud providers.  Google or Microsoft (and others) can provide equivalent services.  If their pricing structure makes your product cheaper to operate, consider porting to their cloud.  If you keep this in mind when you're small, you can keep your application in a format that can be re-deployed quickly.  If you build in too many Amazon specific features you might be stuck until you can redesign a feature (Yes, I mentioned this fact after I talked about DevPay in the previous paragraph).  Another option is to use a cloud provider specific feature long enough to design your own non-cloud provider specific feature.  In other words, use DevPay for your application until you can hire developers or put in the development time to write your own (or possibly use another 3rd party product).  Always keep your application capable of being moved.  Otherwise, you'll be hostage to a provider that someday may become hostile to your business.

Deployment tools are another feature you should get familiar with.  Automate your deployment as much as possible.  AWS has deployment tools that allow the developer to clone a production web server in isolation and to deploy a development version of your application for testing purposes.  If you need to do a lot of manual steps to get your application tested and deployed, you'll be wasting valuable developer time.  Time that is very expensive.

Get familiar with the security features.  If you hire outside contractors to perform maintenance or development tasks, you'll need to be able to shut off their accounts quickly if something goes wrong.  Make sure you understand what capabilities you are giving to another person.  Don't allow a rogue programmer to put in back-doors and open holes to the internet that you don't know exist.  Always monitor what is going on with your system.

I could go on all day, but at this point you should go to the AWS site and sign up for free usage.  Get some experience.  Click here.  When you get a "Hello World" program deployed and working, try some new features.  I would also recommend seeking out other cloud products from other vendors.  Google and Microsoft come to mind but there are others like AT&T, EMC, IBM, etc.



  

Saturday, November 14, 2015

Web APIs with CORS

Summary

I've done a lot of .Net Web APIs.  APIs are the future of web programming.  APIs allow you to break your system into smaller systems to give you flexibility and most importantly scalability.  It can also be used to break an application into front-end and back-end systems giving you the flexibility to write multiple front-ends for one back-end.  Most commonly this is used in a situation where your web application supports browsers and mobile device applications.


Web API

I'm going to create a very simple API with one GET method in one controller.  My purpose is to show how to add Cross Origin Resource Sharing (CORS) support and how to connect all the pieces together.  I'll be using a plain HTML web page with a jQuery script to perform the AJAX call.  I'll also use JSON for the protocol.  I will not be covering JSONP in this article.  My final purpose in writing this article is to demonstrate how to troubleshoot problems with APIs and what tools you can use.

I'm using Visual Studio 2015 Community edition.  The free version.  This should all work on version 2012 and beyond, though I've had difficulty with 2012 and CORS in the past (specifically with conflicts with Newtonsoft JSON).

You'll need to create a new Web API application.  Create an empty application and select "Web API" in the check box.  




Then add a new controller and select "Web API 2 Controller - Empty".




Now you'll need two NuGet packages and you can copy these two lines and paste them into your "Package Manager Console" window and execute them directly:

Install-Package Newtonsoft.Json
Install-Package Microsoft.AspNet.WebApi.Cors

For my API Controller, I named it "HomeController" which means that the path will be:

myweburl/api/Home/methodname

How do I know that?  It's in the WebApiConfig.cs file, which can be found inside the App_Start directory.  Here's the default:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);


The word "api" is in all path names to your Web API applications, but you can change that to any word you want.  If you had two different sets of APIs, you could use two routes with different patterns, as in the sketch below.  I'm not going to get any deeper here.  I just wanted to mention that the "routeTemplate" controls the url pattern that you will need in order to connect to your API.
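
For example, a second set of APIs with its own prefix might look something like this (the "adminapi" prefix and route name are just made up for illustration):

config.Routes.MapHttpRoute(
    name: "AdminApi",
    routeTemplate: "adminapi/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);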

If you create an HTML web page and drop it inside the same site as your API, it'll work.  However, what I'm going to do is run my HTML file from my desktop and make up a URL for my API.  This will require CORS support; otherwise, the browser will refuse to complete the AJAX calls.

At this point, the CORS support is installed from the above NuGet package.  All we need is to add the following using to the WebApiConfig.cs file:

using System.Web.Http.Cors;

Then add the following code to the top of the "Register" method:

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);


I'm demonstrating support for all origins, headers and methods.  However, you should narrow this down after you have completed your APIs and are ready to deploy your application to a production system.  That limits which websites are allowed to call your APIs from a user's browser (see the sketch below).
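
For example, a locked-down Register method might end up looking roughly like this (the origin below is a placeholder for your real front-end URL):

public static void Register(HttpConfiguration config)
{
    // only allow requests from our own front-end, and only for GET and POST
    var cors = new EnableCorsAttribute("http://www.mywebsite.com", "*", "GET,POST");
    config.EnableCors(cors);

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
}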

Next, is the code for the controller that you created earlier:

using System.Net;
using System.Net.Http;
using System.Web.Http;
using WebApiCorsDemo.Models;
using Newtonsoft.Json;
using System.Text;

namespace WebApiCorsDemo.Controllers
{
    public class HomeController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage MyMessage()
        {
            var result = new MessageResults
            {
                Message = "It worked!"
            };

            var jsonData = JsonConvert.SerializeObject(result);
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
            return resp;
        }
    }
}

 

You can see that I serialized the MessageResults object into JSON and returned it in the response content with a type of application/json.  I always use a serializer to create my JSON if possible.  You can generate the same output by building the JSON string manually.  That works, and it's really easy on something this tiny.  However, I would discourage the practice because it becomes a programming nightmare as a program grows in size and complexity.  Once you become familiar with APIs and start to build a full-scale application, you'll be returning large, complex data types, and it's so easy to miss a "{" bracket and spend hours fixing something you shouldn't be wasting time on.

The code for the MessageResults class is in the Models folder called MessageResults.cs:

public class MessageResults
{
    public string Message { get; set; }
}
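
Even with a class this small, letting the serializer do the work pays off as the data grows.  As a quick sketch (assuming a using for System.Collections.Generic), serializing a whole list of results is still one line:

var results = new List<MessageResults>
{
    new MessageResults { Message = "first" },
    new MessageResults { Message = "second" }
};

string json = JsonConvert.SerializeObject(results);
// produces: [{"Message":"first"},{"Message":"second"}]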


Now we'll need a jQuery script that will call this API, and then we'll need to set up IIS.

For the HTML file, I created a Home.html file and populated it with this:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <meta charset="utf-8" />
    <script src="jquery-2.1.4.min.js"></script>
    <script src="Home.js"></script>
</head>
<body>
    Loading...
</body>
</html>


You'll need to download jQuery.  I used version 2.1.4 in this example, but I would recommend going to the jQuery website, downloading the latest version, and changing the script url above to reflect the version of jQuery that you're using.  You can also see that I named my js file "Home.js" to match my "Home.html" file.  Inside my js file is this:

$(document).ready(function () {
    GetMessage();
});

function GetMessage() {
    var url = "http://www.franksmessageapi.com/api/Home/MyMessage";

    $.ajax({
        crossDomain: true,
        type: "GET",
        url: url,
        dataType: 'json',
        contentType: 'application/json',
        success: function (data, textStatus, jqXHR) {
            alert(data.Message);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            alert(formatErrorMessage(jqXHR, textStatus));
        }
    });
}


There is an additional "formatErrorMessage()" function that is not shown above; you can copy it from the full code I posted on GitHub, or just remove it from your error handler.  I use that function for troubleshooting AJAX calls.  At this point, if you typed in all the code from above, you won't get any results, primarily because the URL "www.franksmessageapi.com" doesn't exist on the internet (unless someone goes out and claims it).  You have to set up IIS with a dummy URL for testing purposes.

So open the IIS control panel, right-click on "Sites" and "Add Website":



For test sites, I always name my website the exact same URL that I'm going to bind to it.  That makes it easy to find the correct website, especially if I have 50 test sites set up.  You'll need to point the physical path to the root path of your project, not the solution.  This will be the subdirectory that contains the web.config file.

Next, you'll need to make sure that your web project directory has permissions for IIS to access.  Once you create the website you can click on the website node and on the right side are a bunch of links to do "stuff".  You'll see one link named "Edit Permissions", click on it.  Then click on the "Security" tab of the small window that popped up.  Make sure the following users have full permissions:

IUSR
IIS_IUSRS (yourpcname\IIS_IUSRS)

If either does not exist, add it and give it full rights.  Then close your IIS window.

One more step before your application will work.  You'll need to redirect the URL name to your localhost so that IIS will listen for HTTP requests.

Open your hosts file, located at C:\Windows\System32\drivers\etc\hosts.  This is a text file and you can add as many entries to this file as you would like.  At the bottom of the hosts file, I added this line:

127.0.0.1        www.franksmessageapi.com

You can use the same name, or make up your own URL.  Try not to use a URL that exists on the web or you will find that you cannot get to the real address anymore.  The hosts file will override DNS and reroute your request to 127.0.0.1 which is your own PC.

Now, let's do some incremental testing to make sure each piece of the puzzle is working.  First, let's make sure the hosts file entry is working correctly.  Open up a command window.  You might have to run it as administrator if you are using Windows 10.  You can type "CMD" in the run box and start the window up.  Then execute the following command:

ping www.franksmessageapi.com

You should get the following:



If you don't get a response back, then you might need to reboot your PC, or clear your DNS cache.  Start with the DNS cache by typing in this command:

ipconfig /flushdns

Try the ping again.  If it doesn't work, reboot and then try again.  If it still doesn't work after that, you'll need to select a different URL name.  Beyond that, it's time to Google.  Don't go any further until you get this problem fixed.

This is a GET method, so let's open a browser and go directly to the path where we think our API is located.  Before we do that, rebuild the API application and make sure it builds without errors.  Then open the js file, copy the URL that we'll call, and paste it into the browser address bar.  You should see this:



If you get an error of any type, you can use a tool called Fiddler to analyze what is happening.  Download and install Fiddler.  You might need to change Firefox's configuration for handling proxies (Firefox will block Fiddler, as if we needed another problem to troubleshoot).  For the version of Firefox as of this writing (42.0), go to the Options, Advanced, Network, then click the "Settings" button to the right of the Connection section.  Select "Use system proxy settings".

OK, now you should be able to refresh the browser with your test URL in it and see something pop up in your Fiddler screen.  Obviously, if you have a 404 error, you'll see it long before you notice it on Fiddler (it should report 404 on the web page). This just means your URL is wrong.

If you get a "No HTTP resource was found that matches the request URI" message in your browser, you might have your controller named wrong in the URL.  This is a 404 sent back by the application because it couldn't route the request correctly.  This error will also return something like "No type was found that matches the controller named [Home2]", where "Home2" was in the URL but your controller is named "HomeController" (which means your URL should use "Home").

Time to test CORS.  In your test browser setup, CORS will not refuse the connection.  That's because you are requesting your API from the website that the API is hosted on.  However, we want to run this from an HTML page that might be hosted someplace else.  In our test we will run it from the desktop.  So navigate to where you created "Home.html" and double-click on that page.  If CORS is not working you'll get an error.  You'll need Fiddler to figure this out.  In Fiddler you'll see a 405 error.  If you go to the bottom right window (this represents the response), you can switch to "raw" and see a message like this:

HTTP/1.1 405 Method Not Allowed
Cache-Control: no-cache
Pragma: no-cache
Allow: GET
Content-Type: application/xml; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:53:34 GMT
Content-Length: 96

<Error><Message>The requested resource does not support http method 'OPTIONS'.</Message></Error>


The first request in a cross-origin call is the OPTIONS request.  This occurs before the GET.  The purpose of the OPTIONS request is to determine whether the endpoint will accept a request from your page's origin.  For the example code, if the CORS section is working inside the WebApiConfig.cs file, then you'll see two requests in Fiddler: one OPTIONS request followed by a GET request.  Here's the OPTIONS response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:58:23 GMT
Content-Length: 0


And the raw GET response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 24
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 01:10:59 GMT

{"Message":"It worked!"}


If you switch your response to JSON for the GET response, you should see something like this:



One more thing to notice.  If you open a browser, paste the URL into it, and then change the name of the MyMessage action in the URL, you'll notice that it still performs a GET operation from the controller, returning the "It worked!" message.  With the default route, if you create two or more GET methods in the same controller, one action becomes the default for all GET operations, no matter which action you specify.  To route by action name, modify the route inside your WebApiConfig.cs file and add "{action}" to the template like this:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);


Now you should see an error in your browser if the action name in your URL does not exist in your controller:



Finally, you can create two or more GET actions and they will be distinguished by the name of the action in the URL.  Add the following action to your controller inside "HomeController.cs":

[HttpGet]
public HttpResponseMessage MyMessageTest()
{
    string result = "This is the second controller";

    var jsonData = JsonConvert.SerializeObject(result);
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
    return resp;
}


Rebuild, and test from your browser directly.  First use the URL containing "MyMessage":






Then try MyMessageTest:


Notice how the MyMessageTest action returns a JSON string and the MyMessage returns a JSON message object.



Where to Find the Source Code

You can download the full Visual Studio source code from my GitHub account by clicking here.