Sunday, November 15, 2015

The Cloud

This post is going to be a bit different from the usual.  I'm going to talk a little about cloud computing.  Specifically, I'll discuss Amazon's cloud and what you should consider if you're thinking about using a cloud-based system.  This article is geared more toward the business end of cloud computing, though I'm going to describe some technical details up front.

Some History

When the cloud was first offered, I was a bit skeptical about using it.  At the time I was working for a company that hosted its own equipment.  Internet connection bandwidth was not what it is today (I think we had a 5 megabit connection back then).  The cloud was new, and virtualization was new and expensive.  There were a lot of questions about how to do it.  If I were to start up a new system for a company like that today, I'd recommend the cloud.

Any IT person who installs equipment for a data center today knows about cloud computing.  I have now worked for two companies that hosted their equipment at a major data center using virtual hosts.  The advantages of hosting equipment at a data center versus providing your own facility are numerous.  Off the top of my head: cooling, backup power, data bandwidth to the internet, and physical security at the site.  The cloud provides additional benefits: you pay for equipment only as you need it, and you avoid the delay of ordering new equipment.

Amazon Web Services (AWS)

I examined a few different cloud services and I'll blog about the others as I get time to gain some experience.  The reason I started with AWS is that they offer a free one-year trial.  That is marketing genius right there!  First, they encourage software developers to sign up and learn their system.  That gets their foot in the door of companies that might adopt cloud computing and abandon their physical data centers, all because those companies have developers on staff who already know the technology.  Second, a year is a lot of time to experiment.  A person can get really good at understanding the services, or they can attempt to build a new product on top of them to see how it all operates.

I signed up, and it does require a credit card to complete the sign-up.  That sets off a few alarms in the back of my head because, technically, they could charge my card without my knowing it.  So the first thing I did was find out where I can review any charges.  I also noticed that there are warning messages that tell me when I'm attempting to set up a service that does not apply to the free tier (which means I'll get charged).  The great unknown is what happens if I accidentally get a flood of traffic to a test application that I've posted.  I guess I'll find out, or hopefully not.

Anyway, here's what the billing screen looks like:

This is accessible from the drop-down menu above with your name on the account ("Frank DeCaire" for my account).

There are a lot of services on AWS and their control panel is rather large:

Where to start?  I started with Elastic Beanstalk.  Amazon uses the word "Elastic" in all kinds of services.  At first, I thought it was just a cute word they used to describe their product, the way Microsoft uses the word "Azure".  Then I began to read some documents on their services, and it turns out "Elastic" refers to the fact that you can program your cloud to provision new servers or tear down servers according to trigger points.  So you can cause more servers to be brought online if your load becomes too high.  Conversely, you can automatically tear down servers if the load gets too low (so you don't have to pay for servers you don't need during low-volume times).  That's where the term "Elastic" comes in: the number of servers you apply to your product is elastic.

Back to Beanstalk.  An Elastic Beanstalk application has a web server and an optional database server.  So I clicked into the Beanstalk app and created an IIS server (there are several web server types to choose from).  Then I added a SQL Server Express database under RDS.  The database server required an ID and password.  Once that was created, there is a configuration details screen containing a URL under the section named Endpoint.  This is the connection URL that can be used by SQL Server Management Studio.  Once connected, I was able to manipulate SQL Server the same as a local instance.  I created tables and inserted data to make sure it worked.


The IIS server control panel looks like this:

You can click on the blue link to pop up the website URL that points to this web server (or server farm).  I have intentionally obscured the ID by replacing it with "abcdefgh", so the ID above will not work.  You'll need to create your own account, and a random ID will be generated for your own server.

Next, you need to download the AWS toolkit for Visual Studio (click here).  I installed it on VS 2015, so I know it works on the newest version of Visual Studio.  I also tested it on VS 2013.  There are a few gotchas that I ran into.  First, I hit an error when attempting to deploy to AWS: "Error during URL validation; check URL and try again".  This turned out to be a false error.  What I discovered was that there was a permissions problem with access to IIS.  This can be fixed in the Identity and Access Management (IAM) console.  I had a user created, but I had not assigned a group to the user.  The IAM console is rather complex and requires some head-scratching.  Stack Overflow is where I found the best answer for troubleshooting this issue.


My next problem gave the error "The type initializer for 'Microsoft.Web.Deployment.DeploymentManager' threw an exception.", which was just as cryptic.  As it turned out, there are registry entries that older versions of SQL Server leave behind when uninstalled, and they interfere with the deployment software in Visual Studio.  The keys are:

HKLM\Software\Microsoft\IIS Extensions\msdeploy\3\extensibility
HKLM\Software\Wow6432Node\Microsoft\IIS Extensions\msdeploy\3\extensibility

They both should be removed.  I found that information on Stack Overflow as well:

Web deployment task failed. (The type initializer for 'Microsoft.Web.Deployment.DeploymentManager' threw an exception.)

At that point I was able to deploy my application and get a "Hello World" program running.  Once this capability is in place you can focus on the development process and not deal with configuration details until you need more capabilities.

Real World Application

Now that I have the basics down, I still need to test some of the other features of AWS (like their EC2 virtual servers).  However, I have enough knowledge to actually use AWS for a production system.  If you're analyzing this service as a migration target for an existing system, there are a lot of things you still need to consider.  The first thing you'll need to do is find out how much it'll cost to store the amount of data you already use.  How much web traffic do you serve?  How many servers do you currently use?  These all go into the cost equation.  When you add those costs up, the total should be lower than what you are currently paying for your equipment, data connection, and facility.  If not, then you should not move your system.
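That migration comparison boils down to simple arithmetic.  Here's a quick JavaScript sketch with made-up figures — the rates and usage numbers below are placeholders for illustration, not actual AWS prices:

```javascript
// Rough monthly cost comparison: self-hosted vs. cloud.
// All rates and usage figures are hypothetical placeholders.
function monthlyCloudCost(pricing, usage) {
  return usage.servers * pricing.perServer +
         (usage.storageGb / 100) * pricing.per100GbStored +
         (usage.transferGb / 100) * pricing.per100GbTransferred;
}

// Facility + power + bandwidth + hardware amortization (hypothetical):
var currentMonthlyCost = 4000;

var cloud = monthlyCloudCost(
  { perServer: 150, per100GbStored: 10, per100GbTransferred: 9 }, // made-up rates
  { servers: 10, storageGb: 500, transferGb: 2000 }               // made-up usage
);

console.log(cloud);                      // 1730
console.log(cloud < currentMonthlyCost); // true: migration worth considering
```

Plug in your own numbers; if the inequality comes out false, the move doesn't pay for itself.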

If you are contemplating a start-up, you'll have other factors to consider.  First and foremost, assuming you haven't created your software yet, you'll need to decide which web platform and database engine you'll use.  If you've never worked at a company with a large database system, you might not realize how much licenses can cost when you need to scale out.  In the early stages of development, priority might be placed on how easy it is to get the site up and running.  This will haunt you in the long run if your user base grows.  I would seriously consider using free or open-source software where you can.  AWS has MySQL and Apache with Java, Python, or PHP; Ruby is another option.  If you lock yourself into IIS and SQL Server, you'll need to pay the extra licensing fees when your application outgrows the Express edition.  Once you have created thousands of stored procedures in SQL Server, you're locked in, facing a re-development cost that is astronomical or license fees that are almost as bad.

Another factor to contemplate in a start-up is the cost of getting your business going.  If you have seed capital, then you're probably set for a fixed period of time.  If you are doing this on your own, then you're probably worried about how much it will cost until you get enough customers to cover your fees.  You'll need to compute this information ahead of time.  Ask yourself: "How many paying customers do I need in order to break even?"  If you are providing a two-tier website that has a free component (which is a great way to hook people) and a paid component with powerful features, you'll need to estimate the ratio of paid to free customers.  If you're conservative with your figures, you'll come out ahead.  I would start with a 5%/95% split and compute what you need.  That means you'll pay for 100% of your customers' data and bandwidth usage, but you'll only collect money from the 5% who are paying.  If you plan to sell advertisements, you'll need to compute that as well.
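To make the 5%/95% arithmetic concrete, here's a small JavaScript sketch.  The $10 price, $0.25 per-user monthly cost, and $500 in fixed costs are all hypothetical numbers chosen for illustration:

```javascript
// Break-even estimate for a freemium site (all figures are hypothetical).
// Every user, free or paid, costs you hosting money; only payingRatio of
// them generate revenue.
function breakEvenUsers(costPerUserMonthly, priceMonthly, payingRatio, fixedCostsMonthly) {
  // Revenue per average user minus cost per average user:
  var marginPerUser = priceMonthly * payingRatio - costPerUserMonthly;
  if (marginPerUser <= 0) {
    return Infinity; // you lose money on every user; no user count breaks even
  }
  return Math.ceil(fixedCostsMonthly / marginPerUser);
}

// 5% of users pay $10/month, each user costs $0.25/month to host,
// and fixed costs are $500/month:
console.log(breakEvenUsers(0.25, 10, 0.05, 500)); // 2000 total users (100 paying)
```

The sobering part is the `Infinity` branch: if the per-user hosting cost exceeds the blended revenue per user, no amount of growth saves you, which is exactly why the conservative ratio matters.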

Now you're probably thinking, "How do I know what these numbers are going to be?"  Well, that's where the free AWS service is handy.  If you're clever, you'll get your application up and running before you sign up for AWS, or if your application is expected to be small and easy to build, you can build it directly on AWS.  When you're ready to do some usage testing, you can put it online and get it into the search engines.  At first you'll end up with 100% free users, and your traffic should increase.  You'll have to take an educated guess at what to charge for the advanced features.  Too much, and nobody will see the value.  Too cheap, and you'll go broke.  The ideal price point is something that seems cheap for what the customer receives, but enough to cover costs and earn a profit.  What that price point is depends on what your application does.

AWS has a system for taking credit card information and keeping track of accounting information.  You'll need this type of system in order to keep track of who has paid and what they have paid for.  This service is called DevPay.  The goal is to automate the process of collecting payment information, activating accounts, and deactivating accounts.  That's a task that can overwhelm a person in no time if your product becomes successful.  Here's the basic information on DevPay:

What is Amazon DevPay?

Other Considerations

Once you launch your application and it becomes established, you'll need to consider your growth rate.  If your income is large enough, you can plan new versions of your software according to how many developers you can keep on staff or under contract.  In the cloud scenario, there is no need to pay for office space; technically, you can run the entire operation from your home.  Avoid adding the cost of an expensive facility until you really need it.

Keep your eyes open for other cloud providers.  Google and Microsoft (among others) provide equivalent services.  If their pricing structure makes your product cheaper to operate, consider porting to their cloud.  If you keep this in mind while you're small, you can keep your application in a form that can be re-deployed quickly.  If you build in too many Amazon-specific features, you might be stuck until you can redesign a feature (yes, I mention this right after talking up DevPay in the previous paragraph).  Another option is to use a provider-specific feature only long enough to design your own provider-agnostic replacement.  In other words, use DevPay for your application until you can hire developers or put in the development time to write your own (or possibly use another third-party product).  Always keep your application capable of being moved.  Otherwise, you'll be hostage to a provider that someday may become hostile to your business.

Deployment tools are another feature you should get familiar with.  Automate your deployment as much as possible.  AWS has deployment tools that allow the developer to clone a production web server in isolation and to deploy a development version of your application for testing purposes.  If you need to perform a lot of manual steps to get your application tested and deployed, you'll be wasting valuable developer time, and that time is very expensive.

Get familiar with the security features.  If you hire outside contractors to perform maintenance or development tasks, you'll need to be able to shut off their accounts quickly if something goes wrong.  Make sure you understand what capabilities you are giving to another person.  Don't allow a rogue programmer to put in back-doors and open holes to the internet that you don't know exist.  Always monitor what is going on with your system.

I could go on all day, but at this point you should go to the AWS site and sign up for free usage.  Get some experience.  Click here.  When you get a "Hello World" program deployed and working, try some new features.  I would also recommend seeking out other cloud products from other vendors.  Google and Microsoft come to mind but there are others like AT&T, EMC, IBM, etc.


Saturday, November 14, 2015

Web APIs with CORS


I've built a lot of .NET Web APIs.  APIs are the future of web programming.  They allow you to break your system into smaller systems, which gives you flexibility and, most importantly, scalability.  They can also be used to split an application into front-end and back-end systems, giving you the flexibility to write multiple front-ends for one back-end.  Most commonly this is used where your web application supports both browsers and mobile device applications.


I'm going to create a very simple API to support one GET method in one controller.  My purpose is to show how to add Cross-Origin Resource Sharing (CORS) support and how to connect all the pieces together.  I'll be using a plain HTML web page with a jQuery script to perform the AJAX call, and JSON for the payload.  I will not be covering JSONP in this article.  My final purpose in writing this article is to demonstrate how to troubleshoot problems with APIs and what tools you can use.

I'm using Visual Studio 2015 Community edition (the free version).  This should all work on version 2012 and beyond, though I've had difficulty with 2012 and CORS in the past (specifically, conflicts with Newtonsoft JSON).

You'll need to create a new Web API application.  Create an empty application and select "Web API" in the check box.  

Then add a new controller and select "Web API 2 Controller - Empty".

Now you'll need two NuGet packages and you can copy these two lines and paste them into your "Package Manager Console" window and execute them directly:

Install-Package Newtonsoft.Json
Install-Package Microsoft.AspNet.WebApi.Cors

For my API controller, I named it "HomeController", which means that the path will contain "Home" (as in /api/Home).

How do I know that?  It's in the WebApiConfig.cs file, which can be found inside the App_Start directory.  Here's the default route:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The word "api" appears in all path names to your Web API applications, but you can change it to any word you want.  If you had two different sets of APIs, you could use two routes with different patterns.  I'm not going to go any deeper here.  I just wanted to mention that the "routeTemplate" controls the URL pattern you will need in order to connect to your API.
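To make the pattern concrete, here's a toy JavaScript sketch of how a template like "api/{controller}/{id}" maps a URL path onto route values.  This is only an illustration of the idea, not ASP.NET's actual routing code:

```javascript
// Toy illustration of routeTemplate matching (not the real ASP.NET logic).
// Binds {placeholder} segments to path segments and allows one named
// trailing parameter to be optional, like RouteParameter.Optional.
function matchRoute(template, path, optional) {
  var tParts = template.split('/');
  var pParts = path.replace(/^\//, '').split('/');
  var values = {};
  for (var i = 0; i < tParts.length; i++) {
    var t = tParts[i];
    var isPlaceholder = t.charAt(0) === '{';
    var name = isPlaceholder ? t.slice(1, -1) : null;
    if (i >= pParts.length) {
      if (name === optional) continue; // missing optional segment is fine
      return null;                     // missing required segment
    }
    if (isPlaceholder) {
      values[name] = pParts[i];        // bind {controller}, {id}, etc.
    } else if (t !== pParts[i]) {
      return null;                     // literal segment didn't match
    }
  }
  return values;
}

console.log(matchRoute('api/{controller}/{id}', '/api/Home', 'id'));
// { controller: 'Home' }  ->  routed to HomeController
console.log(matchRoute('api/{controller}/{id}', '/api/Home/5', 'id'));
// { controller: 'Home', id: '5' }
console.log(matchRoute('api/{controller}/{id}', '/other/Home', 'id'));
// null  ->  the literal "api" prefix didn't match
```

The framework then appends "Controller" to the bound controller value to find the class, which is why /api/Home reaches HomeController.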

If you create an HTML web page and drop it inside the same URL as your API, it'll work.  However, what I'm going to do is run my HTML file from my desktop and I'm going to make up a URL for my API.  This will require CORS support, otherwise the API will not respond to any requests.

At this point, the CORS support is installed from the above NuGet package.  All we need is to add the following using to the WebApiConfig.cs file:

using System.Web.Http.Cors;

Then add the following code to the top of the "Register" method (the second line is what actually turns the setting on):

var cors = new EnableCorsAttribute("*", "*", "*");

I'm enabling support for all origins, headers, and methods here.  However, once you have completed your APIs and are ready to deploy to a production system, you should narrow these down to your actual front-end origin and the verbs and headers you use.  That limits what a malicious page running in someone's browser can do with your APIs.

Next, is the code for the controller that you created earlier:

using System.Net;
using System.Net.Http;
using System.Web.Http;
using WebApiCorsDemo.Models;
using Newtonsoft.Json;
using System.Text;

namespace WebApiCorsDemo.Controllers
{
    public class HomeController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage MyMessage()
        {
            var result = new MessageResults
            {
                Message = "It worked!"
            };

            var jsonData = JsonConvert.SerializeObject(result);
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
            return resp;
        }
    }
}


You can see that I serialized the MessageResults object into a JSON message and returned it in the response content with a type of application/json.  I always use a serializer to create my JSON when possible.  You can generate the same output by building the JSON string manually, and that works and is really easy on something this tiny.  However, I would discourage the practice because it becomes a programming nightmare as a program grows in size and complexity.  Once you become familiar with APIs and start to build a full-scale application, you'll be returning large, complex data types, and it is so easy to miss a "{" bracket and spend hours fixing something that you should not be wasting time on.
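The same point holds in any language.  Here's a quick JavaScript illustration of how hand-built JSON goes wrong the moment the data contains something you didn't anticipate, while a serializer handles it for free:

```javascript
// Hand-building JSON invites quoting and escaping bugs as objects grow.
var message = 'He said "it worked!"';

// Manual approach: this one forgets to escape the embedded quotes.
var manualJson = '{"Message":"' + message + '"}';

// Serializer approach: escaping, nesting, and commas handled for you.
var serializedJson = JSON.stringify({ Message: message });

console.log(serializedJson); // {"Message":"He said \"it worked!\""}

// The manual string isn't even valid JSON:
function isValidJson(text) {
  try { JSON.parse(text); return true; } catch (e) { return false; }
}
console.log(isValidJson(manualJson));     // false
console.log(isValidJson(serializedJson)); // true
```

JsonConvert.SerializeObject plays the same role in the C# controller above that JSON.stringify plays here.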

The code for the MessageResults class is in the Models folder called MessageResults.cs:

public class MessageResults
{
    public string Message { get; set; }
}

Now we'll need a jQuery file that will call this API, and then we'll need to set up IIS.

For the HTML file, I created a Home.html file and populated it with this:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <script src="jquery-2.1.4.min.js"></script>
    <script src="Home.js"></script>
</head>
<body>
</body>
</html>

You'll need to download jQuery.  I used version 2.1.4 in this example, but I would recommend going to the jQuery website, downloading the latest version, and changing the script URL above to reflect the version you're using.  You can also see that I named my js file "Home.js" to match my "Home.html" file.  Inside my js file is this:

$(document).ready(function () {
    GetMessage();
});

function GetMessage() {
    var url = "http://www.yourtestdomain.com/api/Home/MyMessage"; // placeholder: substitute the test URL you'll map in your hosts file

    $.ajax({
        crossDomain: true,
        type: "GET",
        url: url,
        dataType: 'json',
        contentType: 'application/json',
        success: function (data, textStatus, jqXHR) {
            alert(data.Message); // placeholder success handler
        },
        error: function (jqXHR, textStatus, errorThrown) {
            alert(formatErrorMessage(jqXHR, textStatus));
        }
    });
}
There is an additional "formatErrorMessage()" function that is not shown above; you can copy it from the full code I posted on GitHub, or just remove it from your error handler.  I use this function for troubleshooting AJAX calls.  At this point, even if you typed in all the code from above, you won't get any results, primarily because the made-up URL doesn't exist on the internet (unless someone goes out and claims it).  You have to set up IIS with a dummy URL for testing purposes.
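The version on GitHub is the authoritative one; as a stand-in, here is one plausible sketch of what a formatErrorMessage() helper can look like.  This is my reconstruction for illustration, not necessarily the author's code:

```javascript
// One possible formatErrorMessage() implementation (a reconstruction; the
// original is in the GitHub repository linked at the end of the post).
function formatErrorMessage(jqXHR, textStatus) {
  if (jqXHR.status === 0) {
    return 'Not connected. Verify the URL, the hosts file entry, and CORS.';
  }
  if (jqXHR.status === 404) {
    return 'Requested page not found (404).';
  }
  if (jqXHR.status === 500) {
    return 'Internal server error (500).';
  }
  if (textStatus === 'parsererror') {
    return 'Requested JSON parse failed.';
  }
  if (textStatus === 'timeout') {
    return 'Time out error.';
  }
  return 'Uncaught error: ' + jqXHR.status + ' ' + (jqXHR.responseText || '');
}

console.log(formatErrorMessage({ status: 404 }, 'error')); // Requested page not found (404).
```

A status of 0 is the interesting case for this article: it's what jQuery reports when the browser blocks the response, which is typically a CORS failure.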

So open the IIS control panel, right-click on "Sites" and "Add Website":

For test sites, I always name my website the exact same URL that I'm going to bind to it.  That makes it easy to find the correct website, especially if I have 50 test sites set up.  You'll need to point the physical path to the root path of your project, not the solution.  This will be the subdirectory that contains the web.config file.

Next, you'll need to make sure that your web project directory has permissions for IIS to access.  Once you create the website you can click on the website node and on the right side are a bunch of links to do "stuff".  You'll see one link named "Edit Permissions", click on it.  Then click on the "Security" tab of the small window that popped up.  Make sure the following users have full permissions:

IIS_IUSRS (yourpcname\IIS_IUSRS)
IUSR

If both do not exist, then add them and give them full rights.  Close your IIS window.

One more step before your application will work.  You'll need to redirect the URL name to your localhost so that IIS will listen for HTTP requests.

Open your hosts file, located at C:\Windows\System32\drivers\etc\hosts.  This is a plain text file, and you can add as many entries to it as you would like.  At the bottom of the hosts file, I added a line like this (www.yourtestdomain.com is a placeholder; substitute whatever URL you used in your js file):

127.0.0.1    www.yourtestdomain.com

You can use the same name or make up your own URL.  Try not to use a URL that exists on the web, or you will find that you cannot get to the real address anymore.  The hosts file overrides DNS and reroutes your request to, which is your own PC.

Now, let's do some incremental testing to make sure each piece of the puzzle is working.  First, let's make sure the hosts entry is working correctly.  Open up a command window.  You might have to run it as administrator if you are using Windows 10.  You can type "cmd" in the run box to start the window.  Then ping the URL that you added to your hosts file (again, www.yourtestdomain.com is just a placeholder):

ping www.yourtestdomain.com

You should get replies back from

If you don't get a response back, then you might need to reboot your PC, or clear your DNS cache.  Start with the DNS cache by typing in this command:

ipconfig /flushdns

Try the ping again.  If it doesn't work, reboot and then try again.  After that, you may need to select a different URL name to get it to work.  Beyond that, it's time to Google.  Don't go any further until you get this problem fixed.

This is a GET method, so let's open a browser and go directly to the path where we think our API is located.  Before we do that, rebuild the API application and make sure it builds without errors.  Then open the js file, copy the URL that we'll call, and paste it into the browser's address bar.  You should see this:

If you get an error of any type, you can use a tool called Fiddler to analyze what is happening.  Download and install Fiddler.  You might need to change Firefox's configuration for handling proxies (Firefox will block Fiddler, as if we needed another problem to troubleshoot).  For the version of Firefox as of this writing (42.0), go to the Options, Advanced, Network, then click the "Settings" button to the right of the Connection section.  Select "Use system proxy settings".

OK, now you should be able to refresh the browser with your test URL and see something pop up in the Fiddler window.  Obviously, if you have a 404 error, you'll see it long before you notice it in Fiddler (it should report 404 on the web page); that just means your URL is wrong.

If you get a "No HTTP resource was found that matches the request URI" message in your browser, you might have your controller named wrong in the URL.  This is a 404 sent back from the program that it couldn't route correctly.  This error will also return something like "No type was found that matches the controller named [Home2]" where "Home2" was in the URL, but your controller is named "HomeController" (which means your URL should use "Home").

Time to test CORS.  In the test you just ran, CORS will not refuse the connection, because you requested the API directly from the site that hosts it.  However, we want to run it from an HTML page that might be hosted someplace else; in our test we'll run it from the desktop.  So navigate to where you created "Home.html" and double-click on that page.  If CORS is not working, you'll get an error, and you'll need Fiddler to figure it out.  In Fiddler you'll see a 405 error.  If you go to the bottom-right window (this represents the response), you can switch to "Raw" and see a message like this:

HTTP/1.1 405 Method Not Allowed
Cache-Control: no-cache
Pragma: no-cache
Allow: GET
Content-Type: application/xml; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:53:34 GMT
Content-Length: 96

<Error><Message>The requested resource does not support http method 'OPTIONS'.</Message></Error>

The first request in a cross-origin exchange is the OPTIONS request, which occurs before the GET.  The purpose of the OPTIONS request (the "preflight") is to determine whether the endpoint will accept a request from your browser.  For the example code, if the CORS section inside the WebApiConfig.cs file is working, you'll see two requests in Fiddler: one OPTIONS request followed by a GET request.  Here's the OPTIONS response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:58:23 GMT
Content-Length: 0

And the raw GET response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 24
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 01:10:59 GMT

{"Message":"It worked!"}
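Whether the browser sends that OPTIONS preflight at all depends on the shape of the request.  As a simplified sketch of the CORS rules (the real rules in the Fetch standard have more detail), a cross-origin request skips the preflight only if it uses a "simple" method and "simple" headers; the application/json content type in the jQuery call is what forces the preflight here:

```javascript
// Simplified check for whether a cross-origin request triggers a CORS
// preflight. The actual rules (Fetch standard) have additional cases.
function needsPreflight(method, headers) {
  var simpleMethods = ['GET', 'HEAD', 'POST'];
  var simpleContentTypes = [
    'application/x-www-form-urlencoded',
    'multipart/form-data',
    'text/plain'
  ];
  if (simpleMethods.indexOf(method.toUpperCase()) === -1) {
    return true; // PUT, DELETE, PATCH, etc. always preflight
  }
  for (var name in headers) {
    var lower = name.toLowerCase();
    if (lower === 'content-type') {
      if (simpleContentTypes.indexOf(headers[name].toLowerCase()) === -1) {
        return true; // e.g. application/json forces a preflight
      }
    } else if (lower !== 'accept' && lower !== 'accept-language' &&
               lower !== 'content-language') {
      return true; // any other custom header forces a preflight
    }
  }
  return false;
}

console.log(needsPreflight('GET', { 'Content-Type': 'application/json' })); // true
console.log(needsPreflight('GET', {}));                                     // false
console.log(needsPreflight('DELETE', {}));                                  // true
```

That's why the Access-Control-Allow-Headers line in the OPTIONS response above lists content-type: the browser asked permission for that header specifically.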

If you switch your response to JSON for the GET response, you should see something like this:

One more thing to notice.  If you open a browser, paste the URL into it, and then change the name of the MyMessage action, you'll notice that it still performs a GET operation on the controller, returning the "It worked!" message.  If you create two or more GET methods in the same controller, one action becomes the default for all GET operations, no matter which action you specify.  To fix that, modify the route inside your WebApiConfig.cs file by adding an "{action}" segment, like this:

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

Now you should see an error in your browser if the action name in your URL does not exist in your controller:

Finally, you can create two or more GET actions and they will be distinguished by the name of the action in the URL.  Add the following action to your controller inside "HomeController.cs":

[HttpGet]
public HttpResponseMessage MyMessageTest()
{
    string result = "This is the second controller";

    var jsonData = JsonConvert.SerializeObject(result);
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
    return resp;
}

Rebuild, and test from your browser directly.  First use the URL containing "MyMessage":

Then try MyMessageTest:

Notice how the MyMessageTest action returns a JSON string and the MyMessage returns a JSON message object.

Where to Find the Source Code

You can download the full Visual Studio source code from my GitHub account by clicking here.