Sunday, October 23, 2016

Homebrew Computers


I'm going to switch it up a bit and talk about one of my other hobbies: electronics.  I haven't worked with digital circuits in a while.  In fact, it's been so long that I had to do a lot of research to find out what is new in the world of electronics.  Microcontrollers have come a long way.  Components have become dirt cheap over the years, and their speeds are way beyond the capabilities of my test equipment.  As I was looking around the world of technology, I stumbled across an article describing a guy who built a computer out of thousands of discrete transistors.  So that's what I'm going to talk (or ramble) about in this article.


The Megaprocessor: that's the name of this homebrew computer system, built by a guy named James Newman.  You can get to the website by clicking here:

First, I was intrigued by the fact that he built an entire system out of transistors.  Not just any transistors, but NMOS transistors that are sensitive to static discharge.  I usually avoid these things; I have a difficult enough time building circuits out of TTL logic and NPN transistors.  However, if you want to build something out of a large number of transistors (like 27,000), you have to be conscious of power consumption and speed.  He has an entire story about his adventure with controlling static and burning out transistors.

As I dug through the website, I discovered that he built little circuits representing logic gates with these transistors, then he treated the circuits as components in a larger structure.  There is an LED on each input and output of every circuit, so it's easy to visually verify and troubleshoot any hardware problems.  Here's a sample picture of a 2-input AND gate:
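As a rough software sketch (not his actual circuit), you can model a 2-input AND gate the way he builds it in hardware: NMOS transistors acting as switches that form a NAND, followed by an inverter.

```python
def nmos(gate: bool) -> bool:
    """An NMOS transistor modeled as a switch: it conducts when the gate is high."""
    return gate

def nand(a: bool, b: bool) -> bool:
    # Two NMOS transistors in series pull the output low only when both conduct.
    pulled_low = nmos(a) and nmos(b)
    return not pulled_low

def inverter(a: bool) -> bool:
    # A single transistor pulls the output low when its input is high.
    return not nmos(a)

def and_gate(a: bool, b: bool) -> bool:
    # AND = NAND followed by an inverter (three switching transistors in this model).
    return inverter(nand(a, b))

# Print the truth table, like reading the input/output LEDs on one of his boards.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_gate(a, b))
```

The LEDs on his boards make this same truth table visible in hardware: one LED per input and one on the output.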

His website describes the machine as a learning machine: anyone who wants to see what goes on inside a computer can watch the LEDs light up as a program runs.  That is a really good idea.  I think every college should have one of these for its computer engineering classes.  Unfortunately, due to maintenance costs and physical space, I don't think too many colleges would be interested in setting one up.

In addition to the single logic gate boards, some boards contain repetitive circuitry consisting of many gates.  Those boards are diagrammed accordingly (also with LEDs on inputs and outputs).  Here's an example of an 8-bit logic board:

The next step up is the assembly of circuits into modules.  The connecting wires are diagrammed on the front of the board (see the red lines below) and the circuits are wired from behind.  Here's a state machine module:

Here's what one of these modules looks like from the backside:

The modules are mounted in frames which he has arranged in his living room (though he's looking for a permanent, publicly accessible location for the device).

As I mentioned before, you can follow the link and dig around his website to learn all the fun details of how he built the machine, how long it took him and how much it cost.  For those of us who have worked in the electronics industry, his section called "Progress" has a lot of interesting stories about problems he ran into, not to mention the "Good, Bad & Ugly".  This story made me cringe: Multiplexor Problem.  Unexpected current flow problems are difficult to understand and troubleshoot.

So what's the point?  

It's a hobby.  The purpose is to build something or accomplish some task and stretch your abilities.  The goal is to experience what it would be like to construct such a machine.  Think of this as an advanced circuit-building exercise.

I've built microprocessor-based circuits in the past (mentioned on my website), but the Megaprocessor is much more complex and more challenging than my project.  If you really want to learn how a computer operates, nothing compares to a project like this.  I have to warn readers that this is not something you jump into out of the blue.  If you have no electronics experience, start small.  I mean, really small.
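For a sense of what a computer does at its core, here is a toy fetch-decode-execute loop in Python (a made-up 4-instruction machine for illustration, not any real CPU):

```python
# A toy CPU: one accumulator, four invented opcodes. Each program cell
# holds an (opcode, operand) pair instead of real encoded bytes.
LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]          # fetch the next instruction
        pc += 1
        if op == LOAD:                 # decode and execute
            acc = memory[arg]
        elif op == ADD:
            acc += memory[arg]
        elif op == STORE:
            memory[arg] = acc
        elif op == HALT:
            return memory

# Add the values at addresses 0 and 1, store the sum at address 2.
mem = [2, 3, 0]
run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], mem)
print(mem)  # the sum 5 ends up at address 2
```

A real homebrew machine does exactly this loop, only in wires and gates, which is why watching the Megaprocessor's LEDs step through a program is so instructive.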

I would start with a book like this:

You can find this book at Amazon or at this link.  The bookstore that I visited yesterday (Barnes & Noble) has it as well.  I browsed through a lot of the "Make:" series of books, and they are very well organized.

You'll need some basic supplies like a breadboard, wire, hand tools and a voltmeter (nothing fancy).  If you move up into faster digital circuits, or you dive into microcontrollers and microprocessors, you'll need to invest in an oscilloscope.  This will probably be the most expensive piece of test equipment you'll ever buy.  I still own an original Heathkit oscilloscope that is rated up to 10 MHz.  If you understand CPU speeds, you'll notice that this oscilloscope is not able to troubleshoot an i7 processor running at 4 GHz.  In fact, oscilloscopes that can display waveforms at that frequency are beyond the budget of most hobbyists (I think that crosses over into the domain of obsessive).

Other Homebrew Systems

I spent some time searching the Internet for other homebrew computers and stumbled onto the "Homebuilt CPUs WebRing."  I haven't seen a webring in a long time, so this made me smile.  There are so many cool machines on this list (click here).  There are a couple of relay machines; one in particular has video, so you can see and hear the relays clicking as the processor churns through instructions (Video Here, scroll down a bit).  The story behind Zusie the relay computer is fascinating, especially his adventures in obtaining 1,500 relays to build the machine on a budget.  I laughed at his adventures in acquiring and de-soldering the relays from circuit boards that were built for telephone equipment.

There are a lot of other machines on this webring that are just as interesting: great stories, schematics, notes on how each machine was built, etc.  The one machine that really got my attention was the Magic-1 (click here).  This is a mini-computer built by a guy named Bill Buzbee.  He has a running timeline documenting his progress in designing and building the computer.  Reading his notes on designing an emulator, and then his issues with wire-wrapping, really gives a good picture of what it takes to build a computer out of discrete logic.  Here's a photo of the backside of the controller card:

The final machine schematics are posted here.  He used a microprogrammed architecture, which is like building a computer to run a computer.  This is one of my favorite CPU designs, which I learned about when I bought a book titled "Bit-slice Microprocessor Design".  Coincidentally, this book is listed on his links page under "Useful books".  You can still get this book new or used; I would recommend picking up a cheap used copy from Amazon.  The computer discussed in this book is based on the AMD 2901, a 4-bit bit-slice CPU.  Basically, you buy several of these chips and stack them in parallel to form a wider processor.  For a 32-bit CPU, you would buy 8 chips and wire them in parallel.  Unfortunately, AMD doesn't manufacture these chips anymore.  The book, however, is a good read.  He also has PDF postings of another book called "Build a Microcomputer", which is virtually the same book (go here, scroll down).
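The bit-slice idea is easy to sketch in software (a real 2901 slice does far more than add, so treat this as a hedged model): each 4-bit slice handles one nibble and passes its carry to the next slice, so eight chained slices form a 32-bit adder.

```python
def slice_add(a4, b4, carry_in):
    """One 4-bit ALU slice: add two nibbles, return (4-bit result, carry out)."""
    total = a4 + b4 + carry_in
    return total & 0xF, total >> 4

def wide_add(a, b, slices=8):
    """Chain `slices` 4-bit slices into one wide adder (8 slices = 32 bits)."""
    result, carry = 0, 0
    for i in range(slices):
        # Each slice sees only its own nibble plus the carry from the slice below.
        nibble, carry = slice_add((a >> 4 * i) & 0xF, (b >> 4 * i) & 0xF, carry)
        result |= nibble << 4 * i
    return result

print(hex(wide_add(0x0FFFFFFF, 1)))  # 0x10000000: the carry ripples through seven slices
```

In hardware, that carry chain is literal wiring between the chips, which is also why ripple carry limits the clock speed of a naive bit-slice design.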

Building Your Own

If you're looking to build your own computer, just to learn how computers work, you can use one of the retro processors from the '70s and '80s.  These are dirt cheap, so if you blow one up by hooking up the wrong power leads, you can just grab another one from your box of 50 spare CPUs.  On the simple side, you can use an 8085 (this is almost identical to the 8080, but doesn't need the additional +12 V power supply).  The 8080 CPU was used in the original Space Invaders arcade game (see Space Invaders schematics here).

The 6502 has a lot of information available, since it was used by Apple and Commodore in their earliest designs.  The Z80 is like a souped-up 8080 processor.  This CPU has index registers, which make it more flexible.  A lot of hobbyists have built machines around the Z80, and quite a few arcade games were built with this CPU.  The Galaga arcade game used 3 Z80 CPUs to run the game.

I suspect that over time these CPUs will become difficult to find.  Jameco currently lists them as refurbished.  If you build a project around one of these CPUs, be sure to buy extra chips.  That way you'll have spares if the supply chain runs dry.

If you're more advanced, you can still buy 8088 CPUs for $3.95 each at Jameco Electronics.  This is the CPU that the first IBM PC was based on.  At that price, you can get a dozen for under $50 and build a parallel machine.  This CPU can also address 1 megabyte of memory (which is a lot for assembly language programming), comes in a 40-pin package, and there is a huge amount of software and hardware available for it.
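That 1-megabyte limit falls out of the 8086/8088 segmented addressing scheme: a 16-bit segment value is shifted left four bits and added to a 16-bit offset, giving a 20-bit physical address.  A quick Python illustration:

```python
def physical_address(segment, offset):
    """8086/8088 real-mode address: segment * 16 + offset, wrapped to 20 bits."""
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0x1234, 0x5678)))  # 0x12340 + 0x5678 = 0x179b8
print(hex(physical_address(0xB800, 0x0000)))  # 0xb8000, the classic text-mode video buffer
print(hex(physical_address(0xFFFF, 0x0010)))  # wraps past 1 MB back to 0x0
```

Note that many different segment:offset pairs map to the same physical address, a quirk anyone programming the 8088 in assembly runs into quickly.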

If you're not so into soldering, wire-wrapping or circuit troubleshooting, but would like to build a customized system, you can experiment with tiny computers like the Raspberry Pi, the Arduino or the BeagleBone.  These devices are cheap, and they have ports for network connections, USB devices, HDMI outputs, etc.  There are a lot of books and projects on the Internet to explore.

These are not your only choices, either.  There are microcontroller chips that are cheap.  Jameco lists dozens of CPUs with built-in capabilities, like this one: ATTINY85-20PU.  It's only $4.49, and you can plug it into a breadboard.

So Many Resources Available

My website doesn't tell the whole story of my early days of building the 8085 computer board.  I've actually built two of these.  My first board was built somewhere around 1978.  At that time, I was a teenager, and computers were so expensive that I didn't own one, so I was determined to build one.  I had an old teletype (donated by an electronics engineer who lived across the street from my family when I was younger).  I built my own EPROM programmer that required DIP switch inputs (this was not a very successful way to get a program into EPROM memory).  After I graduated from high school, I joined the Navy and purchased a Macintosh computer in 1984.  I'm talking about THE Macintosh, with 128K of memory.  Before I was honorably discharged from the Navy, I upgraded my Mac several times and ended up with a Mac Plus with 4 MB of memory.  My old 8085 computer board was lost in one of the many moves my parents and I made between 1982 and 1988, so I decided to reconstruct my computer board, and that is the board pictured on my website.  I also constructed a better EPROM programmer with a serial connection to the Mac, so I could assemble the code and send it to the programmer (I wrote the assembler and the EPROM burner program in Turbo Pascal).  All of this occurred before the World Wide Web and Google changed the way we acquire information.  Needless to say, I have a lot of books!

Those were the "good ole' days".  Now we have the "better new days".  I own so many computers that I can't keep count.  My primary computer is a killer PC with 500 GB of M.2 SSD space (and a 4 TB bulk-storage SATA drive), 32 GB of memory and a large screen.  I can create a simulator of what I want to build and test everything before I purchase a single component.  I can also get devices like EPROM burners for next to nothing.  There are on-line circuit emulators that can be used to test designs; I'm currently evaluating this one: Easy EDA.  There are companies that will manufacture printed circuit boards, like this one: Dorkbot PDX.  They typically charge by the square inch of board space needed.  This is nice, because I can prototype a computer with a wire-wrap design, then have a board manufactured and build another copy that will last forever.


If you're bored and looking for a hobby, this is the bottomless pit of all hobbies.  There is no depth you can reach that would complete your knowledge; you can always dig deeper and discover new things.  This is not a hobby for everyone.  It takes a significant amount of patience and learning.  Fortunately, you can start off cheap and easy and test your interest level.  Otherwise, you can read the timelines and blogs of those of us who build circuits and struggle with the tiny details of getting a CPU to perform a basic NOP instruction.  I like the challenge of making something work, but I also like reading about other people who have met the challenge and accomplished a complex task.

Never stop learning!


Saturday, October 1, 2016

Dot Net Core Project Renaming Issue


I'm going to demonstrate a bug that can occur in .Net Core and how to fix it quickly.  The error produced is:

The dependency LibraryName >= 1.0.0-* could not be resolved.

Where "LibraryName" is a project in your solution that you have another project linked to.


Create a new .Net Core solution and add a library project to it named "SampleLibrary".  I named my solution DotNetCoreIssue01.  Now add a .Net Core console project to the solution and name it "SampleConsole".  Next, right-click on the References node of the console application and select Add Reference.  Click the check box next to "SampleLibrary" and click the OK button.  Now your project should build.

Next, rename your library to "SampleLibraryRenamed", go into the project.json file for your console project, and change the dependency to "SampleLibraryRenamed".  Now rebuild.  The project is now broken.
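For reference, here is a rough sketch of what the console project's dependencies section looks like after the rename (the version numbers are illustrative; yours will depend on your installed tooling):

```json
{
  "dependencies": {
    "SampleLibraryRenamed": "1.0.0-*",
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.1"
    }
  }
}
```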

Your project.json will look like this:

And your Error List box will look like this:

How To Fix This

First, you'll need to close Visual Studio.  Then navigate to the src directory of your solution and rename the SampleLibrary directory to SampleLibraryRenamed.  

Next, you'll need to edit the sln file.  This file is located in the root solution directory (the same directory where the src directory is located).  It should be named "DotNetCoreIssue01.sln" if you named your solution as I did above.  Look for the line containing the directory that you just renamed.  It should look something like this (sorry for the word wrap):

Project("{8BB2217D-0F2D-49D1-97BC-3654ED321F3B}") = "SampleLibraryRenamed", "src\SampleLibrary\SampleLibraryRenamed.xproj", "{EEB3F210-4933-425F-8775-F702192E8988}"

As you can see, the path to the SampleLibraryRenamed project is still src\SampleLibrary\, which is the directory that was just renamed.  Change it to match the renamed directory: src\SampleLibraryRenamed\

Now open your solution in Visual Studio and all will be well.

The Trouble with Legacy Code

It's been a long time since I wrote about legacy code.  So I'm going to do a brain-dump of my experience and thoughts on the subject.

Defining Legacy Code

First, I'm going to define what I mean by legacy code.  Many programmers who have entered the industry in the past 5 years or less view legacy code as anything written more than a year ago, or code written in the previous version of Visual Studio, or the previous minor version of .Net.  When I talk about legacy code, I'm talking about code that is so old that many systems cannot support it anymore.  An example is Classic ASP.  Sometimes I'm talking about VB.Net.  Technically, VB.Net is not a legacy language, but sometimes it might as well be; in the context of VB.Net, I'm really talking about the technique used to write the code.  My experience is that Basic is a language picked up by new programmers who have no formal education in the subject or are just learning to program for the first time.  I know how difficult it is to wean yourself off your first language; I was that person once.  Code written by such programmers usually amounts to tightly coupled spaghetti code, with all the accessories: no unit tests, ill-defined methods treated like function calls, difficult-to-break dependencies, global variables, no documentation, poorly named variables and methods, etc.  That's what I call legacy code.

The Business Dilemma

In the business world, the language used, and even the technique used, can make little difference.  A very successful business can be built around very old, obsolete and difficult-to-work-with code.  This can work in situations where the code is rarely changed, the code is hidden behind a website, or the code is small enough to be manageable.  Finally, if the business can sustain the high cost of a lot of developers, QA and other support staff, bad code can work.  It's difficult to make a business case for the conversion of legacy code.

In most companies, software is grown.  This is where the legacy problem gets exponentially more costly over time.  Most of the cost is hidden.  It shows up as an increased number of bugs that occur as more enhancements are released (I'm talking about bugs in existing code that was disturbed by the new enhancement).  It shows up as an increase in the amount of time it takes to develop an enhancement.  It also shows up as an increase in the amount of time it takes to fix a bug.

Regression testing becomes a huge problem.  The lack of unit testing means the code must be manually tested.  Automated testing with a product like Selenium can automate some of the manual testing, but this technique is very brittle.  The smallest interface change can cause the tests to break and the tests are usually too slow to be executed by each developer or to be used with continuous integration.  

What to do...

Add Unit Tests?

At first, this seems like a feasible task.  However, the man-hours involved are quite high.  First, there's the problem of languages like Classic ASP, where unit tests are just not possible.  For code written in VB.Net, dependencies must be broken.  The difficulty with breaking dependencies is that the refactoring can be complicated and cause a lot of bugs.  It's nearly impossible to make a business case to invest thousands of developer hours into the company product to produce no noticeable outcome for the customer.  Even worse is if the outcome is an increase in bugs and down-time: the opposite of what is intended.

Convert Code?

Converting code is also very hazardous.  You could theoretically hold all enhancements for a year, and throw hundreds of programmers at the problem of rewriting your system in the latest technology with the intent to deliver the exact user experience currently in place.  In other words, the underlying technology would change, but the product would look and feel the same.  Business case?  None.

In the case of Classic ASP there are a couple of business cases that can be made for conversion.  However, the conversion must be performed with minimal labor to keep costs down and the outcome must be for the purpose of normalizing your system to be all .Net.  This makes sense if your system consists of a mix of languages.  The downside of such a conversion is the amount of regression testing that would be required.  Depending on the volume of code your system contains, you could break this into small sections and attack it over time.

One other problem with conversion is the issue of certification.  If you are maintaining medical or government software that requires certification when major changes take place, then your software will need to be re-certified after conversion.  This can be an expensive process.

Replace when Possible?

This is one of the preferred methods of attacking legacy code: when a new feature is introduced, replace the legacy code that is touched by the new feature with new code.  This has several benefits: the customer expects bugs in new features, and the business expects to invest money in a new feature.  The downside of using only this technique is that eventually your legacy code volume will plateau because of web pages that are little used or are not of interest for upgrading (usually it's the configuration sections that suffer from this).

A downside to this technique is the fact that each enhancement may bring new technologies to the product.  Therefore, the number of technologies used over time grows.  This can be a serious liability if the number of people maintaining the system is small and one or more decide to move on to another company.  Now you have to fill the position with a person that knows a dozen or more odd technologies or the person to be hired will need a lot of time to get up to speed.

The Front-End Dilemma

Another issue with legacy code that is often overlooked is the interface itself.  Over time, interfaces change in style and in usability.  Many systems that are grown end up with an inconsistent interface.  Some pages are old-school HTML with JavaScript; others use Bootstrap and AngularJS.  Many versions of jQuery are sprinkled around the website.  Bundling is an add-on, if it's used at all.  If your company hires a designer to make things look consistent, there is still the problem of re-coding the front-end.  In Classic ASP, the HTML is always embedded in the same source file as the JavaScript and VBScript.  That makes front-end conversion a level 10 nightmare!  .Net web pages are no picnic either.  In my experience, VB.Net web pages are normally written with a lot of VB code mixed into the markup instead of the code-behind.  There are also many situations where the code-behind emits HTML and JavaScript, to allow logic to decide which code to send to the customer's browser.

The Database Dilemma

The next issue I want to mention is the database itself.  When Classic ASP was king in the world of Microsoft-based websites, the database was used to perform most of the heavy lifting.  Web servers did not have a lot of power, and MS SQL had CPU cycles to spare (most straight database operations tax the hard drive but leave the CPU idle).  So many legacy systems will have the business logic performed in stored procedures.  In this day and age, that becomes a license cost issue.  As the number of customers increases, instances of databases must increase to handle the load.  Web servers are much cheaper to license than SQL servers, so it makes more sense to put the business logic in the front end.  In today's API-driven environment, this can be scaled to provide CPU, memory and drive space to the processes that need them the most.  In legacy systems, the database is where it all happens, and all customers must share the misery of one heavy-duty, slow-running process.  There is only one path to solving this issue: new code must move the processing to a front-end source, such as an API.  This code must be developed incrementally as the system is enhanced.  There is no effective business case for "fixing" this issue by itself.

As I mentioned, a lot of companies will use stored procedures to perform their back-end processing.  Once a critical mass of stored procedures has been created, you are locked into the database technology that was chosen on day one.  There will be no cost-effective way to convert an MS SQL database to MongoDB, Oracle or MySQL.  Wouldn't it have been nice if the data store had been broken into small chunks hidden behind APIs?  We can all dream, right?

The Data Center Dilemma

Distributed processing and scalability are the next issues that come to mind.  Scaling a system can consist of adding a load-balancer with multiple web servers.  Eventually, the database will max out and you'll need to run parallel instances to try to split the load.  The most pain will come when it is necessary to run a second data center.  The decision to use more than one data center could be for redundancy, or it could be to reduce latency for customers located in a distant region.  Scaling an application to work in multiple data centers is no trivial task.  First, if fail-over redundancy is the goal, then the databases must be upgraded to enterprise licenses.  That increases the cost per license, and also doubles that cost, because the purpose is to have identical databases at two (or more) locations.

Compounding the database problems that will need to be solved is the problem of the website itself.  More than likely, your application that was "grown" is a monolithic website application that is all or nothing.  This beast must run from two locations and be able to handle users that might have data at one data center or the other.  

If the application was designed using sessions, which was the prevailing technique until APIs became common, then there is the session fail-over problem.  Session issues will rear their ugly head as soon as a web farm is introduced, but there are cheap and dirty hacks to get around those problems (like pinning a user's incoming IP to one web server to prevent the user from landing on another web server after they log in).  Using a centralized session store is a solution for a web farm.  Another solution is a session-less website design.  Adapting a session-based system to be session-less is a monstrous job.  For a setup like JWT, the number of variables in a session must be reduced to something that can be passed to a browser.  Another method is to cache the session variables behind the scenes and pass a token to the browser that identifies who the user is; the algorithm can then check whether the cache contains the variables that match the user.  This caching system would need to be shared between data centers, because a variable saved from a web page would be lost if the user's next request was directed to the other data center.  To get a rough idea of how big the multi-datacenter problem is, I would recommend browsing this article:

Distributed Algorithms in NoSQL Databases
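The signed-token idea can be sketched in a few lines using Python's standard library (the secret key is a made-up placeholder, and a real system would use a vetted JWT implementation rather than hand-rolled crypto): any web server in any data center holding the shared key can verify who the user is without a server-side session lookup.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-by-all-web-servers"  # hypothetical key, distributed to every server

def issue_token(user_id: str) -> str:
    """Sign a tiny payload; the browser stores and returns this token."""
    payload = base64.urlsafe_b64encode(json.dumps({"user": user_id}).encode())
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature

def verify_token(token: str) -> str:
    """Any server holding SECRET can validate the token; no session state needed."""
    payload, signature = token.split(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("tampered token")
    return json.loads(base64.urlsafe_b64decode(payload))["user"]

token = issue_token("alice")
print(verify_token(token))  # prints alice
```

The payload here carries only the user's identity; anything else the page needs still has to come from a shared cache or database, which is exactly the cross-datacenter replication problem described above.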

The Developer Knowledge Dilemma

This is a really ugly problem.  Younger developers do not know the older languages, and they are being taught techniques that require technologies that didn't exist 10 years ago.  Unit testing has become an integral part of the development process.  Object-oriented programming is used in almost all current languages.  This problem exposes the company to a shortage of programmers able to fix bugs and solve problems.  Bugs become more expensive to fix because only experienced programmers can fix them.  Hire a dozen interns to fix minor issues with your software?  Not going to happen.  Assign advanced programmers to fix bugs?  Epic waste of money and resources.  Contract the work to an outside company?  Same issues: expensive, and difficult to find the expertise.


My take on all of this is that a company must have a plan for mitigating legacy code.  Otherwise the problem will grow until the product is too expensive to maintain or enhance.  Most companies don't recognize the problem until it becomes serious.  By then it's somewhat late to correct, and corrective measures become prohibitively expensive.  It's important to take a step back and look at the whole picture.  Count the number of technologies in use.  Count the number of legacy web pages in production.  Get an idea of the scope of the problem.  I would recommend keeping track of these numbers, and maybe comparing the number of legacy pages to non-legacy pages.  Track your progress in solving this problem.

I suspect that most web-based software being built today will fall into the MVC-like pattern or use APIs.  This is the latest craze.  If developers don't understand the reason they are building systems using these techniques, they will learn when the software grows too large for one data center or even too large for one web server.  Scaling and enhancing a system that is broken into smaller pieces is much easier and cheaper to do.

I wish everyone the best of luck in their battle with legacy code.  I suspect this battle will continue for years to come.


Dot Net Core

I've been spending a lot of time trying to get up to speed on the new .Net Core product.  The product is at version 1.0.1, but everything is constantly changing.  Many NuGet packages are not compatible with .Net Core, and the packages that are compatible are still marked as pre-release.  This phase of a software product is called the bleeding edge.  Normally I like to avoid the bleeding edge and wait for a product to at least mature past version 1.  However, the advantages of the new .Net Core make the pain and suffering worth it.

The Good

Let's start with some of the good features.  First, the DLLs have been redesigned to allow better dependency injection.  This is a major feature that is long overdue.  Even MVC controllers can be unit tested with ease.

Next up is the fact that DLLs are no longer added to projects by hand; the NuGet package manager determines what your project needs.  I have long viewed NuGet as an extra hassle, but Microsoft has finally made it a pleasure to work with.  In the past, NuGet just made version control hard, because you had to remember to exclude the NuGet packages from your check-in.  This has not changed (not in TFS, anyway), but the way that NuGet works with projects in .Net Core has changed.  Each time your project loads, the NuGet packages are loaded.  Which packages are used is determined by the project.json file in each project (instead of the old NuGet packages.config file).  Typing in a package name and saving the project.json will cause the package to load.  This cuts your development time if you need a package loaded into multiple projects: just copy the line from one project.json file and paste it into the others.

It appears that Microsoft is leaning more toward XUnit for unit testing.  I haven't used XUnit much in the past, but I'm starting to really warm up to it.  I like the simplicity: no attribute is needed on the unit test class, and there is a "Theory" attribute that can feed inline data into a unit test multiple times.  This turns one unit test into one test per input set.

The new IOC container is very simple.  In an MVC controller class, you can specify a constructor with parameters typed as your interfaces.  The built-in IOC container will automatically match each interface with the instance configured in the Startup class.

The documentation produced by Microsoft is very nice: it's clean, simple and explains all the main topics.

The new command line commands are simple to use.  The "dotnet" command can be used to restore NuGet packages with the "dotnet restore" command.  The build is "dotnet build" and "dotnet test" is used to execute the unit tests.  The dotnet command uses the config files to determine what to restore, build or test.  This feature is most important for people who have to setup and deal with continuous integration systems such as Jenkins or Team City.

The Bad

OK, nothing is perfect, and this is a very new product.  Microsoft and many third-party vendors are scrambling to get everything up to speed, but .Net Core is still in the early stages of development.  So here is a list of hopefully temporary problems with .Net Core.

The NuGet package manager is very fussy.  Many times I just use the user interface to add NuGet packages, because I'm unsure of the version that is available.  Using a wild-card can cause a package version to be brought in that I don't really want.  I seem to spend a lot more time trying to make the project.json files work without error.  Hopefully, this problem will diminish after the NuGet packages catch up to .Net Core.

If you change the name of a project that another project is dependent on you'll get a build error.  In order to fix this issue you need to exit from Visual Studio and rename the project directory to match and then fix the sln file to recognize the same directory change.

Many 3rd-party products do not support .Net Core yet.  I'm using Resharper Ultimate, and its unit test runner does not work with .Net Core; therefore, the code coverage tool does not work either.  I'm confident that JetBrains will fix this issue within the next month or two, but it's frustrating to have a tool I rely on that doesn't work.

Many of the 3rd-party NuGet packages don't work with .Net Core.  FakeItEasy is one such package: there is no .Net Core-compatible version as of this blog post.  Eventually, these packages will be updated to work with Core, but it's going to take time.

What to do

I'm old enough to remember when .Net was introduced.  It took me a long time to get used to the new paradigm.  Now there's a new paradigm, and I intend to get on the bandwagon as quickly as I can.  So I've done a lot of tests to see how .Net Core works and what has changed.  I'm also reading a couple of books.  The first book I bought was the .Net Core book:

This is a good book if you want to browse through it and learn everything that is new in .Net Core.  The information in this book is an inch deep and a mile wide, so you can use it to learn what technologies are available, zero in on a subject that you want to explore, and then go to the Internet to search for research materials.

The other book I bought was this one:

This book is thicker than the former, and its subject is somewhat narrower.  I originally ordered this as an MVC 6 book, but they delayed selling it and renamed it for Core.  I'm very impressed by this book, because each chapter introduces a different technology to be used with MVC, with unit tests and explanations for each.  There is an application that the author builds throughout the book: each chapter builds on the previous program and adds some sort of functionality, like site navigation or filtering.  Then the author explains how to write the unit tests for those features in the same chapter.  Most books walk through features chapter by chapter and then tack on a single chapter about the product's unit test features.  This is a refreshing change from that technique.

I am currently working through this book to get up to speed as quickly as possible.  I would recommend that any .Net programmer get up to speed on Core as soon as possible.