


OS & Framework Patching with Docker Containers - a paradigm shift


When we think of containers, we think of easily packaging our app along with its dependencies. The dockerfile's FROM instruction defines the base image used, and the app's contents are copied or restored into the image.
We now have a generic package format that can be deployed onto multiple hosts. The host no longer needs to know which version of the stack is used, or even which version of the operating system. And we can deploy any configuration we need, unique to our container.
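As a concrete illustration, a minimal dockerfile might look like this (the image name and app here are hypothetical, for illustration only):

```dockerfile
# The FROM line pins the base image; the app's published output is copied in.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "myapp.dll"]
```

The resulting image carries the OS, framework and app together, which is exactly why base image updates become the patching vehicle discussed below.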

Patching workloads

In the VM world, we patch workloads by patching the host VM. The developer hands the app over to operations, who deploys it on a VM, along with the list of dependencies that must be applied to the VM. Ops then takes ownership, hoping the patching of the OS & Frameworks doesn't negatively impact the running workloads. How often do you think ops does a test run of a patch on a VM workload before deploying it? It's not an easy thing to do, nor an expectation in the VM world. We basically patch and hope we don't get a call that something no longer works...

The workflow involves post-deployment management. Projects like Spinnaker and Terraform are making great strides in automating the building of VMs in an immutable infrastructure model. But are immutable VMs the equivalent of the transition from VHS to DVDs?

Are containers simply a better mousetrap?

In our busy lives, we tend to look for quick fixes. We allocate maybe 30 seconds of dedicated, open-ended thought before we latch onto an idea and want to run with it. We see a pattern, figure it's a better drop-in replacement, and boom, we're off applying this new technique.

When recordable DVD players became popular, they were mostly a drop-in replacement for VHS tapes. They were a better format: better quality, no need to rewind, but the workflow was generally the same. Did DVDs become a drop-in replacement for the VHS workflow? Do you remember scheduling a DVD recording, which required setting the clock that was often blinking 12:00am from the last power outage, or was off by an hour because someone forgot how to reset it after daylight saving time? At the same time the DVD format was becoming prominent, streaming media became a thing. Although DVDs were a better medium than VHS tapes, you could only watch them if you had the physical media. DVRs and streaming media became the primary adopted solution. Not only were they a better-quality format, they solved the usability of integrating with the cable provider's schedule. With OnDemand, Netflix and other video streaming, the entire concept of watching videos changed. I could now watch from my set-top box, my laptop in the bedroom, the hotel, or my phone.

The switch to the DVD format is an example of a better mousetrap that didn't go far enough to solve the larger problem. There were better options that solved a broader set of problems, but they required a paradigm shift.

 

Base image updates

While you could apply a patch to a running container, this falls into the category of "just because you can, doesn't mean you should". In the container world, we get OS & Framework patches through base image updates. Our base images have stable tags that define a relatively stable version. For instance, the microsoft/aspnetcore image has tags for :1.0, :1.1 and :2.0. Moving between these tags implies differences in functionality as well as expanded capabilities, so we wouldn't blindly change a tag between these versions and deploy the app into production without some level of validation. Looking at the tags, we see the image was last updated hours or days ago, even though the 2.0 version initially shipped months prior.

To maintain a stable base image, the owners of these images keep them current with the latest OS & Framework patches, continually monitoring their own base images for updates. The aspnetcore image updates based on the dotnet image. The dotnet image updates based on the Linux and Windows base images. The Windows and Linux base images take their own upstream updates, testing them against the dependencies they know of before releasing.

Windows, Linux, Dotnet, Java, Node all provide patches for their published images. The paradigm shift here is providing updated base images with the patches already applied. How can we take advantage of this paradigm shift?

Patching containers in the build pipeline

In the container workflow, we continually build and deploy containers. Once a container is deployed, the presumption is it's never touched; the deployment is immutable. To make a change, we rebuild the container. We can then test the container before it's deployed, either individually or in concert with several other containers. Once there's a level of comfort, the container(s) are deployed, or scheduled for deployment using an orchestrator. Rebuilding each time and testing before deployment is a change, but it's a change that enables new capabilities, such as pre-validating that the change will function as expected.

Traditional Container Build

Container workflow primitives

To deliver a holistic solution to OS & Framework patching, there are several primitives. One model would provide a closed-loop solution where every scenario is known. A more popular approach involves providing primitives that can be stitched together. If a component needs to be swapped out, for whatever reason, the user isn't blocked.

If we follow the source code control (SCC) build workflow, the SCC system notifies the build system. When the build completes, it pushes to a private registry. When the registry receives updates, it can trigger a deployment, or a release management system can sit in between, deciding when to release.

The primitives here are:

  • Source that provides notifications
  • Build system that builds an image
  • Registry that stores the results

The only thing missing is a source of base images that can provide notifications.
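These primitives can be sketched as a small decision function. The notification shape below is invented for illustration; it is not an actual webhook schema:

```python
# Sketch of stitching the workflow primitives together, assuming a
# hypothetical notification shape; real systems would wire this up
# with webhooks from the SCC system and the registry.

def should_trigger_build(dockerfile_base, notification):
    """Trigger a rebuild when either source code changes or the
    base image named in the dockerfile's FROM line is updated."""
    if notification["type"] == "source.commit":
        return True
    if notification["type"] == "image.updated":
        return notification["image"] == dockerfile_base
    return False

base = "microsoft/aspnetcore:2.0"

# A commit always triggers a build.
print(should_trigger_build(base, {"type": "source.commit"}))  # True
# A base image update triggers a build only when it matches our FROM line.
print(should_trigger_build(base, {"type": "image.updated", "image": base}))  # True
print(should_trigger_build(base, {"type": "image.updated", "image": "debian:jessie"}))  # False
```

The second branch is the piece existing CI/CD systems are missing: nothing tells them when the base image changed.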

Azure Container Builder

Over the last year, we've been talking with customers, exploring how to handle OS & Framework patching. How can we enable a solution that fits within current workflows? Do we really want to enable patching running containers? Or can we pick up the existing, and evolving, workflows for containerized builds?

Just as SCC systems provide notifications, the Azure Container Builder will trigger a build based on a base image update. But where will the notifications come from? We need a means to know when base images are updated. The missing piece here is the base image cache; or, more specifically, an index of updated image tags. With a cache of public Docker Hub images, and any ACR-based images your container builder has access to, we have the primitives to trigger an automated build. When code changes are committed, a container build will be triggered. When the base image specified in the dockerfile is updated, a container build will be triggered.

The additional primitive of base image update notifications is one of the key aspects of the Azure Container Builder. You may choose to use these notifications with your existing CI/CD system, or you may choose the new Azure Container Builder that can be associated with your Azure Container Registry. All notifications will be available through Azure Event Grid, providing a common way to communicate asynchronous events across Azure.

Giving it a try...

We're still finalizing the initial public release of the Azure Container Builder. We were going to wait until we had this working in public form, but the more I read posts from our internal Microsoft teams, our field, Microsoft Regional Directors, and customers looking for a solution, the more I realized it's better to get something out for feedback. We've seen the great work the Google Container Builder team has done, and the work AWS CodeBuild has started. Any cloud provider that offers containerized workflows will need private registries to keep their images network-close. We believe containerized builds should be treated the same way.

  • What do you think?
  • Will this fit your needs for OS & Framework patching?
  • How will your ops teams feel about delegating OS & Framework patching to the development team's workflow?

Steve


Steve Lasker's Web Log - https://SteveLasker.blog




Patching Docker Containers - The Balance of Secure and Functional


PaaS, IaaS, SaaS, CaaS, …

The cloud is evolving at a rapid pace. We have increasingly more options for how to host and run the tools that empower our employees, customers, friends and family.
New apps depend on the capabilities of underlying SDKs, frameworks, services and platforms, which in turn depend on operating systems and hardware. At each layer of this stack, things are constantly moving. We want and need them to move and evolve. And while our "apps" evolve, bugs surface. Some are simple; some are more severe, such as the dreaded vulnerability that must be patched.
We're seeing a new tension where app authors, companies, enterprises want secure systems, but don't want to own the patching. It's great to say the cloud vendor should be responsible for the patching, but how do you know the patching won't break your apps? Just because the problem gets moved down the stack to a different owner doesn't mean the behavior your apps depend upon won't be impacted by the "fix".
I continually hear the tension between IT and devs. IT wants to remove a given version of the OS. Devs need to understand the impact of IT updating or changing their hosting environment. IT wants to patch a set of servers and needs to account for downtime. When does someone evaluate whether the pending update will break the apps? Which is more important: a secure platform, or functioning apps? If the platform is secure but the apps don't work, does your business continue to operate? If the apps continue to operate but expose a critical vulnerability, well, there's many a story of a failed company.

So, what to do? Will containers solve this problem?

There are two layers to think about: the app and the infrastructure. We'll start with the app layer.

Apps and their OS

One of the major benefits of containers is the packaging of the app and the OS together. The app can take dependencies on behaviors and features of a given OS. Developers package it up in an image, put it in a container registry, and deploy it. When the app needs an update, the developers write the code, submit it to the build system, test it (that's an important part…) and, if the tests succeed, the app is updated. If we look at how containers are defined, we see a lineage of dependencies.
An overly simplified version of our app dockerfile may look something like this:

FROM microsoft/aspnetcore:1.0.1
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]

If we look at microsoft/aspnetcore:1.0.1

FROM microsoft/dotnet:1.0.1-core
RUN curl packages…
Drilling in further, the dotnet Linux image shows:

FROM debian:jessie

At any point, one of these images may be updated. If the updates are functional, the tags should change, indicating a new version that developers can opt into. However, if a vulnerability or some other fix is introduced, the update is applied using the same tag, and notifications are sent between the different registries indicating the change. The Debian image takes an update. The dotnet image takes the update and rebuilds. The aspnetcore image rebuilds on top of it. mycriticalapp gets notified, rebuilds and redeploys; or should it?
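The cascade can be sketched as a walk over the image lineage. The dependency map mirrors the FROM lines above; the traversal itself is an assumption about how a build system might order rebuilds:

```python
# Sketch of the rebuild cascade: when a base image is patched in place,
# every image built FROM it, directly or transitively, is rebuilt in
# dependency order. The image names follow the post's examples.

# child image -> its base image (taken from the FROM lines above)
BASES = {
    "microsoft/dotnet:1.0.1-core": "debian:jessie",
    "microsoft/aspnetcore:1.0.1": "microsoft/dotnet:1.0.1-core",
    "mycriticalapp": "microsoft/aspnetcore:1.0.1",
}

def rebuild_order(updated_image):
    """Return the images to rebuild, parents before children."""
    order = []
    frontier = [updated_image]
    while frontier:
        current = frontier.pop(0)
        for child, base in BASES.items():
            if base == current:
                order.append(child)
                frontier.append(child)
    return order

print(rebuild_order("debian:jessie"))
# ['microsoft/dotnet:1.0.1-core', 'microsoft/aspnetcore:1.0.1', 'mycriticalapp']
```

A patch at the bottom of the stack ripples all the way up to the app, which is exactly why the testing step at each layer matters.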

Now you might remember that important testing step. At any layer of these automated builds, how do we know the framework, the service or our app will continue to function? Tests. By running automated tests, each layer's owner can decide if it's ready to proceed. It's incumbent on the public image owners to make sure their dependencies don't break them.

By building an automated build system that not only builds your code when it changes, but also rebuilds when the dependent images change, you're now empowered with the information to decide how to proceed. If the update passes tests and the app just updates, life is good. You might be on vacation and see the news of a critical vulnerability. You check the health of your system, and you can see that a build traveled through, passed its tests, and your apps are continuing to report a healthy status. You can go back to your drink at the pool bar knowing your investments in automation and containers have paid off.
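That testing gate might be sketched as follows; the test functions are placeholders for real functional tests, not an actual framework:

```python
# Minimal sketch of the pipeline gate: a rebuilt image is promoted to
# deployment only if its automated tests pass; otherwise it's held for
# a human to review. The tests here are illustrative stand-ins.

def run_pipeline(image, tests):
    """Run each test against the rebuilt image and decide what to do."""
    results = {t.__name__: t(image) for t in tests}
    if all(results.values()):
        return ("deploy", results)
    return ("hold-for-review", results)

def responds_to_http(image):   # placeholder functional test
    return True

def schema_compatible(image):  # placeholder functional test
    return True

decision, results = run_pipeline("myapp:rebuild-42", [responds_to_http, schema_compatible])
print(decision)  # deploy
```

A failing test simply flips the decision to "hold-for-review", which is the signal that a base image update needs a human before it reaches production.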

What about the underlying infrastructure?

We've covered our app updates and the dependencies they must react to. But what about the underlying infrastructure that's running our containers? It doesn't really matter who's responsible for it. If customers maintain it, they're annoyed that they must apply patches, but they're empowered to test their apps before rolling out the patches. If we move the responsibility to the cloud provider, how does the provider know if an update will impact the apps? Salesforce has a great model for this, as they continually update their infrastructure. If your code uses their declarative model, they can inspect your code to know if it will continue to function. If you write custom code, you must provide tests that have 75% code coverage. Why? So Salesforce can validate that their updates won't break your custom apps.
Containers are efficient in size and startup performance because they share core parts of the kernel with the host OS. When a host OS is updated, how does anyone know it won't impact the running apps in a bad way? And how would the hosts be updated? Does each customer need to schedule downtime? In the cloud, the concept of downtime shouldn't exist.

Enter the orchestrator…

A basic premise of containerized apps is that they're immutable. Another aspect developers should understand: any one container can and will be moved. It may fail, the host may fail, or the orchestrator may simply want to shuffle workloads to balance the overall cluster. A specific node may get over-utilized by one of many processes. Just as your hard drive defrags and moves bits without you ever knowing, the container orchestrator should be able to move containers throughout the cluster. It should be able to expand and shrink the cluster on demand. And that is the next important part.

Rolling Updates of Nodes

If the apps are designed to have individual containers moved at any time, and if nodes are generic and don't have app-centric dependencies, then the same infrastructure used to expand and shrink the cluster can be used to roll out updates to nodes. Imagine the cloud vendor is aware of, or owns, the nodes. The cloud vendor wants or needs to roll out an underlying OS update, or perhaps even a hardware update. It asks the orchestrator to stand up some new nodes, which have the new OS and/or hardware updates. The orchestrator starts to shift workloads to the new nodes. While we can't really run automated tests on the node image, the app can report its health status. As the cloud vendor updates nodes, it monitors that health status. If it sees failures, we now have a signal to stop the update, de-provision the new node and resume on the previous nodes. The cloud vendor then has a choice: determine whether it's something they must fix, or notify the customer that update x is attempting to be applied but the apps aren't functioning. The cloud vendor provides information for the customer to test, identify and fix their app.
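The rolling update loop described above might be sketched like this; the node names and health check are stand-ins for real infrastructure and monitoring:

```python
# Sketch of rolling node updates: replace nodes one at a time, watching
# app health on each new node, and halt (keeping the old nodes) if the
# apps stop reporting healthy. `health_check` stands in for monitoring.

def rolling_update(old_nodes, new_node_factory, health_check):
    """Replace nodes one by one; stop and keep old nodes on failed health."""
    updated = []
    remaining = list(old_nodes)
    while remaining:
        node = remaining.pop(0)
        candidate = new_node_factory()
        if health_check(candidate):
            updated.append(candidate)   # workloads shift to the new node
        else:
            remaining.insert(0, node)   # roll back: old node stays in service
            return updated + remaining, "halted"
    return updated, "completed"

nodes, status = rolling_update(
    ["node-1", "node-2"],
    new_node_factory=lambda: "node-patched",
    health_check=lambda n: True,
)
print(status)  # completed
```

The interesting case is the "halted" path: the update stops partway, nothing is lost, and someone gets the diagnostic data to decide what to fix.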

Dependencies

The dependencies to build such a system look something like this:

  • Unit and functional tests for each app
  • A container registry with notifications
  • Automated builds that can react to image update notifications as well as app updates
  • Running the automated functional tests as part of the build and deploy pipeline
  • Apps designed to fail and be moved at any time
  • Orchestrators that can expand and contract on demand
  • Health checks for the apps to report their state as they're moved
  • Monitoring systems to notify the cloud vendor and customer of the impact of underlying changes
  • Cloud vendors to interact with their orchestrators to roll out updates, monitor the impact, roll forward or roll back

The challenges of software updates, vulnerabilities and bugs will not go away. The complexity of the layers will likely only increase the possibility of update failures. However, by putting the right automation in place, customers can be empowered to react, the apps will be secure, and the lights will remain on.

Steve


Relaxing ACR storage limits, with tools to self manage


When we created the tiered SKUs for ACR, we built the three tiers with the following scenarios in mind:

  • Basic - the entry point to get started with ACR. Not intended for production, due to the limited size, webhooks and throughput SLA. Basic registries are encrypted at rest and geo-redundantly stored using Azure Blob Storage, as we believe these are standards that should never be skipped.
  • Standard - the most common registry, where most customers will be fine with the webhooks, storage amount and throughput.
  • Premium - for the larger companies that have more concurrent throughput requirements, and global deployments.

The goal was never to force someone up a tier, beyond the Basic tier. Or worse, cause a build failure if a registry filled up. Well, we all learn. :) It seems customers were quick to enable automation - awesome! - and are quickly filling up their registries.

There are two fundamental things we're doing:

  1. ACR will relax the hard constraints on the size of storage. When you exceed the storage associated with your tier, we will charge an overage fee of $0.10/GiB. We did put some new safety limits in place; for instance, Premium has a safety limit of 5 TB. If you think you'll really need more, just let us know and help us understand. We're trying to optimize the experience, and there are things we'd need to do differently for very large registries.
  2. We have pulled forward the Auto-Purge feature, allowing customers to manage their dead image pool. Using Auto-Purge, customers will be able to declare a policy by which images are automatically deleted after a certain time. You'll also be able to set a TTL on images. We've heard customers say they need to keep any deployed artifact for __ years. Wow, that's a long time... Eventually, we'll be able to know when an image is no longer deployed, and you'll be able to set the TTL for __ units after its last use.
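Since the policy design hasn't started yet, here's a purely hypothetical sketch of what TTL-based purge selection could look like; none of the names or shapes here reflect an actual ACR API:

```python
# Hypothetical sketch of TTL-based auto-purge selection: image tags whose
# last use is older than the TTL are candidates for deletion. The data
# shapes are invented for illustration; the real policy was not yet designed.
from datetime import datetime, timedelta

def select_for_purge(images, ttl_days, now):
    """Return image tags whose last use is older than the TTL."""
    cutoff = now - timedelta(days=ttl_days)
    return [tag for tag, last_used in images.items() if last_used < cutoff]

now = datetime(2018, 2, 1)
images = {
    "myapp:build-17": datetime(2017, 6, 1),   # stale build
    "myapp:build-98": datetime(2018, 1, 28),  # recently pulled
}
print(select_for_purge(images, ttl_days=90, now=now))  # ['myapp:build-17']
```

The harder part, hinted at above, is knowing an image's "last use": that requires tracking pulls and deployments, not just push dates.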

As customers hit the limits, we want to allow them to store as much as they need, simply charging for usage, and to enable them to manage their storage with automated features.

When, when, will we get this?

The overage meters will start at the end of February. Early next week, we'll start the design for the auto-purge policies. As we know more, I'll post an update.

Thanks for the continued feedback,

Steve


Some great docker tools


Here are a few docker tools I've started using to help diagnose issues:

Simple Docker UI

offered by felix

A Google Chrome plugin that lets you view your images and running containers, including the logs. No more docker ps and docker logs [container id].


DockerUI

by Michael Crosby (crosbymichael.com) and Kevan Ahlquist (kevanahlquist.com)

Run with the following command:

docker run -d -p 10.20.30.1:80:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui

I've started deploying this on all my nodes when I'm looking to understand how containers are deployed and interacting.


I'll keep updating this as I find new tools.

If you have your favorite, comment away...

Steve


Visual Studio 2015 Connected Services in 7 minutes


A quick overview in prep for //build & ignite

https://channel9.msdn.com/Series/ConnectOn-Demand/227

With a message of retiring the "Can't Touch This" style of coding. 

Thanks to the MS Studios folks who took the extra effort to help put this together.

Steve


Visual Studio 2015 CTP6 and Salesforce Connected Services


Just a quick note that with the release of Visual Studio 2015 CTP 6, we've updated the Connected Service Provider for Salesforce.

Since Preview, we've:

  • Improved the OAuth refresh token code to support a pattern that should handle more issues with less code. We've been working with the Azure AD folks on this common pattern.
  • Now retrieve the ConsumerKey, avoiding the manual trip to the Salesforce management UI to copy/paste the value
  • Improved the T4 template support to save any changes you've made to your customized templates. Be sure to read the new Customizing section in the guidance docs
  • Improved performance by caching the Salesforce .NET NuGet packages along with the installation of the connected services provider

Thanks for all the feedback that's gotten us this far. 

We're still looking for feedback before we wrap up the Release Candidate. There are a number of channels:

Connected Services SDK

For those looking to build your own Connected Service Provider, we're also getting ready to make the Connected Services SDK available soon.

Thanks, and happy service coding,

Steve