If you would like to read the other parts in this article series please go to:
- Docker and Containers (Part 1) – Understanding Containers
- Docker and Containers (Part 2) – Understanding Docker
- Docker and Containers (Part 3) – Containers and Windows Server
- Docker and Containers (Part 4) – Implementing Windows Server Containers
- Docker and Containers (Part 6)
Earlier in this series we examined the basics of how container virtualization is implemented in the soon-to-be-released Windows Server 2016 operating system. As we saw, there are two “flavors” of container technology coming in Windows Server 2016, namely:
- Windows Server Containers
- Hyper-V Containers
In the previous article of this series we examined the first approach and saw how one could implement Windows Server Containers in the Windows Server 2016 Technical Preview 4 (TP4) release. We demonstrated this by first creating a virtual machine container host in the Microsoft Azure IaaS cloud using Resource Manager as our deployment method. We then created a container and customized it by adding the Web Server role so we could run web apps on our container host. Next we captured a container image from our container and created a new “work” container from the captured image. This new container can then be used for running a containerized web app on our container host in Azure. We ended the previous article by noting that we would need a Network Security Group (NSG) to allow access to our containerized web apps from the outside world. We’re going to defer demonstrating this until later, as a colleague at Microsoft who worked with me on a System Center ebook for Microsoft Press will be writing up something for us about NSGs to explain what they are and how they work.
Meanwhile, let’s continue our discussion of the two “flavors” of container technology coming in Windows Server 2016 by examining how you can implement the second type of container, namely Hyper-V Containers. I’ve asked John McCabe to walk us through how you can implement Hyper-V Containers in Windows Server 2016 TP4. John is a Senior Premier Field Engineer (PFE) working with Microsoft Services Support Delivery in Ireland, and his blog Parallel Universe can be found at this link. John starts off below by highlighting the similarities and differences between Windows Server Containers and Hyper-V Containers, and then he demonstrates how to get started with containers using Windows PowerShell. Finally, John will end his walkthrough with a brief recap of why one would even bother to consider using containers instead of virtual machines.
Hyper-V Containers vs. Windows Server Containers
In Windows Server 2016 and in Windows 10, two types of containers will be available:
- Windows Server containers
- Hyper-V containers
Windows Server containers are the equivalent of Linux containers such as those used by Docker. These containers isolate applications on the same container host. Each container has its own view of the host system, including the kernel, processes, file systems, the registry, and other components. In the case of Windows Server containers, isolation is implemented at the boundary between user mode and kernel mode.
Hyper-V containers are based on a container technology that takes advantage of hardware-assisted virtualization. With hardware-assisted virtualization, applications in Hyper-V containers are provided a highly isolated environment in which to operate, where the host operating system cannot be impacted in any way by any running container. This is the same level of isolation that Hyper-V provides for virtual machines.
Figure 1 shows a combination of Hyper-V containers and Windows Server containers on the same host.
Figure 1: Windows Server Containers and Hyper-V Containers on the same physical machine
The figure shows a single physical host that can offer Hyper-V containers, virtual machines, and Windows Server containers at the same time. So when should you use which container option? The deciding criterion comes when identifying the scale and certified, hardware-assisted isolation requirements for the application or customer using containers.
For example, if scale is your defining need, then Windows Server containers can achieve greater scale than Hyper-V containers. However, if hardware-assisted isolation is required and scale is not as important, Hyper-V containers should be used.
Critically, no matter which container technology you select, the application you deploy into a container is compatible with both technologies. This essentially means that a developer can easily build the application in a container hosted on Windows Server containers and move it to a Hyper-V container with no changes required. This gives immense flexibility for the ever-changing requirements of today’s modern infrastructure.
How to get started with containers
So, how do you get started with containers in Windows Server? The following list details the process for deploying containers:
- Enable Windows Feature ‘containers’
- Create a VM Switch
- [Optional] Configure NAT if required
- Install a Container OS Image
- [Optional] Deploy Docker
- [Optional] Enable Hyper-V
- [Optional] Enable Nested Virtualization
- [Optional] Configure virtual processors (VPs) for the nested VM
- [Optional] Disable Dynamic Memory for Nested VM
- [Optional] Enable MAC spoofing for the nested VM
Note that there are various optional steps depending on the scenarios you are deploying. The first thing we need to do is install the Windows feature to enable container support. This holds true for Hyper-V containers as well as Windows Server containers.
For Hyper-V containers we need to ensure that Hyper-V is installed first. If you plan to run a combination of Hyper-V containers and Windows Server containers inside a nested virtual machine, we need to run the following cmdlets to configure the nested virtual machine. These cmdlets enable nested virtualization, ensure a minimum virtual processor count of 2 for the Windows Server container host VM, disable dynamic memory, and enable MAC spoofing for the container VM.
# Expose the hardware virtualization extensions to the VM (enables nested virtualization)
Set-VMProcessor -VMName ContainerVM01 -ExposeVirtualizationExtensions $true
# Assign at least two virtual processors to the container host VM
Set-VMProcessor -VMName ContainerVM01 -Count 2
# Dynamic memory is not supported with nested virtualization
Set-VMMemory ContainerVM01 -DynamicMemoryEnabled $false
# Allow the nested VM to send traffic on behalf of its containers
Get-VMNetworkAdapter -VMName ContainerVM01 | Set-VMNetworkAdapter -MacAddressSpoofing On
The next section discusses the general approach for deploying and configuring containers regardless of whether they are Hyper-V Containers or Windows Server Containers.
From a PowerShell CLI use the following command to install the Containers feature:
Install-WindowsFeature Containers
After the components have been installed, restart the machine to ensure they are fully enabled.
From a PowerShell CLI, verify the installation after the reboot by using the Get-ContainerHost cmdlet:
Get-ContainerHost
Name   ContainerImageRepositoryPath
----   ----------------------------
Host01 C:\ProgramData\Microsoft\Windows\Hyper-V\Container Image Store
Containers require network connectivity in order for clients to connect to the applications inside them. This means we need to create a switch for the containers to use. The New-VMSwitch cmdlet can be used to create a VM switch that gives containers network connectivity. A VM switch for container use can be configured for Network Address Translation (NAT) or External. The External switch type allows your container to be assigned an IP address from the enterprise DHCP server, or to be configured statically. If you want to provide a layer of network isolation for the containers, NAT can be configured instead, and you can then expose endpoints to the containers.
Using the PowerShell CLI, create an External switch using the New-VMSwitch cmdlet (note that you will need to know the name of the adapter you want to bind the external VM switch to):
New-VMSwitch -Name ContainerSW -NetAdapterName Ethernet
Using the PowerShell CLI, we can create a NAT-enabled switch using the New-VMSwitch cmdlet as follows:
New-VMSwitch -Name "NAT" -SwitchType NAT -NATSubnetAddress 192.168.0.0/24
In the NAT VM switch scenario, an object will need to be created to allow the translation to happen. This is called the NAT object, and we can use the New-NetNat cmdlet to create it as follows:
New-NetNat -Name 'NAT' -InternalIPInterfaceAddressPrefix '192.168.0.0/24'
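If you want to confirm that the translation object was created, you can list the NAT objects on the host with the Get-NetNat cmdlet (a quick sanity check rather than a required step):
Get-NetNat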
The basics are now configured, but we need images to start with. Microsoft has two images available in a source repository: Nano Server and Windows Server Core. To access these images we need to install a provider.
Using PowerShell CLI we can use the Install-PackageProvider cmdlet to install the container provider as follows:
Install-PackageProvider ContainerProvider -Force
Next we need to browse the source repository for container images. Using the PowerShell CLI, we use the Find-ContainerImage cmdlet to achieve this as follows:
Find-ContainerImage
Name              Version      Description
----              -------      -----------
NanoServer        10.0.10586.0 Container OS Image of Windows Se...
WindowsServerCore 10.0.10586.0 Container OS Image of Windows Se...
The Find-ContainerImage cmdlet uses the PowerShell OneGet package manager in the background to retrieve the listings. To avoid confusion, ensure you only select the container image for the host type you are deploying it to. For example, don’t download and install a Nano Server image on a Windows Server Hyper-V host; however, you can have a nested Nano Server container host and download a Nano Server image to it.
Once you have identified the image you want it will need to be installed on your container host using the Install-ContainerImage cmdlet as follows:
Install-ContainerImage -Name NanoServer -Version 10.0.10586.0
The image will be downloaded from the repository and installed. To verify it is in place, you can use the Get-ContainerImage cmdlet as follows:
Get-ContainerImage
Name       Publisher    Version      IsOSImage
----       ---------    -------      ---------
NanoServer CN=Microsoft 10.0.10586.0 True
To deploy the container image use the New-Container cmdlet as shown in the following example to build a container:
$container = Get-ContainerImage -Name "NanoServer"
New-Container -ContainerImage $container -Name Container01 -ContainerComputerName Host01
Name        State Uptime   ParentImageName
----        ----- ------   ---------------
Container01 Off   00:00:00 NanoServer
When you deploy a container it will not have network connectivity, so we need to use the Add-ContainerNetworkAdapter cmdlet to create a network adapter in the container:
Add-ContainerNetworkAdapter -ContainerName Container01
Use the Connect-ContainerNetworkAdapter cmdlet to attach the NIC to the switch:
Connect-ContainerNetworkAdapter -ContainerName Container01 -SwitchName NAT
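Note that with the NAT switch, clients outside the host cannot reach the container until an endpoint is exposed. As a sketch, assuming the container ends up with the address 192.168.0.2 on the 192.168.0.0/24 NAT subnet and the application listens on port 80 (both values here are illustrative), a static port mapping could be created with the Add-NetNatStaticMapping cmdlet:
Add-NetNatStaticMapping -NatName NAT -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress 192.168.0.2 -InternalPort 80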
Store the new container in a variable and then start it using the Start-Container cmdlet:
$container = Get-Container -Name Container01
Start-Container -Container $container
You can use the Stop-Container cmdlet to stop the Container as well.
Now that the container is up and running, we can remotely manage it using Enter-PSSession with a new parameter called -ContainerName, which allows us to start a remote PSSession to the container:
Enter-PSSession -ContainerName Container01
The session is now connected to the running container; for example, you can run ipconfig and validate that you are indeed in the container and running in the right IP address space. You can now install and configure your application as necessary inside the container. Then you can use the Save-ContainerImage cmdlet to store the changes for rapid deployment in the future:
Save-ContainerImage -Name Container01 -Destination C:\ContainerStore
A question that might be at the forefront of your mind is why bother with containers at all. We have virtual machines and physical hardware, and they work perfectly fine. But these models have some disadvantages which containers overcome.
Take the scenario where a developer asks the IT department for a virtual machine to develop a line-of-business application. The developer will set up the virtual machine to their requirements and then proceed to develop the application. Over the course of time the developer will make modifications to the virtual machine and its environment, and might not remember to capture every setting or binary they have referenced. When they proceed to deploy this new application to a production virtual machine, there is often a lot of troubleshooting involved to get the application working.
Now take this scenario and move it to a containerized scenario. The developer would simply publish the container image into production and clients would connect. All the dependencies are already in place. This greatly simplifies the deployment and transition from development to test to production.
The need for multiple environments in the development process gives us another reason for using containers. Having separate development, test, and production environments would traditionally require at least three virtual machines. In a container model you need only one: a single virtual machine running a container manager can host three (or more!) containers simulating development, test, and production. Put simply, you can achieve even better hardware efficiency in your virtualized environments using containers.
Containers also allow for rapid deployment and operation of applications. Unlike virtual machines, containers don’t carry a full underlying operating system of their own. With a virtual machine, when you want to deploy an application you create the VM, deploy an operating system, and then deploy the application; if you need to scale, it gets more complicated because you have to repeat the whole process. With containers the operating system is already in place, so the time spent waiting for a container to deploy or scale up is significantly shorter than with a virtual machine, because you are never waiting for an operating system to boot.