* Add Virtual Kubelet provider for VIC. Initial Virtual Kubelet provider for VMware VIC. This provider currently handles creating and starting a pod VM via the VIC portlayer and persona server, as well as image store handling via the VIC persona server. It currently requires the feature/wolfpack branch of VIC.
* Added pod stop and delete, plus node capacity. Added the ability to stop and delete pod VMs via VIC, and to retrieve node capacity information from the VCH.
* Cleanup and readme file. Some file cleanup; added a Readme.md markdown file for the VIC provider.
* Cleaned up errors, added function comments, moved operation code. 1. Cleaned up error handling and set a standard for creating errors. 2. Added method prototype comments for all interface functions. 3. Moved PodCreator, PodStarter, PodStopper, and PodDeleter to a new folder.
* Add mocking code and unit tests for podcache, podcreator, and podstarter. Uses the unit test framework from VIC to handle assertions in the provider's unit tests. Mocking code was generated using the OSS project mockery, which is compatible with the testify assertion framework.
* Vendored packages for the VIC provider. Requires the feature/wolfpack branch of VIC and a few specific commit SHAs of projects used within VIC.
* Implementation of Pod Stopper and Deleter unit tests (#4)
* Updated files for initial PR
# vSphere Integrated Containers Engine

vSphere Integrated Containers Engine (VIC Engine) is a container runtime for vSphere. It allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, and it allows these workloads to be managed through the vSphere UI in a way familiar to existing vSphere admins.
See VIC Engine Architecture for a high-level overview.
## Project Status
VIC Engine now provides:
- support for most of the Docker commands for core container, image, volume, and network lifecycle operations. Several `docker-compose` commands are also supported. See the complete list of supported commands here.
- vCenter support, leveraging DRS for initial placement. vMotion is also supported.
- volume support for standard datastores such as vSAN and iSCSI datastores. NFS shares are also supported; see `--volume-store`. SIOC is not integrated but can be set as normal.
- direct mapping of vSphere networks via `--container-network`. NIOC is not integrated but can be set as normal.
- dual-mode management: IP addresses are reported as normal via the vSphere UI, guest shutdown via the UI triggers delivery of the container `STOPSIGNAL`, and restart relaunches the container process.
- client authentication: basic authentication via client certificates, known as `tlsverify`.
- integration with the VIC Management Portal (Admiral) for Docker image content trust.
- integration with the vSphere Platform Services Controller (PSC) for Single Sign-On (SSO) for docker commands such as `docker login`.
- an install wizard in the vSphere HTML5 client, as a more interactive alternative to installing via the command line. See details here.
- support for a standard Docker Container Host (DCH) deployed and managed as a container on VIC Engine. This can be used to run docker commands that are not currently supported by VIC Engine (`docker build`, `docker push`). See details here.
We are working hard to add functionality while building out our foundation, so continue to watch the repo for new features. Initial focus is on the production end of the CI pipeline, building backwards towards developer-laptop scenarios.
## Installing
After building the binaries (see the Building section), pick the correct binary for your OS and install the Virtual Container Host (VCH) with the following command. For Linux:

```shell
bin/vic-machine-linux create --target <target-host>[/datacenter] --image-store <datastore name> --name <vch-name> --user <username> --password <password> --thumbprint <certificate thumbprint> --compute-resource <cluster or resource pool name> --tls-cname <FQDN, *.wildcard.domain, or static IP>
```
See `vic-machine-$OS create --help` for usage information. A more in-depth example can be found here.
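Before running `create` against a live vCenter, it can help to assemble the command from variables and review it first. The sketch below is a hypothetical dry-run helper, not a vic-machine feature; every value in it is a placeholder:

```shell
#!/bin/sh
# Dry-run sketch (not part of vic-machine): build the create command from
# shell variables and print it for review instead of executing it.
# All values below are illustrative placeholders.
vic_target="vcenter.example.com/dc1"
vic_image_store="datastore1"
vic_name="vch-demo"

cmd="bin/vic-machine-linux create --target $vic_target --image-store $vic_image_store --name $vic_name"
echo "review before running: $cmd"
```

Once the printed command looks right, the remaining flags (`--user`, `--password`, `--thumbprint`, and so on) can be appended the same way.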
## Deleting
The installed VCH can be deleted using `vic-machine-$OS delete`.

See `vic-machine-$OS delete --help` for usage information. A more in-depth example can be found here.
## Contributing
See CONTRIBUTING for details on submitting changes and the contribution workflow.
## Building
Building the project is done with a combination of make and containers, with `golang:1.8` as the common container base. This approach makes it possible to build directly, without a functional docker, when using a Debian-based system with the Go 1.8 toolchain and Drone.io installed.
To build as closely as possible to the formal build:

```shell
drone exec
```

To build inside a Docker container:

```shell
docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang:1.8 make all
```

To build directly:

```shell
make all
```
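Which of the two routes applies depends on whether docker is available. A small preflight sketch (not part of the Makefile) can pick between them; it only echoes the chosen command rather than running it:

```shell
#!/bin/sh
# Preflight sketch: prefer the containerized build when docker is on PATH,
# otherwise fall back to a direct 'make all'. The command is echoed, not run,
# so this is safe to experiment with.
if command -v docker >/dev/null 2>&1; then
  build_cmd="docker run -v \$(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang:1.8 make all"
else
  build_cmd="make all"
fi
echo "would run: $build_cmd"
```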
There are three primary components generated by a full build, found in `$BIN` (the `./bin` directory by default). The make targets used are the following:

- vic-machine - `make vic-machine`
- appliance.iso - `make appliance`
- bootstrap.iso - `make bootstrap`
### Building binaries for development
Some of the project binaries can only be built on Linux. If you are developing on macOS or Windows, the easiest way to build is via the project's Vagrantfile. The Vagrantfile shares the directory in which it is executed and sets `GOPATH` based on that share.
To build the component binaries, ensure `GOPATH` is set, then issue the following command in the root directory:

```shell
make components
```
This will install the required tools and build the component binaries `tether-linux` and `rpctool` and the server binaries `docker-engine-server` and `port-layer-server`. The binaries will be created in the `$BIN` directory, `./bin` by default.
To run unit tests after a successful build, issue the following:

```shell
make test
```
Running `make` regenerates Go dependencies for each component so that `make` can rebuild only the components that have changed. However, this regeneration can take a significant amount of time when it is not actually needed. To avoid that, developers can use cached dependencies, enabled by defining the environment variable `VIC_CACHE_DEPS`. Once it is set, `infra/scripts/go-deps.sh` will read the cached version of the dependencies if one exists.

```shell
export VIC_CACHE_DEPS=1
```

It is important to note that as soon as you add a new package or an internal project dependency that did not exist before, the dependencies should be regenerated to reflect the latest changes. This can be done by running:

```shell
make cleandeps
```

After that, the next `make` run will regenerate the dependencies from scratch.
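The caching behavior described above can be illustrated with a simplified model. This is not the real `infra/scripts/go-deps.sh`; the file name and dependency line below are invented for the sketch:

```shell
#!/bin/sh
# Simplified model of dependency caching (illustrative only): reuse a cached
# dependency list when VIC_CACHE_DEPS is set and the cache file exists,
# otherwise regenerate it.
workdir=$(mktemp -d)
cache="$workdir/component.deps"

gen_deps() {
  # Stand-in for the expensive dependency scan.
  echo "dep: github.com/vmware/govmomi" > "$cache"
}

VIC_CACHE_DEPS=1

# First pass: no cache yet, so the deps are regenerated.
if [ -n "$VIC_CACHE_DEPS" ] && [ -f "$cache" ]; then
  echo "using cached deps"
else
  echo "regenerating deps"
  gen_deps
fi

# Second pass: the cache file now exists and is reused.
if [ -n "$VIC_CACHE_DEPS" ] && [ -f "$cache" ]; then
  echo "using cached deps"
else
  echo "regenerating deps"
  gen_deps
fi

cat "$cache"
```

`make cleandeps` corresponds to deleting the cache file in this model, which forces the next pass back onto the regeneration branch.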
### Managing the vendor/ directory
To build the VIC Engine dependencies, ensure `GOPATH` is set, then issue the following:

```shell
make gvt vendor
```

This will install the `gvt` utility and retrieve the build dependencies via `gvt restore`.
### Building the ISOs
The component binaries above are packaged into ISO files, `appliance.iso` and `bootstrap.iso`, that are used by the installer. The generation of the ISOs is split into the following targets: `iso-base`, `appliance-staging`, `bootstrap-staging`, `appliance`, and `bootstrap`. Generating the ISOs involves authoring a new root filesystem, which means running a package manager (currently yum) and packing/unpacking archives. To install packages and preserve file permissions while unpacking, these steps should be run as root, either directly or in a container. To generate the ISOs:

```shell
make isos
```

The appliance and bootstrap ISOs are bootable CD images used to start the VMs that make up VIC Engine. To build the images using docker, ensure `GOPATH` is set and docker is installed, then issue the following:

```shell
docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang:1.8 make isos
```
Alternatively, the ISO images can be built locally. Again, ensure `GOPATH` is set, and also ensure the following packages are installed (the build will attempt to install them via apt-get if they are not present):

```shell
apt-get install \
  curl \
  cpio \
  tar \
  xorriso \
  rpm \
  ca-certificates \
  yum
```
Package names may vary depending on the distribution being used. Once installed, issue the following (the targets listed here are those executed when using the `isos` target):

```shell
make iso-base appliance-staging appliance bootstrap-staging bootstrap
```

The ISO images will be created in `$BIN`.
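Before running the staging targets locally, a quick preflight can report which of the tools from the apt-get list above are already on `PATH`. This is a sketch, not part of the build; package names may differ per distribution:

```shell
#!/bin/sh
# Preflight sketch (not part of the Makefile): report which ISO-build tools
# are available. Informational only; it does not install anything.
missing=""
for tool in curl cpio tar xorriso rpm yum; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
    missing="$missing $tool"
  fi
done
echo "missing tools:${missing:- none}"
```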
### Building with CI
PRs to this repository will trigger builds on our Drone CI.

To build locally with Drone:

1. Ensure that you have Docker 1.6 or higher installed.
2. Install the Drone command line tools.
3. From the root directory of the vic repository, run `drone exec`.
### Common Build Problems
- Builds fail when building either the appliance.iso or the bootstrap.iso with the error `cap_set_file failed - Operation not supported`

  Cause: Some Ubuntu and Debian based systems ship with a defective `aufs` driver, which Docker uses as its default backing store. This driver does not support extended file capabilities such as `cap_set_file`.

  Solution: Edit the `/etc/default/docker` file, add the option `--storage-driver=overlay` to the `DOCKER_OPTS` settings, and restart Docker.

- `go vet` fails when doing a `make all`

  Cause: Apparently some caching takes place in `$GOPATH/pkg/linux_amd64/github.com/vmware/vic` and can cause `go vet` to fail when evaluating outdated files in this cache.

  Solution: Delete everything under `$GOPATH/pkg/linux_amd64/github.com/vmware/vic` and re-run `make all`.

- `vic-machine upgrade` integration tests fail due to `BUILD_NUMBER` being set incorrectly when building locally

  Cause: `vic-machine` checks the build number of its binary to determine upgrade status, and a locally-built `vic-machine` binary may not have `BUILD_NUMBER` set correctly. Upon running `vic-machine upgrade`, it may fail with the message `foo-VCH has same or newer version x than installer version y. No upgrade is available.`

  Solution: Set `BUILD_NUMBER` to a high number at the top of the Makefile: `BUILD_NUMBER ?= 9999999999`. Then re-build the binaries with `sudo make distclean && sudo make clean && sudo make all` and run `vic-machine upgrade` with the new binary.
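The version gate behind this failure can be illustrated with a plain numeric comparison. The real check in `vic-machine` is more involved; the numbers below are purely illustrative:

```shell
#!/bin/sh
# Model of the upgrade gate (illustrative only): an installed VCH whose build
# number is greater than or equal to the installer's is treated as "same or
# newer", so no upgrade is offered.
installed_build=5000   # e.g. build number of a VCH deployed from a CI build
installer_build=0      # e.g. a local build where BUILD_NUMBER was never set
if [ "$installed_build" -ge "$installer_build" ]; then
  echo "VCH has same or newer version; no upgrade is available"
else
  echo "upgrade available"
fi
```

Raising `BUILD_NUMBER` in the local build flips the comparison, which is why the Makefile workaround above unblocks the upgrade path.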
## Integration Tests

The VIC Engine Integration Test Suite includes instructions for running the tests locally.
## License
VIC Engine is available under the Apache 2 license.