How VIC manages filesystem images
A big part of a container runtime is how it uses and manages filesystem images. There are various approaches to managing container images, but the net result needs to be enough binaries to run the main container process. Those binaries may in fact be a single statically-linked executable, or they could be operating system dependencies, third-party libraries, and so on.
Given that a containerVM in VIC does not share a filesystem with anything else, it requires the following to be functional (sketched in code after this list):
- OS binaries from which to boot. These are contained in the ISO attached to the containerVM
- A disk, representing the container filesystem, mounted at "/". The disk may contain OS libraries, but must contain the binaries corresponding to the entry point of the container. The disk will typically be R/W, and the underlying implementation may capture all writes into a layer which can be committed as a snapshot
- 0 or more R/W disks which persist beyond the lifespan of the container
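To make these requirements concrete, here is a minimal, hypothetical Go sketch of the storage a containerVM needs; the type and field names are illustrative and do not come from the actual Port Layer code:

```go
package main

import "fmt"

// BootISO is the OS image attached to the containerVM; the VM boots from it.
type BootISO struct {
	Path string // datastore path to the ISO
}

// Disk is a virtual disk attached to the containerVM.
type Disk struct {
	MountPoint string // e.g. "/" for the container root filesystem
	ReadWrite  bool   // writes may be captured in a layer and committed as a snapshot
	Persistent bool   // true for volumes that outlive the container
}

// ContainerVMStorage collects everything a containerVM requires, since it
// shares no filesystem with any other VM.
type ContainerVMStorage struct {
	Boot    BootISO
	Root    Disk   // must contain the binaries for the container entry point
	Volumes []Disk // zero or more persistent R/W disks
}

func main() {
	s := ContainerVMStorage{
		Boot: BootISO{Path: "[datastore] vch/bootstrap.iso"},
		Root: Disk{MountPoint: "/", ReadWrite: true},
		Volumes: []Disk{
			{MountPoint: "/data", ReadWrite: true, Persistent: true},
		},
	}
	fmt.Printf("containerVM storage: %+v\n", s)
}
```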
Port Layer Storage
Imagec
Image resolution and the transfer of image data between an external repository and a local image store should be able to happen out-of-band. This also means it does not need to be a service tightly coupled to any one VCH: a VCH should be able to make asynchronous requests to an image service which maintains a local image store, and there is no reason that store could not be shared between a number of VCHs. Once an image has been added to the store, the VCH should be able to query the image store via the PortLayer for metadata about the image.
Given the out-of-band nature of this transaction, plus the potential for the exchange of image data to be automated or even managed by a different control plane, it makes sense to provide a binary for this purpose. This binary is imagec, described here.
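As a rough illustration of this out-of-band flow, the hypothetical Go sketch below models an asynchronous pull followed by a metadata query; none of the interface or method names are taken from the real imagec or Port Layer APIs:

```go
package main

import (
	"context"
	"fmt"
)

// ImageService resolves and transfers image data from an external repository
// into a local image store, out-of-band from any single VCH.
type ImageService interface {
	// Pull starts an asynchronous fetch and returns a channel that reports
	// completion (nil) or failure (non-nil error).
	Pull(ctx context.Context, reference string) <-chan error
}

// ImageStore is what a VCH queries, via the PortLayer, once the image data
// is local. A single store could serve several VCHs.
type ImageStore interface {
	Metadata(reference string) (map[string]string, error)
}

// fakeService stands in for a real transfer implementation.
type fakeService struct {
	images map[string]map[string]string
}

func (s *fakeService) Pull(ctx context.Context, reference string) <-chan error {
	done := make(chan error, 1)
	go func() {
		// A real implementation would resolve the reference and stream
		// layer data from the repository into the store here.
		s.images[reference] = map[string]string{"reference": reference}
		done <- nil
	}()
	return done
}

func (s *fakeService) Metadata(reference string) (map[string]string, error) {
	m, ok := s.images[reference]
	if !ok {
		return nil, fmt.Errorf("image %q not in store", reference)
	}
	return m, nil
}

func main() {
	var svc ImageService = &fakeService{images: map[string]map[string]string{}}
	store := svc.(ImageStore) // the same fake backs both roles here

	// The pull is asynchronous; the VCH is free to do other work and only
	// consumes the channel when it needs the image.
	if err := <-svc.Pull(context.Background(), "library/busybox:latest"); err != nil {
		panic(err)
	}
	meta, err := store.Metadata("library/busybox:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println("image metadata:", meta)
}
```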
Image metadata
Image metadata arguably consists of two distinct types: immutable metadata that is added to the image on creation/push, and contextual data that can be added by the user of an image to facilitate publishing/referencing. Given that a single image can be deployed by multiple users, there is also a scoping aspect to this distinction: immutable image metadata is coupled to the image and is consumed by all users of the image, whereas user metadata exists within the context of a particular use of the image. As such, user metadata is scoped to a VCH and image metadata is scoped to an image store. Both need to be persisted.
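A hypothetical Go sketch of the two metadata scopes, with illustrative field names that are not the actual Port Layer model:

```go
package main

import "fmt"

// ImageMetadata is immutable, set at image creation/push, and scoped to the
// image store: every user of the image sees the same values.
type ImageMetadata struct {
	ID      string
	Created string
	Layers  []string
}

// UserMetadata is contextual, added by a particular user of the image, and
// scoped to a single VCH.
type UserMetadata struct {
	VCH  string
	Tags []string // e.g. names the user published the image under
}

func main() {
	img := ImageMetadata{
		ID:      "sha256:dummy",
		Created: "2017-01-01T00:00:00Z",
		Layers:  []string{"sha256:layer0"},
	}
	use := UserMetadata{VCH: "vch-1", Tags: []string{"myrepo/busybox:latest"}}
	fmt.Printf("store-scoped: %+v\nVCH-scoped: %+v\n", img, use)
}
```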
High-level control flow
This diagram gives a high-level overview of the control flow between the various components.
