Compare commits


76 Commits

Author SHA1 Message Date
Brian Goff
50f360d798 Merge pull request #795 from cpuguy83/cherry_picks_1.1.1
Cherry picks 1.1.1
2019-12-02 11:27:21 -08:00
Thomas Hartland
64873c0816 Add test for node ping interval
(cherry picked from commit 3783a39b26)
2019-11-15 14:33:02 -08:00
Thomas Hartland
c660940a7b After handling status update, reset update timer with correct duration
If the ping timer is being used, it should be reset with the ping update
interval. If the status update interval is used instead, Ping stops being
called for long enough that Kubernetes marks the node as NotReady.

(cherry picked from commit c258614d8f)
2019-11-15 14:32:56 -08:00
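
For illustration, a minimal Go sketch of the shape of this fix; the names (pingInterval, statusInterval, usePingTimer) are hypothetical stand-ins, not the actual virtual-kubelet identifiers. The point is that the timer must be re-armed with the interval matching the timer actually in use:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		pingInterval   = 100 * time.Millisecond // hypothetical ping interval
		statusInterval = 5 * time.Second        // hypothetical status update interval
	)
	usePingTimer := true // hypothetical flag: which timer is in use

	timer := time.NewTimer(pingInterval)
	defer timer.Stop()

	for i := 0; i < 3; i++ {
		<-timer.C
		fmt.Println("ping", i)
		// The fix: re-arm with the interval that matches the timer in use.
		// Resetting the ping timer with statusInterval would starve Ping long
		// enough for Kubernetes to mark the node NotReady.
		if usePingTimer {
			timer.Reset(pingInterval)
		} else {
			timer.Reset(statusInterval)
		}
	}
}
```
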
Brian Goff
9510b370cf Merge pull request #763 from sargun/wait-for-worker-shutdown-v2
Wait for Workers to exit prior to returning from PodController.Run
2019-09-12 14:33:59 -07:00
Sargun Dhillon
ea8495c3a1 Wait for Workers to exit prior to returning from PodController.Run
This changes the behaviour slightly: rather than immediately exiting on
context cancellation, it calls shutdown and waits for the current
items to finish being worked on before returning to the user.
2019-09-12 11:04:32 -07:00
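
A rough sketch of the shutdown pattern this commit describes; Run, the worker loop, and the sleep below are stand-ins, and the real PodController internals differ. Context cancellation requests shutdown, and Run only returns once every worker has exited:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Run processes items until ctx is cancelled, then waits for all workers
// to finish their current item before returning.
func Run(ctx context.Context, workers int) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return // shutdown requested
				default:
					time.Sleep(10 * time.Millisecond) // stand-in for working an item
				}
			}
		}(i)
	}
	<-ctx.Done()
	wg.Wait() // do not return until every worker has exited
	fmt.Println("all workers exited")
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	Run(ctx, 3)
}
```
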
Brian Goff
334baa73cf Merge pull request #743 from chewong/pod-status-nil-pointer
Add unit tests for #584
2019-09-11 14:49:55 -07:00
Brian Goff
bb9ff1adf3 Adds Done() and Err() to pod controller (#735)
Allows callers to wait for pod controller exit in addition to readiness.
This means the caller does not have to handle errors from the pod
controller running in a goroutine, since it can wait for exit via `Done()`
and check the error with `Err()`.
2019-09-10 17:44:19 +01:00
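
The commit above confirms only that the controller exposes `Done()` and `Err()` (and, per the diffs further down, `Ready()`). The sketch below uses a stub with that surface, not the real node.PodController, to show how a caller waits for exit without plumbing errors out of the controller's goroutine:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// stubController mimics the commit's surface: Ready(), Done(), Err().
type stubController struct {
	ready chan struct{}
	done  chan struct{}
	err   error
}

func (c *stubController) Ready() <-chan struct{} { return c.ready }
func (c *stubController) Done() <-chan struct{}  { return c.done }
func (c *stubController) Err() error             { return c.err }

func main() {
	pc := &stubController{ready: make(chan struct{}), done: make(chan struct{})}
	go func() {
		close(pc.ready)                   // becomes ready...
		time.Sleep(20 * time.Millisecond) // ...runs for a while...
		pc.err = errors.New("provider went away")
		close(pc.done) // ...then exits with an error
	}()

	<-pc.Ready()
	fmt.Println("controller ready")

	// The caller waits on Done() and then inspects Err(), instead of
	// receiving errors out of the controller's goroutine.
	<-pc.Done()
	if err := pc.Err(); err != nil {
		fmt.Println("controller exited:", err)
	}
}
```
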
Brian Goff
db146a0e01 Merge pull request #761 from sargun/cache-deps
Cache Downloaded Go Modules
2019-09-06 15:20:37 -07:00
Ernest Wong
fdb0c805f7 Add more unit tests to #584 2019-09-05 10:48:35 -07:00
Ernest Wong
dc7ff44303 Add unit tests for #584 2019-09-05 09:49:41 -07:00
Sargun Dhillon
e7a36c3505 Cache Downloaded Go Modules
This caches the downloaded Go modules, invalidating them based on
a hash of go.mod and go.sum. The test step showed a reduction
from 1:30 to 1:00, and the e2e tests from 8:30 to 5 minutes.
2019-09-05 09:23:13 -07:00
Ernest Wong
f10a16aed7 Importable End-To-End Test Suite (#758)
* Rename VK to chewong for development purposes

* Rename basic_test.go to basic.go

* Add e2e.go and suite.go

* Disable tests in node.go

* End to end tests are now importable as a testing suite

* Remove 'test' from test files

* Add documentation

* Rename chewong back to virtual-kubelet

* Change 'Testing Suite' to 'Test Suite'

* Add the ability to skip certain tests

* Add unit tests for suite.go

* Add README.md for importable e2e test suite

* VK implementation has to be based on VK v1.0.0

* Stricter checks on validating test functions

* Move certain files back to internal folder

* Add WatchTimeout as a config field

* Add slight modifications
2019-09-04 22:25:43 +01:00
Sargun Dhillon
da57373abb Test pods going missing while they're running in legacy providers (#759)
We poll legacy providers for their pods' status periodically, because
we have no way of knowing when a pod is updated. If a pod somehow goes
missing in the provider, that state must be handled. Currently, we either
update the API server and mark the pod as failed, or we ignore it.
2019-09-04 22:16:14 +01:00
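
A hedged sketch of the reconciliation idea; reconcile and the map of known pods are stand-ins for the real provider interface and informer state. We compare what the provider reports against what we believe exists, and surface pods that have gone missing:

```go
package main

import "fmt"

// reconcile compares the set of pods we believe exist against what the
// provider currently reports, and flags the ones that went missing.
func reconcile(known map[string]bool, providerPods []string) {
	seen := map[string]bool{}
	for _, p := range providerPods {
		seen[p] = true
	}
	for key := range known {
		if !seen[key] {
			// Pod vanished from the provider: surface it rather than
			// silently diverging, e.g. mark it Failed in the API server.
			fmt.Println("pod missing in provider, marking failed:", key)
		}
	}
}

func main() {
	known := map[string]bool{"default/nginx": true, "default/redis": true}
	reconcile(known, []string{"default/nginx"}) // redis went missing
}
```
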
Sargun Dhillon
33df981904 Have NotifyPods store the pod status in a map (#751)
We introduce a map that can be used to store the pod status. With this,
we do not need to call GetPodStatus immediately after NotifyPods
is called. Instead, we stash the pod passed via NotifyPods
in a map we can access later. In addition, for legacy
providers, the logic to merge the pod and the pod status is
hoisted up into the loop.

It prevents leaks by deleting the entry in the map as soon
as the pod is deleted from k8s.
2019-09-04 20:14:34 +01:00
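
A minimal sketch of the idea, assuming a mutex-guarded map keyed by namespace/name; podStatus and the method names are illustrative, not the real virtual-kubelet types:

```go
package main

import (
	"fmt"
	"sync"
)

type podStatus struct{ Phase string } // stand-in for v1.PodStatus

// knownPods stashes the last status NotifyPods was handed, so GetPodStatus
// need not be called back immediately afterwards.
type knownPods struct {
	mu   sync.Mutex
	pods map[string]podStatus
}

func (k *knownPods) notify(key string, st podStatus) {
	k.mu.Lock()
	defer k.mu.Unlock()
	k.pods[key] = st
}

func (k *knownPods) get(key string) (podStatus, bool) {
	k.mu.Lock()
	defer k.mu.Unlock()
	st, ok := k.pods[key]
	return st, ok
}

// forget prevents leaks: drop the entry as soon as the pod is deleted from k8s.
func (k *knownPods) forget(key string) {
	k.mu.Lock()
	defer k.mu.Unlock()
	delete(k.pods, key)
}

func main() {
	k := &knownPods{pods: map[string]podStatus{}}
	k.notify("default/nginx", podStatus{Phase: "Running"})
	if st, ok := k.get("default/nginx"); ok {
		fmt.Println("cached status:", st.Phase)
	}
	k.forget("default/nginx") // pod deleted from k8s
}
```
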
Brian Goff
ecf6e45bfc Merge pull request #755 from sargun/fix-golang-lint
Fix golang lint
2019-09-03 11:25:21 -07:00
Sargun Dhillon
3f85705461 Upgrade linter, and move away from incremental linting
Incremental linting doesn't seem to catch issues correctly. This
runs the linters in a more standard way.
2019-09-03 11:00:33 -07:00
Sargun Dhillon
7133a372d6 Mark current linting errors as non-errors
This is basically claiming linting bankruptcy. It marks all of the
issues we had up until this point as nolint.
2019-09-03 11:00:33 -07:00
Sargun Dhillon
5949e6279d Miscellaneous cleanup for linting 2019-09-03 11:00:33 -07:00
Sargun Dhillon
9cce8640a5 Fix linting errors in node/pod_test.go
This moves away from defining pods independently. It moves pod (spec)
generation to an independent function.
2019-09-03 11:00:33 -07:00
Sargun Dhillon
7accddcaf4 Fix linting errors in node/podcontroller.go 2019-09-03 11:00:33 -07:00
Ernest Wong
ee31118596 Update docs on virtual-kubelet.io (#754)
* Update website content

* Add PodLifecycleHandler
2019-09-03 10:52:23 -07:00
Brian Goff
2507f57f97 Merge pull request #732 from sargun/move-around-reactor
Move location of event handler registration
2019-09-03 10:44:52 -07:00
Sargun Dhillon
9a461a61ad Bump the Circle CI build job to a resource_class of xlarge (#722) 2019-09-02 07:11:11 +01:00
Sargun Dhillon
9443e32ae7 Merge pull request #742 from sargun/fix-mock-provider
Fix mock_test DeletePod to store updated pod status
2019-08-25 10:52:56 -07:00
Sargun Dhillon
43ee086360 Fix mock_test DeletePod to store updated pod status 2019-08-25 10:42:35 -07:00
Sargun Dhillon
0c6de30684 Merge pull request #746 from 928234269/patch2
fix typo in doc.go
2019-08-21 08:29:46 -07:00
928234269
7305c08d7e fix typo in doc.go
Signed-off-by: 928234269 <longfei.shang@daocloud.io>
2019-08-20 18:44:11 +08:00
Sargun Dhillon
ccb6713b86 Move location of event handler registration
This moves the event handler registration to after the cache
is in sync.

It lets us use the log object from the context,
rather than having to use the global logger.

The race condition of the cache starting while the reactor
is being added won't exist, because we wait for the cache
to start up / come into sync prior to adding it.
2019-08-18 08:20:49 -07:00
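
The pattern, sketched against client-go of the kubernetes-1.15.x era these commits target (AddEventHandler returned nothing then; newer client-go also returns a registration handle and an error). This assumes a reachable cluster and a kubeconfig in the default location; error handling is minimal:

```go
package main

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(client, 0)
	podInformer := factory.Core().V1().Pods().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// Register the handler only after the cache is in sync, so handler
	// registration cannot race with cache startup.
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
		panic("cache never synced")
	}
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { /* pods observed after sync */ },
	})
}
```
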
Brian Goff
2f2625c8e2 Merge pull request #734 from sargun/do-not-change-pods
Do not mutate pods, nor hand off pod references to provider
2019-08-15 10:58:39 -07:00
Sargun Dhillon
69f1186713 Do not mutate pods, nor hand off pod references to provider
This moves to a model where, any time pods are given to a
provider, a DeepCopy is used, as opposed to a reference. If the
provider mutates the pod, this prevents it from causing issues
with the informer cache.

It has to use reflect instead of comparing hashes, because
spew prints DeepCopy'd data structures ever so slightly differently.
2019-08-15 09:59:01 -07:00
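
A hedged sketch of the rule this commit adopts, using k8s.io/api's v1.Pod; createPod below is a stand-in for handing the pod to a provider. The provider only ever sees a DeepCopy, so mutating it cannot corrupt the informer cache:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// createPod stands in for a provider call that (mis)behaves by mutating
// its argument.
func createPod(pod *v1.Pod) {
	pod.Labels["mutated-by-provider"] = "true"
}

func main() {
	cached := &v1.Pod{} // stands in for a pod fetched from the informer cache
	cached.Name = "nginx"
	cached.Labels = map[string]string{"app": "nginx"}

	createPod(cached.DeepCopy()) // provider only ever sees a copy

	fmt.Println(cached.Labels) // map[app:nginx] — cache object untouched
}
```
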
Sargun Dhillon
89d88a17ed Add a generic reactor to lifecycle_test to bump resource version (#733)
All updates in our tests should have the behaviour that best
reflects what the API server does.
2019-08-15 08:46:38 +01:00
Brian Goff
cad19238fd Merge pull request #736 from sargun/fix-race
Wait for the informer to become in sync before starting tests
2019-08-14 11:44:21 -07:00
Sargun Dhillon
bc2f6e0dc4 Wait for the informer to become in sync before starting tests
If the informers are starting at the same time as createPods,
we can get into a situation where the pod seems to get
"lost". Instead, we wait for the informer to get into sync
prior to the createPods event.

This also moves to one informer as a micro-optimization in
the tests.
2019-08-14 07:03:53 -07:00
Brian Goff
47f5aa45df Merge pull request #727 from ethan-daocloud/patch-2
cleanup: fix some typos in node.go
2019-08-13 12:00:43 -07:00
Sargun Dhillon
de238ee280 Merge pull request #731 from sargun/document-api
Add documentation to the provider API about concurrency / mutability
2019-08-13 11:58:00 -07:00
Brian Goff
569706f371 Merge branch 'master' into document-api 2019-08-13 11:47:04 -07:00
Guangming Wang
cb307df71e cleanup: fix some typos in node.go
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-13 11:39:00 -07:00
Sargun Dhillon
40a4b54ca7 Merge pull request #728 from sargun/im-an-idiot
Remove usage of atomics in tests
2019-08-13 11:34:55 -07:00
Sargun Dhillon
edc0991c0c Fix hotloop around scheduling in lifecycle_test
The lifecycle test had a hot loop, where it would run a never-yielding
function while processing was going on elsewhere. This inserts
a sleep. A sleep is used rather than a yield to be kind to
people's battery life.
2019-08-13 11:25:21 -07:00
Sargun Dhillon
fbed4ca702 Remove usage of atomics
It turns out that running atomic.Read(...) in a tight loop breaks
Golang: the goroutine never yields control to the scheduler,
so we ended up in a situation where the test would get
stuck forever. This moves to a different model, in which
there is a condition variable instead of atomics in loops.
2019-08-13 11:25:21 -07:00
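
A sketch of the replacement pattern with sync.Cond; the names are illustrative. Waiters block, yielding to the scheduler, instead of spinning on an atomic read:

```go
package main

import (
	"fmt"
	"sync"
)

// counter counts events and lets waiters block until a target is reached.
type counter struct {
	mu sync.Mutex
	c  *sync.Cond
	n  int
}

func newCounter() *counter {
	ctr := &counter{}
	ctr.c = sync.NewCond(&ctr.mu)
	return ctr
}

func (ctr *counter) inc() {
	ctr.mu.Lock()
	ctr.n++
	ctr.mu.Unlock()
	ctr.c.Broadcast() // wake any waiters to re-check the condition
}

// waitFor blocks (yielding the scheduler) until n events have been counted.
func (ctr *counter) waitFor(n int) {
	ctr.mu.Lock()
	defer ctr.mu.Unlock()
	for ctr.n < n {
		ctr.c.Wait()
	}
}

func main() {
	ctr := newCounter()
	for i := 0; i < 5; i++ {
		go ctr.inc()
	}
	ctr.waitFor(5)
	fmt.Println("saw", ctr.n, "events")
}
```
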
Sargun Dhillon
9b27eb83fe Make mock_test follow the aforementioned documentation 2019-08-13 10:30:02 -07:00
Sargun Dhillon
3b3bf3ff20 Add documentation to the provider API about concurrency / mutability
This adds documentation around what is allowed to be mutated and
what may be accessed concurrently from the provider API. Previously,
the API was ambiguous, which meant providers could return pods
and then change them. This resulted in data races occurring.
2019-08-13 10:29:12 -07:00
Sargun Dhillon
75a399f6f4 Merge pull request #724 from sargun/upgrade-k8s-v2
Upgrade k8s
2019-08-13 03:08:37 -07:00
Pires
f0a0e8cbfe Merge branch 'master' into upgrade-k8s-v2 2019-08-13 10:43:00 +01:00
Sargun Dhillon
32ff40eb56 Merge pull request #720 from sargun/set-test-timeout
Set timeout for tests on CI to 9 minutes
2019-08-12 14:53:09 -07:00
Sargun Dhillon
65c5446c94 Set timeout for tests on CI to 9 minutes
Right now, if the tests get stuck (on CI), they are terminated
after 10 minutes. This also means we get no output about
what went wrong.

Instead, this triggers a panic after 9 minutes on CI.
2019-08-12 13:45:30 -07:00
Brian Goff
cafcdeeefa Merge pull request #723 from sargun/lifecycle-test-fixes
Array of minor fixups to lifecycle tests
2019-08-12 13:22:51 -07:00
Sargun Dhillon
5c2b682cdc Array of minor fixups to lifecycle tests
* Fix the deletion test to actually test that the pod is deleted
* Fix the update pods test to update a value which is allowed
  to be updated
* Shut down watches after tests
* Do not delete pod statuses on DeletePod in mock_test

This intentionally leaks pod statuses, but it makes the situation
a lot less complicated around handling race conditions with
the GetPodStatus callback.
2019-08-12 12:10:29 -07:00
Sargun Dhillon
e1c3bc3151 Merge pull request #725 from sargun/fix-race-conditions-in-node-test
Fix race conditions in node_test
2019-08-12 11:43:06 -07:00
Sargun Dhillon
5ac33e4b0a Fix race conditions in node_test 2019-08-12 11:33:48 -07:00
Sargun Dhillon
42656aae2f Merge pull request #719 from ethan-daocloud/patch-1
cleanup: fix misspelled words in error message
2019-08-12 11:09:35 -07:00
Brian Goff
10b291dba1 Merge branch 'master' into patch-1 2019-08-12 10:48:15 -07:00
Brian Goff
9d90c599e7 Merge pull request #721 from sargun/fix-race-condition
Fix race condition around worker ID generation in podcontroller.go
2019-08-12 10:43:32 -07:00
Sargun Dhillon
82de7f02c4 Upgrade Kubernetes e2e test cluster to 1.15.2 2019-08-12 10:30:04 -07:00
Sargun Dhillon
ad6cd7d552 Upgrade K8s
* Upgrade k8s.io/api
  go get k8s.io/api@kubernetes-1.15.2
* Upgrade k8s.io/apimachinery
  go get k8s.io/apimachinery@kubernetes-1.15.2
* Upgrade k8s.io/client-go
  go get k8s.io/client-go@kubernetes-1.15.2
* Upgrade k8s.io/kubernetes to v1.15.2
  go get k8s.io/kubernetes@v1.15.2

This also locks the dependency for
github.com/prometheus/client_golang/prometheus due to a golang bug, and to
please the validation scripts.

The replaces were generated by:
go get k8s.io/kubernetes@v1.15.2 2> fail
for i in $(cat fail|grep unknown|cut -f1 -d@|cut -f2 -d" ")
  do echo "replace ${i} => ${i} kubernetes-1.15.2"
done
2019-08-12 10:29:19 -07:00
Sargun Dhillon
a28969355e Fix race condition around worker ID generation in podcontroller.go 2019-08-12 10:27:21 -07:00
ethan
75a1877d9f cleanup: fix misspelled words in error message
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-10 19:03:44 +08:00
Sargun Dhillon
a87af0818f Merge pull request #708 from sargun/better-docs
Add a little bit of documentation to NotifyPods
2019-08-08 03:10:15 -07:00
Sargun Dhillon
3efc9229ba Add a little bit of documentation to NotifyPods
As far as I can tell, based on the implementation in MockProvider,
NotifyPods is called with the mutated pod. This allows us to
take a copy of the Pod object in NotifyPods, and make it so that
(eventually) we don't need to do a callback to GetPodStatus.
2019-08-06 20:20:59 -07:00
choury
d0c91a1933 Fix log.Infof in mock (#714) 2019-08-05 20:30:59 +01:00
Sakura
7188238caa fix a to an in annotation (#715) 2019-08-05 20:13:40 +01:00
Brian Goff
9a7698b09f Merge pull request #706 from virtual-kubelet/better-test
Add a test which tests the e2e lifecycle of the pod controller
2019-07-31 11:05:29 -07:00
Sargun Dhillon
50bbc3d1d4 Add tests around updates
This makes sure the update function works correctly if the pod spec
is changed after the pod is running. While writing the test, I realized
we were accessing the variables outside of the goroutine that the
test workers were running in, and we had no locks. Therefore,
I converted all of those counters to use atomics.
2019-07-30 09:13:43 -07:00
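
A minimal sketch of that interim fix (names illustrative): counters shared between the test body and worker goroutines are accessed only through sync/atomic. A later commit in this range replaces spinning on such counters with a condition variable:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var updates int64 // shared between workers and the test body
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&updates, 1) // worker side: atomic write
		}()
	}
	wg.Wait()
	fmt.Println("updates:", atomic.LoadInt64(&updates)) // test side: atomic read
}
```
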
Sargun Dhillon
bd8e39e3f9 Add a benchmark for pod creation
This adds a benchmark for pod creation and makes the mock_test
provider actually work correctly in concurrent situations.
2019-07-30 09:12:56 -07:00
Sargun Dhillon
ce38d72c0e Add additional lifecycle tests
* Don't schedule failed or succeeded pods
* Delete dangling pods
2019-07-30 06:56:54 -07:00
Sargun Dhillon
4a270fea08 Add a test which tests the e2e lifecycle of the pod controller
This uses the mock provider, so I moved the mock provider to a
location where the node test can use it.
2019-07-30 06:56:54 -07:00
Sargun Dhillon
2974de3961 Merge pull request #711 from sargun/avoid-startup-race
Setup event handler at Pod Controller creation time
2019-07-29 09:37:28 -07:00
Sargun Dhillon
4d60fc2049 Setup event handler at Pod Controller creation time
This seems to avoid a race condition where, at pod informer
startup time, the reactor doesn't get set up properly.

It also refactors the root command example to start up
the informers after everything is wired up.
2019-07-26 13:57:00 -07:00
Brian Goff
28dac027ce Merge pull request #700 from cpuguy83/jaeger_exporter_import
Update jaeger exporter import path
2019-07-24 08:44:58 -07:00
Brian Goff
732c0a82d6 Merge branch 'master' into jaeger_exporter_import 2019-07-23 11:15:42 -07:00
Brian Goff
b056ac08bb Merge pull request #705 from virtual-kubelet/fix-new-pod-controller
Make NewPodController function validate that provider is set
2019-07-23 11:15:01 -07:00
Sargun Dhillon
ce60fb81d4 Make NewPodController function validate that provider is set
In NewPodController we validate that the rest of the config is
set to non-nil values. The provider must be non-nil as well.
2019-07-21 16:19:00 -07:00
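
A minimal sketch of the validation this commit adds; PodControllerConfig here is illustrative, and the real config has more fields, all checked for non-nil values:

```go
package main

import (
	"errors"
	"fmt"
)

// Provider is a stand-in for the provider interface.
type Provider interface {
	CreatePod(name string) error
}

// PodControllerConfig is illustrative; the real config also carries the
// client, informers, and so on.
type PodControllerConfig struct {
	Provider Provider
}

type PodController struct {
	provider Provider
}

// NewPodController rejects a nil provider up front, instead of failing
// later at runtime.
func NewPodController(cfg PodControllerConfig) (*PodController, error) {
	if cfg.Provider == nil {
		return nil, errors.New("missing provider")
	}
	return &PodController{provider: cfg.Provider}, nil
}

func main() {
	if _, err := NewPodController(PodControllerConfig{}); err != nil {
		fmt.Println("config rejected:", err) // config rejected: missing provider
	}
}
```
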
Brian Goff
46591ad811 Merge pull request #703 from zhuangqh/fix-typo
fix several typos
2019-07-19 07:15:07 -07:00
jerryzhuang
0ba0200067 fix several typos
Signed-off-by: zhuangqh <zhuangqhc@gmail.com>
2019-07-17 10:36:17 +08:00
Brian Goff
29d2bd251d Merge branch 'master' into jaeger_exporter_import 2019-07-09 11:39:39 -07:00
Brian Goff
e7e692bcb6 Update jaeger exporter import path 2019-07-05 10:22:32 -07:00
46 changed files with 2628 additions and 610 deletions

.circleci/config.yml

@@ -1,6 +1,7 @@
version: 2
jobs:
validate:
resource_class: xlarge
docker:
- image: circleci/golang:1.12
environment:
@@ -9,20 +10,28 @@ jobs:
working_directory: /go/src/github.com/virtual-kubelet/virtual-kubelet
steps:
- checkout
- restore_cache:
keys:
- validate-{{ checksum "go.mod" }}-{{ checksum "go.sum" }}
- run:
name: go vet
command: V=1 CI=1 make vet
- run:
name: Install linters
command: curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | bash -s v1.15.0 && mv ./bin/* /go/bin/
command: curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | bash -s v1.17.1 && mv ./bin/* /go/bin/
- run:
name: Lint
command: golangci-lint run --new-from-rev "HEAD~$(git rev-list master.. --count)" ./...
command: golangci-lint run ./...
- run:
name: Dependencies
command: scripts/validate/gomod.sh
- save_cache:
key: validate-{{ checksum "go.mod" }}-{{ checksum "go.sum" }}
paths:
- "/go/pkg/mod"
test:
resource_class: xlarge
docker:
- image: circleci/golang:1.12
environment:
@@ -30,12 +39,19 @@ jobs:
working_directory: /go/src/github.com/virtual-kubelet/virtual-kubelet
steps:
- checkout
- restore_cache:
keys:
- test-{{ checksum "go.mod" }}-{{ checksum "go.sum" }}
- run:
name: Build
command: V=1 make build
- run:
name: Tests
command: V=1 CI=1 make test
- save_cache:
key: test-{{ checksum "go.mod" }}-{{ checksum "go.sum" }}
paths:
- "/go/pkg/mod"
e2e:
machine:
@@ -45,7 +61,7 @@ jobs:
CHANGE_MINIKUBE_NONE_USER: true
GOPATH: /home/circleci/go
KUBECONFIG: /home/circleci/.kube/config
KUBERNETES_VERSION: v1.13.7
KUBERNETES_VERSION: v1.15.2
MINIKUBE_HOME: /home/circleci
MINIKUBE_VERSION: v1.2.0
MINIKUBE_WANTUPDATENOTIFICATION: false
@@ -93,6 +109,9 @@ jobs:
name: Watch nodes
command: kubectl get nodes -o json --watch
background: true
- restore_cache:
keys:
- e2e-{{ checksum "go.mod" }}-{{ checksum "go.sum" }}-2
- run:
name: Run the end-to-end test suite
command: |
@@ -102,6 +121,10 @@ jobs:
tar -C $HOME/.go --strip-components=1 -xzf "/tmp/go.tar.gz"
go version
make e2e
- save_cache:
key: e2e-{{ checksum "go.mod" }}-{{ checksum "go.sum" }}-2
paths:
- "/home/circleci/go/pkg/mod"
- run:
name: Collect logs on failure from vkubelet-mock-0
command: |

Makefile

@@ -83,7 +83,7 @@ ifndef CI
else
@echo "Testing in CI..."
$Q mkdir -p test
$Q ( GODEBUG=cgocheck=2 go test -v $(allpackages); echo $$? ) | \
$Q ( GODEBUG=cgocheck=2 go test -timeout=9m -v $(allpackages); echo $$? ) | \
tee test/output.txt | sed '$$ d'; exit $$(tail -1 test/output.txt)
endif

README.md

@@ -179,7 +179,7 @@ type PodLifecycleHandler interface {
```
There is also an optional interface `PodNotifier` which enables the provider to
asyncronously notify the virtual-kubelet about pod status changes. If this
asynchronously notify the virtual-kubelet about pod status changes. If this
interface is not implemented, virtual-kubelet will periodically check the status
of all pods.
@@ -259,49 +259,7 @@ Running the unit tests locally is as simple as `make test`.
### End-to-end tests
Virtual Kubelet includes an end-to-end (e2e) test suite which is used to validate its implementation.
The current e2e suite **does not** run for any providers other than the `mock` provider.
To run the e2e suite, three things are required:
- a local Kubernetes cluster (we have tested with [Docker for Mac](https://docs.docker.com/docker-for-mac/install/) and [Minikube](https://github.com/kubernetes/minikube));
- Your _kubeconfig_ default context points to the local Kubernetes cluster;
- [`skaffold`](https://github.com/GoogleContainerTools/skaffold).
Since our CI uses Minikube, we describe below how to run e2e on top of it.
To create a Minikube cluster, run the following command after [installing Minikube](https://github.com/kubernetes/minikube#installation):
```console
$ minikube start
```
The e2e suite requires Virtual Kubelet to be running as a pod inside the Kubernetes cluster.
In order to make the testing process easier, the build toolchain leverages on `skaffold` to automatically deploy the Virtual Kubelet to the Kubernetes cluster using the mock provider.
To run the e2e test suite, you can run the following command:
```console
$ make e2e
```
When you're done testing, you can run the following command to cleanup the resources created by `skaffold`:
```console
$ make skaffold MODE=delete
```
Please note that this will not unregister the Virtual Kubelet as a node in the Kubernetes cluster.
In order to do so, you should run:
```console
$ kubectl delete node vkubelet-mock-0
```
To clean up all resources you can run:
```console
$ make e2e.clean
```
Check out [`test/e2e`](./test/e2e) for more details.
## Known quirks and workarounds


@@ -41,7 +41,7 @@ func NewCommand(s *provider.Store) *cobra.Command {
fmt.Fprintln(cmd.OutOrStderr(), "no such provider", args[0])
// TODO(@cpuuy83): would be nice to not short-circuit the exit here
// But at the momemt this seems to be the only way to exit non-zero and
// But at the moment this seems to be the only way to exit non-zero and
// handle our own error output
os.Exit(1)
}


@@ -69,7 +69,7 @@ func installFlags(flags *pflag.FlagSet, c *Opts) {
flags.StringVar(&c.TaintKey, "taint", c.TaintKey, "Set node taint key")
flags.BoolVar(&c.DisableTaint, "disable-taint", c.DisableTaint, "disable the virtual-kubelet node taint")
flags.MarkDeprecated("taint", "Taint key should now be configured using the VK_TAINT_KEY environment variable")
flags.MarkDeprecated("taint", "Taint key should now be configured using the VK_TAINT_KEY environment variable") //nolint:errcheck
flags.IntVar(&c.PodSyncWorkers, "pod-sync-workers", c.PodSyncWorkers, `set the number of pod synchronization workers`)
flags.BoolVar(&c.EnableNodeLease, "enable-node-lease", c.EnableNodeLease, `use node leases (1.13) for node heartbeats`)


@@ -18,7 +18,6 @@ import (
"context"
"os"
"path"
"time"
"github.com/pkg/errors"
"github.com/spf13/cobra"
@@ -102,9 +101,6 @@ func runRootCommand(ctx context.Context, s *provider.Store, c Opts) error {
configMapInformer := scmInformerFactory.Core().V1().ConfigMaps()
serviceInformer := scmInformerFactory.Core().V1().Services()
go podInformerFactory.Start(ctx.Done())
go scmInformerFactory.Start(ctx.Done())
rm, err := manager.NewResourceManager(podInformer.Lister(), secretInformer.Lister(), configMapInformer.Lister(), serviceInformer.Lister())
if err != nil {
return errors.Wrap(err, "could not create resource manager")
@@ -194,6 +190,9 @@ func runRootCommand(ctx context.Context, s *provider.Store, c Opts) error {
return errors.Wrap(err, "error setting up pod controller")
}
go podInformerFactory.Start(ctx.Done())
go scmInformerFactory.Start(ctx.Done())
cancelHTTP, err := setupHTTPServer(ctx, p, apiConfig)
if err != nil {
return err
@@ -207,11 +206,16 @@ func runRootCommand(ctx context.Context, s *provider.Store, c Opts) error {
}()
if c.StartupTimeout > 0 {
// If there is a startup timeout, it does two things:
// 1. It causes the VK to shutdown if we haven't gotten into an operational state in a time period
// 2. It prevents node advertisement from happening until we're in an operational state
err = waitFor(ctx, c.StartupTimeout, pc.Ready())
if err != nil {
ctx, cancel := context.WithTimeout(ctx, c.StartupTimeout)
log.G(ctx).Info("Waiting for pod controller / VK to be ready")
select {
case <-ctx.Done():
cancel()
return ctx.Err()
case <-pc.Ready():
}
cancel()
if err := pc.Err(); err != nil {
return err
}
}
@@ -228,21 +232,6 @@ func runRootCommand(ctx context.Context, s *provider.Store, c Opts) error {
return nil
}
func waitFor(ctx context.Context, time time.Duration, ready <-chan struct{}) error {
ctx, cancel := context.WithTimeout(ctx, time)
defer cancel()
// Wait for the VK / PC close the the ready channel, or time out and return
log.G(ctx).Info("Waiting for pod controller / VK to be ready")
select {
case <-ready:
return nil
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "Error while starting up VK")
}
}
func newClient(configPath string) (*kubernetes.Clientset, error) {
var config *rest.Config


@@ -39,7 +39,7 @@ func RegisterTracingExporter(name string, f TracingExporterInitFunc) {
}
// GetTracingExporter gets the specified tracing exporter passing in the options to the exporter init function.
// For an exporter to be availbale here it must be registered with `RegisterTracingExporter`.
// For an exporter to be available here it must be registered with `RegisterTracingExporter`.
func GetTracingExporter(name string, opts TracingExporterOptions) (trace.Exporter, error) {
f, ok := tracingExporters[name]
if !ok {


@@ -20,7 +20,7 @@ import (
"errors"
"os"
"go.opencensus.io/exporter/jaeger"
"contrib.go.opencensus.io/exporter/jaeger"
"go.opencensus.io/trace"
)


@@ -42,7 +42,7 @@ var (
*/
// MockV0Provider implements the virtual-kubelet provider interface and stores pods in memory.
type MockV0Provider struct {
type MockV0Provider struct { //nolint:golint
nodeName string
operatingSystem string
internalIP string
@@ -54,12 +54,12 @@ type MockV0Provider struct {
}
// MockProvider is like MockV0Provider, but implements the PodNotifier interface
type MockProvider struct {
type MockProvider struct { //nolint:golint
*MockV0Provider
}
// MockConfig contains a mock virtual-kubelet's configurable parameters.
type MockConfig struct {
type MockConfig struct { //nolint:golint
CPU string `json:"cpu,omitempty"`
Memory string `json:"memory,omitempty"`
Pods string `json:"pods,omitempty"`
@@ -308,7 +308,7 @@ func (p *MockV0Provider) GetContainerLogs(ctx context.Context, namespace, podNam
// Add pod and container attributes to the current span.
ctx = addAttributes(ctx, span, namespaceKey, namespace, nameKey, podName, containerNameKey, containerName)
log.G(ctx).Info("receive GetContainerLogs %q", podName)
log.G(ctx).Infof("receive GetContainerLogs %q", podName)
return ioutil.NopCloser(strings.NewReader("")), nil
}
@@ -355,7 +355,7 @@ func (p *MockV0Provider) GetPods(ctx context.Context) ([]*v1.Pod, error) {
}
func (p *MockV0Provider) ConfigureNode(ctx context.Context, n *v1.Node) {
ctx, span := trace.StartSpan(ctx, "mock.ConfigureNode")
ctx, span := trace.StartSpan(ctx, "mock.ConfigureNode") //nolint:ineffassign
defer span.End()
n.Status.Capacity = p.capacity()
@@ -453,7 +453,8 @@ func (p *MockV0Provider) nodeDaemonEndpoints() v1.NodeDaemonEndpoints {
// GetStatsSummary returns dummy stats for all pods known by this provider.
func (p *MockV0Provider) GetStatsSummary(ctx context.Context) (*stats.Summary, error) {
ctx, span := trace.StartSpan(ctx, "GetStatsSummary")
var span trace.Span
ctx, span = trace.StartSpan(ctx, "GetStatsSummary") //nolint: ineffassign
defer span.End()
// Grab the current timestamp so we can report it as the time the stats were generated.


@@ -38,7 +38,7 @@ import (
var (
buildVersion = "N/A"
buildTime = "N/A"
k8sVersion = "v1.13.7" // This should follow the version of k8s.io/kubernetes we are importing
k8sVersion = "v1.15.2" // This should follow the version of k8s.io/kubernetes we are importing
)
func main() {


@@ -6,7 +6,7 @@ import (
)
func registerMock(s *provider.Store) {
s.Register("mock", func(cfg provider.InitConfig) (provider.Provider, error) {
s.Register("mock", func(cfg provider.InitConfig) (provider.Provider, error) { //nolint:errcheck
return mock.NewMockProvider(
cfg.ConfigPath,
cfg.NodeName,
@@ -16,7 +16,7 @@ func registerMock(s *provider.Store) {
)
})
s.Register("mockV0", func(cfg provider.InitConfig) (provider.Provider, error) {
s.Register("mockV0", func(cfg provider.InitConfig) (provider.Provider, error) { //nolint:errcheck
return mock.NewMockProvider(
cfg.ConfigPath,
cfg.NodeName,

doc.go

@@ -14,7 +14,7 @@ code wrapping what is provided in the node package is what consumers of this
project would implement. In the interest of not duplicating examples, please
see that package on how to get started using virtual kubelet.
Virtual Kubelet supports propgagation of logging and traces through a context.
Virtual Kubelet supports propagation of logging and traces through a context.
See the "log" and "trace" packages for how to use this.
Errors produced by and consumed from the node package are expected to conform to


@@ -1,4 +1,4 @@
// Package errdefs defines the error types that are understood by other packages
// in this project. Consumers of this project should look here to know how to
// produce and consume erors for this project.
// produce and consume errors for this project.
package errdefs

go.mod

@@ -3,59 +3,85 @@ module github.com/virtual-kubelet/virtual-kubelet
go 1.12
require (
contrib.go.opencensus.io/exporter/jaeger v0.1.0
contrib.go.opencensus.io/exporter/ocagent v0.4.12
github.com/davecgh/go-spew v1.1.1
github.com/docker/spdystream v0.0.0-20170912183627-bc6354cbbc29 // indirect
github.com/elazarl/goproxy v0.0.0-20190421051319-9d40249d3c2f // indirect
github.com/elazarl/goproxy/ext v0.0.0-20190421051319-9d40249d3c2f // indirect
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2 // indirect
github.com/evanphx/json-patch v4.1.0+incompatible // indirect
github.com/gogo/protobuf v1.2.1 // indirect
github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff // indirect
github.com/google/btree v1.0.0 // indirect
github.com/google/go-cmp v0.2.0
github.com/google/go-cmp v0.3.1
github.com/google/gofuzz v1.0.0 // indirect
github.com/googleapis/gnostic v0.1.0 // indirect
github.com/gorilla/mux v1.6.2
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/gorilla/mux v1.7.0
github.com/hashicorp/golang-lru v0.5.1 // indirect
github.com/imdario/mergo v0.3.4 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/json-iterator/go v1.1.6 // indirect
github.com/konsorten/go-windows-terminal-sequences v1.0.2 // indirect
github.com/mitchellh/go-homedir v1.1.0
github.com/modern-go/reflect2 v1.0.1 // indirect
github.com/onsi/ginkgo v1.8.0 // indirect
github.com/onsi/gomega v1.5.0 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829
github.com/sirupsen/logrus v1.4.1
github.com/spf13/cobra v0.0.2
github.com/spf13/pflag v1.0.3
github.com/stretchr/testify v1.3.0 // indirect
go.opencensus.io v0.20.2
go.opencensus.io v0.21.0
golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c // indirect
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3 // indirect
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a // indirect
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e // indirect
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db // indirect
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 // indirect
google.golang.org/api v0.3.2 // indirect
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107 // indirect
google.golang.org/grpc v1.20.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.2.2 // indirect
gotest.tools v0.0.0-20181223230014-1083505acf35
k8s.io/api v0.0.0-20190222213804-5cb15d344471
k8s.io/apimachinery v0.0.0-20190221213512-86fb29eff628
k8s.io/apiserver v0.0.0-20181213151703-3ccfe8365421 // indirect
gotest.tools v2.2.0+incompatible
k8s.io/api v0.0.0
k8s.io/apimachinery v0.0.0
k8s.io/client-go v10.0.0+incompatible
k8s.io/klog v0.1.0
k8s.io/klog v0.3.1
k8s.io/kube-openapi v0.0.0-20190510232812-a01b7d5d6c22 // indirect
k8s.io/kubernetes v1.13.7
k8s.io/utils v0.0.0-20180801164400-045dc31ee5c4 // indirect
sigs.k8s.io/yaml v1.1.0 // indirect
k8s.io/kubernetes v1.15.2
)
replace k8s.io/api => k8s.io/api v0.0.0-20190222213804-5cb15d344471
replace k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.0.0-20190805144654-3d5bf3a310c1
replace k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20190221213512-86fb29eff628
replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.0.0-20190805144409-8484242760e7
replace k8s.io/cli-runtime => k8s.io/cli-runtime v0.0.0-20190805143448-a07e59fb081d
replace k8s.io/apiserver => k8s.io/apiserver v0.0.0-20190805142138-368b2058237c
replace k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.0.0-20190805144531-3985229e1802
replace k8s.io/cri-api => k8s.io/cri-api v0.0.0-20190531030430-6117653b35f1
replace k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.0.0-20190805142416-fd821fbbb94e
replace k8s.io/kubelet => k8s.io/kubelet v0.0.0-20190805143852-517ff267f8d1
replace k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.0.0-20190805144128-269742da31dd
replace k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719
replace k8s.io/api => k8s.io/api v0.0.0-20190805141119-fdd30b57c827
replace k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.0.0-20190805144246-c01ee70854a1
replace k8s.io/kube-proxy => k8s.io/kube-proxy v0.0.0-20190805143734-7f1675b90353
replace k8s.io/component-base => k8s.io/component-base v0.0.0-20190805141645-3a5e5ac800ae
replace k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.0.0-20190805144012-2a1ed1f3d8a4
replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.0.0-20190805143126-cdb999c96590
replace k8s.io/metrics => k8s.io/metrics v0.0.0-20190805143318-16b07057415d
replace k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.0.0-20190805142637-3b65bc4bb24f
replace k8s.io/code-generator => k8s.io/code-generator v0.0.0-20190612205613-18da4a14b22b
replace k8s.io/client-go => k8s.io/client-go v0.0.0-20190805141520-2fe0317bcee0

go.sum

@@ -1,181 +1,384 @@
bitbucket.org/bertimus9/systemstat v0.0.0-20180207000608-0eeff89b0690/go.mod h1:Ulb78X89vxKYgdL24HMTiXYHlyHEvruOj1ZPlqeNEZM=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
contrib.go.opencensus.io/exporter/jaeger v0.1.0 h1:WNc9HbA38xEQmsI40Tjd/MNU/g8byN2Of7lwIjv0Jdc=
contrib.go.opencensus.io/exporter/jaeger v0.1.0/go.mod h1:VYianECmuFPwU37O699Vc1GOcy+y8kOsfaxHRImmjbA=
contrib.go.opencensus.io/exporter/ocagent v0.4.12 h1:jGFvw3l57ViIVEPKKEUXPcLYIXJmQxLUh6ey1eJhwyc=
contrib.go.opencensus.io/exporter/ocagent v0.4.12/go.mod h1:450APlNTSR6FrvC3CTRqYosuDstRB9un7SOx2k/9ckA=
github.com/Azure/azure-sdk-for-go v21.4.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-autorest v11.1.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/BurntSushi/toml v0.3.0/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/GoogleCloudPlatform/k8s-cloud-provider v0.0.0-20181220005116-f8e995905100/go.mod h1:iroGtC8B3tQiqtds1l+mgk/BBOrxbqjH+eUfFQYRc14=
github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab/go.mod h1:3VYc5hodBMJ5+l/7J4xAyMeuM2PNuepvHlGs8yilUCA=
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd/go.mod h1:64YHyfSL2R96J44Nlwm39UHepQbyR5q10x7iYa1ks2E=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Rican7/retry v0.1.0/go.mod h1:FgOROf8P5bebcC1DS0PdOQiqGUridaZvikzUmkFW6gg=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/apache/thrift v0.12.0 h1:pODnxUFNcjP9UTLZGTdeh+j16A8lJbRvD3rOtrk/7bs=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/auth0/go-jwt-middleware v0.0.0-20170425171159-5493cabe49f7/go.mod h1:LWMyo4iOLWXHGdBki7NIht1kHru/0wM179h+d3g8ATM=
github.com/aws/aws-sdk-go v1.16.26/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/bazelbuild/bazel-gazelle v0.0.0-20181012220611-c728ce9f663e/go.mod h1:uHBSeeATKpVazAACZBDPL/Nk/UhQDDsJWDlqYJo8/Us=
github.com/bazelbuild/buildtools v0.0.0-20180226164855-80c7f0d45d7e/go.mod h1:5JP0TXzWDHXv8qvxRC4InIazwdyDseBDbzESUMKk1yU=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=
github.com/census-instrumentation/opencensus-proto v0.2.0 h1:LzQXZOgg4CQfE6bFvXGM30YZL1WW/M337pXml+GrcZ4=
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/prettybench v0.0.0-20150116022406-03b8cfe5406c/go.mod h1:Xe6ZsFhtM8HrDku0pxJ3/Lr51rwykrzgFwpmTzleatY=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
github.com/client9/misspell v0.0.0-20170928000206-9ce5d979ffda/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/cfssl v0.0.0-20180726162950-56268a613adf/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
github.com/clusterhq/flocker-go v0.0.0-20160920122132-2b8b7259d313/go.mod h1:P1wt9Z3DP8O6W3rvwCt0REIlshg1InHImaLW0t3ObY0=
github.com/codedellemc/goscaleio v0.0.0-20170830184815-20e2ce2cf885/go.mod h1:JIHmDHNZO4tmA3y3RHp6+Gap6kFsNf55W9Pn/3YS9IY=
github.com/codegangsta/negroni v1.0.0/go.mod h1:v0y3T5G7Y1UlFfyxFn/QLRU4a2EuNau2iZY63YTKWo0=
github.com/container-storage-interface/spec v1.1.0/go.mod h1:6URME8mwIBbpVyZV93Ce5St17xBiQJQY67NDsuohiy4=
github.com/containerd/console v0.0.0-20170925154832-84eeaae905fa/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/containerd v1.0.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containernetworking/cni v0.6.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/coreos/bbolt v1.3.1-coreos.6/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-oidc v0.0.0-20180117170138-065b426bd416/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.0.0-20180108230905-e214231b295a/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/rkt v1.30.0/go.mod h1:O634mlH6U7qk87poQifK6M2rsFNt+FyUTWNMnP1hF1U=
github.com/cpuguy83/go-md2man v1.0.4/go.mod h1:N6JayAiVKtlHSnuTCeuLSQVs75hb8q+dYQLjr7cDsKY=
github.com/cyphar/filepath-securejoin v0.0.0-20170720062807-ae69057f2299/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
github.com/d2g/dhcp4client v0.0.0-20170829104524-6e570ed0a266/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/dgrijalva/jwt-go v0.0.0-20160705203006-01aeca54ebda/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/docker/distribution v0.0.0-20170726174610-edc3ab29cdff/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/libnetwork v0.0.0-20180830151422-a9cd636e3789/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docker/spdystream v0.0.0-20170912183627-bc6354cbbc29 h1:llBx5m8Gk0lrAaiLud2wktkX/e8haX7Ru0oVfQqtZQ4=
github.com/docker/spdystream v0.0.0-20170912183627-bc6354cbbc29/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/elazarl/goproxy v0.0.0-20190421051319-9d40249d3c2f h1:8GDPb0tCY8LQ+OJ3dbHb5sA6YZWXFORQYZx5sdsTlMs=
github.com/elazarl/goproxy v0.0.0-20190421051319-9d40249d3c2f/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/elazarl/goproxy/ext v0.0.0-20190421051319-9d40249d3c2f h1:AUj1VoZUfhPhOPHULCQQDnGhRelpFWHMLhQVWDsS0v4=
github.com/elazarl/goproxy/ext v0.0.0-20190421051319-9d40249d3c2f/go.mod h1:gNh8nYJoAm43RfaxurUnxr+N1PwuFV3ZMl/efxlIlY8=
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2 h1:dWB6v3RcOy03t/bUadywsbyrQwCqZeNIEX6M1OtSZOM=
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2/go.mod h1:gNh8nYJoAm43RfaxurUnxr+N1PwuFV3ZMl/efxlIlY8=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/euank/go-kmsg-parser v2.0.0+incompatible/go.mod h1:MhmAMZ8V4CYH4ybgdRwPr2TU5ThnS43puaKEMpja1uw=
github.com/evanphx/json-patch v0.0.0-20190203023257-5858425f7550/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.1.0+incompatible h1:K1MDoo4AZ4wU0GIU/fPmtZg7VpzLjCxu+UwBD1FvwOc=
github.com/evanphx/json-patch v4.1.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/fatih/camelcase v0.0.0-20160318181535-f6a740d52f96/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v0.0.0-20180820084758-c7ce16629ff4/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI=
github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
github.com/go-openapi/analysis v0.17.2/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
github.com/go-openapi/errors v0.17.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
github.com/go-openapi/errors v0.17.2/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonpointer v0.19.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
github.com/go-openapi/jsonreference v0.17.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/jsonreference v0.19.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/loads v0.17.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
github.com/go-openapi/loads v0.17.2/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
github.com/go-openapi/runtime v0.0.0-20180920151709-4f900dc2ade9/go.mod h1:6v9a6LTXWQCdL8k1AO3cvqx5OtZY/Y9wKTgaoP6YRfA=
github.com/go-openapi/runtime v0.17.2/go.mod h1:QO936ZXeisByFmZEO1IS1Dqhtf4QV1sYYFtIq6Ld86Q=
github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
github.com/go-openapi/spec v0.17.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/spec v0.17.2/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/strfmt v0.17.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.17.2/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/validate v0.17.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4=
github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4=
github.com/go-ozzo/ozzo-validation v3.5.0+incompatible/go.mod h1:gsEKFIVnabGBt6mXmxK0MoFy+cZoTJY6mu5Ll3LVLBU=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/protobuf v0.0.0-20171007142547-342cbe0a0415/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff h1:kOkM9whyQYodu09SJ6W3NCsHG7crFaJILQ22Gozp3lg=
github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v0.0.0-20160127222235-bd3c8e81be01/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
github.com/golangplus/bytes v0.0.0-20160111154220-45c989fe5450/go.mod h1:Bk6SMAONeMXrxql8uvOKuAZSu8aM5RUGv+1C6IJaEho=
github.com/golangplus/fmt v0.0.0-20150411045040-2a5d6d7d2995/go.mod h1:lJgMEyOkYFkPcDKwRXegd+iM6E7matEszMG5HhwytU8=
github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk=
github.com/google/btree v0.0.0-20160524151835-7d79101e329e/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/cadvisor v0.33.2-0.20190411163913-9db8c7dee20a/go.mod h1:1nql6U13uTHaLYB8rLS5x9IJc2qT6Xd/Tr1sTX6NE48=
github.com/google/certificate-transparency-go v1.0.21/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.0.0 h1:b4Gk+7WdP/d3HZH8EJsZpvV7EtDOgaZLtnaNGIu1adA=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gnostic v0.0.0-20170426233943-68f4ded48ba9/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.1.0 h1:rVsPeBmXbYv4If/cumu1AzZPwV58q433hvONV1UEZoI=
github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
github.com/gophercloud/gophercloud v0.0.0-20190126172459-c818fa66e4c8/go.mod h1:3WdhXV3rUYy9p6AUW8d94kr+HS62Y4VL9mBnFxsD8q4=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2 h1:Pgr17XVTNXAk3q/r4CpKzC5xBM/qW1uVLV+IhRZpIIk=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 h1:pdN6V1QBWetyv/0+wjACpqVH+eVULgEjkurDLq3goeM=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/gorilla/mux v1.7.0 h1:tOSd0UKHQd6urX6ApfOn4XdBMY6Sh1MfxV3kmaazO+U=
github.com/gorilla/mux v1.7.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gregjones/httpcache v0.0.0-20170728041850-787624de3eb7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v0.0.0-20190222133341-cfaf5686ec79/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v0.0.0-20170330212424-2500245aa611/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.3.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.8.5 h1:2+KSC78XiO6Qy0hIjfc1OD9H+hsaJdJlb8Kqsd41CTE=
github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v0.0.0-20160711231752-d8c773c4cba1/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
github.com/heketi/heketi v0.0.0-20181109135656-558b29266ce0/go.mod h1:bB9ly3RchcQqsQ9CpyaQwvva7RS5ytVoSoholZQON6o=
github.com/heketi/rest v0.0.0-20180404230133-aa6a65207413/go.mod h1:BeS3M108VzVlmAue3lv2WcGuPAX94/KN63MUURzbYSI=
github.com/heketi/tests v0.0.0-20151005000721-f3775cbcefd6/go.mod h1:xGMAM8JLi7UkZt1i4FQeQy0R2T8GLUwQhOP5M1gBhy4=
github.com/heketi/utils v0.0.0-20170317161834-435bc5bdfa64/go.mod h1:RYlF4ghFZPPmk2TC5REt5OFwvfb6lzxFWrTWB+qs28s=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/imdario/mergo v0.3.4 h1:mKkfHkZWD8dC7WxKx3N9WCF0Y+dLau45704YQmY6H94=
github.com/imdario/mergo v0.3.4/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.0.0-20141017032234-72f9bd7c4e0c/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.6 h1:MrUvLMLTMxbqFJ9kzlvat/rYZqZnW3u4wkLzWTaFwKs=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jteeuwen/go-bindata v0.0.0-20151023091102-a0ff2567cfb7/go.mod h1:JVvhzYOiGBnFSYRyV00iY8q7/0PThjIYav1p9h5dmKs=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kardianos/osext v0.0.0-20150410034420-8fef92e41e22/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
github.com/karrick/godirwalk v1.7.5/go.mod h1:2c9FRhkDxdIbgkOnCEvnSWs71Bhugbl46shStcFDJ34=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2 h1:DB17ag19krx9CFsz4o3enTrPXyIXCl+2iCXH/aMAp9s=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/fs v0.0.0-20131111012553-2788f0dbd169/go.mod h1:glhvuHOU9Hy7/8PwwdtnarXqLagOX0b/TbZx2zLMqEg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.0.0-20140812000539-f31442d60e51/go.mod h1:Bvhd+E3laJ0AVkG0c9rmtZcnhV0HQ3+c3YxxqTvc/gA=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.0.0-20130911015532-6807e777504f/go.mod h1:sjUstKUATFIcff4qlB53Kml0wQPtJVc/3fWrmuUmcfA=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/libopenstorage/openstorage v0.0.0-20170906232338-093a0c388875/go.mod h1:Sp1sIObHjat1BeXhfMqLZ14wnOzEhNx2YQedreMcUyc=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z9BP0jIOc=
github.com/lpabon/godbc v0.1.1/go.mod h1:Jo9QV0cf3U6jZABgiJ2skINAXb9j8m51r07g4KI92ZA=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/magiconair/properties v0.0.0-20160816085511-61b492c03cf4/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/marstr/guid v0.0.0-20170427235115-8bdf7d1a087c/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/mattn/go-shellwords v0.0.0-20180605041737-f8471b0a71de/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mesos/mesos-go v0.0.9/go.mod h1:kPYCMQ9gsOXVAle1OsoY4I1+9kPu8GHkf88aV59fDr4=
github.com/mholt/caddy v0.0.0-20180213163048-2de495001514/go.mod h1:Wb1PlT4DAYSqOEd03MsqkdkXnTxA8v9pKjdpxbqM1kY=
github.com/miekg/dns v0.0.0-20160614162101-5d001d020961/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mindprince/gonvml v0.0.0-20171110221305-fee913ce8fb2/go.mod h1:2eu9pRWp8mo84xCg6KswZ+USQHjwgRhNp06sozOdsTY=
github.com/mistifyio/go-zfs v0.0.0-20151009155749-1b4ae6fb4e77/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/morikuni/aec v0.0.0-20170113033406-39771216ff4c/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mrunalp/fileutils v0.0.0-20160930181131-4ee1cc9a8058/go.mod h1:x8F1gnqOkIEiO4rqoeEEEqQbo7HjGMTvyoq3gej4iT0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mvdan/xurls v0.0.0-20160110113200-1b768d7c393a/go.mod h1:tQlNn3BED8bE/15hnSL2HLkDeLWpNPAwtw7wkEq44oU=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/natefinch/lumberjack v2.0.0+incompatible/go.mod h1:Wi9p2TTF5DG5oU+6YfsmYQpsTIOm0B1VNzQg9Mw6nPk=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20190113212917-5533ce8a0da3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/image-spec v0.0.0-20170604055404-372ad780f634/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.0.0-20181113202123-f000fe11ece1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runtime-spec v1.0.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v0.0.0-20170621221121-4a2974bf1ee9/go.mod h1:+BLncwf63G4dgOzykXAxcmnFlUaOlkDdmw/CqsW6pjs=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.0.1/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v0.0.0-20160930220758-4d0e916071f6/go.mod h1:NxmoDg/QLVWluQDUYG7XBZTLUpKeFa8e3aMf1BfjyHk=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/pquerna/ffjson v0.0.0-20180717144149-af8b230fcd20/go.mod h1:YARuvh7BUWHNhzDq2OM5tzR2RiCcN2D7sapiKyCel/M=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829 h1:D+CiwcpGTW6pL6bv6KI3KbyEyCKyS+1JWS2h8PNDnGA=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f h1:BVwpUVJDADN2ufcGik7W992pyps0wZ888b/y9GXcLTU=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0 h1:kUZDBDTdBVBYBj5Tmh2NZLlF60mfjA27rM34b+cVwNU=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1 h1:/K3IL0Z1quvmJ7X0A1AwNEK7CRkVK3YwfOU/QAL4WGg=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/quobyte/api v0.1.2/go.mod h1:jL7lIHrmqQ7yh05OJ+eEEdHr0u/kmT1Ff9iHd+4H6VI=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/remyoudompheng/bigfft v0.0.0-20170806203942-52369c62f446/go.mod h1:uYEyJGbgTkfkS4+E/PavXkNJcbFIpEtjt2B0KDQ5+9M=
github.com/robfig/cron v0.0.0-20170309132418-df38d32658d8/go.mod h1:JGuDeoQd7Z6yL4zQhZ3OPEVHB7fL6Ka6skscFHfmt2k=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-charset v0.0.0-20180617210344-2471d30d28b4/go.mod h1:qgYeAmZ5ZIpBWTGllZSQnw97Dj+woV0toclVaRGI8pc=
github.com/rubiojr/go-vhd v0.0.0-20160810183302-0bfd3b39853c/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto=
github.com/russross/blackfriday v0.0.0-20151117072312-300106c228d5/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/seccomp/libseccomp-golang v0.0.0-20150813023252-1b506fc7c24e/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/shurcooL/sanitized_anchor_name v0.0.0-20151028001915-10ef21a441db/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sigma/go-inotify v0.0.0-20181102212354-c87b6cf5033d/go.mod h1:stlh9OsqBQSdwxTxX73mu41BBtRbIpZLQ7flcAoxAfo=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1 h1:GL2rEmy6nsikmW0r8opw9JIRScdMF5hA8cOYLH7In1k=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.3/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spf13/afero v0.0.0-20160816080757-b28a7effac97/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/cast v0.0.0-20160730092037-e31f36ffc91a/go.mod h1:r2rcYCSwa1IExKTDiTfzaxqT2FNHs8hODu4LnUfgKEg=
github.com/spf13/cobra v0.0.0-20180319062004-c439c4fa0937/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.2 h1:NfkwRbgViGoyjBKsLI0QMDcuMnhM+SBg3T0cGfpvKDE=
github.com/spf13/cobra v0.0.2/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/jwalterweatherman v0.0.0-20160311093646-33c24e77fb80/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v0.0.0-20160820190039-7fb2782df3d8/go.mod h1:A8kyI5cUJhb8N+3pkfONlcEcZbueH6nhAm0Fq7SrnBM=
github.com/storageos/go-api v0.0.0-20180912212459-343b3eff91fc/go.mod h1:ZrLn+e0ZuF3Y65PNF6dIwbJPZqfmtCXxFm9ckv0agOY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/syndtr/gocapability v0.0.0-20160928074757-e7cb7fa329f4/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/urfave/negroni v1.0.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4=
github.com/vishvananda/netlink v0.0.0-20171020171820-b2de5d10e38e/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
github.com/vishvananda/netns v0.0.0-20171111001504-be1fbeda1936/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vmware/govmomi v0.20.1/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU=
github.com/vmware/photon-controller-go-sdk v0.0.0-20170310013346-4a435daef6cc/go.mod h1:e6humHha1ekIwTCm+A5Qed5mG8V4JL+ChHcUOJ+L/8U=
github.com/xanzy/go-cloudstack v0.0.0-20160728180336-1e2cbf647e57/go.mod h1:s3eL3z5pNXF5FVybcT+LIVdId8pYn709yv6v5mrkrQE=
github.com/xiang90/probing v0.0.0-20160813154853-07dd2e8dfe18/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/handysort v0.0.0-20150421192137-fb3537ed64a1/go.mod h1:QcJo0QPSfTONNIgpN5RA8prR7fF8nkF6cTWTcNerRO8=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2 h1:NAfh7zF0/3/HqtMvJNZ/RFrSlCE6ZTlHmKfhL/Dm1Jk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0 h1:mU6zScU4U1YAFPHEHYk+3JC4SY7JxgkqS10ZOSyksNg=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.uber.org/atomic v0.0.0-20181018215023-8dc6146f7569/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v0.0.0-20180122172545-ddea229ff1df/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v0.0.0-20180814183419-67bc79d13d15/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c h1:Vj5n4GlwjmQteupaxJ9+0FNOmBrHfq7vN4btdGoDZgI=
golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190312203227-4b39c73a6495/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181217174547-8f45f776aaf1/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3 h1:0GoQqolDA55aaLxZyTzK/Y2ePZzZTUrRacwib7cNsYQ=
@@ -193,37 +396,50 @@ golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181004145325-8469e314837c/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e h1:nFYrTHrdrAOpShe27kaFHjsqYSEQ0KWqdWLu3xuZJts=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db h1:6/JqlYfC1CCaLnGceQTI+sDGhC9UBSPAsBqI0Gun6kU=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20161028155119-f51c12702a4d/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20170824195420-5d2fd3ccab98/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180810170437-e96c4e24768d/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190206041539-40960b6deb8e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
gonum.org/v1/gonum v0.0.0-20190331200053-3d26580ed485/go.mod h1:2ltnJ7xHfj0zHS40VVPYEAAMTa3ZGguvHGBSJeRWqE0=
gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw=
gonum.org/v1/netlib v0.0.0-20190331212654-76723241ea4e/go.mod h1:kS+toOQn6AQKjmKJ7gzohV1XkqsFehRA2FbsbkopSuQ=
google.golang.org/api v0.0.0-20181220000619-583d854617af/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.3.2 h1:iTp+3yyl/KOtxa/d1/JUE0GGSoR6FuW5udver22iwpw=
google.golang.org/api v0.3.2/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20170731182057-09f6ed296fc6/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107 h1:xtNn7qFlagY2mQNFHMSRPjT2RkOV4OXM7P5TVy9xATo=
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/grpc v1.13.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -235,37 +451,71 @@ gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gcfg.v1 v1.2.0/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
gopkg.in/inf.v0 v0.9.0/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.0.0-20150622162204-20b71e5b60d7/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.0.0-20180411045311-89060dee6a84/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.1/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v1 v1.0.0-20140924161607-9f9df34309c0/go.mod h1:WDnlLJ4WF5VGsH/HVa3CI79GS0ol3YnhVnKP89i0kNg=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gotest.tools v0.0.0-20181223230014-1083505acf35 h1:zpdCK+REwbk+rqjJmHhiCN6iBIigrZ39glqSF0P3KF0=
gotest.tools v0.0.0-20181223230014-1083505acf35/go.mod h1:R//lfYlUuTOTfblYI3lGoAAAebUdzjvbmQsuB7Ykd90=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.0.0-20190222213804-5cb15d344471 h1:MzQGt8qWQCR+39kbYRd0uQqsvSidpYqJLFeWiJ9l4OE=
k8s.io/api v0.0.0-20190222213804-5cb15d344471/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/apimachinery v0.0.0-20190221213512-86fb29eff628 h1:UYfHH+KEF88OTg+GojQUwFTNxbxwmoktLwutUzR0GPg=
k8s.io/apimachinery v0.0.0-20190221213512-86fb29eff628/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/apiserver v0.0.0-20181213151703-3ccfe8365421 h1:NyOpnIh+7SLvC05NGCIXF9c4KhnkTZQE2SxF+m9otww=
k8s.io/apiserver v0.0.0-20181213151703-3ccfe8365421/go.mod h1:6bqaTSOSJavUIXUtfaR9Os9JtTCm8ZqH2SUl2S60C4w=
k8s.io/client-go v10.0.0+incompatible h1:F1IqCqw7oMBzDkqlcBymRq1450wD0eNqLE9jzUrIi34=
k8s.io/client-go v10.0.0+incompatible/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=
k8s.io/api v0.0.0-20190805141119-fdd30b57c827 h1:Yf7m8lslHFWm22YDRTAHrGPh729A6Lmxcm1weHHBTuw=
k8s.io/api v0.0.0-20190805141119-fdd30b57c827/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A=
k8s.io/apiextensions-apiserver v0.0.0-20190805143126-cdb999c96590/go.mod h1:31VwenKtjRVPM+9p/9WBr2C4RUlwrs53rbGrhPiTzKk=
k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719 h1:uV4S5IB5g4Nvi+TBVNf3e9L4wrirlwYJ6w88jUQxTUw=
k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA=
k8s.io/apiserver v0.0.0-20190805142138-368b2058237c h1:OxQmmVHy+tsC9ciM88NSUWyX3PWastYCO192eVJ7HNg=
k8s.io/apiserver v0.0.0-20190805142138-368b2058237c/go.mod h1:k9Vk6Fiw9pZljxzTtvH2MAfADQK6+hPgf7/eRaZb//o=
k8s.io/cli-runtime v0.0.0-20190805143448-a07e59fb081d/go.mod h1:5w8rmLFPEY2JBGBgRZyieqhi9q0iuUg8oK+zxOdtO7U=
k8s.io/client-go v0.0.0-20190805141520-2fe0317bcee0 h1:BtLpkscF7UZVmtKshdjDIcWLnfGOY01MRIdtYTUme+o=
k8s.io/client-go v0.0.0-20190805141520-2fe0317bcee0/go.mod h1:ayzmabJptoFlxo7SQxN2Oz3a12t9kmpMKADzQmr5Zbc=
k8s.io/cloud-provider v0.0.0-20190805144409-8484242760e7/go.mod h1:CBAE+UyBK7Sf2hxVn6mJWVRZvcsvxq4IgngvZtKmEgM=
k8s.io/cluster-bootstrap v0.0.0-20190805144246-c01ee70854a1/go.mod h1:4ijIkuJiiLZ51gE9wH/RJgMoyQHmGk7EknPexWJzzZY=
k8s.io/code-generator v0.0.0-20190612205613-18da4a14b22b/go.mod h1:G8bQwmHm2eafm5bgtX67XDZQ8CWKSGu9DekI+yN4Y5I=
k8s.io/component-base v0.0.0-20190805141645-3a5e5ac800ae/go.mod h1:VLedAFwENz2swOjm0zmUXpAP2mV55c49xgaOzPBI/QQ=
k8s.io/cri-api v0.0.0-20190531030430-6117653b35f1/go.mod h1:K6Ux7uDbzKhacgqW0OJg3rjXk/SR9kprCPfSUDXGB5A=
k8s.io/csi-translation-lib v0.0.0-20190805144531-3985229e1802/go.mod h1:WZWsyiXyvB8YDkJbQ2o7MWxl8QXg6XMvfX2+NfV/otY=
k8s.io/gengo v0.0.0-20190116091435-f8a0810f38af/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/heapster v1.2.0-beta.1/go.mod h1:h1uhptVXMwC8xtZBYsPXKVi8fpdlYkTs6k949KozGrM=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.1.0 h1:I5HMfc/DtuVaGR1KPwUrTc476K8NCqNBldC7H4dYEzk=
k8s.io/klog v0.1.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.1 h1:RVgyDHY/kFKtLqh67NvEWIgkMneNoIrdkN0CxDSQc68=
k8s.io/klog v0.3.1/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/kube-aggregator v0.0.0-20190805142416-fd821fbbb94e/go.mod h1:CeR9bnF7HDA1LsoCd62doStyCAcWGT2BLuz6arA7FM4=
k8s.io/kube-controller-manager v0.0.0-20190805144128-269742da31dd/go.mod h1:spJVyiWbnjJmny8JcC1zS+CDsnoR6fvt7wKKG+vPCrg=
k8s.io/kube-openapi v0.0.0-20190228160746-b3a7cee44a30/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc=
k8s.io/kube-openapi v0.0.0-20190510232812-a01b7d5d6c22 h1:f0BTap/vrgs21vVbJ1ySdsNtcivpA1x4ut6Wla9HKKw=
k8s.io/kube-openapi v0.0.0-20190510232812-a01b7d5d6c22/go.mod h1:iU+ZGYsNlvU9XKUSso6SQfKTCCw7lFduMZy26Mgr2Fw=
k8s.io/kubernetes v1.13.7 h1:6I48MdE69fo0SRopCAnxgBlUqhlMjeWWEA8Y3ThzUyA=
k8s.io/kubernetes v1.13.7/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
k8s.io/utils v0.0.0-20180801164400-045dc31ee5c4 h1:jx/N9qda/hFHvydYhYL9SZ6oh/vXekqK7YeIghe5cjI=
k8s.io/utils v0.0.0-20180801164400-045dc31ee5c4/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
k8s.io/kube-proxy v0.0.0-20190805143734-7f1675b90353/go.mod h1:REU8MsRh0E7Bq6LbeWcnelU7MPhcr6w6okq95NvswCs=
k8s.io/kube-scheduler v0.0.0-20190805144012-2a1ed1f3d8a4/go.mod h1:JIN72V0gy+lWRlEyk1F5PVARAGZApNsZiFpXWAuqawM=
k8s.io/kubelet v0.0.0-20190805143852-517ff267f8d1/go.mod h1:xFCK3b5WIEViwd//lUcnLXundamh1B64yiIGgkW9TD0=
k8s.io/kubernetes v1.15.2 h1:RO9EuRw5vlN3oa/lnmPxmywOoJRtg9o40KcklHXNIAQ=
k8s.io/kubernetes v1.15.2/go.mod h1:3RE5ikMc73WK+dSxk4pQuQ6ZaJcPXiZX2dj98RcdCuM=
k8s.io/legacy-cloud-providers v0.0.0-20190805144654-3d5bf3a310c1/go.mod h1:nqr8H9tPJMAtFWiSk7g5SEQKSzsxKExlgZ1X6jAziPA=
k8s.io/metrics v0.0.0-20190805143318-16b07057415d/go.mod h1:bH/65+wgFBMhtyIL8lTvHgfBXNd7lwVv6Xrw3YHVbVw=
k8s.io/repo-infra v0.0.0-20181204233714-00fe14e3d1a3/go.mod h1:+G1xBfZDfVFsm1Tj/HNCvg4QqWx8rJ2Fxpqr1rqp/gQ=
k8s.io/sample-apiserver v0.0.0-20190805142637-3b65bc4bb24f/go.mod h1:6eBtUxofjk+EVL6etTnCcg2URVhJbgxwvC85wZYSrBk=
k8s.io/utils v0.0.0-20190221042446-c2654d5206da h1:ElyM7RPonbKnQqOcw7dG2IK5uvQQn3b/WPHqD5mBvP4=
k8s.io/utils v0.0.0-20190221042446-c2654d5206da/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
modernc.org/cc v1.0.0/go.mod h1:1Sk4//wdnYJiUIxnW8ddKpaOJCF37yAdqYnkxUpaYxw=
modernc.org/golex v1.0.0/go.mod h1:b/QX9oBD/LhixY6NDh+IdGv17hgB+51fET1i2kPSmvk=
modernc.org/mathutil v1.0.0/go.mod h1:wU0vUrJsVWBZ4P6e7xtFJEhFSNsfRLJ8H458uRjg03k=
modernc.org/strutil v1.0.0/go.mod h1:lstksw84oURvj9y3tn8lGvRxyRC1S2+g5uuIzNfIOBs=
modernc.org/xc v1.0.0/go.mod h1:mRNCo0bvLjGhHO9WsyuKVU4q0ceiDDDoEeWDJHrNx8I=
sigs.k8s.io/kustomize v2.0.3+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU=
sigs.k8s.io/structured-merge-diff v0.0.0-20190302045857-e85c7b244fd2/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=
sigs.k8s.io/structured-merge-diff v0.0.0-20190426204423-ea680f03cc65/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
vbom.ml/util v0.0.0-20160121211510-db5cfe13f5cc/go.mod h1:so/NYdZXCz+E3ZpW0uAoCj6uzU2+8OWDFv/HxUSs7kI=

View File

@@ -0,0 +1,8 @@
package lockdeps
import (
// TODO(Sargun): Remove in Go1.13
// This is a dep that `go mod tidy` keeps removing, because it's a transitive dep that's pulled in via a test
// See: https://github.com/golang/go/issues/29702
_ "github.com/prometheus/client_golang/prometheus"
)

View File

@@ -17,13 +17,13 @@ package manager_test
import (
"testing"
"github.com/virtual-kubelet/virtual-kubelet/internal/manager"
testutil "github.com/virtual-kubelet/virtual-kubelet/internal/test/util"
"gotest.tools/assert"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
corev1listers "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/tools/cache"
"github.com/virtual-kubelet/virtual-kubelet/internal/manager"
testutil "github.com/virtual-kubelet/virtual-kubelet/internal/test/util"
)
// TestGetPods verifies that the resource manager acts as a passthrough to a pod lister.
@@ -38,7 +38,7 @@ func TestGetPods(t *testing.T) {
// Create a pod lister that will list the pods defined above.
indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
for _, pod := range lsPods {
indexer.Add(pod)
assert.NilError(t, indexer.Add(pod))
}
podLister := corev1listers.NewPodLister(indexer)
@@ -67,7 +67,7 @@ func TestGetSecret(t *testing.T) {
// Create a secret lister that will list the secrets defined above.
indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
for _, secret := range lsSecrets {
indexer.Add(secret)
assert.NilError(t, indexer.Add(secret))
}
secretLister := corev1listers.NewSecretLister(indexer)
@@ -106,7 +106,7 @@ func TestGetConfigMap(t *testing.T) {
// Create a config map lister that will list the config maps defined above.
indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
for _, secret := range lsConfigMaps {
indexer.Add(secret)
assert.NilError(t, indexer.Add(secret))
}
configMapLister := corev1listers.NewConfigMapLister(indexer)
@@ -145,7 +145,7 @@ func TestListServices(t *testing.T) {
// Create a pod lister that will list the pods defined above.
indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
for _, service := range lsServices {
indexer.Add(service)
assert.NilError(t, indexer.Add(service))
}
serviceLister := corev1listers.NewServiceLister(indexer)

View File

@@ -75,7 +75,7 @@ func (f *Framework) CreatePodObjectWithOptionalSecretKey(testName string) *corev
// CreatePodObjectWithEnv creates a pod object whose name starts with "env-test-" and that uses the specified environment configuration for its first container.
func (f *Framework) CreatePodObjectWithEnv(testName string, env []corev1.EnvVar) *corev1.Pod {
pod := f.CreateDummyPodObjectWithPrefix(testName, "env-test-", "foo")
pod := f.CreateDummyPodObjectWithPrefix(testName, "env-test", "foo")
pod.Spec.Containers[0].Env = env
return pod
}
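
As a usage sketch only, the helper below (hypothetical; not part of this change) shows how a suite test might call CreatePodObjectWithEnv and check the resulting pod object. The env var name/value and the assertion are illustrative, and the internal framework package is only importable from within this repository.

package framework_test

import (
	"testing"

	corev1 "k8s.io/api/core/v1"

	"github.com/virtual-kubelet/virtual-kubelet/internal/test/e2e/framework"
)

// checkEnvPod is a hypothetical helper: it builds (but does not create in the
// cluster) a pod object carrying a single env var and verifies that the env
// landed on the first container. f would be the Framework owned by the suite.
func checkEnvPod(t *testing.T, f *framework.Framework) {
	pod := f.CreatePodObjectWithEnv(t.Name(), []corev1.EnvVar{{Name: "FOO", Value: "bar"}})
	if len(pod.Spec.Containers[0].Env) != 1 {
		t.Fatal("expected exactly one env var on the first container")
	}
}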

View File

@@ -1,6 +1,8 @@
package framework
import (
"time"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
@@ -8,17 +10,19 @@ import (
// Framework encapsulates the configuration for the current run, and provides helper methods to be used during testing.
type Framework struct {
KubeClient kubernetes.Interface
Namespace string
NodeName string
KubeClient kubernetes.Interface
Namespace string
NodeName string
WatchTimeout time.Duration
}
// NewTestingFramework returns a new instance of the testing framework.
func NewTestingFramework(kubeconfig, namespace, nodeName string) *Framework {
func NewTestingFramework(kubeconfig, namespace, nodeName string, watchTimeout time.Duration) *Framework {
return &Framework{
KubeClient: createKubeClient(kubeconfig),
Namespace: namespace,
NodeName: nodeName,
KubeClient: createKubeClient(kubeconfig),
Namespace: namespace,
NodeName: nodeName,
WatchTimeout: watchTimeout,
}
}
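
For illustration, a minimal construction sketch showing the new watchTimeout argument in use. The kubeconfig path, namespace, and node name are placeholders, and the internal framework package is only importable from within this repository.

package main

import (
	"time"

	"github.com/virtual-kubelet/virtual-kubelet/internal/test/e2e/framework"
)

func main() {
	// Placeholders only: a real run needs a reachable cluster with a
	// virtual-kubelet node registered under this name.
	f := framework.NewTestingFramework("/path/to/kubeconfig", "default", "vkubelet-mock-0", 2*time.Minute)
	_ = f // the suite would go on to call helpers such as WaitUntilNodeCondition
}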

View File

@@ -31,7 +31,7 @@ func (f *Framework) WaitUntilNodeCondition(fn watch.ConditionFunc) error {
}
// Watch for updates to the Node resource until fn is satisfied, or until the timeout is reached.
ctx, cancel := context.WithTimeout(context.Background(), defaultWatchTimeout)
ctx, cancel := context.WithTimeout(context.Background(), f.WatchTimeout)
defer cancel()
last, err := watch.UntilWithSync(ctx, lw, &corev1.Node{}, nil, fn)
if err != nil {

View File

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -16,8 +15,6 @@ import (
podutil "k8s.io/kubernetes/pkg/api/v1/pod"
)
const defaultWatchTimeout = 2 * time.Minute
// CreateDummyPodObjectWithPrefix creates a dummy pod object using the specified prefix as the value of .metadata.generateName.
// A variable number of strings can be provided.
// For each one of these strings, a container that uses the string as its image will be appended to the pod.
@@ -25,8 +22,7 @@ const defaultWatchTimeout = 2 * time.Minute
func (f *Framework) CreateDummyPodObjectWithPrefix(testName string, prefix string, images ...string) *corev1.Pod {
// Sanitize the test name
if testName != "" {
testName = strings.Replace(testName, "/", "-", -1)
testName = strings.ToLower(testName)
testName = stripParentTestName(strings.ToLower(testName))
prefix = prefix + "-" + testName + "-"
}
enableServiceLink := false
@@ -88,7 +84,7 @@ func (f *Framework) WaitUntilPodCondition(namespace, name string, fn watch.Condi
},
}
// Watch for updates to the Pod resource until fn is satisfied, or until the timeout is reached.
ctx, cfn := context.WithTimeout(context.Background(), defaultWatchTimeout)
ctx, cfn := context.WithTimeout(context.Background(), f.WatchTimeout)
defer cfn()
last, err := watch.UntilWithSync(ctx, lw, &corev1.Pod{}, nil, fn)
if err != nil {
@@ -147,7 +143,7 @@ func (f *Framework) WaitUntilPodEventWithReason(pod *corev1.Pod, reason string)
},
}
// Watch for updates to the Event resource until fn is satisfied, or until the timeout is reached.
ctx, cfn := context.WithTimeout(context.Background(), defaultWatchTimeout)
ctx, cfn := context.WithTimeout(context.Background(), f.WatchTimeout)
defer cfn()
last, err := watch.UntilWithSync(ctx, lw, &corev1.Event{}, nil, func(event watchapi.Event) (b bool, e error) {
switch event.Type {
@@ -184,3 +180,15 @@ func (f *Framework) GetRunningPods() (*corev1.PodList, error) {
return result, err
}
// stripParentTestName strips out the parent's test name from the input (in the form of 'TestParent/TestChild').
// Some test cases use their name as the pod name for testing purposes, and sometimes it might exceed 63
// characters (Kubernetes' limit for pod names). This function strips out the parent's
// test name to decrease the length of the pod name.
func stripParentTestName(name string) string {
parts := strings.Split(name, "/")
if len(parts) == 1 {
return parts[0]
}
return parts[len(parts)-1]
}
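
A standalone illustration of the helper's behaviour (the test names below are made up):

package main

import (
	"fmt"
	"strings"
)

// Same logic as stripParentTestName above: keep only the last "/"-separated
// segment of a (lowercased) Go subtest name.
func stripParentTestName(name string) string {
	parts := strings.Split(name, "/")
	if len(parts) == 1 {
		return parts[0]
	}
	return parts[len(parts)-1]
}

func main() {
	fmt.Println(stripParentTestName("testpodlifecycle/createstartdeletescenario")) // createstartdeletescenario
	fmt.Println(stripParentTestName("testsolo"))                                   // testsolo
}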

View File

@@ -4,12 +4,12 @@ package e2e
import (
"flag"
"os"
"fmt"
"testing"
v1 "k8s.io/api/core/v1"
vke2e "github.com/virtual-kubelet/virtual-kubelet/test/e2e"
"github.com/virtual-kubelet/virtual-kubelet/internal/test/e2e/framework"
v1 "k8s.io/api/core/v1"
)
const (
@@ -18,15 +18,9 @@ const (
)
var (
// f is the testing framework used for running the test suite.
f *framework.Framework
// kubeconfig is the path to the kubeconfig file to use when running the test suite outside a Kubernetes cluster.
kubeconfig string
// namespace is the name of the Kubernetes namespace to use for running the test suite (i.e. where to create pods).
namespace string
// nodeName is the name of the virtual-kubelet node to test.
nodeName string
namespace string
nodeName string
)
func init() {
@@ -36,17 +30,36 @@ func init() {
flag.Parse()
}
func TestMain(m *testing.M) {
// Set sane defaults in case no values (or empty ones) have been provided.
// Provider-specific setup function
func setup() error {
fmt.Println("Setting up end-to-end test suite for mock provider...")
return nil
}
// Provider-specific teardown function
func teardown() error {
fmt.Println("Tearing down end-to-end test suite for mock provider...")
return nil
}
// Provider-specific shouldSkipTest function
func shouldSkipTest(testName string) bool {
return false
}
// TestEndToEnd creates and runs the end-to-end test suite for virtual kubelet
func TestEndToEnd(t *testing.T) {
setDefaults()
// Create a new instance of the test framework targeting the specified node.
f = framework.NewTestingFramework(kubeconfig, namespace, nodeName)
// Wait for the virtual-kubelet pod to be ready.
if _, err := f.WaitUntilPodReady(namespace, nodeName); err != nil {
panic(err)
config := vke2e.EndToEndTestSuiteConfig{
Kubeconfig: kubeconfig,
Namespace: namespace,
NodeName: nodeName,
Setup: setup,
Teardown: teardown,
ShouldSkipTest: shouldSkipTest,
}
// Run the test suite.
os.Exit(m.Run())
ts := vke2e.NewEndToEndTestSuite(config)
ts.Run(t)
}
// setDefaults sets sane defaults in case no values (or empty ones) have been provided.
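
Since the mock provider's shouldSkipTest always returns false, a provider that cannot support certain behaviours could return true selectively. A hypothetical variant (the "Stats" substring is illustrative, and it would also need the strings import):

// Hypothetical: skip any test whose name mentions "Stats", e.g. for a provider
// that does not implement the stats endpoints.
func shouldSkipTest(testName string) bool {
	return strings.Contains(testName, "Stats")
}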

View File

@@ -0,0 +1,85 @@
package suite
import (
"reflect"
"runtime/debug"
"strings"
"testing"
)
// TestFunc defines the test function in a test case
type TestFunc func(*testing.T)
// SetUpFunc sets up provider-specific resource in the test suite
type SetUpFunc func() error
// TeardownFunc tears down provider-specific resources from the test suite
type TeardownFunc func() error
// ShouldSkipTestFunc determines whether the test suite should skip certain tests
type ShouldSkipTestFunc func(string) bool
// TestSuite contains methods that defines the lifecycle of a test suite
type TestSuite interface {
Setup()
Teardown()
}
// TestSkipper allows providers to skip certain tests
type TestSkipper interface {
ShouldSkipTest(string) bool
}
type testCase struct {
name string
f TestFunc
}
// Run runs tests registered in the test suite
func Run(t *testing.T, ts TestSuite) {
defer failOnPanic(t)
ts.Setup()
defer ts.Teardown()
// The implementation below is based on https://github.com/stretchr/testify
testFinder := reflect.TypeOf(ts)
tests := []testCase{}
for i := 0; i < testFinder.NumMethod(); i++ {
method := testFinder.Method(i)
if !isValidTestFunc(method) {
continue
}
test := testCase{
name: method.Name,
f: func(t *testing.T) {
defer failOnPanic(t)
if tSkipper, ok := ts.(TestSkipper); ok && tSkipper.ShouldSkipTest(method.Name) {
t.Skipf("Skipped due to shouldSkipTest()")
}
method.Func.Call([]reflect.Value{reflect.ValueOf(ts), reflect.ValueOf(t)})
},
}
tests = append(tests, test)
}
for _, test := range tests {
t.Run(test.name, test.f)
}
}
// failOnPanic recovers from a panic raised in the test suite and marks the test / test suite as failed
func failOnPanic(t *testing.T) {
if r := recover(); r != nil {
t.Fatalf("%v\n%s", r, debug.Stack())
}
}
// isValidTestFunc determines whether or not a given method is a valid test function
func isValidTestFunc(method reflect.Method) bool {
return strings.HasPrefix(method.Name, "Test") && // Test function name must start with "Test",
method.Type.NumIn() == 2 && // the number of function inputs must be 2 (the suite receiver and t *testing.T),
method.Type.In(1) == reflect.TypeOf(&testing.T{}) && // the second input must be of type *testing.T,
method.Type.NumOut() == 0 // and the number of function outputs must be 0
}

View File

@@ -0,0 +1,126 @@
package suite
import (
"strings"
"testing"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
type basicTestSuite struct {
setupCount int
testFooCount int
testBarCount int
bazCount int
testFooBarCount int
testFooBazCount int
testBarBazCount int
teardownCount int
testsRan []string
}
func (bts *basicTestSuite) Setup() {
bts.setupCount++
}
func (bts *basicTestSuite) Teardown() {
bts.teardownCount++
}
func (bts *basicTestSuite) TestFoo(t *testing.T) {
bts.testFooCount++
bts.testsRan = append(bts.testsRan, t.Name())
}
func (bts *basicTestSuite) TestBar(t *testing.T) {
bts.testBarCount++
bts.testsRan = append(bts.testsRan, t.Name())
}
// Baz should not be executed by the test suite
// because it does not have the prefix 'Test'
func (bts *basicTestSuite) Baz(t *testing.T) {
bts.bazCount++
bts.testsRan = append(bts.testsRan, t.Name())
}
// TestFooBar should not be executed by the test suite
// because its number of inputs is not 2 (*basicTestSuite and *testing.T)
func (bts *basicTestSuite) TestFooBar() {
bts.testFooBarCount++
bts.testsRan = append(bts.testsRan, "TestFooBar")
}
// TestFooBaz should not be executed by the test suite
// because its number of outputs is not 0
func (bts *basicTestSuite) TestFooBaz(t *testing.T) error {
bts.testFooBazCount++
bts.testsRan = append(bts.testsRan, t.Name())
return nil
}
// TestBarBaz should not be executed by the test suite
// because the type of the function input is not *testing.T
func (bts *basicTestSuite) TestBarBaz(t string) {
bts.testBarBazCount++
bts.testsRan = append(bts.testsRan, "TestBarBaz")
}
func TestBasicTestSuite(t *testing.T) {
bts := new(basicTestSuite)
Run(t, bts)
assert.Equal(t, bts.setupCount, 1)
assert.Equal(t, bts.testFooCount, 1)
assert.Equal(t, bts.testBarCount, 1)
assert.Equal(t, bts.teardownCount, 1)
assert.Assert(t, is.Len(bts.testsRan, 2))
assertTestsRan(t, bts.testsRan)
assertNonTests(t, bts)
}
type skipTestSuite struct {
basicTestSuite
skippedTestCount int
}
func (sts *skipTestSuite) ShouldSkipTest(testName string) bool {
if testName == "TestBar" {
sts.skippedTestCount++
return true
}
return false
}
func TestSkipTest(t *testing.T) {
sts := new(skipTestSuite)
Run(t, sts)
assert.Equal(t, sts.setupCount, 1)
assert.Equal(t, sts.testFooCount, 1)
assert.Equal(t, sts.testBarCount, 0)
assert.Equal(t, sts.teardownCount, 1)
assert.Equal(t, sts.skippedTestCount, 1)
assert.Assert(t, is.Len(sts.testsRan, 1))
assertTestsRan(t, sts.testsRan)
assertNonTests(t, &sts.basicTestSuite)
}
func assertTestsRan(t *testing.T, testsRan []string) {
for _, testRan := range testsRan {
parts := strings.Split(testRan, "/")
// Make sure that the name of the test has exactly one parent name and one subtest name
assert.Assert(t, is.Len(parts, 2))
// Check the parent test's name
assert.Equal(t, parts[0], t.Name())
}
}
// assertNonTests ensures that any malformed test functions are not run by the test suite
func assertNonTests(t *testing.T, bts *basicTestSuite) {
assert.Equal(t, bts.bazCount, 0)
assert.Equal(t, bts.testFooBarCount, 0)
assert.Equal(t, bts.testFooBazCount, 0)
assert.Equal(t, bts.testBarBazCount, 0)
}

View File

@@ -29,7 +29,7 @@ var (
G = GetLogger
// L is the default logger. It should be initialized before using `G` or `GetLogger`
// If L is unitialized and no logger is available in a provided context, a
// If L is uninitialized and no logger is available in a provided context, a
// panic will occur.
L Logger = nopLogger{}
)
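
To make the initialization requirement concrete, here is a minimal sketch using the logrus adapter shipped in this repository; it mirrors what lifecycle_test.go's init() does further down.

package main

import (
	"context"

	"github.com/sirupsen/logrus"

	"github.com/virtual-kubelet/virtual-kubelet/log"
	logruslogger "github.com/virtual-kubelet/virtual-kubelet/log/logrus"
)

func main() {
	// Initialize the package-level default logger before any call to G/GetLogger.
	log.L = logruslogger.FromLogrus(logrus.NewEntry(logrus.StandardLogger()))
	ctx := log.WithLogger(context.Background(), log.L)
	log.G(ctx).Info("logger wired up")
}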

View File

@@ -33,7 +33,7 @@ func handleError(f handlerFunc) http.HandlerFunc {
code := httpStatusCode(err)
w.WriteHeader(code)
io.WriteString(w, err.Error())
io.WriteString(w, err.Error()) //nolint:errcheck
logger := log.G(req.Context()).WithError(err).WithField("httpStatusCode", code)
if code >= 500 {

View File

@@ -28,7 +28,7 @@ type PodListerFunc func(context.Context) ([]*v1.Pod, error)
func HandleRunningPods(getPods PodListerFunc) http.HandlerFunc {
scheme := runtime.NewScheme()
v1.SchemeBuilder.AddToScheme(scheme)
v1.SchemeBuilder.AddToScheme(scheme) //nolint:errcheck
codecs := serializer.NewCodecFactory(scheme)
return handleError(func(w http.ResponseWriter, req *http.Request) error {

View File

@@ -14,8 +14,8 @@
/*
Package node implements the components for operating a node in Kubernetes.
This includes controllers for managin the node object, running scheduled pods,
and exporting HTTP endpoints expected by the Kubernets API server.
This includes controllers for managing the node object, running scheduled pods,
and exporting HTTP endpoints expected by the Kubernetes API server.
There are two primary controllers, the node runner and the pod runner.
@@ -27,9 +27,10 @@ There are two primary controllers, the node runner and the pod runner.
select {
case <-podRunner.Ready():
go nodeRunner.Run(ctx)
case <-ctx.Done():
return ctx.Err()
case <-podRunner.Done():
}
if podRunner.Err() != nil {
// handle error
}
After calling start, cancelling the passed in context will shutdown the
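
The Ready/Done/Err handshake in the snippet above can be sketched generically. The runner type below is a hypothetical stand-in, not the actual controller implementation:

package main

import (
	"context"
	"fmt"
)

// runner is a hypothetical stand-in exposing the handshake described above:
// Ready() closes once serving, Done() closes on exit, Err() reports why.
type runner struct {
	ready, done chan struct{}
	err         error
}

func (r *runner) Ready() <-chan struct{} { return r.ready }
func (r *runner) Done() <-chan struct{}  { return r.done }
func (r *runner) Err() error             { return r.err }

// start waits for the runner to become ready (or exit, or the context to be
// cancelled), mirroring the select in the package documentation.
func start(ctx context.Context, r *runner) error {
	select {
	case <-r.Ready():
		// here the real code would go on to run the node controller
	case <-r.Done():
	case <-ctx.Done():
		return ctx.Err()
	}
	return r.Err()
}

func main() {
	r := &runner{ready: make(chan struct{}), done: make(chan struct{})}
	close(r.ready) // simulate the controller becoming ready
	fmt.Println(start(context.Background(), r)) // <nil>
}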

View File

@@ -51,18 +51,18 @@ const (
// ReasonFailedToReadOptionalSecret is the reason used in events emitted when an optional secret could not be read.
ReasonFailedToReadOptionalSecret = "FailedToReadOptionalSecret"
// ReasonMandatoryConfigMapNotFound is the reason used in events emitted when an mandatory configmap is not found.
// ReasonMandatoryConfigMapNotFound is the reason used in events emitted when a mandatory configmap is not found.
ReasonMandatoryConfigMapNotFound = "MandatoryConfigMapNotFound"
// ReasonMandatoryConfigMapKeyNotFound is the reason used in events emitted when an mandatory configmap key is not found.
// ReasonMandatoryConfigMapKeyNotFound is the reason used in events emitted when a mandatory configmap key is not found.
ReasonMandatoryConfigMapKeyNotFound = "MandatoryConfigMapKeyNotFound"
// ReasonFailedToReadMandatoryConfigMap is the reason used in events emitted when an mandatory configmap could not be read.
// ReasonFailedToReadMandatoryConfigMap is the reason used in events emitted when a mandatory configmap could not be read.
ReasonFailedToReadMandatoryConfigMap = "FailedToReadMandatoryConfigMap"
// ReasonMandatorySecretNotFound is the reason used in events emitted when an mandatory secret is not found.
// ReasonMandatorySecretNotFound is the reason used in events emitted when a mandatory secret is not found.
ReasonMandatorySecretNotFound = "MandatorySecretNotFound"
// ReasonMandatorySecretKeyNotFound is the reason used in events emitted when an mandatory secret key is not found.
// ReasonMandatorySecretKeyNotFound is the reason used in events emitted when a mandatory secret key is not found.
ReasonMandatorySecretKeyNotFound = "MandatorySecretKeyNotFound"
// ReasonFailedToReadMandatorySecret is the reason used in events emitted when an mandatory secret could not be read.
// ReasonFailedToReadMandatorySecret is the reason used in events emitted when a mandatory secret could not be read.
ReasonFailedToReadMandatorySecret = "FailedToReadMandatorySecret"
// ReasonInvalidEnvironmentVariableNames is the reason used in events emitted when a configmap/secret referenced in a ".spec.containers[*].envFrom" field contains invalid environment variable names.

node/lifecycle_test.go Normal file
View File

@@ -0,0 +1,676 @@
package node
import (
"context"
"fmt"
"strconv"
"testing"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/virtual-kubelet/virtual-kubelet/errdefs"
"github.com/virtual-kubelet/virtual-kubelet/log"
logruslogger "github.com/virtual-kubelet/virtual-kubelet/log/logrus"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/apimachinery/pkg/watch"
kubeinformers "k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes/fake"
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
ktesting "k8s.io/client-go/testing"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
watchutils "k8s.io/client-go/tools/watch"
"k8s.io/klog"
)
var (
_ record.EventRecorder = (*fakeDiscardingRecorder)(nil)
)
const (
// There might be a constant we can already leverage here
testNamespace = "default"
informerResyncPeriod = time.Second
testNodeName = "testnode"
podSyncWorkers = 3
)
func init() {
klog.InitFlags(nil)
// We need to set log.L because new spans derive their loggers from log.L
sl := logrus.StandardLogger()
sl.SetLevel(logrus.DebugLevel)
newLogger := logruslogger.FromLogrus(logrus.NewEntry(sl))
log.L = newLogger
}
// fakeDiscardingRecorder discards all events instead of recording them, logging each one for debugging.
type fakeDiscardingRecorder struct {
logger log.Logger
}
func (r *fakeDiscardingRecorder) Event(object runtime.Object, eventType, reason, message string) {
r.Eventf(object, eventType, reason, message)
}
func (r *fakeDiscardingRecorder) Eventf(object runtime.Object, eventType, reason, messageFmt string, args ...interface{}) {
r.logger.WithFields(map[string]interface{}{
"object": object,
"eventType": eventType,
"message": fmt.Sprintf(messageFmt, args...),
}).Infof("Received event")
}
func (r *fakeDiscardingRecorder) PastEventf(object runtime.Object, timestamp metav1.Time, eventType, reason, messageFmt string, args ...interface{}) {
r.logger.WithFields(map[string]interface{}{
"timestamp": timestamp.String(),
"object": object,
"eventType": eventType,
"message": fmt.Sprintf(messageFmt, args...),
}).Infof("Received past event")
}
func (r *fakeDiscardingRecorder) AnnotatedEventf(object runtime.Object, annotations map[string]string, eventType, reason, messageFmt string, args ...interface{}) {
r.logger.WithFields(map[string]interface{}{
"object": object,
"annotations": annotations,
"eventType": eventType,
"message": fmt.Sprintf(messageFmt, args...),
}).Infof("Received annotated event")
}
func TestPodLifecycle(t *testing.T) {
// We don't do the usual defer cancel() here because t.Run is non-blocking, so the parent context may be cancelled
// before the children are finished. There is no way to do a "join" and wait for them without using a waitgroup,
// at which point it doesn't seem much better.
ctx := context.Background()
ctx = log.WithLogger(ctx, log.L)
// isPodDeletedPermanentlyFunc is a condition func that waits until the pod is _deleted_, which is the VK's
// action when the pod is deleted from the provider
isPodDeletedPermanentlyFunc := func(ctx context.Context, watcher watch.Interface) error {
_, watchErr := watchutils.UntilWithoutRetry(ctx, watcher, func(ev watch.Event) (bool, error) {
log.G(ctx).WithField("event", ev).Info("got event")
// TODO(Sargun): The pod should have transitioned into some status around failed / succeeded
// prior to being deleted.
// In addition, we should check if the deletion timestamp gets set
return ev.Type == watch.Deleted, nil
})
return watchErr
}
// createStartDeleteScenario tests the basic flow of creating a pod, waiting for it to start, and deleting
// it gracefully
t.Run("createStartDeleteScenario", func(t *testing.T) {
t.Run("mockProvider", func(t *testing.T) {
assert.NilError(t, wireUpSystem(ctx, newMockProvider(), func(ctx context.Context, s *system) {
testCreateStartDeleteScenario(ctx, t, s, isPodDeletedPermanentlyFunc)
}))
})
if testing.Short() {
return
}
t.Run("mockV0Provider", func(t *testing.T) {
assert.NilError(t, wireUpSystem(ctx, newMockV0Provider(), func(ctx context.Context, s *system) {
testCreateStartDeleteScenario(ctx, t, s, isPodDeletedPermanentlyFunc)
}))
})
})
// createStartDeleteScenarioWithDeletionErrorNotFound tests the flow if the pod was not found in the provider
// for some reason
t.Run("createStartDeleteScenarioWithDeletionErrorNotFound", func(t *testing.T) {
mp := newMockProvider()
mp.errorOnDelete = errdefs.NotFound("not found")
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testCreateStartDeleteScenario(ctx, t, s, isPodDeletedPermanentlyFunc)
}))
})
// createStartDeleteScenarioWithDeletionRandomError tests the flow if the pod was unable to be deleted in the
// provider
t.Run("createStartDeleteScenarioWithDeletionRandomError", func(t *testing.T) {
mp := newMockProvider()
deletionFunc := func(ctx context.Context, watcher watch.Interface) error {
return mp.attemptedDeletes.until(ctx, func(v int) bool { return v >= 2 })
}
mp.errorOnDelete = errors.New("random error")
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testCreateStartDeleteScenario(ctx, t, s, deletionFunc)
pods, err := s.client.CoreV1().Pods(testNamespace).List(metav1.ListOptions{})
assert.NilError(t, err)
assert.Assert(t, is.Len(pods.Items, 1))
assert.Assert(t, pods.Items[0].DeletionTimestamp != nil)
}))
})
// danglingPodScenario covers the case where a pod was created in the provider prior to the pod controller starting,
// and ensures the pod controller deletes the pod before continuing.
t.Run("danglingPodScenario", func(t *testing.T) {
t.Run("mockProvider", func(t *testing.T) {
mp := newMockProvider()
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testDanglingPodScenario(ctx, t, s, mp.mockV0Provider)
}))
})
if testing.Short() {
return
}
t.Run("mockV0Provider", func(t *testing.T) {
mp := newMockV0Provider()
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testDanglingPodScenario(ctx, t, s, mp)
}))
})
})
// failedPodScenario ensures that the VK ignores failed pods that were failed prior to the PC starting up
t.Run("failedPodScenario", func(t *testing.T) {
t.Run("mockProvider", func(t *testing.T) {
mp := newMockProvider()
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testFailedPodScenario(ctx, t, s)
}))
})
if testing.Short() {
return
}
t.Run("mockV0Provider", func(t *testing.T) {
assert.NilError(t, wireUpSystem(ctx, newMockV0Provider(), func(ctx context.Context, s *system) {
testFailedPodScenario(ctx, t, s)
}))
})
})
// succeededPodScenario ensures that the VK ignores pods that had already succeeded before the PC started up.
t.Run("succeededPodScenario", func(t *testing.T) {
t.Run("mockProvider", func(t *testing.T) {
mp := newMockProvider()
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testSucceededPodScenario(ctx, t, s)
}))
})
if testing.Short() {
return
}
t.Run("mockV0Provider", func(t *testing.T) {
assert.NilError(t, wireUpSystem(ctx, newMockV0Provider(), func(ctx context.Context, s *system) {
testSucceededPodScenario(ctx, t, s)
}))
})
})
// updatePodWhileRunningScenario updates a pod while the VK is running to ensure the update is propagated
// to the provider
t.Run("updatePodWhileRunningScenario", func(t *testing.T) {
t.Run("mockProvider", func(t *testing.T) {
mp := newMockProvider()
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testUpdatePodWhileRunningScenario(ctx, t, s, mp)
}))
})
})
// podStatusMissingWhileRunningScenario waits for the pod to go into the running state, with a V0 style provider,
// and then makes the pod disappear!
t.Run("podStatusMissingWhileRunningScenario", func(t *testing.T) {
mp := newMockV0Provider()
assert.NilError(t, wireUpSystem(ctx, mp, func(ctx context.Context, s *system) {
testPodStatusMissingWhileRunningScenario(ctx, t, s, mp)
}))
})
}
type testFunction func(ctx context.Context, s *system)
type system struct {
pc *PodController
client *fake.Clientset
podControllerConfig PodControllerConfig
}
func (s *system) start(ctx context.Context) error {
go s.pc.Run(ctx, podSyncWorkers) // nolint:errcheck
select {
case <-s.pc.Ready():
case <-s.pc.Done():
case <-ctx.Done():
return ctx.Err()
}
return s.pc.Err()
}
func wireUpSystem(ctx context.Context, provider PodLifecycleHandler, f testFunction) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// Create the fake client.
client := fake.NewSimpleClientset()
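// The fake clientset does not bump a pod's ResourceVersion on update the way a real
// API server does, so this reactor increments it manually. Returning handled=false
// lets the default reactor chain still store the updated object.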
client.PrependReactor("update", "pods", func(action ktesting.Action) (handled bool, ret runtime.Object, err error) {
var pod *corev1.Pod
updateAction := action.(ktesting.UpdateAction)
pod = updateAction.GetObject().(*corev1.Pod)
resourceVersion, err := strconv.Atoi(pod.ResourceVersion)
if err != nil {
panic(errors.Wrap(err, "Could not parse resource version of pod"))
}
pod.ResourceVersion = strconv.Itoa(resourceVersion + 1)
return false, nil, nil
})
// This is largely copied and pasted from the root command
sharedInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(
client,
informerResyncPeriod,
)
podInformer := sharedInformerFactory.Core().V1().Pods()
eb := record.NewBroadcaster()
eb.StartLogging(log.G(ctx).Infof)
eb.StartRecordingToSink(&corev1client.EventSinkImpl{Interface: client.CoreV1().Events(testNamespace)})
fakeRecorder := &fakeDiscardingRecorder{
logger: log.G(ctx),
}
secretInformer := sharedInformerFactory.Core().V1().Secrets()
configMapInformer := sharedInformerFactory.Core().V1().ConfigMaps()
serviceInformer := sharedInformerFactory.Core().V1().Services()
sys := &system{
client: client,
podControllerConfig: PodControllerConfig{
PodClient: client.CoreV1(),
PodInformer: podInformer,
EventRecorder: fakeRecorder,
Provider: provider,
ConfigMapInformer: configMapInformer,
SecretInformer: secretInformer,
ServiceInformer: serviceInformer,
},
}
var err error
sys.pc, err = NewPodController(sys.podControllerConfig)
if err != nil {
return err
}
go sharedInformerFactory.Start(ctx.Done())
sharedInformerFactory.WaitForCacheSync(ctx.Done())
if ok := cache.WaitForCacheSync(ctx.Done(), podInformer.Informer().HasSynced); !ok {
return errors.New("podinformer failed to sync")
}
if err := ctx.Err(); err != nil {
return err
}
f(ctx, sys)
// Shutdown the pod controller, and wait for it to exit
cancel()
return nil
}
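For orientation, a minimal test built on this harness would look like the sketch below; the scenario body is hypothetical and only shows the intended shape of a wireUpSystem caller:

func TestExampleScenario(t *testing.T) {
	ctx := log.WithLogger(context.Background(), log.L)
	assert.NilError(t, wireUpSystem(ctx, newMockProvider(), func(ctx context.Context, s *system) {
		// Start the pod controller and wait for it to become ready.
		assert.NilError(t, s.start(ctx))
		// Drive the fake client from here: create pods, set up watches, assert.
	}))
}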
func testFailedPodScenario(ctx context.Context, t *testing.T, s *system) {
testTerminalStatePodScenario(ctx, t, s, corev1.PodFailed)
}
func testSucceededPodScenario(ctx context.Context, t *testing.T, s *system) {
testTerminalStatePodScenario(ctx, t, s, corev1.PodSucceeded)
}
func testTerminalStatePodScenario(ctx context.Context, t *testing.T, s *system, state corev1.PodPhase) {
t.Parallel()
p1 := newPod()
p1.Status.Phase = state
// Create the Pod
_, e := s.client.CoreV1().Pods(testNamespace).Create(p1)
assert.NilError(t, e)
// Start the pod controller
assert.NilError(t, s.start(ctx))
for s.pc.k8sQ.Len() > 0 {
time.Sleep(10 * time.Millisecond)
}
p2, err := s.client.CoreV1().Pods(testNamespace).Get(p1.Name, metav1.GetOptions{})
assert.NilError(t, err)
// Make sure the pods have not changed
assert.DeepEqual(t, p1, p2)
}
func testDanglingPodScenario(ctx context.Context, t *testing.T, s *system, m *mockV0Provider) {
t.Parallel()
pod := newPod()
assert.NilError(t, m.CreatePod(ctx, pod))
// Start the pod controller
assert.NilError(t, s.start(ctx))
assert.Assert(t, is.Equal(m.deletes.read(), 1))
}
func testCreateStartDeleteScenario(ctx context.Context, t *testing.T, s *system, waitFunction func(ctx context.Context, watch watch.Interface) error) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
p := newPod()
listOptions := metav1.ListOptions{
FieldSelector: fields.OneTermEqualSelector("metadata.name", p.ObjectMeta.Name).String(),
}
watchErrCh := make(chan error)
// Setup a watch (prior to pod creation, and pod controller startup)
watcher, err := s.client.CoreV1().Pods(testNamespace).Watch(listOptions)
assert.NilError(t, err)
defer watcher.Stop()
// This ensures that the pod is created.
go func() {
_, watchErr := watchutils.UntilWithoutRetry(ctx, watcher,
// Wait for the pod to be created
// TODO(Sargun): Make this "smarter" about the status the pod is in.
func(ev watch.Event) (bool, error) {
pod := ev.Object.(*corev1.Pod)
return pod.Name == p.ObjectMeta.Name, nil
})
watchErrCh <- watchErr
}()
// Create the Pod
_, e := s.client.CoreV1().Pods(testNamespace).Create(p)
assert.NilError(t, e)
log.G(ctx).Debug("Created pod")
// This select will return once the watch above observes the pod's creation
select {
case <-ctx.Done():
t.Fatalf("Context ended early: %s", ctx.Err().Error())
case err = <-watchErrCh:
assert.NilError(t, err)
}
// Setup a watch to check if the pod is in running
watcher, err = s.client.CoreV1().Pods(testNamespace).Watch(listOptions)
assert.NilError(t, err)
defer watcher.Stop()
go func() {
_, watchErr := watchutils.UntilWithoutRetry(ctx, watcher,
// Wait for the pod to be started
func(ev watch.Event) (bool, error) {
pod := ev.Object.(*corev1.Pod)
return pod.Status.Phase == corev1.PodRunning, nil
})
watchErrCh <- watchErr
}()
assert.NilError(t, s.start(ctx))
// Wait for the pod to go into running
select {
case <-ctx.Done():
t.Fatalf("Context ended early: %s", ctx.Err().Error())
case err = <-watchErrCh:
assert.NilError(t, err)
}
// Setup a watch prior to pod deletion
watcher, err = s.client.CoreV1().Pods(testNamespace).Watch(listOptions)
assert.NilError(t, err)
defer watcher.Stop()
go func() {
watchErrCh <- waitFunction(ctx, watcher)
}()
// Delete the pod by setting its deletion timestamp
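// (The fake clientset does not implement graceful deletion, so we emulate the
// API server by writing the deletion metadata onto the object ourselves.)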
// 1. Get the pod
currentPod, err := s.client.CoreV1().Pods(testNamespace).Get(p.Name, metav1.GetOptions{})
assert.NilError(t, err)
// 2. Set the pod's deletion timestamp, version, and so on
var deletionGracePeriod int64 = 30
currentPod.DeletionGracePeriodSeconds = &deletionGracePeriod
deletionTimestamp := metav1.NewTime(time.Now().Add(time.Second * time.Duration(deletionGracePeriod)))
currentPod.DeletionTimestamp = &deletionTimestamp
// 3. Update (overwrite) the pod
_, err = s.client.CoreV1().Pods(testNamespace).Update(currentPod)
assert.NilError(t, err)
select {
case <-ctx.Done():
t.Fatalf("Context ended early: %s", ctx.Err().Error())
case err = <-watchErrCh:
assert.NilError(t, err)
}
}
func testUpdatePodWhileRunningScenario(ctx context.Context, t *testing.T, s *system, m *mockProvider) {
t.Parallel()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
p := newPod()
listOptions := metav1.ListOptions{
FieldSelector: fields.OneTermEqualSelector("metadata.name", p.ObjectMeta.Name).String(),
}
watchErrCh := make(chan error)
// Create a Pod
_, e := s.client.CoreV1().Pods(testNamespace).Create(p)
assert.NilError(t, e)
// Setup a watch to check if the pod is in running
watcher, err := s.client.CoreV1().Pods(testNamespace).Watch(listOptions)
assert.NilError(t, err)
defer watcher.Stop()
go func() {
newPod, watchErr := watchutils.UntilWithoutRetry(ctx, watcher,
// Wait for the pod to be started
func(ev watch.Event) (bool, error) {
pod := ev.Object.(*corev1.Pod)
return pod.Status.Phase == corev1.PodRunning, nil
})
// This deepcopy is required to please the race detector
p = newPod.Object.(*corev1.Pod).DeepCopy()
watchErrCh <- watchErr
}()
// Start the pod controller
assert.NilError(t, s.start(ctx))
// Wait for pod to be in running
select {
case <-ctx.Done():
t.Fatalf("Context ended early: %s", ctx.Err().Error())
case err = <-watchErrCh:
assert.NilError(t, err)
}
// Update the pod
version, err := strconv.Atoi(p.ResourceVersion)
if err != nil {
t.Fatalf("Could not parse pod's resource version: %s", err.Error())
}
p.ResourceVersion = strconv.Itoa(version + 1)
var activeDeadlineSeconds int64 = 300
p.Spec.ActiveDeadlineSeconds = &activeDeadlineSeconds
log.G(ctx).WithField("pod", p).Info("Updating pod")
_, err = s.client.CoreV1().Pods(p.Namespace).Update(p)
assert.NilError(t, err)
assert.NilError(t, m.updates.until(ctx, func(v int) bool { return v > 0 }))
}
func testPodStatusMissingWhileRunningScenario(ctx context.Context, t *testing.T, s *system, m *mockV0Provider) {
t.Parallel()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
p := newPod()
key, err := buildKey(p)
assert.NilError(t, err)
listOptions := metav1.ListOptions{
FieldSelector: fields.OneTermEqualSelector("metadata.name", p.ObjectMeta.Name).String(),
}
watchErrCh := make(chan error)
// Create a Pod
_, e := s.client.CoreV1().Pods(testNamespace).Create(p)
assert.NilError(t, e)
// Setup a watch to check if the pod is in running
watcher, err := s.client.CoreV1().Pods(testNamespace).Watch(listOptions)
assert.NilError(t, err)
defer watcher.Stop()
go func() {
newPod, watchErr := watchutils.UntilWithoutRetry(ctx, watcher,
// Wait for the pod to be started
func(ev watch.Event) (bool, error) {
pod := ev.Object.(*corev1.Pod)
return pod.Status.Phase == corev1.PodRunning, nil
})
// This deepcopy is required to please the race detector
p = newPod.Object.(*corev1.Pod).DeepCopy()
watchErrCh <- watchErr
}()
// Start the pod controller
assert.NilError(t, s.start(ctx))
// Wait for pod to be in running
select {
case <-ctx.Done():
t.Fatalf("Context ended early: %s", ctx.Err().Error())
case <-s.pc.Done():
assert.NilError(t, s.pc.Err())
t.Fatal("Pod controller exited prematurely without error")
case err = <-watchErrCh:
assert.NilError(t, err)
}
// Setup a watch to check if the pod is in failed due to provider issues
watcher, err = s.client.CoreV1().Pods(testNamespace).Watch(listOptions)
assert.NilError(t, err)
defer watcher.Stop()
go func() {
newPod, watchErr := watchutils.UntilWithoutRetry(ctx, watcher,
// Wait for the pod to be failed
func(ev watch.Event) (bool, error) {
pod := ev.Object.(*corev1.Pod)
return pod.Status.Phase == corev1.PodFailed, nil
})
// This deepcopy is required to please the race detector
p = newPod.Object.(*corev1.Pod).DeepCopy()
watchErrCh <- watchErr
}()
// delete the pod from the mock provider
m.pods.Delete(key)
select {
case <-ctx.Done():
t.Fatalf("Context ended early: %s", ctx.Err().Error())
case <-s.pc.Done():
assert.NilError(t, s.pc.Err())
t.Fatal("Pod controller exited prematurely without error")
case err = <-watchErrCh:
assert.NilError(t, err)
}
assert.Equal(t, p.Status.Reason, podStatusReasonNotFound)
}
func BenchmarkCreatePods(b *testing.B) {
sl := logrus.StandardLogger()
sl.SetLevel(logrus.ErrorLevel)
newLogger := logruslogger.FromLogrus(logrus.NewEntry(sl))
ctx := context.Background()
ctx = log.WithLogger(ctx, newLogger)
assert.NilError(b, wireUpSystem(ctx, newMockProvider(), func(ctx context.Context, s *system) {
benchmarkCreatePods(ctx, b, s)
}))
}
func benchmarkCreatePods(ctx context.Context, b *testing.B, s *system) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
assert.NilError(b, s.start(ctx))
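// Exclude harness setup and controller startup from the benchmark timing.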
b.ResetTimer()
for i := 0; i < b.N; i++ {
pod := newPod(randomizeUID, randomizeName)
_, err := s.client.CoreV1().Pods(pod.Namespace).Create(pod)
assert.NilError(b, err)
assert.NilError(b, ctx.Err())
}
}
type podModifier func(*corev1.Pod)
func randomizeUID(pod *corev1.Pod) {
pod.ObjectMeta.UID = uuid.NewUUID()
}
func randomizeName(pod *corev1.Pod) {
name := fmt.Sprintf("pod-%s", uuid.NewUUID())
pod.Name = name
}
func newPod(podmodifiers ...podModifier) *corev1.Pod {
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "Pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "my-pod",
Namespace: testNamespace,
UID: "4f20ff31-7775-11e9-893d-000c29a24b34",
ResourceVersion: "100",
},
Spec: corev1.PodSpec{
NodeName: testNodeName,
},
Status: corev1.PodStatus{
Phase: corev1.PodPending,
},
}
for _, modifier := range podmodifiers {
modifier(pod)
}
return pod
}

node/mock_test.go

@@ -0,0 +1,292 @@
package node
import (
"context"
"fmt"
"sync"
"time"
"github.com/virtual-kubelet/virtual-kubelet/errdefs"
"github.com/virtual-kubelet/virtual-kubelet/log"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var (
_ PodLifecycleHandler = (*mockV0Provider)(nil)
_ PodNotifier = (*mockProvider)(nil)
)
type waitableInt struct {
cond *sync.Cond
val int
}
func newWaitableInt() *waitableInt {
return &waitableInt{
cond: sync.NewCond(&sync.Mutex{}),
}
}
func (w *waitableInt) read() int {
defer w.cond.L.Unlock()
w.cond.L.Lock()
return w.val
}
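// until blocks until f returns true for the counter's current value, or until
// ctx is cancelled. The helper goroutine broadcasts on cancellation so that a
// waiter parked in Wait wakes up and can observe ctx.Err(); this is the usual
// pattern for making a sync.Cond cancellable by a context.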
func (w *waitableInt) until(ctx context.Context, f func(int) bool) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
<-ctx.Done()
w.cond.Broadcast()
}()
w.cond.L.Lock()
defer w.cond.L.Unlock()
for !f(w.val) {
if err := ctx.Err(); err != nil {
return err
}
w.cond.Wait()
}
return nil
}
func (w *waitableInt) increment() {
w.cond.L.Lock()
defer w.cond.L.Unlock()
w.val++
w.cond.Broadcast()
}
type mockV0Provider struct {
creates *waitableInt
updates *waitableInt
deletes *waitableInt
attemptedDeletes *waitableInt
errorOnDelete error
pods sync.Map
startTime time.Time
realNotifier func(*v1.Pod)
}
type mockProvider struct {
*mockV0Provider
}
// newMockV0Provider creates a new mockV0Provider. The mock legacy provider does not implement the newer asynchronous PodNotifier interface
func newMockV0Provider() *mockV0Provider {
provider := mockV0Provider{
startTime: time.Now(),
creates: newWaitableInt(),
updates: newWaitableInt(),
deletes: newWaitableInt(),
attemptedDeletes: newWaitableInt(),
}
// By default realNotifier is nil, which makes notifier a no-op. When the PodNotifier interface is
// implemented (as in mockProvider), NotifyPods sets realNotifier, and the real underlying callback
// is invoked. This means we don't need to wrap each method.
return &provider
}
// newMockProvider creates a new mockProvider, which wraps mockV0Provider and also implements PodNotifier
func newMockProvider() *mockProvider {
return &mockProvider{mockV0Provider: newMockV0Provider()}
}
// notifier calls the callback that we got from the pod controller to notify it of updates (if it is set)
func (p *mockV0Provider) notifier(pod *v1.Pod) {
if p.realNotifier != nil {
p.realNotifier(pod)
}
}
// CreatePod accepts a Pod definition and stores it in memory.
func (p *mockV0Provider) CreatePod(ctx context.Context, pod *v1.Pod) error {
log.G(ctx).Infof("receive CreatePod %q", pod.Name)
p.creates.increment()
key, err := buildKey(pod)
if err != nil {
return err
}
now := metav1.NewTime(time.Now())
pod.Status = v1.PodStatus{
Phase: v1.PodRunning,
HostIP: "1.2.3.4",
PodIP: "5.6.7.8",
StartTime: &now,
Conditions: []v1.PodCondition{
{
Type: v1.PodInitialized,
Status: v1.ConditionTrue,
},
{
Type: v1.PodReady,
Status: v1.ConditionTrue,
},
{
Type: v1.PodScheduled,
Status: v1.ConditionTrue,
},
},
}
for _, container := range pod.Spec.Containers {
pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, v1.ContainerStatus{
Name: container.Name,
Image: container.Image,
Ready: true,
RestartCount: 0,
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{
StartedAt: now,
},
},
})
}
p.pods.Store(key, pod)
p.notifier(pod)
return nil
}
// UpdatePod accepts a Pod definition and updates its reference.
func (p *mockV0Provider) UpdatePod(ctx context.Context, pod *v1.Pod) error {
log.G(ctx).Infof("receive UpdatePod %q", pod.Name)
p.updates.increment()
key, err := buildKey(pod)
if err != nil {
return err
}
p.pods.Store(key, pod)
p.notifier(pod)
return nil
}
// DeletePod deletes the specified pod out of memory. The PodController deepcopies the pod object
// for us, so we don't have to worry about mutation.
func (p *mockV0Provider) DeletePod(ctx context.Context, pod *v1.Pod) (err error) {
log.G(ctx).Infof("receive DeletePod %q", pod.Name)
p.attemptedDeletes.increment()
if p.errorOnDelete != nil {
return p.errorOnDelete
}
p.deletes.increment()
key, err := buildKey(pod)
if err != nil {
return err
}
if _, exists := p.pods.Load(key); !exists {
return errdefs.NotFound("pod not found")
}
now := metav1.Now()
pod.Status.Phase = v1.PodSucceeded
pod.Status.Reason = "MockProviderPodDeleted"
for idx := range pod.Status.ContainerStatuses {
pod.Status.ContainerStatuses[idx].Ready = false
pod.Status.ContainerStatuses[idx].State = v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
Message: "Mock provider terminated container upon deletion",
FinishedAt: now,
Reason: "MockProviderPodContainerDeleted",
StartedAt: pod.Status.ContainerStatuses[idx].State.Running.StartedAt,
},
}
}
p.notifier(pod)
p.pods.Store(key, pod)
if pod.DeletionGracePeriodSeconds == nil || *pod.DeletionGracePeriodSeconds == 0 {
p.pods.Delete(key)
} else {
time.AfterFunc(time.Duration(*pod.DeletionGracePeriodSeconds)*time.Second, func() {
p.pods.Delete(key)
})
}
return nil
}
// GetPod returns a pod by name that is stored in memory.
func (p *mockV0Provider) GetPod(ctx context.Context, namespace, name string) (pod *v1.Pod, err error) {
log.G(ctx).Infof("receive GetPod %q", name)
key, err := buildKeyFromNames(namespace, name)
if err != nil {
return nil, err
}
if pod, ok := p.pods.Load(key); ok {
return pod.(*v1.Pod).DeepCopy(), nil
}
return nil, errdefs.NotFoundf("pod \"%s/%s\" is not known to the provider", namespace, name)
}
// GetPodStatus returns the status of a pod by name from the in-memory store.
// It returns a NotFound error if a pod by that name is not known to the provider.
func (p *mockV0Provider) GetPodStatus(ctx context.Context, namespace, name string) (*v1.PodStatus, error) {
log.G(ctx).Infof("receive GetPodStatus %q", name)
pod, err := p.GetPod(ctx, namespace, name)
if err != nil {
return nil, err
}
return pod.Status.DeepCopy(), nil
}
// GetPods returns a list of all pods known to the provider.
func (p *mockV0Provider) GetPods(ctx context.Context) ([]*v1.Pod, error) {
log.G(ctx).Info("receive GetPods")
var pods []*v1.Pod
p.pods.Range(func(key, pod interface{}) bool {
pods = append(pods, pod.(*v1.Pod).DeepCopy())
return true
})
return pods, nil
}
// NotifyPods is called to set a pod notifier callback function. This should be called before any operations are done
// within the provider.
func (p *mockProvider) NotifyPods(ctx context.Context, notifier func(*v1.Pod)) {
p.realNotifier = notifier
}
func buildKeyFromNames(namespace string, name string) (string, error) {
return fmt.Sprintf("%s-%s", namespace, name), nil
}
// buildKey is a helper for building the "key" for the provider's pod store.
func buildKey(pod *v1.Pod) (string, error) {
if pod.ObjectMeta.Namespace == "" {
return "", fmt.Errorf("pod namespace not found")
}
if pod.ObjectMeta.Name == "" {
return "", fmt.Errorf("pod name not found")
}
return buildKeyFromNames(pod.ObjectMeta.Namespace, pod.ObjectMeta.Name)
}


@@ -38,7 +38,7 @@ import (
//
// Note: Implementers can choose to manage a node themselves, in which case
// it is not needed to provide an implementation for this interface.
type NodeProvider interface {
type NodeProvider interface { //nolint:golint
// Ping checks if the node is still active.
// This is intended to be lightweight as it will be called periodically as a
// heartbeat to keep the node marked as ready in Kubernetes.
@@ -168,7 +168,7 @@ const (
// Run registers the node in kubernetes and starts loops for updating the node
// status in Kubernetes.
//
// The node status must be updated periodically in Kubertnetes to keep the node
// The node status must be updated periodically in Kubernetes to keep the node
// active. Newer versions of Kubernetes support node leases, which are
// essentially light weight pings. Older versions of Kubernetes require updating
// the node status periodically.
@@ -244,7 +244,12 @@ func (n *NodeController) controlLoop(ctx context.Context) error {
statusTimer := time.NewTimer(n.statusInterval)
defer statusTimer.Stop()
timerResetDuration := n.statusInterval
if n.disableLease {
// when resetting the timer after processing a status update, reset it to the ping interval
// (since it will be the ping timer as n.disableLease == true)
timerResetDuration = n.pingInterval
// hack to make sure this channel always blocks since we won't be using it
if !statusTimer.Stop() {
<-statusTimer.C
@@ -267,7 +272,7 @@ func (n *NodeController) controlLoop(ctx context.Context) error {
log.G(ctx).Debug("Received node status update")
// Performing a status update so stop/reset the status update timer in this
// branch otherwise there could be an uneccessary status update.
// branch otherwise there could be an unnecessary status update.
if !t.Stop() {
<-t.C
}
@@ -276,7 +281,7 @@ func (n *NodeController) controlLoop(ctx context.Context) error {
if err := n.updateStatus(ctx, false); err != nil {
log.G(ctx).WithError(err).Error("Error handling node status update")
}
t.Reset(n.statusInterval)
t.Reset(timerResetDuration)
case <-statusTimer.C:
if err := n.updateStatus(ctx, false); err != nil {
log.G(ctx).WithError(err).Error("Error handling node status update")
@@ -443,7 +448,7 @@ func preparePatchBytesforNodeStatus(nodeName types.NodeName, oldNode *corev1.Nod
// It first fetches the current node details and then sets the status according
// to the passed in node object.
//
// If you use this function, it is up to you to syncronize this with other operations.
// If you use this function, it is up to you to synchronize this with other operations.
// This reduces the time to second-level precision.
func updateNodeStatus(ctx context.Context, nodes v1.NodeInterface, n *corev1.Node) (_ *corev1.Node, retErr error) {
ctx, span := trace.StartSpan(ctx, "UpdateNodeStatus")


@@ -44,7 +44,7 @@ func testNodeRun(t *testing.T, enableLease bool) {
testP := &testNodeProvider{NodeProvider: &NaiveNodeProvider{}}
nodes := c.CoreV1().Nodes()
leases := c.Coordination().Leases(corev1.NamespaceNodeLease)
leases := c.CoordinationV1beta1().Leases(corev1.NamespaceNodeLease)
interval := 1 * time.Millisecond
opts := []NodeControllerOpt{
@@ -54,7 +54,11 @@ func testNodeRun(t *testing.T, enableLease bool) {
if enableLease {
opts = append(opts, WithNodeEnableLeaseV1Beta1(leases, nil))
}
node, err := NewNodeController(testP, testNode(t), nodes, opts...)
testNode := testNode(t)
// We have to refer to testNodeCopy during the course of the test. testNode is modified by the node controller
// so it will trigger the race detector.
testNodeCopy := testNode.DeepCopy()
node, err := NewNodeController(testP, testNode, nodes, opts...)
assert.NilError(t, err)
chErr := make(chan error)
@@ -68,11 +72,11 @@ func testNodeRun(t *testing.T, enableLease bool) {
close(chErr)
}()
nw := makeWatch(t, nodes, node.n.Name)
nw := makeWatch(t, nodes, testNodeCopy.Name)
defer nw.Stop()
nr := nw.ResultChan()
lw := makeWatch(t, leases, node.n.Name)
lw := makeWatch(t, leases, testNodeCopy.Name)
defer lw.Stop()
lr := lw.ResultChan()
@@ -105,7 +109,8 @@ func testNodeRun(t *testing.T, enableLease bool) {
leaseUpdates++
assert.Assert(t, cmp.Equal(l.Spec.HolderIdentity != nil, true))
assert.Check(t, cmp.Equal(*l.Spec.HolderIdentity, node.n.Name))
assert.NilError(t, err)
assert.Check(t, cmp.Equal(*l.Spec.HolderIdentity, testNodeCopy.Name))
if lBefore != nil {
assert.Check(t, before(lBefore.Spec.RenewTime.Time, l.Spec.RenewTime.Time))
}
@@ -125,14 +130,15 @@ func testNodeRun(t *testing.T, enableLease bool) {
}
// trigger an async node status update
n := node.n.DeepCopy()
n, err := nodes.Get(testNode.Name, metav1.GetOptions{})
assert.NilError(t, err)
newCondition := corev1.NodeCondition{
Type: corev1.NodeConditionType("UPDATED"),
LastTransitionTime: metav1.Now().Rfc3339Copy(),
}
n.Status.Conditions = append(n.Status.Conditions, newCondition)
nw = makeWatch(t, nodes, node.n.Name)
nw = makeWatch(t, nodes, testNodeCopy.Name)
defer nw.Stop()
nr = nw.ResultChan()
@@ -216,7 +222,7 @@ func TestNodeCustomUpdateStatusErrorHandler(t *testing.T) {
}
func TestEnsureLease(t *testing.T) {
c := testclient.NewSimpleClientset().Coordination().Leases(corev1.NamespaceNodeLease)
c := testclient.NewSimpleClientset().CoordinationV1beta1().Leases(corev1.NamespaceNodeLease)
n := testNode(t)
ctx := context.Background()
@@ -270,7 +276,7 @@ func TestUpdateNodeStatus(t *testing.T) {
}
func TestUpdateNodeLease(t *testing.T) {
leases := testclient.NewSimpleClientset().Coordination().Leases(corev1.NamespaceNodeLease)
leases := testclient.NewSimpleClientset().CoordinationV1beta1().Leases(corev1.NamespaceNodeLease)
lease := newLease(nil)
n := testNode(t)
setLeaseAttrs(lease, n, 0)
@@ -296,6 +302,78 @@ func TestUpdateNodeLease(t *testing.T) {
assert.Assert(t, cmp.DeepEqual(compare.Spec.HolderIdentity, lease.Spec.HolderIdentity))
}
// TestPingAfterStatusUpdate checks that Ping continues to be called with the specified interval
// after a node status update occurs, when leases are disabled.
//
// Timing ratios used in this test:
// ping interval (10 ms)
// maximum allowed interval = 2.5 * ping interval
// status update interval = 6 * ping interval
//
// The allowed maximum time is 2.5 times the ping interval because
// the status update resets the ping interval timer, meaning
// that there can be a full two interval durations between
// successive calls to Ping. The extra half is to allow
// for timing variations when using such short durations.
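// Worked example with the 10 ms interval used below: if a ping fires at t=0
// and a status update arrives at t=9 ms, the ping timer is reset and the next
// ping fires near t=19 ms, i.e. almost two full intervals apart; the extra
// 0.5 absorbs scheduler jitter at these short durations.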
//
// Once the node controller is ready:
// send status update after 10 * ping interval
// end test after another 10 * ping interval
func TestPingAfterStatusUpdate(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
c := testclient.NewSimpleClientset()
nodes := c.CoreV1().Nodes()
testP := &testNodeProviderPing{}
interval := 10 * time.Millisecond
maxAllowedInterval := time.Duration(2.5 * float64(interval.Nanoseconds()))
opts := []NodeControllerOpt{
WithNodePingInterval(interval),
WithNodeStatusUpdateInterval(interval * time.Duration(6)),
}
testNode := testNode(t)
testNodeCopy := testNode.DeepCopy()
node, err := NewNodeController(testP, testNode, nodes, opts...)
assert.NilError(t, err)
chErr := make(chan error, 1)
go func() {
chErr <- node.Run(ctx)
}()
timer := time.NewTimer(10 * time.Second)
defer timer.Stop()
// wait for the node to be ready
select {
case <-timer.C:
t.Fatal("timeout waiting for node to be ready")
case <-chErr:
t.Fatalf("node.Run returned earlier than expected: %v", err)
case <-node.Ready():
}
notifyTimer := time.After(interval * time.Duration(10))
select {
case <-notifyTimer:
testP.triggerStatusUpdate(testNodeCopy)
}
endTimer := time.After(interval * time.Duration(10))
select {
case <-endTimer:
break
}
assert.Assert(t, testP.maxPingInterval < maxAllowedInterval, "maximum time between node pings (%v) was greater than the maximum expected interval (%v)", testP.maxPingInterval, maxAllowedInterval)
}
func testNode(t *testing.T) *corev1.Node {
n := &corev1.Node{}
n.Name = strings.ToLower(t.Name())
@@ -317,6 +395,26 @@ func (p *testNodeProvider) triggerStatusUpdate(n *corev1.Node) {
}
}
// testNodeProviderPing tracks the maximum time interval between calls to Ping
type testNodeProviderPing struct {
testNodeProvider
lastPingTime time.Time
maxPingInterval time.Duration
}
func (tnp *testNodeProviderPing) Ping(ctx context.Context) error {
now := time.Now()
if tnp.lastPingTime.IsZero() {
tnp.lastPingTime = now
return nil
}
if now.Sub(tnp.lastPingTime) > tnp.maxPingInterval {
tnp.maxPingInterval = now.Sub(tnp.lastPingTime)
}
tnp.lastPingTime = now
return nil
}
type watchGetter interface {
Watch(metav1.ListOptions) (watch.Interface, error)
}


@@ -20,6 +20,7 @@ import (
"time"
"github.com/davecgh/go-spew/spew"
"github.com/google/go-cmp/cmp"
pkgerrors "github.com/pkg/errors"
"github.com/virtual-kubelet/virtual-kubelet/errdefs"
"github.com/virtual-kubelet/virtual-kubelet/log"
@@ -33,7 +34,12 @@ import (
)
const (
podStatusReasonProviderFailed = "ProviderFailed"
podStatusReasonProviderFailed = "ProviderFailed"
podStatusReasonNotFound = "NotFound"
podStatusMessageNotFound = "The pod status was not found and may have been deleted from the provider"
containerStatusReasonNotFound = "NotFound"
containerStatusMessageNotFound = "Container was not found and was likely deleted"
containerStatusExitCodeNotFound = -137
)
func addPodAttributes(ctx context.Context, span trace.Span, pod *corev1.Pod) context.Context {
@@ -57,32 +63,31 @@ func (pc *PodController) createOrUpdatePod(ctx context.Context, pod *corev1.Pod)
"namespace": pod.GetNamespace(),
})
// We do this so we don't mutate the pod from the informer cache
pod = pod.DeepCopy()
if err := populateEnvironmentVariables(ctx, pod, pc.resourceManager, pc.recorder); err != nil {
span.SetStatus(err)
return err
}
// We have to use a different pod that we pass to the provider than the one that gets used in handleProviderError
// because the provider may manipulate the pod in a separate goroutine while we were doing work
podForProvider := pod.DeepCopy()
// Check if the pod is already known by the provider.
// NOTE: Some providers return a non-nil error in their GetPod implementation when the pod is not found while some other don't.
// Hence, we ignore the error and just act upon the pod if it is non-nil (meaning that the provider still knows about the pod).
if pp, _ := pc.provider.GetPod(ctx, pod.Namespace, pod.Name); pp != nil {
// Pod Update Only Permits update of:
// - `spec.containers[*].image`
// - `spec.initContainers[*].image`
// - `spec.activeDeadlineSeconds`
// - `spec.tolerations` (only additions to existing tolerations)
// compare the hashes of the pod specs to see if the specs actually changed
expected := hashPodSpec(pp.Spec)
if actual := hashPodSpec(pod.Spec); actual != expected {
log.G(ctx).Debugf("Pod %s exists, updating pod in provider", pp.Name)
if origErr := pc.provider.UpdatePod(ctx, pod); origErr != nil {
if podFromProvider, _ := pc.provider.GetPod(ctx, pod.Namespace, pod.Name); podFromProvider != nil {
if !podsEqual(podFromProvider, podForProvider) {
log.G(ctx).Debugf("Pod %s exists, updating pod in provider", podFromProvider.Name)
if origErr := pc.provider.UpdatePod(ctx, podForProvider); origErr != nil {
pc.handleProviderError(ctx, span, origErr, pod)
return origErr
}
log.G(ctx).Info("Updated pod in provider")
}
} else {
if origErr := pc.provider.CreatePod(ctx, pod); origErr != nil {
if origErr := pc.provider.CreatePod(ctx, podForProvider); origErr != nil {
pc.handleProviderError(ctx, span, origErr, pod)
return origErr
}
@@ -91,6 +96,27 @@ func (pc *PodController) createOrUpdatePod(ctx context.Context, pod *corev1.Pod)
return nil
}
// podsEqual checks if two pods are equal according to the fields we know that are allowed
// to be modified after startup time.
func podsEqual(pod1, pod2 *corev1.Pod) bool {
// Pod Update Only Permits update of:
// - `spec.containers[*].image`
// - `spec.initContainers[*].image`
// - `spec.activeDeadlineSeconds`
// - `spec.tolerations` (only additions to existing tolerations)
// - `objectmeta.labels`
// - `objectmeta.annotations`
// compare the values of the pods to see if the values actually changed
return cmp.Equal(pod1.Spec.Containers, pod2.Spec.Containers) &&
cmp.Equal(pod1.Spec.InitContainers, pod2.Spec.InitContainers) &&
cmp.Equal(pod1.Spec.ActiveDeadlineSeconds, pod2.Spec.ActiveDeadlineSeconds) &&
cmp.Equal(pod1.Spec.Tolerations, pod2.Spec.Tolerations) &&
cmp.Equal(pod1.ObjectMeta.Labels, pod2.Labels) &&
cmp.Equal(pod1.ObjectMeta.Annotations, pod2.Annotations)
}
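// (Comparing the mutable fields directly, rather than hashing the entire spec
// as hashPodSpec below does, keeps this check scoped to exactly the fields
// that are allowed to change after the pod has been created.)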
// This is basically the kube runtime's hash container functionality.
// VK only operates at the Pod level so this is adapted for that
func hashPodSpec(spec corev1.PodSpec) uint64 {
@@ -152,7 +178,7 @@ func (pc *PodController) deletePod(ctx context.Context, namespace, name string)
ctx = addPodAttributes(ctx, span, pod)
var delErr error
if delErr = pc.provider.DeletePod(ctx, pod); delErr != nil && !errdefs.IsNotFound(delErr) {
if delErr = pc.provider.DeletePod(ctx, pod.DeepCopy()); delErr != nil && !errdefs.IsNotFound(delErr) {
span.SetStatus(delErr)
return delErr
}
@@ -188,15 +214,15 @@ func (pc *PodController) forceDeletePodResource(ctx context.Context, namespace,
return nil
}
// updatePodStatuses syncs the providers pod status with the kubernetes pod status.
func (pc *PodController) updatePodStatuses(ctx context.Context, q workqueue.RateLimitingInterface) {
ctx, span := trace.StartSpan(ctx, "updatePodStatuses")
// fetchPodStatusesFromProvider syncs the providers pod status with the kubernetes pod status.
func (pc *PodController) fetchPodStatusesFromProvider(ctx context.Context, q workqueue.RateLimitingInterface) {
ctx, span := trace.StartSpan(ctx, "fetchPodStatusesFromProvider")
defer span.End()
// Update all the pods with the provider status.
pods, err := pc.podsLister.List(labels.Everything())
if err != nil {
err = pkgerrors.Wrap(err, "error getting pod list")
err = pkgerrors.Wrap(err, "error getting pod list from kubernetes")
span.SetStatus(err)
log.G(ctx).WithError(err).Error("Error updating pod statuses")
return
@@ -205,75 +231,114 @@ func (pc *PodController) updatePodStatuses(ctx context.Context, q workqueue.Rate
for _, pod := range pods {
if !shouldSkipPodStatusUpdate(pod) {
enqueuePodStatusUpdate(ctx, q, pod)
enrichedPod, err := pc.fetchPodStatusFromProvider(ctx, q, pod)
if err != nil {
log.G(ctx).WithFields(map[string]interface{}{
"name": pod.Name,
"namespace": pod.Namespace,
}).WithError(err).Error("Could not fetch pod status")
} else if enrichedPod != nil {
pc.enqueuePodStatusUpdate(ctx, q, enrichedPod)
}
}
}
}
// fetchPodStatusFromProvider returns a pod (the pod we pass in) enriched with the pod status from the provider. If the pod is not found,
// and it has been 1 minute since the pod was created, or the pod was previously running, it will be marked as failed.
// If a valid pod status cannot be generated, for example, if a pod is not found in the provider, and it has been less than 1 minute
// since pod creation, we will return nil for the pod.
func (pc *PodController) fetchPodStatusFromProvider(ctx context.Context, q workqueue.RateLimitingInterface, podFromKubernetes *corev1.Pod) (*corev1.Pod, error) {
podStatus, err := pc.provider.GetPodStatus(ctx, podFromKubernetes.Namespace, podFromKubernetes.Name)
if errdefs.IsNotFound(err) || (err == nil && podStatus == nil) {
// Only change the status when the pod was already up
// Only doing so when the pod was successfully running makes sure we don't run into race conditions during pod creation.
if podFromKubernetes.Status.Phase == corev1.PodRunning || time.Since(podFromKubernetes.ObjectMeta.CreationTimestamp.Time) > time.Minute {
// Set the pod to failed, this makes sure if the underlying container implementation is gone that a new pod will be created.
podStatus = podFromKubernetes.Status.DeepCopy()
podStatus.Phase = corev1.PodFailed
podStatus.Reason = podStatusReasonNotFound
podStatus.Message = podStatusMessageNotFound
now := metav1.NewTime(time.Now())
for i, c := range podStatus.ContainerStatuses {
if c.State.Running == nil {
continue
}
podStatus.ContainerStatuses[i].State.Terminated = &corev1.ContainerStateTerminated{
ExitCode: containerStatusExitCodeNotFound,
Reason: containerStatusReasonNotFound,
Message: containerStatusMessageNotFound,
FinishedAt: now,
StartedAt: c.State.Running.StartedAt,
ContainerID: c.ContainerID,
}
podStatus.ContainerStatuses[i].State.Running = nil
}
} else if err != nil {
return nil, nil
}
} else if err != nil {
return nil, err
}
pod := podFromKubernetes.DeepCopy()
podStatus.DeepCopyInto(&pod.Status)
return pod, nil
}
func shouldSkipPodStatusUpdate(pod *corev1.Pod) bool {
return pod.Status.Phase == corev1.PodSucceeded ||
pod.Status.Phase == corev1.PodFailed ||
pod.Status.Reason == podStatusReasonProviderFailed
}
func (pc *PodController) updatePodStatus(ctx context.Context, pod *corev1.Pod) error {
if shouldSkipPodStatusUpdate(pod) {
func (pc *PodController) updatePodStatus(ctx context.Context, podFromKubernetes *corev1.Pod, key string) error {
if shouldSkipPodStatusUpdate(podFromKubernetes) {
return nil
}
ctx, span := trace.StartSpan(ctx, "updatePodStatus")
defer span.End()
ctx = addPodAttributes(ctx, span, pod)
ctx = addPodAttributes(ctx, span, podFromKubernetes)
status, err := pc.provider.GetPodStatus(ctx, pod.Namespace, pod.Name)
if err != nil && !errdefs.IsNotFound(err) {
span.SetStatus(err)
return pkgerrors.Wrap(err, "error retreiving pod status")
obj, ok := pc.knownPods.Load(key)
if !ok {
// This means there was a race and the pod has been deleted from K8s
return nil
}
kPod := obj.(*knownPod)
kPod.Lock()
podFromProvider := kPod.lastPodStatusReceivedFromProvider
kPod.Unlock()
// Update the pod's status
if status != nil {
pod.Status = *status
} else {
// Only change the status when the pod was already up
// Only doing so when the pod was successfully running makes sure we don't run into race conditions during pod creation.
if pod.Status.Phase == corev1.PodRunning || pod.ObjectMeta.CreationTimestamp.Add(time.Minute).Before(time.Now()) {
// Set the pod to failed, this makes sure if the underlying container implementation is gone that a new pod will be created.
pod.Status.Phase = corev1.PodFailed
pod.Status.Reason = "NotFound"
pod.Status.Message = "The pod status was not found and may have been deleted from the provider"
for i, c := range pod.Status.ContainerStatuses {
pod.Status.ContainerStatuses[i].State.Terminated = &corev1.ContainerStateTerminated{
ExitCode: -137,
Reason: "NotFound",
Message: "Container was not found and was likely deleted",
FinishedAt: metav1.NewTime(time.Now()),
StartedAt: c.State.Running.StartedAt,
ContainerID: c.ContainerID,
}
pod.Status.ContainerStatuses[i].State.Running = nil
}
}
}
if _, err := pc.client.Pods(pod.Namespace).UpdateStatus(pod); err != nil {
if _, err := pc.client.Pods(podFromKubernetes.Namespace).UpdateStatus(podFromProvider); err != nil {
span.SetStatus(err)
return pkgerrors.Wrap(err, "error while updating pod status in kubernetes")
}
log.G(ctx).WithFields(log.Fields{
"new phase": string(pod.Status.Phase),
"new reason": pod.Status.Reason,
"new phase": string(podFromProvider.Status.Phase),
"new reason": podFromProvider.Status.Reason,
"old phase": string(podFromKubernetes.Status.Phase),
"old reason": podFromKubernetes.Status.Reason,
}).Debug("Updated pod status in kubernetes")
return nil
}
func enqueuePodStatusUpdate(ctx context.Context, q workqueue.RateLimitingInterface, pod *corev1.Pod) {
// enqueuePodStatusUpdate updates our pod status map, and marks the pod as dirty in the workqueue. The pod must be DeepCopy'd
// prior to enqueuePodStatusUpdate.
func (pc *PodController) enqueuePodStatusUpdate(ctx context.Context, q workqueue.RateLimitingInterface, pod *corev1.Pod) {
if key, err := cache.MetaNamespaceKeyFunc(pod); err != nil {
log.G(ctx).WithError(err).WithField("method", "enqueuePodStatusUpdate").Error("Error getting pod meta namespace key")
} else {
q.AddRateLimited(key)
if obj, ok := pc.knownPods.Load(key); ok {
kpod := obj.(*knownPod)
kpod.Lock()
kpod.lastPodStatusReceivedFromProvider = pod
kpod.Unlock()
q.AddRateLimited(key)
}
}
}
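// (If the key is absent from knownPods, the status update is silently dropped;
// per the knownPods comment, this should only happen once the pod has been
// removed, in which case there is nothing left to update.)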
@@ -292,7 +357,7 @@ func (pc *PodController) podStatusHandler(ctx context.Context, key string) (retE
namespace, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
return pkgerrors.Wrap(err, "error spliting cache key")
return pkgerrors.Wrap(err, "error splitting cache key")
}
pod, err := pc.podsLister.Pods(namespace).Get(name)
@@ -304,5 +369,5 @@ func (pc *PodController) podStatusHandler(ctx context.Context, key string) (retE
return pkgerrors.Wrap(err, "error looking up pod")
}
return pc.updatePodStatus(ctx, pod)
return pc.updatePodStatus(ctx, pod, key)
}


@@ -16,8 +16,8 @@ package node
import (
"context"
"path"
"testing"
"time"
pkgerrors "github.com/pkg/errors"
"github.com/virtual-kubelet/virtual-kubelet/errdefs"
@@ -27,193 +27,104 @@ import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kubeinformers "k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/util/workqueue"
)
type mockProvider struct {
pods map[string]*corev1.Pod
creates int
updates int
deletes int
errorOnDelete error
}
func (m *mockProvider) CreatePod(ctx context.Context, pod *corev1.Pod) error {
m.pods[path.Join(pod.GetNamespace(), pod.GetName())] = pod
m.creates++
return nil
}
func (m *mockProvider) UpdatePod(ctx context.Context, pod *corev1.Pod) error {
m.pods[path.Join(pod.GetNamespace(), pod.GetName())] = pod
m.updates++
return nil
}
func (m *mockProvider) GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) {
p := m.pods[path.Join(namespace, name)]
if p == nil {
return nil, errdefs.NotFound("not found")
}
return p, nil
}
func (m *mockProvider) GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) {
p := m.pods[path.Join(namespace, name)]
if p == nil {
return nil, errdefs.NotFound("not found")
}
return &p.Status, nil
}
func (m *mockProvider) DeletePod(ctx context.Context, p *corev1.Pod) error {
if m.errorOnDelete != nil {
return m.errorOnDelete
}
delete(m.pods, path.Join(p.GetNamespace(), p.GetName()))
m.deletes++
return nil
}
func (m *mockProvider) GetPods(_ context.Context) ([]*corev1.Pod, error) {
ls := make([]*corev1.Pod, 0, len(m.pods))
for _, p := range ls {
ls = append(ls, p)
}
return ls, nil
}
type TestController struct {
*PodController
mock *mockProvider
client *fake.Clientset
}
func newMockProvider() *mockProvider {
return &mockProvider{pods: make(map[string]*corev1.Pod)}
}
func newTestController() *TestController {
fk8s := fake.NewSimpleClientset()
rm := testutil.FakeResourceManager()
p := newMockProvider()
iFactory := kubeinformers.NewSharedInformerFactoryWithOptions(fk8s, 10*time.Minute)
return &TestController{
PodController: &PodController{
client: fk8s.CoreV1(),
provider: p,
resourceManager: rm,
recorder: testutil.FakeEventRecorder(5),
k8sQ: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
done: make(chan struct{}),
ready: make(chan struct{}),
podsInformer: iFactory.Core().V1().Pods(),
},
mock: p,
client: fk8s,
}
}
func TestPodHashingEqual(t *testing.T) {
p1 := corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12-perl",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
h1 := hashPodSpec(p1)
p2 := corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12-perl",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
h2 := hashPodSpec(p2)
assert.Check(t, is.Equal(h1, h2))
// Run starts the informer and runs the pod controller
func (tc *TestController) Run(ctx context.Context, n int) error {
go tc.podsInformer.Informer().Run(ctx.Done())
return tc.PodController.Run(ctx, n)
}
func TestPodHashingDifferent(t *testing.T) {
p1 := corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
func TestPodsEqual(t *testing.T) {
p1 := &corev1.Pod{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12-perl",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
},
}
h1 := hashPodSpec(p1)
p2 := p1.DeepCopy()
p2 := corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12-perl",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
assert.Assert(t, podsEqual(p1, p2))
}
func TestPodsDifferent(t *testing.T) {
p1 := &corev1.Pod{
Spec: newPodSpec(),
}
h2 := hashPodSpec(p2)
assert.Check(t, h1 != h2)
p2 := p1.DeepCopy()
p2.Spec.Containers[0].Image = "nginx:1.15.12-perl"
assert.Assert(t, !podsEqual(p1, p2))
}
func TestPodsDifferentIgnoreValue(t *testing.T) {
p1 := &corev1.Pod{
Spec: newPodSpec(),
}
p2 := p1.DeepCopy()
p2.Status.Phase = corev1.PodFailed
assert.Assert(t, podsEqual(p1, p2))
}
func TestPodCreateNewPod(t *testing.T) {
svr := newTestController()
pod := &corev1.Pod{}
pod.ObjectMeta.Namespace = "default"
pod.ObjectMeta.Name = "nginx"
pod.Spec = corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
pod.ObjectMeta.Namespace = "default" //nolint:goconst
pod.ObjectMeta.Name = "nginx" //nolint:goconst
pod.Spec = newPodSpec()
err := svr.createOrUpdatePod(context.Background(), pod)
err := svr.createOrUpdatePod(context.Background(), pod.DeepCopy())
assert.Check(t, is.Nil(err))
// createOrUpdate called CreatePod but did not call UpdatePod because the pod did not exist
assert.Check(t, is.Equal(svr.mock.creates, 1))
assert.Check(t, is.Equal(svr.mock.updates, 0))
assert.Check(t, is.Equal(svr.mock.creates.read(), 1))
assert.Check(t, is.Equal(svr.mock.updates.read(), 0))
}
func TestPodUpdateExisting(t *testing.T) {
@@ -222,50 +133,22 @@ func TestPodUpdateExisting(t *testing.T) {
pod := &corev1.Pod{}
pod.ObjectMeta.Namespace = "default"
pod.ObjectMeta.Name = "nginx"
pod.Spec = corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
pod.Spec = newPodSpec()
err := svr.provider.CreatePod(context.Background(), pod)
err := svr.createOrUpdatePod(context.Background(), pod.DeepCopy())
assert.Check(t, is.Nil(err))
assert.Check(t, is.Equal(svr.mock.creates, 1))
assert.Check(t, is.Equal(svr.mock.updates, 0))
assert.Check(t, is.Equal(svr.mock.creates.read(), 1))
assert.Check(t, is.Equal(svr.mock.updates.read(), 0))
pod2 := &corev1.Pod{}
pod2.ObjectMeta.Namespace = "default"
pod2.ObjectMeta.Name = "nginx"
pod2.Spec = corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12-perl",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
pod2 := pod.DeepCopy()
pod2.Spec.Containers[0].Image = "nginx:1.15.12-perl"
err = svr.createOrUpdatePod(context.Background(), pod2)
err = svr.createOrUpdatePod(context.Background(), pod2.DeepCopy())
assert.Check(t, is.Nil(err))
// createOrUpdate didn't call CreatePod but did call UpdatePod because the spec changed
assert.Check(t, is.Equal(svr.mock.creates, 1))
assert.Check(t, is.Equal(svr.mock.updates, 1))
assert.Check(t, is.Equal(svr.mock.creates.read(), 1))
assert.Check(t, is.Equal(svr.mock.updates.read(), 1))
}
func TestPodNoSpecChange(t *testing.T) {
@@ -274,32 +157,19 @@ func TestPodNoSpecChange(t *testing.T) {
pod := &corev1.Pod{}
pod.ObjectMeta.Namespace = "default"
pod.ObjectMeta.Name = "nginx"
pod.Spec = corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
pod.Spec = newPodSpec()
err := svr.mock.CreatePod(context.Background(), pod)
err := svr.createOrUpdatePod(context.Background(), pod.DeepCopy())
assert.Check(t, is.Nil(err))
assert.Check(t, is.Equal(svr.mock.creates, 1))
assert.Check(t, is.Equal(svr.mock.updates, 0))
assert.Check(t, is.Equal(svr.mock.creates.read(), 1))
assert.Check(t, is.Equal(svr.mock.updates.read(), 0))
err = svr.createOrUpdatePod(context.Background(), pod)
err = svr.createOrUpdatePod(context.Background(), pod.DeepCopy())
assert.Check(t, is.Nil(err))
// createOrUpdate didn't call CreatePod or UpdatePod, spec didn't change
assert.Check(t, is.Equal(svr.mock.creates, 1))
assert.Check(t, is.Equal(svr.mock.updates, 0))
assert.Check(t, is.Equal(svr.mock.creates.read(), 1))
assert.Check(t, is.Equal(svr.mock.updates.read(), 0))
}
func TestPodDelete(t *testing.T) {
@@ -322,14 +192,7 @@ func TestPodDelete(t *testing.T) {
pod := &corev1.Pod{}
pod.ObjectMeta.Namespace = "default"
pod.ObjectMeta.Name = "nginx"
pod.Spec = corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12",
},
},
}
pod.Spec = newPodSpec()
pc := c.client.CoreV1().Pods("default")
@@ -337,9 +200,9 @@ func TestPodDelete(t *testing.T) {
assert.NilError(t, err)
ctx := context.Background()
err = c.createOrUpdatePod(ctx, p) // make sure it's actually created
err = c.createOrUpdatePod(ctx, p.DeepCopy()) // make sure it's actually created
assert.NilError(t, err)
assert.Check(t, is.Equal(c.mock.creates, 1))
assert.Check(t, is.Equal(c.mock.creates.read(), 1))
err = c.deletePod(ctx, pod.Namespace, pod.Name)
assert.Equal(t, pkgerrors.Cause(err), err)
@@ -348,7 +211,7 @@ func TestPodDelete(t *testing.T) {
if tc.delErr == nil {
expectDeletes = 1
}
assert.Check(t, is.Equal(c.mock.deletes, expectDeletes))
assert.Check(t, is.Equal(c.mock.deletes.read(), expectDeletes))
expectDeleted := tc.delErr == nil || errdefs.IsNotFound(tc.delErr)
@@ -361,3 +224,123 @@ func TestPodDelete(t *testing.T) {
})
}
}
func TestFetchPodStatusFromProvider(t *testing.T) {
startedAt := metav1.NewTime(time.Now())
finishedAt := metav1.NewTime(startedAt.Add(time.Second * 10))
containerStateRunning := &corev1.ContainerStateRunning{StartedAt: startedAt}
containerStateTerminated := &corev1.ContainerStateTerminated{StartedAt: startedAt, FinishedAt: finishedAt}
containerStateWaiting := &corev1.ContainerStateWaiting{}
testCases := []struct {
desc string
running *corev1.ContainerStateRunning
terminated *corev1.ContainerStateTerminated
waiting *corev1.ContainerStateWaiting
expectedStartedAt metav1.Time
expectedFinishedAt metav1.Time
}{
{desc: "container in running state", running: containerStateRunning, expectedStartedAt: startedAt},
{desc: "container in terminated state", terminated: containerStateTerminated, expectedStartedAt: startedAt, expectedFinishedAt: finishedAt},
{desc: "container in waiting state", waiting: containerStateWaiting},
}
for _, tc := range testCases {
t.Run(tc.desc, func(t *testing.T) {
c := newTestController()
pod := &corev1.Pod{}
pod.ObjectMeta.Namespace = "default"
pod.ObjectMeta.Name = "nginx"
pod.Status.Phase = corev1.PodRunning
containerStatus := corev1.ContainerStatus{}
if tc.running != nil {
containerStatus.State.Running = tc.running
} else if tc.terminated != nil {
containerStatus.State.Terminated = tc.terminated
} else if tc.waiting != nil {
containerStatus.State.Waiting = tc.waiting
}
pod.Status.ContainerStatuses = []corev1.ContainerStatus{containerStatus}
pc := c.client.CoreV1().Pods("default")
p, err := pc.Create(pod)
assert.NilError(t, err)
ctx := context.Background()
updatedPod, err := c.fetchPodStatusFromProvider(ctx, nil, p)
assert.NilError(t, err)
assert.Equal(t, updatedPod.Status.Phase, corev1.PodFailed)
assert.Assert(t, is.Len(updatedPod.Status.ContainerStatuses, 1))
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Running == nil)
// Test cases for running and terminated container state
if tc.running != nil || tc.terminated != nil {
// Ensure that the container is in terminated state and other states are nil
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Terminated != nil)
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Waiting == nil)
terminated := updatedPod.Status.ContainerStatuses[0].State.Terminated
assert.Equal(t, terminated.StartedAt, tc.expectedStartedAt)
assert.Assert(t, terminated.StartedAt.Before(&terminated.FinishedAt))
if tc.terminated != nil {
assert.Equal(t, terminated.FinishedAt, tc.expectedFinishedAt)
}
} else {
// Test case for waiting container state
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Terminated == nil)
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Waiting != nil)
}
})
}
}
func TestFetchPodStatusFromProviderWithExpiredPod(t *testing.T) {
c := newTestController()
pod := &corev1.Pod{}
pod.ObjectMeta.Namespace = "default"
pod.ObjectMeta.Name = "nginx"
pod.Status.Phase = corev1.PodRunning
containerStatus := corev1.ContainerStatus{}
// We should terminate containers in a pod that has not had a pod status update for more than a minute
startedAt := time.Now().Add(-(time.Minute + time.Second))
containerStatus.State.Running = &corev1.ContainerStateRunning{StartedAt: metav1.NewTime(startedAt)}
pod.ObjectMeta.CreationTimestamp.Time = startedAt
pod.Status.ContainerStatuses = []corev1.ContainerStatus{containerStatus}
pc := c.client.CoreV1().Pods("default")
p, err := pc.Create(pod)
assert.NilError(t, err)
ctx := context.Background()
updatedPod, err := c.fetchPodStatusFromProvider(ctx, nil, p)
assert.NilError(t, err)
assert.Equal(t, updatedPod.Status.Phase, corev1.PodFailed)
assert.Assert(t, is.Len(updatedPod.Status.ContainerStatuses, 1))
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Running == nil)
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Terminated != nil)
assert.Assert(t, updatedPod.Status.ContainerStatuses[0].State.Waiting == nil)
terminated := updatedPod.Status.ContainerStatuses[0].State.Terminated
assert.Equal(t, terminated.StartedAt, metav1.NewTime(startedAt))
assert.Assert(t, terminated.StartedAt.Before(&terminated.FinishedAt))
}
func newPodSpec() corev1.PodSpec {
return corev1.PodSpec{
Containers: []corev1.Container{
corev1.Container{
Name: "nginx",
Image: "nginx:1.15.12",
Ports: []corev1.ContainerPort{
corev1.ContainerPort{
ContainerPort: 443,
Protocol: "tcp",
},
},
},
},
}
}


@@ -20,7 +20,6 @@ import (
"reflect"
"strconv"
"sync"
"time"
pkgerrors "github.com/pkg/errors"
"github.com/virtual-kubelet/virtual-kubelet/errdefs"
@@ -29,7 +28,6 @@ import (
"github.com/virtual-kubelet/virtual-kubelet/trace"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/util/wait"
corev1informers "k8s.io/client-go/informers/core/v1"
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
corev1listers "k8s.io/client-go/listers/core/v1"
@@ -55,23 +53,36 @@ type PodLifecycleHandler interface {
DeletePod(ctx context.Context, pod *corev1.Pod) error
// GetPod retrieves a pod by name from the provider (can be cached).
// The Pod returned is expected to be immutable, and may be accessed
// concurrently outside of the calling goroutine. Therefore it is recommended
// to return a version after DeepCopy.
GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error)
// GetPodStatus retrieves the status of a pod by name from the provider.
// The PodStatus returned is expected to be immutable, and may be accessed
// concurrently outside of the calling goroutine. Therefore it is recommended
// to return a version after DeepCopy.
GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error)
// GetPods retrieves a list of all pods running on the provider (can be cached).
// The Pods returned are expected to be immutable, and may be accessed
// concurrently outside of the calling goroutine. Therefore it is recommended
// to return a version after DeepCopy.
GetPods(context.Context) ([]*corev1.Pod, error)
}
// PodNotifier notifies callers of pod changes.
// Providers should implement this interface to enable callers to be notified
// of pod status updates asyncronously.
// of pod status updates asynchronously.
type PodNotifier interface {
// NotifyPods instructs the notifier to call the passed in function when
// the pod status changes.
// the pod status changes. It should be called when a pod's status changes.
//
// NotifyPods should not block callers.
// The provided pointer to a Pod is guaranteed to be used in a read-only
// fashion. The provided pod's PodStatus should be up to date when
// this function is called.
//
// NotifyPods will not block callers.
NotifyPods(context.Context, func(*corev1.Pod))
}
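For a provider that supports async updates, the wiring typically looks like the following. This is a minimal sketch, not part of this change: the provider type and its internal status-change hook are hypothetical, and only the `NotifyPods` signature comes from the interface above.

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
)

// asyncProvider is a hypothetical provider that pushes pod status
// changes to the pod controller via the PodNotifier contract.
type asyncProvider struct {
	notify func(*corev1.Pod)
}

// NotifyPods stores the callback. It must not block the caller.
func (p *asyncProvider) NotifyPods(ctx context.Context, notify func(*corev1.Pod)) {
	p.notify = notify
}

// onStatusChange would be invoked by the provider's own machinery whenever
// a pod's status changes. The pod is handed off read-only with an
// up-to-date PodStatus, so we pass a DeepCopy.
func (p *asyncProvider) onStatusChange(pod *corev1.Pod) {
	if p.notify != nil {
		p.notify(pod.DeepCopy())
	}
}
```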
@@ -87,13 +98,38 @@ type PodController struct {
// recorder is an event recorder for recording Event resources to the Kubernetes API.
recorder record.EventRecorder
// ready is a channel which will be closed once the pod controller is fully up and running.
// this channel will never be closed if there is an error on startup.
ready chan struct{}
client corev1client.PodsGetter
resourceManager *manager.ResourceManager
k8sQ workqueue.RateLimitingInterface
// From the time of creation to termination, the knownPods map contains the pod's key
// (derived from Kubernetes' cache library) mapped to a *knownPod struct.
knownPods sync.Map
// ready is a channel which will be closed once the pod controller is fully up and running.
// this channel will never be closed if there is an error on startup.
ready chan struct{}
// done is closed when Run returns
// Once done is closed `err` may be set to a non-nil value
done chan struct{}
mu sync.Mutex
// err is set if there is an error while running the pod controller.
// Typically this would be errors that occur during startup.
// Once err is set, `Run` should return.
//
// This is used since `pc.Run()` is typically called in a goroutine and managing
// this can be non-trivial for callers.
err error
}
type knownPod struct {
// You cannot read (or modify) the fields in this struct without taking the lock. The individual fields
// should be immutable to avoid having to hold the lock the entire time you're working with them.
sync.Mutex
lastPodStatusReceivedFromProvider *corev1.Pod
}
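A hedged illustration of that locking contract (the accessor below is hypothetical, not part of this change): the lock is held only for the field read, and because the stored pod is treated as immutable, the pointer may be used after the lock is released.

```go
// lastStatus returns the most recent pod status received from the provider.
func (kp *knownPod) lastStatus() *corev1.Pod {
	kp.Lock()
	defer kp.Unlock()
	// Safe to return without copying: the stored pod is never mutated,
	// only replaced wholesale under the lock.
	return kp.lastPodStatusReceivedFromProvider
}
```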
// PodControllerConfig is used to configure a new PodController.
@@ -112,7 +148,7 @@ type PodControllerConfig struct {
// Informers used for filling details for things like downward API in pod spec.
//
// We are using informers here instead of listers because we'll need the
// We are using informers here instead of listeners because we'll need the
// informer for certain features (like notifications for updated ConfigMaps)
ConfigMapInformer corev1informers.ConfigMapInformer
SecretInformer corev1informers.SecretInformer
@@ -138,41 +174,69 @@ func NewPodController(cfg PodControllerConfig) (*PodController, error) {
if cfg.ServiceInformer == nil {
return nil, errdefs.InvalidInput("missing service informer")
}
if cfg.Provider == nil {
return nil, errdefs.InvalidInput("missing provider")
}
rm, err := manager.NewResourceManager(cfg.PodInformer.Lister(), cfg.SecretInformer.Lister(), cfg.ConfigMapInformer.Lister(), cfg.ServiceInformer.Lister())
if err != nil {
return nil, pkgerrors.Wrap(err, "could not create resource manager")
}
return &PodController{
pc := &PodController{
client: cfg.PodClient,
podsInformer: cfg.PodInformer,
podsLister: cfg.PodInformer.Lister(),
provider: cfg.Provider,
resourceManager: rm,
ready: make(chan struct{}),
done: make(chan struct{}),
recorder: cfg.EventRecorder,
}, nil
k8sQ: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "syncPodsFromKubernetes"),
}
return pc, nil
}
// Run will set up the event handlers for types we are interested in, as well as syncing informer caches and starting workers.
// It will block until the context is cancelled, at which point it will shutdown the work queue and wait for workers to finish processing their current work items.
func (pc *PodController) Run(ctx context.Context, podSyncWorkers int) error {
k8sQ := workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "syncPodsFromKubernetes")
defer k8sQ.ShutDown()
// Run will set up the event handlers for types we are interested in, as well
// as syncing informer caches and starting workers. It will block until the
// context is cancelled, at which point it will shutdown the work queue and
// wait for workers to finish processing their current work items prior to
// returning.
//
// Once this returns, you should not re-use the controller.
func (pc *PodController) Run(ctx context.Context, podSyncWorkers int) (retErr error) {
// Shutdown is idempotent, so we can call it multiple times. This is in case we have to bail out early for some reason.
defer func() {
pc.k8sQ.ShutDown()
pc.mu.Lock()
pc.err = retErr
close(pc.done)
pc.mu.Unlock()
}()
podStatusQueue := workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "syncPodStatusFromProvider")
pc.runProviderSyncWorkers(ctx, podStatusQueue, podSyncWorkers)
pc.runSyncFromProvider(ctx, podStatusQueue)
defer podStatusQueue.ShutDown()
// Set up event handlers for when Pod resources change.
// Wait for the caches to be synced *before* starting to do work.
if ok := cache.WaitForCacheSync(ctx.Done(), pc.podsInformer.Informer().HasSynced); !ok {
return pkgerrors.New("failed to wait for caches to sync")
}
log.G(ctx).Info("Pod cache in-sync")
// Set up event handlers for when Pod resources change. Since the pod cache is in-sync, the informer will generate
// synthetic add events at this point. This also avoids the race condition of adding handlers while the cache is
// still syncing.
pc.podsInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(pod interface{}) {
if key, err := cache.MetaNamespaceKeyFunc(pod); err != nil {
log.L.Error(err)
log.G(ctx).Error(err)
} else {
k8sQ.AddRateLimited(key)
pc.knownPods.Store(key, &knownPod{})
pc.k8sQ.AddRateLimited(key)
}
},
UpdateFunc: func(oldObj, newObj interface{}) {
@@ -189,37 +253,46 @@ func (pc *PodController) Run(ctx context.Context, podSyncWorkers int) error {
}
// At this point we know that something in .metadata or .spec has changed, so we must proceed to sync the pod.
if key, err := cache.MetaNamespaceKeyFunc(newPod); err != nil {
log.L.Error(err)
log.G(ctx).Error(err)
} else {
k8sQ.AddRateLimited(key)
pc.k8sQ.AddRateLimited(key)
}
},
DeleteFunc: func(pod interface{}) {
if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(pod); err != nil {
log.L.Error(err)
log.G(ctx).Error(err)
} else {
k8sQ.AddRateLimited(key)
pc.knownPods.Delete(key)
pc.k8sQ.AddRateLimited(key)
}
},
})
// Wait for the caches to be synced *before* starting workers.
if ok := cache.WaitForCacheSync(ctx.Done(), pc.podsInformer.Informer().HasSynced); !ok {
return pkgerrors.New("failed to wait for caches to sync")
}
log.G(ctx).Info("Pod cache in-sync")
// Perform a reconciliation step that deletes any dangling pods from the provider.
// This happens only when the virtual-kubelet is starting, and operates on a "best-effort" basis.
// If for any reason the provider fails to delete a dangling pod, it will stay in the provider and deletion won't be retried.
pc.deleteDanglingPods(ctx, podSyncWorkers)
log.G(ctx).Info("starting workers")
wg := sync.WaitGroup{}
// Use the worker's "index" as its ID so we can use it for tracing.
for id := 0; id < podSyncWorkers; id++ {
go wait.Until(func() {
// Use the worker's "index" as its ID so we can use it for tracing.
pc.runWorker(ctx, strconv.Itoa(id), k8sQ)
}, time.Second, ctx.Done())
wg.Add(1)
workerID := strconv.Itoa(id)
go func() {
defer wg.Done()
pc.runSyncPodStatusFromProviderWorker(ctx, workerID, podStatusQueue)
}()
}
for id := 0; id < podSyncWorkers; id++ {
wg.Add(1)
workerID := strconv.Itoa(id)
go func() {
defer wg.Done()
pc.runSyncPodsFromKubernetesWorker(ctx, workerID, pc.k8sQ)
}()
}
close(pc.ready)
@@ -227,32 +300,49 @@ func (pc *PodController) Run(ctx context.Context, podSyncWorkers int) error {
log.G(ctx).Info("started workers")
<-ctx.Done()
log.G(ctx).Info("shutting down workers")
pc.k8sQ.ShutDown()
podStatusQueue.ShutDown()
wg.Wait()
return nil
}
// Ready returns a channel which gets closed once the PodController is ready to handle scheduled pods.
// This channel will never close if there is an error on startup.
// The status of this channel after sthudown is indeterminate.
// The status of this channel after shutdown is indeterminate.
func (pc *PodController) Ready() <-chan struct{} {
return pc.ready
}
// runWorker is a long-running function that will continually call the processNextWorkItem function in order to read and process an item on the work queue.
func (pc *PodController) runWorker(ctx context.Context, workerId string, q workqueue.RateLimitingInterface) {
for pc.processNextWorkItem(ctx, workerId, q) {
// Done returns a receive-only channel that is closed when the pod controller has exited.
// Once the pod controller has exited you can call `pc.Err()` to see if any error occurred.
func (pc *PodController) Done() <-chan struct{} {
return pc.done
}
// Err returns any error that has occurred and caused the pod controller to exit.
func (pc *PodController) Err() error {
pc.mu.Lock()
defer pc.mu.Unlock()
return pc.err
}
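Together, `Ready()`, `Done()`, and `Err()` let a caller run the controller in a goroutine without hand-rolling error plumbing. A minimal caller-side sketch (the helper name and surrounding setup are illustrative, not part of this change):

```go
package example

import (
	"context"

	"github.com/virtual-kubelet/virtual-kubelet/node"
)

// runPodController starts the controller in the background, waits for it
// to become ready (or to exit early), and surfaces any startup error.
func runPodController(ctx context.Context, pc *node.PodController, podSyncWorkers int) error {
	go pc.Run(ctx, podSyncWorkers) // startup errors are reported via Done()/Err()

	select {
	case <-pc.Ready():
		// Ready to handle scheduled pods.
	case <-pc.Done():
		// Exited before becoming ready; fall through to check Err().
	}
	return pc.Err()
}
```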
// runSyncPodsFromKubernetesWorker is a long-running function that will continually call the processNextWorkItem function
// in order to read and process an item on the work queue that is generated by the pod informer.
func (pc *PodController) runSyncPodsFromKubernetesWorker(ctx context.Context, workerID string, q workqueue.RateLimitingInterface) {
for pc.processNextWorkItem(ctx, workerID, q) {
}
}
// processNextWorkItem will read a single work item off the work queue and attempt to process it by calling the syncHandler.
func (pc *PodController) processNextWorkItem(ctx context.Context, workerId string, q workqueue.RateLimitingInterface) bool {
func (pc *PodController) processNextWorkItem(ctx context.Context, workerID string, q workqueue.RateLimitingInterface) bool {
// We create a span only after popping from the queue so that we can get an adequate picture of how long it took to process the item.
ctx, span := trace.StartSpan(ctx, "processNextWorkItem")
defer span.End()
// Add the ID of the current worker as an attribute to the current span.
ctx = span.WithField(ctx, "workerId", workerId)
ctx = span.WithField(ctx, "workerId", workerID)
return handleQueueItem(ctx, q, pc.syncHandler)
}

View File

@@ -0,0 +1,44 @@
package node
import (
"context"
"testing"
"time"
"gotest.tools/assert"
)
func TestPodControllerExitOnContextCancel(t *testing.T) {
tc := newTestController()
ctx := context.Background()
ctxRun, cancel := context.WithCancel(ctx)
done := make(chan error)
go func() {
done <- tc.Run(ctxRun, 1)
}()
ctxT, cancelT := context.WithTimeout(ctx, 30*time.Second)
select {
case <-ctxT.Done():
assert.NilError(t, ctxT.Err(), "timeout waiting for pod controller to become ready")
case <-tc.Ready():
case <-tc.Done():
}
assert.NilError(t, tc.Err())
cancelT()
cancel()
ctxT, cancelT = context.WithTimeout(ctx, 30*time.Second)
defer cancelT()
select {
case <-ctxT.Done():
assert.NilError(t, ctxT.Err(), "timeout waiting for Run() to exit")
case err := <-done:
assert.NilError(t, err)
}
assert.NilError(t, tc.Err())
}

View File

@@ -16,7 +16,6 @@ package node
import (
"context"
"strconv"
"time"
pkgerrors "github.com/pkg/errors"
@@ -92,16 +91,7 @@ func handleQueueItem(ctx context.Context, q workqueue.RateLimitingInterface, han
return true
}
func (pc *PodController) runProviderSyncWorkers(ctx context.Context, q workqueue.RateLimitingInterface, numWorkers int) {
for i := 0; i < numWorkers; i++ {
go func(index int) {
workerID := strconv.Itoa(index)
pc.runProviderSyncWorker(ctx, workerID, q)
}(i)
}
}
func (pc *PodController) runProviderSyncWorker(ctx context.Context, workerID string, q workqueue.RateLimitingInterface) {
func (pc *PodController) runSyncPodStatusFromProviderWorker(ctx context.Context, workerID string, q workqueue.RateLimitingInterface) {
for pc.processPodStatusUpdate(ctx, workerID, q) {
}
}
@@ -116,7 +106,7 @@ func (pc *PodController) processPodStatusUpdate(ctx context.Context, workerID st
return handleQueueItem(ctx, q, pc.podStatusHandler)
}
// providerSyncLoop syncronizes pod states from the provider back to kubernetes
// providerSyncLoop synchronizes pod states from the provider back to kubernetes
// Deprecated: This is only used when the provider does not support async updates
// Providers should implement async update support, even if it just means copying
// something like this in.
@@ -134,7 +124,7 @@ func (pc *PodController) providerSyncLoop(ctx context.Context, q workqueue.RateL
t.Stop()
ctx, span := trace.StartSpan(ctx, "syncActualState")
pc.updatePodStatuses(ctx, q)
pc.fetchPodStatusesFromProvider(ctx, q)
span.End()
// restart the timer
@@ -146,7 +136,7 @@ func (pc *PodController) providerSyncLoop(ctx context.Context, q workqueue.RateL
func (pc *PodController) runSyncFromProvider(ctx context.Context, q workqueue.RateLimitingInterface) {
if pn, ok := pc.provider.(PodNotifier); ok {
pn.NotifyPods(ctx, func(pod *corev1.Pod) {
enqueuePodStatusUpdate(ctx, q, pod)
pc.enqueuePodStatusUpdate(ctx, q, pod.DeepCopy())
})
} else {
go pc.providerSyncLoop(ctx, q)

test/e2e/README.md Normal file
View File

@@ -0,0 +1,191 @@
# Importable End-To-End Test Suite
Virtual Kubelet (VK) provides an importable end-to-end (E2E) test suite containing a set of common integration tests. As a provider, you can import the test suite and use it to validate your VK implementation.
## Prerequisites
To run the E2E test suite, three things are required:
- A local Kubernetes cluster (we have tested with [Docker for Mac](https://docs.docker.com/docker-for-mac/install/) and [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/));
- Your _kubeconfig_ default context points to the local Kubernetes cluster;
- [skaffold](https://skaffold.dev/docs/getting-started/#installing-skaffold)
> The test suite is based on [VK 1.0](https://github.com/virtual-kubelet/virtual-kubelet/releases/tag/v1.0.0). If your VK implementation is based on a legacy VK library (< v1.0.0), you will have to upgrade it to VK 1.0 using [virtual-kubelet/node-cli](https://github.com/virtual-kubelet/node-cli).
### Skaffold Folder
Before running the E2E test suite, you will need to copy the [`./hack`](../../hack) folder containing Skaffold-related files such as the Dockerfile, manifests, and certificates to your VK project root. Skaffold essentially helps package your virtual kubelet into a container based on the given [`Dockerfile`](../../hack/skaffold/virtual-kubelet/Dockerfile) and deploy it as a pod (see [`pod.yml`](../../hack/skaffold/virtual-kubelet/pod.yml)) to your Kubernetes test cluster. In summary, before running the test suite you will likely need to modify the VK name in those files, customize the VK configuration file, and replace the API server certificates (`<vk-name>-crt.pem` and `<vk-name>-key.pem`).
### Makefile.e2e
Also, you will need to copy [`Makefile.e2e`](../../Makefile.e2e) to your VK project root. It contains the necessary `make` commands to run the E2E test suite. Do not forget to add `include Makefile.e2e` to your `Makefile`, as shown in the snippet below.
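For example, assuming the virtual-kubelet repository is checked out next to your provider repository (the paths here are illustrative):

```bash
# Run from your VK provider's project root.
cp -r ../virtual-kubelet/hack .
cp ../virtual-kubelet/Makefile.e2e .
echo 'include Makefile.e2e' >> Makefile
```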
### File Structure
A minimal VK provider should now have a file structure similar to the one below:
```console
.
├── Makefile
├── Makefile.e2e
├── README.md
├── cmd
│   └── virtual-kubelet
│       └── main.go
├── go.mod
├── go.sum
├── hack
│   └── skaffold
│       └── virtual-kubelet
│           ├── Dockerfile
│           ├── base.yml
│           ├── pod.yml
│           ├── skaffold.yml
│           ├── vkubelet-provider-0-cfg.json
│           ├── vkubelet-provider-0-crt.pem
│           └── vkubelet-provider-0-key.pem
├── test
│   └── e2e
│       └── main_test.go # import and run the E2E test suite here
├── provider.go # provider-specific VK implementation
└── provider_test.go # unit test
```
## Importing the Test Suite
The test suite can be easily imported in your test files (e.g. `./test/e2e/main_test.go`) with the following import statement:
```go
import (
vke2e "github.com/virtual-kubelet/virtual-kubelet/test/e2e"
)
```
### Test Suite Customization
Providers can customize the test suite using `EndToEndTestSuiteConfig`:
```go
// EndToEndTestSuiteConfig is the config passed to initialize the testing framework and test suite.
type EndToEndTestSuiteConfig struct {
// Kubeconfig is the path to the kubeconfig file to use when running the test suite outside a Kubernetes cluster.
Kubeconfig string
// Namespace is the name of the Kubernetes namespace to use for running the test suite (i.e. where to create pods).
Namespace string
// NodeName is the name of the virtual-kubelet node to test.
NodeName string
// WatchTimeout is the duration for which the framework watches for a particular condition to be satisfied (e.g. a pod becoming ready)
WatchTimeout time.Duration
// Setup is a function that sets up provider-specific resources in the test suite
Setup suite.SetUpFunc
// Teardown is a function that tears down provider-specific resources from the test suite
Teardown suite.TeardownFunc
// ShouldSkipTest is a function that determines whether the test suite should skip certain tests
ShouldSkipTest suite.ShouldSkipTestFunc
}
```
> `Setup()` is invoked before running the E2E test suite, and `Teardown()` is invoked after all the E2E tests are finished.
You will need an `EndToEndTestSuiteConfig` to create an `EndToEndTestSuite` using `NewEndToEndTestSuite`. After that, invoke `Run` from `EndToEndTestSuite` to start the test suite. The code snippet below is a minimal example of how to import and run the test suite in your test file.
```go
package e2e

import (
	"flag"
	"fmt"
	"testing"
	"time"

	vke2e "github.com/virtual-kubelet/virtual-kubelet/test/e2e"
)

// Placeholder defaults for this example; adjust them for your provider.
const (
	defaultNamespace = "default"
	defaultNodeName  = "virtual-kubelet"
)

var (
	kubeconfig string
	namespace  string
	nodeName   string
)

// Read the following variables from command-line flags
func init() {
	flag.StringVar(&kubeconfig, "kubeconfig", "", "path to the kubeconfig file to use when running the test suite outside a kubernetes cluster")
	flag.StringVar(&namespace, "namespace", defaultNamespace, "the name of the kubernetes namespace to use for running the test suite (i.e. where to create pods)")
	flag.StringVar(&nodeName, "node-name", defaultNodeName, "the name of the virtual-kubelet node to test")
	flag.Parse()
}

func setup() error {
	fmt.Println("Setting up end-to-end test suite...")
	return nil
}

func teardown() error {
	fmt.Println("Tearing down end-to-end test suite...")
	return nil
}

func shouldSkipTest(testName string) bool {
	// Skip the test 'TestGetStatsSummary'
	return testName == "TestGetStatsSummary"
}

func TestEndToEnd(t *testing.T) {
	config := vke2e.EndToEndTestSuiteConfig{
		Kubeconfig:     kubeconfig,
		Namespace:      namespace,
		NodeName:       nodeName,
		Setup:          setup,
		Teardown:       teardown,
		ShouldSkipTest: shouldSkipTest,
		WatchTimeout:   5 * time.Minute,
	}
	ts := vke2e.NewEndToEndTestSuite(config)
	ts.Run(t)
}
```
## Running the Test Suite
Since our CI uses Minikube, we describe below how to run E2E on top of it.
To create a Minikube cluster, run the following command after [installing Minikube](https://github.com/kubernetes/minikube#installation):
```bash
minikube start
```
To run the E2E test suite, you can run the following command:
```bash
make e2e
```
You can see from the console output whether the tests in the test suite pass or not.
```console
...
=== RUN TestEndToEnd
=== RUN TestEndToEnd/TestCreatePodWithMandatoryInexistentConfigMap
=== RUN TestEndToEnd/TestCreatePodWithMandatoryInexistentSecrets
=== RUN TestEndToEnd/TestCreatePodWithOptionalInexistentConfigMap
=== RUN TestEndToEnd/TestCreatePodWithOptionalInexistentSecrets
=== RUN TestEndToEnd/TestGetStatsSummary
=== RUN TestEndToEnd/TestNodeCreateAfterDelete
=== RUN TestEndToEnd/TestPodLifecycleForceDelete
=== RUN TestEndToEnd/TestPodLifecycleGracefulDelete
--- PASS: TestEndToEnd (21.93s)
--- PASS: TestEndToEnd/TestCreatePodWithMandatoryInexistentConfigMap (0.03s)
--- PASS: TestEndToEnd/TestCreatePodWithMandatoryInexistentSecrets (0.03s)
--- PASS: TestEndToEnd/TestCreatePodWithOptionalInexistentConfigMap (0.55s)
--- PASS: TestEndToEnd/TestCreatePodWithOptionalInexistentSecrets (0.99s)
--- PASS: TestEndToEnd/TestGetStatsSummary (0.80s)
--- PASS: TestEndToEnd/TestNodeCreateAfterDelete (9.63s)
--- PASS: TestEndToEnd/TestPodLifecycleForceDelete (2.05s)
basic.go:158: Created pod: nginx-testpodlifecycleforcedelete-jz84g
basic.go:164: Pod nginx-testpodlifecycleforcedelete-jz84g ready
basic.go:197: Force deleted pod: nginx-testpodlifecycleforcedelete-jz84g
basic.go:214: Pod ended as phase: Running
--- PASS: TestEndToEnd/TestPodLifecycleGracefulDelete (1.04s)
basic.go:87: Created pod: nginx-testpodlifecyclegracefuldelete-r84v7
basic.go:93: Pod nginx-testpodlifecyclegracefuldelete-r84v7 ready
basic.go:120: Deleted pod: nginx-testpodlifecyclegracefuldelete-r84v7
PASS
...
```

View File

@@ -1,5 +1,3 @@
// +build e2e
package e2e
import (
@@ -22,9 +20,9 @@ const (
// TestGetStatsSummary creates a pod having three containers and queries the /stats/summary endpoint of the virtual-kubelet.
// It expects this endpoint to return stats for the current node, as well as for the aforementioned pod and each of its three containers.
func TestGetStatsSummary(t *testing.T) {
func (ts *EndToEndTestSuite) TestGetStatsSummary(t *testing.T) {
// Create a pod with prefix "nginx-" having three containers.
pod, err := f.CreatePod(f.CreateDummyPodObjectWithPrefix(t.Name(), "nginx-", "foo", "bar", "baz"))
pod, err := f.CreatePod(f.CreateDummyPodObjectWithPrefix(t.Name(), "nginx", "foo", "bar", "baz"))
if err != nil {
t.Fatal(err)
}
@@ -69,10 +67,10 @@ func TestGetStatsSummary(t *testing.T) {
// Then, it deletes the pods and verifies that the provider has been asked to delete it.
// These verifications are made using the /stats/summary endpoint of the virtual-kubelet, by checking for the presence or absence of the pods.
// Hence, the provider being tested must implement the PodMetricsProvider interface.
func TestPodLifecycleGracefulDelete(t *testing.T) {
func (ts *EndToEndTestSuite) TestPodLifecycleGracefulDelete(t *testing.T) {
// Create a pod with prefix "nginx-" having a single container.
podSpec := f.CreateDummyPodObjectWithPrefix(t.Name(), "nginx-", "foo")
podSpec.Spec.NodeName = nodeName
podSpec := f.CreateDummyPodObjectWithPrefix(t.Name(), "nginx", "foo")
podSpec.Spec.NodeName = f.NodeName
pod, err := f.CreatePod(podSpec)
if err != nil {
@@ -139,11 +137,11 @@ func TestPodLifecycleGracefulDelete(t *testing.T) {
assert.Assert(t, *podLast.ObjectMeta.GetDeletionGracePeriodSeconds() > 0)
}
// TestPodLifecycleNonGracefulDelete creates one pod and verifies that the provider has created it
// TestPodLifecycleForceDelete creates one pod and verifies that the provider has created it
// and put it in the running lifecycle. It then does a force delete on the pod, and verifies the provider
// has deleted it.
func TestPodLifecycleForceDelete(t *testing.T) {
podSpec := f.CreateDummyPodObjectWithPrefix(t.Name(), "nginx-", "foo")
func (ts *EndToEndTestSuite) TestPodLifecycleForceDelete(t *testing.T) {
podSpec := f.CreateDummyPodObjectWithPrefix(t.Name(), "nginx", "foo")
// Create a pod with prefix "nginx" having a single container.
pod, err := f.CreatePod(podSpec)
if err != nil {
@@ -217,7 +215,7 @@ func TestPodLifecycleForceDelete(t *testing.T) {
// TestCreatePodWithOptionalInexistentSecrets tries to create a pod referencing optional, inexistent secrets.
// It then verifies that the pod is created successfully.
func TestCreatePodWithOptionalInexistentSecrets(t *testing.T) {
func (ts *EndToEndTestSuite) TestCreatePodWithOptionalInexistentSecrets(t *testing.T) {
// Create a pod with a single container referencing optional, inexistent secrets.
pod, err := f.CreatePod(f.CreatePodObjectWithOptionalSecretKey(t.Name()))
if err != nil {
@@ -251,7 +249,7 @@ func TestCreatePodWithOptionalInexistentSecrets(t *testing.T) {
// TestCreatePodWithMandatoryInexistentSecrets tries to create a pod referencing inexistent secrets.
// It then verifies that the pod is not created.
func TestCreatePodWithMandatoryInexistentSecrets(t *testing.T) {
func (ts *EndToEndTestSuite) TestCreatePodWithMandatoryInexistentSecrets(t *testing.T) {
// Create a pod with a single container referencing inexistent secrets.
pod, err := f.CreatePod(f.CreatePodObjectWithMandatorySecretKey(t.Name()))
if err != nil {
@@ -280,7 +278,7 @@ func TestCreatePodWithMandatoryInexistentSecrets(t *testing.T) {
// TestCreatePodWithOptionalInexistentConfigMap tries to create a pod referencing an optional, inexistent config map.
// It then verifies that the pod is created successfully.
func TestCreatePodWithOptionalInexistentConfigMap(t *testing.T) {
func (ts *EndToEndTestSuite) TestCreatePodWithOptionalInexistentConfigMap(t *testing.T) {
// Create a pod with a single container referencing an optional, inexistent config map.
pod, err := f.CreatePod(f.CreatePodObjectWithOptionalConfigMapKey(t.Name()))
if err != nil {
@@ -314,7 +312,7 @@ func TestCreatePodWithOptionalInexistentConfigMap(t *testing.T) {
// TestCreatePodWithMandatoryInexistentConfigMap tries to create a pod referencing an inexistent config map.
// It then verifies that the pod is not created.
func TestCreatePodWithMandatoryInexistentConfigMap(t *testing.T) {
func (ts *EndToEndTestSuite) TestCreatePodWithMandatoryInexistentConfigMap(t *testing.T) {
// Create a pod with a single container referencing an inexistent config map.
pod, err := f.CreatePod(f.CreatePodObjectWithMandatoryConfigMapKey(t.Name()))
if err != nil {

View File

@@ -1,5 +1,3 @@
// +build e2e
package e2e
import (
@@ -17,7 +15,7 @@ import (
// TestNodeCreateAfterDelete makes sure that a node is automatically recreated
// if it is deleted while VK is running.
func TestNodeCreateAfterDelete(t *testing.T) {
func (ts *EndToEndTestSuite) TestNodeCreateAfterDelete(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

test/e2e/suite.go Normal file
View File

@@ -0,0 +1,104 @@
package e2e
import (
"testing"
"time"
"github.com/virtual-kubelet/virtual-kubelet/internal/test/e2e/framework"
"github.com/virtual-kubelet/virtual-kubelet/internal/test/suite"
)
const defaultWatchTimeout = 2 * time.Minute
// f is a testing framework that is accessible across the e2e package
var f *framework.Framework
// EndToEndTestSuite holds the setup, teardown, and shouldSkipTest functions for a specific provider
type EndToEndTestSuite struct {
setup suite.SetUpFunc
teardown suite.TeardownFunc
shouldSkipTest suite.ShouldSkipTestFunc
}
// EndToEndTestSuiteConfig is the config passed to initialize the testing framework and test suite.
type EndToEndTestSuiteConfig struct {
// Kubeconfig is the path to the kubeconfig file to use when running the test suite outside a Kubernetes cluster.
Kubeconfig string
// Namespace is the name of the Kubernetes namespace to use for running the test suite (i.e. where to create pods).
Namespace string
// NodeName is the name of the virtual-kubelet node to test.
NodeName string
// WatchTimeout is the duration for which the framework watches for a particular condition to be satisfied (e.g. a pod becoming ready)
WatchTimeout time.Duration
// Setup is a function that sets up provider-specific resources in the test suite
Setup suite.SetUpFunc
// Teardown is a function that tears down provider-specific resources from the test suite
Teardown suite.TeardownFunc
// ShouldSkipTest is a function that determines whether the test suite should skip certain tests
ShouldSkipTest suite.ShouldSkipTestFunc
}
// Setup runs the setup function from the provider and other
// procedures before running the test suite
func (ts *EndToEndTestSuite) Setup() {
if err := ts.setup(); err != nil {
panic(err)
}
// Wait for the virtual kubelet (deployed as a pod) to become fully ready
if _, err := f.WaitUntilPodReady(f.Namespace, f.NodeName); err != nil {
panic(err)
}
}
// Teardown runs the teardown function from the provider and other
// procedures after running the test suite
func (ts *EndToEndTestSuite) Teardown() {
if err := ts.teardown(); err != nil {
panic(err)
}
}
// ShouldSkipTest returns true if a provider wants to skip running a particular test
func (ts *EndToEndTestSuite) ShouldSkipTest(testName string) bool {
return ts.shouldSkipTest(testName)
}
// Run runs tests registered in the test suite
func (ts *EndToEndTestSuite) Run(t *testing.T) {
suite.Run(t, ts)
}
// NewEndToEndTestSuite returns a new EndToEndTestSuite given a test suite configuration,
// setup, and teardown functions from provider
func NewEndToEndTestSuite(cfg EndToEndTestSuiteConfig) *EndToEndTestSuite {
if cfg.Namespace == "" {
panic("Empty namespace")
} else if cfg.NodeName == "" {
panic("Empty node name")
}
if cfg.WatchTimeout == time.Duration(0) {
cfg.WatchTimeout = defaultWatchTimeout
}
f = framework.NewTestingFramework(cfg.Kubeconfig, cfg.Namespace, cfg.NodeName, cfg.WatchTimeout)
emptyFunc := func() error { return nil }
if cfg.Setup == nil {
cfg.Setup = emptyFunc
}
if cfg.Teardown == nil {
cfg.Teardown = emptyFunc
}
if cfg.ShouldSkipTest == nil {
// This will not skip any test in the test suite
cfg.ShouldSkipTest = func(_ string) bool { return false }
}
return &EndToEndTestSuite{
setup: cfg.Setup,
teardown: cfg.Teardown,
shouldSkipTest: cfg.ShouldSkipTest,
}
}

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package trace abstracts virtual-kubelet's tracing capabilties into a set of
// Package trace abstracts virtual-kubelet's tracing capabilities into a set of
// interfaces.
// While this does allow consumers to use whatever tracing library they want,
// the primary goal is to share logging data between the configured logger and
@@ -27,7 +27,7 @@ import (
// Tracer is the interface used for creating a tracing span
type Tracer interface {
// StartSpan starts a new span. The span details are emebedded into the returned
// StartSpan starts a new span. The span details are embedded into the returned
// context
StartSpan(context.Context, string) (context.Context, Span)
}

View File

@@ -26,69 +26,54 @@ Virtual Kubelet currently has a wide variety of providers:
## Adding new providers {#adding}
To add a new Virtual Kubelet provider, create a new directory for your provider in the [`providers`](https://github.com/virtual-kubelet/virtual-kubelet/tree/master/providers) directory.
To add a new Virtual Kubelet provider, create a new directory for your provider.
```shell
git clone https://github.com/virtual-kubelet/virtual-kubelet
cd virtual-kubelet
mkdir providers/my-provider
```
In that created directory, implement [`PodLifecycleHandler`](https://godoc.org/github.com/virtual-kubelet/virtual-kubelet/node#PodLifecycleHandler) interface in [Go](https://golang.org).
In that created directory, implement the [`Provider`](https://godoc.org/github.com/virtual-kubelet/virtual-kubelet/providers#Provider) interface in [Go](https://golang.org).
> For an example implementation of the Virtual Kubelet `Provider` interface, see the [Virtual Kubelet CRI Provider](https://github.com/virtual-kubelet/virtual-kubelet/tree/master/providers/cri), especially [`cri.go`](https://github.com/virtual-kubelet/virtual-kubelet/blob/master/providers/cri/cri.go).
> For an example implementation of the Virtual Kubelet `PodLifecycleHandler` interface, see the [Virtual Kubelet CRI Provider](https://github.com/virtual-kubelet/cri), especially [`cri.go`](https://github.com/virtual-kubelet/cri/blob/master/cri.go).
Each Virtual Kubelet provider can be configured using its own configuration file and environment variables.
You can see the list of required methods, with relevant descriptions of each method, below:
```go
// Provider contains the methods required to implement a Virtual Kubelet provider
type Provider interface {
// Takes a Kubernetes Pod and deploys it within the provider
CreatePod(ctx context.Context, pod *v1.Pod) error
// PodLifecycleHandler defines the interface used by the PodController to react
// to new and changed pods scheduled to the node that is being managed.
//
// Errors produced by these methods should implement an interface from
// github.com/virtual-kubelet/virtual-kubelet/errdefs package in order for the
// core logic to be able to understand the type of failure.
type PodLifecycleHandler interface {
// CreatePod takes a Kubernetes Pod and deploys it within the provider.
CreatePod(ctx context.Context, pod *corev1.Pod) error
// Takes a Kubernetes Pod and updates it within the provider
UpdatePod(ctx context.Context, pod *v1.Pod) error
// UpdatePod takes a Kubernetes Pod and updates it within the provider.
UpdatePod(ctx context.Context, pod *corev1.Pod) error
// Takes a Kubernetes Pod and deletes it from the provider
DeletePod(ctx context.Context, pod *v1.Pod) error
// DeletePod takes a Kubernetes Pod and deletes it from the provider.
DeletePod(ctx context.Context, pod *corev1.Pod) error
// Retrieves a pod by name from the provider (can be cached)
GetPod(ctx context.Context, namespace, name string) (*v1.Pod, error)
// GetPod retrieves a pod by name from the provider (can be cached).
// The Pod returned is expected to be immutable, and may be accessed
// concurrently outside of the calling goroutine. Therefore it is recommended
// to return a version after DeepCopy.
GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error)
// Retrieves the logs of a container by name from the provider
GetContainerLogs(ctx context.Context, namespace, podName, containerName string, tail int) (string, error)
// GetPodStatus retrieves the status of a pod by name from the provider.
// The PodStatus returned is expected to be immutable, and may be accessed
// concurrently outside of the calling goroutine. Therefore it is recommended
// to return a version after DeepCopy.
GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error)
// Executes a command in a container in the pod, copying data between
// in/out/err and the container's stdin/stdout/stderr
ExecInContainer(name string, uid types.UID, container string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error
// Retrieves the status of a pod by name from the provider
GetPodStatus(ctx context.Context, namespace, name string) (*v1.PodStatus, error)
// Retrieves a list of all pods running on the provider (can be cached)
GetPods(context.Context) ([]*v1.Pod, error)
// Returns a resource list with the capacity constraints of the provider
Capacity(context.Context) v1.ResourceList
// Returns a list of conditions (Ready, OutOfDisk, etc), which is polled
// periodically to update the node status within Kubernetes
NodeConditions(context.Context) []v1.NodeCondition
// Returns a list of addresses for the node status within Kubernetes
NodeAddresses(context.Context) []v1.NodeAddress
// Returns NodeDaemonEndpoints for the node status within Kubernetes.
NodeDaemonEndpoints(context.Context) *v1.NodeDaemonEndpoints
// Returns the operating system the provider is for
OperatingSystem() string
// GetPods retrieves a list of all pods running on the provider (can be cached).
// The Pods returned are expected to be immutable, and may be accessed
// concurrently outside of the calling goroutine. Therefore it is recommended
// to return a version after DeepCopy.
GetPods(context.Context) ([]*corev1.Pod, error)
}
```
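As a rough sketch of what an implementation can look like, the in-memory provider below tracks pods in a map and returns deep copies, per the immutability notes above. It is illustrative only: the type and helper names are made up, and a real provider should return errors that implement the `errdefs` interfaces (e.g. a not-found error) rather than plain `fmt.Errorf` values.

```go
package example

import (
	"context"
	"fmt"
	"sync"

	corev1 "k8s.io/api/core/v1"
)

// inMemoryProvider is a hypothetical PodLifecycleHandler that stores pods
// in a map keyed by "namespace/name".
type inMemoryProvider struct {
	mu   sync.Mutex
	pods map[string]*corev1.Pod
}

func newInMemoryProvider() *inMemoryProvider {
	return &inMemoryProvider{pods: map[string]*corev1.Pod{}}
}

func podKey(namespace, name string) string { return namespace + "/" + name }

func (p *inMemoryProvider) CreatePod(ctx context.Context, pod *corev1.Pod) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.pods[podKey(pod.Namespace, pod.Name)] = pod.DeepCopy()
	return nil
}

func (p *inMemoryProvider) UpdatePod(ctx context.Context, pod *corev1.Pod) error {
	return p.CreatePod(ctx, pod) // same store semantics in this sketch
}

func (p *inMemoryProvider) DeletePod(ctx context.Context, pod *corev1.Pod) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.pods, podKey(pod.Namespace, pod.Name))
	return nil
}

// GetPod returns a DeepCopy so callers may use the result concurrently.
func (p *inMemoryProvider) GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	pod, ok := p.pods[podKey(namespace, name)]
	if !ok {
		// A real provider should return an errdefs-style not-found error here.
		return nil, fmt.Errorf("pod %s/%s not found", namespace, name)
	}
	return pod.DeepCopy(), nil
}

func (p *inMemoryProvider) GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) {
	pod, err := p.GetPod(ctx, namespace, name)
	if err != nil {
		return nil, err
	}
	return pod.Status.DeepCopy(), nil
}

func (p *inMemoryProvider) GetPods(ctx context.Context) ([]*corev1.Pod, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	pods := make([]*corev1.Pod, 0, len(p.pods))
	for _, pod := range p.pods {
		pods = append(pods, pod.DeepCopy())
	}
	return pods, nil
}
```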
In addition to `Provider`, there's an optional [`PodMetricsProvider`](https://godoc.org/github.com/virtual-kubelet/virtual-kubelet/providers#PodMetricsProvider) interface that providers can implement to expose Kubernetes Pod stats:
In addition to `PodLifecycleHandler`, there's an optional [`PodMetricsProvider`](https://godoc.org/github.com/virtual-kubelet/virtual-kubelet/cmd/virtual-kubelet/internal/provider#PodMetricsProvider) interface that providers can implement to expose Kubernetes Pod stats:
```go
type PodMetricsProvider interface {
@@ -102,8 +87,6 @@ For a Virtual Kubelet provider to be considered viable, it must support the foll
1. It must conform to the current API provided by Virtual Kubelet (see [above](#adding))
1. It won't have access to the [Kubernetes API server](https://kubernetes.io/docs/concepts/overview/kubernetes-api/), so it must provide a well-defined callback mechanism for fetching data like [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) and [ConfigMaps](https://kubernetes.io/docs/tutorials/configuration/).
In addition to implementing the `Provider` interface in `providers/<your provider>`, you also need to add your provider to the [`providers/register`](https://github.com/virtual-kubelet/virtual-kubelet/tree/master/providers/register) directory, in `provider_<your provider>.go`. Current examples include [`provider_azure.go`](https://github.com/virtual-kubelet/virtual-kubelet/blob/master/providers/register/provider_azure.go) and [`provider_aws.go`](https://github.com/virtual-kubelet/virtual-kubelet/blob/master/providers/register/provider_aws.go), which you can use as templates.
## Documentation
No Virtual Kubelet provider is complete without solid documentation. We strongly recommend providing a README for your provider in its directory. The READMEs for the currently existing implementations can provide a blueprint.

View File

@@ -27,7 +27,7 @@ and the YAML config in data/cli.yaml
It's possible to run the Virtual Kubelet as a Kubernetes Pod inside a [Minikube](https://kubernetes.io/docs/setup/minikube/) or [Docker for Desktop](https://docs.docker.com/docker-for-windows/kubernetes/) Kubernetes cluster.
> At this time, automation of this deployment is supported only for the [`mock`](https://github.com/virtual-kubelet/virtual-kubelet/tree/master/providers/mock) provider.
> At this time, automation of this deployment is supported only for the [`mock`](https://github.com/virtual-kubelet/virtual-kubelet/tree/master/cmd/virtual-kubelet/internal/provider/mock) provider.
In order to deploy the Virtual Kubelet, you need to install [Skaffold](https://skaffold.dev/), a Kubernetes development tool. You also need to make sure that your current [kubectl context](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) is either `minikube` or `docker-for-desktop` (depending on which Kubernetes platform you're using).
@@ -44,7 +44,7 @@ Then:
make skaffold
```
By default, this will run Skaffold in [development mode](https://github.com/GoogleContainerTools/skaffold#skaffold-dev), which will make Skaffold watch [`hack/skaffold/virtual-kubelet/Dockerfile`](https://github.com/virtual-kubelet/virtual-kubelet/blob/master/hack/skaffold/virtual-kubelet/Dockerfile) and its dependencies for changes and re-deploy the Virtual Kubelet when changes happen. It will also make Skaffold stream logs from the Virtual Kubelet Pod.
By default, this will run Skaffold in [development mode](https://github.com/GoogleContainerTools/skaffold#a-glance-at-skaffold-workflow-and-architecture), which will make Skaffold watch [`hack/skaffold/virtual-kubelet/Dockerfile`](https://github.com/virtual-kubelet/virtual-kubelet/blob/master/hack/skaffold/virtual-kubelet/Dockerfile) and its dependencies for changes and re-deploy the Virtual Kubelet when changes happen. It will also make Skaffold stream logs from the Virtual Kubelet Pod.
Alternatively, if you aren't concerned about continuous deployment and log streaming, you can run Skaffold outside of development mode by running:

View File

@@ -1,8 +1,17 @@
description: The command-line tool for running Virtual Kubelets
flags:
- name: --cluster-domain
arg: string
description: Kubernetes cluster domain
default: cluster.local
- name: --disable-taint
arg: bool
description: Disable the Virtual Kubelet [Node taint](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)
default: "false"
- name: --enable-node-lease
arg: bool
description: Use node leases (1.13) for node heartbeats
default: "false"
- name: --full-resync-period
arg: duration
description: How often to perform a full resync of Pods between Kubernetes and the provider
@@ -41,9 +50,13 @@ flags:
- name: --provider-config
arg: string
description: The Virtual Kubelet [provider](/docs/providers) configuration file
- name: --startup-timeout
arg: duration
description: How long to wait for the virtual-kubelet to start
default: 0
- name: --trace-exporter
arg: strings
description: The tracing exporter to use. Available exporters are `jaeger` and `ocagent`.
description: The tracing exporter to use. Available exporters are `jaeger` and `ocagent`
- name: --trace-sample-rate
arg: string
description: The probability of tracing samples
@@ -53,8 +66,4 @@ flags:
default: virtual-kubelet
- name: --trace-tag
arg: map
description: Tags to include with traces, in `key=value` form
- name: --version
description: The Virtual Kubelet version
- name: --help
description: Command help information
description: Tags to include with traces, in `key=value` form
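As a hedged example of how several of the flags documented above fit together on one command line (the binary name and all values here are illustrative, not prescribed by this change):

```bash
virtual-kubelet \
  --disable-taint \
  --enable-node-lease \
  --provider-config ./provider-config.json \
  --startup-timeout 30s \
  --trace-exporter jaeger \
  --trace-tag env=dev
```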