VMware vSphere Integrated Containers provider (#206)
* Add Virtual Kubelet provider for VIC. Initial virtual kubelet provider for VMware VIC. This provider currently handles creating and starting a pod VM via the VIC portlayer and persona server, and image store handling via the VIC persona server. It currently requires the feature/wolfpack branch of VIC.
* Added pod stop and delete, plus node capacity. Added the ability to stop and delete pod VMs via VIC, and to retrieve node capacity information from the VCH.
* Cleanup and readme file. Some file cleanup, and added a Readme.md markdown file for the VIC provider.
* Cleaned up errors, added function comments, moved operation code. 1. Cleaned up error handling and set a standard for creating errors. 2. Added method prototype comments for all interface functions. 3. Moved PodCreator, PodStarter, PodStopper, and PodDeleter to a new folder.
* Add mocking code and unit tests for podcache, podcreator, and podstarter. Uses the unit test framework used in VIC to handle assertions in the provider's unit tests. Mocking code was generated with the OSS project mockery, which is compatible with the testify assertion framework.
* Vendored packages for the VIC provider. Requires the feature/wolfpack branch of VIC and a few specific commit SHAs of projects used within VIC.
* Implementation of Pod Stopper and Deleter unit tests (#4)
* Updated files for initial PR
56 lines: vendor/github.com/vmware/vic/doc/design/test/kubernetes.md (generated, vendored, normal file)
@@ -0,0 +1,56 @@
# Kubernetes initial testing notes

## Required HW setup:

- Ubuntu 16.04
- 4 CPU
- 16GB memory
- 80GB disk
- (might need 8 CPU/32GB of memory, as the setup recommended above was still quite slow)

## Initial install:

- `sudo apt-get update`
- `sudo apt-add-repository ppa:juju/stable`
- `sudo apt-add-repository ppa:conjure-up/next`
- `sudo apt update`
- `sudo apt install conjure-up`

## Configure container hypervisor:

- `newgrp lxd`
- `sudo lxd init`

Walk through the config (just hit enter to accept the defaults), and select NO when asked to set up IPv6.
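
If you would rather skip the interactive walkthrough entirely, newer LXD releases ship a non-interactive mode that accepts all defaults (an assumption about your LXD version; check `lxd init --help`):

```bash
# Accept LXD's defaults without answering the prompts
sudo lxd init --auto
```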

## Point juju at the new hypervisor:

- `juju bootstrap localhost lxd-test`

## Start the k8s cluster:

- `conjure-up canonical-kubernetes`

Just hit enter a few times until you get to the summary, then hit Q.

- `watch -c juju status --color`

Wait until the cluster is completely up and every unit reports active/idle. This can take upwards of an hour!
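
If you would rather not sit on the watch screen, a rough polling loop like the one below can wait for the units to settle; it is only a sketch that greps juju's tabular output, so adjust the pattern to whatever states your juju version prints:

```bash
# Sketch: poll juju status until no unit reports a transitional state.
# The grep pattern is a heuristic against juju 2.x tabular output.
while juju status | grep -qE 'waiting|maintenance|blocked|allocating|executing'; do
  echo "cluster still converging..."
  sleep 60
done
echo "all units look settled"
```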

## Finalize setup:

- `mkdir -p ~/.kube`
- `juju scp kubernetes-master/0:config ~/.kube/config`
- `juju scp kubernetes-master/0:kubectl ~/bin/kubectl`
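
The copied binary may not be executable or on your PATH yet; assuming `~/bin` is where you put it, something like:

```bash
# Make the fetched kubectl executable and visible on PATH
chmod +x ~/bin/kubectl
export PATH="$HOME/bin:$PATH"
```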

## Verify it is working and the cluster is up:

- `kubectl cluster-info`
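
A couple of additional sanity checks are handy at this point (plain kubectl, nothing cluster-specific):

```bash
# All nodes should eventually report Ready
kubectl get nodes
# System pods (DNS, dashboard, etc.) should be Running
kubectl get pods --all-namespaces
```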

## Example commands:

- `kubectl run -i -t busybox --image=busybox --restart=Never`
- `kubectl run nginx --image=nginx`

### Show the running pods:

- `kubectl get pods`

### To scale up the cluster:

- `juju add-unit kubernetes-worker`

### To show the controller:

- `juju switch`

### To destroy the cluster:

- `juju destroy-controller lxd-test --destroy-all-models`
47 lines: vendor/github.com/vmware/vic/doc/design/test/nsx.md (generated, vendored, normal file)
@@ -0,0 +1,47 @@
## NSX Initial testing notes

## Required HW setup:

- vSphere 6.0/6.5 Cluster setup (6.5 wasn't supported for a while, but the latest releases support it)
- An ESX host in the cluster with a minimum of 4 CPU to host the NSX Manager Appliance

## Initial install:

### Deploy a Nimbus Cluster using 5-2-Cluster test.

### Add a beefy ESXi to the cluster to host the NSX appliance.

- `nimbus-esxdeploy --disk=40000000 --nics=2 --memory=90000 --cpus=4 nsx-esx 3620759`

### Update host password for the ESXi:

- `export GOVC_URL=root:@10.x.x.x`
- `govc host.account.update -id root -password xxxxxx`

### Add host to the VC Cluster:

- `export GOVC_URL="Administrator@vSphere.local":password@10.x.x.x`
- `govc cluster.add -hostname=10.x.x.x -username=root -dc=ha-datacenter -password=xxxx -noverify=true`

### Install the NSX manager using OVFTool.

- `ovftool nsx-manager-1.1.0.0.0.4788147.ova nsx-manager-1.1.0.0.0.4788147.ovf`
- `ovftool --datastore=${datastore} --name=${name} --net:"Network 1"="${network}" --diskMode=thin --powerOn --X:waitForIp --X:injectOvfEnv --X:enableHiddenProperties --prop:vami.domain.NSX=mgmt.local --prop:vami.searchpath.NSX=mgmt.local --prop:vami.DNS.NSX=8.8.8.8 --prop:vm.vmname=NSX nsx-manager-1.1.0.0.0.4788147.ovf 'vi://${user}:${password}@${host}'`

### Add ESX nodes into the NSX Manager using the NSX REST API.

- In the manager these live under `Fabric -> Nodes`; register each host using its ESXi credentials (root/password).
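
As a rough illustration of that REST call (the endpoint and body follow the NSX-T fabric-node API as best I recall it for this release; the manager address, credentials, host IP, and thumbprint are all placeholders, so check the API guide for your build):

```bash
# Sketch: register an ESXi host as a fabric node via the NSX REST API.
# nsx-mgr, admin:password, 10.x.x.x, and the thumbprint are placeholders.
curl -k -u admin:password -X POST https://nsx-mgr/api/v1/fabric/nodes \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "HostNode",
        "display_name": "nsx-esx",
        "ip_addresses": ["10.x.x.x"],
        "os_type": "ESXI",
        "host_credential": {
          "username": "root",
          "password": "xxxxxx",
          "thumbprint": "<esxi-ssl-thumbprint>"
        }
      }'
```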

### Create the Transport Zone using the NSX REST API.

### Create a logical switch with the VLAN-based Transport Zone.

### Add the Logical Switch as a Transport node to the ESXi host.
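
For the transport zone and logical switch steps, the calls look roughly like this (endpoints per the NSX-T 1.1 REST API as best I recall; the names, IDs, and VLAN are placeholders, and the transport-node body in particular varies by release, so consult the API guide for that step):

```bash
# Sketch: create a VLAN transport zone, then a logical switch inside it.
# nsx-mgr, credentials, display names, and the VLAN ID are placeholders.
curl -k -u admin:password -X POST https://nsx-mgr/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "vlan-tz",
        "host_switch_name": "hostswitch1",
        "transport_type": "VLAN"
      }'

# Use the "id" field returned above as the transport_zone_id here.
curl -k -u admin:password -X POST https://nsx-mgr/api/v1/logical-switches \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "vic-ls",
        "transport_zone_id": "<transport-zone-id>",
        "admin_state": "UP",
        "vlan": 0
      }'
```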

### Check that the switch is visible by running a govc command (upgrade to the latest govc, 0.12.0, to get this working):

- `govc ls network`

### Install VIC Appliance and Run Regression tests