* Add Virtual Kubelet provider for VIC: Initial virtual kubelet provider for VMware VIC. This provider currently handles creating and starting a pod VM via the VIC portlayer and persona server, and image store handling via the VIC persona server. It currently requires the feature/wolfpack branch of VIC.
* Added pod stop and delete, plus node capacity: Added the ability to stop and delete pod VMs via VIC, and to retrieve node capacity information from the VCH.
* Cleanup and readme file: Some file cleanup, and added a Readme.md markdown file for the VIC provider.
* Cleaned up errors, added function comments, moved operation code:
  1. Cleaned up error handling and set a standard for creating errors.
  2. Added method prototype comments for all interface functions.
  3. Moved PodCreator, PodStarter, PodStopper, and PodDeleter to a new folder.
* Add mocking code and unit tests for podcache, podcreator, and podstarter: Used the unit test framework from VIC to handle assertions in the provider's unit tests. Mocking code was generated using the OSS project mockery, which is compatible with the testify assertion framework.
* Vendored packages for the VIC provider: Requires the feature/wolfpack branch of VIC and a few specific commit SHAs of projects used within VIC.
* Implementation of Pod Stopper and Deleter unit tests (#4)
* Updated files for initial PR
Test 5-1 - Distributed Switch
Purpose:
To verify the VIC appliance works in a variety of different vCenter networking configurations
References:
1 - VMware Distributed Switch Feature
Environment:
This test requires access to the VMware Nimbus cluster for dynamic ESXi and vCenter creation
Test Steps:
- Deploy a new vCenter in Nimbus
- Deploy three new ESXi hosts with 2 NICs each in Nimbus:
nimbus-esxdeploy --nics=2 esx-1 3620759
nimbus-esxdeploy --nics=2 esx-2 3620759
nimbus-esxdeploy --nics=2 esx-3 3620759
- After setting up your govc environment based on the new vCenter deployed, create a new datacenter:
govc datacenter.create ha-datacenter
- Add each of the new hosts to the vCenter:
govc host.add -hostname=<ESXi IP> -username=<USER> -dc=ha-datacenter -password=<PW> -noverify=true
- Create a new distributed switch:
govc dvs.create -dc=ha-datacenter test-ds
- Create three new distributed switch port groups for management and vm network traffic:
govc dvs.portgroup.add -nports 12 -dc=ha-datacenter -dvs=test-ds management
govc dvs.portgroup.add -nports 12 -dc=ha-datacenter -dvs=test-ds vm-network
govc dvs.portgroup.add -nports 12 -dc=ha-datacenter -dvs=test-ds bridge
- Add the three ESXi hosts to the portgroups:
govc dvs.add -dvs=test-ds -pnic=vmnic1 <ESXi IP1>
govc dvs.add -dvs=test-ds -pnic=vmnic1 <ESXi IP2>
govc dvs.add -dvs=test-ds -pnic=vmnic1 <ESXi IP3>
- Deploy VCH Appliance to the new vCenter:
bin/vic-machine-linux create --target=<VC IP> --user=Administrator@vsphere.local --image-store=datastore1 --appliance-iso=bin/appliance.iso --bootstrap-iso=bin/bootstrap.iso --generate-cert=false --password=Admin\!23 --force=true --bridge-network=bridge --compute-resource=/ha-datacenter/host/<ESXi IP 1>/Resources --public-network=vm-network --name=VCH-test
- Run a variety of docker commands on the VCH appliance
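The govc invocations in Steps 3 through 8 can be collected into a single setup script. This is only a sketch: the host addresses and ESX_PASSWORD are placeholder assumptions, it presumes your GOVC_URL environment already points at the newly deployed vCenter, and the RUN guard makes it a dry run by default.

```shell
#!/bin/bash
# Sketch of Steps 3-8: datacenter, hosts, DVS, portgroups, uplinks.
# HOSTS and ESX_PASSWORD are placeholders, not real values.
# RUN=echo gives a dry run; set RUN="" to execute against a live vCenter.
set -e
RUN=${RUN:-echo}

DC=ha-datacenter
DVS=test-ds
HOSTS="192.0.2.11 192.0.2.12 192.0.2.13"   # the three Nimbus ESXi IPs
ESX_PASSWORD=${ESX_PASSWORD:-changeme}

# Step 3: create the datacenter
$RUN govc datacenter.create "$DC"

# Step 4: add each ESXi host to the vCenter
for h in $HOSTS; do
  $RUN govc host.add -hostname="$h" -username=root -dc="$DC" \
    -password="$ESX_PASSWORD" -noverify=true
done

# Step 5: create the distributed switch
$RUN govc dvs.create -dc="$DC" "$DVS"

# Step 6: create the three portgroups
for pg in management vm-network bridge; do
  $RUN govc dvs.portgroup.add -nports 12 -dc="$DC" -dvs="$DVS" "$pg"
done

# Steps 7-8: attach each host's second NIC as the DVS uplink
for h in $HOSTS; do
  $RUN govc dvs.add -dvs="$DVS" -pnic=vmnic1 "$h"
done
```

Run it once with the default dry run to review the generated commands, then re-run with `RUN=""` to apply them.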
Expected Outcome:
The VCH appliance should deploy without error and each of the docker commands executed against it should return without error
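The "variety of docker commands" in the final step can be a small smoke suite like the sketch below. VCH_IP is a placeholder, and port 2375 is an assumption based on the VCH being created with --generate-cert=false (no TLS); the loop echoes each command rather than executing it, so remove the echo wrapper to run it for real.

```shell
#!/bin/bash
# Sketch: docker smoke tests against the deployed VCH endpoint.
# VCH_IP is a placeholder; a TLS-enabled VCH would use port 2376 instead.
VCH_IP=${VCH_IP:-192.0.2.10}
export DOCKER_HOST=tcp://$VCH_IP:2375

SMOKE_TESTS=(
  "docker info"
  "docker pull busybox"
  "docker run busybox /bin/date"
  "docker ps -a"
  "docker network ls"
)

for t in "${SMOKE_TESTS[@]}"; do
  echo "would run: $t"   # drop the echo wrapper to execute for real
done
```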
Possible Problems:
- When you add an ESXi host to the vCenter, its datastore may be renamed from datastore1 to datastore1 (n) to avoid a name collision with other hosts' datastores
- govc requires an actual password so you need to change the default ESXi password before Step 4
- govc doesn't seem to be able to force a host NIC over to the new distributed switch, so you need to create the ESXi hosts with 2 NICs in order to use the second NIC for the distributed switch
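For the second problem above, the password change can be scripted per host. This is a sketch under assumptions: govc's host.account.update subcommand is assumed to be available in your govc build (verify against your version), the addresses and password are placeholders, and the RUN guard keeps it a dry run by default.

```shell
#!/bin/bash
# Sketch for Possible Problem 2: give each fresh ESXi host a real root
# password before Step 4 (govc host.add). host.account.update is an
# assumed govc subcommand; HOSTS and NEW_PW are placeholders.
RUN=${RUN:-echo}
HOSTS="192.0.2.11 192.0.2.12 192.0.2.13"
NEW_PW='Admin!23'

for h in $HOSTS; do
  # Point govc directly at the ESXi host, not at vCenter
  $RUN env GOVC_URL="https://root@$h/sdk" GOVC_INSECURE=1 \
    govc host.account.update -id root -password "$NEW_PW"
done
```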