Vagrant Dev Box

Overview

This box is an Ubuntu 16.04 VM with the following setup by default:

  • Docker daemon with port forwarded to the Fusion/Workstation host at localhost:12375

  • Go toolchain

  • Additional tools (lsof, strace, etc.)

Requirements

Provisioning

All files matching provision*.sh in this directory will be applied by the Vagrantfile; you can symlink custom scripts if needed. The scripts are not Vagrant-specific and can also be applied to a VM running on ESX, for example.
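The selection amounts to a shell glob over the directory. A minimal sketch of that behavior (provision-base.sh and provision-custom.sh are hypothetical script names, and /tmp/devbox-demo is just a scratch directory):

```shell
# Mimic the Vagrantfile's selection: every provision*.sh in the directory
# is picked up in glob (alphabetical) order, including symlinked scripts.
mkdir -p /tmp/devbox-demo
cd /tmp/devbox-demo
touch provision-base.sh provision-custom.sh
for script in provision*.sh; do
  echo "would apply: $script"
done
```

So dropping (or symlinking) an extra provision-*.sh file into the directory is enough for it to be applied on the next provision run.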

Fusion/Workstation host usage

The following commands can be used from your Fusion or Workstation host.

Shared Folders

By default your GOPATH is shared with the guest at the same path as on the host. This is useful when your editor runs on the host: errors reported on the guest with filename:line info then refer to the same paths. For example, run the following from the top-level project directory:

vagrant ssh -- make -C $PWD all

Create the VM

vagrant up

SSH Access

vagrant ssh

Docker Access

DOCKER_HOST=localhost:12375 docker ps
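The form above sets DOCKER_HOST for a single command. A sketch of exporting it for the whole session instead; the port must match the 12375 forward set up by the Vagrantfile:

```shell
# Export once so every subsequent docker command in this shell targets the
# devbox daemon on the forwarded port.
export DOCKER_HOST=localhost:12375
echo "docker will connect to: $DOCKER_HOST"
```

With the variable exported, a plain docker ps behaves like the one-off form above.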

Stop the VM

vagrant halt

Restart the VM

vagrant reload

Provision

After you've done a vagrant up, the provisioning can be applied without reloading via:

vagrant provision

Delete the VM

vagrant destroy

VM guest usage

To open a bash term in the VM, use vagrant ssh.

The following commands can be used from the devbox VM guest.

cd $GOPATH/src/github.com/vmware/vic

Local Drone CI test

drone exec

Devbox on ESX

The devbox can be deployed to ESX; the same provisioning scripts are applied:

./deploy-esx.sh

SSH access

ssh-add ~/.vagrant.d/insecure_private_key
vmip=$(govc vm.ip $USER-ubuntu-1604)
ssh vagrant@$vmip

Shared folders

You can share your folder by first exporting it via NFS from the host (the nfsd and ipconfig getifaddr commands below assume a macOS host):

echo "$HOME/vic $(govc vm.ip $USER-ubuntu-1604) -alldirs -mapall=$(id -u):$(id -g)" | sudo tee -a /etc/exports
sudo nfsd restart

Then mount within the ubuntu VM:

ssh vagrant@$vmip sudo mkdir -p $HOME/vic
ssh vagrant@$vmip sudo mount $(ipconfig getifaddr en1):$HOME/vic $HOME/vic

Note that you may need a different enN device depending on the type of connection you have; use ifconfig to verify. Note also that the nfs-common package is not installed in the box by default, so install it in the guest before mounting.

You can also mount your folder within ESX:

govc datastore.create -type nfs -name nfsDatastore -remote-host $(ipconfig getifaddr en1) -remote-path $HOME/vic
esxip=$(govc host.info -json | jq -r '.HostSystems[].Config.Network.Vnic[] | select(.Device == "vmk0") | .Spec.Ip.IpAddress')
ssh root@$esxip mkdir -p $HOME
ssh root@$esxip ln -s /vmfs/volumes/nfsDatastore $HOME/vic

Add $esxip to /etc/exports and restart nfsd again.
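The ESX entry takes the same form as the VM export shown earlier. A sketch that just builds and prints the line (10.0.0.5 is a placeholder for the $esxip found via govc):

```shell
# Build the /etc/exports entry for the ESX host; same format as the
# VM export above. 10.0.0.5 stands in for $esxip.
esxip=10.0.0.5
entry="$HOME/vic $esxip -alldirs -mapall=$(id -u):$(id -g)"
echo "$entry"
```

Append the printed line with sudo tee -a /etc/exports and rerun sudo nfsd restart, as in the earlier example.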