virtual-kubelet/vendor/github.com/vmware/vic/isos/bootstrap/bootstrap.debug
Loc Nguyen 513cebe7b7 VMware vSphere Integrated Containers provider (#206)
* Add Virtual Kubelet provider for VIC

Initial virtual kubelet provider for VMware VIC.  This provider currently
handles creating and starting a pod VM via the VIC portlayer and persona
server; image store handling is also done through the VIC persona server.
The provider currently requires the feature/wolfpack branch of VIC.
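
For context, a virtual-kubelet provider is a Go type that implements the pod
lifecycle methods the kubelet shim calls.  A minimal, hypothetical sketch of
the create path follows; the PersonaClient interface and its method names are
invented stand-ins for the VIC persona/portlayer calls, and exact provider
signatures vary across virtual-kubelet versions.

    package vic

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // PersonaClient is a hypothetical stand-in for the calls the provider
    // makes against the VIC persona server / portlayer; the real API differs.
    type PersonaClient interface {
        CreateContainer(name, image string) (string, error)
        StartContainer(id string) error
    }

    // VicProvider sketches a virtual-kubelet provider backed by a VCH.
    type VicProvider struct {
        persona PersonaClient
    }

    // CreatePod creates and starts the pod VM's containers one by one.
    func (p *VicProvider) CreatePod(pod *v1.Pod) error {
        for _, c := range pod.Spec.Containers {
            id, err := p.persona.CreateContainer(c.Name, c.Image)
            if err != nil {
                return fmt.Errorf("create %q: %v", c.Name, err)
            }
            if err := p.persona.StartContainer(id); err != nil {
                return fmt.Errorf("start %q: %v", c.Name, err)
            }
        }
        return nil
    }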

* Added pod stop and delete.  Also added node capacity.

Added the ability to stop and delete pod VMs via VIC, and to retrieve
node capacity information from the VCH (Virtual Container Host).
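
Node capacity in virtual-kubelet is surfaced as a v1.ResourceList.  A minimal
sketch, assuming the VCH limits have already been queried into plain fields;
the vchInfo struct below is hypothetical.

    package vic

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // vchInfo is a hypothetical holder for limits queried from the VCH.
    type vchInfo struct {
        CPUs     int64 // vCPUs available to the VCH
        MemBytes int64 // memory available to the VCH, in bytes
        MaxPods  int64 // arbitrary cap on pod VMs
    }

    // Capacity reports the VCH resources as Kubernetes node capacity.
    func (v vchInfo) Capacity() v1.ResourceList {
        return v1.ResourceList{
            v1.ResourceCPU:    *resource.NewQuantity(v.CPUs, resource.DecimalSI),
            v1.ResourceMemory: *resource.NewQuantity(v.MemBytes, resource.BinarySI),
            v1.ResourcePods:   *resource.NewQuantity(v.MaxPods, resource.DecimalSI),
        }
    }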

* Cleanup and readme file

Some file cleanup, plus a new Readme.md markdown file for the VIC
provider.

* Cleaned up errors, added function comments, moved operation code

1. Cleaned up error handling.  Set a standard for creating errors (see the sketch after this list).
2. Added method prototype comments for all interface functions.
3. Moved PodCreator, PodStarter, PodStopper, and PodDeleter to a new folder.
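
As a purely illustrative example of such a standard (not necessarily the exact
convention adopted in this PR), all provider errors can be built in one place
so that each names the failed operation and wraps the cause:

    package vic

    import "fmt"

    // providerError is a hypothetical helper showing one way to standardize
    // error creation: a fixed prefix, the failed operation, and the cause.
    func providerError(op string, cause error) error {
        return fmt.Errorf("vic provider: %s: %v", op, cause)
    }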

* Add mocking code and unit tests for podcache, podcreator, and podstarter

Used the same unit test framework as VIC to handle assertions in the provider's
unit tests.  Mocking code was generated with the OSS project mockery, which is
compatible with the testify assertion framework.
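
For readers unfamiliar with that combination: mockery generates mock types
that embed testify's mock.Mock, so call expectations and assertions share one
framework.  Below is a hand-written sketch of the same pattern, against a
simplified, hypothetical PodCreator interface.

    package vic

    import (
        "testing"

        "github.com/stretchr/testify/assert"
        "github.com/stretchr/testify/mock"
    )

    // MockPodCreator mimics what mockery would generate for a simplified
    // PodCreator interface: it embeds mock.Mock and records calls.
    type MockPodCreator struct {
        mock.Mock
    }

    func (m *MockPodCreator) CreatePod(name string) error {
        args := m.Called(name)
        return args.Error(0)
    }

    func TestCreatePod(t *testing.T) {
        m := new(MockPodCreator)
        m.On("CreatePod", "nginx").Return(nil)

        assert.NoError(t, m.CreatePod("nginx"))
        m.AssertExpectations(t)
    }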

* Vendored packages for the VIC provider

Requires the feature/wolfpack branch of VIC and a few specific commit SHAs of
projects used within VIC.

* Implementation of Pod Stopper and Deleter unit tests (#4)

* Updated files for initial PR
2018-06-04 15:41:32 -07:00


#!/bin/bash
MOUNTPOINT="/mnt/containerfs"
mkdir -p ${MOUNTPOINT}
# see if we should bail to the bootstrap or pivot into the container
# do this before the fork so we don't have a backdoor call in the hot path
# NOTE: this is moved after the fork during debugging so we can choose on a per-VM basis
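# rpctool queries the value from the VM's guestinfo, i.e. it is set from the
# vSphere side; "true" requests a debug shell in place of the tether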
SHELL=$(/sbin/rpctool -get bootstrap-shell 2>/dev/null)
echo "Waiting for rootfs"
while [ ! -e /dev/disk/by-label/containerfs ]; do :; done
if mount -t ext4 /dev/disk/by-label/containerfs ${MOUNTPOINT}; then
    # make the required directory structure, but presume that something in the daemon
    # has done the *right* thing for /.tether* and created them where it won't show in a diff
    # we do this to ensure that subsequent commands don't fail if the daemon hasn't prepped
    # the structure
    mkdir -p ${MOUNTPOINT}/.tether ${MOUNTPOINT}/.tether-init

    # ensure that no matter what we have access to required devices
    # WARNING WARNING WARNING WARNING WARNING
    # if the tmpfs is not large enough odd hangs can occur and the ESX event log will
    # report the guest disabling the CPU
    mount -t tmpfs -o size=128m tmpfs ${MOUNTPOINT}/.tether/

    # if we don't have a populated init layer, pull from guestinfo
    if [ ! -f ${MOUNTPOINT}/.tether-init/docker-id ]; then
        mount -t tmpfs -o size=1m tmpfs ${MOUNTPOINT}/.tether-init/

        # create the assumed structure
        # TODO: this cannot be in guest and still not show up in diffs
        mkdir -p ${MOUNTPOINT}/dev ${MOUNTPOINT}/proc ${MOUNTPOINT}/sys ${MOUNTPOINT}/etc
        # ln -sf /proc/mounts ${MOUNTPOINT}/etc/mtab
        touch ${MOUNTPOINT}/etc/hostname
        touch ${MOUNTPOINT}/etc/hosts
        touch ${MOUNTPOINT}/etc/resolv.conf
    fi
    # this is so we're not exposing the raw container disk if we wouldn't be otherwise
    # rm -f /mnt/.tether/volumes/containerfs

    # enable full system functionality in the container
    echo "Publishing modules within container"
    mkdir -p ${MOUNTPOINT}/lib/modules
    mount --bind /lib/modules ${MOUNTPOINT}/lib/modules

    # switch to the new root
    echo "prepping for switch to container filesystem"
    cp /bin/tether ${MOUNTPOINT}/.tether/tether-debug
echo "switching to the new mount"
if [ "$SHELL" != "true" ]; then
systemctl switch-root ${MOUNTPOINT} /.tether/tether-debug 2>&1
else
systemctl switch-root ${MOUNTPOINT} /bin/sh 2>&1
# fail back to shell in bootstrap image without switch_root
/bin/ash
fi
else
    # TODO: what do we do here? we really need to somehow report an error
    # fail hard
    echo "Unable to mount container filesystem"
fi
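
For completeness: the bootstrap-shell value read by rpctool above lives in the
VM's guestinfo, so it can be flipped from the host side.  A hedged govmomi
sketch follows; the exact key name guestinfo.bootstrap-shell is an unverified
assumption made for illustration.

    package main

    import (
        "context"

        "github.com/vmware/govmomi/object"
        "github.com/vmware/govmomi/vim25/types"
    )

    // enableBootstrapShell sets a guestinfo key on the pod VM so that the
    // bootstrap script drops to a shell instead of exec'ing the tether.
    // The key name here is an assumption for this sketch.
    func enableBootstrapShell(ctx context.Context, vm *object.VirtualMachine) error {
        task, err := vm.Reconfigure(ctx, types.VirtualMachineConfigSpec{
            ExtraConfig: []types.BaseOptionValue{
                &types.OptionValue{Key: "guestinfo.bootstrap-shell", Value: "true"},
            },
        })
        if err != nil {
            return err
        }
        return task.Wait(ctx)
    }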