VMware vSphere Integrated Containers provider (#206)

* Add Virtual Kubelet provider for VIC

Initial Virtual Kubelet provider for VMware VIC. This provider currently
handles creating and starting a pod VM via the VIC portlayer and persona
server, with image store handling via the VIC persona server. This provider
currently requires the feature/wolfpack branch of VIC.

* Added pod stop and delete.  Also added node capacity.

Added the ability to stop and delete pod VMs via VIC, and to retrieve
node capacity information from the VCH.

* Cleanup and readme file

Some file cleanup, plus a new Readme.md markdown file for the VIC
provider.

* Cleaned up errors, added function comments, moved operation code

1. Cleaned up error handling and set a standard for creating errors.
2. Added method prototype comments for all interface functions.
3. Moved PodCreator, PodStarter, PodStopper, and PodDeleter to a new folder.

* Add mocking code and unit tests for podcache, podcreator, and podstarter

The provider's unit tests use the same unit test framework as VIC for
assertions. Mocking code was generated with the OSS project mockery, which is
compatible with the testify assertion framework.
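
For reference, a hypothetical mockery invocation for one of the provider interfaces
(the directory layout shown is an assumption):

```
$> go get github.com/vektra/mockery/...
$> mockery -name PodCreator -dir providers/vic/operations -output providers/vic/mocks
```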

* Vendored packages for the VIC provider

Requires the feature/wolfpack branch of VIC and a few specific commit SHAs of
projects used within VIC.

* Implementation of Pod Stopper and Deleter unit tests (#4)

* Updated files for initial PR
Loc Nguyen
2018-06-04 15:41:32 -07:00
committed by Ria Bhatia
parent 98a111e8b7
commit 513cebe7b7
6296 changed files with 1123685 additions and 8 deletions


@@ -0,0 +1,57 @@
# Container Workflow
The following are guideline suggestions for those who want to use VIC to develop and deploy a containerized application. These guidelines pertain to VIC 0.7.0. While VIC continues to progress towards VIC 1.0, the current feature set requires some care in creating a containerized application and deploying it. We present these guidelines from a developer and a devops perspective.
An example workflow is presented here, in the form of a modified voting app, based on [Docker's voting app](https://github.com/docker/example-voting-app).
## Container Workflow Feature Set
The general feature set (e.g. docker run, ps, inspect) used at the CLI will not be discussed; only the features important for containerizing apps and deploying them are covered. These include volume and network support. It is also worth mentioning that basic Docker Compose support is available for application deployment.
#### Currently Available Features
1. Docker Compose (basic)
2. Registry pulls from Docker Hub and private registries
3. Named Data Volumes
4. Anonymous Data Volumes
5. Bridged Networks
6. External Networks
7. Port Mapping
8. Network Links/Alias
#### Future Features
Be aware that the following features are not yet available and must be taken into account when containerizing an app and deploying it.
1. Docker build
2. Registry pushing
3. Concurrent data volume sharing between containers
4. Local host folder mapping to a container volume
5. Local host file mapping to a container
6. Docker copy files into a container, both running and stopped
7. Docker container inspect does not return all container networks for a container
## Workflow Guidelines
Anything that can be performed with Docker Compose can also be performed manually via the Docker CLI, or scripted using the CLI. This makes Compose a good baseline reference, and our guidelines will use it for demonstration purposes. These guidelines use Docker Compose 1.8.1. The Future Features list above puts constraints on what types of containerized applications can be deployed on VIC 0.7.0.
Please note, these guidelines and recommendations exist for the current feature set in VIC 0.7.0. As VIC approaches 1.0, many of these constraints will go away.
#### Guidelines for Building Container Images
The current lack of docker build and registry push support means users will need to use regular Docker to build a container image and push it to the global hub or a corporate private registry. The example workflow using Docker's voting app will illustrate how to work around this constraint.
#### Guidelines for Sharing Config
VIC 0.7.0's current lack of data volume sharing and docker copy puts constraints on how configuration is provided to a containerized application. An example of configuration is your web server config files. Our recommendation for working around the current limitation is to pass in configuration via command line arguments or environment variables. Add a script to the container image that ingests the command line arguments/environment variables and passes this configuration to the contained application. A benefit of using environment variables to transfer configuration is that the containerized app will more closely follow the popular 12-factor app model.
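For example, a minimal sketch of this pattern, assuming a hypothetical NGINX_WORKERS environment variable that an entrypoint script baked into the image translates into web server config:
```
#!/bin/bash
# entrypoint.sh (hypothetical) - ingest an environment variable and
# apply it to the app's config before starting the app in the foreground.
: "${NGINX_WORKERS:=2}"   # fall back to a default when the variable is not set
sed -i "s/^worker_processes .*/worker_processes ${NGINX_WORKERS};/" /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'
```
The container would then be started with something like `docker run -d -e NGINX_WORKERS=4 my-web-image`, where my-web-image is a placeholder.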
With no direct support for sharing volumes between containers, processes that must share files have the following options:
1. build them into the same image and run in the same container
2. add a script to the container that mounts an NFS share (containers must be on the same network)
a. Run container with NFS server sharing a data volume
b. Mount NFS share in whichever containers need to share
TODO: Provide example of both
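In the meantime, here is a rough sketch of option 2 (hypothetical only: "my-nfs-server" and "my-app" are placeholder images, and NFS mount support inside a containerVM should be verified for your VIC version):
```
$> docker network create share-net
$> docker volume create --name shared-data
$> docker run -d --net share-net --name nfs-server -v shared-data:/exports my-nfs-server
$> docker run -d --net share-net my-app sh -c \
     "mkdir -p /mnt/shared && mount -t nfs nfs-server:/exports /mnt/shared && /app/start.sh"
```
The first container runs an NFS server sharing a data volume; each consumer container mounts the export at startup, which requires both containers to be on the same network.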
## Example Applications
We have taken Docker's voting app example and used the above guidelines to modify it for use on VIC 0.7.0. Please refer to this [page](voting_app.md) for more information.


@@ -0,0 +1,14 @@
# Using vSphere Integrated Container Engine with VMware's Harbor
In this example, we will install VMware's Harbor registry and show how to get vSphere Integrated Container Engine (VIC Engine) 0.8.0 working with Harbor. With 0.8.0, the engine does not have an install-time mechanism to set up a self-signed certificate, so we will show the manual steps for post-install setup as a workaround. We will not show how to set up Harbor with LDAP; for that, the reader may visit the [Harbor documentation](https://github.com/vmware/harbor/tree/master/docs) site for more information. Since there is a lot of documentation on the Harbor site for various setups, we will focus on setting up Harbor with a self-signed certificate and setting up VIC Engine to work with this Harbor instance.
## Prerequisite
The following example requires a vCenter installation.
Note: Certificate verification requires that all machines using certificates be time/date accurate. This can be achieved using several options, such as the vSphere web client, the vSphere thick client for Windows, or govc. In the following, we deploy this example on a vCenter where all ESXi hosts in the cluster have been set up with NTP and were synced prior to installing VIC Engine or Harbor.
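For example, a hedged govc sketch for checking and setting NTP on a host (the inventory path, credentials, and NTP server are placeholders; verify the flags against your govc version):
```
$> export GOVC_URL='https://user:password@<vCenter IP>' GOVC_INSECURE=1
$> govc host.date.info -host /dc1/host/cluster1/<ESXi host>
$> govc host.date.change -host /dc1/host/cluster1/<ESXi host> -server pool.ntp.org
```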
## Workflows
1. [Deploy a VCH to use with Harbor](deploy_vch_with_harbor.md)
2. [Post-Install Usage](post_install_usage.md)


@@ -0,0 +1,76 @@
# Deploying vSphere Integrated Container Engine with Harbor
## Prerequisite
Harbor requires 60GB or more free space on your datastore.
## Workflow
We will use VIC Engine 0.8.0 and Harbor 0.5.0 for this example, with Ubuntu as the OS on our user machine.
If no server certificate and private key are provided during installation, Harbor will generate these itself, along with a self-generated CA (certificate authority) certificate. The OVA installation guide for Harbor can be found in the [Harbor docs](https://github.com/vmware/harbor/blob/master/docs/installation_guide_ova.md). Harbor requires both an IP address and an FQDN (fully qualified domain name) for the server. There is also a DHCP install method available for debugging purposes, but it is not a recommended production deployment model.
We will assume a Harbor instance has been installed without a server certificate and private key, and that we have downloaded the CA cert using the Harbor instructions. The last steps to get Harbor working with vSphere Integrated Container Engine are to update standard docker with the Harbor CA cert and to deploy a new VCH with the CA cert. The instructions are provided below.
<br><br>
## Update the user working machine with the CA.crt for standard docker
We must update the standard docker on our laptop so it knows of our CA certificate. Docker can look for additional CA certificates outside of the OS's CA bundle folder if we put new CA certificates in the right location, documented [here](https://docs.docker.com/engine/security/certificates/).
We create the necessary folder, copy our CA cert file there, and restart docker. This should be all that is necessary. We take the additional steps to verify that we can log onto our Harbor server.
```
loc@Devbox:~/mycerts$ sudo su
[sudo] password for loc:
root@Devbox:/home/loc/mycerts# mkdir -p /etc/docker/certs.d/<Harbor FQDN>
root@Devbox:/home/loc/mycerts# mkdir -p /etc/docker/certs.d/<Harbor IP>
root@Devbox:/home/loc/mycerts# cp ca.crt /etc/docker/certs.d/<Harbor FQDN>/
root@Devbox:/home/loc/mycerts# cp ca.crt /etc/docker/certs.d/<Harbor IP>/
root@Devbox:/home/loc/mycerts# exit
exit
loc@Devbox:~/mycerts$ sudo systemctl daemon-reload
loc@Devbox:~/mycerts$ sudo systemctl restart docker
loc@Devbox:~$ docker logout <Harbor FQDN>
Remove login credentials for <Harbor FQDN>
loc@Devbox:~$ docker logout <Harbor IP>
Remove login credentials for <Harbor IP>
loc@Devbox:~$ docker login <Harbor FQDN>
Username: loc
Password:
Login Succeeded
loc@Devbox:~$ docker login <Harbor IP>
Username: loc
Password:
Login Succeeded
loc@Devbox:~$ docker logout <Harbor FQDN>
Remove login credentials for <Harbor FQDN>
loc@Devbox:~$ docker logout <Harbor IP>
Remove login credentials for <Harbor IP>
```
Notice we create folders for both the FQDN and the IP in the docker cert folder and copy the CA cert to both. This allows us to log in to Harbor from Docker using either the FQDN or the IP address.
<br><br>
## Install a VCH with the new CA certificate
In this step, we deploy a VCH and specify our CA cert via the --registry-ca parameter of vic-machine. This parameter is a list, meaning we can add multiple CA certs by specifying the --registry-ca parameter multiple times.
For simplicity, we will install a VCH with the --no-tls flag. This means we will not use TLS from the docker CLI to the VCH. It does NOT imply that access to Harbor will be performed without TLS.
```
root@Devbox:/home/loc/go/src/github.com/vmware/vic/bin# ./vic-machine-linux create --target=<vCenter_IP> --image-store="vsanDatastore" --name=vic-docker --user=root -password=<vCenter_password> --compute-resource="/dc1/host/cluster1/Resources" --bridge-network DPortGroup --force --no-tls --registry-ca=ca.crt
WARN[2016-11-11T11:46:37-08:00] Configuring without TLS - all communications will be insecure
...
INFO[2016-11-11T11:47:57-08:00] Installer completed successfully
```
<br>
Proceed to [Post-Install Usage](post_install_usage.md) for examples of how to use this deployed VCH with Harbor.


@@ -0,0 +1,72 @@
# Using vSphere Integrated Container Engine with Harbor
Here we show an example of using a deployed VCH with Harbor as a private registry. We assume that one has been set up using either a static IP or an FQDN. We also assume standard docker has been updated with the certificate authority cert that can verify the deployed Harbor's server cert.
<br><br>
## Workflow
1. Develop or obtain a docker container image on a computer (or terminal) using standard docker. Tag the image for Harbor and push the image to the server.
2. Pull down the image from Harbor to a deployed VCH and use it.
<br><br>
## Push a container image to Harbor using standard docker
In this step, we pull the busybox container image from Docker Hub down to our laptop, which had its CA certificates updated for docker use earlier. Then we tag the image for our Harbor registry and push it up. Please note, we log in to the Harbor server before pushing the image up to it.
```
loc@Devbox:~/mycerts$ docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
loc@Devbox:~/mycerts$
loc@Devbox:~/mycerts$ docker tag busybox <Harbor FQDN or static IP>/test/busybox
loc@Devbox:~/mycerts$ docker login <Harbor FQDN or static IP>
Username: loc
Password:
Login Succeeded
loc@Devbox:~/mycerts$ docker push <Harbor FQDN or static IP>/test/busybox
The push refers to a repository [<Harbor FQDN or static IP>/test/busybox]
e88b3f82283b: Pushed
latest: digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 size: 527
```
## Pull the container image down to the VCH
Now, in another terminal, we can pull the image from Harbor to our VCH.
```
loc@Devbox:~$ export DOCKER_HOST=tcp://<Deployed VCH IP>:2375
loc@Devbox:~$ export DOCKER_API_VERSION=1.23
loc@Devbox:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
loc@Devbox:~$ docker pull <Harbor FQDN or static IP>/test/busybox
Using default tag: latest
Pulling from test/busybox
Error: image test/busybox not found
loc@Devbox:~$ docker login <Harbor FQDN or static IP>
Username: loc
Password:
Login Succeeded
loc@Devbox:~$ docker pull <Harbor FQDN or static IP>/test/busybox
Using default tag: latest
Pulling from test/busybox
56bec22e3559: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:97af7f861fb557c1eaafb721946af5c7aefaedd51f78d38fa1828d7ccaae4141
Status: Downloaded newer image for test/busybox:latest
loc@Devbox:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<Harbor FQDN or static IP>/test/busybox latest e292aa76ad3b 5 weeks ago 1.093 MB
loc@Devbox:~$
```
Note above that our first attempt to pull the image down failed with a 'not found' error message. Once we log in to the Harbor server, the pull succeeds.

Binary file not shown.



@@ -0,0 +1,102 @@
# Running a private registry with VIC
In this example, we will run a private Docker registry on VIC and push and pull images using VIC. VMware also offers an enterprise-ready registry named [Harbor](https://github.com/vmware/harbor) that can be used in place of the base Docker registry.
## Prerequisite
Before going through this example, we need to re-emphasize some concepts around Docker and VIC. The following examples are shown using Linux; for Windows and Mac users, these examples should not differ much.
Installing VIC also requires installing Docker locally. When Docker is installed, we get both a client (the CLI, or command line interface) and a daemon that handles all local container operations. Local containers are those that run on the user's local machine instead of a VMware vSphere/ESXi environment. The CLI is important, as it will be most users' touchpoint for working with containers on VIC and on their local system. The distinction between using the CLI against the two environments is very important in this example. By default, the CLI uses the local Docker daemon. After setting some environment variables, the CLI can be instructed to send all operations to VIC instead of the local Docker daemon. The two environment variables are DOCKER_HOST and DOCKER_API_VERSION.
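As a quick sketch of switching the same CLI between the two environments (the VCH address and port are placeholders):
```
$> export DOCKER_HOST=tcp://<VCH_IP>:2375   # the CLI now targets the VCH
$> export DOCKER_API_VERSION=1.23
$> docker info                              # answered by the VCH, not the local daemon
$> unset DOCKER_HOST DOCKER_API_VERSION     # back to the local Docker daemon
```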
In this example, we are deploying an insecure registry with no authentication for simplicity. We will also be targeting an ESXi environment.
## Workflow
In terminal #1: (local Docker)
1. Open a terminal and make sure that it will use the local Docker daemon. At a command prompt, issue
```
$> unset DOCKER_HOST
```
2. Install a VCH for a private registry using vic-machine
3. Run [Docker's registry](https://docs.docker.com/registry/) on the first VCH
4. Install a second VCH for running applications, making sure to specify --insecure-registry so this second VCH can pull images from the insecure registry in the first VCH.
5. At a terminal command prompt, using regular Docker, tag the images destined for the registry.
6. Modify the docker systemd config file to allow pushing to an insecure registry
7. Restart the docker daemon
8. Push the image using the full tagged name (including host IP and port)
In terminal #2: (VIC VCH)
1. Open a terminal and make sure it is using the second VCH. At a command prompt, issue
```
$> export DOCKER_HOST=tcp://<VCH_IP>:<VCH_PORT>
$> export DOCKER_API_VERSION=1.23
```
2. Pull the image from the registry VCH
### Example run
terminal 1:
```
$> unset DOCKER_HOST
$> ./vic-machine-linux create --target=192.168.218.207 --image-store=datastore1 --name=vic-registry --user=root --password=vagrant --compute-resource="/ha-datacenter/host/esxbox.localdomain/Resources" --bridge-network=vic-network --no-tls --volume-store=datastore1/registry:default --force
...
INFO[2016-10-08T17:31:06-07:00] Initialization of appliance successful
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00] vic-admin portal:
INFO[2016-10-08T17:31:06-07:00] http://192.168.218.138:2378
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00] Docker environment variables:
INFO[2016-10-08T17:31:06-07:00] DOCKER_HOST=192.168.218.138:2375
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00] Connect to docker:
INFO[2016-10-08T17:31:06-07:00] docker -H 192.168.218.138:2375 info
INFO[2016-10-08T17:31:06-07:00] Installer completed successfully
$> DOCKER_HOST=tcp://192.168.218.138:2375 DOCKER_API_VERSION=1.23 docker run -d -p 5000:5000 --name registry registry:2
$> ./vic-machine-linux create --target=192.168.218.207 --image-store=datastore1 --name=vic-app --user=root --password=vagrant --compute-resource="/ha-datacenter/host/esxbox.localdomain/Resources" --bridge-network=vic-network --no-tls --volume-store=datastore1/vic-app:default --force --insecure-registry 192.168.218.138
...
INFO[2016-10-08T17:31:06-07:00] Initialization of appliance successful
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00] vic-admin portal:
INFO[2016-10-08T17:31:06-07:00] http://192.168.218.131:2378
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00] Docker environment variables:
INFO[2016-10-08T17:31:06-07:00] DOCKER_HOST=192.168.218.131:2375
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00]
INFO[2016-10-08T17:31:06-07:00] Connect to docker:
INFO[2016-10-08T17:31:06-07:00] docker -H 192.168.218.131:2375 info
INFO[2016-10-08T17:31:06-07:00] Installer completed successfully
$> sudo vi /lib/systemd/system/docker.service
$> sudo systemctl daemon-reload
$> sudo systemctl restart docker
$> docker tag busybox 192.168.218.138:5000/test/busybox
$> docker push 192.168.218.138:5000/test/busybox
```
terminal 2:
```
$> export DOCKER_HOST=tcp://192.168.218.131:2375
$> export DOCKER_API_VERSION=1.23
$> docker pull 192.168.218.138:5000/test/busybox
```
Note, in this example we disabled TLS for simplicity. Also, we did not show what was modified in /lib/systemd/system/docker.service; that is shown below for the example above.
```
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --tls=false -H fd:// --insecure-registry 192.168.218.138:5000
```
In the second step, we specify the necessary environment variables on the same command line as docker run. On our Linux machine, this sets the variables for the duration of that command only; once docker run finishes, the shell's environment is unchanged. We use the registry:2 image. It is important not to specify registry:2.0 in this example, as Registry 2.0 has issues that prevent the example above from working.


@@ -0,0 +1,106 @@
# Voting App on VIC
The [voting app](https://github.com/docker/example-voting-app) is one of Docker's example multi-tiered apps. We will neither discuss the design of the app nor explain how it works. Instead, we will focus on its Docker Compose yml file and use the [guidelines](README.md) to modify that yml file to make it work on VIC 0.7.0. You can find the modified compose yml file in the folder [vic/demos/compose/voting-app/](../../demos/compose/voting-app). We have only included the modified yml file, as the workflow below uses the original source from github.
## Workflow
### Original Compose File
```
version: "2"
services:
vote:
build: ./vote
command: python app.py
volumes:
- ./vote:/app
ports:
- "5000:80"
redis:
image: redis:alpine
ports: ["6379"]
worker:
build: ./worker
db:
image: postgres:9.4
result:
build: ./result
command: nodemon --debug server.js
volumes:
- ./result:/app
ports:
- "5001:80"
- "5858:5858"
```
We see the above compose file uses two features that are not yet supported in VIC 0.7.0. The first is docker build. The second is local folder mapping to a container volume. Let's walk through modifying this app and deploying it onto a vSphere environment.
### Getting the App Prepared
First, clone the repository from github. Note, we have included the modified compose file in our /demos folder, but in this exercise we are going to modify the app from the github sources.
Second, to get around the docker build directive, we follow the previously mentioned guidelines and use regular docker to build each component that requires a build. Then we tag the images for upload to our private registry (or a private account on Docker Hub). In this example, we are going to use VMware's victest account on Docker Hub. You will not be able to use this account, but you can create your own and use it in place of the victest keyword below. Please note, the steps shown below are performed in a terminal using regular docker (as opposed to VIC's docker personality daemon). Note, it is possible to build and tag an image in one step, as sketched just below; for clarity, the steps afterwards are broken out separately.
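For instance, a combined build-and-tag for the vote service might look like this (substitute your own account for victest):
```
$> docker build -t victest/vote ./vote
```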
**build the images:**
```
$> cd example-voting-app
$> docker build -t vote ./vote
$> docker build -t vote-worker ./worker
$> docker build -t vote-result ./result
```
**tag the images for a registry:**
```
$> docker tag vote victest/vote
$> docker tag vote-worker victest/vote-worker
$> docker tag vote-result victest/vote-result
```
**push the images to the registry:**
```
$> docker login (... and provide credentials)
$> docker push victest/vote
$> docker push victest/vote-worker
$> docker push victest/vote-result
```
Next, we analyze the application. There doesn't appear to be a real need to map the local folder to a container volume, so we remove the local folder mapping. We also remove all the build directives from the yml file.
### Updated Compose File for VIC 0.7.0
```
version: "2"
services:
vote:
image: victest/vote
command: python app.py
ports:
- "5000:80"
redis:
image: redis:alpine
ports: ["6379"]
worker:
image: victest/vote-worker
db:
image: postgres:9.4
result:
image: victest/vote-result
command: nodemon --debug server.js
ports:
- "5001:80"
- "5858:5858"
```
### Deploy to Your VCH
We assume a VCH has already been deployed with vic-machine and that VCH_IP is the IP address of the deployed VCH. This IP is presented after the VCH is successfully installed. We also assume we are still in the example-voting-app folder, with the modified compose yml file.
```
$> docker-compose -H VCH_IP up -d
```
Now, use your web browser to navigate to "http://VCH_IP:5000" and "http://VCH_IP:5001" to verify the voting app is running.
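If you prefer a quick command-line check instead of the browser, something like the following should return HTTP 200 from each tier (the curl flags shown are standard):
```
$> curl -s -o /dev/null -w "%{http_code}\n" http://VCH_IP:5000
$> curl -s -o /dev/null -w "%{http_code}\n" http://VCH_IP:5001
```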
That's really all there is to deploying **this** app. It is a contrived app, and more complex containerized apps may require more steps before they will run on VIC 0.7.0.


@@ -0,0 +1,136 @@
# Web-serving on vSphere Integrated Containers Engine
We take the Web-serving benchmark from CloudSuite (http://cloudsuite.ch/webserving/) as an example to demonstrate how customers interested in the LEMP implementation of a web-serving application could deploy it on vSphere Integrated Containers Engine 0.7.0 using Docker Compose. This demo has three tiers deployed on three containerVMs: an Nginx Web server, a Memcached server, and a MySQL database server. The Web server runs Elgg (a social networking engine) and connects to the Memcached server and the database server through the network.
## Workflow
### Build docker image for the Web server (on regular docker)
Note, in the original web-server docker image from CloudSuite, email verification for new users is not enabled. This demo is here for illustration only. **You can also skip this section and proceed to "[Compose File for vSphere Integrated Containers Engine](#compose-file-for-vsphere-integrated-containers-engine)" if you do not want to build your own image**.
Step I:
Download the original installation files from https://github.com/ParsaLab/cloudsuite/tree/master/benchmarks/web-serving/web_server
Step II:
In the Dockerfile, add `RUN apt-get install -y sendmail` and `EXPOSE 25`
Step III:
Replace “bootstrap.sh” with the following:
```
#!/bin/bash
# Rebuild the 127.0.0.1 entry in /etc/hosts so the container resolves
# web_server and its own hostname locally (needed by sendmail and Elgg).
hname=$(hostname)
line=$(cat /etc/hosts | grep '127.0.0.1')
line2=" web_server web_server.localdomain"
sed -i "/\b\(127.0.0.1\)\b/d" /etc/hosts
echo "$line $line2 $hname" >> /etc/hosts
cat /etc/hosts
# Restart the services so they pick up the updated hosts file.
service sendmail stop
service sendmail start
service php5-fpm restart
service nginx restart
```
Step IV: (In this example, we will deploy the image to Docker Hub.)
- Build the image:
```
$> docker build -t repo/directory:tag .
```
- Log in to your registry: (we use the default Docker Hub in this example)
```
$> docker login (input your credentials when needed)
```
- Upload your image:
```
$> docker push repo/directory:tag
```
### Build docker image for the MySQL server (on regular docker)
This example application uses the database to store the address of the Web server. The original Dockerfile from CloudSuite populates this with "http://web_server:8080", which is not usable in production. Using the suggestions we provided earlier, we modify the execute.sh script to replace the "web_server" text with the actual IP of our VCH. This script is executed when the database container starts. In the Docker Compose file, we specify the IP address of our target VCH; you will see that in the modified compose yml file below.
This example illustrates passing config in via environment variables and having the script use those values to modify internal config in the running container. Another option is to use a script and command line arguments to pass config to a containerized app. Below, we will modify the Dockerfile and script. **You can also skip this section and proceed to "[Compose File for vSphere Integrated Containers Engine](#compose-file-for-vsphere-integrated-containers-engine)" if you do not want to build your own image**.
Step I:
Download the original installation files from https://github.com/ParsaLab/cloudsuite/tree/master/benchmarks/web-serving/db_server
Step II:
In the Dockerfile, comment out the following lines:
```
ENV web_host web_server
RUN sed -i -e"s/HOST_IP/${web_host}:8080/" /elgg_db.dump
CMD bash -c "/execute.sh ${root_password}"
```
Step III:
Replace “files/execute.sh” with the following:
```
#!/bin/bash
set -x
service mysql restart
# Wait for mysql to come up
while :; do mysql -uroot -p${root_password} -e "status" && break; sleep 1; done
mysql -uroot -p$root_password -e "create database ELGG_DB;"
bash -c 'sed -i -e"s/HOST_IP/${web_host}:8080/" /elgg_db.dump'
cat /elgg_db.dump | grep 8080
# Need bash -c for redirection
bash -c "mysql -uroot -p$root_password ELGG_DB < /elgg_db.dump"
mysql -uroot -p$root_password -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '$root_password' WITH GRANT OPTION; FLUSH PRIVILEGES;"
service mysql stop
/usr/sbin/mysqld
```
Step IV: Same as Step IV when creating the docker image for the Web server.
### Compose File for vSphere Integrated Containers Engine
```
version: '2'
networks:
my_net:
driver: bridge
services:
web_server:
image: victest/web_elgg
container_name: web_server
networks:
- my_net
ports:
- "8080:8080"
mysql_server:
image: victest/web_db
container_name: mysql_server
command: [bash, -c, "/execute.sh"]
networks:
- my_net
environment:
- web_host=192.168.60.130 # This is the VCH_IP
- root_password=root # Password for the root user
memcache_server:
image: cloudsuite/web-serving:memcached_server
container_name: memcache_server
networks:
- my_net
```
### Deploy to Your VCH
Once you have a VCH deployed by vic-machine, go to the folder containing the above “docker-compose.yml” file and execute the following command to start the Web-serving application:
```
$> docker-compose -H VCH_IP:VCH_PORT up -d
```
Here VCH_IP and VCH_PORT can be found in the standard output when you use “vic-machine create” to launch the VCH. Now we are ready to view the website. Open a browser and navigate to http://VCH_IP:8080, making sure to use the IP address of the VCH we deployed to as the IP of our Web server. You should be able to see the following page:
![Web serving demo](images/elgg.png)
You can log in as the admin user (username: admin; password: admin1234), or register as a new user with a valid email address (Gmail does not work). You can also create your own content, invite friends, or chat with others. Enjoy!