VMware vSphere Integrated Containers provider (#206)
* Add Virtual Kubelet provider for VIC

  Initial virtual kubelet provider for VMware VIC. This provider currently handles creating and starting a pod VM via the VIC portlayer and persona server, and image store handling via the VIC persona server. This provider currently requires the feature/wolfpack branch of VIC.

* Added pod stop and delete; also added node capacity

  Added the ability to stop and delete pod VMs via VIC, and to retrieve node capacity information from the VCH.

* Cleanup and readme file

  Some file cleanup, plus a Readme.md markdown file for the VIC provider.

* Cleaned up errors, added function comments, moved operation code

  1. Cleaned up error handling and set a standard for creating errors.
  2. Added method prototype comments for all interface functions.
  3. Moved PodCreator, PodStarter, PodStopper, and PodDeleter to a new folder.

* Add mocking code and unit tests for podcache, podcreator, and podstarter

  Used the unit test framework from VIC to handle assertions in the provider's unit tests. Mocking code was generated using the OSS project mockery, which is compatible with the testify assertion framework.

* Vendored packages for the VIC provider

  Requires the feature/wolfpack branch of VIC and a few specific commit SHAs of projects used within VIC.

* Implementation of pod Stopper and Deleter unit tests (#4)

* Updated files for initial PR
vendor/github.com/docker/libnetwork/docs/bridge.md (generated, vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
Bridge Driver
=============

The bridge driver is an implementation that uses Linux bridging and iptables to provide connectivity for containers.
It creates a single bridge, called `docker0` by default, and attaches a `veth` pair between the bridge and every endpoint.

## Configuration

The bridge driver supports configuration through the Docker daemon flags.

## Usage

This driver is supported for the default "bridge" network only; it cannot be used for any other network.
vendor/github.com/docker/libnetwork/docs/cnm-model.jpg (generated, vendored, new binary file, 27 KiB)
vendor/github.com/docker/libnetwork/docs/design.md (generated, vendored, new file, 159 lines)
@@ -0,0 +1,159 @@
Design
======

The vision and goals of libnetwork are highlighted in the [roadmap](../ROADMAP.md).
This document describes how libnetwork has been designed in order to achieve them.
Requirements for individual releases can be found on the [Project Page](https://github.com/docker/libnetwork/wiki).

Many of the design decisions are inspired by lessons learned from the Docker networking design as of Docker v1.6.
Please refer to the [Docker v1.6 Design](legacy.md) document for more information on networking design as of Docker v1.6.

## Goal

The libnetwork project follows the Docker and Linux philosophy of developing small, highly modular and composable tools that work well independently.
Libnetwork aims to satisfy that composable need for networking in containers.

## The Container Network Model

Libnetwork implements the Container Network Model (CNM), which formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support multiple network drivers. The CNM is built on 3 main components (shown below).

![The Container Network Model](cnm-model.jpg)

**Sandbox**

A Sandbox contains the configuration of a container's network stack.
This includes management of the container's interfaces, routing table and DNS settings.
An implementation of a Sandbox could be a Linux network namespace, a FreeBSD jail or another similar concept.
A Sandbox may contain *many* endpoints from *multiple* networks.

**Endpoint**

An Endpoint joins a Sandbox to a Network.
An implementation of an Endpoint could be a `veth` pair, an Open vSwitch internal port or similar.
An Endpoint can belong to *only one* network and may belong to *only one* Sandbox.

**Network**

A Network is a group of Endpoints that are able to communicate with each other directly.
An implementation of a Network could be a Linux bridge, a VLAN, etc.
Networks consist of *many* endpoints.

## CNM Objects

**NetworkController**
The `NetworkController` object provides the entry point into libnetwork. It exposes simple APIs for users (such as Docker Engine) to allocate and manage Networks. libnetwork supports multiple active drivers (both built-in and remote), and `NetworkController` allows the user to bind a particular driver to a given network.

**Driver**
A `Driver` is not a user-visible object; drivers provide the actual implementation that makes networks work. `NetworkController` provides an API to configure a specific driver with driver-specific options/labels that are transparent to libnetwork but can be handled by the drivers directly. Drivers can be both built-in (such as Bridge, Host, None and Overlay) and remote (from plugin providers) to satisfy various use cases and deployment scenarios. At this point, the Driver owns a network and is responsible for managing it (including IPAM, etc.). This can be improved in the future by having multiple drivers participate in handling various network management functionalities.

**Network**
The `Network` object is an implementation of the `CNM : Network` defined above. `NetworkController` provides APIs to create and manage `Network` objects. Whenever a `Network` is created or updated, the corresponding `Driver` is notified of the event. Libnetwork treats the `Network` object at an abstract level: it provides connectivity between a group of endpoints that belong to the same network and isolates them from the rest. The Driver performs the actual work of providing the required connectivity and isolation. The connectivity can be within the same host or across multiple hosts; hence a `Network` has a global scope within a cluster.

**Endpoint**
An `Endpoint` represents a service endpoint. It provides connectivity between services exposed by a container in a network and other services provided by other containers in the network. The `Network` object provides APIs to create and manage endpoints. An endpoint can be attached to only one network. `Endpoint` creation calls are made to the corresponding `Driver`, which is responsible for allocating resources for the corresponding `Sandbox`. Since an Endpoint represents a service and not necessarily a particular container, an `Endpoint` also has a global scope within a cluster.

**Sandbox**
The `Sandbox` object represents a container's network configuration, such as IP address, MAC address, routes and DNS entries. A `Sandbox` object is created when the user requests the creation of an endpoint on a network. The `Driver` that handles the `Network` is responsible for allocating the required network resources (such as the IP address) and passing the information, called `SandboxInfo`, back to libnetwork. libnetwork makes use of OS-specific constructs (for example, netns on Linux) to populate the network configuration into the containers represented by the `Sandbox`. A `Sandbox` can have multiple endpoints attached to different networks. Since a `Sandbox` is associated with a particular container on a given host, it has a local scope representing the host that the container belongs to.

**CNM Attributes**

***Options***
`Options` provide a generic and flexible mechanism to pass `Driver`-specific configuration options from the user to the `Driver` directly. `Options` are just key-value pairs of data, with the `key` represented by a string and the `value` by a generic object (such as a Go `interface{}`). Libnetwork operates on an `Option` ONLY if its `key` matches one of the well-known `Label`s defined in the `net-labels` package. `Options` also encompass `Labels`, as explained below. `Options` are generally NOT end-user visible (in the UI), while `Labels` are.

***Labels***
`Labels` are very similar to `Options`; in fact they are just a subset of `Options`. `Labels` are typically end-user visible and are represented in the UI explicitly via the `--labels` option. They are passed from the UI to the `Driver` so that the `Driver` can make use of them to perform driver-specific operations (such as choosing the subnet to allocate IP addresses from in a Network).

## CNM Lifecycle

Consumers of the CNM, like Docker for example, interact through the CNM objects and their APIs to network the containers that they manage.

0. `Drivers` register with `NetworkController`. Built-in drivers register inside of libnetwork, while remote drivers register with libnetwork via the plugin mechanism (*plugin mechanism is WIP*). Each `driver` handles a particular `networkType`.

1. A `NetworkController` object is created using the `libnetwork.New()` API to manage the allocation of Networks and optionally configure a `Driver` with driver-specific `Options`.

2. A `Network` is created using the controller's `NewNetwork()` API by providing a `name` and `networkType`. The `networkType` parameter helps choose a corresponding `Driver` and binds the created `Network` to that `Driver`. From this point, any operation on the `Network` is handled by that `Driver`.

3. The `controller.NewNetwork()` API also takes an optional `options` parameter which carries driver-specific options and `Labels`, which the drivers can make use of for their purposes.

4. `network.CreateEndpoint()` can be called to create a new Endpoint in a given network. This API also accepts an optional `options` parameter which drivers can make use of. These `options` carry both well-known labels and driver-specific labels. Drivers are in turn called with `driver.CreateEndpoint`, and may choose to reserve IPv4/IPv6 addresses when an `Endpoint` is created in a `Network`. The `Driver` assigns these addresses using the `InterfaceInfo` interface defined in the `driverapi`. The IPv4/IPv6 addresses are needed to complete the endpoint-as-service definition, along with the ports the endpoint exposes, since a service endpoint is essentially nothing but a network address and the port number that the application container is listening on.

5. `endpoint.Join()` can be used to attach a container to an `Endpoint`. The Join operation creates a `Sandbox` if one doesn't already exist for that container. The drivers can make use of the Sandbox key to identify multiple endpoints attached to the same container. This API also accepts an optional `options` parameter which drivers can make use of.
   * Though it is not a direct design issue of libnetwork, users like `Docker` are highly encouraged to call `endpoint.Join()` during the container's `Start()` lifecycle, which is invoked *before* the container is made operational. As part of the Docker integration, this will be taken care of.
   * A FAQ about the `endpoint.Join()` API is: why do we need one API to create an Endpoint and another to join it?
     - The answer is that an Endpoint represents a service which may or may not be backed by a container. When an Endpoint is created, its resources are reserved so that any container can attach to the endpoint later and get consistent networking behaviour.

6. `endpoint.Leave()` can be invoked when a container is stopped. The `Driver` can clean up the state that it allocated during the `Join()` call. Libnetwork deletes the `Sandbox` when the last referencing endpoint leaves the network, but keeps hold of the IP addresses as long as the endpoint is still present; they are reused when the same (or any) container joins again. This ensures that the endpoint's resources are reused when containers are stopped and started again.

7. `endpoint.Delete()` is used to delete an endpoint from a network. This results in deleting the endpoint and cleaning up the cached `sandbox.Info`.

8. `network.Delete()` is used to delete a network. Libnetwork does not allow the delete to proceed if there are any existing endpoints attached to the network.
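The lifecycle steps above can be sketched as the following toy Go program. This is a hedged model of the call ordering only: all the types (`Controller`, `Network`, `Endpoint`) are hypothetical local stand-ins, not the real libnetwork, `driverapi` or `sandbox` package types, which take richer arguments.

```go
package main

import "fmt"

// Toy stand-ins for the CNM objects; the real libnetwork types differ in detail.
type Driver struct{ networkType string }

type Controller struct{ drivers map[string]*Driver }

type Network struct {
	name   string
	driver *Driver
}

type Endpoint struct{ network *Network }

// New models step 1: create the NetworkController.
func New() *Controller {
	return &Controller{drivers: map[string]*Driver{}}
}

// RegisterDriver models step 0: a driver registers for a networkType.
func (c *Controller) RegisterDriver(networkType string, d *Driver) {
	c.drivers[networkType] = d
}

// NewNetwork models step 2: the networkType binds the network to a driver.
func (c *Controller) NewNetwork(networkType, name string) *Network {
	return &Network{name: name, driver: c.drivers[networkType]}
}

// CreateEndpoint models step 4: the driver would reserve addresses here.
func (n *Network) CreateEndpoint() *Endpoint {
	return &Endpoint{network: n}
}

// Join models step 5: a sandbox is created on the first join.
func (e *Endpoint) Join(container string) {
	fmt.Printf("%s joined network %q\n", container, e.network.name)
}

// Leave models step 6: the driver cleans up its join-time state.
func (e *Endpoint) Leave(container string) {
	fmt.Printf("%s left network %q\n", container, e.network.name)
}

func main() {
	c := New()
	c.RegisterDriver("bridge", &Driver{networkType: "bridge"})
	n := c.NewNetwork("bridge", "front")
	ep := n.CreateEndpoint()
	ep.Join("web-1")
	ep.Leave("web-1")
}
```

The point of the sketch is the ordering: drivers register before networks exist, networks bind to a driver at creation, and endpoints are created on a network before any container joins them.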
## Implementation Details

### Networks & Endpoints

Libnetwork's Network and Endpoint APIs are primarily for managing the corresponding objects and book-keeping them to provide the level of abstraction required by the CNM. The actual implementation is delegated to the drivers, which realize the functionality promised in the CNM. For more information on these details, please see [the drivers section](#Drivers).

### Sandbox

Libnetwork provides a framework to implement a Sandbox in multiple operating systems. Currently, a Sandbox is implemented for Linux in `namespace_linux.go` and `configure_linux.go` in the `sandbox` package.
This creates a network namespace for each sandbox, uniquely identified by a path on the host filesystem.
Netlink calls are used to move interfaces from the global namespace to the Sandbox namespace.
Netlink is also used to manage the routing table in the namespace.

## Drivers

## API

Drivers are essentially an extension of libnetwork and provide the actual implementation for all of the libnetwork APIs defined above. Hence there is a 1-1 correspondence for all the `Network` and `Endpoint` APIs, which includes:

* `driver.Config`
* `driver.CreateNetwork`
* `driver.DeleteNetwork`
* `driver.CreateEndpoint`
* `driver.DeleteEndpoint`
* `driver.Join`
* `driver.Leave`

These driver-facing APIs make use of unique identifiers (`networkid`, `endpointid`, ...) instead of names (as seen in user-facing APIs).

The APIs are still a work in progress, and they may change based on driver requirements, especially when it comes to multi-host networking.

### Driver semantics

* `Driver.CreateEndpoint`

This method is passed an interface `EndpointInfo`, with methods `Interface` and `AddInterface`.

If the value returned by `Interface` is non-nil, the driver is expected to make use of the interface information therein (e.g., treating the address or addresses as statically supplied), and must return an error if it cannot. If the value is `nil`, the driver should allocate exactly one _fresh_ interface and use `AddInterface` to record it, or return an error if it cannot.

It is forbidden to use `AddInterface` if `Interface` is non-nil.

## Implementations

Libnetwork includes the following driver packages:

- null
- bridge
- overlay
- remote

### Null

The null driver is a `noop` implementation of the driver API, used only in cases where no networking is desired. It exists to provide backward compatibility with Docker's `--net=none` option.
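The null driver's role can be sketched against a toy rendition of the driver-facing API listed above. The interface below is an assumption for illustration: it borrows the method names from the list but not the real `driverapi` signatures, which take option maps and info interfaces.

```go
package main

import "fmt"

// Driver is a toy rendition of the driver-facing API listed above;
// the real driverapi.Driver methods take richer option/info arguments.
type Driver interface {
	Config(options map[string]interface{}) error
	CreateNetwork(networkid string) error
	DeleteNetwork(networkid string) error
	CreateEndpoint(networkid, endpointid string) error
	DeleteEndpoint(networkid, endpointid string) error
	Join(networkid, endpointid, sandboxKey string) error
	Leave(networkid, endpointid string) error
}

// nullDriver mirrors the null driver: every call succeeds and does nothing,
// which is exactly what `--net=none` requires.
type nullDriver struct{}

func (nullDriver) Config(map[string]interface{}) error { return nil }
func (nullDriver) CreateNetwork(string) error          { return nil }
func (nullDriver) DeleteNetwork(string) error          { return nil }
func (nullDriver) CreateEndpoint(string, string) error { return nil }
func (nullDriver) DeleteEndpoint(string, string) error { return nil }
func (nullDriver) Join(string, string, string) error   { return nil }
func (nullDriver) Leave(string, string) error          { return nil }

func main() {
	var d Driver = nullDriver{}
	fmt.Println(d.CreateNetwork("n1") == nil) // no-op: always succeeds
}
```

Because every method is a no-op that returns nil, a container on a null network gets its Sandbox but no connectivity, which is the intended behaviour.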
### Bridge

The `bridge` driver provides a Linux-specific bridging implementation based on the Linux bridge.
For more details, please see the [Bridge Driver documentation](bridge.md).

### Overlay

The `overlay` driver implements networking that can span multiple hosts using overlay network encapsulations such as VXLAN.
For more details on its design, please see the [Overlay Driver Design](overlay.md).

### Remote

The `remote` package does not provide a driver itself; rather, it provides a means of supporting drivers over a remote transport, which allows a driver to be written in a language of your choice.
For further details, please see the [Remote Driver Design](remote.md).
vendor/github.com/docker/libnetwork/docs/ipam.md (generated, vendored, new file, 275 lines)
@@ -0,0 +1,275 @@
# IPAM Driver

During the Network and Endpoint lifecycles, the CNM model controls IP address assignment for network and endpoint interfaces via the IPAM driver(s).
Libnetwork has a default, built-in IPAM driver and allows third-party IPAM drivers to be dynamically plugged in. On network creation, the user can specify which IPAM driver libnetwork should use for the network's IP address management. This document explains the APIs with which the IPAM driver needs to comply, and the corresponding HTTPS request/response bodies relevant for remote drivers.

## Remote IPAM driver

Along the same lines as remote network driver registration (see [remote.md](./remote.md) for more details), libnetwork initializes the `ipams.remote` package with the `Init()` function. It passes an `ipamapi.Callback` as a parameter, which implements `RegisterIpamDriver()`. The remote driver package uses this interface to register remote drivers with libnetwork's `NetworkController`, by supplying it in a `plugins.Handle` callback. The remote drivers register and communicate with libnetwork via the Docker plugin package. The `ipams.remote` package provides the proxy for the remote driver processes.

## Protocol

The communication protocol is the same as for the remote network driver.

## Handshake

During driver registration, libnetwork queries the remote driver for the default local and global address space strings, and for the driver capabilities.
More detailed information can be found in the respective sections of this document.

## Datastore Requirements

It is the remote driver's responsibility to manage its own database.

## Ipam Contract

The IPAM driver (built-in or remote) has to comply with the contract specified in `ipamapi/contract.go`:

```go
// Ipam represents the interface the IPAM service plugins must implement
// in order to allow injection/modification of IPAM database.
type Ipam interface {
	// GetDefaultAddressSpaces returns the default local and global address spaces for this ipam
	GetDefaultAddressSpaces() (string, string, error)
	// RequestPool returns an address pool along with its unique id. Address space is a mandatory field
	// which denotes a set of non-overlapping pools. pool describes the pool of addresses in CIDR notation.
	// subpool indicates a smaller range of addresses from the pool, for now it is specified in CIDR notation.
	// Both pool and subpool are non mandatory fields. When they are not specified, Ipam driver may choose to
	// return a self chosen pool for this request. In such case the v6 flag needs to be set appropriately so
	// that the driver would return the expected ip version pool.
	RequestPool(addressSpace, pool, subPool string, options map[string]string, v6 bool) (string, *net.IPNet, map[string]string, error)
	// ReleasePool releases the address pool identified by the passed id
	ReleasePool(poolID string) error
	// Request address from the specified pool ID. Input options or preferred IP can be passed.
	RequestAddress(string, net.IP, map[string]string) (*net.IPNet, map[string]string, error)
	// Release the address from the specified pool ID
	ReleaseAddress(string, net.IP) error
}
```
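As an illustration, a minimal in-memory implementation of this contract might look like the following. This is a hedged sketch, not libnetwork's built-in IPAM: it supports a single host, ignores `subPool`, `options` and the `v6` flag, never persists state, and hands out addresses sequentially.

```go
package main

import (
	"fmt"
	"net"
)

// toyIpam is a minimal, single-host sketch of the Ipam contract above.
// It is NOT libnetwork's default IPAM: no subpools, no IPv6, no persistence.
type toyIpam struct {
	pools map[string]*net.IPNet // poolID -> pool
	next  map[string]net.IP     // poolID -> next free address
}

func newToyIpam() *toyIpam {
	return &toyIpam{pools: map[string]*net.IPNet{}, next: map[string]net.IP{}}
}

func (t *toyIpam) GetDefaultAddressSpaces() (string, string, error) {
	return "local", "global", nil
}

func (t *toyIpam) RequestPool(addressSpace, pool, subPool string, options map[string]string, v6 bool) (string, *net.IPNet, map[string]string, error) {
	if pool == "" {
		return "", nil, nil, fmt.Errorf("toy ipam cannot self-choose a pool")
	}
	_, ipnet, err := net.ParseCIDR(pool)
	if err != nil {
		return "", nil, nil, err
	}
	id := addressSpace + "/" + ipnet.String() // same pool -> same pool id
	if _, ok := t.pools[id]; !ok {
		t.pools[id] = ipnet
		t.next[id] = nextIP(ipnet.IP) // skip the network address
	}
	return id, ipnet, nil, nil
}

func (t *toyIpam) ReleasePool(poolID string) error {
	delete(t.pools, poolID)
	delete(t.next, poolID)
	return nil
}

func (t *toyIpam) RequestAddress(poolID string, prefIP net.IP, opts map[string]string) (*net.IPNet, map[string]string, error) {
	pool, ok := t.pools[poolID]
	if !ok {
		return nil, nil, fmt.Errorf("unknown pool %q", poolID)
	}
	ip := prefIP
	if ip == nil {
		ip = t.next[poolID]
		t.next[poolID] = nextIP(ip)
	}
	if !pool.Contains(ip) {
		return nil, nil, fmt.Errorf("%v not in pool %v", ip, pool)
	}
	return &net.IPNet{IP: ip, Mask: pool.Mask}, nil, nil
}

func (t *toyIpam) ReleaseAddress(poolID string, ip net.IP) error { return nil }

// nextIP returns ip+1 (no overflow handling in this sketch).
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	ipam := newToyIpam()
	id, pool, _, _ := ipam.RequestPool("local", "10.1.0.0/24", "", nil, false)
	fmt.Println(id, pool) // local/10.1.0.0/24 10.1.0.0/24
	addr, _, _ := ipam.RequestAddress(id, nil, nil)
	fmt.Println(addr) // 10.1.0.1/24
}
```

Note how `RequestPool` derives the pool ID from the pool itself, so that, as the contract requires, identical calls return the same result.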
The following sections explain each of the above APIs' semantics, when they are called during the network/endpoint lifecycle, and the corresponding payloads for remote driver HTTP requests/responses.

## IPAM Configuration and flow

A libnetwork user can provide IPAM-related configuration when creating a network, via the `NetworkOptionIpam` setter function:

```go
func NetworkOptionIpam(ipamDriver string, addrSpace string, ipV4 []*IpamConf, ipV6 []*IpamConf, opts map[string]string) NetworkOption
```

The caller has to provide the IPAM driver name and may provide the address space and a list of `IpamConf` structures for IPv4 and a list for IPv6. The IPAM driver name is the only mandatory field; if it is not provided, network creation will fail.

In the list of configurations, each element has the following form:

```go
// IpamConf contains all the ipam related configurations for a network
type IpamConf struct {
	// The master address pool for containers and network interfaces
	PreferredPool string
	// A subset of the master pool. If specified,
	// this becomes the container pool
	SubPool string
	// Input options for IPAM Driver (optional)
	Options map[string]string
	// Preferred Network Gateway address (optional)
	Gateway string
	// Auxiliary addresses for network driver. Must be within the master pool.
	// libnetwork will reserve them if they fall into the container pool
	AuxAddresses map[string]string
}
```

On network creation, libnetwork iterates the list and performs the following requests to the IPAM driver:

1. Request the address pool and pass the options along via `RequestPool()`.
2. Request the network gateway address if specified; otherwise request any address from the pool to be used as the network gateway. This is done via `RequestAddress()`.
3. Request each of the specified auxiliary addresses via `RequestAddress()`.

If the list of IPv4 configurations is empty, libnetwork automatically adds one empty `IpamConf` structure. This causes libnetwork to request from the IPAM driver an IPv4 address pool of the driver's choice on the configured address space, if specified, or on the IPAM driver's default address space otherwise. If the IPAM driver is not able to provide an address pool, network creation fails.
If the list of IPv6 configurations is empty, libnetwork does not take any action.
The data retrieved from the IPAM driver during the execution of points 1) to 3) is stored in the network structure as a list of `IpamInfo` structures, one list for IPv4 and one for IPv6.

On endpoint creation, libnetwork iterates over the list of configs and performs the following operations:

1. Request an IPv4 address from the IPv4 pool and assign it to the endpoint interface's IPv4 address. If successful, stop iterating.
2. Request an IPv6 address from the IPv6 pool (if one exists) and assign it to the endpoint interface's IPv6 address. If successful, stop iterating.

Endpoint creation fails if any of the above operations does not succeed.

On endpoint deletion, libnetwork performs the following operations:

1. Release the endpoint interface's IPv4 address.
2. Release the endpoint interface's IPv6 address, if present.

On network deletion, libnetwork iterates the list of `IpamData` structures and performs the following requests to the IPAM driver:

1. Release the network gateway address via `ReleaseAddress()`.
2. Release each of the auxiliary addresses via `ReleaseAddress()`.
3. Release the pool via `ReleasePool()`.

### GetDefaultAddressSpaces

GetDefaultAddressSpaces returns the default local and global address space names for this IPAM. An address space is a set of non-overlapping address pools isolated from other address spaces' pools; in other words, the same pool can exist in N different address spaces. An address space naturally maps to a tenant name.
In libnetwork, the meaning associated with the `local` or `global` address space is that a local address space doesn't need to be synchronized across the cluster, whereas a global address space does. Unless specified otherwise in the IPAM configuration, libnetwork requests address pools from the default local or default global address space based on the scope of the network being created. For example, if not specified otherwise in the configuration, libnetwork will request an address pool from the default local address space for a bridge network, and from the default global address space for an overlay network.

During registration, the remote driver will receive a POST message to the URL `/IpamDriver.GetDefaultAddressSpaces` with no payload. The driver's response should have the form:

	{
		"LocalDefaultAddressSpace": string,
		"GlobalDefaultAddressSpace": string
	}

### RequestPool

This API is for registering an address pool with the IPAM driver. Multiple identical calls must return the same result;
it is the IPAM driver's responsibility to keep a reference count for the pool.

```go
RequestPool(addressSpace, pool, subPool string, options map[string]string, v6 bool) (string, *net.IPNet, map[string]string, error)
```

For this API, the remote driver will receive a POST message to the URL `/IpamDriver.RequestPool` with the following payload:

	{
		"AddressSpace": string,
		"Pool": string,
		"SubPool": string,
		"Options": map[string]string,
		"V6": bool
	}

Where:

* `AddressSpace` is the IP address space
* `Pool` is the IPv4 or IPv6 address pool in CIDR format
* `SubPool` is an optional subset of the address pool, an IP range in CIDR format
* `Options` is a map of IPAM driver-specific options
* `V6` indicates whether an IPAM self-chosen pool should be IPv6

`AddressSpace` is the only mandatory field. If no `Pool` is specified, the IPAM driver may return a self-chosen address pool (one of its defaults); in that case, the `V6` flag must be set if the caller needs IPv6 addresses to be allocated. A request with an empty `Pool` and a non-empty `SubPool` should be rejected as invalid.

A successful response is in the form:

	{
		"PoolID": string,
		"Pool": string,
		"Data": map[string]string
	}

Where:

* `PoolID` is an identifier for this pool; the same pool must always get the same pool ID
* `Pool` is the pool in CIDR format
* `Data` is IPAM driver-supplied metadata for this pool
### ReleasePool

This API is for releasing a previously registered address pool.

```go
ReleasePool(poolID string) error
```

For this API, the remote driver will receive a POST message to the URL `/IpamDriver.ReleasePool` with the following payload:

	{
		"PoolID": string
	}

Where:

* `PoolID` is the pool identifier

A successful response is empty:

	{}

### RequestAddress

This API is for reserving an IP address.

```go
RequestAddress(string, net.IP, map[string]string) (*net.IPNet, map[string]string, error)
```

For this API, the remote driver will receive a POST message to the URL `/IpamDriver.RequestAddress` with the following payload:

	{
		"PoolID": string,
		"Address": string,
		"Options": map[string]string
	}

Where:

* `PoolID` is the pool identifier
* `Address` is the preferred address in regular IP form (A.B.C.D). If empty, the IPAM driver chooses any available address in the pool
* `Options` are IPAM driver-specific options

A successful response is in the form:

	{
		"Address": string,
		"Data": map[string]string
	}

Where:

* `Address` is the allocated address in CIDR format (A.B.C.D/MM)
* `Data` is IPAM driver-specific metadata

### ReleaseAddress

This API is for releasing an IP address.

For this API, the remote driver will receive a POST message to the URL `/IpamDriver.ReleaseAddress` with the following payload:

	{
		"PoolID": string,
		"Address": string
	}

Where:

* `PoolID` is the pool identifier
* `Address` is the IP address to release

### GetCapabilities

During driver registration, libnetwork will query the driver about its capabilities. It is not mandatory for the driver to support this URL endpoint; if the driver does not support it, registration succeeds, with empty capabilities automatically added to the internal driver handle.

During registration, the remote driver will receive a POST message to the URL `/IpamDriver.GetCapabilities` with no payload. The driver's response should have the form:

	{
		"RequiresMACAddress": bool
	}

## Capabilities

Capabilities are requirements and features that the remote IPAM driver can express during registration with libnetwork.
As of now, libnetwork accepts the following capability:

### RequiresMACAddress

This is a boolean value which tells libnetwork whether the IPAM driver needs to know the interface MAC address in order to properly process the `RequestAddress()` call.
If true, on a `CreateEndpoint()` request, libnetwork will generate a random MAC address for the endpoint (if an explicit MAC address was not already provided by the user) and pass it to `RequestAddress()` inside the options map when requesting the IP address. The key will be the `netlabel.MacAddress` constant: `"com.docker.network.endpoint.macaddress"`.
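A MAC-aware driver would read that option back out of the `RequestAddress()` options map. The sketch below is hypothetical (only the key string comes from the doc above; `macFromOptions` is an illustrative helper, not a libnetwork function):

```go
package main

import (
	"fmt"
	"net"
)

// Value of the netlabel.MacAddress constant quoted above.
const macAddressLabel = "com.docker.network.endpoint.macaddress"

// macFromOptions extracts and validates the endpoint MAC address that
// libnetwork places in the RequestAddress options map when the driver
// registered with RequiresMACAddress=true.
func macFromOptions(opts map[string]string) (net.HardwareAddr, error) {
	raw, ok := opts[macAddressLabel]
	if !ok {
		return nil, fmt.Errorf("option %q not present", macAddressLabel)
	}
	return net.ParseMAC(raw)
}

func main() {
	opts := map[string]string{macAddressLabel: "02:42:ac:11:00:02"}
	mac, err := macFromOptions(opts)
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // 02:42:ac:11:00:02
}
```

Since the capability is optional, a driver that does not set `RequiresMACAddress` should expect this key to be absent and must not fail on a missing entry.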
vendor/github.com/docker/libnetwork/docs/legacy.md (generated, vendored, new file, 15 lines)
@@ -0,0 +1,15 @@
This document provides a TL;DR version of https://docs.docker.com/v1.6/articles/networking/.
If you are interested in the detailed operational design, please refer to that link.

## Docker Networking design as of Docker v1.6

Prior to libnetwork, Docker networking was handled in both Docker Engine and libcontainer.
Docker Engine made use of the bridge driver to provide a single-host networking solution with the help of the Linux bridge and iptables.
Docker Engine provided simple configuration options such as `--link` and `--expose` to enable container connectivity within the same host, abstracting networking configuration away from the containers completely.
For external connectivity, it relied upon NAT and port mapping.

Docker Engine was responsible for providing the configuration for the container's networking stack.

Libcontainer would then use this information to create the necessary networking devices and move them into a network namespace.
This namespace would then be used when the container was started.
vendor/github.com/docker/libnetwork/docs/overlay.md (generated, vendored, new file, 153 lines)
@@ -0,0 +1,153 @@
# Overlay Driver

### Design
TODO

### Multi-Host Overlay Driver Quick Start

This example provisions two Docker hosts with the **experimental** libnetwork overlay network driver.

### Pre-Requisites

- Kernel >= 3.16
- Experimental Docker client

### Install Docker Experimental

Follow the Docker experimental installation instructions at: [https://github.com/docker/docker/tree/master/experimental](https://github.com/docker/docker/tree/master/experimental)

To ensure you are running the experimental Docker branch, check the version and look for the experimental tag:

```
$ docker -v
Docker version 1.8.0-dev, build f39b9a0, experimental
```
### Install and Bootstrap K/V Store

Multi-host networking uses a pluggable key-value store backend to distribute state using `libkv`.
`libkv` supports multiple pluggable backends such as `consul`, `etcd`, and `zookeeper` (more to come).

In this example we will use `consul`.

Install:

```
$ curl -OL https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip
$ unzip 0.5.2_linux_amd64.zip
$ mv consul /usr/local/bin/
```
**host-1** Start Consul as a server in bootstrap mode:

```
$ consul agent -server -bootstrap -data-dir /tmp/consul -bind=<host-1-ip-address>
```

**host-2** Start the Consul agent:

```
$ consul agent -data-dir /tmp/consul -bind=<host-2-ip-address>
$ consul join <host-1-ip-address>
```
### Start the Docker Daemon with the Network Driver Daemon Flags

**host-1** Docker daemon:

```
$ docker -d --kv-store=consul:localhost:8500 --label=com.docker.network.driver.overlay.bind_interface=eth0
```

**host-2** Start the Docker daemon with the neighbor ID configuration:

```
$ docker -d --kv-store=consul:localhost:8500 --label=com.docker.network.driver.overlay.bind_interface=eth0 --label=com.docker.network.driver.overlay.neighbor_ip=<host-1-ip-address>
```
### QuickStart Containers Attached to a Network

**host-1** Start a container that publishes a service svc1 in the network dev, which is managed by the overlay driver:

```
$ docker run -i -t --publish-service=svc1.dev.overlay debian
root@21578ff721a9:/# ip add show eth0
34: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ec:41:35:bf brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.16/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ecff:fe41:35bf/64 scope link
       valid_lft forever preferred_lft forever
```

**host-2** Start a container that publishes a service svc2 in the network dev, which is managed by the overlay driver:

```
$ docker run -i -t --publish-service=svc2.dev.overlay debian
root@d217828eb876:/# ping svc1
PING svc1 (172.21.0.16): 56 data bytes
64 bytes from 172.21.0.16: icmp_seq=0 ttl=64 time=0.706 ms
64 bytes from 172.21.0.16: icmp_seq=1 ttl=64 time=0.687 ms
64 bytes from 172.21.0.16: icmp_seq=2 ttl=64 time=0.841 ms
```
### Detailed Setup

You can also set up networks and services and then attach a running container to them.

**host-1**:

```
docker network create -d overlay prod
docker network ls
docker network info prod
docker service publish db1.prod
cid=$(docker run -itd -p 8000:8000 ubuntu)
docker service attach $cid db1.prod
```

**host-2**:

```
docker network ls
docker network info prod
docker service publish db2.prod
cid=$(docker run -itd -p 8000:8000 ubuntu)
docker service attach $cid db2.prod
```
Once the containers are started, a container on `host-1` and a container on `host-2` should be able to ping one another via IP address, service name, or \<service name>.\<network name>.

View information about the networks and services using the `ls` and `info` subcommands like so:

```
$ docker service ls
SERVICE ID           NAME          NETWORK       CONTAINER
0771deb5f84b         db2           prod          0e54a527f22c
aea23b224acf         db1           prod          4b0a309ca311

$ docker network info prod
Network Id: 5ac68be2518959b48ad102e9ec3d8f42fb2ec72056aa9592eb5abd0252203012
	Name: prod
	Type: overlay

$ docker service info db1.prod
Service Id: aea23b224acfd2da9b893870e0d632499188a1a4b3881515ba042928a9d3f465
	Name: db1
	Network: prod
```

To detach and unpublish a service:

```
$ docker service detach $cid <service>.<network>
$ docker service unpublish <service>.<network>

# Example:
$ docker service detach $cid db2.prod
$ docker service unpublish db2.prod
```

To reiterate, this is experimental and will be under active development.
300
vendor/github.com/docker/libnetwork/docs/remote.md
generated
vendored
Normal file
@@ -0,0 +1,300 @@
Remote Drivers
==============

The `drivers.remote` package provides the integration point for dynamically-registered drivers. Unlike the other driver packages, it does not provide a single implementation of a driver; rather, it provides a proxy for remote driver processes, which are registered and communicate with LibNetwork via the Docker plugin package.

For the semantics of driver methods, which correspond to the protocol below, please see the [overall design](design.md).

## LibNetwork integration with the Docker `plugins` package

When LibNetwork initialises the `drivers.remote` package with the `Init()` function, it passes a `DriverCallback` as a parameter, which implements `RegisterDriver()`. The remote driver package uses this interface to register remote drivers with LibNetwork's `NetworkController`, by supplying it in a `plugins.Handle` callback.

The callback is invoked when a driver is loaded with the `plugins.Get` API call. How that comes about is out of scope here (but it might be, for instance, when that driver is mentioned by the user).

This design ensures that the details of the driver registration mechanism are owned by the remote driver package, and it does not expose any of the driver layer to the north of LibNetwork.

## Implementation

The remote driver implementation uses a `plugins.Client` to communicate with the remote driver process. The `driverapi.Driver` methods are implemented as RPCs over the plugin client.

The payloads of these RPCs are mostly direct translations into JSON of the arguments given to the method. There are some exceptions to account for the use of the interfaces `InterfaceInfo` and `JoinInfo`, and data types that do not serialise to JSON well (e.g., `net.IPNet`). The protocol is detailed below under "Protocol".

## Usage

A remote driver proxy follows all the rules of any other in-built driver and exposes exactly the same `Driver` interface. LibNetwork will also support driver-specific `options` and user-supplied `labels`, which may influence the behaviour of a remote driver process.

## Protocol

The remote driver protocol is a set of RPCs, issued as HTTP POSTs with JSON payloads. The proxy issues requests, and the remote driver process is expected to respond, usually with a JSON payload of its own, although in some cases these are empty maps.
### Errors

If the remote process cannot decode, or otherwise detects a syntactic problem with, the HTTP request or payload, it must respond with an HTTP error status (4xx or 5xx).

If the remote process can decode the request but cannot complete the operation, it must send a response in the form

    {
        "Err": string
    }

The string value supplied may appear in logs, so it should not include confidential information.
### Handshake

When loaded, a remote driver process receives an HTTP POST on the URL `/Plugin.Activate` with no payload. It must respond with a manifest of the form

    {
        "Implements": ["NetworkDriver"]
    }

Other entries in the list value are allowed; `"NetworkDriver"` indicates that the plugin should be registered with LibNetwork as a driver.
### Set capability

After the handshake, the remote driver will receive another POST message to the URL `/NetworkDriver.GetCapabilities` with no payload. The driver's response should have the form:

    {
        "Scope": "local"
    }

The value of `"Scope"` should be either `"local"` or `"global"`, indicating the capability of the remote driver; any other value will fail the driver's registration and return an error to the caller.

### Create network

When the proxy is asked to create a network, the remote process shall receive a POST to the URL `/NetworkDriver.CreateNetwork` of the form

    {
        "NetworkID": string,
        "IPv4Data": [
            {
                "AddressSpace": string,
                "Pool": ipv4-cidr-string,
                "Gateway": ipv4-address,
                "AuxAddresses": {
                    "<identifier1>": "<ipv4-address1>",
                    "<identifier2>": "<ipv4-address2>",
                    ...
                }
            },
        ],
        "IPv6Data": [
            {
                "AddressSpace": string,
                "Pool": ipv6-cidr-string,
                "Gateway": ipv6-address,
                "AuxAddresses": {
                    "<identifier1>": "<ipv6-address1>",
                    "<identifier2>": "<ipv6-address2>",
                    ...
                }
            },
        ],
        "Options": {
            ...
        }
    }

* The `NetworkID` value is generated by LibNetwork and represents a unique network.
* The `Options` value is the arbitrary map given to the proxy by LibNetwork.
* `IPv4Data` and `IPv6Data` are the IP-addressing data configured by the user and managed by the IPAM driver. The network driver is expected to honor the IP-addressing data supplied by the IPAM driver. The data include:
  * `AddressSpace`: A unique string representing an isolated space for IP addressing.
  * `Pool`: A range of IP addresses represented in CIDR format (address/mask). Since the IPAM driver is responsible for allocating container IP addresses, the network driver can make use of this information for network plumbing purposes.
  * `Gateway`: Optionally, the IPAM driver may provide a gateway for the subnet represented by the Pool. The network driver can make use of this information for network plumbing purposes.
  * `AuxAddresses`: A list of pre-allocated IP addresses with an associated identifier, as provided by the user, to assist the network driver if it requires specific IP addresses for its operation.

The response indicating success is empty:

    {}
### Delete network

When a network owned by the remote driver is deleted, the remote process shall receive a POST to the URL `/NetworkDriver.DeleteNetwork` of the form

    {
        "NetworkID": string
    }

The success response is empty:

    {}
### Create endpoint

When the proxy is asked to create an endpoint, the remote process shall receive a POST to the URL `/NetworkDriver.CreateEndpoint` of the form

    {
        "NetworkID": string,
        "EndpointID": string,
        "Options": {
            ...
        },
        "Interface": {
            "Address": string,
            "AddressIPv6": string,
            "MacAddress": string
        }
    }

The `NetworkID` is the generated identifier for the network to which the endpoint belongs; the `EndpointID` is a generated identifier for the endpoint.

`Options` is an arbitrary map as supplied to the proxy.

The `Interface` value is of the form given. The fields in the `Interface` may be empty, and the `Interface` itself may be empty. If supplied, `Address` is an IPv4 address and subnet in CIDR notation; e.g., `"192.168.34.12/16"`. If supplied, `AddressIPv6` is an IPv6 address and subnet in CIDR notation. `MacAddress` is a MAC address as a string; e.g., `"6e:75:32:60:44:c9"`.

A success response is of the form

    {
        "Interface": {
            "Address": string,
            "AddressIPv6": string,
            "MacAddress": string
        }
    }

with values in the `Interface` as above. As far as the value of `Interface` is concerned, `MacAddress` and either or both of `Address` and `AddressIPv6` must be given.

If the remote process was supplied a non-empty value in `Interface`, it must respond with an empty `Interface` value. LibNetwork will treat it as an error if it supplies a non-empty value and receives a non-empty value back, and will roll back the operation.
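The rule above (a `MacAddress` plus either or both of `Address` and `AddressIPv6`) can be expressed as a small validation helper. This is a hypothetical sketch, not libnetwork's own code; the struct simply mirrors the `Interface` object of the protocol.

```go
package main

import (
	"errors"
	"fmt"
)

// EndpointInterface mirrors the "Interface" object in the
// CreateEndpoint response described above.
type EndpointInterface struct {
	Address     string // IPv4 address in CIDR notation, may be empty
	AddressIPv6 string // IPv6 address in CIDR notation, may be empty
	MacAddress  string // e.g. "6e:75:32:60:44:c9"
}

// validate enforces the protocol rule: MacAddress must be given,
// along with either or both of Address and AddressIPv6.
func validate(i EndpointInterface) error {
	if i.MacAddress == "" {
		return errors.New("MacAddress is required")
	}
	if i.Address == "" && i.AddressIPv6 == "" {
		return errors.New("one of Address or AddressIPv6 is required")
	}
	return nil
}

func main() {
	ok := EndpointInterface{Address: "192.168.34.12/16", MacAddress: "6e:75:32:60:44:c9"}
	fmt.Println(validate(ok) == nil) // true
}
```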
### Endpoint operational info

The proxy may be asked for "operational info" on an endpoint. When this happens, the remote process shall receive a POST to `/NetworkDriver.EndpointOperInfo` of the form

    {
        "NetworkID": string,
        "EndpointID": string
    }

where `NetworkID` and `EndpointID` have meanings as above. It must send a response of the form

    {
        "Value": { ... }
    }

where the value of the `Value` field is an arbitrary (possibly empty) map.
### Delete endpoint

When an endpoint is deleted, the remote process shall receive a POST to the URL `/NetworkDriver.DeleteEndpoint` with a body of the form

    {
        "NetworkID": string,
        "EndpointID": string
    }

where `NetworkID` and `EndpointID` have meanings as above. A success response is empty:

    {}
### Join

When a sandbox is given an endpoint, the remote process shall receive a POST to the URL `/NetworkDriver.Join` of the form

    {
        "NetworkID": string,
        "EndpointID": string,
        "SandboxKey": string,
        "Options": { ... }
    }

The `NetworkID` and `EndpointID` have meanings as above. The `SandboxKey` identifies the sandbox. `Options` is an arbitrary map as supplied to the proxy.

The response must have the form

    {
        "InterfaceName": {
            "SrcName": string,
            "DstPrefix": string
        },
        "Gateway": string,
        "GatewayIPv6": string,
        "StaticRoutes": [{
            "Destination": string,
            "RouteType": int,
            "NextHop": string
        }, ...]
    }

`Gateway` is optional and, if supplied, is an IP address as a string; e.g., `"192.168.0.1"`. `GatewayIPv6` is optional and, if supplied, is an IPv6 address as a string; e.g., `"fe80::7809:baff:fec6:7744"`.

The entries in `InterfaceName` represent actual OS-level interfaces that should be moved by LibNetwork into the sandbox; the `SrcName` is the name of the OS-level interface that the remote process created, and the `DstPrefix` is a prefix for the name the OS-level interface should have after it has been moved into the sandbox (LibNetwork will append an index to make sure the actual name does not collide with others).

The entries in `StaticRoutes` represent routes that should be added to an interface once it has been moved into the sandbox. Since there may be zero or more routes for an interface, unlike the interface name, they can be supplied in any order.

Routes are either given a `RouteType` of `0` and a value for `NextHop`; or a `RouteType` of `1` and no value for `NextHop`, meaning a connected route.

If no gateway and no default static route is set by the driver in the Join response, libnetwork will add an additional interface to the sandbox connecting to a default gateway network (a bridge network named *docker_gwbridge*) and program the default gateway into the sandbox accordingly, pointing to the interface address of the bridge *docker_gwbridge*.
### Leave

If the proxy is asked to remove an endpoint from a sandbox, the remote process shall receive a POST to the URL `/NetworkDriver.Leave` of the form

    {
        "NetworkID": string,
        "EndpointID": string
    }

where `NetworkID` and `EndpointID` have meanings as above. The success response is empty:

    {}
### DiscoverNew Notification

LibNetwork listens to inbuilt Docker discovery notifications and passes them along to the interested drivers.

When the proxy receives a DiscoverNew notification, the remote process shall receive a POST to the URL `/NetworkDriver.DiscoverNew` of the form

    {
        "DiscoveryType": int,
        "DiscoveryData": {
            ...
        }
    }

`DiscoveryType` represents the discovery type. Each discovery type is represented by a number.
`DiscoveryData` carries discovery data, the structure of which is determined by the `DiscoveryType`.

The response indicating success is empty:

    {}

* Node Discovery

Node discovery is represented by a `DiscoveryType` value of `1`, and the corresponding `DiscoveryData` will carry node discovery data:

    {
        "DiscoveryType": int,
        "DiscoveryData": {
            "Address": string,
            "self": bool
        }
    }
### DiscoverDelete Notification

When the proxy receives a DiscoverDelete notification, the remote process shall receive a POST to the URL `/NetworkDriver.DiscoverDelete` of the form

    {
        "DiscoveryType": int,
        "DiscoveryData": {
            ...
        }
    }

`DiscoveryType` represents the discovery type. Each discovery type is represented by a number.
`DiscoveryData` carries discovery data, the structure of which is determined by the `DiscoveryType`.

The response indicating success is empty:

    {}

* Node Discovery

Similar to the DiscoverNew call, node discovery is represented by a `DiscoveryType` value of `1`, and the corresponding `DiscoveryData` will carry the node discovery data to be deleted:

    {
        "DiscoveryType": int,
        "DiscoveryData": {
            "Address": string,
            "self": bool
        }
    }
16
vendor/github.com/docker/libnetwork/docs/vagrant-systemd/docker.service
generated
vendored
Normal file
@@ -0,0 +1,16 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
185
vendor/github.com/docker/libnetwork/docs/vagrant.md
generated
vendored
Normal file
@@ -0,0 +1,185 @@
# Vagrant Setup to Test the Overlay Driver

This documentation highlights how to use Vagrant to start a three-node setup to test Docker networking.

## Pre-requisites

This was tested on:

- Vagrant 1.7.2
- VirtualBox 4.3.26

## Machine Setup

The Vagrantfile provided will start three virtual machines. One will act as a Consul server, and the other two will act as Docker hosts.
The experimental version of Docker is installed.

- `consul-server` is the Consul server node, based on Ubuntu 14.04; it has IP 192.168.33.10
- `net-1` is the first Docker host, based on Ubuntu 14.10; it has IP 192.168.33.11
- `net-2` is the second Docker host, based on Ubuntu 14.10; it has IP 192.168.33.12
## Getting Started

Clone this repo, change to the `docs` directory, and let Vagrant do the work.

    $ vagrant up
    $ vagrant status
    Current machine states:

    consul-server             running (virtualbox)
    net-1                     running (virtualbox)
    net-2                     running (virtualbox)

You are now ready to SSH to the Docker hosts and start containers.

    $ vagrant ssh net-1
    vagrant@net-1:~$ docker version
    Client version: 1.8.0-dev
    ...<snip>...

Check that Docker networking is functional by listing the default networks:

    vagrant@net-1:~$ docker network ls
    NETWORK ID          NAME                TYPE
    4275f8b3a821        none                null
    80eba28ed4a7        host                host
    64322973b4aa        bridge              bridge

No services have been published so far, so `docker service ls` will return an empty list:

    $ docker service ls
    SERVICE ID          NAME                NETWORK             CONTAINER
Start a container and check the content of `/etc/hosts`.

    $ docker run -it --rm ubuntu:14.04 bash
    root@df479e660658:/# cat /etc/hosts
    172.21.0.3      df479e660658
    127.0.0.1       localhost
    ::1     localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    172.21.0.3      distracted_bohr
    172.21.0.3      distracted_bohr.multihost

In a separate terminal on `net-1`, list the networks again. You will see that the _multihost_ overlay now appears.
The overlay network _multihost_ is your default network. This was set up by the Docker daemon during the Vagrant provisioning. Check `/etc/default/docker` to see the options that were set.
    vagrant@net-1:~$ docker network ls
    NETWORK ID          NAME                TYPE
    4275f8b3a821        none                null
    80eba28ed4a7        host                host
    64322973b4aa        bridge              bridge
    b5c9f05f1f8f        multihost           overlay

Now, in a separate terminal, SSH to `net-2` and check the networks and services. The networks will be the same, and the default network will also be _multihost_ of type overlay. But the service list will show the container started on `net-1`:

    $ vagrant ssh net-2
    vagrant@net-2:~$ docker service ls
    SERVICE ID          NAME                NETWORK             CONTAINER
    b00f2bfd81ac        distracted_bohr     multihost           df479e660658
Start a container on `net-2` and check `/etc/hosts`.

    vagrant@net-2:~$ docker run -ti --rm ubuntu:14.04 bash
    root@2ac726b4ce60:/# cat /etc/hosts
    172.21.0.4      2ac726b4ce60
    127.0.0.1       localhost
    ::1     localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    172.21.0.3      distracted_bohr
    172.21.0.3      distracted_bohr.multihost
    172.21.0.4      modest_curie
    172.21.0.4      modest_curie.multihost

You will see not only the container that you just started on `net-2` but also the container that you started earlier on `net-1`.
And of course, you will be able to ping each container.
## Creating a Non-Default Overlay Network

In the previous test we started containers with the regular options `-ti --rm`, and these containers were placed automatically in the default network, which was set to be the _multihost_ network of type overlay.

But you can create your own overlay network and start containers in it. Let's create a new overlay network.
On one of your Docker hosts, `net-1` or `net-2`, do:

    $ docker network create -d overlay foobar
    8805e22ad6e29cd7abb95597c91420fdcac54f33fcdd6fbca6dd4ec9710dd6a4
    $ docker network ls
    NETWORK ID          NAME                TYPE
    a77e16a1e394        host                host
    684a4bb4c471        bridge              bridge
    8805e22ad6e2        foobar              overlay
    b5c9f05f1f8f        multihost           overlay
    67d5a33a2e54        none                null

The second host will automatically see this network as well. To start a container on this new network, simply use the `--publish-service` option of `docker run` like so:

    $ docker run -it --rm --publish-service=bar.foobar.overlay ubuntu:14.04 bash

Note that you could directly start a container with a new overlay using the `--publish-service` option, and it will create the network automatically.

Check the Docker services now:

    $ docker service ls
    SERVICE ID          NAME                NETWORK             CONTAINER
    b1ffdbfb1ac6        bar                 foobar              6635a3822135

Repeat the getting-started steps by starting another container in this new overlay on the other host, check the `/etc/hosts` file, and try to ping each container.
## A look at the interfaces

This new Docker multihost networking is made possible via VXLAN tunnels and the use of network namespaces.
Check the [design](design.md) documentation for all the details. But to explore these concepts a bit, nothing beats an example.

With a running container in one overlay, check the network namespace:

    $ docker inspect -f '{{ .NetworkSettings.SandboxKey}}' 6635a3822135
    /var/run/docker/netns/6635a3822135

This is a non-default location for network namespaces, which might confuse things a bit. So let's become root, head over to the directory that contains the network namespaces of the containers, and check the interfaces:

    $ sudo su
    root@net-2:/home/vagrant# cd /var/run/docker/
    root@net-2:/var/run/docker# ls netns
    6635a3822135
    8805e22ad6e2

To be able to check the interfaces in those network namespaces using the `ip` command, just create a symlink for `netns` that points to `/var/run/docker/netns`:

    root@net-2:/var/run# ln -s /var/run/docker/netns netns
    root@net-2:/var/run# ip netns show
    6635a3822135
    8805e22ad6e2

The two namespace IDs returned are the one of the running container on that host and the one of the actual overlay network the container is in.
Let's check the interfaces in the container:

    root@net-2:/var/run/docker# ip netns exec 6635a3822135 ip addr show eth0
    15: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:b3:91:22:c3 brd ff:ff:ff:ff:ff:ff
        inet 172.21.0.5/16 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:b3ff:fe91:22c3/64 scope link
           valid_lft forever preferred_lft forever

Indeed, we get back the network interface of our running container: same MAC address, same IP.
If we check the links of the overlay namespace, we see our VXLAN interface and the VXLAN ID being used.

    root@net-2:/var/run/docker# ip netns exec 8805e22ad6e2 ip -d link show
    ...<snip>...
    14: vxlan1: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default
        link/ether 7a:af:20:ee:e3:81 brd ff:ff:ff:ff:ff:ff promiscuity 1
        vxlan id 256 srcport 32768 61000 dstport 8472 proxy l2miss l3miss ageing 300
        bridge_slave
    16: veth2: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP mode DEFAULT group default qlen 1000
        link/ether 46:b1:e2:5c:48:a8 brd ff:ff:ff:ff:ff:ff promiscuity 1
        veth
        bridge_slave

If you sniff packets on these interfaces, you will see the traffic between your containers.