Guest Placement
Draft guest placement description.
There is a "job runner" daemon. This daemon is started on every node and, at startup, attempts to acquire an exclusive lock via etcd. Only one daemon can hold the lock at a time; the others block, looping until the lock becomes available.
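A minimal sketch of the startup locking, assuming the etcd v2 Go client (github.com/coreos/etcd/client); the key name `/lock/placement`, the TTL, and the node value are illustrative assumptions, not decided names:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/client"
)

// acquireLock loops until this daemon wins the exclusive lock. Setting
// PrevExist to PrevNoExist makes the write atomic: it succeeds only if
// no other daemon currently holds the key.
func acquireLock(kapi client.KeysAPI, key, value string) {
	for {
		_, err := kapi.Set(context.Background(), key, value, &client.SetOptions{
			PrevExist: client.PrevNoExist,
			TTL:       30 * time.Second, // holder must refresh before expiry
		})
		if err == nil {
			return
		}
		time.Sleep(5 * time.Second) // lost the race; keep looping
	}
}

func main() {
	c, err := client.New(client.Config{Endpoints: []string{"http://127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	acquireLock(client.NewKeysAPI(c), "/lock/placement", "node-1")
	// ...start the job runner here, refreshing the lock TTL in the background.
}
```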
XXX: should we have a generic lock wrapper for running processes that may need to hold a lock? We may also need to wrap processes we did not write (dhcpd, etc.).
A guest create job is created by inserting a new value into the "/guests" prefix. The value must contain a valid network ID and flavor ID. The job can be added directly to etcd (via curl, etcdctl, etc.) or via the to-be-written "End-user" API.
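For illustration, writing the guest record might look like the following, reusing `kapi` from the sketch above. The JSON field names ("network", "flavor") and the uuid package are assumptions; the only requirement stated here is a valid network ID and flavor ID.

```go
// Hypothetical guest record shape; only the network ID and flavor ID
// are required, the field names themselves are assumed.
guestID := uuid.New().String() // e.g. github.com/google/uuid
doc := fmt.Sprintf(`{"network": %q, "flavor": %q}`, networkID, flavorID)
if _, err := kapi.Set(context.Background(), "/guests/"+guestID, doc, nil); err != nil {
	log.Fatal(err)
}
```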
A simple queue mechanism uses CreateInOrder in the "/queue" prefix: the value added is simply the ID of the new guest.
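The enqueue step is then a single call. CreateInOrder creates a key with a monotonically increasing index under the given directory, which is what gives the queue its FIFO ordering:

```go
// Append the new guest's ID to the work queue; watchers see entries
// in creation order.
if _, err := kapi.CreateInOrder(context.Background(), "/queue", guestID, nil); err != nil {
	log.Fatal(err)
}
```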
The daemon watches the "/queue" prefix. On a new entry, it gets the referenced guest and gathers candidate hypervisors as follows (a sketch of the full loop appears after this list):
- Get all subnets in the requested network
- XXX: should we only pick subnets with available IPs? How do we determine that, or do we just let random selection sort it out?
- Find all hypervisors with these subnets
- Select hypervisors that are "alive" (have active heartbeats)
- Select hypervisors that have available resources, by subtracting the resources of currently deployed guests from each hypervisor's totals (disk and memory)
- Randomly select a hypervisor
- Reserve an IP address on the subnet that hypervisor has for the selected network
- Make an HTTP call to the hypervisor agent API; on failure, release the IP and try another hypervisor until none are left
- Record status (where?)
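The sketch below ties the watch and selection steps together. Everything other than the etcd client calls is hypothetical: the Guest/Hypervisor types and the helper functions (subnetsForNetwork, hypervisorsFor, isAlive, hasCapacity, reserveIP, releaseIP, callAgent) are stand-ins for logic this draft has not pinned down, not names from an actual codebase.

```go
package placement

import (
	"context"
	"errors"
	"log"
	"math/rand"

	"github.com/coreos/etcd/client"
)

// Illustrative stand-ins for the real records kept in etcd.
type Guest struct{ ID, NetworkID, FlavorID string }
type Hypervisor struct{ ID string }

// Hypothetical helpers; real versions would read subnet, heartbeat,
// and resource data out of etcd.
func loadGuest(kapi client.KeysAPI, id string) Guest          { return Guest{ID: id} }
func subnetsForNetwork(network string) []string               { return nil }
func hypervisorsFor(subnets []string) []Hypervisor            { return nil }
func isAlive(hv Hypervisor) bool                              { return false }
func hasCapacity(hv Hypervisor, g Guest) bool                 { return false }
func reserveIP(hv Hypervisor, network string) (string, error) { return "", nil }
func releaseIP(ip string)                                     {}
func callAgent(hv Hypervisor, g Guest, ip string) error       { return nil }

// runQueue watches "/queue" and attempts to place each new guest.
func runQueue(kapi client.KeysAPI) {
	w := kapi.Watcher("/queue", &client.WatcherOptions{Recursive: true})
	for {
		resp, err := w.Next(context.Background())
		if err != nil {
			log.Print(err)
			continue
		}
		if resp.Action != "create" {
			continue // ignore expirations, deletes, etc.
		}
		g := loadGuest(kapi, resp.Node.Value)
		if err := place(g); err != nil {
			log.Printf("placement of %s failed: %v", g.ID, err)
		}
	}
}

// place filters candidates (subnet membership, heartbeat, disk/memory
// headroom), then tries them in random order until one accepts.
func place(g Guest) error {
	var live []Hypervisor
	for _, hv := range hypervisorsFor(subnetsForNetwork(g.NetworkID)) {
		if isAlive(hv) && hasCapacity(hv, g) {
			live = append(live, hv)
		}
	}
	rand.Shuffle(len(live), func(i, j int) { live[i], live[j] = live[j], live[i] })
	for _, hv := range live {
		ip, err := reserveIP(hv, g.NetworkID)
		if err != nil {
			continue
		}
		if err := callAgent(hv, g, ip); err != nil {
			releaseIP(ip) // free the address before trying the next candidate
			continue
		}
		return nil // placed; where to record status is still an open question
	}
	return errors.New("no hypervisor could accept the guest")
}
```

Trying candidates in random order keeps the failure path simple: a rejected hypervisor is just skipped, and the shuffle spreads load without needing a scoring function.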
Initially, only a single placement can happen at a time. TODO: performance test.