This is the Octavia provider driver for F5 BigIP appliances. It communicates with BigIP devices via the declarative AS3 API. The worker uses the driver-agent API, but it hooks more deeply into Octavia (similar to the Octavia Amphora driver) than the Provider Agents concept permits, e.g. by accessing the database directly.
There are lots of F5-specific configuration options. They can be found in `octavia_f5/common/config.py`.
- If `agent_scheduler` in the `[networking]` section of the configuration is set to `loadbalancer`, new load balancers are scheduled to the device with the least amount of load balancers. This is the default. If it is set to `listener`, new load balancers are scheduled to the device with the least amount of listeners.
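
For example, scheduling by listener count would be configured roughly like this (a minimal sketch of the relevant configuration section; only the `agent_scheduler` option and its values are taken from the description above):

```ini
[networking]
# Schedule new load balancers to the device with the fewest listeners.
# The default, "loadbalancer", schedules to the device with the fewest load balancers.
agent_scheduler = listener
```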
Mapping happens in `octavia_f5/restclient/as3objects/service.py`.
| OpenStack listener type | AS3 service class | Notes |
|---|---|---|
| TCP | Service_L4 | Uses L4 acceleration |
| UDP | Service_UDP | |
| HTTP | Service_HTTP | |
| HTTPS | Service_L4 | Uses L4 acceleration, since HTTPS simply gets passed through without decryption |
| PROXY | Service_TCP | Does not use L4 acceleration, since it's incompatible with the Proxy Protocol iRule |
| TERMINATED_HTTPS | Service_HTTPS | |
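
Conceptually the mapping boils down to a protocol-to-class lookup like the following sketch (the dictionary and function names are illustrative, not the actual identifiers in `service.py`):

```python
# Illustrative sketch of the listener-protocol-to-AS3-service-class mapping
# described in the table above. The real mapping lives in
# octavia_f5/restclient/as3objects/service.py; these names are hypothetical.
LISTENER_TO_AS3_SERVICE = {
    'TCP': 'Service_L4',                  # uses L4 acceleration
    'UDP': 'Service_UDP',
    'HTTP': 'Service_HTTP',
    'HTTPS': 'Service_L4',                # passed through without decryption
    'PROXY': 'Service_TCP',               # no L4 acceleration (Proxy Protocol iRule)
    'TERMINATED_HTTPS': 'Service_HTTPS',
}


def as3_service_class(listener_protocol: str) -> str:
    """Return the AS3 service class for an Octavia listener protocol."""
    return LISTENER_TO_AS3_SERVICE[listener_protocol]
```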
Since health monitors have different semantics in Octavia than on the BigIP (and inconsistent naming across API and database), we have to map Octavia health monitor parameters to AS3/BigIP parameters in a specific way. We try to name the parameters on the Elektra web GUI in an explanatory way.
| Elektra web GUI | CLI/API | database | AS3/BigIP |
|---|---|---|---|
| | max_retries | rise_threshold | |
| Max Retries[1] | max_retries_down | fall_threshold | timeout[2] |
| Probe Timeout | timeout | timeout | |
| Interval | delay | delay | interval |
[1] Original Elektra PR superseded by new Elektra PR, which has been merged
[2] Calculated from database parameters like this: `fall_threshold * delay + 1` (see code)
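
The derivation in [2] can be expressed as a small helper (a sketch; the function name is hypothetical, the formula is the one stated above):

```python
# Sketch of the AS3/BigIP health monitor timeout derivation from footnote [2].
def bigip_monitor_timeout(fall_threshold: int, delay: int) -> int:
    """Derive the BigIP monitor 'timeout' from Octavia database parameters."""
    return fall_threshold * delay + 1


# Example: max_retries_down=3 (fall_threshold) and delay=5 yield a timeout of 16.
assert bigip_monitor_timeout(3, 5) == 16
```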
This provider driver uses Octavia's MariaDB database to store some data, but doesn't define any new tables. Instead, otherwise unused tables/columns are used in a specific way:
- The `load_balancer` table is used like this:
  - `server_group_id` holds the name of the device the load balancer is scheduled to. Compare with `compute_flavor` in the `amphora` table below. Note that `server_group_id` is not shown by the CLI when running `openstack loadbalancer show`.
- The `amphora` table is used in two ways:
  - For each load balancer an amphora entry is created. This is done to prevent problems with Octavia's health manager, which makes assumptions about amphora entries.
    - `compute_flavor` holds the name of the device the load balancer is scheduled to. Compare with `server_group_id` in the `load_balancer` table above. This can be used to query the device via `openstack loadbalancer amphora show $LB_ID`.
    - Since an amphora table entry is never updated as long as its respective load balancer lives, the `updated_at` field will always be `null` until the load balancer is deleted, which updates the amphora entry status to `DELETED` as well.
  - For each F5 device that is managed by a provider driver worker a special entry is created in the `amphora` table (see the query sketch after this list):
    - `compute_flavor` holds the name of the managed F5 device
    - `cached_zone` holds the hostname
    - `load_balancer_id` will always be `null`
    - `role` (must contain one of the values defined in the `amphora_roles` table) holds information about whether the device is in active status (`MASTER`) or standby status (`BACKUP`)
    - `status` (must contain one of the values defined in the `provisioning_status` table) holds device state:
      - `ALLOCATED` means the device is offline (no entry in device status response)
      - `READY` means the device is online
      - `BOOTING` means it was offline and is now back online. In this case the device receives a full sync and the status is set to `READY`.
    - If `vrrp_interface` is set to `disabled` for a given F5 amphora entry, the scheduler will not take that device into account when scheduling new load balancers.
    - `vrrp_priority` holds the number of listeners on that device
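
The per-device entries can be inspected directly in the database, for example with a query like the following sketch (it assumes direct access to Octavia's MariaDB database via standard SQLAlchemy; the connection URL is a placeholder):

```python
# Sketch: list the special per-device amphora entries described above.
# They are recognizable by having no associated load balancer.
from sqlalchemy import create_engine, text

# Placeholder connection URL; use your deployment's Octavia database credentials.
engine = create_engine("mysql+pymysql://octavia:secret@database-host/octavia")

with engine.connect() as conn:
    rows = conn.execute(text(
        "SELECT compute_flavor, cached_zone, role, status, vrrp_priority "
        "FROM amphora WHERE load_balancer_id IS NULL"
    ))
    for device in rows:
        # compute_flavor = device name, cached_zone = hostname,
        # role = MASTER/BACKUP, status = READY/ALLOCATED/BOOTING,
        # vrrp_priority = number of listeners on the device
        print(dict(device._mapping))
```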
- `octavia_f5/api`: Driver, running in Octavia main process (extends `AmphoraProviderDriver`)
- `octavia_f5/cmd`: Entry points for house-keeping and status manager.
  - `house_keeping`: DB cleanup. Uses Octavia class `DatabaseCleanup`
- `octavia_f5/controller`: Communication with BigIP device
  - `status_manager`: Manages table entries representing BigIP devices
  - `controller_worker`: REST endpoints for Octavia, synchronization loop
  - `sync_manager`: Builds AS3 declarations and sends them to the BigIP device.
  - `status`: Methods for setting status in database. Used by `controller_worker`.
- `db`: Repository classes (CRUD abstractions over sqlalchemy ORM objects)
- `network`: Layer 2 network drivers (Neutron hierarchical port binding driver, no-op driver)
- `restclient`: Classes for building AS3 declarations. Used by `sync_manager` and `status_manager`.
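
To illustrate the relationship mentioned for `octavia_f5/api`, a provider driver can inherit from Octavia's amphora provider driver roughly like this (a minimal sketch; the class name is hypothetical and the import path assumes Octavia's v2 amphora driver module, not the actual code in this repository):

```python
# Minimal sketch of a provider driver building on Octavia's amphora provider
# driver, as octavia_f5/api does. The class name is hypothetical; the import
# path assumes the v2 amphora driver module shipped with Octavia.
from octavia.api.drivers.amphora_driver.v2.driver import AmphoraProviderDriver


class F5ProviderDriver(AmphoraProviderDriver):
    """Provider driver entry point running inside the Octavia API process.

    Inherits the amphora driver behavior and overrides only the calls that
    must be handled by the F5 controller worker instead.
    """
```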