diff --git a/Docs/containers.md b/Docs/containers.md
index 2129198b70..ef2a792717 100644
--- a/Docs/containers.md
+++ b/Docs/containers.md
@@ -3,12 +3,12 @@
Singularity supports containers in a few different contexts:
###Mesos Containerizer
-The default Mesos containerizer for processes which sets resource limits/etc. Enabled by adding `mesos` to the `--containerizers` argument when running `mesos-slave`. If this is enabled, commands with no `containerInfo` definition will run under this containerizer.
+The default [mesos containerizer](http://mesos.apache.org/documentation/latest/mesos-containerizer/), which sets resource limits and other isolation for task processes. Enabled by adding `mesos` to the `--containerizers` argument when running `mesos-slave`. The mesos containerizer can isolate the task via cpu, memory, or other parameters specified using the `--isolation` flag when starting the mesos slave. Deploys with no `containerInfo` definition will try to run under this containerizer by default.
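+
+For example, a slave that enables both containerizers with cpu and memory isolation might be started along these lines (a sketch; the master address and isolator list depend on your cluster):
+
+```
+mesos-slave --master=zk://localhost:2181/mesos \
+  --containerizers=docker,mesos \
+  --isolation=cgroups/cpu,cgroups/mem
+```
+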
###Mesos Docker Containerizer
-The [built-in docker containerizer](https://mesos.apache.org/documentation/latest/docker-containerizer/) which comes with Mesos. This will manage the starting and stopping of your docker container as well as mapping ports, adding environment variables, and mapping volumes in the container to the Mesos sandbox for that task. Enable this containerizer by adding `docker` to the arguments of `--containerizers` when running `mesos-slave`.
+The [docker containerizer](https://mesos.apache.org/documentation/latest/docker-containerizer/) that ships with Mesos will manage the starting and stopping of your docker container, as well as mapping ports, adding environment variables, and mapping volumes in the container to the Mesos sandbox for that task. You can enable this containerizer by adding `docker` to the arguments of `--containerizers` when running `mesos-slave`.
-To use Singularity with the Docker containerizer, add a `containerInfo` to the SingularityDeploy object when creating a deploy (without specifying a `customExecutorCmd`). The Singularity deploy object's `containerInfo` field mirrors the Mesos `containerInfo` definition:
+To use Singularity with the Docker containerizer, add a [`containerInfo` field](reference/api.md#model-SingularityContainerInfo) with a `type` of `DOCKER` to the [SingularityDeploy](reference/api.md#model-SingularityDeploy) object when creating a deploy (without specifying a `customExecutorCmd`). The Singularity deploy object's [`containerInfo` field](reference/api.md#model-SingularityContainerInfo) mirrors the Mesos `containerInfo` definition:
```
{
@@ -20,13 +20,13 @@ To use Singularity with the Docker containerizer, add a `containerInfo` to the S
"portMappings": [
{
"containerPortType": "FROM_OFFER", # FROM_OFFER or LITERAL
- "containerPort": 0, # If type is FROM_OFFER this is the index of the port assigned by Mesos. (ie 0 -> $PORT0)
+ "containerPort": 0, # If type is FROM_OFFER this is the index of the port assigned by Mesos. (ie 0 -> first assigned port)
"hostPortType": "FROM_OFFER",
"hostPort": 0
}
]
},
- "volumes": [
+ "volumes": [ # The sandbox for the task will always be added as a volume at /mnt/mesos/sandbox within the container
{
"containerPath": "/etc/example",
"hostPath": "/etc/example"
@@ -49,17 +49,18 @@ When the SingularityExecutor is given a task with `containerInfo` of type `DOCKE
- map all specified environment variables to the container
- assign and map ports and specified volumes
- map the Mesos sandbox task directory to `/mnt/mesos/sandbox` in the container
-- create and start the container, directing output to the configured `runContext.logFile`
+- create and start the container, directing output to the configured `logFile`
- run a `docker stop` when receiving a `SIGTERM`, try to remove the stopped container, and exit with the container's exit code
-A few special notes and variables that are set:
-- `MESOS_SANDBOX`: The Mesos sandbox directory as seen inside the container (generally `/mnt/mesos/sandbox`)
-- `LOG_DIR`: The log directory that SingularityExecutor will use for logrotating/uploading/etc, generally mapped to `/mnt/mesos/sandbox/logs` in the container
+A few special notes and environment variables that are set:
+- Environment variables:
+ - `MESOS_SANDBOX`: The Mesos sandbox directory as seen inside the container (generally `/mnt/mesos/sandbox`)
+ - `LOG_DIR`: The log directory that SingularityExecutor will use for logrotating/uploading/etc, generally mapped to `/mnt/mesos/sandbox/logs` in the container
- The Docker working directory will be set to the `taskAppDirectory` in the Mesos sandbox
-- The container name will be the task id
+- The container name will be a configured prefix (`se-` by default) plus the task id (`SingularityExecutorCleanup` uses this to optionally clean up old containers that are managed by Singularity)
- SingularityExecutor will explicitly try to pull the image (ie, must be from a repository reachable by the slave)
-Here is an example deploy to get you started:
+Here is an example deploy you can use with the [docker-compose](development/docker.md) setup to get you started:
```
{
diff --git a/Docs/database.md b/Docs/database.md
index e66805a1c5..51074d4a05 100644
--- a/Docs/database.md
+++ b/Docs/database.md
@@ -39,3 +39,16 @@ INFO [2013-12-23 18:42:10,327] liquibase: ChangeSet migrations.sql::1::tpetr ra
```
More information about `db` tasks can be found in the dropwizard-migrations [docs](http://dropwizard.io/manual/migrations), and more information about the migration file syntax can be found in the liquibase [docs](http://www.liquibase.org/documentation/yaml_format.html).
+
+### Purging Old Tasks
+
+You can optionally purge old task data from the database by specifying a `historyPurging` configuration block. In the configuration for SingularityService, you can add a section similar to the following (default values shown):
+
+```
+historyPurging:
+ deleteTaskHistoryAfterDays: 365 # Purge tasks older than this many days
+ deleteTaskHistoryAfterTasksPerRequest: 1000 # Purge the oldest tasks for a request when there are more than this many
+ deleteTaskHistoryBytesInsteadOfEntireRow: true # Keep the row, just delete the task data bytes to save space
+ checkTaskHistoryEveryHours: 24 # How often to check for tasks to purge
+ enabled: false # Determines if the purge should run
+```
diff --git a/Docs/details.md b/Docs/details.md
index 7675c32f6e..6f4f530950 100644
--- a/Docs/details.md
+++ b/Docs/details.md
@@ -6,7 +6,7 @@
Singularity is an essential part of the HubSpot Platform and is ideal for deploying micro-services. It is optimized to manage thousands of concurrently running processes in hundreds of servers.
## How it Works
-Singularity is an [**Apache Mesos framework**](http://Mesos.apache.org/documentation/latest/Mesos-frameworks/). It runs as a *task scheduler* on top of **Mesos Clusters** taking advantage of Apache Mesos' scalability, fault-tolerance, and resource isolation. [Apache Mesos](http://Mesos.apache.org/documentation/latest/Mesos-architecture/) is a cluster manager that simplifies the complexity of running different types of applications on a shared pool of servers. In Mesos terminology, *Mesos applications* that use the Mesos APIs to schedule tasks in a cluster are called [*frameworks*](http://Mesos.apache.org/documentation/latest/app-framework-development-guide/).
+Singularity is an [**Apache Mesos framework**](http://mesos.apache.org/documentation/latest/mesos-frameworks/). It runs as a *task scheduler* on top of **Mesos Clusters** taking advantage of Apache Mesos' scalability, fault-tolerance, and resource isolation. [Apache Mesos](http://mesos.apache.org/documentation/latest/mesos-architecture/) is a cluster manager that simplifies the complexity of running different types of applications on a shared pool of servers. In Mesos terminology, *Mesos applications* that use the Mesos APIs to schedule tasks in a cluster are called [*frameworks*](http://mesos.apache.org/documentation/latest/app-framework-development-guide/).
![Mesos Frameworks](images/Mesos_Frameworks.png)
@@ -29,13 +29,28 @@ The *Mesos master* determines how many resources are offered to each framework a
As depicted in the figure, Singularity implements the two basic framework components as well as a few more to solve common complex / tedious problems such as task cleanup and log tailing / archiving without requiring developers to implement it for each task they want to run:
### Singularity Scheduler
-The scheduler is the core of Singularity: a DropWizard API that implements the Mesos Scheduler Driver. The scheduler matches client deploy requests to Mesos resource offers and acts as a web service offering a JSON REST API for accepting deploy requests.
+The scheduler is the core of Singularity: a [DropWizard](http://www.dropwizard.io/) API that implements the Mesos Scheduler Driver. The scheduler matches client deploy requests to Mesos resource offers and acts as a web service offering a JSON REST API for accepting deploy requests.
Clients use the Singularity API to register the type of deployable item that they want to run (web service, worker, cron job) and the corresponding runtime settings (cron schedule, # of instances, whether instances are load balanced, rack awareness, etc.).
After a deployable item (a **request**, in API terms) has been registered, clients can post *Deploy requests* for that item. Deploy requests contain information about the command to run, the executor to use, executor specific data, required cpu, memory and port resources, health check URLs and a variety of other runtime configuration options. The Singularity scheduler will then attempt to match Mesos offers (which in turn include resources as well as rack information and what else is running on slave hosts) with its list of *Deploy requests* that have yet to be fulfilled.
-Rollback of failed deploys, health checking and load balancing are also part of the advanced functionality the Singularity Scheduler offers. When a service or worker instance fails in a new deploy, the Singularity scheduler will rollback all instances to the version running before the deploy- keeping the deploys always consistent. After the scheduler makes sure that a Mesos task (corresponding to a service instance) has entered the TASK_RUNNING state it will use the provided health check URL and the specified health check timeout settings to perform health checks. If health checks go well, the next step is to perform load balancing of service instances. Load balancing is attempted only if the corresponding deployable item has been defined to be *loadBalanced*. To perform load balancing between service instances, Singularity supports a rich integration with a specific Load Balancer API. Singularity will post requests to the Load Balancer API to add the newly deployed service instances and to remove those that were previously running. Check [Integration with Load Balancers](development/lbs.md) to learn more. Singularity also provides generic webhooks which allow third party integrations, which can be registered to follow request, deploy, or task updates.
+
+Rollback of failed deploys, health checking, and load balancing are also part of the advanced functionality the Singularity Scheduler offers. A new deploy for a long running service proceeds as shown in the diagram below.
+
+![Singularity Deploy](images/deploy.png)
+
+When a service or worker instance fails in a new deploy, the Singularity scheduler will roll back all instances to the version running before the deploy, keeping deploys consistent. After the scheduler makes sure that a Mesos task (corresponding to a service instance) has entered the TASK_RUNNING state, it will use the provided health check URL and the specified health check timeout settings to perform health checks. If health checks pass, the next step is to perform load balancing of service instances. Load balancing is attempted only if the corresponding deployable item has been defined to be *loadBalanced*. To perform load balancing between service instances, Singularity supports a rich integration with a specific Load Balancer API. Singularity will post requests to the Load Balancer API to add the newly deployed service instances and to remove those that were previously running. Check [Integration with Load Balancers](development/lbs.md) to learn more. Singularity also provides generic webhooks which allow third party integrations, which can be registered to follow request, deploy, or task updates.
+
+
+#### Slave Placement
+
+When matching a Mesos resource offer to a deploy, Singularity can use one of several strategies (a `SlavePlacement` in Singularity terms) to determine if the host in the offer is appropriate for the task in question. Available placement strategies are listed below, with an example following the list:
+
+- `GREEDY`: uses whatever slaves are available
+- `SEPARATE_BY_DEPLOY`/`SEPARATE`: ensures no two instances / tasks of the same request *and* deploy id are ever placed on the same slave
+- `SEPARATE_BY_REQUEST`: ensures no two tasks belonging to the same request (regardless of deploy id) are placed on the same host
+- `OPTIMISTIC`: attempts to spread out tasks but may schedule some on the same slave
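+
+As a sketch, the placement strategy can also be set on an individual request (this assumes a `slavePlacement` field on the request that mirrors the `defaultSlavePlacement` configuration value; the other fields are illustrative):
+
+```
+{
+  "id": "my-service",
+  "instances": 3,
+  "slavePlacement": "SEPARATE_BY_REQUEST"
+}
+```
+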
#### Singularity Scheduler Dependencies
The Singularity scheduler uses ZooKeeper as a distributed replication log to maintain state and keep track of registered deployable items, the active deploys for these items and the running tasks that fulfill the deploys. As shown in the drawing, the same ZooKeeper quorum utilized by Mesos masters and slaves can be reused for Singularity.
@@ -49,6 +64,7 @@ The [*Singularity UI*](ui.md) is a single page static web application served fro
It is a fully-featured application which provides historical as well as active task information. It allows users to view task logs and interact directly with tasks and deploy requests.
+
### Optional Slave Components
#### Singularity Executor
diff --git a/Docs/development/docker.md b/Docs/development/docker.md
index 4ab58b949f..e9fed05bd1 100644
--- a/Docs/development/docker.md
+++ b/Docs/development/docker.md
@@ -1,30 +1,32 @@
## Setup
-For developing or testing our Singularity with Docker, you will need to install [docker](https://docs.docker.com/installation/) and [docker-compose](https://docs.docker.com/compose/#installation-and-set-up).
+For developing or testing out Singularity with Docker, you will need to install [docker](https://docs.docker.com/installation/) and [docker-compose](https://docs.docker.com/compose/#installation-and-set-up).
## Example cluster with Docker Compose
Run `docker-compose pull` first to get all of the needed images. *Note: This may take a few minutes*
-Then simply run `docker-compose up` to start containers for...
+Then simply run `docker-compose up` and it will start containers for...
- mesos master
- mesos slave (docker/mesos containerizers enabled)
-- zookeeper host
+- zookeeper
- Singularity
- [Baragon Service](https://github.com/HubSpot/Baragon) for load balancer management
- [Baragon Agent](https://github.com/HubSpot/Baragon) + Nginx as a load balancer
...and the following UIs will be available:
-- Singularity UI => http://localhost:7099/singularity
-- Baragon UI => http://localhost:8080/baragon/v2/ui
+- Singularity UI => [http://localhost:7099/singularity](http://localhost:7099/singularity)
+- Baragon UI => [http://localhost:8080/baragon/v2/ui](http://localhost:8080/baragon/v2/ui)
*if using [boot2docker](http://boot2docker.io/) or another vm, replace localhost with the ip of your vm*
+The docker-compose example cluster will always run off of the most recent release tag.
+
## Developing With Docker
### `dev`
-In the root of this project is a `dev` wrapper script to make developing easier. You can do the following:
+In the root of this project is a `dev` wrapper script to make developing easier. It will run using images from the current snapshot version. You can do the following:
```
./dev pull # Get the latest images from docker hub
diff --git a/Docs/images/deploy.png b/Docs/images/deploy.png
new file mode 100644
index 0000000000..c0b3a6f489
Binary files /dev/null and b/Docs/images/deploy.png differ
diff --git a/Docs/install.md b/Docs/install.md
index a957520d55..baef93696a 100644
--- a/Docs/install.md
+++ b/Docs/install.md
@@ -6,12 +6,13 @@
Singularity uses Zookeeper as its primary datastore -- it cannot run without it.
-Chef recipe: https://supermarket.chef.io/cookbooks/zookeeper
-Puppet module: https://forge.puppetlabs.com/deric/zookeeper
+Chef recipe: [https://supermarket.chef.io/cookbooks/zookeeper](https://supermarket.chef.io/cookbooks/zookeeper)
-More info on how to manually set up a Zookeeper cluster lives here: https://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup
+Puppet module: [https://forge.puppetlabs.com/deric/zookeeper](https://forge.puppetlabs.com/deric/zookeeper)
-For testing or local development purposes, a single-node cluster running on your local machine is fine.
+More info on how to manually set up a Zookeeper cluster lives [here](https://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup).
+
+For testing or local development purposes, a single-node cluster running on your local machine is fine. If using the [docker testing/development setup](development/docker.md), this will already be present.
### 2. Set up MySQL (optional)
@@ -19,7 +20,7 @@ Singularity can be configured to move stale data from Zookeeper to MySQL after a
### 3. Set up a Mesos cluster
-Mesosphere provides a good tutorial for setting up a Mesos cluster: http://mesosphere.com/docs/getting-started/datacenter/install/. Don't bother setting up Marathon, it isn't necessary for Singularity.
+Mesosphere provides a good tutorial for setting up a Mesos cluster: http://mesosphere.com/docs/getting-started/datacenter/install/. You can skip the section on setting up Marathon since Singularity will be our framework instead.
### 4. Build or download the Singularity JAR
@@ -31,7 +32,7 @@ Run `mvn clean package` in the root of the Singularity repository. The Singulari
#### Downloading a precompiled JAR
-Singularity JARs are published to Maven Central for each release. You can view the list of SingularityService (the executable piece of Singularity) here: http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.hubspot%22%20AND%20a%3A%22SingularityService%22
+Singularity JARs are published to Maven Central for each release. You can view the list of SingularityService (the executable piece of Singularity) JARs [here](http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.hubspot%22%20AND%20a%3A%22SingularityService%22).
Be sure to only use the `shaded.jar` links -- the other JARs won't work.
@@ -40,6 +41,8 @@ Be sure to only use the `shaded.jar` links -- the other JARs won't work.
Singularity requires a YAML file with some configuration values in order to start up. Here's an example:
```yaml
+
+# Run SingularityService on port 7099 and log to /var/log/singularity-access.log
server:
type: simple
applicationContextPath: /singularity
@@ -49,8 +52,8 @@ server:
requestLog:
appenders:
- type: file
- currentLogFilename: ../logs/access.log
- archivedLogFilenamePattern: ../logs/access-%d.log.gz
+ currentLogFilename: /var/log/singularity-access.log
+ archivedLogFilenamePattern: /var/log/singularity-access-%d.log.gz
database: # omit this entirely if not using MySQL
driverClass: com.mysql.jdbc.Driver
diff --git a/Docs/reference/configuration.md b/Docs/reference/configuration.md
index 6476a7c03f..8bb25fbd69 100644
--- a/Docs/reference/configuration.md
+++ b/Docs/reference/configuration.md
@@ -2,24 +2,31 @@
Singularity (Service) is configured by DropWizard via a YAML file referenced on the command line. Top-level configuration elements reside at the root of the configuration file alongside [DropWizard configuration](https://dropwizard.github.io/dropwizard/manual/configuration.html).
-- [Root Configuration](#root-configuration)
- - [Common Configuration](#common-configuration)
- - [General](#general)
- - [Healthchecks and New Task Checks](#healthchecks-and-new-task-checks)
- - [Limits](#limits)
- - [Cooldown](#cooldown)
- - [Load Balancer API](#load-balancer-api)
- - [User Interface](#user-interface)
- - [Internal Scheduler Configuration](#internal-scheduler-configuration)
- - [Pollers](#pollers)
- - [Mesos](#mesos)
- - [Thread Pools](#thread-pools)
- - [Operational](#operational)
-- [Mesos Configuration](#mesos-configuration)
- - [Framework](#framework)
- - [Resource Limits](#resource-limits)
- - [Racks](#racks)
- - [Slaves](#slaves)
+- [SingularityService Configuration](#root-configuration)
+ - [Root Configuration](#root-configuration)
+ - [Common Configuration](#common-configuration)
+ - [General](#general)
+ - [Healthchecks and New Task Checks](#healthchecks-and-new-task-checks)
+ - [Limits](#limits)
+ - [Cooldown](#cooldown)
+ - [Load Balancer API](#load-balancer-api)
+ - [User Interface](#user-interface)
+ - [Internal Scheduler Configuration](#internal-scheduler-configuration)
+ - [Pollers](#pollers)
+ - [Mesos](#mesos)
+ - [Thread Pools](#thread-pools)
+ - [Operational](#operational)
+ - [Mesos Configuration](#mesos-configuration)
+ - [Framework](#framework)
+ - [Resource Limits](#resource-limits)
+ - [Racks](#racks)
+ - [Slaves](#slaves)
+ - [Database](#database)
+ - [History Purging](#history-purging)
+ - [S3](#s3)
+ - [Sentry](#sentry)
+ - [Email/SMTP](#smtp)
+ - [UI Configuration](#ui-configuration)
## Root Configuration ##
@@ -32,7 +39,7 @@ These are settings that are more likely to be altered.
|-----------|---------|-------------|------|
| allowRequestsWithoutOwners | true | If false, submitting a request without at least one owner will return a 400 | boolean |
| commonHostnameSuffixToOmit | null | If specified, will remove this hostname suffix from all taskIds | string |
-| defaultSlavePlacement | GREEDY | The slavePlacement strategy when not specified in a request. GREEDY uses whatever slaves are available, SEPARATE_BY_DEPLOY (same as SEPARATE) ensures no 2 instances / tasks of the same request and deploy id are ever placed on the same slave, SEPARATE_BY_REQUEST ensures no two tasks belonging to the same request (regardless if deploy id) are placed on the same host, and OPTIMISTIC attempts to spread out tasks but may schedule some on the same slave | enum / string [GREEDY, OPTIMISTIC, SEPARATE (deprecated), SEPARATE_BY_DEPLOY, SEPARATE_BY_REQUEST]
+| defaultSlavePlacement | GREEDY | See [Slave Placement](../details.md#user-content-placement) | enum / string [GREEDY, OPTIMISTIC, SEPARATE (deprecated), SEPARATE_BY_DEPLOY, SEPARATE_BY_REQUEST]
| defaultValueForKillTasksOfPausedRequests | true | When a task is paused, the API allows for the tasks of that request to optionally not be killed. If that parameter is not set in the pause request, this value is used | boolean |
| deltaAfterWhichTasksAreLateMillis | 30000 (30 seconds) | The amount of time after a task's schedule time that Singularity will classify it (in state API and dashboard) as a late task | long |
| deployHealthyBySeconds | 120 | Default amount of time to allow pending deploys to run for before transitioning them into active deploys. If more than this time passes before a deploy can be considered healthy (all of its tasks either make it to TASK_RUNNING or pass healthchecks), then the deploy will be rejected | long |
@@ -88,6 +95,7 @@ These settings are less likely to be changed, but were included in the configura
| cleanupEverySeconds | 5 | Will cleanup request, task, and other queues on this interval | long |
| persistHistoryEverySeconds | 3600 (1 hour) | Moves stale historical task data from ZooKeeper into MySQL, setting to 0 will disable history persistence | long |
| saveStateEverySeconds | 60 | State about this Singularity instance is saved (available over API) on this interval | long |
+| checkScheduledJobsEveryMillis | 600000 (10 mins) | Check on this interval for newly scheduled jobs and for scheduled jobs running past their next scheduled run time | long |
#### Mesos ####
| Parameter | Default | Description | Type |
@@ -115,6 +123,14 @@ These settings are less likely to be changed, but were included in the configura
| sandboxHttpTimeoutMillis | 5000 (5 seconds) | Sandbox HTTP calls will timeout after this amount of time (fetching logs for emails / UI)
| newTaskCheckerBaseDelaySeconds | 1 | Added to the amount of time to wait before checking a new task | long |
| allowTestResourceCalls | false | If true, allows calls to be made to the test resource, which can test internal methods | boolean |
+| deleteDeploysFromZkWhenNoDatabaseAfterHours | 336 (14 days) | Delete deploys from zk when they are older than this if we are not using a database | long |
+| deleteStaleRequestsFromZkWhenNoDatabaseAfterHours | 336 (14 days) | Delete stale requests after this amount of time if we are not using a database | long |
+| deleteTasksFromZkWhenNoDatabaseAfterHours | 168 (7 days) | Delete old tasks from zk after this amount of time if we are not using a database | long |
+| deleteDeadSlavesAfterHours | 168 (7 days) | Remove dead slaves from the list after this amount of time | long |
+| deleteUndeliverableWebhooksAfterHours | 168 (7 days) | Delete (and stop retrying) failed webhooks after this amount of time | long |
+| waitForListeners | true | If true, the event system waits for all listeners to process an event before continuing | boolean |
+| warnIfScheduledJobIsRunningForAtLeastMillis | 86400000 (1 day) | Warn if a scheduled job has been running for this long | long |
+| warnIfScheduledJobIsRunningPastNextRunPct | 200 | Warn if a scheduled job has run this much past its next scheduled run time (e.g. 200 => ran through next two run times) | int |
## Mesos Configuration ##
@@ -153,3 +169,101 @@ These settings should live under the "mesos" field inside the root configuration
| slaveHttpPort | 5051 | The port to talk to slaves on | int |
| slaveHttpsPort | absent | The HTTPS port to talk to slaves on | Integer (Optional) |
+## Database ##
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| database | | The database connection for SingularityService follows the [dropwizard DataSourceFactory format](http://www.dropwizard.io/0.7.0/dropwizard-db/apidocs/io/dropwizard/db/DataSourceFactory.html) | [DataSourceFactory](http://www.dropwizard.io/0.7.0/dropwizard-db/apidocs/io/dropwizard/db/DataSourceFactory.html) |
+
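+A minimal example, mirroring the one in the [install docs](../install.md) (connection values are placeholders):
+
+```yaml
+database:
+  driverClass: com.mysql.jdbc.Driver
+  user: user
+  password: password
+  url: jdbc:mysql://localhost:3306/singularity
+```
+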
+#### History Purging ####
+
+These settings live under the "historyPurging" field in the root configuration.
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| deleteTaskHistoryAfterDays | 365 | Purge tasks older than this many days | int |
+| deleteTaskHistoryAfterTasksPerRequest | 10000 | Purge oldest tasks when there are more than this many associated with a single request | int |
+| deleteTaskHistoryBytesInsteadOfEntireRow | true | Only delete the taskHistoryBytes instead of the entire record of the task (e.g. to save space) | boolean |
+| checkTaskHistoryEveryHours | 24 | Run the purge every x hours | int |
+| enabled | false | Should we run the database purge | boolean |
+
+## S3 ##
+
+These settings live under the "s3" field in the root configuration. If using the SingularityS3Uploader, this section must be provided in order to list and download s3 logs from SingularityUI.
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| maxS3Thread | 3 | Max threads to run for fetching logs from s3 | int |
+| waitForS3ListSeconds | 5 | Timeout in seconds for fetching list of s3 logs | int |
+| waitForS3LinksSeconds | 1 | Timeout in seconds for creating new s3 links | int |
+| expireS3LinksAfterMillis | 86400000 (1 day) | Expire generated s3 log links after this amount of time | long |
+| s3Bucket | | S3 bucket to search for logs | String |
+| groupOverrides | | Extra s3 configurations provided so that individual requests may use separate s3 buckets. Each S3GroupOverrideConfiguration has a name specified by the Map key and consists of an s3Bucket, s3AccessKey, and s3SecretKey | Map |
+| s3KeyFormat | | Search for logs with keys in this format, should be the same as the key format set in the SingularityS3Uploader | String |
+| s3AccessKey | | aws access key for the specified s3 bucket | String |
+| s3SecretKey | | aws secret key for the specified s3 bucket | String |
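+
+A sketch of this section (bucket name and keys are placeholders):
+
+```yaml
+s3:
+  s3Bucket: my-singularity-logs
+  s3AccessKey: MY_ACCESS_KEY
+  s3SecretKey: MY_SECRET_KEY
+  expireS3LinksAfterMillis: 86400000
+```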
+
+## Sentry ##
+
+These settings live under the "sentry" field in the root config and enable Singularity error reporting to [sentry](https://getsentry.com/welcome/).
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| dsn | | Sentry DSN (Data Source Name) | String |
+| prefix | "" | Prefix string for event culprit naming and messages | String |
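+
+For example (the DSN is a placeholder; use the one from your Sentry project):
+
+```yaml
+sentry:
+  dsn: https://public:secret@app.getsentry.com/1234
+  prefix: "singularity "
+```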
+
+## SMTP ##
+
+These settings live under the "smtp" field in the root config.
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| username | | smtp username | String |
+| password | | smtp password | String |
+| taskLogLength | 512 | Send this many lines of a task's log in emails | int |
+| host | localhost | Host for smtp session | String |
+| port | 25 | Port for smtp session | int |
+| from | "singularity-no-reply@example.com" | Send emails from this address | String |
+| mailMaxThreads | 3 | Max threads for the email sending process | int |
+| admins | [] | List of admin user emails | List\<String\> |
+| rateLimitAfterNotifications | 5 | Rate limit email sending after this many notifications have been sent in `rateLimitPeriodMillis` | int |
+| rateLimitPeriodMillis | 60000 (1 min) | Time period for `rateLimitAfterNotifications` | long |
+| rateLimitCooldownMillis | 3600000 (1 hour) | Cooldown time before rate limiting is removed | long |
+| taskEmailTailFiles | [stdout, stderr] | Send the tail of these files in messages about tasks | List\<String\> |
+| emails | See below | See below | Map\<EmailType, List\<EmailDestination\>\> |
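+
+A sketch of a minimal section (credentials and addresses are placeholders):
+
+```yaml
+smtp:
+  username: smtp-user
+  password: smtp-password
+  host: smtp.example.com
+  port: 25
+  from: "singularity-no-reply@example.com"
+  admins:
+    - ops@example.com
+```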
+
+#### Emails List ####
+
+The emails list determines what emails to send notifications to and for what events. You can specify a map of [`EmailType`](https://github.com/HubSpot/Singularity/blob/master/SingularityService/src/main/java/com/hubspot/singularity/config/EmailConfigurationEnums.java) to a list of [`EmailDestination`s](https://github.com/HubSpot/Singularity/blob/master/SingularityService/src/main/java/com/hubspot/singularity/config/EmailConfigurationEnums.java).
+
+`EmailType` corresponds to different events that could trigger emails, such as `TASK_LOST` or `TASK_FAILED`.
+
+`EmailDestination` corresponds to one of `OWNERS` (as listed on the Singularity Request), `ACTION_TAKER` (the user who triggered the action causing the email update), or `ADMINS` (specified in config as seen above).
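+
+For example, to notify owners and admins when tasks fail and only admins when tasks are lost:
+
+```yaml
+emails:
+  TASK_FAILED:
+    - OWNERS
+    - ADMINS
+  TASK_LOST:
+    - ADMINS
+```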
+
+## UI Configuration ##
+
+These settings live under the "ui" field in the root config.
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| title | "Singularity" | Title shown at the left of the menu bar in the UI | String |
+| navColor | "" | Color for nav bar | String |
+| baseUrl | | Base url where the ui will be hosted (e.g. http://localhost:7099/singularity) | String |
+| runningTaskLogPath | stdout | Generate link to this log for running tasks on the request page | String |
+| finishedTaskLogPath | stdout | Generate link to this log for finished tasks on the request page | String |
+| hideNewDeployButton | false | Don't show the 'New Deploy' button | boolean |
+| hideNewRequestButton | false | Don't show the 'New Request' button | boolean |
+| rootUrlMode | INDEX_CATCHALL | `INDEX_CATCHALL`: UI is served off of / using a catchall resource. `UI_REDIRECT`: UI is served off of /ui, path and index redirects there. `DISABLED`: UI is served off of /ui and the root resource is not served at all | enum / String `INDEX_CATCHALL`, `UI_REDIRECT`, `DISABLED` |
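+
+For example (values are placeholders):
+
+```yaml
+ui:
+  title: Singularity (local)
+  baseUrl: http://localhost:7099/singularity
+```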
+
+## Zookeeper ##
+
+These settings live under the "zookeeper" field in the root config.
+
+| Parameter | Default | Description | Type |
+|-----------|---------|-------------|------|
+| quorum | | Comma separated host:port list of zk hosts | String |
+| sessionTimeoutMillis | 600000 (10 minutes) | zookeeper session timeout | int |
+| connectTimeoutMillis | 60000 (1 minute) | Connect to zookeeper timeout | int |
+| retryBaseSleepTimeMilliseconds | 1000 (1 second) | Wait time between zookeeper connection retries | int |
+| retryMaxTries | 3 | Max retries to obtain a zookeeper connection before aborting | int |
+| zkNamespace | | Path under which to store Singularity data in zk (e.g. /singularity) | String |
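+
+For example (host names are placeholders):
+
+```yaml
+zookeeper:
+  quorum: zk1:2181,zk2:2181,zk3:2181
+  zkNamespace: /singularity
+```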
diff --git a/Docs/reference/examples.md b/Docs/reference/examples.md
index b04395c8a8..1848b3a03c 100644
--- a/Docs/reference/examples.md
+++ b/Docs/reference/examples.md
@@ -1,5 +1,14 @@
# Singularity Deploy Examples
+- [Creating A Request](#creating-a-request)
+- [Basic Service Using the Mesos Executor](#basic-service-using-the-mesos-executor)
+- [Basic Service Using Allocated Ports](#basic-service-using-allocated-ports)
+- [Basic Load Balanced Service with Allocated Ports](#basic-load-balanced-service-with-allocated-ports)
+- [Scaling Up Services](#scaling-up)
+- [Docker Service with Host Networking](#docker-service-with-host-networking)
+- [Docker Service with Bridge Networking](#docker-service-with-bridge-networking)
+- [Load Balanced Docker Service Using The SingularityExecutor](#load-balanced-docker-service-using-the-singularityexecutor)
+
These examples assume you have [installed singularity](../install.md).
The services deployed will be a [build](https://github.com/micktwomey/docker-sample-dropwizard-service) of the [Dropwizard getting started example](https://dropwizard.github.io/dropwizard/getting-started.html) and a [simple python service](https://github.com/micktwomey/docker-sample-web-service).
@@ -324,3 +333,27 @@ To deploy this service instead change the Docker image being used:
}
}
```
+
+## Load Balanced Docker Service Using The SingularityExecutor
+
+As we saw above, we can add the `loadBalancerGroups` and `serviceBasePath` fields to our deploy and have our service be load balanced.
+
+Now, we also want to add in the SingularityExecutor. The SingularityExecutor [also has docker support](../containers.md) (separate from the mesos docker containerizer). We can instead use the SingularityExecutor by adding the following to our deploy JSON:
+
+```json
+"customExecutorCmd": "/usr/local/bin/singularity-executor", # as configured in the example cluster
+# Extra settings the SingularityExecutor can use if needed
+"executorData": {
+ "cmd":"",
+ "embeddedArtifacts":[],
+ "externalArtifacts": [],
+ "s3Artifacts": [],
+ "successfulExitCodes": [0],
+ "user": "root",
+ "extraCmdLineArgs": [],
+ "loggingExtraFields": {},
+ "maxTaskThreads": 2048
+}
+```
+
+`POST`ing this to Singularity, we now have a docker container with mapped ports, connected to a load balancer, and running via the SingularityExecutor.
diff --git a/README.md b/README.md
index ff98dfb482..3b4d7762a3 100644
--- a/README.md
+++ b/README.md
@@ -16,11 +16,11 @@ For a more thorough explanation of the concepts behind Singularity and Mesos cli
- [JSON REST API and Java Client](Docs/reference/api.md)
- [Fully featured web application (replaces and improves Mesos Master UI)](Docs/ui.md)
- Rich load balancer integration with [Baragon](https://github.com/HubSpot/Baragon)
- - Deployments, automatic rollbacks, and healthchecks
+ - [Deployments, automatic rollbacks, and healthchecks](Docs/details.md#user-content-deploys)
- [Webhooks for third party integrations](Docs/webhooks.md)
- Configurable email alerts to service owners
- [Historical deployment and task data](Docs/database.md)
- - [Custom executor with extended log features](Docs/details.md#optional-slave-components)
+ - [Custom executor with extended log features](Docs/details.md#user-content-optional-components)
----------
@@ -30,17 +30,17 @@ If you want to give Singularity a try, you can install [docker](https://docs.doc
Run `docker-compose pull` first to get all of the needed images. *Note: This may take a few minutes*
-Then simply run `docker-compose up` to start containers for...
+Then simply run `docker-compose up` and it will start containers for...
- mesos master
- mesos slave (docker/mesos containerizers enabled)
-- zookeeper host
+- zookeeper
- Singularity
- [Baragon Service](https://github.com/HubSpot/Baragon) for load balancer management
- [Baragon Agent](https://github.com/HubSpot/Baragon) + Nginx as a load balancer
...and the following UIs will be available:
-- Singularity UI => http://localhost:7099/singularity
-- Baragon UI => http://localhost:8080/baragon/v2/ui
+- Singularity UI => [http://localhost:7099/singularity](http://localhost:7099/singularity)
+- Baragon UI => [http://localhost:8080/baragon/v2/ui](http://localhost:8080/baragon/v2/ui)
*if using [boot2docker](http://boot2docker.io/) or another vm, replace localhost with the ip of your vm*
@@ -72,7 +72,7 @@ Then simply run `docker-compose up` to start containers for...
- [API](Docs/reference/api.md)
- [Configuration](Docs/reference/configuration.md)
- [Examples](Docs/reference/examples.md)
- - [Custom Executor Components](Docs/details.md#optional-slave-components)
+ - [Custom Executor Components](Docs/details.md#user-content-optional-components)
#### Development ####