
Update PG Snapshot restore instructions #1799

Merged 5 commits on Aug 22, 2024
apps/trouble-host-unavailable.html.markerb (2 changes: 1 addition & 1 deletion)
@@ -76,7 +76,7 @@ Volumes are pinned to physical hosts, so when there's a host outage the volume i

If you're running a high availability Postgres cluster with multiple nodes, a host issue impacting one of your nodes shouldn't cause a problem; by default, we run each node on a separate host. If the host your primary node is on goes down, the cluster will fail over to a healthy node.

- However, if your database is running on a single Machine, and you don’t have any replicas to fail over to, then you won’t be able to connect during the host outage. Similar to the single volume steps above, you can create a new Postgres app from your most recent volume snapshot using `fly postgres create --snapshot-id <snapshot_id>`. See [Backup, restores, & snapshots](/docs/postgres/managing/backup-and-restore/) for details.
+ However, if your database is running on a single Machine, and you don’t have any replicas to fail over to, then you won’t be able to connect during the host outage. Similar to the single volume steps above, you can create a new Postgres app from your most recent volume snapshot using `fly postgres create --snapshot-id <snapshot_id> --image-ref <image-version>`. See [Backup, restores, & snapshots](/docs/postgres/managing/backup-and-restore/) for details.
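
For illustration, the full recovery sequence would presumably look something like the following; the app name and volume ID are placeholders, and the snapshot ID and image reference come from the output of the first three commands:

```cmd
# Find the affected volume and its most recent snapshot
fly volumes list -a my-db
fly volumes snapshots list vol_xxxxxxxxxxxx

# Check which Postgres image the unreachable cluster was running
fly image show -a my-db

# Create a new Postgres app from that snapshot, pinned to the same image
fly postgres create --snapshot-id <snapshot_id> --image-ref <image-version>
```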

## Prevent downtime when there's a single host issue

postgres/managing/backup-and-restore.html.markerb (21 changes: 18 additions & 3 deletions)
@@ -84,12 +84,27 @@ vs_OPQXXna6kA2Qnhz8 26 MiB 2 days ago

The values under the `ID` column are what you'll use to restore a snapshot.

## Identifying your Postgres image version

Depending on when you created your Postgres cluster, it may be running an older image than the default for newly created clusters. Different Postgres major versions may not be fully compatible, so it's important to use the same version for your restored cluster.

To see your Postgres image and version, run `fly image show`.

```cmd
fly image show -a <postgres-app-name>
```
```output
MACHINE ID      REGISTRY              REPOSITORY      TAG   VERSION  DIGEST                         LABELS
e286004f696700  registry-1.docker.io  flyio/postgres  14.6  v0.0.41  sha256:3c25db96357a78e827ca7d  fly.app_role=postgres_cluster fly.pg-version=14.6-1.pgdg110+1 fly.version=v0.0.41
```
The values under the `REPOSITORY` and `TAG` columns are the image you'll use to restore the snapshot.
Legacy Postgres images use the `flyio/postgres` repository, while newer Postgres Flex images use the `flyio/postgres-flex` repository. In the above example, the Machine is running a legacy `flyio/postgres:14.6` image.
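
Concretely, the `REPOSITORY:TAG` pair from the output above is the value to pass to `--image-ref` when restoring (covered in the next section). Plugging in the legacy image from this example would presumably look like:

```cmd
fly postgres create --snapshot-id <snapshot-id> --image-ref flyio/postgres:14.6
```

A Flex cluster would instead report a `flyio/postgres-flex` repository, and you'd pass that repository and tag instead.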

## Restoring from a snapshot

- To restore a Postgres application from a snapshot, simply specify the `--snapshot-id` argument when running the `create` command as shown below:
+ To restore a Postgres application from a snapshot, specify the `--snapshot-id` and `--image-ref` arguments when running the `create` command as shown below:
Contributor review comment:

> One has to also specify `--stolon` if it's a Stolon-based cluster (14.x and lower, I think). For a few people who tried a restore, even with the correct image, without `--stolon` replication is not set up correctly. Running with `--stolon` fixes things.


```cmd
- fly postgres create --snapshot-id <snapshot-id>
+ fly postgres create --snapshot-id <snapshot-id> --image-ref <image-version>
```
```output
? App Name: my-app-db-restored
@@ -108,7 +123,7 @@ Postgres cluster my-app-db-restored created
Save your credentials in a secure place, you won't be able to see them again!
```

- This provisions and launches a new Fly Postgres database server with the specified snapshot.
+ This provisions and launches a new Fly Postgres database server with the specified snapshot and the same image as your existing database.
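
Per the reviewer comment above, restoring a legacy Stolon-based cluster (Postgres 14.x and earlier) apparently also requires the `--stolon` flag for replication to be set up correctly. A sketch of that variant, reusing the legacy image from the earlier example, would be:

```cmd
fly postgres create --stolon --snapshot-id <snapshot-id> --image-ref flyio/postgres:14.6
```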

<div class="important icon">
<b>Important:</b> The size of the volume provisioned for the new Fly Postgres application must be equal to or greater than the volume from where the snapshot was taken.