ARCHIVE NOTICE
This repository contains cronjobs to manage the Tarkov MySQL database. The cronjobs are essentially forked from the cronjobs in kokarn's tarkov-data-manager.
To get data from the primary Tarkov MySQL server to the API, we run cronjobs that sync data from the database to Cloudflare Workers KV.
These cronjobs run in GitHub actions and their schedules can be found in the section below.
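For orientation, a sync job's write to Workers KV boils down to a PUT against Cloudflare's KV REST endpoint. The sketch below is illustrative only: the account id, namespace id, key name, and payload are placeholders, and it prints the request (dry run) rather than sending it.

```shell
# Placeholders -- substitute your real Cloudflare account and namespace ids.
ACCOUNT_ID="<cloudflare-account-id>"
NAMESPACE_ID="<kv-namespace-id>"
KEY="hideout_data" # hypothetical key name; real jobs choose their own keys

KV_URL="https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/storage/kv/namespaces/${NAMESPACE_ID}/values/${KEY}"

# Dry run: print the curl invocation instead of executing it.
echo curl -X PUT "$KV_URL" \
  -H "Authorization: Bearer $CLOUDFLARE_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"example": "payload"}'
```

The real jobs build the payload from MySQL query results before writing it to KV.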
Below is an example of a cron that can be adapted as needed:
name: <cron-name>
on:
  push:
    branches: [main] # Run the job on commits to main
  schedule:
    - cron: "*/10 * * * *" # Every 10 minutes (every 5 minutes is the quickest GitHub supports)
jobs:
  <cron-name>:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: satackey/action-docker-layer-caching@46d2c640b1d8ef50d185452ad6fb324e6bd1d052 # [email protected]
        continue-on-error: true
      - name: <cron-name>
        run: docker-compose up --build --exit-code-from tarkov-cron
        env:
          TARKOV_CRON: <cron-name> # the name of the script in the ./jobs folder to run
          CLOUDFLARE_TOKEN: ${{ secrets.CLOUDFLARE_TOKEN }}
          PSCALE_USER: ${{ secrets.PSCALE_USER }}
          PSCALE_PASS: ${{ secrets.PSCALE_PASS }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
To test locally, we highly suggest using Docker (and docker-compose), since this is what runs in CI via GitHub Actions.
Setup:
- Install Docker
- Install docker-compose
Run the following commands in a bash terminal to set up your environment variables correctly:
export CLOUDFLARE_TOKEN=<token>
export PSCALE_USER=<planetscale-username>
export PSCALE_PASS=<planetscale-password>
export AWS_ACCESS_KEY_ID=<aws-access-key-id>
export AWS_SECRET_ACCESS_KEY=<aws-secret-access-key>
export WEBHOOK_URL=<discord-webhook-url> # optional
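Before starting a job it can help to confirm the required variables are actually exported; a missing one typically only surfaces as a failure deep inside the job. The helper below is a small sketch (not part of this repo) that checks each name and reports anything unset:

```shell
# Sketch: fail fast if any required environment variable is unset or empty.
# Usage: require_env VAR_NAME [VAR_NAME ...]; returns non-zero if any are missing.
require_env() {
  missing=0
  for v in "$@"; do
    # POSIX-compatible indirect lookup of the variable named in $v.
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing required variable: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

if require_env CLOUDFLARE_TOKEN PSCALE_USER PSCALE_PASS AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; then
  echo "environment looks ready for docker-compose"
fi
```

WEBHOOK_URL is intentionally left out of the check above because it is optional.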
Run:
TARKOV_CRON=update-hideout docker-compose up --build
The syntax of the command above can be explained as follows:
TARKOV_CRON=<cron-command-to-run> docker-compose up --build
where <cron-command-to-run> is the name of a script in the ./jobs folder.
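For intuition, the container's entrypoint conceptually maps TARKOV_CRON onto a file in ./jobs and runs it. The sketch below is an assumption, not the repo's actual entrypoint: the .js extension and the node invocation are guesses, so check the Dockerfile and the ./jobs folder for the real convention.

```shell
# Hypothetical dispatch logic: resolve TARKOV_CRON to a job script and run it.
TARKOV_CRON="${TARKOV_CRON:-update-hideout}" # default shown for illustration
JOB_FILE="./jobs/${TARKOV_CRON}.js"          # assumed naming convention

if [ -f "$JOB_FILE" ]; then
  node "$JOB_FILE"
else
  echo "unknown job: $TARKOV_CRON (no $JOB_FILE)" >&2
fi
```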