Minor consolidation of Environment Variables #614

Merged 16 commits on Jan 17, 2025
11 changes: 10 additions & 1 deletion .env.example → .env.template
@@ -1,11 +1,15 @@
# .env.example
# .env.template
# This file can be used as a template for the .env file. Copy this file to .env and modify the values as needed.

################
# Lumigator container control
# Set to "TRUE" if the containers need to be up and running after
# a test target failed (e.g. in CI where containers are inspected
# for logs after failed steps)
KEEP_CONTAINERS_UP="FALSE"
################

################
# Lumigator API configuration
# LUMI_API_CORS_ALLOWED_ORIGINS:
# Comma separated list of origins (See: https://developer.mozilla.org/en-US/docs/Glossary/Origin)
@@ -15,6 +19,9 @@ KEEP_CONTAINERS_UP="FALSE"
# To allow CORS requests from anywhere specify "*" as any, or the only value.
# e.g. "*"
LUMI_API_CORS_ALLOWED_ORIGINS=${LUMI_API_CORS_ALLOWED_ORIGINS:-http://localhost,http://localhost:3000}
################

################
# AWS Variables for S3 Object Storage
# Configure these for AWS access, or use defaults for local development with minio.
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-lumigator}
@@ -23,6 +30,8 @@ AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-us-east-2}
# Default is the default api port used by minio
AWS_ENDPOINT_URL=${AWS_ENDPOINT_URL:-http://localhost:9000}
S3_BUCKET=${S3_BUCKET:-lumigator-storage}

#######################
# Ray Cluster Configuration
# These settings are for the local Ray setup. To use an external Ray cluster, you MUST use an external S3-compatible storage
# to ensure the Ray workers can access data from your Lumigator server.
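Most defaults in `.env.template` rely on POSIX parameter expansion. A minimal sketch of how `${VAR:-default}` behaves, using a variable name from the file above:

```shell
#!/bin/sh
# ${VAR:-default} expands to the variable's value when it is set and
# non-empty, and to the literal default otherwise.
unset S3_BUCKET
echo "${S3_BUCKET:-lumigator-storage}"   # unset -> prints lumigator-storage

S3_BUCKET=my-custom-bucket
echo "${S3_BUCKET:-lumigator-storage}"   # set -> prints my-custom-bucket
```

This is why the template can be copied to `.env` unchanged for local development: every variable left untouched falls back to its working local default.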
36 changes: 24 additions & 12 deletions Makefile
Original file line number Diff line number Diff line change
@@ -1,13 +1,11 @@
.PHONY: local-up local-down local-logs clean-docker-buildcache clean-docker-images clean-docker-containers start-lumigator-external-services start-lumigator stop-lumigator test-sdk-unit test-sdk-integration test-sdk-integration-containers test-sdk test-backend-unit test-backend-integration test-backend-integration-containers test-backend test-jobs-evaluation-unit test-jobs-inference-unit test-jobs test-all
.PHONY: local-up local-down local-logs clean-docker-buildcache clean-docker-images clean-docker-containers start-lumigator-external-services start-lumigator stop-lumigator test-sdk-unit test-sdk-integration test-sdk-integration-containers test-sdk test-backend-unit test-backend-integration test-backend-integration-containers test-backend test-jobs-evaluation-unit test-jobs-inference-unit test-jobs test-all check-dot-env

SHELL:=/bin/bash
UNAME:= $(shell uname -o)
PROJECT_ROOT := $(shell git rev-parse --show-toplevel)
CONTAINERS_RUNNING := $(shell docker ps -q --filter "name=lumigator-")

KEEP_CONTAINERS_UP := $(shell grep -E '^KEEP_CONTAINERS_UP=' .env | cut -d'=' -f2 | tr -d '"')

KEEP_CONTAINERS_UP ?= "FALSE"
KEEP_CONTAINERS_UP := $(shell grep -E '^KEEP_CONTAINERS_UP=' .env | cut -d'=' -f2 | tr -d '"' || echo "FALSE")

# used in docker-compose to choose the right Ray image
ARCH := $(shell uname -m)
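The grep pipeline used for `KEEP_CONTAINERS_UP` above can be tried in isolation; `demo.env` here is a hypothetical scratch file, not part of the repository:

```shell
#!/bin/sh
# Extract KEEP_CONTAINERS_UP from a dotenv-style file the same way the
# Makefile does: match the line, take the value after '=', strip quotes.
printf 'KEEP_CONTAINERS_UP="TRUE"\n' > demo.env
val=$(grep -E '^KEEP_CONTAINERS_UP=' demo.env | cut -d'=' -f2 | tr -d '"')
echo "$val"   # prints TRUE
rm -f demo.env
```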
@@ -71,11 +69,11 @@ endef
LOCAL_DOCKERCOMPOSE_FILE:= docker-compose.yaml
DEV_DOCKER_COMPOSE_FILE:= .devcontainer/docker-compose.override.yaml

.env:
@if [ ! -f .env ]; then cp .env.example .env; echo ".env created from .env.example"; fi
check-dot-env:
@if [ ! -f .env ]; then cp .env.template .env; echo ".env created from .env.template"; fi

# Launches Lumigator in 'development' mode (all services running locally, code mounted in)
local-up: .env
local-up: check-dot-env
uv run pre-commit install
RAY_ARCH_SUFFIX=$(RAY_ARCH_SUFFIX) docker compose --profile local -f $(LOCAL_DOCKERCOMPOSE_FILE) -f ${DEV_DOCKER_COMPOSE_FILE} up --watch --build

@@ -86,15 +84,15 @@ local-logs:
docker compose -f $(LOCAL_DOCKERCOMPOSE_FILE) logs

# Launches lumigator in 'user-local' mode (All services running locally, using latest docker container, no code mounted in)
start-lumigator: .env
start-lumigator: check-dot-env
RAY_ARCH_SUFFIX=$(RAY_ARCH_SUFFIX) docker compose --profile local -f $(LOCAL_DOCKERCOMPOSE_FILE) up -d

# Launches lumigator with no code mounted in, and forces build of containers (used in CI for integration tests)
start-lumigator-build: .env
start-lumigator-build: check-dot-env
RAY_ARCH_SUFFIX=$(RAY_ARCH_SUFFIX) docker compose --profile local -f $(LOCAL_DOCKERCOMPOSE_FILE) up -d --build

# Launches lumigator without local dependencies (ray, S3)
start-lumigator-external-services: .env
start-lumigator-external-services: check-dot-env
docker compose -f $(LOCAL_DOCKERCOMPOSE_FILE) up -d

stop-lumigator:
@@ -146,12 +144,26 @@ test-sdk: test-sdk-unit test-sdk-integration-containers
# start them if they are not present or use the currently running ones.
test-backend-unit:
cd lumigator/python/mzai/backend/; \
SQLALCHEMY_DATABASE_URL=sqlite:////tmp/local.db uv run pytest -o python_files="backend/tests/unit/*/test_*.py"
S3_BUCKET=lumigator-storage \
RAY_HEAD_NODE_HOST=localhost \
RAY_DASHBOARD_PORT=8265 \
SQLALCHEMY_DATABASE_URL=sqlite:////tmp/local.db \
uv run pytest -o python_files="backend/tests/unit/*/test_*.py"

test-backend-integration:
cd lumigator/python/mzai/backend/; \
docker container list --all; \
SQLALCHEMY_DATABASE_URL=sqlite:////tmp/local.db RAY_WORKER_GPUS="0.0" RAY_WORKER_GPUS_FRACTION="0.0" INFERENCE_PIP_REQS=../jobs/inference/requirements_cpu.txt INFERENCE_WORK_DIR=../jobs/inference EVALUATOR_PIP_REQS=../jobs/evaluator/requirements.txt EVALUATOR_WORK_DIR=../jobs/evaluator uv run pytest -s -o python_files="backend/tests/integration/*/test_*.py"
S3_BUCKET=lumigator-storage \
RAY_HEAD_NODE_HOST=localhost \
RAY_DASHBOARD_PORT=8265 \
SQLALCHEMY_DATABASE_URL=sqlite:////tmp/local.db \
RAY_WORKER_GPUS="0.0" \
RAY_WORKER_GPUS_FRACTION="0.0" \
INFERENCE_PIP_REQS=../jobs/inference/requirements_cpu.txt \
INFERENCE_WORK_DIR=../jobs/inference \
EVALUATOR_PIP_REQS=../jobs/evaluator/requirements.txt \
EVALUATOR_WORK_DIR=../jobs/evaluator \
uv run pytest -s -o python_files="backend/tests/integration/*/test_*.py"
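The per-line assignments in the recipes above use the standard shell convention that `KEY=value` pairs placed before a command apply only to that command's environment. A small sketch with made-up variable names:

```shell
#!/bin/sh
# Assignments prefixed to a command are visible inside it but do not
# leak into the surrounding shell.
GREETING=hello \
TARGET=world \
sh -c 'echo "$GREETING $TARGET"'   # prints: hello world

echo "${GREETING:-unset}"          # prints: unset
```

Splitting one variable per continuation line, as the new recipes do, keeps the env-heavy pytest invocations readable and diffable.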

test-backend-integration-containers:
ifeq ($(CONTAINERS_RUNNING),)
2 changes: 1 addition & 1 deletion docs/source/user-guides/inference.md
Original file line number Diff line number Diff line change
@@ -7,7 +7,7 @@ text data.
```{note}
You can also use the OpenAI GPT family of models or the Mistral API to run an inference job. To do
so, you need to set the appropriate environment variables: `OPENAI_API_KEY` or `MISTRAL_API_KEY`.
Refer to the `.env.example` file in the repository for more details.
Refer to the `.env.template` file in the repository for more details.
```

## What You'll Need
6 changes: 3 additions & 3 deletions lumigator/python/mzai/backend/backend/settings.py
Original file line number Diff line number Diff line change
@@ -19,15 +19,15 @@ class BackendSettings(BaseSettings):

# AWS
S3_ENDPOINT_URL: str | None = None
S3_BUCKET: str = "lumigator-storage"
S3_BUCKET: str # Default is specified in .env file
S3_URL_EXPIRATION: int = 3600 # Time in seconds for pre-signed url expiration
S3_DATASETS_PREFIX: str = "datasets"
S3_JOB_RESULTS_PREFIX: str = "jobs/results"
S3_JOB_RESULTS_FILENAME: str = "{job_name}/{job_id}/results.json"

# Ray
RAY_HEAD_NODE_HOST: str = "localhost"
RAY_DASHBOARD_PORT: int = 8265
RAY_HEAD_NODE_HOST: str # Default is specified in .env file
RAY_DASHBOARD_PORT: int # Default is specified in .env file
RAY_SERVE_INFERENCE_PORT: int = 8000
# the following vars will be copied, if present, from Ray head to workers
# Secrets should be added directly to ray by setting env vars on the ray head/worker nodes
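Removing the hardcoded defaults makes those fields required at startup. A stdlib-only sketch of that behavior (the real code uses pydantic's `BaseSettings`; the loader below is illustrative, with setting names mirroring `settings.py`):

```python
import os

_MISSING = object()

def load_setting(name: str, default=_MISSING, cast=str):
    """Read a setting from the environment; fail fast when a required
    setting (one declared with no default) is absent."""
    raw = os.environ.get(name)
    if raw is None:
        if default is _MISSING:
            raise RuntimeError(f"required setting {name} is not set")
        return default
    return cast(raw)

# Values normally supplied by the .env file:
os.environ["S3_BUCKET"] = "lumigator-storage"
os.environ["RAY_DASHBOARD_PORT"] = "8265"

S3_BUCKET = load_setting("S3_BUCKET")                              # now required
RAY_DASHBOARD_PORT = load_setting("RAY_DASHBOARD_PORT", cast=int)  # now required
RAY_SERVE_INFERENCE_PORT = load_setting("RAY_SERVE_INFERENCE_PORT", default=8000)

print(S3_BUCKET, RAY_DASHBOARD_PORT, RAY_SERVE_INFERENCE_PORT)
# prints: lumigator-storage 8265 8000
```

The trade-off this PR makes: defaults live in one place (`.env.template`) instead of being duplicated in the settings class, at the cost of the backend failing to start if the `.env` file is missing those entries.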