
Autolaunch Cluster when starting Edge-Endpoint #69

Merged
merged 2 commits into main from harry/auto-launch-cluster on Jun 27, 2024

Conversation

honeytung (Member) commented Jun 26, 2024

Modified the Dockerfile to launch the cluster when the bastion launches. This will make the setup process easier, especially when integrating with glhub.

honeytung requested a review from tyler-romero on June 26, 2024 22:34
@@ -51,5 +51,11 @@ RUN arkade version && \
RUN mkdir -p /app/edge-endpoint
COPY . /app/edge-endpoint

# Set environment variables for running cluster setup
ENV INFERENCE_FLAVOR="CPU"
tyler-romero (Member) commented:

We don't always want the inference flavor to be CPU, can we avoid hardcoding this? Is it possible to just set this in Balena's env var menu?

honeytung (Member, Author) replied:

Good point, I guess something like this might be better:

ARG INFERENCE_FLAVOR_ARG="CPU"
ENV INFERENCE_FLAVOR=${EDGE_INFERENCE_FLAVOR:-$INFERENCE_FLAVOR_ARG}

Defaults to CPU when EDGE_INFERENCE_FLAVOR is not set.
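For reference, the `${VAR:-default}` fallback in that snippet follows standard shell parameter expansion. A minimal sketch of the behavior (plain shell, not the actual Dockerfile; variable names mirror the snippet above):

```shell
#!/bin/sh
# Sketch of the ${VAR:-default} fallback used in the proposed ENV line.
INFERENCE_FLAVOR_ARG="CPU"

# When EDGE_INFERENCE_FLAVOR is unset, the default wins.
unset EDGE_INFERENCE_FLAVOR
echo "${EDGE_INFERENCE_FLAVOR:-$INFERENCE_FLAVOR_ARG}"   # prints "CPU"

# When EDGE_INFERENCE_FLAVOR is set, it overrides the default.
EDGE_INFERENCE_FLAVOR="GPU"
echo "${EDGE_INFERENCE_FLAVOR:-$INFERENCE_FLAVOR_ARG}"   # prints "GPU"
```

One caveat worth noting: in a Dockerfile, `ENV X=${Y:-default}` is expanded at build time from build args and build-time env, so a runtime variable set later (e.g. in Balena's env var menu) would typically override the container's environment directly rather than through this expansion.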

CMD ["/bin/sh", "-c", "./edge-endpoint/deploy/bin/cluster_setup.sh && tail -f /dev/null"]
tyler-romero (Member) commented:

I tried this at one point and struggled to get it to work! Glad you figured it out!

honeytung (Member, Author) replied:

I thought it was not working at first, then I found out the inference model just takes a while to spin up. 😅

tyler-romero (Member) left a review:

LGTM after fixing the hardcoded inference flavor

honeytung merged commit 659a352 into main on Jun 27, 2024
5 of 6 checks passed
honeytung deleted the harry/auto-launch-cluster branch June 27, 2024 18:36
2 participants