Support combining the command trigger with other triggers #7

Open
mikkelhegn opened this issue Apr 3, 2024 · 16 comments

Comments

@mikkelhegn
Member

Currently, if I combine the command trigger with an http trigger, Spin exits after having run the command trigger, meaning that my http trigger does not stay up listening. One scenario for this use case would be to run some initialization logic as a command trigger for an API being served using the http trigger.

```toml
spin_manifest_version = 2

[application]
authors = ["Mikkel Mørk Hegnhøj <[email protected]>"]
description = "Just a command trigger"
name = "command-trigger"
version = "0.1.0"

[[trigger.command]]
component = "command"

[component.command]
source = "../command/target/wasm32-wasi/release/command.wasm"
build = ["cargo component build --target wasm32-wasi --release --manifest-path ../command/Cargo.toml"]

[[trigger.http]]
route = "/..."
component = "another"

[component.another]
source = "another/target/another.wasm"

[component.another.build]
command = "npm run build"
workdir = "another"
```
```console
> spin up
Logging component stdio to ".spin/logs/"
Logging component stdio to ".spin/logs/"
Hello, world!

Serving http://127.0.0.1:3000
Available Routes:
  another: http://127.0.0.1:3000 (wildcard)

> _
```
@radu-matei
Member

There are two things that come out of the above for me:

  • the multi-trigger support in Spin needs to account for a trigger exiting
  • what you're actually asking for here is a guarantee that the command trigger runs and finishes before the HTTP components start, i.e., an "init component" based on the command trigger.

Thanks for validating this scenario!

@itowlson

itowlson commented Apr 3, 2024

Requirement 2 (ordered execution) is a significant change to the trigger model. The "standard" use case is to have long-lived triggers running alongside each other: there is no sense of ordering or blocking.

That's not saying we can't or shouldn't do it; it's saying we will need to have a think about how we express ordering behaviour.

@itowlson

itowlson commented Apr 3, 2024

I worry this takes us down the road of needing sophisticated coordination and control for triggers, because the next thing that happens is "oh now I want to run two commands in sequence but this other one as a sidecar" and I'm not sure I love that becoming Spin's problem.

@lann

lann commented Apr 3, 2024

Another option would be, e.g., a `--sleep-forever-on-completion` flag that causes this trigger to...sleep forever on completion. If all its resources were dropped before sleeping, this process would likely have very little overhead.
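A minimal sketch of what such a flag could do, assuming the trigger host can simply park after the command finishes (the `park_event` parameter exists only to make the sketch testable; it is not a real Spin or CLI API):

```python
import threading

def run_command_and_park(command, park_event=None):
    """Run the command component to completion, then keep the process alive."""
    command()                          # the command trigger's work
    # ...resources owned by the command would be dropped here...
    event = park_event or threading.Event()
    event.wait()                       # never set in real use: sleeps forever

# Demo: pre-set the event so the sketch returns instead of sleeping forever.
done = threading.Event()
done.set()
run_command_and_park(lambda: print("Hello, world!"), park_event=done)
```

Since the parked process holds no component resources, sibling triggers (like the http one) would keep serving with negligible overhead from the finished command.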

@mikkelhegn
Member Author

I'm not 100% sure that ordering is needed for the case of "initialization". I'd assume another component - e.g., an http trigger - which needs the initialization to complete still needs to take that into account in some way, and deal with it gracefully. That of course is only true if it's not another command-triggered component "waiting" for the "init", and then retry would suddenly become a thing...

And maybe retry actually is a case, or do we expect the developer to handle that?

@itowlson

itowlson commented Apr 3, 2024

@mikkelhegn If you are talking about an initialisation command, then it would be rude for the HTTP trigger to accept requests that were doomed to fail (no matter how gracefully) before initialisation was complete - or, perhaps worse, while initialisation was half-complete and the thing being initialised was in a transitional state. This, after all, is why init containers rather than just sidecars... grin

@itowlson

itowlson commented Apr 3, 2024

@lann My suspicion (and, to be clear, all of this is suspicion, unquantified suspicion) is that the "take the app down on command exit" / "leave the app running on command exit" decision is part of the app definition, rather than something that the person running the app has to configure each time they run it.

@mikkelhegn
Member Author

> @mikkelhegn If you are talking about an initialisation command, then it would be rude for the HTTP trigger to accept requests that were doomed to fail (no matter how gracefully) before initialisation was complete - or, perhaps worse, while initialisation was half-complete and the thing being initialised was in a transitional state. This, after all, is why init containers rather than just sidecars... grin

How does this work (with init containers) if there are multiple instances of the app being deployed? In a scenario where the initialization has something to do with a DB migration or some other such task, I guess this is going down the path of leader election, etc. That might end up being a slippery slope towards these apps not truly being stateless anymore.

@lann

lann commented Apr 4, 2024

It would be helpful to understand what use cases we're trying to enable here. Some k8s init container scenarios might not be applicable to Spin, and we might be able to give different solutions for others.

@mikkelhegn
Member Author

We could start with a database migration scenario - e.g., https://orm.drizzle.team/learn/tutorials/drizzle-with-turso - or do you think that should be solved externally to a Spin app (e.g., in CD)?

@lann

lann commented Apr 5, 2024

Database migrations don't (typically) need to run before an app starts, but they do need to somehow block new version readiness. In k8s terms this could be a readiness probe that effectively checks "is the database on the latest schema?" which runs in parallel with the migration.

From this command trigger's perspective, as long as the migration command can signal a failure that causes the whole deployment to fail I don't think we would need more advanced lifecycle management.

@lann

lann commented Apr 5, 2024

☝️ Actually, maybe this points at a more general strategy for "init commands"; the command is expected to update some global state to indicate that it has succeeded, which gates a readiness check.
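A rough sketch of that gating strategy, using SQLite purely as a stand-in for whatever shared state the app actually uses (table and function names are illustrative, not any Spin API): the migration command records the schema version, and the readiness check only passes once it matches the expected one.

```python
import sqlite3

EXPECTED_SCHEMA_VERSION = 2

def run_migration(conn):
    """Stand-in for the command component: migrate, then record success."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version VALUES (?)", (EXPECTED_SCHEMA_VERSION,))
    conn.commit()

def is_ready(conn):
    """Readiness probe: 'is the database on the latest schema?'"""
    try:
        row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    except sqlite3.OperationalError:
        return False          # table absent: the migration has not run yet
    return row[0] == EXPECTED_SCHEMA_VERSION

conn = sqlite3.connect(":memory:")
assert not is_ready(conn)     # gated until the migration command succeeds
run_migration(conn)
assert is_ready(conn)         # ready once the schema is current
```

The probe can run in parallel with the migration, and a migration failure simply leaves readiness false, which is the "signal a failure that fails the deployment" behaviour described above.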

@lann

lann commented Apr 5, 2024

Another feature that might be useful here would be for a command trigger to be able to run multiple commands serially, e.g. a migration command followed by a database sync command.
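Purely as a hypothetical manifest shape (this is not existing Spin syntax, and the component names are made up), serial commands might look like:

```toml
# Illustrative only: not current Spin manifest syntax.
[[trigger.command]]
components = ["migrate", "db-sync"]   # run in order; stop on first failure
```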

@ThorstenHans
Contributor

@mikkelhegn and I were talking about this yesterday. That said, I want to use the chance to share my point of view and a potential simple solution for this - or to reduce confusion.

The command trigger allows developers to use Spin to craft things like InitContainers, Jobs and CronJobs in the realm of Kubernetes, which is awesome and definitely required 🥳 .

However, I would suggest renaming the trigger to `once` (or something with a similar meaning). Terminating the host process is the right thing IMO, especially when considering scenarios where users use the techniques mentioned in the previous sentence.

Following this approach, we could use the Orchestrator to do - well - orchestration 😄 and would not have to roll our own.

Although @mikkelhegn wants to achieve the same (seeding a database before starting an API), it leads to another topic, which is the `--sqlite` flag of `spin up`.

I'll file a dedicated issue at fermyon/spin about that and link it here for reference (in essence, I would really welcome a more sophisticated initialization capability for spin up as well).

@ThorstenHans
Contributor

See fermyon/spin#2443

@mikkelhegn
Member Author

Also touches on this: #5
