Open Routing Architecture #451

Closed · felixguendling opened this issue Feb 24, 2024 · 1 comment
Labels: enhancement (New feature or request)

@felixguendling (Member):

Moving discussion from transitous to here, since it's more a MOTIS issue, not a transitous issue:


Regarding the integration of new routing services: I have been thinking about this for some months now, and this is what I came up with. It would also solve a lot of nitpicks I currently have with MOTIS as it is today (like the really old FlatBuffers version, and the fact that there is no distinction between internal and external APIs, hence no API versioning/stability).

I was also thinking about a more generic API or basically a new architecture for MOTIS:

[architecture diagram]

In contrast to the monolithic architecture of MOTIS, this would be service-based. The orchestrator service could then be written in any programming language (my favorite would be Rust). The other routing services (mainly for first mile + last mile) could be addressed via a load-balancing reverse proxy or directly.

  • To make a developer setup as simple as possible, a docker-compose.yml could be provided that already sets up the orchestrator, nigiri and some street routing services.
  • For small production instances, docker-compose.yml would still be sufficient, while larger setups have the option to use Kubernetes to scale each service separately. This is important for the following reason:

[scaling diagram]

This way, the expensive OSRM part is needed only once (a request spends maybe 10-20% of its CPU time in OSRM and the rest in nigiri, so having 5-10x more nigiri instances, each with less memory, makes sense).

Basically, this architecture would eliminate MOTIS as it is now and would therefore be "MOTIS 2.0".

The communication protocol between the intermodal orchestrator and the street routing services could be standardized. Then:

  • Those services can either implement this protocol directly,
  • you could have a proxy that translates requests (no need to touch any code in the orchestrator),
  • or the orchestrator could provide a RoutingService interface, so that adding a new routing service only requires implementing this interface (see the sketch below).
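
A minimal sketch of what such a RoutingService interface could look like. This is purely illustrative; the type and method names are assumptions, not existing MOTIS code:

```cpp
#include <vector>

// Hypothetical coordinate and result types, for illustration only.
struct latlng {
  double lat_, lng_;
};

struct street_route {
  double duration_s_;
  double distance_m_;
  std::vector<latlng> polyline_;
};

// Hypothetical interface the orchestrator could program against.
// Each backend (OSRM, a remote service behind a proxy, ...) would
// provide one implementation; the orchestrator never needs to know
// backend-specific details.
struct routing_service {
  virtual ~routing_service() = default;

  // One-to-many routing for first/last mile: durations in seconds from
  // one origin to many candidate stops.
  virtual std::vector<double> one_to_many(latlng const& from,
                                          std::vector<latlng> const& to) = 0;

  // Full route geometry for the connection shown to the user.
  virtual street_route route(latlng const& from, latlng const& to) = 0;
};
```

A backend speaking the standardized protocol would then be wrapped by exactly one adapter class implementing this interface, and a converting proxy would require no changes to the orchestrator at all.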

This would bring a lot more scalability, extensibility and flexibility.

Comment by @jbruechert

Making the routing part swappable sounds great!

I would appreciate it if the new architecture were still simple enough to run outside Docker, since I basically cannot use Docker in the environment the current Transitous (development) instance is running in.

As far as I understand the current architecture, all communication between modules already goes through FlatBuffers, which should in principle allow things like splitting modules out into separate processes dynamically via a config option, since the communication between them can already be serialized and deserialized easily.

Whether the different modules are then implemented as shared libraries that can be loaded either into the main process or into a separate runner, or whether the same executable is always launched with different options, is an implementation detail.

You could still use different programming languages fairly easily, as pretty much all of them can call C, and since everything is serialized already, you can easily pass data across this boundary.
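
A minimal sketch of the kind of C boundary described here. The names and signatures are hypothetical and not an existing MOTIS API; they only illustrate passing already-serialized payloads across a C ABI:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical C ABI a module runner could expose. Any language that can
// call C (Rust, Go, Python, ...) could drive it: the payload is an opaque,
// already-serialized request (e.g. a FlatBuffers buffer), and the response
// comes back the same way.
extern "C" {

struct motis_buffer {
  uint8_t const* data;
  size_t size;
};

// Dispatches a serialized request to the named operation and returns a
// serialized response. Ownership and error handling would have to be
// defined by the real protocol; this is illustration only.
motis_buffer motis_call(char const* operation_name, motis_buffer request);

// Releases a buffer previously returned by motis_call.
void motis_free(motis_buffer buffer);

}  // extern "C"
```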

Answer by @felixguendling

Yes, we already have some code for remote operations, e.g. here:

// Registers fn as the handler for each given operation name and returns
// the names that were newly registered (i.e. not already taken).
std::vector<std::string> registry::register_remote_ops(
    std::vector<std::string> const& names, remote_op_fn_t const& fn) {
  std::lock_guard const g{remote_op_mutex_};
  std::vector<std::string> successful_names;
  for (auto const& name : names) {
    if (remote_operations_.emplace(name, fn).second) {
      successful_names.emplace_back(name);
    }
  }
  return successful_names;
}

// Removes the handlers registered under the given names.
void registry::unregister_remote_op(std::vector<std::string> const& names) {
  std::lock_guard const g{remote_op_mutex_};
  for (auto const& name : names) {
    remote_operations_.erase(name);
  }
}

// Looks up the handler whose registered name is a prefix of the requested
// target: upper_bound yields the first entry greater than the target, so
// its predecessor is the candidate prefix to check.
std::optional<remote_op_fn_t> registry::get_remote_op(
    std::string const& prefix) {
  std::lock_guard const g{remote_op_mutex_};
  if (auto const it = remote_operations_.upper_bound(prefix);
      it != begin(remote_operations_) &&
      boost::algorithm::starts_with(prefix, std::next(it, -1)->first)) {
    return std::next(it, -1)->second;
  } else {
    return std::nullopt;
  }
}
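
For illustration, a hypothetical call site of this registry; the operation names and the forward_to_remote handler are invented for this example:

```cpp
// Hypothetical usage of the registry code above.
void example(registry& reg, remote_op_fn_t const& forward_to_remote) {
  // A remote module announces the operation prefixes it serves ...
  reg.register_remote_ops({"/osrm/foot", "/osrm/car"}, forward_to_remote);

  // ... and a concrete target is later routed to the matching handler,
  // because "/osrm/foot/one_to_many" starts with the registered "/osrm/foot".
  if (auto const op = reg.get_remote_op("/osrm/foot/one_to_many")) {
    // (*op)(...) would forward the serialized request to the remote module.
  }
}
```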

That's also one reason why we have our own (stackful) coroutine library (written when C++ didn't have language support for coroutines): https://github.com/motis-project/ctx

So MOTIS with its current architecture is prepared to scale to multiple servers. Add a WebSocket load balancer in between, plus maybe an auto-scaling mechanism from one of the big cloud vendors, and you're almost there.

However, not having to maintain all this would also have some benefits :-)

I guess everything that can be started with Docker/Podman/etc. can also be run standalone. And if the orchestrator were written in Rust, it would even be an option to statically link nigiri.

It's not urgent, so there's no need to decide anything now. Just wanted to share some ideas :-)

@felixguendling mentioned this issue Feb 24, 2024
@felixguendling added the enhancement (New feature or request) label May 10, 2024
@felixguendling (Member, Author):

Not really needed anymore, as a single machine can host the entire planet with affordable RAM.
