Describe the bug
When OpenCRVS is hosted in an environment where it is not directly exposed to the Internet, but a reverse proxy (or load balancer) is used to forward Internet requests to it, end users cannot log in as soon as 3 or 4 of them try to log in within the same minute.
Which feature of OpenCRVS does your bug concern?
Login
Security
To Reproduce
Steps to reproduce the behaviour:
Host OpenCRVS on a server which does not have a public IP, with the hostname crvs.domain
Configure a reverse proxy which has a public IP (e.g. nginx or an AWS load balancer) to forward all HTTP requests to crvs.domain and *.crvs.domain to the OpenCRVS server
Configure the DNS to point crvs.domain and *.crvs.domain to the public IP of the reverse proxy or load balancer
Ensure the app is working for yourself (log in, do one or two tasks)
Ask 3 end users to try to log in, then log out and log back in yourself
See the error (small red message below the login button)
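For steps 2 and 3, a minimal nginx reverse-proxy configuration might look like the sketch below. The upstream hostname is illustrative, not taken from an actual deployment:

```nginx
# Sketch only — crvs.internal stands in for the private address of the OpenCRVS host.
server {
    listen 80;
    server_name crvs.domain *.crvs.domain;

    location / {
        proxy_pass http://crvs.internal:80;
        proxy_set_header Host $host;
        # Forward the real client IP upstream; without this header (or if the
        # upstream ignores it), every user appears to come from the proxy's IP.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```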
Expected behaviour
All 4 users, including yourself, should be able to log in.
Actual behaviour
Most likely, at least one of the 4 users won't be able to log in and will see a small red popup below the login button instead.
Opening the DevTools, the /authenticate HTTP request will have a status code of 402 with the message: "To many requests within a minute. Please throttle your requests."
Screenshots
OpenCRVS Core Version:
v1.6 (Git branch: release-v1.6.0)
Country Configuration Version:
v1.1.0 (Git branch: release-v1.1.0)
Desktop (please complete the following information):
OS: MacOS
Browser: Chrome
Version: 130.0.6723.70
Smartphone (please complete the following information):
N/A
Additional context
Madagascar did its first training of trainers, with 60 people in the same room trying to use the software at the same time.
We hadn't spotted this issue before.
Possible fixes
After investigation, there is a workaround for staging environments: define the DISABLE_RATE_LIMIT environment variable in the staging environment so that the rate limiter is completely switched off.
For production, we cannot do this because there needs to be a rate limiter on critical endpoints like /authenticate .
What could be done is to:
create a utility function to get the current user's IP, implemented as explained below
use that utility function everywhere it is needed instead of the built-in one (e.g. rate limiting, logging, black/whitelisting, ...)
The trick here is that most correctly configured reverse proxies and load balancers pass the real user's IP to the upstream server in an HTTP header. By convention, this header is called X-Forwarded-For .
But to make sure an attacker can't simply add this header to trick our rate-limiting system, we also ensure that the source IP of the request is local (i.e. the request came from our own proxy).
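The proposed utility could be sketched as follows. This is a minimal illustration of the idea, not the actual OpenCRVS implementation; the function name and the private-range list are assumptions:

```typescript
import type { IncomingMessage } from 'http'

// Private/loopback ranges we treat as "our own proxy" (RFC 1918 + loopback).
const PRIVATE_RANGES = [
  /^127\./,                       // IPv4 loopback
  /^10\./,                        // 10.0.0.0/8
  /^192\.168\./,                  // 192.168.0.0/16
  /^172\.(1[6-9]|2\d|3[01])\./,   // 172.16.0.0/12
  /^::1$/                         // IPv6 loopback
]

function isPrivate(ip: string): boolean {
  // Node may report IPv4-mapped IPv6 addresses like "::ffff:10.0.0.5".
  const normalized = ip.replace(/^::ffff:/, '')
  return PRIVATE_RANGES.some((re) => re.test(normalized))
}

export function getClientIp(req: IncomingMessage): string {
  const socketIp = req.socket.remoteAddress ?? ''
  const forwarded = req.headers['x-forwarded-for']

  // Only trust X-Forwarded-For when the direct peer is a local/private
  // address (our own proxy). Otherwise an attacker connecting directly
  // could spoof the header to evade rate limiting.
  if (forwarded && isPrivate(socketIp)) {
    const value = Array.isArray(forwarded) ? forwarded[0] : forwarded
    // The header may carry a chain "client, proxy1, proxy2";
    // the left-most entry is the original client.
    return value.split(',')[0].trim()
  }
  return socketIp
}
```

The same function could then be used as the bucketing key for rate limiting, in access logs, and for black/whitelisting.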
Reproducible demo
It's a bit complicated for me to create a reproducible demo because I'm not very fluent with Docker.
But I'm sure a Docker Compose enthusiast could easily add a new nginx service reverse-proxying requests to the current Traefik service, and manage to reproduce the issue locally.
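The demo the author describes might be sketched as an extra Compose service in front of the stack. Service names and the config path here are assumptions, not taken from the actual OpenCRVS compose files:

```yaml
# Sketch: an nginx container reverse-proxying to the existing Traefik service,
# so requests reach OpenCRVS with the proxy's IP as the TCP source address.
services:
  demo-proxy:
    image: nginx:alpine
    ports:
      - '8080:80'
    volumes:
      - ./demo-proxy.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - traefik   # the existing Traefik reverse-proxy service
```

Logging in through port 8080 with a few concurrent users should then trigger the shared rate-limit bucket.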
I will try to submit a PR for this because we'll need it before going to production in a few days 😉
We don't use IP addresses with rate-limiting, but we used a wrong key for bucketing the rate-limiting requests. For now, like you described, you can disable rate limiting with the environment variable DISABLE_RATE_LIMIT.
We will QA this with @SyedaAfrida and amend it to v1.6
Wonderful, thank you for your lightning fast support on this guys !
It took you the same time to spot and fix the issue as it took me to create the bug report 😂
Try logging in 10 times quickly with an incorrect password. Then try to login with a correct password - it should hit the rate limit and not let you log in. As the rate limit is being hit, try logging in with the correct details with another user. We must be able to login with other users, even though another user has hit the rate limit.
The rate limit expires in 60 seconds so the actions need to be fairly swift.