feat: Specify caching for OFREP in server providers #17
base: main
Conversation
type: boolean
description: set to true if you want the provider to cache the evaluation results
ttl:
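As an illustration, an evaluation response carrying these fields might look like the following (the flag key and values are hypothetical, and the surrounding fields only sketch the OFREP evaluation response shape):

```json
{
  "key": "my_flag",
  "value": true,
  "reason": "TARGETING_MATCH",
  "cacheable": true,
  "ttl": 120
}
```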
The default TTL can be 0, and in that case we will rely only on the polling to evict data from the cache.
Agree, by default the behaviour should be no-cache so every evaluation is done remotely.
I really like the idea and think we should implement server cache control.
I am wondering if we could not just use the `Cache-Control` header with the `max-age` directive here and leave the TTL out of the response body.
This feels more idiomatic to me and could be nice in cases where other HTTP tooling like proxies is in the loop.
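For instance (hypothetical flag key and values), the server could express the TTL as a header instead of a body field:

```http
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=120

{ "key": "my_flag", "value": true }
```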
How will the cache be busted when the configuration changes without SSE, WebSockets, or polling?
@beeme1mr I am not sure I understand what you mean here. If yes, for me those are the use cases where we should set
@lukas-reining I like the idea but I have one doubt. We will do

```mermaid
sequenceDiagram
    participant p as provider
    participant px as proxy
    participant f as flag management system
    Note over p,f: Evaluation for user 1
    p ->> px: POST /ofrep/v1/evaluate/flags/my_flag<br/>(targetingKey: user1)
    px ->> f: POST /ofrep/v1/evaluate/flags/my_flag<br/>(targetingKey: user1)
    f -->> px: response (user1)
    px -->> p: response (user1)
    Note over p,f: Evaluation for user 2
    p ->> px: POST /ofrep/v1/evaluate/flags/my_flag<br/>(targetingKey: user2)
    px -->> px: retrieve the response from the cache
    px -->> p: response (user1)
    Note over p,px: ⚠️ Most of the proxies are not using<br/>the body in their caching policies.
```
Mh, good point, I was missing the targeting key.
I have this doubt, otherwise this seems a good opt-in feature.
I think what @beeme1mr meant is upstream (flag management system) changes, and how to invalidate the OFREP provider cache when such a change happens. I think the
Signed-off-by: Thomas Poignant <[email protected]>
a370b1d to d3d4c41
Signed-off-by: Kavindu Dodanduwa <[email protected]>
@thomaspoignant I think we should get some more approvals and get this merged. Besides, the only change is that we decided to make the configuration endpoint an extension. So we will need to define a server provider guideline, where we could define the default behavior of caching (e.g. enable caching through OFREP provider constructor options).
Hi guys, I have a question. What's essentially the difference between

I understand that

This is not so clear at first glance, and maybe we could improve the wording/explanation of both. Also, I can't see both being enabled at the same time; it'd be great to have a
This PR
Why this feature?
With the current implementation of OFREP on the server side, the provider has to make a remote API call for each evaluation, which can add latency to the application.
This caching mechanism helps to reduce the overhead of calling the API each time.
The `cacheable` field is here for the flag management system to notify the provider that it is OK to cache this flag evaluation. We don't want to cache all evaluations because some of them contain time-based changes; in those cases, we accept calling the API multiple times because the evaluation result can change and the cache would not be reliable. By setting the `cacheable` field, we leave the decision to the flag management system.

Sequence diagram
This sequence diagram explains how it can work from a provider's perspective.
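To make the proposed behavior concrete, here is a minimal sketch of how a server provider might honor the `cacheable` and `ttl` fields. The class and the surrounding result shape are hypothetical, not part of the spec; only `cacheable` and `ttl` mirror the proposal, and the key combines flag key and targeting key to avoid the cross-user problem from the proxy diagram above:

```typescript
// Hypothetical shape of an evaluation result; only `cacheable` and
// `ttl` mirror the fields proposed in this PR.
interface EvaluationResult {
  value: boolean;
  cacheable?: boolean; // set by the flag management system
  ttl?: number;        // seconds; 0 means "rely on polling to evict"
}

interface CacheEntry {
  result: EvaluationResult;
  expiresAt: number; // absolute timestamp; Infinity when ttl is 0
}

class OfrepCache {
  private entries = new Map<string, CacheEntry>();

  // The cache key includes the targeting key so that different users
  // never share an entry (unlike a URL-keyed HTTP proxy).
  private key(flagKey: string, targetingKey: string): string {
    return `${flagKey}|${targetingKey}`;
  }

  store(flagKey: string, targetingKey: string, result: EvaluationResult): void {
    if (!result.cacheable) return; // only cache when the server opts in
    const ttl = result.ttl ?? 0;
    const expiresAt = ttl > 0 ? Date.now() + ttl * 1000 : Infinity;
    this.entries.set(this.key(flagKey, targetingKey), { result, expiresAt });
  }

  get(flagKey: string, targetingKey: string): EvaluationResult | undefined {
    const entry = this.entries.get(this.key(flagKey, targetingKey));
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // TTL elapsed: evict and fall back to a remote evaluation.
      this.entries.delete(this.key(flagKey, targetingKey));
      return undefined;
    }
    return entry.result;
  }

  // Called when polling detects an upstream configuration change.
  clear(): void {
    this.entries.clear();
  }
}
```

A provider would consult `get` before making the remote call, `store` each response, and `clear` the whole cache when polling reports a configuration change, which is how entries with `ttl: 0` eventually get evicted.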