
Cache context.jsonld #1316

Status: Open · wants to merge 3 commits into master
Conversation

Laurin-W
Contributor

Add a Cache-Control: public, max-age=21600 header to context.jsonld responses by default, advising browsers to cache the context for 6 hours.
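As a concrete illustration of the header proposed here, the value could be built like this (a minimal sketch in plain Node, not the actual PR code; the function name is hypothetical):

```javascript
// Hedged sketch: compute the Cache-Control value for a given freshness window.
// "public" lets shared caches store the response; max-age is the freshness
// lifetime in seconds, after which clients must revalidate or refetch.
function cacheControlForHours(hours) {
  const maxAge = hours * 3600;
  return `public, max-age=${maxAge}`;
}

console.log(cacheControlForHours(6)); // prints "public, max-age=21600"
```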

@srosset81
Contributor

I'm not such a fan of caching data that may change. If the JSON-LD context is updated with important changes, all the frontends will be broken for 6 hours. If the context were super stable, I could consent to that, but currently it changes frequently.

I see two possible alternatives:

  • Cache the JSON-LD context with Redis, using the regular Moleculer mechanism. This way it is not recalculated every time it is fetched, and we can disable the cache if an ontology is changed.
  • Use ETag headers to allow the browser to cache it. Moleculer has support for that, but I have never tried it.
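The first alternative could be sketched in plain JavaScript as follows (the real implementation would use Moleculer's Redis cacher; the class and method names here are hypothetical):

```javascript
// In-memory sketch of "cache the computed context, invalidate when an
// ontology changes". A field stands in for the Redis cacher.
class ContextCache {
  constructor(build) {
    this.build = build;   // expensive function assembling the JSON-LD context
    this.cached = null;
  }
  get() {
    // Recompute only when the cache is empty.
    if (this.cached === null) this.cached = this.build();
    return this.cached;
  }
  invalidate() {
    // Call whenever an ontology is registered or changed, so the
    // next get() recomputes a fresh context.
    this.cached = null;
  }
}
```

With this pattern, repeated fetches hit the cache, and updating an ontology only costs one recomputation on the next request.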

@Laurin-W
Contributor Author

Laurin-W commented Oct 15, 2024

Hm, I see. Maybe we could set a very short cache time, e.g. a minute or two, or even less?
What initially got me to create this PR was that sometimes the context is fetched three times in a row within a time frame of less than 10 seconds. So even this would improve performance and reduce server load.

We might also improve that on the frontend side and do some caching there...

> we can disable the cache if an ontology is changed.

I like the idea of using Redis here, but it has caused me some significant annoyances while debugging things, so I have some concerns as well.

ETags could be nice as well, but they still result in a delay since an HTTP request is sent anyway, if I understand correctly?

@srosset81
Contributor

srosset81 commented Oct 15, 2024

> What initially got me to create this PR was that sometimes the context is fetched three times in a row within a time frame of less than 10 seconds

I think this is coming from the jsonld library, which is used within the data provider to compact/frame results. By default it uses no cache. We added a cache on the backend through what the library calls a document loader:

https://github.com/assemblee-virtuelle/semapps/blob/master/src/middleware/packages/jsonld/services/parser/index.js#L14-L18
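A frontend equivalent of that backend cache could wrap the document loader with a simple memo. This is a hedged sketch, not the linked SemApps code; the wrapper name is hypothetical, but the `(url) => { document, documentUrl }` shape is the loader contract jsonld.js expects:

```javascript
// Wrap a jsonld.js document loader so each context URL is fetched once.
// The cache stores the pending promise itself, so even concurrent lookups
// of the same URL (e.g. three fetches within 10 seconds) trigger a single
// network request.
function cachedDocumentLoader(baseLoader) {
  const cache = new Map();
  return url => {
    if (!cache.has(url)) cache.set(url, baseLoader(url));
    return cache.get(url);
  };
}
```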

If we added a similar mechanism on the frontend, it would surely reduce the number of calls. But then there would still be the problem of invalidating the cache... 🤔 With 1-2 minutes, there would indeed be less risk.

> I like the idea of using redis here but this has caused me quite some significant annoyances while debugging things. So I have some concerns as well.

In dev mode, the Redis cache is disabled by default, so this helps.

> Etags could be nice as well but result in delay since a HTTP request is sent anyways, if I understand correctly?

Yes, the client still sends a conditional request (with an If-None-Match header) to check the value of the ETag; the server answers 304 Not Modified, without a body, if it still matches.

I think my concern is mainly for developers, because the usage is different than in prod mode. I also want to note that the JSON-LD context is built dynamically: whenever a new ontology is registered, it is added to the JSON-LD context. So if the client fetches the context before all the services have started (and the jsonld service cannot know when all services will have registered their ontologies), it may get an incomplete JSON-LD context. And if it's cached, it will be kept forever, creating weird bugs for developers.

In the end it's a matter of evaluating what is more annoying: some extra unneeded requests, or hard-to-debug problems for developers due to an invalid cache? For me, the second one is 100x worse.
