
# Laggy DSpace

## Solr memory

DSpace uses Solr for search and caches some data in it. If many users add extremely long text data or create items with extremely long descriptions (or other fields), Solr may run out of memory and crash.

### How to detect

DSpace appears non-functional: it is impossible to find communities, collections, or items. When clicking on Catalogue, the page just keeps loading and no results are visible.
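A quick way to confirm that Solr is the culprit is sketched below. This assumes the container is named `dspacesolr` (as used later in this page) and that Solr listens on its default port 8983; adjust both to your setup.

```sh
# Look for memory-related failures or restarts in the Solr container logs.
docker logs --tail 100 dspacesolr | grep -i -E "outofmemory|oom|killed"

# Check whether Solr responds at all, using the standard core-status admin call.
curl -s 'http://localhost:8983/solr/admin/cores?action=STATUS'
```

If the status call hangs or fails while DSpace itself is up, the symptoms above most likely come from Solr.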

### What to do? (in a Docker environment)

#### Short term

Nothing. Solr will restart itself and work normally again within a few tens of seconds or minutes. It may fail several times in a row on the same entry, but it will eventually resolve the problem.

#### Long term

##### Check memory

Check and, if needed, increase Solr's memory. To check, enter the Solr docker container with `docker exec -it dspacesolr bash` and execute `solr status`. The last line of the output shows the currently used memory and the maximum memory. If the maximum is low, it can be increased.
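A minimal sketch of the check, using the commands above:

```sh
# Open a shell inside the Solr container (container name as used on this page).
docker exec -it dspacesolr bash

# Inside the container, ask Solr for its status; the output ends with a
# memory line showing used heap versus the configured maximum.
solr status
```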

##### Increase memory

In our case, docker-compose-rest.yml must be modified: https://github.com/dataquest-dev/dspace-angular/blob/bf743b702de6b09c292911d14f6771b42b37e5c9/docker/docker-compose-rest.yml#L116. The command `solr -f -m 4g` (where `-m 4g` is the argument specifying 4 GB of memory) can be changed to, for example, `solr -f -m 8g`. Then redeploy DSpace, preferably via the GitHub Action on the front-end repository. A similar procedure applies when running outside of Docker.
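For illustration, a hypothetical excerpt of what the relevant service entry might look like after the change; only the `solr -f -m …` command is taken from this page, the other fields are assumptions and may differ from the actual docker-compose-rest.yml:

```yaml
# Illustrative sketch, not the verbatim file.
dspacesolr:
  image: dspace/dspace-solr
  # -m sets the JVM heap size; raised here from 4g to 8g.
  command: solr -f -m 8g
```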

## Known Errors

### Failing integration tests

- `WorkspaceItemRestRepositoryIT.lookupPubmedMetadataTest` is still failing in vanilla DSpace as well.
- `WorkspaceItemRestRepositoryIT.createPubmedWorkspaceItemFromFileTest` started failing after one of our changes. If only this test is run, it passes, but if all tests are run, it fails. The test fails in `PubmedImportMetadataSourceServiceImpl.splitToRecords`, row 329, at the code: `OMElement element = records.getDocumentElement();`