Scaladex is composed of five repositories:
- scaladex: The main repository; it contains the source files.
- scaladex-credentials: The configuration repository; it is private in the scalacenter organization because it contains some secret tokens.
- scaladex-contrib: Some resource files that the community can contribute to: the `claims.json` and `non-standard.json` files, among others.
- scaladex-small-index and scaladex-index: Some data repositories that can be used to initialize a Scaladex instance from scratch.
For development you only need to clone scaladex:
- scaladex-credentials and scaladex-index are only used in the `staging` and `prod` environments.
- scaladex-contrib and scaladex-small-index are submodules of scaladex.
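After cloning (see the setup steps below), you can check that both submodules are initialized with a standard git command:
$ git submodule status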
You will need the following tools installed:
- git
- Java 1.8
- the sbt build tool
- a Scala code editor:
- VSCode with Metals
- IntelliJ Idea with the Scala plugin
- docker
If you cannot install docker, you can alternatively run PostgreSQL on port 5432 and Elasticsearch on port 9200.
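If you do use docker but prefer to start the two services yourself, here is a minimal sketch; the image tags, credentials and container names are assumptions, check the project's docker configuration for the values the build expects:
$ docker run -d --name scaladex-postgres -p 5432:5432 \
    -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password postgres:13
$ docker run -d --name scaladex-elasticsearch -p 9200:9200 \
    -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.17.0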
So that Scaladex can collect information about Scala projects from Github, you need to configure a GITHUB_TOKEN environment variable.
Go to https://github.com/settings/tokens and generate a new token with the `repo` and `admin:org` scopes.
Add this new token to your environment profile. For instance, on Linux you can add the following line to the `~/.profile` file:
export GITHUB_TOKEN=<your token>
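To verify that the token is visible to your shell and carries the right scopes, you can call the Github API; for classic tokens, the x-oauth-scopes response header lists the granted scopes:
$ source ~/.profile
$ curl -sI -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user | grep -i x-oauth-scopes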
- Clone the repository and initialize the submodules:
$ git clone [email protected]:scalacenter/scaladex.git
$ cd scaladex
$ git submodule update --init
- Start the sbt shell in the terminal, compile and run the tests:
$ sbt
sbt:scaladex> compile
sbt:scaladex> test
- To check your Github token you can run the integration tests of the `infra` module:
sbt:scaladex> infra / IntegrationTest / test
- Import the project in your code editor.
- Before running Scaladex for the first time you need to populate the database:
sbt:scaladex> data/run init
It reads the json files from the `small-index/` folder and populates the database with artifacts and projects.
- Then you can start the server with:
sbt:scaladex> server/run
- Finally you can open the website in your browser at localhost:8080.
The database and elasticsearch indexes are persisted locally so that you do not need to run `data/run init` each time you restart your computer.
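As a quick command-line check that the server is up (assuming the default port above):
$ curl -sI http://localhost:8080 | head -n 1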
The build contains 6 modules (they are all under the `modules/` directory):
- core: It contains the core data classes and service interfaces of Scaladex. It is cross-compiled into JVM bytecode for the server and Javascript for the webclient.
- data: Some useful operations to update the Scaladex data. Only the `init` operation is still used; the other operations are progressively being translated to scheduled jobs in the server module.
- infra: It contains the implementations of all the services, using PostgreSQL for the database, Elasticsearch for the search engine and akka-http for the third-party APIs (Github, Maven Central...).
- server: The Scaladex server, written with Endpoint4s and akka-http.
- template: The HTML templates of the pages that are generated by the server.
- webclient: The script compiled to Javascript and executed by the users' browsers.
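Each module can be compiled and tested on its own from the sbt shell, using the usual per-project task syntax (the module ids are assumed to match the names above):
sbt:scaladex> core/compile
sbt:scaladex> infra/test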
Scaladex receives POM files from Maven Central; it parses them, finds the Github URL of the project and stores the information in the `artifacts` and `projects` tables of the database.
In the Scaladex terminology, an artifact is the POM file of a Scala artifact and a project is the Github repository of a Scala project. An artifact must be associated with a single project, while a project can have many artifacts. For example, every published `cats-core_2.13` POM file is an artifact of the single typelevel/cats project.
Locally, we do not receive any POM files; instead we use the `data/run init` task to populate the database with some initial artifacts and projects.
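To check that the initial data landed, you can query the two tables directly. This is only a sketch: the database name, user and password below are assumptions, adapt them to your local configuration:
$ psql -h localhost -p 5432 -U user -d scaladex \
    -c "SELECT count(*) FROM projects;" \
    -c "SELECT count(*) FROM artifacts;"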
While running, the server periodically executes scheduled jobs to update the information of the projects and to synchronize the search engine. Here are some examples of scheduled jobs:
- github-info: Update information about the projects from Github.
- project-dependencies: Compute the dependencies of a project from the dependencies of its artifacts.
- sync-search: Update the content of the Elasticsearch index.
Check out the Job class for a complete list of the scheduled jobs.
The main and search pages are computed with information coming from Elasticsearch. The project and artifact pages contain data from the SQL database.
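To inspect the search engine side, the standard Elasticsearch cat API lists the indexes and their document counts (assuming Elasticsearch on the default port 9200):
$ curl 'http://localhost:9200/_cat/indices?v'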
Before pushing your changes you must run `bin/scalafmt` and `sbt scalafixAll` to format your code and organize all imports.
There are two deployment environments, a staging and a production one.
The URLs of each environment are:
- staging: https://index-dev.scala-lang.org
- production: https://index.scala-lang.org
To deploy the application to the server (index.scala-lang.org) you will need to have the following ssh access:
- devscaladex@index-dev.scala-lang.org (staging)
- scaladex@index.scala-lang.org (production)
These people have access:
- Deploy the index and the server from your workstation:
sbt deployDevIndex
sbt deployDevServer
- Restart the server:
ssh devscaladex@index-dev.scala-lang.org
./server.sh
tail -n 100 -f server.log
If all goes well, the staging scaladex website should be up and running (see the check after these steps).
- Similarly you can deploy the production index and server:
sbt deployIndex
sbt deployServer
- And restart the server:
ssh scaladex@index.scala-lang.org
./server.sh
tail -n 100 -f server.log
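In both cases you can confirm the deployment from your workstation, assuming the environment URLs listed above:
$ curl -sI https://index-dev.scala-lang.org | head -n 1
$ curl -sI https://index.scala-lang.org | head -n 1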
Requests must be authenticated with Basic HTTP authentication:
- login: `token`
- password: a Github personal access token with the `read:org` scope. You can create one at https://github.com/settings/tokens.
Here are some example publish requests, against a local instance and against production:
curl --data-binary "@test_2.11-1.1.5.pom" \
-XPUT \
--user token:c61e65b80662c064abe923a407b936894b29fb55 \
"http://localhost:8080/publish?created=1478668532&readme=true&info=true&contributors=true&path=/org/example/test_2.11/1.2.3/test_2.11-1.2.3.pom"
curl --data-binary "@noscm_2.11-1.0.0.pom" \
-XPUT \
--user token:c61e65b80662c064abe923a407b936894b29fb55 \
"http://localhost:8080/publish?created=1478668532&readme=true&info=true&contributors=true&path=/org/example/noscm_2.11/1.0.0/noscm_2.11-1.0.0.pom"
curl --data-binary "@test_2.11-1.1.5.pom" \
-XPUT \
--user token:c61e65b80662c064abe923a407b936894b29fb55 \
"https://index.scala-lang.org/publish?created=1478668532&readme=true&info=true&contributors=true&path=/org/example/test_2.11/1.2.3/test_2.11-1.2.3.pom"