Rename rdf -> filby
cressie176 committed Jan 7, 2024
1 parent 228a574 commit 0194817
Showing 40 changed files with 361 additions and 358 deletions.
12 changes: 6 additions & 6 deletions .github/workflows/node-js-ci.yml
@@ -12,9 +12,9 @@ jobs:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
- POSTGRES_DB: rdf_test
- POSTGRES_USER: rdf_test
- POSTGRES_PASSWORD: rdf_test
+ POSTGRES_DB: fby_test
+ POSTGRES_USER: fby_test
+ POSTGRES_PASSWORD: fby_test
strategy:
matrix:
node-version: [18.x, 20.x]
@@ -38,9 +38,9 @@ jobs:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
- POSTGRES_DB: rdf_test
- POSTGRES_USER: rdf_test
- POSTGRES_PASSWORD: rdf_test
+ POSTGRES_DB: fby_test
+ POSTGRES_USER: fby_test
+ POSTGRES_PASSWORD: fby_test
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
6 changes: 3 additions & 3 deletions .github/workflows/node-js-publish.yml
@@ -14,9 +14,9 @@ jobs:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
- POSTGRES_DB: rdf_test
- POSTGRES_USER: rdf_test
- POSTGRES_PASSWORD: rdf_test
+ POSTGRES_DB: fby_test
+ POSTGRES_USER: fby_test
+ POSTGRES_PASSWORD: fby_test
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
42 changes: 21 additions & 21 deletions README.md
@@ -1,8 +1,8 @@
# Filby - A framework for managing temporal reference data

- [![Node.js CI](https://github.com/acuminous/reference-data-framework/workflows/Node.js%20CI/badge.svg)](https://github.com/acuminous/reference-data-framework/actions?query=workflow%3A%22Node.js+CI%22)
- [![Code Climate](https://codeclimate.com/github/acuminous/reference-data-framework/badges/gpa.svg)](https://codeclimate.com/github/acuminous/reference-data-framework)
- [![Test Coverage](https://codeclimate.com/github/acuminous/reference-data-framework/badges/coverage.svg)](https://codeclimate.com/github/acuminous/reference-data-framework/coverage)
+ [![Node.js CI](https://github.com/acuminous/filby/workflows/Node.js%20CI/badge.svg)](https://github.com/acuminous/filby/actions?query=workflow%3A%22Node.js+CI%22)
+ [![Code Climate](https://codeclimate.com/github/acuminous/filby/badges/gpa.svg)](https://codeclimate.com/github/acuminous/filby)
+ [![Test Coverage](https://codeclimate.com/github/acuminous/filby/badges/coverage.svg)](https://codeclimate.com/github/acuminous/filby/coverage)
[![Discover zUnit](https://img.shields.io/badge/Discover-zUnit-brightgreen)](https://www.npmjs.com/package/zunit)

*There is no difference between Time and any of the three dimensions of Space except that our consciousness moves along it.*
@@ -28,7 +28,7 @@ Most applications require slow moving reference data, which presents the followi
| Evolution | Both reference data and our understanding of the application domain evolve over time. We will at some point need to make backwards-incompatible changes to our reference data, and will need to do so without breaking client applications. This suggests a versioning and validation mechanism. The issue of temporality compounds the challenge of evolution, since we may need to retrospectively add data to historic records. In some cases this data will not be known. |
| Local Testing | Applications may be tested locally, and therefore any solution should work well on a development laptop. |

- Solving such a complex problem becomes simpler when broken down. This project provides a server side framework for managing slow moving, time dependent reference data. In the following diagram, the mechanisms for defining, loading, accessing and receiving notifications about reference data are provided by this framework. The RESTful API and Webhook must be manually created by the application developer. An [example application](#example-application) is provided to demonstrate how.
+ Solving such a complex problem becomes simpler when broken down. This project provides a server side framework for managing temporal reference data. In the following diagram, the mechanisms for defining, loading, accessing and receiving notifications about reference data are provided by this framework. The RESTful API and Webhook must be manually created by the application developer. An [example application](#example-application) is provided to demonstrate how.

<pre>
Change
@@ -121,7 +121,7 @@ Referring back to the previous list of challenges:
- **Local Testing** is possible through HTTP mocking libraries.

## How it works
- RDF has the following important concepts
+ filby has the following important concepts
<pre>
┌─────────────────┐
│ │
@@ -173,39 +173,39 @@ A change set groups a set of data frames (potentially for different entities) in
### Notifications
Notifications are published whenever a new data frame is created. By subscribing to the notifications that are emitted per projection when the backing data changes, downstream systems can maintain copies of the data with reduced risk of them becoming stale. For example, the client in the above diagram could be another backend system, a caching proxy, a web application, a websocket application, a CI / CD pipeline responsible for building a client side data module, or an ETL process for exporting the reference data to the company data lake.

- Notifications are retried a configurable number of times using an exponential backoff algorithm. It is save for multiple instances of the framework to poll for notifications concurrently.
+ Notifications are retried a configurable number of times using an exponential backoff algorithm. It is safe for multiple instances of the framework to poll for notifications concurrently.

### Hook
A hook is an event the framework emits whenever a data frame used to build a projection is added. Your application can handle these events however it chooses, e.g. by making an HTTP request or publishing a message to an SNS topic. Unlike node events, the handlers can be (and should be) asynchronous. It is advisable not to share hooks between handlers, since if one handler fails but another succeeds, the built-in retry mechanism will re-notify both handlers.
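For illustration, a hook handler registered via `filby.on(...)` (as in the example application later in this diff) might forward the event to a downstream webhook. The URL and payload handling below are assumptions; only the `park_v1_change` event name and the asynchronous handler style come from this commit.

```js
// Illustrative only: forwards hook events to a hypothetical downstream webhook.
// `filby` is assumed to be a constructed instance (see examples/javascript/index.js);
// global fetch assumes Node 18+, the versions used in the CI workflow above.
filby.on('park_v1_change', async (event) => {
  await fetch('https://downstream.example.com/webhooks/park', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
});
```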

## API
- RDF provides a set of lifecycle methods and an API for retrieving change sets and projections, and for executing database queries (although you are free to use your preferred PostgreSQL client too).
+ filby provides a set of lifecycle methods and an API for retrieving change sets and projections, and for executing database queries (although you are free to use your preferred PostgreSQL client too).

- #### rdf.init(config: RdfConfig): Promise&lt;void&gt;
+ #### filby.init(config: RdfConfig): Promise&lt;void&gt;
Connects to the database and runs migrations

- #### rdf.startNotifications(): Promise&lt;void&gt;
+ #### filby.startNotifications(): Promise&lt;void&gt;
Starts polling the database for notifications

- #### rdf.stopNotifications(): Promise&lt;void&gt;
+ #### filby.stopNotifications(): Promise&lt;void&gt;
Stops polling the database for notifications, and waits for any inflight notifications to complete.

- #### rdf.stop(): Promise&lt;void&gt;
+ #### filby.stop(): Promise&lt;void&gt;
Stops polling for notifications then disconnects from the database

- #### rdf.getProjections(): Promise&lt;RdfProjection&gt;[]
+ #### filby.getProjections(): Promise&lt;RdfProjection&gt;[]
Returns the list of projections.

- #### rdf.getProjection(name: string, version: number): Promise&lt;RdfProjection&gt;
+ #### filby.getProjection(name: string, version: number): Promise&lt;RdfProjection&gt;
Returns the specified projection.

- #### rdf.getChangeLog(projection): Promise&lt;RdfChangeSet[]&gt;
+ #### filby.getChangeLog(projection): Promise&lt;RdfChangeSet[]&gt;
Returns the change log (an ordered list of change sets) for the given projection.

- #### rdf.getChangeSet(changeSetId): Promise&lt;RdfChangeSet&gt;
+ #### filby.getChangeSet(changeSetId): Promise&lt;RdfChangeSet&gt;
Returns the specified change set

- #### rdf.withTransaction(callback: (client: PoolClient) => Promise&lt;T&gt;): Promise<T&gt;
+ #### filby.withTransaction(callback: (client: PoolClient) => Promise&lt;T&gt;): Promise<T&gt;
Passes a transactional [node-pg client](https://node-postgres.com/) to the given callback. Use this to query the aggregate entities for your projections, e.g.

```sql
@@ -217,7 +217,7 @@ ORDER BY p.code ASC, pc.occurs ASC;
```

```js
function getParks(changeSetId) {
- return rdf.withTransaction(async (client) => {
+ return filby.withTransaction(async (client) => {
const { rows } = await client.query(query, [changeSetId]);
return rows.map(toPark);
});
}
```
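Taken together, the lifecycle methods above are typically used as in the following sketch, which mirrors examples/javascript/index.js later in this diff. The `filby` package name in the require call is an assumption (the examples require the framework via a relative path), and the config shape follows examples/javascript/config.json.

```js
// Minimal lifecycle sketch based on the API above and the example application in
// this commit. The 'filby' package name is assumed; the examples use require('../..').
const Filby = require('filby');

const filby = new Filby({
  database: { user: 'fby_example', database: 'fby_example', password: 'fby_example' },
  migrations: '../migrations',
  notifications: { interval: '5s' }, // further notification settings are truncated in this diff
});

(async () => {
  await filby.init();               // connect to the database and run migrations
  const projections = await filby.getProjections();
  console.log(projections.map((p) => `${p.name}-v${p.version}`));
  await filby.startNotifications(); // start polling for notifications
  // ... serve requests, handle hooks, etc. ...
  await filby.stopNotifications();  // wait for inflight notifications to complete
  await filby.stop();               // disconnect from the database
})();
```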
@@ -259,7 +259,7 @@ define enums:

# Defining entities performs the following:
#
- # 1. Inserts a row into the 'rdf_entity' table,
+ # 1. Inserts a row into the 'fby_entity' table,
# 2. Creates a table 'park_v1' for holding reference data
# 3. Creates an aggregate function 'park_v1_aggregate' to be used by projections
#
@@ -278,7 +278,7 @@ define entities:
park_code_len: LENGTH(code) >= 2 # Creates PostgreSQL check constraints

# Defining projections and their dependent entities
- # RDF uses the dependencies to work out what projections are affected by reference data updates
+ # filby uses the dependencies to work out what projections are affected by reference data updates
add projections:
- name: park
version: 1
@@ -333,8 +333,8 @@ This project includes proof of concept applications based on a Caravan Park busi
### Installation
```bash
- git clone git@github.com:acuminous/reference-data-framework.git
- cd reference-data-framework
+ git clone git@github.com:acuminous/filby.git
+ cd filby
npm i
```

8 changes: 4 additions & 4 deletions examples/javascript/config.json
@@ -6,11 +6,11 @@
"port": 3000
},
"database": {
- "user": "rdf_example",
- "database": "rdf_example",
- "password": "rdf_example"
+ "user": "fby_example",
+ "database": "fby_example",
+ "password": "fby_example"
},
- "rdf": {
+ "filby": {
"migrations": "../migrations",
"notifications": {
"interval": "5s",
8 changes: 4 additions & 4 deletions examples/javascript/docker-compose.yaml
@@ -3,10 +3,10 @@ version: '3.8'
services:
postgres:
image: postgres:16-alpine
- container_name: rdf_example
+ container_name: fby_example
environment:
- POSTGRES_DB: rdf_example
- POSTGRES_USER: rdf_example
- POSTGRES_PASSWORD: rdf_example
+ POSTGRES_DB: fby_example
+ POSTGRES_USER: fby_example
+ POSTGRES_PASSWORD: fby_example
ports:
- "5432:5432"
24 changes: 12 additions & 12 deletions examples/javascript/index.js
@@ -8,19 +8,19 @@ const swaggerUI = require('@fastify/swagger-ui');

const config = require('./config.json');
const changeLogRoute = require('./routes/changelog-v1');
- const ReferenceDataFramework = require('../..');
+ const Filby = require('../..');

const fastify = Fastify(config.fastify);

- const rdf = new ReferenceDataFramework({ ...config.rdf, ...{ database: config.database } });
+ const filby = new Filby({ ...config.filby, ...{ database: config.database } });

(async () => {

await fastify.register(swagger, {
swagger: {
info: {
title: 'Holiday Park Data Service',
- description: 'A proof of concept reference data application',
+ description: 'A proof of concept Filby application',
version: '1.0.0',
},
schemes: ['http'],
@@ -46,20 +46,20 @@ const rdf = new ReferenceDataFramework({ ...config.rdf, ...{ database: config.da
});

try {
- await rdf.init();
+ await filby.init();

await registerChangelog();
await registerProjections();

await fastify.listen(config.server);

- rdf.on('park_v1_change', (event) => {
+ filby.on('park_v1_change', (event) => {
console.log({ event });
});
- rdf.on('change', (event) => {
+ filby.on('change', (event) => {
console.log({ event });
});
- await rdf.startNotifications();
+ await filby.startNotifications();

registerShutdownHooks();
console.log(`Server is listening on port ${config.server?.port}`);
@@ -72,16 +72,16 @@ const rdf = new ReferenceDataFramework({ ...config.rdf, ...{ database: config.da
})();

async function registerChangelog() {
- fastify.register(changeLogRoute, { prefix: '/api/changelog', rdf });
+ fastify.register(changeLogRoute, { prefix: '/api/changelog', filby });
}

async function registerProjections() {
- const projections = await rdf.getProjections();
+ const projections = await filby.getProjections();
projections.forEach((projection) => {
// eslint-disable-next-line global-require
const route = require(path.resolve(`routes/${projection.name}-v${projection.version}`));
const prefix = `/api/projection/v${projection.version}/${projection.name}`;
- fastify.register(route, { prefix, rdf });
+ fastify.register(route, { prefix, filby });
});
}

@@ -90,9 +90,9 @@ function registerShutdownHooks() {
process.once('SIGTERM', () => process.emit('app_stop'));
process.once('app_stop', async () => {
process.removeAllListeners('app_stop');
- await rdf.stopNotifications();
+ await filby.stopNotifications();
await fastify.close();
- await rdf.stop();
+ await filby.stop();
console.log('Server has stopped');
});
}
4 changes: 2 additions & 2 deletions examples/javascript/package-lock.json


4 changes: 2 additions & 2 deletions examples/javascript/package.json
@@ -1,7 +1,7 @@
{
- "name": "rdf-example",
+ "name": "filby-example",
"version": "1.0.0",
- "description": "An example project using the reference data framework",
+ "description": "An example project using Filby",
"scripts": {
"start": "node index.js",
"docker": "docker-compose up --detach",
6 changes: 3 additions & 3 deletions examples/javascript/routes/changelog-v1.js
@@ -15,13 +15,13 @@ const getChangelogSchema = {
},
};

- module.exports = (fastify, { rdf }, done) => {
+ module.exports = (fastify, { filby }, done) => {

fastify.get('/', { schema: getChangelogSchema }, async (request, reply) => {

const projection = await getProjection(request);

- const changeLog = await rdf.getChangeLog(projection);
+ const changeLog = await filby.getChangeLog(projection);
if (changeLog.length === 0) throw createError(404, `Projection ${projection.name}-v${projection.version} has no change sets`);

const changeSet = changeLog[changeLog.length - 1];
@@ -38,7 +38,7 @@ module.exports = (fastify, { rdf }, done) => {
async function getProjection(request) {
const name = String(request.query.projection);
const version = Number(request.query.version);
- const projection = await rdf.getProjection(name, version);
+ const projection = await filby.getProjection(name, version);
if (!projection) throw createError(404, `Projection not found: ${name}-v${version}`);
return projection;
}
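For orientation, this route is mounted under the /api/changelog prefix (see examples/javascript/index.js above), so it can be exercised roughly as in the sketch below. The host and port come from the example config, 'park' v1 is the projection defined in the README, and the response shape is not shown in this diff.

```js
// Hypothetical client call against the example application's changelog route.
// Assumes the example app is running locally on port 3000 (examples/javascript/config.json)
// and that the 'park' v1 projection from the README has been loaded.
(async () => {
  const res = await fetch('http://localhost:3000/api/changelog?projection=park&version=1');
  if (!res.ok) throw new Error(`Changelog request failed: ${res.status}`);
  console.log(await res.json()); // response shape is not shown in this diff
})();
```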
8 changes: 4 additions & 4 deletions examples/javascript/routes/park-v1.js
@@ -1,6 +1,6 @@
const createError = require('http-errors');

- module.exports = (fastify, { rdf }, done) => {
+ module.exports = (fastify, { filby }, done) => {

const getParksSchema = {
querystring: {
@@ -65,21 +65,21 @@ module.exports = (fastify, { rdf }, done) => {

async function getChangeSet(request) {
const changeSetId = Number(request.query.changeSetId);
- const changeSet = await rdf.getChangeSet(changeSetId);
+ const changeSet = await filby.getChangeSet(changeSetId);
if (!changeSet) throw createError(400, 'Invalid changeSetId');
return changeSet;
}

async function getParks(changeSet) {
- return rdf.withTransaction(async (tx) => {
+ return filby.withTransaction(async (tx) => {
const { rows } = await tx.query('SELECT code, name, calendar_event, calendar_occurs FROM get_park_v1($1)', [changeSet.id]);
const parkDictionary = rows.reduce(toParkDictionary, new Map());
return Array.from(parkDictionary.values());
});
}

async function getPark(changeSet, code) {
- return rdf.withTransaction(async (tx) => {
+ return filby.withTransaction(async (tx) => {
const { rows } = await tx.query('SELECT code, name, calendar_event, calendar_occurs FROM get_park_v1($1) WHERE code = upper($2)', [changeSet.id, code]);
const parkDictionary = rows.reduce(toParkDictionary, new Map());
return parkDictionary.get(code);
8 changes: 4 additions & 4 deletions examples/typescript/config.json
@@ -6,11 +6,11 @@
"port": 3000
},
"database": {
- "user": "rdf_example",
- "database": "rdf_example",
- "password": "rdf_example"
+ "user": "fby_example",
+ "database": "fby_example",
+ "password": "fby_example"
},
- "rdf": {
+ "filby": {
"migrations": "../migrations",
"notifications": {
"interval": "5s",
8 changes: 4 additions & 4 deletions examples/typescript/docker-compose.yaml
@@ -3,10 +3,10 @@ version: '3.8'
services:
postgres:
image: postgres:16-alpine
- container_name: rdf_example
+ container_name: fby_example
environment:
- POSTGRES_DB: rdf_example
- POSTGRES_USER: rdf_example
- POSTGRES_PASSWORD: rdf_example
+ POSTGRES_DB: fby_example
+ POSTGRES_USER: fby_example
+ POSTGRES_PASSWORD: fby_example
ports:
- "5432:5432"
