Magic Metadata
This page is a one-stop guide to creating magic metadata for adaptor functions.
The metadata model is a very lightweight hierarchical model. It still needs a little work.
The basic structure is a tree of Entities which look like this:
Note: the current implementation isn't quite the same; I'm about to update it and bring it all into line.
```ts
type Entity = {
  label?: string; // human-readable label
  type: string; // domain-specific type string (eg OrgUnit, sObject)
  datatype?: string; // the javascript type (shown in monaco)
  desc?: string;
  value?: string; // the value when inserted (TODO: support templating?)
  // Is this a system/admin entity?
  system?: boolean;
  // children can be an array or a named map
  children?: Entity[] | Record<string, Entity>;
};
```
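As a concrete, hand-made example (not taken from a real org), a Salesforce model might look like this. Note that the `name` property used by the lookup queries later on this page isn't in the core type above; I'm assuming it's a domain-specific addition:

```javascript
// A hypothetical Entity tree for a Salesforce adaptor (all names illustrative)
const model = {
  type: 'model',
  children: [
    {
      label: 'Beneficiary',
      type: 'sobject',
      name: 'vera__Beneficiary__c', // assumed domain-specific extension
      system: false,
      // children given as a named map, keyed by field API name
      children: {
        vera__Age__c: { type: 'field', datatype: 'number' },
      },
    },
  ],
};
```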
This is what the process of creating a magic function looks like:
This repo provides a number of utilities and patterns to make it easier to write metadata functions. The role of this function is to model the contents of a backend datasource, specified by config data.
Developers create a `src/meta/metadata.js` file with the main metadata function in it. This receives a state object (with config) and returns a hierarchical model of the data source.
Developers are encouraged to create a `src/meta/helper.js` file which contains helper functions that call up to the actual datasource. These can be automatically mocked out for unit testing.
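A helper might look something like this. This is only a sketch: the function name, endpoint, and configuration keys are all assumptions for illustration, not the actual Salesforce adaptor code:

```javascript
// Sketch of a src/meta/helper.js function (names and endpoint are hypothetical)
// Helpers make the real calls to the backend, so tests can mock them out
const getSObjects = async (state) => {
  // assumption: configuration carries connection details for the datasource
  const { instanceUrl, accessToken } = state.configuration;
  const res = await fetch(`${instanceUrl}/services/data/sobjects`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return res.json();
};
```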
Using the CLI in `tools/metadata`, developers can run a metadata request for an adaptor and config:

```sh
pnpm cli salesforce ../../../packages/salesforce/tmp/config.js
```
The result will be written to `src/meta/data/metadata.json`, where it can be used in unit tests (and is super useful when writing queries later).
Unit tests should create a mock helper. This will a) call the actual helper and save the results to a checked-in cache, and b) return the cached data on subsequent calls. This enables unit testing against pre-loaded data with minimal effort. The mock is totally generic and provided by `tools/metadata`.
Developers should also provide a utility function to warm the cache by running queries against the mock.
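A cache-warming utility could be as simple as the following sketch (the name and shape are hypothetical; it just calls each mocked helper once so the results get cached):

```javascript
// Hypothetical cache-warming utility: run every mocked helper once
// so their results are saved to the checked-in cache
const warmCache = async (helpers, state) => {
  for (const fn of Object.values(helpers)) {
    await fn(state);
  }
};
```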
A sample (but not a whole dump) of the final model should be saved to drive the rest of the process
Each operation with a magic parameter (i.e., one that refers to values in the datasource) should have a query to look up the relevant data. Use tooling to write those queries against test data.
You can do this right now with the online jsonpath playground. Paste in the sample model and write a query to get what you need. Bear in mind the query may need placeholders, which lookup values from other arguments.
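To make the query semantics concrete, here is the jsdoc lookup from the example on this page written out as plain JavaScript against a tiny hand-made sample (a sketch of what the jsonpath filter does, not how the library evaluates it):

```javascript
// A tiny hand-made sample model (shape mirrors the sample data, values illustrative)
const sample = {
  entities: [
    { type: 'sobject', name: 'vera__Beneficiary__c', system: false },
    { type: 'sobject', name: 'User', system: true },
  ],
};

// $.entities[?(@.type=="sobject" && !@.system)].name, written out by hand:
// filter for non-system sObjects, then project their names
const names = sample.entities
  .filter(e => e.type === 'sobject' && !e.system)
  .map(e => e.name);

console.log(names); // → ['vera__Beneficiary__c']
```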
We plan to write our own tool which will fetch sample data based on config and let you query live against it.
Once you've got a working query, add it to the adaptor's jsdoc directly:
```js
/**
 * Upsert an object.
 * @param {String} sObject - API name of the sObject.
 * @paramlookup sObject - $.entities[?(@.type=="sobject" && !@.system)].name
 */
```
Unit tests will parse the jsdoc, extract query strings, and let you write tests against the sample data model.
First, load the query strings from your source file:
```js
import path from 'node:path';
import extractLookups from '@openfn/parse-jsdoc';

let queries;

before(async () => {
  // Parse Adaptor.js and pull out all of its lookup queries
  queries = await extractLookups(path.resolve('src/Adaptor.js'));
});
```
Now write a unit test against a specific query:
```js
it('upsert.sObject: should list non-system sObject names', () => {
  // data is the sample model (src/meta/data/metadata.json); jp is the jsonpath library
  const results = jp.query(data, queries.upsert.sObject);
  expect(results).to.have.lengthOf(1);
  expect(results[0]).to.equal('vera__Beneficiary__c');
});
```