Add CLI and mappings docs (previously part of the Aleph docs) (#1557)
The documentation for the ftm CLI and mappings was previously part of the [Aleph docs](https://docs.aleph.occrp.org/developers). We recently restructured the Aleph docs and, as part of that, removed the ftm CLI and mappings docs. I’m moving them to the FollowTheMoney docs as they are helpful resources.

I’ve only removed a few references to Aleph that would’ve been confusing otherwise and fixed a few links, but apart from that the contents are mostly unedited.
tillprochaska authored Oct 22, 2024
1 parent 8cd9bf3 commit 1fc4b2b
Showing 6 changed files with 677 additions and 40 deletions.
266 changes: 228 additions & 38 deletions docs/src/pages/docs/cli.mdx
---
layout: '@layouts/DocsLayout.astro'
title: CLI
---

# CLI

The `ftm` command-line tool can be used to generate, process and export streams of entities in a line-based JSON format. Typical uses would include:

* Generating FollowTheMoney entities by applying an [entity mapping to structured data tables](/docs/mappings) (CSV, SQL).
* Converting an existing stream of FollowTheMoney entities into another format, such as CSV, Excel, Gephi GEXF or Neo4J's Cypher language.
* Converting data in complex formats, such as the Open Contracting Data Standard, into FollowTheMoney entities.

## Installation

To install `ftm`, you need to have Python 3 installed and working on your computer. You may also want to create a virtual environment using virtualenv or pyenv. With that done, type:

```bash
pip install followthemoney
ftm --help
```
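
As a quick smoke test, you can pipe a single, hand-written entity through the tool. The entity below is made up purely for illustration; `ftm validate` re-parses it against the schema model and `ftm pretty` formats it for reading:

```bash
echo '{"id": "example-person", "schema": "Person", "properties": {"name": ["Jane Doe"]}}' \
  | ftm validate \
  | ftm pretty
```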

### Optional: Enhanced transliteration support

One of the jobs of followthemoney is to transliterate text from various alphabets into the Latin script to support the comparison of names. The default tool used for this is prone to fail with certain languages, e.g. Azeri. For that reason, we recommend also installing ICU (International Components for Unicode).

On a Debian-based Linux system, installing ICU is relatively simple:

```bash
apt install libicu-dev
pip install pyicu
```

Installing ICU on macOS is a bit more involved and requires you to have Homebrew installed:

```bash
brew install icu4c
export CFLAGS=-I/usr/local/opt/icu4c/include
export LDFLAGS=-L/usr/local/opt/icu4c/lib
export PATH=$PATH:/usr/local/opt/icu4c/bin
pip install pyicu
```
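
To check that the ICU bindings are actually available, you can import the `icu` module that pyicu provides (a quick sanity check, not part of the installation itself):

```bash
python -c "import icu; print(icu.ICU_VERSION)"
```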

## Executing a data mapping

Probably the most common task for `ftm` is to generate FollowTheMoney entities from some structured data source. This is done using a YAML-formatted mapping file, [described here](/docs/mappings). With such a YAML file in hand, you can generate entities like this:

```bash
curl -o md_companies.yml https://raw.githubusercontent.com/alephdata/aleph/main/mappings/md_companies.yml
ftm map md_companies.yml
```

This will yield a line-based JSON stream of every company in Moldova, their directors and principal shareholders.

<Image
src="/assets/pages/docs/cli/mapping-result.png"
alt="Screenshot of a terminal window. The terminal shows the output of the `ftm map` command to generate the Moldovan company data."
density={2}
/>

You might note, however, that this actually generates multiple entity fragments for each company (i.e. multiple entities with the same ID). This is due to the way the md_companies mapping is written: each query section generates a partial company record. In order to mitigate this, you will need to perform entity aggregation:

```bash
curl -o md_companies.yml https://raw.githubusercontent.com/alephdata/aleph/main/mappings/md_companies.yml
ftm map md_companies.yml | ftm aggregate > moldova.ijson
```
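
Each line of `moldova.ijson` now holds one aggregated entity, so ordinary line-based tools work on the file. For example, assuming `jq` is installed, you can count the entities per schema:

```bash
wc -l moldova.ijson
jq -r '.schema' moldova.ijson | sort | uniq -c
```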

`ftm aggregate` retains the entire dataset in memory, which is not feasible for large databases. In such cases, it's recommended to use an on-disk entity aggregation tool such as `followthemoney-store`.

### Loading data from a local CSV file

Another peculiarity of `ftm map` is that the source data is actually referenced within the YAML mapping file as an absolute URL. While this makes sense for data sourced from a SQL database or a public CSV file, you might sometimes want to map a local CSV file instead. For this, a modified version of `ftm map` is provided, `ftm map-csv`. It ignores the specified source URLs and reads data from standard input:

```bash
cat people_of_interest.csv | ftm map-csv people_of_interest.yml | ftm aggregate
```

## Exporting entities to Excel or CSV

FollowTheMoney data can be exported to tabular formats, such as modern Excel (XLSX) files and comma-separated values (CSV). Since each schema of entities has a different set of properties, it makes sense to turn each schema into a separate table: `People` go into one, `Directorships` into another.

To export to an Excel file, use the `ftm export-excel` command:

```bash
curl -o us_ofac.ijson https://storage.googleapis.com/occrp-data-exports/us_ofac/us_ofac.json
cat us_ofac.ijson | ftm validate | ftm export-excel -o OFAC.xlsx
```

Since writing the binary data of an Excel file to standard output is awkward, it is mandatory to include a file name with the `-o` option.

<Image
src="/assets/pages/docs/cli/export-excel.png"
alt="Screenshot of Microsoft Excel showing the export from the example above. The Excel file has multiple sheets, one for each entity type (e.g. People, Companies, and Ownerships)."
density={2}
/>

<Callout theme="danger">
When exporting to Excel format, it's easy to generate a workbook larger than what Microsoft Excel and similar office programs can actually open. Only export small and mid-size datasets.
</Callout>

When exporting to CSV format using `ftm export-csv`, the exporter will usually generate multiple output files, one for each schema of entities present in the input stream of FollowTheMoney entities. To handle this, it expects to be given a directory name:

```bash
curl -o us_ofac.ijson https://storage.googleapis.com/occrp-data-exports/us_ofac/us_ofac.json
cat us_ofac.ijson | ftm validate | ftm export-csv -o OFAC/
```

In the given directory, you will find files named `Person.csv`, `LegalEntity.csv`, `Vessel.csv`, etc.
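
To get a feel for the output, you can peek at the first few rows of one of the generated files (the column layout depends on the schema):

```bash
head -n 3 OFAC/Person.csv
```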

## Exporting data to a network graph

FollowTheMoney sees every unit of information as an entity with a set of properties. To analyse this information as a network with nodes and edges, we need to decide what logic should rule the transformation of entities into nodes and edges. Different strategies are available:

* Some entity schemata, such as `Directorship`, `Ownership`, `Family` or `Payment`, contain annotations that define how they can be transformed into an edge with a source and target.
* Entities also naturally reference others. For example, an `Email` has an `emitters` property that refers to a `LegalEntity`, the sender. The `emitters` property connects the two entities and can also be turned into an edge.
* Finally, some types of properties (e.g. `email`, `iban`, `names`) can be formed into nodes, with edges formed towards each node that derives from an entity with that property value. For example, an `address` node for "40 Wall Street" would show links to all the companies registered there, or a node representing the name "Frank Smith" would connect all the documents mentioning that name. It rarely makes sense to turn all property types into nodes, so the set of types that need to be [reified](<https://en.wikipedia.org/wiki/Reification_(computer_science)>) can be passed as options into the graph exporter.

### Cypher commands for Neo4J

[Neo4J](https://neo4j.com/) is a popular open source graph database that can be queried and edited [using the Cypher language](https://neo4j.com/docs/cypher-refcard/current/). It can be used as a database backend or queried directly to perform advanced analysis, e.g. to find all paths between two entities.

The example below uses Neo4J's `cypher-shell` command to load the US sanctions list into a local instance of the database:

```bash
curl -o us_ofac.ijson https://storage.googleapis.com/occrp-data-exports/us_ofac/us_ofac.json
cat us_ofac.ijson | ftm export-cypher | cypher-shell -u user -p password
```

<Image
src="/assets/pages/docs/cli/export-cypher.png"
alt="Screenshot of FtM entities imported to a Neo4J instance."
density={2}
/>

By default, this will only create edges based on entity-to-entity relationships. If you want to reify specific property types, use the `-e` option:

```bash
cat us_ofac.ijson | ftm export-cypher -e name -e iban -e entity -e address
```

When working with file-based datasets, you may want to delete folder hierarchies from the imported data in Neo4J to avoid file co-location biasing path and density analyses:

```
# Delete folder hierarchies:
MATCH ()-[r:ANCESTORS]-() DELETE r;
MATCH ()-[r:PARENT]-() DELETE r;
# Delete entities representing individual pages:
MATCH (n:Page) DETACH DELETE n;
# Delete name, email, or address nodes only used once:
MATCH (n:name) WHERE size((n)--()) <= 1 DETACH DELETE (n);
MATCH (n:email) WHERE size((n)--()) <= 1 DETACH DELETE (n);
MATCH (n:address) WHERE size((n)--()) <= 1 DETACH DELETE (n);
# ... for all reified value types ...
```

At any time, you can flush the entire Neo4J database and start from scratch:

```
MATCH (n) DETACH DELETE n;
```

#### Bulk loading data

Another option for loading data into Neo4J is to export a set of entities into CSV files and then use the `neo4j-admin import` command to load them into an empty database. This requires shutting down the Neo4J server and then running a command that writes the new database.

In order to generate data in CSV format suitable for Neo4J import, use the following command:

```bash
cat us_ofac.ijson | ftm export-neo4j-bulk -o folder_name -e iban -e entity -e address
```

This will generate a set of CSV files in a folder, along with a shell script containing the `neo4j-admin import` command that should be used to load the data into a graph store.
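
The rough workflow then looks like this (the name of the generated script may differ, so check the output folder; the Neo4J server must be stopped before a bulk import):

```bash
ls folder_name/            # inspect the generated CSV files and import script
neo4j stop                 # bulk imports require the database to be offline
bash folder_name/*.sh      # run the generated neo4j-admin import command
neo4j start
```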

### GEXF for Gephi/Sigma.js

[GEXF](https://gephi.org/gexf/format/) (Graph Exchange XML Format) is a file format used by the network analysis software [Gephi](https://gephi.org/) and other tools developed in the periphery of the [Media Lab at Sciences Po](http://tools.medialab.sciences-po.fr/). Gephi is particularly suited to quantitative analysis of graphs with tens of thousands of nodes. It can calculate network metrics like centrality or PageRank, or generate complex visual layouts.

The command line works analogously to the Neo4J export, also accepting the `-e` flag for property types that should be turned into nodes:

```bash
curl -o us_ofac.ijson https://storage.googleapis.com/occrp-data-exports/us_ofac/us_ofac.json
cat us_ofac.ijson | ftm validate | ftm export-gexf -e iban -o ofac.gexf
```

<Image
src="/assets/pages/docs/cli/export-gephi.png"
alt="Screenshot of Gephi. A small trove of emails has been visualized as a network. The entity schema type has been used to color nodes, while the size is based on the amount of inbound links (i.e. In-Degree)."
density={2}
/>

## Exporting entities to RDF/Linked Data

Entity streams of FollowTheMoney data can also be exported to linked data in the `NTriples` format.

```bash
curl -o us_ofac.ijson https://storage.googleapis.com/occrp-data-exports/us_ofac/us_ofac.json
cat us_ofac.ijson | ftm validate | ftm export-rdf
```

It is unclear to the author why this functionality exists; it was just really easy to implement. For those developers who enjoy working with RDF, it might be worthwhile to point out that the underlying ontology (FollowTheMoney) is also regularly published in [RDF/XML](https://followthemoney.tech/ns/ftm.xml) format.

By default, the RDF exporter tries to map each entity property to a fully-qualified RDF predicate. Schemas include some mappings to FOAF and similar ontologies.
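
For example, you can write the triples to a file and skim it for predicates from well-known vocabularies such as FOAF (a rough check on the export above, assuming the FOAF namespace is used in the output):

```bash
cat us_ofac.ijson | ftm validate | ftm export-rdf > us_ofac.nt
grep -c 'xmlns.com/foaf' us_ofac.nt
```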

## Importing Open Contracting data

The [Open Contracting Data Standard](https://standard.open-contracting.org/latest/en/) (OCDS) is commonly serialised as a series of JSON objects. `ftm` includes a function to transform a stream of OCDS objects into `Contract` and `ContractAward` entities. This was developed in particular to import data from the DIGIWHIST [OpenTender.eu](https://opentender.eu/all/download) site, so other implementations of OCDS may require extending the importer to accommodate other formats.

Here's how you can convert all Cyprus government procurement data to FollowTheMoney objects:

```bash
curl -o CY_ocds_data.json.tar.gz https://opentender.eu/data/files/CY_ocds_data.json.tar.gz
tar xvfz CY_ocds_data.json.tar.gz
cat CY_ocds_data.json | ftm import-ocds | ftm aggregate >cy_contracts.ijson
```

Depending on how large the OCDS dataset is, you may want to use `followthemoney-store` instead of `ftm aggregate`.

## Aggregating entities using ftm-store

While the method of streaming FollowTheMoney entities is very convenient, there are situations where not all information about an entity is known at the time at which it is generated. For example, think of a [mapping](/docs/mappings) that loads company names from one CSV file, while the corresponding addresses are in a second, separate CSV table. In such cases, it is easier to generate two entities with the same ID and to merge them later.
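
To make this concrete, here is a pair of hand-written fragments sharing the same ID being merged into one entity (the values are made up for illustration):

```bash
printf '%s\n%s\n' \
  '{"id": "acme", "schema": "Company", "properties": {"name": ["ACME, Inc."]}}' \
  '{"id": "acme", "schema": "Company", "properties": {"address": ["40 Wall Street"]}}' \
  | ftm aggregate \
  | ftm pretty
```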

Merging such entity fragments requires sorting all the entities in the given dataset by their ID in order to aggregate their properties. For small datasets, this can be done in application memory using the `ftm aggregate` command.

Once the dataset size approaches the amount of available memory, however, sorting must be performed on disk. This is also true when entity fragments are generated on different nodes in a computing cluster.

For this purpose, `followthemoney-store` is available as a Python library and a command line tool. It can use any SQL database as a backend, with a local SQLite file set as a default. When using PostgreSQL as a database, `followthemoney-store` can use its built-in upsert functionality, making the backend more performant than others.

To use `followthemoney-store` with SQLite, install it like this:

```bash
pip install followthemoney-store
```

For PostgreSQL support, use the following settings:

```bash
pip install followthemoney-store[postgresql]
export FTM_STORE_URI=postgresql://localhost/followthemoney
```

Once installed, you can operate the `followthemoney-store` command in read or write mode:

```bash
curl -o us_ofac.ijson https://storage.googleapis.com/occrp-data-exports/us_ofac/us_ofac.json
cat us_ofac.ijson | ftm store write -d us_ofac
ftm store iterate -d us_ofac | alephclient write-entities -f us_ofac
ftm store delete -d us_ofac
```

<Callout theme="danger">
When aggregating entities with large fragments of text, a size limit applies. By default, no entity is allowed to grow larger than 50MB of raw text. Additional text fragments are discarded with a warning.
</Callout>