diff --git a/docs/schema-loader.md b/docs/schema-loader.md index c20dcc4d77..7c981fc910 100644 --- a/docs/schema-loader.md +++ b/docs/schema-loader.md @@ -1,50 +1,66 @@ # ScalarDB Schema Loader -ScalarDB has its own data model and schema, that maps to the implementation specific data model and schema. -Also, it stores internal metadata (e.g., transaction ID, record version, transaction status) for managing transaction logs and statuses when you use the Consensus Commit transaction manager. -It is a little hard for application developers to manage the schema mapping and metadata for transactions, so we offer a tool called ScalarDB Schema Loader for creating schema without requiring much knowledge about those. +ScalarDB has its own data model and schema that maps to the implementation-specific data model and schema. In addition, ScalarDB stores internal metadata, such as transaction IDs, record versions, and transaction statuses, to manage transaction logs and statuses when you use the Consensus Commit transaction manager. -There are two ways to specify general CLI options in Schema Loader: - - Pass a ScalarDB configuration file and database/storage-specific options additionally. - - Pass the options without a ScalarDB configuration (Deprecated). +Since managing the schema mapping and metadata for transactions can be difficult, you can use ScalarDB Schema Loader, a tool that creates schemas without requiring in-depth knowledge about schema mapping or metadata. -Note that this tool supports only basic options to create/delete/repair/alter a table. If you want -to use the advanced features of a database, please alter your tables with a database specific tool after creating them with this tool. +You have two options to specify general CLI options in Schema Loader: -# Usage +- Pass the ScalarDB properties file and database-specific or storage-specific options. +- Pass database-specific or storage-specific options without the ScalarDB properties file.
(Deprecated) -## Install +{% capture notice--info %} +**Note** -The release versions of `schema-loader` can be downloaded from [releases](https://github.com/scalar-labs/scalardb/releases) page of ScalarDB. +This tool supports only basic options to create, delete, repair, or alter a table. If you want to use the advanced features of a database, you must alter your tables with a database-specific tool after creating the tables with this tool. +{% endcapture %} -## Build +
{{ notice--info | markdownify }}
-In case you want to build `schema-loader` from the source: -```console -$ ./gradlew schema-loader:shadowJar -``` -- The built fat jar file is `schema-loader/build/libs/scalardb-schema-loader-.jar` +## Set up Schema Loader -## Docker +Select your preferred method to set up Schema Loader, and follow the instructions. -You can pull the docker image from [Scalar's container registry](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-schema-loader). -```console -docker run --rm -v : [-v :] ghcr.io/scalar-labs/scalardb-schema-loader: -``` -- Note that you can specify the same command arguments even if you use the fat jar or the container. The example commands in the next section are shown with a jar, but you can run the commands with the container in the same way by replacing `java -jar scalardb-schema-loader-.jar` with `docker run --rm -v : [-v :] ghcr.io/scalar-labs/scalardb-schema-loader:`. +
+
+ + +
+ +
+ +You can download the release versions of Schema Loader from the [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases) page. +
+
+ +You can pull the Docker image from the [Scalar container registry](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-schema-loader) by running the following command, replacing the contents in the angle brackets as described: -You can also build the docker image as follows. ```console -$ ./gradlew schema-loader:docker +$ docker run --rm -v : [-v :] ghcr.io/scalar-labs/scalardb-schema-loader: ``` -## Run +{% capture notice--info %} +**Note** + +You can specify the same command arguments even if you use the fat JAR or the container. In the [Available commands](#available-commands) section, the JAR is used, but you can run the commands by using the container in the same way by replacing `java -jar scalardb-schema-loader-.jar` with `docker run --rm -v : [-v :] ghcr.io/scalar-labs/scalardb-schema-loader:`. +{% endcapture %} + +
{{ notice--info | markdownify }}
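For illustration only — the image tag and the mounted file names below are hypothetical stand-ins for your own — a concrete container invocation that mounts a local properties file and schema file might look like this:

```console
$ docker run --rm \
    -v "$PWD/database.properties:/database.properties" \
    -v "$PWD/schema.json:/schema.json" \
    ghcr.io/scalar-labs/scalardb-schema-loader:3.9.0 \
    --config /database.properties -f /schema.json --coordinator
```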
+
+
+ +## Run Schema Loader + +This section explains how to run Schema Loader. ### Available commands -For using a config file: +Select how you would like to configure Schema Loader for your database. The preferred method is to use the properties file since other, database-specific methods are deprecated. + +The following commands are available when using the properties file: + ```console -Usage: java -jar scalardb-schema-loader-.jar [-D] [--coordinator] +Usage: java -jar scalardb-schema-loader-.jar [-D] [--coordinator] [--no-backup] [--no-scaling] -c= [--compaction-strategy=] [-f=] [--replication-factor=] @@ -60,7 +76,7 @@ Create/Delete schemas in the storage defined in the config file --compaction-strategy= The compaction strategy, must be LCS, STCS or TWCS (supported in Cassandra) - --coordinator Create/delete/repair coordinator tables + --coordinator Create/delete/repair Coordinator tables -D, --delete-all Delete tables -f, --schema-file= Path to the schema json file @@ -78,9 +94,57 @@ Create/Delete schemas in the storage defined in the config file --ru= Base resource unit (supported in DynamoDB, Cosmos DB) ``` -For Cosmos DB for NoSQL (Deprecated. Please use the command using a config file instead): +For a sample properties file, see [`database.properties`](https://github.com/scalar-labs/scalardb/blob/master/conf/database.properties). + +{% capture notice--info %} +**Note** + +The following database-specific methods have been deprecated. Please use the [commands for configuring the properties file](#available-commands) instead. + +
+
+ + + + +
+ +
+ +```console +Usage: java -jar scalardb-schema-loader-.jar --cassandra [-D] + [-c=] -f= -h= + [-n=] [-p=] [-P=] + [-R=] [-u=] +Create/Delete Cassandra schemas + -A, --alter Alter tables : it will add new columns and create/delete + secondary index for existing tables. It compares the + provided table schema to the existing schema to decide + which columns need to be added and which indexes need + to be created or deleted + -c, --compaction-strategy= + Cassandra compaction strategy, must be LCS, STCS or TWCS + -D, --delete-all Delete tables + -f, --schema-file= + Path to the schema json file + -h, --host= Cassandra host IP + -n, --network-strategy= + Cassandra network strategy, must be SimpleStrategy or + NetworkTopologyStrategy + -p, --password= + Cassandra password + -P, --port= Cassandra Port + -R, --replication-factor= + Cassandra replication factor + --repair-all Repair tables : it repairs the table metadata of + existing tables + -u, --user= Cassandra user +``` +
+
+ ```console -Usage: java -jar scalardb-schema-loader-.jar --cosmos [-D] +Usage: java -jar scalardb-schema-loader-.jar --cosmos [-D] [--no-scaling] -f= -h= -p= [-r=] Create/Delete Cosmos DB schemas -A, --alter Alter tables : it will add new columns and create/delete @@ -99,10 +163,11 @@ Create/Delete Cosmos DB schemas existing tables and repairs stored procedure attached to each table ``` +
+
-For DynamoDB (Deprecated. Please use the command using a config file instead): ```console -Usage: java -jar scalardb-schema-loader-.jar --dynamo [-D] +Usage: java -jar scalardb-schema-loader-.jar --dynamo [-D] [--no-backup] [--no-scaling] [--endpoint-override=] -f= -p= [-r=] --region= -u= @@ -127,41 +192,11 @@ Create/Delete DynamoDB schemas existing tables -u, --user= AWS access key ID ``` +
+
-For Cassandra (Deprecated. Please use the command using a config file instead): ```console -Usage: java -jar scalardb-schema-loader-.jar --cassandra [-D] - [-c=] -f= -h= - [-n=] [-p=] [-P=] - [-R=] [-u=] -Create/Delete Cassandra schemas - -A, --alter Alter tables : it will add new columns and create/delete - secondary index for existing tables. It compares the - provided table schema to the existing schema to decide - which columns need to be added and which indexes need - to be created or deleted - -c, --compaction-strategy= - Cassandra compaction strategy, must be LCS, STCS or TWCS - -D, --delete-all Delete tables - -f, --schema-file= - Path to the schema json file - -h, --host= Cassandra host IP - -n, --network-strategy= - Cassandra network strategy, must be SimpleStrategy or - NetworkTopologyStrategy - -p, --password= - Cassandra password - -P, --port= Cassandra Port - -R, --replication-factor= - Cassandra replication factor - --repair-all Repair tables : it repairs the table metadata of - existing tables - -u, --user= Cassandra user -``` - -For a JDBC database (Deprecated. Please use the command using a config file instead): -```console -Usage: java -jar scalardb-schema-loader-.jar --jdbc [-D] +Usage: java -jar scalardb-schema-loader-.jar --jdbc [-D] -f= -j= -p= -u= Create/Delete JDBC schemas -A, --alter Alter tables : it will add new columns and create/delete @@ -179,142 +214,234 @@ Create/Delete JDBC schemas existing tables -u, --user= JDBC user ``` +
+
+{% endcapture %} + +
{{ notice--info | markdownify }}
### Create namespaces and tables -For using a config file (Sample config file can be found [here](https://github.com/scalar-labs/scalardb/blob/master/conf/database.properties)): +To create namespaces and tables by using a properties file, run the following command, replacing the contents in the angle brackets as described: + ```console -$ java -jar scalardb-schema-loader-.jar --config -f schema.json [--coordinator] +$ java -jar scalardb-schema-loader-.jar --config -f [--coordinator] ``` - - if `--coordinator` is specified, the coordinator tables will be created. -For using CLI arguments fully for configuration (Deprecated. Please use the command using a config file instead): +If `--coordinator` is specified, a [Coordinator table](api-guide.md#specify-operations-for-the-coordinator-table) will be created. + +{% capture notice--info %} +**Note** + +The following database-specific CLI arguments have been deprecated. Please use the CLI arguments for configuring the properties file instead. + +
+
+ + + + +
+ +
+ ```console -# For Cosmos DB for NoSQL -$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f schema.json [-r BASE_RESOURCE_UNIT] +$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f [-n ] [-R ] ``` - - `` you can use a primary key or a secondary key. - - `-r BASE_RESOURCE_UNIT` is an option. You can specify the RU of each database. The maximum RU in tables in the database will be set. If you don't specify RU of tables, the database RU will be set with this option. By default, it's 400. + +- If `-P ` is not supplied, it defaults to `9042`. +- If `-u ` is not supplied, it defaults to `cassandra`. +- If `-p ` is not supplied, it defaults to `cassandra`. +- `` should be `SimpleStrategy` or `NetworkTopologyStrategy` +
+
```console -# For DynamoDB -$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region -f schema.json [-r BASE_RESOURCE_UNIT] +$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f [-r BASE_RESOURCE_UNIT] ``` - - `` should be a string to specify an AWS region like `ap-northeast-1`. - - `-r` option is almost the same as Cosmos DB for NoSQL option. However, the unit means DynamoDB capacity unit. The read and write capacity units are set the same value. + +- `` you can use a primary key or a secondary key. +- `-r BASE_RESOURCE_UNIT` is an option. You can specify the RU of each database. The maximum RU in tables in the database will be set. If you don't specify RU of tables, the database RU will be set with this option. By default, it's 400. +
+
```console -# For Cassandra -$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f schema.json [-n ] [-R ] +$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region -f [-r BASE_RESOURCE_UNIT] ``` - - If `-P ` is not supplied, it defaults to `9042`. - - If `-u ` is not supplied, it defaults to `cassandra`. - - If `-p ` is not supplied, it defaults to `cassandra`. - - `` should be `SimpleStrategy` or `NetworkTopologyStrategy` + +- `` should be a string to specify an AWS region like `ap-northeast-1`. +- `-r` option is almost the same as Cosmos DB for NoSQL option. However, the unit means DynamoDB capacity unit. The read and write capacity units are set the same value. +
+
```console -# For a JDBC database -$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f schema.json +$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f ``` +
+
+{% endcapture %} + +
{{ notice--info | markdownify }}
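As a sketch with hypothetical file names (`database.properties` and `schema.json` stand in for your own files, and the JAR version is an example), a create invocation that also creates the Coordinator table could look like:

```console
$ java -jar scalardb-schema-loader-3.9.0.jar --config database.properties -f schema.json --coordinator
```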
### Alter tables -This command will add new columns and create/delete secondary index for existing tables. It compares -the provided table schema to the existing schema to decide which columns need to be added and which -indexes need to be created or deleted. +You can use a command to add new columns to and create or delete a secondary index for existing tables. This command compares the provided table schema to the existing schema to decide which columns need to be added and which indexes need to be created or deleted. -For using config file (Sample config file can be found [here](https://github.com/scalar-labs/scalardb/blob/master/conf/database.properties)): +To add new colums to and create or delete a secondary index for existing tables, run the following command, replacing the contents in the angle brackets as described: ```console -$ java -jar scalardb-schema-loader-.jar --config -f schema.json --alter +$ java -jar scalardb-schema-loader-.jar --config -f --alter ``` -For using CLI arguments fully for configuration (Deprecated. Please use the command using a config -file instead): +{% capture notice--info %} +**Note** + +The following database-specific CLI arguments have been deprecated. Please use the CLI arguments for configuring the properties file instead. + +
+
+ + + + +
+ +
```console -# For Cosmos DB for NoSQL -$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f schema.json --alter +$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f --alter ``` +
+
```console -# For DynamoDB -$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region -f schema.json --alter +$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f --alter ``` +
+
```console -# For Cassandra -$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f schema.json --alter +$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region -f --alter ``` +
+
```console -# For a JDBC database -$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f schema.json --alter +$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f --alter ``` +
+
+{% endcapture %} + +
{{ notice--info | markdownify }}
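Because `--alter` works by diffing the schema file against the existing tables, the usual workflow is to edit the same schema file and rerun the command. As an illustrative (hypothetical) example, adding a `c4` column to an existing table and indexing it only requires extending the table definition before running `--alter`:

```json
{
  "sample_db.sample_table": {
    "transaction": true,
    "partition-key": ["c1"],
    "clustering-key": ["c2"],
    "columns": {
      "c1": "INT",
      "c2": "TEXT",
      "c3": "BLOB",
      "c4": "TEXT"
    },
    "secondary-index": ["c4"]
  }
}
```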
### Delete tables -For using config file (Sample config file can be found [here](https://github.com/scalar-labs/scalardb/blob/master/conf/database.properties)): +You can delete tables by using the properties file. To delete tables, run the following command, replacing the contents in the angle brackets as described: + ```console -$ java -jar scalardb-schema-loader-.jar --config -f schema.json [--coordinator] -D +$ java -jar scalardb-schema-loader-.jar --config -f [--coordinator] -D ``` - - if `--coordinator` is specified, the coordinator tables will be deleted. - -For using CLI arguments fully for configuration (Deprecated. Please use the command using a config file instead): + +If `--coordinator` is specified, the Coordinator table will be deleted as well. + +{% capture notice--info %} +**Note** + +The following database-specific CLI arguments have been deprecated. Please use the CLI arguments for configuring the properties file instead. + +
+
+ + + + +
+ +
+ ```console -# For Cosmos DB for NoSQL -$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f schema.json -D +$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f -D ``` +
+
```console -# For DynamoDB -$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region -f schema.json -D +$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f -D ``` +
+
```console -# For Cassandra -$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f schema.json -D +$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region -f -D ``` +
+
```console -# For a JDBC database -$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f schema.json -D +$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f -D ``` +
+
+{% endcapture %} + +
{{ notice--info | markdownify }}
### Repair tables -This command will repair the table metadata of existing tables. When using Cosmos DB for NoSQL, it additionally repairs stored procedure attached to each table. +You can repair the table metadata of existing tables by using the properties file. To repair table metadata of existing tables, run the following command, replacing the contents in the angle brackets as described: -For using config file (Sample config file can be found [here](https://github.com/scalar-labs/scalardb/blob/master/conf/database.properties)): ```console -$ java -jar scalardb-schema-loader-.jar --config -f schema.json [--coordinator] --repair-all +$ java -jar scalardb-schema-loader-.jar --config -f [--coordinator] --repair-all ``` -- if `--coordinator` is specified, the coordinator tables will be repaired as well. -For using CLI arguments fully for configuration (Deprecated. Please use the command using a config file instead): +If `--coordinator` is specified, the Coordinator table will be repaired as well. In addition, if you're using Cosmos DB for NoSQL, running this command will also repair stored procedures attached to each table. + +{% capture notice--info %} +**Note** + +The following database-specific CLI arguments have been deprecated. Please use the CLI arguments for configuring the properties file instead. + +
+
+ + + + +
+ +
+ ```console -# For Cosmos DB for NoSQL -$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f schema.json --repair-all +$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f --repair-all ``` +
+
```console -# For DynamoDB -$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region [--no-backup] -f schema.json --repair-all +$ java -jar scalardb-schema-loader-.jar --cosmos -h -p -f --repair-all ``` +
+
```console -# For Cassandra -$ java -jar scalardb-schema-loader-.jar --cassandra -h [-P ] [-u ] [-p ] -f schema.json --repair-all +$ java -jar scalardb-schema-loader-.jar --dynamo -u -p --region [--no-backup] -f --repair-all ``` +
+
```console -# For a JDBC database -$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f schema.json --repair-all +$ java -jar scalardb-schema-loader-.jar --jdbc -j -u -p -f --repair-all ``` +
+
+{% endcapture %} + +
{{ notice--info | markdownify }}
### Sample schema file -The sample schema is as follows (Sample schema file can be found [here](https://github.com/scalar-labs/scalardb/blob/master/schema-loader/sample/schema_sample.json)): +The following is a sample schema. For a sample schema file, see [`schema_sample.json`](https://github.com/scalar-labs/scalardb/blob/master/schema-loader/sample/schema_sample.json). ```json { @@ -379,14 +506,17 @@ The sample schema is as follows (Sample schema file can be found [here](https:// ``` The schema has table definitions that include `columns`, `partition-key`, `clustering-key`, `secondary-index`, and `transaction` fields. -The `columns` field defines columns of the table and their data types. -The `partition-key` field defines which columns the partition key is composed of, and `clustering-key` defines which columns the clustering key is composed of. -The `secondary-index` field defines which columns are indexed. -The `transaction` field indicates whether the table is for transactions or not. -If you set the `transaction` field to `true` or don't specify the `transaction` field, this tool creates a table with transaction metadata if needed. -If not, it creates a table without any transaction metadata (that is, for a table with [Storage API](storage-abstraction.md)). - -You can also specify database/storage-specific options in the table definition as follows: + +- The `columns` field defines columns of the table and their data types. +- The `partition-key` field defines which columns the partition key is composed of. +- The `clustering-key` field defines which columns the clustering key is composed of. +- The `secondary-index` field defines which columns are indexed. +- The `transaction` field indicates whether the table is for transactions or not. + - If you set the `transaction` field to `true` or don't specify the `transaction` field, this tool creates a table with transaction metadata if needed. 
+ - If you set the `transaction` field to `false`, this tool creates a table without any transaction metadata (that is, for a table with [Storage API](storage-abstraction.md)). + +You can also specify database or storage-specific options in the table definition as follows: + ```json { "sample_db.sample_table3": { @@ -404,30 +534,68 @@ You can also specify database/storage-specific options in the table definition a } ``` -The database/storage-specific options you can specify are as follows: +The database or storage-specific options you can specify are as follows: -For Cassandra: -- `compaction-strategy`, a compaction strategy. It should be `STCS` (SizeTieredCompaction), `LCS` (LeveledCompactionStrategy) or `TWCS` (TimeWindowCompactionStrategy). +
+
+ + + + +
-For DynamoDB and Cosmos DB for NoSQL: -- `ru`, a request unit. Please see [RU](#ru) for the details. +
-## Scaling Performance +The `compaction-strategy` option is the compaction strategy used. This option should be `STCS` (SizeTieredCompaction), `LCS` (LeveledCompactionStrategy), or `TWCS` (TimeWindowCompactionStrategy). +
+
-### RU +The `ru` option stands for Request Units. For details, see [RUs](#rus). +
+
-You can scale the throughput of Cosmos DB for NoSQL and DynamoDB by specifying `--ru` option (which applies to all the tables) or `ru` parameter for each table. The default values are `400` for Cosmos DB for NoSQL and `10` for DynamoDB respectively, which are set without `--ru` option. +The `ru` option stands for Request Units. For details, see [RUs](#rus). +
+
-Note that the schema loader abstracts [Request Unit](https://docs.microsoft.com/azure/cosmos-db/request-units) of Cosmos DB for NoSQL and [Capacity Unit](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.ProvisionedThroughput.Manual) of DynamoDB with `RU`. -So, please set an appropriate value depending on the database implementations. Please also note that the schema loader sets the same value to both Read Capacity Unit and Write Capacity Unit for DynamoDB. +No options are available for JDBC databases. +
+
+ ## Scale for performance when using Cosmos DB for NoSQL or DynamoDB + When using Cosmos DB for NoSQL or DynamoDB, you can scale by using Request Units (RUs) or auto-scaling. + ### RUs + You can scale the throughput of Cosmos DB for NoSQL and DynamoDB by specifying the `--ru` option, which applies to all tables, or by setting the `ru` parameter for each table. + If the `--ru` option is not set, the default values will be `400` for Cosmos DB for NoSQL and `10` for DynamoDB. + {% capture notice--info %} +**Note** + +- Schema Loader abstracts [Request Units](https://docs.microsoft.com/azure/cosmos-db/request-units) for Cosmos DB for NoSQL and [Capacity Units](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.ProvisionedThroughput.Manual) for DynamoDB with `RU`. Therefore, be sure to set an appropriate value depending on the database implementation. +- Be aware that Schema Loader sets the same value to both read capacity unit and write capacity unit for DynamoDB. +{% endcapture %} +
{{ notice--info | markdownify }}
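To make the two scopes concrete: `--ru` on the command line sets a database-wide baseline, while a per-table `ru` value in the schema file overrides it for that table only. A hypothetical per-table override (table and column names illustrative) looks like:

```json
{
  "sample_db.sample_table": {
    "transaction": true,
    "partition-key": ["c1"],
    "columns": {
      "c1": "INT",
      "c2": "TEXT"
    },
    "ru": 5000
  }
}
```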
### Auto-scaling -By default, the schema loader enables auto-scaling of RU for all tables: RU is scaled in or out between 10% and 100% of a specified RU depending on a workload. For example, if you specify `-r 10000`, RU of each table is scaled in or out between 1000 and 10000. Note that auto-scaling of Cosmos DB for NoSQL is enabled only when you set more than or equal to 4000 RU. +By default, Schema Loader enables auto-scaling of RUs for all tables: RUs scale between 10 percent and 100 percent of a specified RU depending on the workload. For example, if you specify `-r 10000`, the RUs of each table auto-scales between `1000` and `10000`. -## Data type mapping between ScalarDB and the other databases +{% capture notice--info %} +**Note** -Here are the supported data types in ScalarDB and their mapping to the data types of other databases. +Auto-scaling for Cosmos DB for NoSQL is enabled only when this option is set to `4000` or more. +{% endcapture %} + +
{{ notice--info | markdownify }}
+ +## Data-type mapping between ScalarDB and other databases + +The following table shows the supported data types in ScalarDB and their mapping to the data types of other databases. | ScalarDB | Cassandra | Cosmos DB for NoSQL | DynamoDB | MySQL | PostgreSQL | Oracle | SQL Server | SQLite | |-----------|-----------|---------------------|----------|----------|------------------|----------------|-----------------|---------| @@ -439,48 +607,47 @@ Here are the supported data types in ScalarDB and their mapping to the data type | TEXT | text | string (JSON) | S | longtext | text | varchar2(4000) | varchar(8000) | text | | BLOB | blob | string (JSON) | B | longblob | bytea | RAW(2000) | varbinary(8000) | blob | -However, the following types in JDBC databases are converted differently when they are used as a primary key or a secondary index key due to the limitations of RDB data types. +However, the following data types in JDBC databases are converted differently when they are used as a primary key or a secondary index key. This is due to the limitations of RDB data types. | ScalarDB | MySQL | PostgreSQL | Oracle | |----------|---------------|-------------------|--------------| | TEXT | VARCHAR(64) | VARCHAR(10485760) | VARCHAR2(64) | | BLOB | VARBINARY(64) | | RAW(64) | -The value range of `BIGINT` in ScalarDB is from -2^53 to 2^53 regardless of the underlying database. +The value range of `BIGINT` in ScalarDB is from -2^53 to 2^53, regardless of the underlying database. -If this data type mapping doesn't match your application, please alter the tables to change the data types after creating them with this tool. +If this data-type mapping doesn't match your application, please alter the tables to change the data types after creating them by using this tool. 
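The -2^53 to 2^53 bound matches the range of integers that an IEEE-754 double can represent exactly (the likely reason for the restriction, though that rationale is an inference here rather than a statement from the ScalarDB documentation). The following self-contained snippet, which is not ScalarDB code, demonstrates the boundary:

```java
public class BigintRange {
    public static void main(String[] args) {
        long max = 1L << 53; // 9007199254740992
        // Every integer up to 2^53 round-trips through double unchanged...
        System.out.println((long) (double) (max - 1) == max - 1); // prints true
        // ...but 2^53 + 1 is not representable as a double and collapses onto 2^53.
        System.out.println((double) max == (double) (max + 1));   // prints true
    }
}
```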
## Internal metadata for Consensus Commit -The Consensus Commit transaction manager manages metadata (e.g., transaction ID, record version, transaction status) stored along with the actual records to handle transactions properly. -Thus, along with any required columns by the application, additional columns for the metadata need to be defined in the schema. -Additionaly, this tool creates a table with the metadata when you use the Consensus Commit transaction manager. +The Consensus Commit transaction manager manages metadata (for example, transaction ID, record version, and transaction status) stored along with the actual records to handle transactions properly. + +Thus, along with any columns that the application requires, additional columns for the metadata need to be defined in the schema. Additionally, this tool creates a table with the metadata if you use the Consensus Commit transaction manager. + +## Use Schema Loader in your application + +You can check the version of Schema Loader from the [Maven Central Repository](https://mvnrepository.com/artifact/com.scalar-labs/scalardb-schema-loader). For example in Gradle, you can add the following dependency to your `build.gradle` file, replacing `` with the version of Schema Loader that you want to use: -## Using Schema Loader in your program -You can check the version of `schema-loader` from [maven central repository](https://mvnrepository.com/artifact/com.scalar-labs/scalardb-schema-loader). -For example in Gradle, you can add the following dependency to your build.gradle. Please replace the `` with the version you want to use. 
```gradle dependencies { - implementation 'com.scalar-labs:scalardb-schema-loader:' + implementation 'com.scalar-labs:scalardb-schema-loader:' } ``` -### Create, alter, repair and delete +### Create, alter, repair, or delete tables -You can create, alter, delete and repair tables that are defined in the schema using SchemaLoader by -simply passing ScalarDB configuration file, schema, and additional options if needed as shown -below. +You can create, alter, delete, or repair tables that are defined in the schema by using Schema Loader. To do this, you can pass a ScalarDB properties file, schema, and additional options, if needed, as shown below: ```java public class SchemaLoaderSample { public static int main(String... args) throws SchemaLoaderException { Path configFilePath = Paths.get("database.properties"); - // "sample_schema.json" and "altered_sample_schema.json" can be found in the "/sample" directory + // "sample_schema.json" and "altered_sample_schema.json" can be found in the "/sample" directory. Path schemaFilePath = Paths.get("sample_schema.json"); Path alteredSchemaFilePath = Paths.get("altered_sample_schema.json"); - boolean createCoordinatorTables = true; // whether to create the coordinator tables or not - boolean deleteCoordinatorTables = true; // whether to delete the coordinator tables or not - boolean repairCoordinatorTables = true; // whether to repair the coordinator tables or not + boolean createCoordinatorTables = true; // whether to create the Coordinator table or not + boolean deleteCoordinatorTables = true; // whether to delete the Coordinator table or not + boolean repairCoordinatorTables = true; // whether to repair the Coordinator table or not Map tableCreationOptions = new HashMap<>(); @@ -499,16 +666,16 @@ public class SchemaLoaderSample { Map tableReparationOptions = new HashMap<>(); indexCreationOptions.put(DynamoAdmin.NO_BACKUP, "true"); - // Create tables + // Create tables. 
SchemaLoader.load(configFilePath, schemaFilePath, tableCreationOptions, createCoordinatorTables); - // Alter tables + // Alter tables. SchemaLoader.alterTables(configFilePath, alteredSchemaFilePath, indexCreationOptions); - // Repair tables + // Repair tables. SchemaLoader.repairTables(configFilePath, schemaFilePath, tableReparationOptions, repairCoordinatorTables); - // Delete tables + // Delete tables. SchemaLoader.unload(configFilePath, schemaFilePath, deleteCoordinatorTables); return 0; @@ -516,33 +683,34 @@ public class SchemaLoaderSample { } ``` -You can also create, delete or repair a schema by passing a serialized schema JSON string (the raw text of a schema file). +You can also create, delete, or repair a schema by passing a serialized-schema JSON string (the raw text of a schema file) as shown below: + ```java -// Create tables +// Create tables. SchemaLoader.load(configFilePath, serializedSchemaJson, tableCreationOptions, createCoordinatorTables); -// Alter tables +// Alter tables. SchemaLoader.alterTables(configFilePath, serializedAlteredSchemaFilePath, indexCreationOptions); -// Repair tables +// Repair tables. SchemaLoader.repairTables(configFilePath, serializedSchemaJson, tableReparationOptions, repairCoordinatorTables); -// Delete tables +// Delete tables. SchemaLoader.unload(configFilePath, serializedSchemaJson, deleteCoordinatorTables); ``` -For ScalarDB configuration, a `Properties` object can be used as well. +When configuring ScalarDB, you can use a `Properties` object as well, as shown below: ```java -// Create tables +// Create tables. SchemaLoader.load(properties, serializedSchemaJson, tableCreationOptions, createCoordinatorTables); -// Alter tables +// Alter tables. SchemaLoader.alterTables(properties, serializedAlteredSchemaFilePath, indexCreationOptions); -// Repair tables +// Repair tables. SchemaLoader.repairTables(properties, serializedSchemaJson, tableReparationOptions, repairCoordinatorTables); -// Delete tables +// Delete tables. 
SchemaLoader.unload(properties, serializedSchemaJson, deleteCoordinatorTables); ```