
Command Line Options

tazija edited this page Jan 17, 2013 · 1 revision

$ bin/nuodb-migration

[--help] Prints command line options
[--help=[command]] Prints help for the specified migration command
[--list] Lists available migration commands
[dump] | [load] | [schema] Executes the specified migration command (dump, load or schema) with its arguments
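
For example, the top-level options can be combined as follows (a usage sketch; run from the tool's installation directory):

```shell
# List the available migration commands
bin/nuodb-migration --list

# Print help for the dump command specifically
bin/nuodb-migration --help=dump
```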

$ bin/nuodb-migration dump

Source database connection, required
    --source.driver=driver JDBC driver class name
    --source.url=url Source database connection URL in the standard syntax jdbc:<subprotocol>:<subname>
    [--source.username=[username]] Source database username
    [--source.password=[password]] Source database password
    [--source.properties=[properties]] Additional connection properties encoded as URL query string "property1=value1&property2=value2"
    [--source.catalog=[catalog]] Default database catalog name to use
    [--source.schema=[schema]] Default database schema name to use
Output specification, required
    --output.type=output type Output format type name (csv, xml or bson)
    --output.path=[output path] Path on the file system to the output .cat file
    [--output.*=[attribute value]] Output format attributes
CSV output format attributes (com.nuodb.migration.result.format.csv.CsvAttributes):
  • --output.csv.delimiter=,
  • --output.csv.quoting=false
  • --output.csv.quote="
  • --output.csv.escape=|
  • --output.csv.line.separator=\r\n
XML output format attributes (com.nuodb.migration.result.format.xml.XmlAttributes):
  • --output.xml.encoding=UTF-8
Table names, types & query filters, optional
    [--table=table] Table name
    [--table.type=[table type]] Comma-separated list of table types (TABLE, VIEW, SYSTEM TABLE, GLOBAL TEMPORARY, ALIAS, SYNONYM, etc.) to process; by default only TABLE is included in the dump
    [--table.*.filter=[query filter]] Filters table records; the filter is appended to the generated query statement in the WHERE clause
Select statements, optional
    [--query=query] Select statement
[--time.zone=time zone] Time zone option enables date columns to be dumped and reloaded between servers in different time zones, e.g. --time.zone=UTC
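
Putting these together, a dump invocation might look like the following. This is a sketch only: the MySQL driver class, connection URL, credentials and table name are illustrative assumptions, not part of the reference above.

```shell
# Hypothetical example: dump the employees table from a MySQL source to CSV
bin/nuodb-migration dump \
    --source.driver=com.mysql.jdbc.Driver \
    --source.url=jdbc:mysql://localhost:3306/test \
    --source.username=root \
    --output.type=csv \
    --output.path=/tmp/dump.cat \
    --table=employees \
    --time.zone=UTC
```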

$ bin/nuodb-migration load

Target database connection, required
    --target.url=url Target database connection URL in format jdbc:com.nuodb://{BROKER}:{PORT}/{DATABASE}
    [--target.username=[username]] Target database username
    [--target.password=[password]] Target database password
    [--target.properties=[properties]] Additional connection properties encoded as URL query string "property1=value1&property2=value2"
    [--target.schema=[schema]] Default database schema name to use
Input specification, required
    --input.path=[input path] Path on the file system to the .cat file
    [--input.*=[attribute value]] Input format attributes; same attributes as described under the Output specification of bin/nuodb-migration dump
[--time.zone=time zone] Time zone option enables date columns to be dumped and reloaded between servers in different time zones, e.g. --time.zone=UTC
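
A matching load invocation might look like this (a sketch; the broker host, port, database name and schema are illustrative assumptions):

```shell
# Hypothetical example: load a previously dumped .cat file into NuoDB
bin/nuodb-migration load \
    --target.url=jdbc:com.nuodb://localhost:48004/test \
    --target.username=dba \
    --target.schema=hockey \
    --input.path=/tmp/dump.cat \
    --time.zone=UTC
```

The --time.zone value should match the one used for the dump so that date columns survive the round trip unchanged.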

$ bin/nuodb-migration schema

Source database connection, required
    --source.driver=driver JDBC driver class name
    --source.url=url Source database connection URL in the standard syntax jdbc:<subprotocol>:<subname>
    [--source.username=[username]] Source database username
    [--source.password=[password]] Source database password
    [--source.properties=[properties]] Additional connection properties encoded as URL query string "property1=value1&property2=value2"
    [--source.catalog=[catalog]] Default database catalog name to use
    [--source.schema=[schema]] Default database schema name to use
Target database connection, optional
    --target.url=url Target database connection URL in format jdbc:com.nuodb://{BROKER}:{PORT}/{DATABASE}
    [--target.username=[username]] Target database username
    [--target.password=[password]] Target database password
    [--target.properties=[properties]] Additional connection properties encoded as URL query string "property1=value1&property2=value2"
    [--target.schema=[schema]] Default database schema name to use
Script output, optional
    --output.path=output path Path on the file system to the generated schema file, e.g. /tmp/schema.sql
Custom type declarations, optional
    [--type.name=type name] SQL type name template, e.g. decimal({p},{s}) or varchar({n}), where {p} is a placeholder for the precision, {s} for the scale and {n} for the maximum size
    [--type.code=type code] Integer code of declared SQL type
    [--type.size=[type size]] Maximum size of custom data type
    [--type.precision=[type precision]] The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. Typically, type precision is in the range of 1 through the maximum precision of 38
    [--type.scale=[type scale]] The number of fractional digits for numeric data types
[--meta.data.*=[true | false]] Enables or disables generation of a specific metadata type (catalog, schema, table, column, primary.key, index, foreign.key, check.constraint, auto.increment); by default all objects are generated
[--group.scripts.by=[table | meta.data]] Groups generated DDL scripts by table (default) or by metadata type
[--identifier.normalizer=[noop | standard | lower.case | upper.case]] Identifier normalizer to use, default is noop
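
A schema invocation using a custom type declaration might look like the following. This is a sketch: the source driver, URL and credentials are illustrative, and the type options shown map every source varchar to varchar({n}) using 12, the java.sql.Types code for VARCHAR.

```shell
# Hypothetical example: generate a NuoDB schema script from a MySQL source,
# overriding how VARCHAR columns (java.sql.Types code 12) are declared
bin/nuodb-migration schema \
    --source.driver=com.mysql.jdbc.Driver \
    --source.url=jdbc:mysql://localhost:3306/test \
    --source.username=root \
    --output.path=/tmp/schema.sql \
    --type.name="varchar({n})" \
    --type.code=12 \
    --type.size=65535 \
    --group.scripts.by=table
```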