autory convert results

Usage: autory convert results [OPTIONS]

  Convert the results of Autory runs.

Options:
  -i, --in DIRECTORY              The directory containing the Autory results
                                  to process. Can be specified multiple times.
                                  [required]
  --use-aggregated / --use-iterations
                                  Whether to read the aggregated results or
                                  the per-iteration results from the input
                                  directories.  [default: use-aggregated]
  --projection-node-filter TEXT   A formula to filter the projection nodes to
                                  include. This uses the same syntax as
                                  formulas in the Autory model. E.g.,
                                  `AND(hLevel=1, valnType='summary')`.
                                  [default: TRUE]
  --time-step-filter TEXT         A formula to filter the time steps to
                                  include. This uses the same syntax as
                                  formulas in the Autory model.

                                  Examples:

                                      - `OR(t=1, t=2)`: Include only time
                                      steps 1 and 2.

                                      - `Date > date(2024,1,31)`: Include only
                                      time steps after 2024/01/31.  [default:
                                      TRUE]
  --include-names TEXT            A comma-separated list of names of variables
                                  and properties to include in the output. See
                                  below for how to use this with `--exclude-
                                  names`.
  --exclude-names TEXT            A comma-separated list of names of variables
                                  and properties to exclude from the output.
                                  See below for how to use this with
                                  `--include-names`.
  --output-format [autory_results_json|sqlite|csv]
                                  The format to which to convert the results.
                                  If not specified, this is guessed from the
                                  file extension of the output path. If the
                                  output path is a folder, it's not possible
                                  to guess the desired output format.

                                  - autory_results_json: Convert to 'Autory
                                  results JSON' format. This is similar to
                                  what is normally produced by the VBA engine.

                                  - sqlite: Convert to a SQLite database.

                                  - csv: Convert to CSV files.
  -o, --out PATH                  The output path to write to. This may be a
                                  folder or a file, depending on the output
                                  format and other options:

                                  - `autory_results_json`, `csv`: When
                                  splitting the results, this must be a
                                  folder. Otherwise, it may be a file or a
                                  folder.

                                  - `sqlite`: This must be a file.  [required]
  --encode-error-values [null|str|serialize]
                                  How to encode error values.

                                  - null: Encode error values as null. In JSON
                                  and SQLite, this is the native null value.
                                  In CSV, this is an empty field, which may be
                                  read as an empty string, NULL, or even zero,
                                  depending on the software used to read the
                                  CSV file.

                                  - str: Encode error values as strings. This
                                  looks like
                                  `PythonErrorValue(reason='Division by
                                  zero.')`,
                                  `ExcelErrorValue(value='#DIV/0!')`, etc.

                                  - serialize: Serialize error values as JSON
                                  strings. This looks like `{"class":
                                  "ExcelErrorValue", "properties": {"value":
                                  "#N/A"}}`, `{"class": "PythonErrorValue",
                                  "properties": {"reason": "fell out of a
                                  tree"}}`, etc.
  --split-by TEXT                 Split the output into multiple files or
                                  tables. This option takes a comma-separated
                                  list of names.

                                  For table-like output formats (`csv`,
                                  `sqlite`), any column name may be used.

                                  For the `autory_results_json` output format,
                                  the names of scalars, hierarchy properties,
                                  and projection node properties may be used.

                                  For SQLite, this means that multiple tables
                                  will be created in one database file.

                                  For CSV and JSON, this means that more than
                                  one file will be created.

                                  Examples:

                                  - One table or file per valuation type:
                                  --split-by valnType

                                  - One table or file per projection node:
                                  --split-by valnType,valnDate,hPath

                                  - Everything in one table or file: Omit the
                                  --split-by option.  [default: ""]
  --name-pattern TEXT             A template for generating file names or
                                  table names.

                                  See below for how to use this with `--split-
                                  by`.

                                  When not using `--split-by`, this determines
                                  the table name for the `sqlite` output
                                  format.
  --table-orientation [wide|long]
                                  The default orientation of the data in the
                                  output table(s). Only applicable for table-
                                  like output formats, like SQLite and CSV.

                                  - wide: By default, each name has its own
                                  column.

                                  - long: By default, everything is put into
                                  `name` and `value` columns.
  --table-long-names TEXT         A comma-separated list of names that SHOULD
                                  NOT have their own columns even when using
                                  `--table-orientation wide`. This is ignored
                                  when using `--table-orientation long`. `t`
                                  must always have its own column, therefore
                                  including `t` in this list has no effect.
  --table-wide-names TEXT         A comma-separated list of names that SHOULD
                                  have their own columns even when using
                                  `--table-orientation long`. This is ignored
                                  when using `--table-orientation wide`. Names
                                  that have their own columns are repeated for
                                  each long-format name-value pair.
  --sqlite-append-tables / --sqlite-overwrite-db
                                  When using the `sqlite` output format, if
                                  the database file already exists, this
                                  pair of options determines
                                  whether the tables should be appended to the
                                  existing database or if a new database file
                                  should be created, overwriting the previous
                                  one if it exists.  [default: sqlite-append-
                                  tables]
  --sqlite-append-rows / --sqlite-overwrite-tables
                                  When using the `sqlite` output format, if a
                                  table already exists in the database file,
                                  this pair of options
                                  determines whether the rows should be
                                  appended to the existing table or if a new
                                  table should be created, overwriting the
                                  previous one if it exists.  [default:
                                  sqlite-overwrite-tables]
  --help                          Show this message and exit.
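
  --------

  Example invocations (the input and output paths shown here are
  illustrative; only options documented above are used):

      - Convert a single run to a SQLite database:

          autory convert results -i ./run_results -o results.sqlite

      - Convert two runs to CSV files in a folder, giving the format
      explicitly because it cannot be guessed from a folder path:

          autory convert results -i ./run_a -i ./run_b --output-format csv \
              -o ./converted

      - Write a long-format table, but keep `valnType` as its own column:

          autory convert results -i ./run_results -o results.sqlite \
              --table-orientation long --table-wide-names valnType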

  --------

  Using `--split-by` and `--name-pattern`:

      - When not using `--split-by`, there is only one table, and its name is
      given by `--name-pattern`. The table name defaults to "results" for
      SQLite.

      - When using `--split-by`, more than one table will be created. For
      SQLite, this means one database file with multiple tables. For CSV and
      JSON, this means multiple files (if the output path is a file, the
      generated table names are appended to the file name; if it is a folder,
      the table names are used as file names inside that folder). You may use
      the column names as placeholders using Python's
      `{placeholder}` syntax. The default is a comma-separated list of values
      corresponding to the column names given in the `--split-by` option,
      e.g., if `--split-by valnType,valnDate,hPath` is given, then the default
      for `--name-pattern` is '{valnType},{valnDate},{hPath}'.

      Examples:

          - All results in one table called 'asdf': `--name-pattern 'asdf'`

          - Use valuation types in the table names: `--split-by valnType
          --name-pattern 'results_{valnType}'`

      For more info on the placeholder syntax, see
      https://docs.python.org/3/library/stdtypes.html#str.format .
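
      A complete command combining these options might look like this (the
      input directory and output folder are illustrative):

          autory convert results -i ./run_results --output-format csv \
              -o ./converted --split-by valnType --name-pattern 'results_{valnType}'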

  --------

  Using `--include-names` and `--exclude-names`:

      - "Names" refer to names of variables (time vectors and scalars),
      hierarchy properties, projection node properties, or other metadata.

      - If neither option is given, all names are included.

      - If only `--include-names` is given, only those names are included.

      - If only `--exclude-names` is given, all names except those are
      included.

      - If both options are given, only the names in `--include-names` that
      are not in `--exclude-names` are included.

      - `t` is always included. It's not possible to exclude it.

      - When using `--use-iterations`, all `loop_*` columns are always
      included. It's not possible to exclude them.
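
      For example, to keep only a few names in a SQLite output file (the
      variable names `reserve` and `premium` below are placeholders for names
      defined in your own model):

          autory convert results -i ./run_results -o results.sqlite \
              --include-names valnType,reserve,premium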

  Stability=beta: This command is in the later stages of development, but may
  still change without notice. Use it at your own risk.