Metadata-Version: 2.1
Name: catalystcoop.ferc-xbrl-extractor
Version: 0.4.2
Summary: A tool for extracting data from FERC XBRL Filings.
Home-page: 
Author: Catalyst Cooperative
Author-email: pudl@catalyst.coop
Maintainer: Zach Schira
Maintainer-email: zach.schira@catalyst.coop
License: MIT
Project-URL: Source, https://github.com/catalyst-cooperative/ferc-xbrl-extract
Project-URL: Documentation, https://PACKAGE_NAME.readthedocs.io
Project-URL: Issue Tracker, https://github.com/catalyst-cooperative/ferc-xbrl-extract/issues
Keywords: xbrl,ferc,financial
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Requires-Python: >=3.8,<3.11
Description-Content-Type: text/x-rst
License-File: LICENSE.txt
Requires-Dist: pydantic (<2,>=1.9)
Requires-Dist: coloredlogs (~=15.0)
Requires-Dist: catalystcoop.arelle-mirror (==1.3.0)
Requires-Dist: frictionless (<5,>=4.4)
Requires-Dist: sqlalchemy (<2,>=1.4)
Requires-Dist: pandas (<1.5,>=1.4)
Requires-Dist: stringcase (~=1.2.0)
Provides-Extra: dev
Requires-Dist: black (<22.7,>=22.0) ; extra == 'dev'
Requires-Dist: isort (<5.11,>=5.0) ; extra == 'dev'
Requires-Dist: tox (<3.26,>=3.20) ; extra == 'dev'
Requires-Dist: twine (<4.1,>=3.3) ; extra == 'dev'
Provides-Extra: docs
Requires-Dist: doc8 (<1.1,>=0.9) ; extra == 'docs'
Requires-Dist: furo (>=2022.4.7) ; extra == 'docs'
Requires-Dist: sphinx (!=5.1.0,<5.1.2,>=4) ; extra == 'docs'
Requires-Dist: sphinx-autoapi (<1.10,>=1.8) ; extra == 'docs'
Requires-Dist: sphinx-issues (<3.1,>=1.2) ; extra == 'docs'
Provides-Extra: tests
Requires-Dist: bandit (<1.8,>=1.6) ; extra == 'tests'
Requires-Dist: coverage (<6.5,>=5.3) ; extra == 'tests'
Requires-Dist: doc8 (<1.1,>=0.9) ; extra == 'tests'
Requires-Dist: flake8 (<5.1,>=4.0) ; extra == 'tests'
Requires-Dist: flake8-builtins (<1.6,>=1.5) ; extra == 'tests'
Requires-Dist: flake8-colors (<0.2,>=0.1) ; extra == 'tests'
Requires-Dist: flake8-docstrings (<1.7,>=1.5) ; extra == 'tests'
Requires-Dist: flake8-rst-docstrings (<0.3,>=0.2) ; extra == 'tests'
Requires-Dist: flake8-use-fstring (<1.5,>=1.0) ; extra == 'tests'
Requires-Dist: mccabe (<0.8,>=0.6) ; extra == 'tests'
Requires-Dist: mypy (<0.972,>=0.942) ; extra == 'tests'
Requires-Dist: pep8-naming (<0.14,>=0.12) ; extra == 'tests'
Requires-Dist: pre-commit (<2.21,>=2.9) ; extra == 'tests'
Requires-Dist: pydocstyle (<6.2,>=5.1) ; extra == 'tests'
Requires-Dist: pytest (<7.2,>=6.2) ; extra == 'tests'
Requires-Dist: pytest-console-scripts (<1.4,>=1.1) ; extra == 'tests'
Requires-Dist: pytest-cov (<3.1,>=2.10) ; extra == 'tests'
Requires-Dist: rstcheck[sphinx] (<6.2,>=5.0) ; extra == 'tests'
Requires-Dist: tox (<3.26,>=3.20) ; extra == 'tests'
Provides-Extra: types
Requires-Dist: types-setuptools ; extra == 'types'

===============================================================================
ferc-xbrl-extract
===============================================================================

.. readme-intro

The Federal Energy Regulatory Commission (FERC) has moved to collecting and distributing data using `XBRL <https://en.wikipedia.org/wiki/XBRL>`__, a standard designed primarily for financial reporting that has been adopted by regulators in the US and other countries. Much of the tooling in the XBRL ecosystem targets filers, or renders individual filings in a human-readable way; very little of it supports accessing and analyzing large collections of filings.

This tool is designed to provide that functionality for FERC XBRL data. Specifically, it can extract data from a set of XBRL filings and write that data to a SQLite database whose structure is generated from an XBRL `Taxonomy <https://en.wikipedia.org/wiki/XBRL#XBRL_Taxonomy>`__.

While each XBRL instance contains a reference to a taxonomy, this tool requires a path to a single taxonomy that will be used to interpret all instances being processed. Even if instances were created from different versions of a taxonomy, the provided taxonomy is used when processing all of them, so the output database has a consistent structure. For more information on the technical details of the XBRL extraction, see the docs.

At this time the tool has only been tested with FERC Form 1, but we intend to
extend support to other FERC forms. It could likely be used with non-FERC
taxonomies with some tweaking, but we do not provide any official support for
non-FERC data.

Usage
-----

Installation
^^^^^^^^^^^^

To install using conda, create the environment from the provided ``environment.yml``:

.. code-block:: console

    $ conda env create -f environment.yml

Then activate the environment:

.. code-block:: console

    $ conda activate ferc-xbrl-extract


CLI
^^^

This tool can be used as a library, as it is in `PUDL <https://github.com/catalyst-cooperative/pudl>`__,
or through the provided CLI for interacting with XBRL data. The CLI requires only
two arguments: a path to a single XBRL filing (or a directory of XBRL filings) and
a path to the output SQLite database.

.. code-block:: console

    $ xbrl_extract {path_to_filings} {path_to_database}

By default, the CLI will use the 2022 version of the FERC Form 1 Taxonomy to create
the structure of the output database. To specify a different taxonomy, use the
``--taxonomy`` option.

.. code-block:: console

    $ xbrl_extract {path_to_filings} {path_to_database} --taxonomy {url_of_taxonomy}

Parsing XBRL filings can be a time-consuming and CPU-heavy task, so this tool
implements some basic multiprocessing to speed it up. It uses a
`process pool <https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor>`__
to do this. There are two options for configuring the process pool: ``--batch-size``
and ``--workers``. The batch size controls how many filings each child process
handles at a time, and ``--workers`` specifies how many child processes to
create in the pool. It may take some experimentation to configure these options
optimally.

.. code-block:: console

    $ xbrl_extract {path_to_filings} {path_to_database} --workers {number_of_processes} --batch-size {filings_per_batch}

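The batching strategy described above can be sketched with Python's standard
library. This is a minimal illustration of the pattern (split the filings into
batches, hand each batch to a worker in a process pool), not the library's
actual implementation; ``process_batch`` and the filing names are hypothetical
stand-ins for the real parsing step.

.. code-block:: python

    from concurrent.futures import ProcessPoolExecutor


    def process_batch(batch):
        # Hypothetical stand-in for parsing a batch of XBRL filings.
        return [f"parsed:{path}" for path in batch]


    def extract_all(filings, batch_size=2, workers=2):
        # Split the list of filings into fixed-size batches, mirroring the
        # CLI's --batch-size option.
        batches = [
            filings[i : i + batch_size]
            for i in range(0, len(filings), batch_size)
        ]
        results = []
        # max_workers mirrors the CLI's --workers option; pool.map preserves
        # the order of the input batches.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for parsed in pool.map(process_batch, batches):
                results.extend(parsed)
        return results


    if __name__ == "__main__":
        filings = [f"filing_{n}.xbrl" for n in range(5)]
        print(extract_all(filings))

Larger batches reduce inter-process overhead per filing, while more workers
increase parallelism; the right balance depends on filing size and available
cores, which is why the CLI leaves both knobs exposed.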