Metadata-Version: 2.1
Name: ForestDiffusion
Version: 1.0.4
Summary: Generating and Imputing Tabular Data via Diffusion and Flow XGBoost Models
Home-page: https://github.com/SamsungSAILMontreal/ForestDiffusion
Author: Alexia Jolicoeur-Martineau
Author-email: <alexia.jolicoeur-martineau@mail.mcgill.ca>
Keywords: python,AI,xgboost,GBT,tree,forest,tabular,diffusion,flow
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Education
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
License-File: LICENSE.txt
Requires-Dist: numpy
Requires-Dist: scikit-learn
Requires-Dist: xgboost >=2.0.0
Requires-Dist: lightgbm
Requires-Dist: catboost
Requires-Dist: pandas

Tabular data is hard to acquire and is subject to missing values. This paper proposes a novel approach to generate and impute mixed-type (continuous and categorical) tabular data using score-based diffusion and conditional flow matching. Contrary to previous work that relies on neural networks as function approximators, we instead utilize XGBoost, a popular Gradient-Boosted Tree (GBT) method. In addition to being elegant, we empirically show on various datasets that our method i) generates highly realistic synthetic data when the training dataset is either clean or tainted by missing data and ii) generates diverse plausible data imputations. Our method often outperforms deep-learning generation methods and can be trained in parallel using CPUs without the need for a GPU. To make it easily accessible, we release our code through a Python library and an R package <arXiv:2309.09968>.
