Synthetic Data Generation for tabular, relational and time series data.


An Open Source Project from the Data to AI Lab (DAI-Lab) at MIT.


Overview

The Synthetic Data Vault (SDV) is a Synthetic Data Generation ecosystem of libraries that allows users to easily model single-table, multi-table and timeseries datasets and then generate new Synthetic Data that has the same format and statistical properties as the original dataset.

Synthetic data can then be used to supplement, augment and in some cases replace real data when training Machine Learning models. Additionally, it enables the testing of Machine Learning or other data-dependent software systems without the risk of exposure that comes with data disclosure.

Under the hood it uses several probabilistic graphical modeling and deep learning based techniques. To enable a variety of data storage structures, it employs unique hierarchical generative modeling and recursive sampling techniques.

Current functionality and features:

  • Synthetic data generators for single-table datasets, based on copulas and deep learning.
  • Synthetic data generators for relational (multi-table) datasets, based on hierarchical generative modeling and recursive sampling.
  • Synthetic data generators for timeseries datasets, based on deep learning.

Try it out now!

If you want to quickly discover SDV, simply click the button below and follow the tutorials!

Binder

Join our Slack Workspace

If you want to be part of the SDV community to receive announcements of the latest releases, ask questions, suggest new features or participate in the development meetings, please join our Slack Workspace!

Slack

Install

Using pip:

pip install sdv

Using conda:

conda install -c sdv-dev -c pytorch -c conda-forge sdv

For more installation options please visit the SDV Installation Guide.

Quickstart

In this short tutorial we will guide you through a series of steps that will help you get started using SDV.

1. Model the dataset using SDV

To model a multi-table, relational dataset, we follow two steps. In the first step, we load the data and configure the metadata. In the second step, we use the sdv API to fit and save a hierarchical model. We will cover these two steps in this section using an example dataset.

Step 1: Load example data

SDV comes with a toy dataset to play with, which can be loaded using the sdv.load_demo function:

from sdv import load_demo

metadata, tables = load_demo(metadata=True)

This will return two objects:

  1. A Metadata object with all the information that SDV needs to know about the dataset.

For more details about how to build the Metadata for your own dataset, please refer to the Working with Metadata tutorial; a minimal sketch is also shown after the demo data below.

  2. A dictionary containing three pandas.DataFrames with the tables described in the metadata object.

The returned objects contain the following information:

{
    'users':
            user_id country gender  age
          0        0     USA      M   34
          1        1      UK      F   23
          2        2      ES   None   44
          3        3      UK      M   22
          4        4     USA      F   54
          5        5      DE      M   57
          6        6      BG      F   45
          7        7      ES   None   41
          8        8      FR      F   23
          9        9      UK   None   30,
  'sessions':
          session_id  user_id  device       os
          0           0        0  mobile  android
          1           1        1  tablet      ios
          2           2        1  tablet  android
          3           3        2  mobile  android
          4           4        4  mobile      ios
          5           5        5  mobile  android
          6           6        6  mobile      ios
          7           7        6  tablet      ios
          8           8        6  mobile      ios
          9           9        8  tablet      ios,
  'transactions':
          transaction_id  session_id           timestamp  amount  approved
          0               0           0 2019-01-01 12:34:32   100.0      True
          1               1           0 2019-01-01 12:42:21    55.3      True
          2               2           1 2019-01-07 17:23:11    79.5      True
          3               3           3 2019-01-10 11:08:57   112.1     False
          4               4           5 2019-01-10 21:54:08   110.0     False
          5               5           5 2019-01-11 11:21:20    76.3      True
          6               6           7 2019-01-22 14:44:10    89.5      True
          7               7           8 2019-01-23 10:14:09   132.1     False
          8               8           9 2019-01-27 16:09:17    68.0      True
          9               9           9 2019-01-29 12:10:48    99.9      True
}
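
If you are working with your own dataset rather than the demo, the Metadata object can also be built by hand. The following is a minimal sketch that mirrors the demo metadata, assuming the pre-1.0 sdv.Metadata API described in the Working with Metadata tutorial (the argument names follow that API):

from sdv import Metadata

# Register each table, its primary key and, for child tables, the parent relationship
metadata = Metadata()
metadata.add_table(name='users', data=tables['users'], primary_key='user_id')
metadata.add_table(name='sessions', data=tables['sessions'], primary_key='session_id',
                   parent='users', foreign_key='user_id')
metadata.add_table(name='transactions', data=tables['transactions'],
                   primary_key='transaction_id', parent='sessions', foreign_key='session_id')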

Step 2: Fit a model using the SDV API

First, we build a hierarchical statistical model of the data using SDV. For this we will create an instance of the sdv.SDV class and use its fit method.

During this process, SDV will traverse across all the tables in your dataset following the primary key-foreign key relationships and learn the probability distributions of the values in the columns.

from sdv import SDV

sdv = SDV()
sdv.fit(metadata, tables)

Once the modeling has finished, you can save your fitted SDV instance for later usage using the save method of your instance.

sdv.save('sdv.pkl')

The generated pkl file will not include any of the original data in it, so it can be safely sent to where the synthetic data will be generated without any privacy concerns.

2. Sample data from the fitted model

In order to sample data from the fitted model, we will first need to load it from its pkl file. Note that you can skip this step if you are running all the steps sequentially within the same python session.

sdv = SDV.load('sdv.pkl')

After loading the instance, we can sample synthetic data by calling its sample method.

samples = sdv.sample()

The output will be a dictionary with the same structure as the original tables dict, but filled with synthetic data instead of the real data.
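
For example, to take a quick look at the synthetic users table from the returned dictionary:

samples['users'].head()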

Finally, if you want to evaluate how similar the sampled tables are to the real data, please have a look at our evaluation framework or visit the SDMetrics library.
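
As a minimal sketch, a single synthetic table can be compared against its real counterpart with the sdv.evaluation.evaluate function (the aggregate score is normalized between 0 and 1):

from sdv.evaluation import evaluate

score = evaluate(samples['users'], tables['users'])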

Join our community

  1. If you would like to see more usage examples, please have a look at the tutorials folder of the repository. Please contact us if you have a usage example that you would want to share with the community.
  2. Please have a look at the Contributing Guide to see how you can contribute to the project.
  3. If you have any doubts or feature requests, or you detect an error, please open an issue on GitHub or join our Slack Workspace.
  4. Also, do not forget to check the project documentation site!

Citation

If you use SDV for your research, please consider citing the following paper:

Neha Patki, Roy Wedge, Kalyan Veeramachaneni. The Synthetic Data Vault. IEEE DSAA 2016.

@inproceedings{
    7796926,
    author={N. {Patki} and R. {Wedge} and K. {Veeramachaneni}},
    booktitle={2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)},
    title={The Synthetic Data Vault},
    year={2016},
    volume={},
    number={},
    pages={399-410},
    keywords={data analysis;relational databases;synthetic data vault;SDV;generative model;relational database;multivariate modelling;predictive model;data analysis;data science;Data models;Databases;Computational modeling;Predictive models;Hidden Markov models;Numerical models;Synthetic data generation;crowd sourcing;data science;predictive modeling},
    doi={10.1109/DSAA.2016.49},
    ISSN={},
    month={Oct}
}
Comments
  • Should `ConstraintsNotMetError` be a Warning instead?

    Problem Description

    Starting from v0.12.0, the SDV only allows you to fit a model with constraints if all of the input data matches the constraints.

    from sdv.constraints import GreaterThan
    from sdv.tabular import CopulaGAN

    constraint = GreaterThan(low='effective_date', high='due_date')
    model = CopulaGAN(constraints=[constraint])
    
    model.fit(my_data)
    # Crash (ConstraintsNotMetError) if effective_date > due_date for any single row of my dataset
    

    Expected behavior

    It's useful to know whether the input data passes the constraints, but should this really be a hard requirement that all the rows need to pass the constraint?

    My expectation: Give me a Warning but continue fitting the data.

    • The warning can be descriptive. For example, tell me how many rows aren't passing, or which rows they are.
    • It's ok if the SDV drops the offending rows before modeling

    Additional context

    There may be legitimate reasons why a few rows of the input data don't match the constraints: some rows in the dataset were manually overridden exceptions, there was a bug in my application, the rows were generated by some legacy system, etc.

    In any case, the only recourse I have now is to manually identify & delete the offending rows.
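
    For reference, that manual workaround could look like the following pandas sketch, using the column names from the example constraint above:

    # Keep only the rows that satisfy effective_date <= due_date, then fit on the clean subset
    valid_data = my_data[my_data['effective_date'] <= my_data['due_date']]
    model.fit(valid_data)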

    feature:constraints 
    opened by npatki 13
  • CTGAN loss values documentation

    Hello

    I am fitting a CTGAN on a simple table (3 continuous variables) in verbose mode.

    The image shows the evolution over epochs of the G (blue) and D (red) loss values.

    Is there any documentation available describing the meaning of these values (and their evolution) in more detail? Also, why does the D loss seem to oscillate around 0 while the G loss decreases to some negative value? (I understand the decreasing part :) )

    I also had some fit runs where the G loss started increasing after a number of epochs. What would be the meaning of that?

    All tips welcome!

    Wkr

    Peter


    question data:single-table 
    opened by petercoppensdatylon 12
  • Pip install took too much time

    Environment details

    I am trying to install SDV by using pip install sdv, but pip kept looking at multiple versions of many dependencies, like "six", "decorator" and so on. It took too long!

    Here is Info:

    INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking

    So I tried to install by using conda install -c sdv-dev -c pytorch -c conda-forge sdv, but the version is then 0.5.0.

    How can I install it quickly and get the stable latest version?

    • Python version: 3.8
    • Operating System: Windows 10
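
    As a possible workaround for the long dependency resolution, pinning an explicit SDV version gives pip fewer candidates to backtrack through; for example (the version shown is purely illustrative):

    pip install sdv==0.12.1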
    question 
    opened by redcican 12
  • evaluate() from sdv.evaluation not working

    Environment Details

    Please indicate the following details about the environment in which you found the bug:

    • SDV version: 0.12.1 (sdmetrics 0.3.2)
    • Python version: Python 3
    • Operating System: Windows 10

    Error Description

    The original inquiry on Slack can be found here.

    The error occurs with evaluate(synthetic_data, original_data) and evaluate(synthetic_data, original_data, aggregate=False). A blank table is generated when aggregate is set to False, and NaN is returned without setting aggregate to False.

    All dataframes fed to evaluate() were confirmed to be Pandas dataframes that were successfully and correctly read in.

    Steps to reproduce

    Used evaluate(synthetic_data, original_data) and evaluate(synthetic_data, original_data, aggregate=False) with the original dataset (Pandas dataframe) and four different synthetic dataframes generated by SDV (TVAE, GaussianCopula, CopulaGAN, and CTGAN) in four different instances.

    When running evaluate() on two generated datasets, with either aggregate settings, it seemed to run fine. I'm not sure why it isn't working with the original dataset. I thought maybe it was an issue with the difference in size of the datasets, but when I sampled one of the generated datasets to be the same size as the original (the generated datasets are larger than the original data), evaluate() did not work.


    # If running stand-alone, import Pandas & ignore warnings
    import pandas as pd
    import warnings
    warnings.filterwarnings('ignore')
    
    # Metrics library
    from sdv.evaluation import evaluate
    
    # Read the original data in
    original_data = pd.read_csv('clean_prfdata2.csv')
    n_sample = len(original_data.index)
    
    # Read the four datasets we generated in
    GaussianCopula_data = pd.read_csv("prf_GaussianCopula.csv").sample(n=n_sample)
    CTGAN_data = pd.read_csv("prf_CTGAN.csv").sample(n=n_sample)
    CopulaGAN_data = pd.read_csv("prf_CopulaGAN.csv").sample(n=n_sample)
    TVAE_data = pd.read_csv("prf_TVAE.csv").sample(n=n_sample)
    
    # Compare the GaussianCopula data
    evaluate(GaussianCopula_data, original_data, aggregate=False)
    
    # Compare the CTGAN data
    evaluate(CTGAN_data, original_data)#, aggregate=False)
    
    
    bug resolution:WAI 
    opened by cafornaca 11
  • Generating samples is taking a lot of time. Is there any way to speed up sample generation?

    • SDV version: 0.1.1
    • Python version: 3.6
    • Operating System: Mac Mojave

    Description

    I am trying to setup automated test data generation for my testing. I generated metadata JSON for the table and fit the model with it. As sampler is a dict, I am storing sampler from data_vault as pickle. The goal is to store this pickle of sampler in db or in a remote server and generate test data wherever and whenever necessary.

    The samples are taking too much time to generate: for a table of 29 columns and 1800 rows, generating 10 samples takes 5 minutes. I tried to generate the whole 1800 rows but it never completed and I had to kill it.

    Please let me know if I am dealing with things in the wrong way, or if there is anything I need to tweak to get a faster response.

    What I Did

            data_vault = SDV(self.findMeta())
            data_vault.fit()
    
            output = open('sampler.pkl', 'wb')
            pickle.dump(data_vault.sampler, output)
            output.close()
    
            infile = open('sampler.pkl','rb')
            new_dict = pickle.load(infile)
            infile.close()
            samples = new_dict.sample_all(100)
    
    bug 
    opened by imsitu 11
  • How to use inequality constraints with datetime?

    Environment details

    If you are already running SDV, please indicate the following details about the environment in which you are running it:

    • SDV version: 0.17.1
    • Python version: 3.8.15
    • Operating System: Windows 11 Pro

    Problem description

    I'm using an inequality constraint with two columns of type 'datetime64[ns]' and I want date1 to be before date2, but I get this error:

    Traceback (most recent call last):
      File "minisogei_generation.py", line 41, in <module>
        model = GaussianCopula(constraints = c, table_metadata=metadata.get_table_meta('minisogei'),
      File "C:\Users\luimon\miniconda3\envs\minisogei\lib\site-packages\sdv\tabular\copulas.py", line 171, in __init__
        super().__init__(
      File "C:\Users\luimon\miniconda3\envs\minisogei\lib\site-packages\sdv\tabular\base.py", line 110, in __init__
        'If table_metadata is given {} must be None'.format(arg.__name__))
    AttributeError: 'list' object has no attribute '__name__'

    How can I fix this? Thanks.

    bug resolution:duplicate feature:constraints 
    opened by montasIET 9
  • How can a physics-based loss function be applied to synthetic data generation?

    Problem description

    I have soil parameters and the dataset is small in size. I applied CopulaGAN and also GaussianCopula to increase the size of the data, and I obtained a larger set of soil parameters. The dataset contains only continuous variables. My purpose is to maintain physical consistency during data generation, because I would like to show that the synthetic data is generated according to rock mechanics laws. Therefore, I would like to add a rock mechanics law as a loss function to the Copulas, but I didn't see a loss inside the Copulas class. Do I need to add the rock mechanics law to the constraints? Or how can I implement this physics law in the data generation?

    What I already tried

    I have applied GaussianCopula and CopulaGAN so far. The dataset has 7 columns and 18 rows (real data). After application, I obtained 7 columns and 700 rows (synthetic data). The dataset consists only of continuous variables.
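
    As a rough illustration only (not an SDV recommendation, and not real rock mechanics), a domain rule can be encoded as a validity check using the create_custom_constraint factory introduced in SDV v0.16.0; the column names and the inequality below are hypothetical placeholders:

    from sdv.constraints import create_custom_constraint

    def is_valid(column_names, data):
        # Hypothetical rule: "strength" must not exceed 50x "density" (placeholder relation)
        strength, density = column_names
        return data[strength] <= 50 * data[density]

    RockMechanicsRule = create_custom_constraint(is_valid_fn=is_valid)
    constraint = RockMechanicsRule(column_names=['strength', 'density'])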

    feature request 
    opened by kilickursat 8
  • CustomConstraint not found / NaN values not supported in Inequality constraint?

    Environment Details

    • SDV version: 0.17.1
    • Python version: 3.7
    • Operating System: MacOS

    Error Description

    I have a simple multi table dataset (orders, products, customers). I was trying to test creating a custom constraint for the "orders" table that specified that the "order_date" had to be before the "shipped_date". However, when I try to fit the model, I get AttributeError: module 'sdv.constraints.tabular' has no attribute 'CustomConstraint'.

    I tried to use the Inequality constraint but the "shipped_date" column has NaN values and I got the following error :

    ---------------------------------------------------------------------------
    
    MultipleConstraintsErrors                 Traceback (most recent call last)
    
    [<ipython-input-170-89fae4524a02>](https://localhost:8080/#) in <module>
          3 model = HMA1(metadata)
          4 
    ----> 5 model.fit(tables)
    
    8 frames
    
    [/usr/local/lib/python3.7/dist-packages/sdv/metadata/table.py](https://localhost:8080/#) in _fit_constraints(self, data)
        440 
        441         if errors:
    --> 442             raise MultipleConstraintsErrors('\n' + '\n\n'.join(map(str, errors)))
        443 
        444     def _transform_constraints(self, data, is_condition=False):
    
    MultipleConstraintsErrors: 
    Both high and low must be datetime.
    

    Steps to reproduce

    Option 1: I have included my notebook and CSV files

    Option 2:

    • Have a data table with two date columns (I have included the csv files I am using)
    • Implement the following:
    from sdv.constraints import create_custom_constraint
    
    def is_valid(column_names, data):
        column_name_low = column_names[0]
        column_name_high = column_names[1]

        # Rows are valid when shipped_date is missing or order_date is before shipped_date
        is_valid = (data[column_name_high].isnull() | (data[column_name_low] < data[column_name_high]))

        return is_valid
    
    DateInequality = create_custom_constraint(is_valid_fn=is_valid)
    
    date_constraint = DateInequality(column_names=['order_date', 'shipped_date'], data=tables['orders'])
    
    • add your table to the metadata with your new custom constraint
    • Fit the model

    This is the error I get :

    ---------------------------------------------------------------------------
    
    AttributeError                            Traceback (most recent call last)
    
    [<ipython-input-154-89fae4524a02>](https://localhost:8080/#) in <module>
          3 model = HMA1(metadata)
          4 
    ----> 5 model.fit(tables)
    
    10 frames
    
    [/usr/local/lib/python3.7/dist-packages/sdv/constraints/base.py](https://localhost:8080/#) in import_object(obj)
         52     if isinstance(obj, str):
         53         package, name = obj.rsplit('.', 1)
    ---> 54         return getattr(importlib.import_module(package), name)
         55 
         56     return obj
    
    AttributeError: module 'sdv.constraints.tabular' has no attribute 'CustomConstraint'
    

    customers.csv order_items.csv order_statuses.csv orders.csv products.csv shippers.csv Synthetic(1).ipynb.zip

    bug resolution:duplicate feature:constraints data:multi-table 
    opened by nelsonrogers 7
  • KeyError while sampling using freshly trained PAR model

    Environment Details

    Please indicate the following details about the environment in which you found the bug:

    • SDV version: 0.16.0
    • Python version: 3.8.13 (default, May 8 2022, 17:48:02) [Clang 13.1.6 (clang-1316.0.21.2)]
    • Operating System: Macbook Pro M1, Mac OS X 12.0.1

    Error description

    The key error is also being raised when trying to sample from a freshly-trained PAR model in v0.16.0.

    I tried both passing the field types metadata and not passing it; nothing seems to help.

    I printed the model metadata just to check whether the model inferred the data types properly, and everything seems correct.

    Here I attach the code used just in case it helps (this is the last version used in which the model infers the field types):

    import pandas as pd
    from sdv.timeseries import PAR
    from sdv.metrics.timeseries import TSFClassifierEfficacy
    
    data = pd.read_csv("data/micro_batch_task.csv")
    sequence_index = 'start_time'
    field_types = {
        "instance_num": {
            "type": "numerical",
            'subtype': 'integer'
        },
        "start_time": {
            "type": "numerical",
            'subtype': 'integer'
        },
        "plan_cpu": {
            "type": "numerical",
            'subtype': 'float'
        },
        "plan_mem": {
            "type": "numerical",
            'subtype': 'float'
        },
        "makespan": {
            "type": "numerical",
            'subtype': 'integer'
        },
    }
    model = PAR(
        sequence_index=sequence_index,
        segment_size=10,
        epochs=1,
        verbose=True
    )
    model.fit(data)
    print(model.get_metadata().to_dict())
    new_data = model.sample(1)
    print(new_data)
    print(TSFClassifierEfficacy.compute(data, new_data, field_types, target='makespan'))
    

    When trying to sample:

    PARModel(epochs=1, sample_size=1, cuda='cpu', verbose=True) instance created
    Epoch 1 | Loss 0.001459105173125863: 100%|██████████| 1/1 [00:51<00:00, 51.42s/it]
    {'fields': {'instance_num': {'type': 'numerical', 'subtype': 'float', 'transformer': None}, 'start_time': {'type': 'numerical', 'subtype': 'integer', 'transformer': None}, 'plan_cpu': {'type': 'numerical', 'subtype': 'float', 'transformer': None}, 'plan_mem': {'type': 'numerical', 'subtype': 'float', 'transformer': None}, 'makespan': {'type': 'numerical', 'subtype': 'integer', 'transformer': None}}, 'constraints': [], 'model_kwargs': {}, 'name': None, 'primary_key': None, 'sequence_index': 'start_time', 'entity_columns': [], 'context_columns': []}
    100%|██████████| 1/1 [00:00<00:00, 85.72it/s]
    Traceback (most recent call last):
      File "/opt/homebrew/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3621, in get_loc
        return self._engine.get_loc(casted_key)
      File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc
      File "pandas/_libs/index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc
      File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
      File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'start_time'
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/Users/damianfernandez/PycharmProjects/sdv/main.py", line 46, in <module>
        new_data = model.sample(1)
      File "/opt/homebrew/lib/python3.8/site-packages/sdv/timeseries/base.py", line 268, in sample
        return self._metadata.reverse_transform(sampled)
      File "/opt/homebrew/lib/python3.8/site-packages/sdv/metadata/table.py", line 700, in reverse_transform
        field_data = reversed_data[name]
      File "/opt/homebrew/lib/python3.8/site-packages/pandas/core/frame.py", line 3505, in __getitem__
        indexer = self.columns.get_loc(key)
      File "/opt/homebrew/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3623, in get_loc
        raise KeyError(key) from err
    KeyError: 'start_time'
    
    Process finished with exit code 1
    

    Maybe I'm not doing something properly. I'm new to the library!

    bug resolution:duplicate data:sequential 
    opened by DamianUS 7
  • TypeError during sample_conditions() due to progress bar (`tqdm`)

    Environment Details

    Please indicate the following details about the environment in which you found the bug:

    Error Description

    I trained a CTGAN and want to generate about 450000 samples. However, during sample_conditions() an exception occurs that seems to be related to the progress bar implementation. See below for the traceback. Numpy version is 1.22.4.

    Steps to reproduce

    1. Install sdv
    2. Train CTGAN model
    3. Run sample_conditions()
    4. Crash

    The code is really straightforward. Below is a simplified version of it.

    model = CTGAN(field_transformers={"stalling": "categorical" })
    model.fit(data)
    model.save(save_path)
    
    stalling_condition = Condition({"stalling": 1}, num_rows=450000)
    stalling_samples = model.sample_conditions(conditions=[ stalling_condition ])
    

    The relevant output:

      0%|          | 0/451702 [00:00<?, ?it/s]
    Sampling conditions:   0%|          | 0/451702 [00:00<?, ?it/s]
    Sampling conditions:   0%|          | 0/451702 [05:23<?, ?it/s]
    Error: Sampling terminated. Partial results are stored in a temporary file: .sample.csv.temp. This file will be overridden the next time you sample. Please rename the file if you wish to save these results.
    
    Traceback (most recent call last):
      File "scripts/ctgan_rf.py", line 81, in <module>
        sys.exit(main())
      File "scripts/ctgan_rf.py", line 72, in main
        upsample(model, ds)
      File "scripts/ctgan_rf.py", line 48, in upsample
        stalling_samples = model.sample_conditions(conditions=[stalling_condition])
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 667, in sample_conditions
        return self._sample_conditions(
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 715, in _sample_conditions
        handle_sampling_error(output_file_path == TMP_FILE_NAME, output_file_path, error)
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/utils.py", line 185, in handle_sampling_error
        raise sampling_error
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 703, in _sample_conditions
        sampled = progress_bar_wrapper(_sample_function, num_rows, 'Sampling conditions')
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/utils.py", line 157, in progress_bar_wrapper
        return function(progress_bar)
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 689, in _sample_function
        sampled_for_condition = self._sample_with_conditions(
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 612, in _sample_with_conditions
        sampled_rows = self._conditionally_sample_rows(
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 386, in _conditionally_sample_rows
        sampled_rows = self._sample_batch(
      File "/usr/local/lib/python3.8/dist-packages/sdv/tabular/base.py", line 345, in _sample_batch
        if progress_bar:
    TypeError: __bool__ should return bool, returned numpy.bool_
    
    bug feature:sampling resolution:cannot replicate 
    opened by mphe 7
  • Provide a description for the PARModel (including evaluation & benchmarking)

    Hi There,

    I have a question. In the PAR Model guide it is written that "The PAR class is an implementation of a Probabilistic AutoRegressive model". Which one in particular? Is there any documentation regarding the theoretical aspects of that model?

    Thanks.

    documentation data:sequential 
    opened by mabotti 7
  • Do not include the original real data in the trained model .pkl file

    A further consideration would be to not include the original real data in the trained model .pkl file. If a user only needs to supply final synthetic data, for example in the form of a .csv, then it is not a problem. But if they wish to supply a trained model .pkl file to another user so they can generate however much synthetic data they want, then it is a potential problem that the original real PII data is accessible from the .pkl file.

    Here is an example that replicates the point:

    import cloudpickle
    import pandas as pd
    
    from sdv.tabular import CTGAN
    
    # create dummy data
    real_data = pd.DataFrame(
        data={"real_name": ["Peter", "John", "Mary", "Susan"]}
    )
    
    anonymize_fields = {
        "real_name": "name",
    }
    
    print(f"Raw data: {real_data.shape[0]} rows, {real_data.shape[1]} cols")
    
    model = CTGAN(
        epochs=1,
        verbose=True,
        anonymize_fields=anonymize_fields,
    )
    model.fit(real_data)
    
    with open("file.pkl", "wb") as output:
        cloudpickle.dump(model, output)
    
    # delete the model to be sure it is not accessed
    del model
    
    # load back the model and inspect ANONYMIZATION_MAPPINGS
    with open("file.pkl", "rb") as input:
        model_saved = cloudpickle.load(input)
    
    print(model_saved._metadata._ANONYMIZATION_MAPPINGS)
    

    which outputs:

    Raw data: 4 rows, 1 cols
    Epoch 1, Loss G:  1.5588,Loss D: -0.0033
    {1636409547408: {'real_name': {'Peter': 'Jessica Reynolds', 'John': 'Jill Graham', 'Mary': 'Eric Williamson', 'Susan': 'Jordan Davis'}}}
    

    containing the original names ["Peter", "John", "Mary", "Susan"]

    Originally posted by @PJPRoche in https://github.com/sdv-dev/SDV/issues/439#issuecomment-1363888588

    feature request 
    opened by npatki 0
  • Support of pandas dtypes (needed for integers with missing values)

    Problem Description

    I have a column in my dataset that has integers and NaN values. The way I currently transform the column, in order to deal with integers (no decimals) and NaN values, is by converting it to the 'Int64' dtype, more specifically pd.Int64Dtype(). However, after training an SDV model with this dtype, it raises an error when I want to sample ("Cannot interpret 'Int64Dtype()' as a data type").

    Expected behavior

    Be able to support pandas dtypes such that I am able to train and sample on this kind of data.

    Additional context

    I transformed the column with .astype('Int64'), more specifically with round(pd.to_numeric(dataframe['column1'], errors='coerce')).astype('Int64'). Such that: {'column1':[123500,56832,]}, where the type() of each corresponds to [np.int64, np.int64, pandas._libs.missing.NAType]. The used metadata is provided below.

    "fields": { "column1": { "type": "numerical", "subtype": "integer" }

    feature request under discussion 
    opened by nuldertien 1
  • Models are not creating the right PII values for students dataset (metadata bug)

    Environment Details

    • SDV version: 0.17.2
    • Python version: 3.8
    • Operating System: MacOS (Darwin)

    Error Description

    If I use the student_placements_pii demo dataset with the provided metadata, the PII column 'address' doesn't seem to be configured properly. The synthetic data just contains letters like 'a', 'b', 'c' instead of fake addresses as intended.


    Steps to reproduce

    from sdv.demo import load_tabular_demo
    from sdv.tabular import GaussianCopula
    
    metadata, data = load_tabular_demo('student_placements_pii', metadata=True)
    model = GaussianCopula(table_metadata=metadata)
    model.fit(data)
    synthetic_data = model.sample(num_rows=5)
    

    Details

    The issue is in the returned metadata: the address column is improperly specified.

    Current (incorrect) specification:

    'address': {'type': 'id',
       'subtype': 'string',
       'pii': True,
       'pii_category': 'address'},
    

    Correct specification:

    'address': {
      'type': 'categorical',
      'pii': True,
      'pii_category': 'address'
    }
    

    If I correct the metadata, it works as intended.

    bug 
    opened by npatki 1
  • Foreign Keys are added as Alternate Keys when upgrading

    Environment Details

    • SDV version: V1.0.0
    • Python version: 3.8
    • Operating System: MacOS Darwin

    Error Description

    When converting the load_demo metadata, the upgrade adds the foreign keys as alternate keys, which leads to errors later during validation while fitting.

    Steps to reproduce

    from sdv.demo import load_demo
    from sdv.metadata import MultiTableMetadata
    
    metadata, data = load_demo(metadata=True)
    metadata.to_json('old_metadata.json')
    
    MultiTableMetadata.upgrade_metadata('old_metadata.json', 'new_metadata.json')
    
    mtm = MultiTableMetadata.load_from_json('new_metadata.json')
    mtm
    

    Here is the output of the mtm as json representation:

    {
        "tables": {
            "users": {
                "columns": {
                    "user_id": {
                        "sdtype": "numerical"
                    },
                    "country": {
                        "sdtype": "categorical"
                    },
                    "gender": {
                        "sdtype": "categorical"
                    },
                    "age": {
                        "sdtype": "numerical",
                        "computer_representation": "Int64"
                    }
                },
                "primary_key": "user_id"
            },
            "sessions": {
                "alternate_keys": [
                    "user_id"
                ],
                "columns": {
                    "session_id": {
                        "sdtype": "numerical"
                    },
                    "user_id": {
                        "sdtype": "numerical"
                    },
                    "device": {
                        "sdtype": "categorical"
                    },
                    "os": {
                        "sdtype": "categorical"
                    },
                    "minutes": {
                        "sdtype": "numerical",
                        "computer_representation": "Int64"
                    }
                },
                "primary_key": "session_id"
            },
            "transactions": {
                "alternate_keys": [
                    "session_id"
                ],
                "columns": {
                    "transaction_id": {
                        "sdtype": "numerical"
                    },
                    "session_id": {
                        "sdtype": "numerical"
                    },
                    "timestamp": {
                        "sdtype": "datetime",
                        "datetime_format": "%Y-%m-%dT%H:%M"
                    },
                    "amount": {
                        "sdtype": "numerical",
                        "computer_representation": "Float"
                    },
                    "cancelled": {
                        "sdtype": "boolean"
                    }
                },
                "primary_key": "transaction_id"
            }
        },
        "relationships": [
            {
                "parent_table_name": "users",
                "parent_primary_key": "user_id",
                "child_table_name": "sessions",
                "child_foreign_key": "user_id"
            },
            {
                "parent_table_name": "sessions",
                "parent_primary_key": "session_id",
                "child_table_name": "transactions",
                "child_foreign_key": "session_id"
            }
        ],
        "SCHEMA_VERSION": "MULTI_TABLE_V1"
    }
    
    bug 
    opened by pvk-developer 0
  • Change metadata `"SCHEMA_VERSION"` --> `"METADATA_SPEC_VERSION"`

    Problem Description

    As a user, I have a hard time understanding what "SCHEMA_VERSION" means. I may assume that it is referring to my own database schema rather than the format that the SDV software expects.

    Expected behavior

    For single and multi-table metadata, replace the current word "SCHEMA_VERSION" with "METADATA_SPEC_VERSION". This will make it more apparent that it is referring to the JSON specification.

    Additional context

    The single table metadata should look like this

    {
        "METADATA_SPEC_VERSION": "SINGLE_TABLE_V1",
        "primary_key": "user_id",
        "columns": {
            "user_id": { "sdtype": "text", "regex_format": "U_[0-9]{3}" },
    ...
    

    The multi table metadata should look like this

    {
        "METADATA_SPEC_VERSION": "MULTI_TABLE_V1",
        "tables": {
            "users": {
                "primary_key": "user_id",
                "columns": {
                    "user_id": { "sdtype": "text", "regex_format": "U_[0-9]{3}" },
    ...
    
    feature request 
    opened by npatki 0
Releases (v0.17.2)
  • v0.17.2(Dec 8, 2022)

    This release fixes a bug in the demo module related to loading the demo data with constraints. It also adds a name to the demo datasets. Finally, it bumps the version of SDMetrics used.

    Maintenance

    • Upgrade SDMetrics requirement to 0.8.0 - Issue #1125 by @katxiao

    New Features

    • Provide a name for the default demo datasets - Issue #1124 by @amontanez24

    Bugs Fixed

    • Cannot load_tabular_demo with metadata - Issue #1123 by @amontanez24
  • v0.17.1(Sep 29, 2022)

    This release bumps the dependency requirements to use the latest version of SDMetrics.

    Maintenance

    • Patch release: Bump required version for SDMetrics - Issue #1010 by @katxiao
  • v0.17.0(Sep 9, 2022)

    This release updates the code to use RDT version 1.2.0 and greater, so that those new features are now available in SDV. This changes the transformers that are available in SDV models to be those that are in RDT version 1.2.0. As a result, some arguments for initializing models have changed.

    Additionally, this release fixes bugs related to loading models with custom constraints. It also fixes a bug that added NaNs to the index of sampled data when using sample_remaining_columns.

    Bugs Fixed

    • Incorrect rounding in Custom Constraint example - Issue #941 by @amontanez24
    • Can't save the model if use the custom constraint - Issue #928 by @pvk-developer
    • User Guide code fixes - Issue #983 by @amontanez24
    • Index contains NaNs when using sample_remaining_columns - Issue #985 by @amontanez24
    • Cannot sample after loading a model with custom constraints: TypeError - Issue #984 by @pvk-developer
    • Set HyperTransformer config manually, based on Metadata if given - Issue #982 by @pvk-developer

    New Features

    • Change default metrics for evaluate - Issue #949 by @fealho

    Maintenance

    • Update the RDT version to 1.0 - Issue #897 by @pvk-developer
  • v0.16.0(Jul 22, 2022)

    This release brings user friendly improvements and bug fixes on the SDV constraints, to help users generate their synthetic data easily.

    Some predefined constraints have been renamed and redefined to be more user friendly & consistent. The custom constraint API has also been updated for usability. The SDV now automatically determines the best handling_strategy to use for each constraint, attempting transform by default and falling back to reject_sampling otherwise. The handling_strategy parameters are no longer included in the API.

    Finally, this version of SDV also unifies the parameters for all sampling related methods for all models (including TabularPreset).

    Changes to Constraints

    • GreaterThan constraint is now separated into two new constraints: Inequality, which is intended to be used between two columns, and ScalarInequality, which is intended to be used between a column and a scalar.

    • Between constraint is now separated into two new constraints: Range, which is intended to be used between three columns, and ScalarRange, which is intended to be used between a column and low and high scalar values.

    • FixedIncrements: a new constraint that makes the data increment by a certain value.

    • New create_custom_constraint function available to create custom constraints.
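
    As an illustration of the split, a comparison between two columns that was previously expressed with GreaterThan would now use Inequality; a minimal sketch with placeholder column names:

    from sdv.constraints import Inequality

    # Require checkout_date to be greater than checkin_date in the synthetic data
    constraint = Inequality(low_column_name='checkin_date', high_column_name='checkout_date')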

    Removed Constraints

    • Rounding: rounding is now handled automatically by the rdt.HyperTransformer.
    • ColumnFormula: the new create_custom_constraint factory replaces this constraint and allows more advanced usage for end users.

    New Features

    • Improve error message for invalid constraints - Issue #801 by @fealho
    • Numerical Instability in Constrained GaussianCopula - Issue #806 by @fealho
    • Unify sampling params for reject sampling - Issue #809 by @amontanez24
    • Split GreaterThan constraint into Inequality and ScalarInequality - Issue #814 by @fealho
    • Split Between constraint into Range and ScalarRange - Issue #815 @pvk-developer
    • Change columns to column_names in OneHotEncoding and Unique constraints - Issue #816 by @amontanez24
    • Update columns parameter in Positive and Negative constraint - Issue #817 by @fealho
    • Create FixedIncrements constraint - Issue #818 by @amontanez24
    • Improve datetime handling in ScalarInequality and ScalarRange constraints - Issue #819 by @pvk-developer
    • Support strict boundaries even when transform strategy is used - Issue #820 by @fealho
    • Add create_custom_constraint factory method - Issue #836 by @fealho

    Internal Improvements

    • Remove handling_strategy parameter - Issue #833 by @amontanez24
    • Remove fit_columns_model parameter - Issue #834 by @pvk-developer
    • Remove the ColumnFormula constraint - Issue #837 by @amontanez24
    • Move table_data.copy to base class of constraints - Issue #845 by @fealho

    Bugs Fixed

    • Numerical Instability in Constrained GaussianCopula - Issue #801 by @tlranda and @fealho
    • Fix error message for FixedIncrements - Issue #865 by @pvk-developer
    • Fix constraints with conditional sampling - Issue #866 by @amontanez24
    • Fix error message in ScalarInequality - Issue #868 by @pvk-developer
    • Cannot use max_tries_per_batch on sample: TypeError: sample() got an unexpected keyword argument 'max_tries_per_batch' - Issue #885 by @amontanez24
    • Conditional sampling + batch size: ValueError: Length of values (1) does not match length of index (5) - Issue #886 by @amontanez24
    • TabularPreset doesn't support new sampling parameters - Issue #887 by @fealho
    • Conditional Sampling: batch_size is being set to None by default? - Issue #889 by @amontanez24
    • Conditional sampling using GaussianCopula inefficient when categories are noised - Issue #910 by @amontanez24

    Documentation Changes

    • Show the API for TabularPreset models - Issue #854 by @katxiao
    • Update handling constraints doc - Pull Request #856 by @amontanez24
    • Update custom constraints documentation - Pull Request #857 by @pvk-developer
  • v0.15.0(May 25, 2022)

    This release improves the speed of the GaussianCopula model by removing logic that previously searched for the appropriate distribution to use. It also fixes a bug that was happening when conditional sampling was used with the TabularPreset.

    The rest of the release focuses on making changes to improve constraints including changing the UniqueCombinations constraint to FixedCombinations, making the Unique constraint work with missing values and erroring when null values are seen in the OneHotEncoding constraint.

    New Features

    • Silence warnings coming from univariate fit in copulas - Issue #769 by @pvk-developer
    • Remove parameters related to distribution search and change default - Issue #767 by @fealho
    • Update the UniqueCombinations constraint - Issue #793 by @fealho
    • Make Unique constraint works with nans - Issue #797 by @fealho
    • Error out if nans in OneHotEncoding - Issue #800 by @amontanez24

    Bugs Fixed

    • Unable to sample conditionally in Tabular_Preset model - Issue #796 by @katxiao

    Documentation Changes

    • Support GPU computing and progress track? - Issue #478 by @fealho
  • v0.14.1(May 3, 2022)

    This release adds a TabularPreset, available in the sdv.lite module, which allows users to easily optimize a tabular model for speed. In this release, we also include bug fixes for sampling with conditions, an unresolved warning, and setting field distributions. Finally, we include documentation updates for sampling and the new TabularPreset.

    Bugs Fixed

    • Sampling with conditions={column: 0.0} for float columns doesn't work - Issue #525 by @shlomihod and @tssbas
    • resolved FutureWarning with Pandas replaced append by concat - Issue #759 by @Deathn0t
    • Field distributions bug in CopulaGAN - Issue #747 by @katxiao
    • Field distributions bug in GaussianCopula - Issue #746 by @katxiao

    New Features

    • Set default transformer to categorical_fuzzy - Issue #768 by @amontanez24
    • Model nulls normally when tabular preset has constraints - Issue #764 by @katxiao
    • Don't modify my metadata object - Issue #754 by @amontanez24
    • Presets should be able to handle constraints - Issue #753 by @katxiao
    • Change preset optimize_for --> name - Issue #749 by @katxiao
    • Create a speed optimized Preset - Issue #716 by @katxiao

    Documentation Changes

    • Add tabular preset docs - Issue #777 by @katxiao
    • sdv.sampling module is missing from the API - Issue #740 by @katxiao
  • v0.14.0(Mar 21, 2022)

    This release updates the sampling API and splits the existing functionality into three methods - sample, sample_conditions, and sample_remaining_columns. We also add support for sampling in batches, displaying a progress bar when sampling with more than one batch, sampling deterministically, and writing the sampled results to an output file. Finally, we include fixes for sampling with conditions and updates to the documentation.
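
    A minimal sketch of the split API, assuming an already fitted tabular model named model (parameter names as introduced in this release; exact signatures may differ):

    from sdv.sampling import Condition

    # Plain sampling, in batches, with results written to a file as sampling progresses
    synthetic = model.sample(num_rows=100, batch_size=50, output_file_path='synthetic.csv')

    # Conditional sampling via Condition objects
    condition = Condition({'gender': 'F'}, num_rows=10)
    conditioned = model.sample_conditions(conditions=[condition])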

    Bugs Fixed

    • Fix write to file in sampling - Issue #732 by @katxiao
    • Conditional sampling doesn't work if the model has a CustomConstraint - Issue #696 by @katxiao

    New Features

    • Updates to GaussianCopula conditional sampling methods - Issue #729 by @katxiao
    • Update conditional sampling errors - Issue #730 by @katxiao
    • Enable Batch Sampling + Progress Bar - Issue #693 by @katxiao
    • Create sample_remaining_columns() method - Issue #692 by @katxiao
    • Create sample_conditions() method - Issue #691 by @katxiao
    • Improve sample() method - Issue #690 by @katxiao
    • Create Condition object - Issue #689 by @katxiao
    • Is it possible to generate data with new set of primary keys? - Issue #686 by @katxiao
    • No way to fix the random seed? - Issue #157 by @katxiao
    • Can you set a random state for the sdv.tabular.ctgan.CTGAN.sample method? - Issue #515 by @katxiao
    • generating different synthetic data while training the model multiple times. - Issue #299 by @katxiao

    Documentation Changes

    • Typo in the document documentation - Issue #680 by @katxiao
  • v0.13.1(Dec 22, 2021)

    This release adds support for passing tabular constraints to the HMA1 model, and adds more explicit error handling for metric evaluation. It also includes a fix for using categorical columns in the PAR model and documentation updates for metadata and HMA1.

    Bugs Fixed

    • Categorical column after sequence_index column - Issue #314 by @fealho

    New Features

    • Support passing tabular constraints to the HMA1 model - Issue #296 by @katxiao
    • Metric evaluation error handling metrics - Issue #638 by @katxiao

    Documentation Changes

    • Make true/false values lowercase in Metadata Schema specification - Issue #664 by @katxiao
    • Update docstrings for hma1 methods - Issue #642 by @katxiao
  • v0.13.0(Nov 22, 2021)

    This release makes multiple improvements to different Constraint classes. The Unique constraint can now handle columns with the name index and no longer crashes on subsets of the original data. The Between constraint can now handle columns with nulls properly. The memory of all constraints was also improved.

    Various other features and fixes were added. Conditional sampling no longer crashes when the num_rows argument is not provided. Multiple localizations can now be used for PII fields. Scaffolding for integration tests was added and the workflows now run pip check.

    Additionally, this release adds support for Python 3.9!

    Bugs Fixed

    • Gaussian Copula – Memory Issue in Release 0.10.0 - Issue #459 by @xamm
    • Applying Unique Constraint errors when calling model.fit() on a subset of data - Issue #610 by @xamm
    • Calling sampling with conditions and without num_rows crashes - Issue #614 by @xamm
    • Metadata.visualize with path parameter throws AttributeError - Issue #634 by @xamm
    • The Unique constraint crashes when the data contains a column called index - Issue #616 by @xamm
    • The Unique constraint cannot handle non-default index - Issue #617 by @xamm
    • ConstraintsNotMetError when applying Between constraint on datetime columns containing null values - Issue #632 by @katxiao

    New Features

    • Adds Multi localisations feature for PII fields defined in #308 - PR #609 by @xamm

    Housekeeping Tasks

    • Support latest version of Faker - Issue #621 by @katxiao
    • Add scaffolding for Metadata integration tests - Issue #624 by @katxiao
    • Add support for Python 3.9 - Issue #631 by @amontanez24

    Internal Improvements

    • Add pip check to CI workflows - Issue #626 by @pvk-developer

    Documentation Changes

    • Anonymizing PII in single table tutorials states address field as e-mail type - Issue #604 by @xamm

    Special thanks to @xamm, @katxiao, @pvk-developer and @amontanez24 for all the work that made this release possible!

  • v0.12.1(Oct 12, 2021)

    This release fixes bugs in constraints, metadata behavior, and SDV documentation. Specifically, we added proper handling of data containing null values for constraints and timeseries data, and updated the default metadata detection behavior.

    Bugs Fixed

    • ValueError: The parameter loc has invalid values - Issue #353 by @fealho
    • Gaussian Copula is generating different data with metadata and without metadata - Issue #576 by @katxiao
    • Make pomegranate an optional dependency - Issue #567 by @katxiao
    • Small wording change for Question Issue Template - Issue #571 by @katxiao
    • ConstraintsNotMetError when using GreaterThan constraint with datetime - Issue #590 by @katxiao
    • GreaterThan constraint crashing with NaN values - Issue #592 by @katxiao
    • Null values in GreaterThan constraint raises error - Issue #589 by @katxiao
    • ColumnFormula raises ConstraintsNotMetError when checking NaN values - Issue #593 by @katxiao
    • GreaterThan constraint raises TypeError when using datetime - Issue #596 by @katxiao
    • Fix repository language - Issue #464 by @fealho
    • Update init.py - Issue #578 by @dyuliu
    • IndexingError: Unalignable boolean - Issue #446 by @fealho
  • v0.12.0(Aug 19, 2021)

    This release focuses on improving and expanding upon the existing constraints. More specifically, users can now (1) specify multiple columns in the Positive and Negative constraints, (2) use the new Unique constraint and (3) use datetime data with the Between constraint. Additionally, error messages have been added and updated to provide more useful feedback to the user.

    Besides the added features, several bugs regarding the UniqueCombinations and ColumnFormula constraints have been fixed, and an error in the metadata.json for the student_placements dataset was corrected. The release also added documentation for the fit_columns_model parameter, which affects the majority of the available constraints.

    New Features

    • Change default fit_columns_model to False - Issue #550 by @katxiao
    • Support multi-column specification for positive and negative constraint - Issue #545 by @sarahmish
    • Raise error when multiple constraints can't be enforced - Issue #541 by @amontanez24
    • Create Unique Constraint - Issue #532 by @amontanez24
    • Passing invalid conditions when using constraints produces unreadable errors - Issue #511 by @katxiao
    • Improve error message for ColumnFormula constraint when constraint column used in formula - Issue #508 by @katxiao
    • Add datetime functionality to Between constraint - Issue #504 by @katxiao

    Bugs Fixed

    • UniqueCombinations constraint with handling_strategy = 'transform' yields synthetic data with nan values - Issue #521 by @katxiao and @csala
    • UniqueCombinations constraint outputting wrong data type - Issue #510 by @katxiao and @csala
    • UniqueCombinations constraint on only one column gets stuck in an infinite loop - Issue #509 by @katxiao
    • Conditioning on a non-constraint column using the ColumnFormula constraint - Issue #507 by @katxiao
    • Conditioning on the constraint column of the ColumnFormula constraint - Issue #506 by @katxiao
    • Update metadata.json for duration of student_placements dataset - Issue #503 by @amontanez24
    • Unit test for HMA1 when working with a single child row per parent row - Issue #497 by @pvk-developer
    • UniqueCombinations constraint for more than 2 columns - Issue #494 by @katxiao and @csala

    Documentation Changes

    • Add explanation of fit_columns_model to API docs - Issue #517 by @katxiao
  • v0.11.0(Jul 12, 2021)

    This release primarily addresses bugs and feature requests related to using constraints for the single-table models. Users can now enforce scalar comparison with the existing GreaterThan constraint and apply 5 new constraints: OneHotEncoding, Positive, Negative, Between and Rounding. Additionally, the SDV will now auto-apply constraints for rounding numerical values, and for keeping the data within the observed bounds. All related user guides are updated with the new functionality.

    New Features

    • Add OneHotEncoding Constraint - Issue #303 by @fealho
    • GreaterThan Constraint should apply to scalars - Issue #410 by @amontanez24
    • Improve GreaterThan constraint - Issue #368 by @amontanez24
    • Add Non-negative and Positive constraints across multiple columns - Issue #409 by @amontanez24
    • Add Between values constraint - Issue #367 by @fealho
    • Ensure values fall within the specified range - Issue #423 by @amontanez24
    • Add Rounding constraint - Issue #482 by @katxiao
    • Add rounding and min/max arguments that are passed down to the NumericalTransformer - Issue #491 by @amontanez24

    Bugs Fixed

    • GreaterThan constraint between Date columns raises TypeError - Issue #421 by @amontanez24
    • GreaterThan constraint's transform strategy fails on columns that are not float - Issue #448 by @amontanez24
    • AttributeError on UniqueCombinations constraint with non-strings - Issue #196 by @katxiao
    • Use reject sampling to sample missing columns for constraints - Issue #435 by @amontanez24

    Documentation Changes

    • Ensure privacy metrics are available in the API docs - Issue #458 by @fealho
    • Ensure formula constraint is called ColumnFormula everywhere in the docs - Issue #449 by @fealho
  • v0.10.1(Jun 11, 2021)

    This release changes the way we sample conditions to not only group by the conditions passed by the user, but also by the transformed conditions that result from them.

    Issues resolved

    • Conditionally sampling on variable in constraint should have variety for other variables - Issue #440 by @amontanez24
  • v0.10.0(May 21, 2021)

    This release improves the constraint functionality by allowing constraints and conditions at the same time. Additional changes were made to update tutorials.

    Issues resolved

    • Not able to use constraints and conditions at the same time - Issue #379 by @amontanez24
    • Update benchmarking user guide for reading private datasets - Issue #427 by @katxiao
  • v0.9.1(Apr 29, 2021)

    This release broadens the constraint functionality by allowing the ColumnFormula constraint to take lambda functions and returned functions as input for its formula.

    It also improves conditional sampling by ensuring that any id fields generated by the model remain unique throughout the sampled data.

    The CTGAN model was improved by adjusting a default parameter to be more mathematically correct.

    Additional changes were made to improve tutorials as well as fix fragile tests.
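
    As a minimal sketch, assuming illustrative column names, the ColumnFormula constraint can now receive a plain lambda as its formula:

    import pandas as pd

    from sdv.constraints import ColumnFormula
    from sdv.tabular import GaussianCopula

    orders = pd.DataFrame({
        'price': [10.0, 20.0, 30.0],
        'quantity': [1, 2, 3],
        'total': [10.0, 40.0, 90.0],
    })

    # The formula can be a lambda (or a function returned by another function)
    # that receives the table and returns the derived column.
    total_constraint = ColumnFormula(
        column='total',
        formula=lambda data: data['price'] * data['quantity'],
        handling_strategy='transform',
    )

    model = GaussianCopula(constraints=[total_constraint])
    model.fit(orders)
    synthetic = model.sample(50)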

    Issues resolved

    • Tutorials test sometimes fails - Issue #355 by @fealho
    • Duplicate IDs when using reject-sampling - Issue #331 by @amontanez24 and @csala
    • discriminator_decay should be initialized at 1e-6 but it's 0 - Issue #401 by @fealho and @YoucefZemmouri
    • Tutorial typo - Issue #380 by @fealho
    • Request for sdv.constraint.ColumnFormula for a wider range of functions - Issue #373 by @amontanez24 and @JetfiRex
  • v0.9.0(Apr 1, 2021)

    This release brings new privacy metrics to the evaluate framework, which help determine whether the real data could be obtained or deduced from the synthetic samples. Additionally, there is now a normalized score for the metrics, which stays between 0 and 1.

    There are improvements that reduce memory (RAM) usage when sampling new data. There is also a new parameter to control reject sampling crashes, graceful_reject_sampling, which, if set to True, issues a warning and returns whatever rows could be generated whenever it is not possible to generate all of the requested rows.

    The Metadata object can now be visualized using different combinations of names and details, each of which can be set to True or False to display the table names with or without their details. Validation has also been improved and now reports all of the errors found at the end of the validation instead of only the first one.

    This version also exposes all the hyperparameters of the CTGAN and TVAE models to allow more advanced usage. There is also a fix for the TVAE model on small datasets, and its performance with NaN values has been improved. There is also a fix for using the UniqueCombinations constraint with the transform strategy.
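
    A minimal sketch of two of the additions above, assuming that names and details are keyword arguments of Metadata.visualize and that graceful_reject_sampling is passed to sample (both inferred from this release's description):

    from sdv import load_demo
    from sdv.tabular import GaussianCopula

    metadata, tables = load_demo(metadata=True)

    # Show only the table names in the metadata graph, without column details.
    metadata.visualize(names=True, details=False)

    # If not all requested rows can be generated for the given condition,
    # warn and return whatever was generated instead of crashing.
    model = GaussianCopula()
    model.fit(tables['users'])
    synthetic = model.sample(
        100,
        conditions={'country': 'UK'},
        graceful_reject_sampling=True,
    )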

    Issues resolved

    • Memory Usage Gaussian Copula Trained Model consuming high memory when generating synthetic data - Issue #304 by @pvk-developer
    • Add option to visualize metadata with only table names - Issue #347 by @csala
    • Add sample parameter to control reject sampling crash - Issue #343 by @fealho
    • Verbose metadata validation - Issue #348 by @csala
    • Missing the introduction of custom specification for hyperparameters in the TVAE model - Issue #344 by @pvk-developer
  • v0.8.0(Feb 24, 2021)

    This version adds conditional sampling for tabular models by combining a reject-sampling strategy with the native conditional sampling capabilities from the Gaussian Copulas.

    It also introduces several upgrades to the HMA1 algorithm that improve data quality and robustness in multi-table scenarios by changing how the parameters of the child tables are aggregated on the parent tables, including a complete rework of how the correlation matrices are modeled and rebuilt after sampling.
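
    A minimal sketch of the new conditional sampling on a tabular model, assuming illustrative data and that conditions is the sampling keyword introduced here:

    import pandas as pd

    from sdv.tabular import GaussianCopula

    customers = pd.DataFrame({
        'plan': ['basic', 'premium', 'premium', 'basic', 'premium'],
        'monthly_spend': [10.0, 35.0, 42.0, 8.0, 39.0],
    })

    model = GaussianCopula()
    model.fit(customers)

    # Fix the plan column to 'premium'; the remaining columns are sampled
    # conditionally on that value, falling back to reject sampling if needed.
    synthetic_premium = model.sample(100, conditions={'plan': 'premium'})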

    Issues resolved

    • Fix probabilities contain NaN error - Issue #326 by @csala
    • Conditional Sampling for tabular models - Issue #316 by @fealho and @csala
    • HMA1: LinAlgError: SVD did not converge - Issue #240 by @csala
  • v0.7.0(Jan 28, 2021)

    This release introduces a few changes in the HMA1 relational algorithm to decrease modeling and sampling times, while ensuring that correlations are properly kept across tables and adding support for some relational schemas that were not supported before.

    A few changes to constraints and tabular models also ensure that situations that previously produced errors now work without errors.
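
    A minimal sketch of the HMA1 relational workflow these changes affect, using the demo dataset bundled with SDV:

    from sdv import load_demo
    from sdv.relational import HMA1

    metadata, tables = load_demo(metadata=True)

    # Fit the hierarchical model across all related tables and sample a new,
    # fully relational synthetic dataset with the same schema.
    model = HMA1(metadata)
    model.fit(tables)
    new_tables = model.sample()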

    Issues resolved

    • Fix unique key generation - Issue #306 by @fealho
    • Ensure tables that contain nothing but ids can be modeled - Issue #302 by @csala
    • Metadata visualization improvements - Issue #301 by @csala
    • Multi-parent re-model and re-sample issue - Issue #298 by @csala
    • Support datetimes in GreaterThan constraint - Issue #266 by @rollervan
    • Support for multiple foreign keys in one table - Issue #185 by @csala
  • v0.6.1(Dec 31, 2020)

    The SDMetrics version is updated to include the new Time Series metrics, which have also been added to the API Reference and User Guides documentation. Additionally, some code has been refactored to reduce external dependencies and a few minor bugs related to single-table constraints have been fixed.

    Issues resolved:

    • Add timeseries metrics and user guides - Issue #289 by @csala
    • Add functions to generate regex ids - Issue #288 by @csala
    • Saving a fitted tabular model with UniqueCombinations constraint raises PicklingError - Issue #286 by @csala
    • Constraints: handling_strategy='reject_sampling' causes 'ZeroDivisionError: division by zero' - Issue #285 by @csala
  • v0.6.0(Dec 22, 2020)

    This release updates to the latest CTGAN, RDT and SDMetrics libraries to introduce a new TVAE model and multiple new metrics for single-table and multi-table data, and fixes issues in the re-creation of tabular models from a metadata dict.
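
    A minimal sketch of the new TVAE model together with one of the new single-table metrics, assuming illustrative data and that CSTest is importable from sdv.metrics.tabular:

    import pandas as pd

    from sdv.metrics.tabular import CSTest
    from sdv.tabular import TVAE

    data = pd.DataFrame({
        'category': ['a', 'b', 'a', 'c', 'b', 'a'],
        'value': [1.0, 2.5, 1.2, 3.3, 2.8, 0.9],
    })

    model = TVAE(epochs=10)
    model.fit(data)
    synthetic = model.sample(100)

    # Chi-squared test comparing the categorical columns of the two tables.
    score = CSTest.compute(data, synthetic)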

    Issues resolved:

    • Upgrade to SDMetrics v0.1.0 and add sdv.metrics module - Issue #281 by @csala
    • Upgrade to CTGAN 0.3.0 and add TVAE model - Issue #278 by @fealho
    • Add dtype_transformers to Table.from_dict - Issue #276 by @csala
    • Fix Metadata from_dict behavior - Issue #275 by @csala
  • v0.5.0(Nov 25, 2020)

    This version updates the dependencies and makes a few internal changes to ensure that SDV works properly on Windows systems, making this the first release to be officially supported on Windows.

    Apart from this, some more internal changes have been made to solve a few minor issues from older versions while also improving processing speed when working with relational datasets using the default parameters.

    API breaking changes

    • The distribution argument of the GaussianCopula has been renamed to field_distributions.
    • The HMA1 and SDV classes now use the categorical_fuzzy transformer by default instead of the one_hot_encoding one.
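
    A minimal sketch of the renamed argument from the breaking changes above, assuming 'gamma' and 'gaussian' are among the accepted distribution names:

    import pandas as pd

    from sdv.tabular import GaussianCopula

    data = pd.DataFrame({
        'age': [25, 32, 41, 58, 47],
        'income': [35000.0, 42000.0, 61000.0, 80000.0, 52000.0],
    })

    # 'distribution' was renamed to 'field_distributions' in this release.
    model = GaussianCopula(
        field_distributions={
            'age': 'gamma',
            'income': 'gaussian',
        },
    )
    model.fit(data)
    synthetic = model.sample(100)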

    Issues resolved

    • GaussianCopula: rename distribution argument to field_distributions - Issue #237 by @csala
    • GaussianCopula: Improve error message if an invalid distribution name is passed - Issue #220 by @csala
    • Import urllib.request explicitly - Issue #227 by @csala
    • TypeError: cannot astype a datetimelike from [datetime64[ns]] to [int32] - Issue #218 by @csala
    • Change default categorical transformer to categorical_fuzzy in HMA1 - Issue #214 by @csala
    • Integer categoricals being sampled as strings instead of integer values - Issue #194 by @csala
  • v0.4.5(Oct 17, 2020)

    This version introduces a new family of models for Synthetic Time Series Generation under the sdv.timeseries sub-package. The family currently includes a new class called PAR, which implements a Probabilistic AutoRegressive model.

    This version also adds support for composite primary keys and regex based generation of id fields in tabular models and drops Python 3.5 support.
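
    A minimal sketch of the new PAR model from sdv.timeseries, assuming illustrative entity and sequence columns:

    import pandas as pd

    from sdv.timeseries import PAR

    # One sequence per store, ordered by date.
    data = pd.DataFrame({
        'store_id': ['s1'] * 4 + ['s2'] * 4,
        'date': pd.to_datetime(
            ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04'] * 2
        ),
        'sales': [10.0, 12.0, 9.0, 14.0, 20.0, 22.0, 18.0, 25.0],
    })

    model = PAR(entity_columns=['store_id'], sequence_index='date')
    model.fit(data)

    # Sample one brand new sequence with the same structure.
    new_sequence = model.sample(1)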

    Issues resolved

    • Drop python 3.5 support - Issue #204 by @csala
    • Support composite primary keys in tabular models - Issue #207 by @csala
    • Add the option to generate string id fields based on regex on tabular models - Issue #208 by @csala
    • Synthetic Time Series - Issue #142 by @csala
  • v0.4.4(Oct 6, 2020)

    This release adds a new tabular model based on combining the CTGAN model with the reversible transformation applied in the GaussianCopula model, which converts random variables with arbitrary distributions into new random variables with a standard normal distribution.

    The reversible transformation is handled by the GaussianCopulaTransformer recently added to RDT.
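
    The combined model described above is exposed as CopulaGAN in the sdv.tabular subpackage; a minimal, illustrative sketch (the data and the epochs value are arbitrary):

    import pandas as pd

    from sdv.tabular import CopulaGAN

    data = pd.DataFrame({
        'category': ['a', 'b', 'a', 'c', 'b'],
        'amount': [10.5, 300.0, 12.1, 4500.0, 250.0],
    })

    # CTGAN training applied on top of GaussianCopula-style marginal transforms.
    model = CopulaGAN(epochs=10)
    model.fit(data)
    synthetic = model.sample(100)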

    New Features

  • v0.4.3(Sep 28, 2020)

    This release moves the models and algorithms related to the generation of synthetic relational data to a new sdv.relational subpackage (Issue #198).

    As part of the change, the old sdv.models subpackage has been removed and relational modeling is now based on the recently introduced sdv.tabular models.

  • v0.4.2(Sep 19, 2020)

    In this release, the sdv.evaluation module has been reworked to include 4 different metrics and, in all cases, return a normalized score between 0 and 1.

    Included metrics are:

    • cstest
    • kstest
    • logistic_detection
    • svc_detection
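
    A minimal sketch of the reworked evaluate function with a subset of the metrics listed above, assuming the metric names can be passed via a metrics argument and using an illustrative real table next to a model-generated synthetic one:

    import pandas as pd

    from sdv.evaluation import evaluate
    from sdv.tabular import GaussianCopula

    real = pd.DataFrame({
        'category': ['a', 'b', 'a', 'c', 'b', 'a'],
        'value': [1.0, 2.5, 1.2, 3.3, 2.8, 0.9],
    })

    model = GaussianCopula()
    model.fit(real)
    synthetic = model.sample(len(real))

    # A single normalized score between 0 and 1; aggregate=False would return
    # the individual metric scores instead.
    score = evaluate(synthetic, real, metrics=['cstest', 'kstest'])
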
  • v0.4.1(Sep 7, 2020)

    This release fixes a couple of minor issues and introduces an important rework of the User Guides section of the documentation.

    Issues fixed

    • Error Message: "make sure the Graphviz executables are on your systems' PATH" - Issue #182 by @csala
    • Anonymization mappings leak - Issue #187 by @csala
  • v0.4.0(Aug 8, 2020)

    In this release SDV gets new documentation, new tutorials, improvements to the Tabular API and broader python and dependency support.

    Complete list of changes:

    • New Documentation site based on the pydata-sphinx-theme.
    • New User Guides and Notebook tutorials.
    • New Developer Guides section within the docs with details about the SDV architecture, the ecosystem libraries and how to extend and contribute to the project.
    • Improved API for the Tabular models with focus on ease of use.
    • Support for Python 3.8 and the newest versions of pandas, scipy and scikit-learn.
    • New Slack Workspace for development discussions and community support.
  • v0.3.6(Jul 23, 2020)

    This release introduces a new concept of Constraints, which allow the user to define special relationships between columns that will not be handled via modeling.

    This is done via a new sdv.constraints subpackage, which defines some well-known pre-defined constraints as well as a generic framework that allows users to customize constraints to their needs.
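
    As a minimal sketch, assuming illustrative column names and that constraints are passed to a tabular model through its constraints argument, one of the pre-defined constraints could be used like this:

    import pandas as pd

    from sdv.constraints import UniqueCombinations
    from sdv.tabular import GaussianCopula

    data = pd.DataFrame({
        'city': ['Boston', 'Boston', 'Seattle'],
        'state': ['MA', 'MA', 'WA'],
        'population': [650000, 652000, 750000],
    })

    # Only city/state pairs seen in the real data may appear together
    # in the synthetic data.
    constraint = UniqueCombinations(
        columns=['city', 'state'],
        handling_strategy='transform',
    )

    model = GaussianCopula(constraints=[constraint])
    model.fit(data)
    synthetic = model.sample(10)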

    New Features

  • v0.3.5(Jul 9, 2020)

    This release introduces a new subpackage sdv.tabular with models designed specifically for single table modeling, while still providing all the usual conveniences from SDV, such as:

    • Seamless multi-type support
    • Missing data handling
    • PII anonymization

    Currently implemented models are:

    • GaussianCopula: Multivariate distributions modeled using copula functions. This is a stronger version, with more marginal distributions and options, than the one used to model multi-table datasets.
    • CTGAN: GAN-based data synthesizer that can generate synthetic tabular data with high fidelity.
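
    A minimal sketch of the shared single-table API for the two models listed above, assuming illustrative data (the epochs value is arbitrary):

    import pandas as pd

    from sdv.tabular import CTGAN, GaussianCopula

    data = pd.DataFrame({
        'department': ['sales', 'engineering', 'sales', 'hr', 'engineering'],
        'remote': [True, False, None, True, False],   # missing values are handled
        'tenure_years': [3, 7, 1, 4, 10],
    })

    # Both models share the same fit/sample interface.
    copula = GaussianCopula()
    copula.fit(data)
    copula_samples = copula.sample(100)

    ctgan = CTGAN(epochs=10)
    ctgan.fit(data)
    ctgan_samples = ctgan.sample(100)
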
  • v0.3.4(Jul 4, 2020)
