diff --git a/0.6/get-started/create_lab/index.html b/0.6/get-started/create_lab/index.html index df891059..4e099ea8 100644 --- a/0.6/get-started/create_lab/index.html +++ b/0.6/get-started/create_lab/index.html @@ -1135,46 +1135,49 @@

How to create your first Lab

Labs are code environments for a more flexible development of data-driven solutions while leveraging Fabric capabilities combined with already loved tools such as scikit-learn, numpy and pandas. To create your first Lab, you can use the "Create Lab" option from Fabric's home, or you can access it from the Labs module by selecting it on the left side menu and clicking the "Create Lab" button.

[Image: Select create a lab from Home]

Next, a menu with different IDEs will be shown. As a quickstart, select Jupyter Lab. As Labs are development environments, you will also be asked which language you would prefer your environment to support: R or Python. Select Python.

[Image: Select an IDE]
[Image: Select language - Python or R]

Bundles are environments with pre-installed packages. Select YData bundle, so we can leverage some other Fabric features such as Data Profiling, Synthetic Data and Pipelines.

[Image: Select the bundle for development]

As a last step, you will be asked to configure the infrastructure resources for this new environment, as well as give it a Display Name. We will keep the defaults, but you have the flexibility to select GPU acceleration or whether you need more computational resources for your developments.

[Image: Select the computational resources]

Finally, your Lab will be created and added to the "Labs" list, as per the image below. The status of the lab will be 🟡 while it is being prepared; this process takes a few minutes, as the infrastructure is being allocated to your development environment. As soon as the status changes to 🟢, you can open your lab by clicking the button as shown below:

[Image: Open your lab environment]

Create a new notebook in the JupyterLab and give it a name. You are now ready to start your developments!

[Image: Create a new notebook]
[Image: Notebook created]
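Because we picked the YData bundle, profiling tooling comes pre-installed in the Lab. As a quick first experiment, a notebook cell along the lines of the sketch below will profile any dataset you have at hand (the CSV path is a placeholder, and the use of ydata-profiling assumes it ships with the bundle):

```python
# Quick sanity check for the new Lab: profile a small dataset.
# "my_dataset.csv" is a placeholder path; ydata-profiling is assumed to be
# part of the YData bundle selected earlier.
import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("my_dataset.csv")
profile = ProfileReport(df, title="First Lab profiling report")
profile.to_notebook_iframe()  # renders the interactive report inline in JupyterLab
```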

Congrats! 🚀 You have now successfully created your first Lab, a code environment, so you can benefit from the most advanced Fabric features as well as compose complex data workflows. Get ready for your journey of improved quality data for AI.

diff --git a/0.6/get-started/create_pipeline/index.html b/0.6/get-started/create_pipeline/index.html index d2753fc3..3b36d48b 100644 --- a/0.6/get-started/create_pipeline/index.html +++ b/0.6/get-started/create_pipeline/index.html @@ -1141,86 +1141,80 @@

How to create your first Pipeline

In this tutorial we will build a simple and generic pipeline that uses a Dataset from Fabric's Data Catalog and profiles it to check its quality. The notebook templates are already available. For that, you need to access the "Academy" folder as per the image below.

[Image: Academy folder]

Make sure to copy all the files in the folder "3 - Pipelines/quickstart" to the root folder of your lab, as per the image below.

[Image: Select your pipeline editor]

Now that we have our notebooks, we need to make a small change in the notebook "1. Read dataset". Go back to your Data Catalog, and from one of the datasets in your Catalog list, select the three vertical dots and click "Explore in Labs", as shown in the image below.

[Image: Explore the dataset in the labs]

The following screen will be shown. Click "Copy".

[Image: Dataset code snippet]

Now that we have copied the code, let's get back to our "1. Read data.ipynb" notebook and replace the first code cell with the new code. This will allow us to use a dataset from the Data Catalog in our pipeline.

[Image: Placeholder code]
[Image: Replaced with code snippet]
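The exact cell is generated by Fabric when you click "Copy", so always prefer that snippet. Purely as an illustration of what such a read-data cell does, here is a minimal sketch based on the Connector/DataSource pattern documented in the SDK section of these docs; the connector type, credentials path and bucket path are placeholders, not the code Fabric generates:

```python
# Illustrative sketch only -- paste the cell copied from "Explore in Labs" instead.
# Connector type, credentials file and bucket path below are placeholders.
from ydata.sdk.connectors import Connector
from ydata.sdk.datasources import DataSource

credentials = "{insert-credentials-file-path}"

# Create (or reuse) a connector pointing at where the dataset lives
connector = Connector(connector_type="gcs", credentials=credentials)

# Wrap it in a DataSource so the following pipeline steps can consume it
X = DataSource(connector=connector, path="gs://<my_bucket>.csv")
```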

With our notebooks ready, we can now configure our Pipeline. For this quickstart we will be leveraging an already existing pipeline - double-click the file my_first_pipeline.pipeline. You should see a pipeline as depicted in the images below. To create a new Pipeline, you can open the lab launcher tab and select "Pipeline Editor".

[Image: Open pipeline]
[Image: My first pipeline]

Before running the pipeline, we need to check each component/step's properties and configurations. Right-click each one of the steps, select "Open Properties", and a menu will be shown on the right side. Make sure that you have "YData - CPU" selected as the Runtime Image, as shown below.

[Image: Open properties]
[Image: Runtime image]

We are now ready to create and run our first pipeline. In the top left corner of the pipeline editor, the run button will be available for you to click.

[Image: Select your pipeline editor]

Accept the default values shown in the run dialog and start the run.

[Image: Pipeline configuration confirm dialog]

If the following message is shown, it means that you have created a run of your first pipeline.

[Image: Select your pipeline editor]

Now that you have created your first pipeline, you can select the Pipeline from Fabric's left side menu.

[Image: Select Fabric Pipelines]

Your most recent pipeline will be listed, as shown in the image below.

[Image: My first pipeline listed]

To check the run of your pipeline, jump into the "Run" tab. You will be able to see your first pipeline running!

[Image: Run pipeline]

By clicking on top of the record you will be able to see the progress of the run step-by-step, and visualize the outputs of each and every step by clicking on each step and selecting the Visualizations tab.

[Image: Run pipeline]

Congrats! 🚀 You have now successfully created your first Pipeline in a code environment, so you can benefit from Fabric's orchestration engine to create scalable, versionable and comparable data workflows. Get ready for your journey of improved quality data for AI.

diff --git a/0.6/get-started/create_syntheticdata_generator/index.html b/0.6/get-started/create_syntheticdata_generator/index.html index e1b47835..cffbcdb8 100644 --- a/0.6/get-started/create_syntheticdata_generator/index.html +++ b/0.6/get-started/create_syntheticdata_generator/index.html @@ -1138,10 +1138,7 @@

How to create your first Synthetic Data generator

With your first dataset created, you are now able to start the creation of your Synthetic Data generator. You can either select "Synthetic Data" from your left side menu, or you can select "Create Synthetic Data" in your project Home as shown in the image below.

[Image: Create Synthetic Data]

You'll be asked to select the dataset you wish to generate synthetic data from and verify the columns you'd like to include in the synthesis process, validating their Variable and Data Types.

@@ -1151,42 +1148,24 @@

Data Types are important to be revisited and aligned with the objectives for the synthetic data, as they can highly impact the quality of the generated data. For example, let's say we have a column that is a "Name": while in some situations it would make sense to consider it a String, in the light of a dataset where "Name" refers to the name of the products purchased, it might be more beneficial to set it as a Category.

[Image: Configure Metadata]

Finally, the last step of the process is the Synthetic Data specific configuration. For this particular case we only need to define a Display Name, and we can finish the process by clicking the "Save" button, as per the image below.

[Image: Save Synthetic Data configurations]

Your Synthetic Data generator is now training and listed under "Synthetic Data". While the model is being trained, the Status will be 🟡; as soon as the training is completed successfully, it will transition to 🟢, as per the image below.

[Image: Synthetic data generator trained successfully]

Once the Synthetic Data generator has finished training, you're ready to start generating your first synthetic dataset. You can start by exploring an overview of the model configurations and even download a PDF report with a comprehensive overview of your Synthetic Data Quality Metrics. Next, you can generate synthetic data samples by accessing the Generation tab or clicking "Go to Generation".

[Image: Synthetic data generator overview]

In this section, you are able to generate as many synthetic samples as you want. For that, you need to define the number of rows to generate and click "Generate", as depicted in the image below.

[Image: Generate synthetic data records]
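For reference, the "Generate" button has a programmatic counterpart in the SDK (covered later in these docs); using the example census dataset, the same request would look roughly like this:

```python
# Programmatic counterpart of the "Generate" button, sketched with the SDK
# examples used elsewhere in these docs. YDATA_TOKEN must be set beforehand.
import os
from ydata.sdk.dataset import get_dataset
from ydata.sdk.synthesizers import RegularSynthesizer

os.environ["YDATA_TOKEN"] = "<TOKEN>"

X = get_dataset("census")               # example dataset shipped with the SDK
synth = RegularSynthesizer()
synth.fit(X)                            # train the generator
sample = synth.sample(n_samples=1000)   # ask for 1000 synthetic rows
print(sample.shape)
```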

A new line in your "Sample History" will be shown, and as soon as the sample generation is completed you will be able to "Compare" your synthetic data with the original data, add it as a Dataset with "Add to Data Catalog" and, last but not least, download it as a file with "Download csv".

[Image: Synthetic data generator trained]

Congrats! 🚀 You have now successfully created your first Synthetic Data generator with Fabric. Get ready for your journey of improved quality data for AI.

diff --git a/0.6/search/search_index.json b/0.6/search/search_index.json index aedaa14b..f7c340f8 100644 --- a/0.6/search/search_index.json +++ b/0.6/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome","text":"

YData Fabric is a Data-Centric AI development platform that accelerates AI development by helping data practitioners achieve production-quality data.

Much like the quality of code is a must for the success of software development, Fabric accounts for the data quality requirements of data-driven applications. It introduces standards, processes, and acceleration to empower data science, analytics, and data engineering teams.

"},{"location":"#try-fabric","title":"Try Fabric","text":""},{"location":"#why-adopt-ydata-fabric","title":"Why adopt YData Fabric?","text":"

With Fabric, you can standardize the understanding of your data, quickly identify data quality issues, streamline and version your data preparation workflows and finally leverage synthetic data for privacy-compliance or as a tool to boost ML performance. Fabric is a development environment that supports a faster and easier process of preparing data for AI development. Data practitioners are using Fabric to:

"},{"location":"#key-features","title":"\ud83d\udcdd Key features","text":""},{"location":"#data-catalog","title":"Data Catalog","text":"

Fabric Data Catalog provides a centralized perspective on datasets within a project-basis, optimizing data management through seamless integration with the organization's existing data architectures via scalable connectors (e.g., MySQL, Google Cloud Storage, AWS S3). It standardizes data quality profiling, streamlining the processes of efficient data cleaning and preparation, while also automating the identification of Personally Identifiable Information (PII) to facilitate compliance with privacy regulations.

Explore how a Data Catalog helps you through a centralized repository of your datasets, schema validation, and automated data profiling.

"},{"location":"#labs","title":"Labs","text":"

Fabric's Labs environments provide collaborative, scalable, and secure workspaces layered on a flexible infrastructure, enabling users to seamlessly switch between CPUs and GPUs based on their computational needs. Labs are familiar environments that empower data developers with powerful IDEs (Jupyter Notebooks, Visual Code or H2O flow) and a seamless experience with the tools they already love combined with YData's cutting-edge SDK for data preparation.

Learn how to use the Labs to generate synthetic data in a familiar Python interface.

"},{"location":"#synthetic-data","title":"Synthetic data","text":"

Synthetic data, enabled by YData Fabric, provides data developers with user-friendly interfaces (UI and code) for generating artificial datasets, offering a versatile solution across formats like tabular, time-series and multi-table datasets. The generated synthetic data holds the same value as the original and aligns intricately with specific business rules, contributing to machine learning model enhancement, mitigation of privacy concerns and more robustness for data developments. Fabric offers synthetic data that is easy to adapt and configure, allowing customization of the privacy-utility trade-off.

Learn how to create high-quality synthetic data within a user-friendly UI using Fabric\u2019s data synthesis flow.

"},{"location":"#pipelines","title":"Pipelines","text":"

Fabric Pipelines streamlines data preparation workflows by automating, orchestrating, and optimizing data pipelines, providing benefits such as flexibility, scalability, monitoring, and reproducibility for efficient and reliable data processing. The intuitive drag-and-drop interface, leveraging Jupyter notebooks or Python scripts, expedites the pipeline setup process, providing data developers with a quick and user-friendly experience.

Explore how you can leverage Fabric Pipelines to build versionable and reproducible data preparation workflows for ML development.

"},{"location":"#tutorials","title":"Tutorials","text":"

To understand how to best apply Fabric to your use cases, start by exploring the following tutorials:

You can find additional examples and use cases at YData Academy GitHub Repository.

"},{"location":"#support","title":"\ud83d\ude4b Support","text":"

Facing an issue? We\u2019re committed to providing all the support you need to ensure a smooth experience using Fabric:

"},{"location":"examples/synthesize_tabular_data/","title":"Synthesize tabular data","text":"

Use YData's RegularSynthesizer to generate tabular synthetic data

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Do not forget to add your token as env variables\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined\ndef main():\n\"\"\"In this example, we demonstrate how to train a synthesizer from a pandas\n    DataFrame.\n    After training a Regular Synthesizer, we request a sample.\n    \"\"\"\nX = get_dataset('census')\n# We initialize a regular synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = RegularSynthesizer()\n# We train the synthesizer on our dataset\nsynth.fit(X)\n# We request a synthetic dataset with 50 rows\nsample = synth.sample(n_samples=50)\nprint(sample.shape)\nif __name__ == \"__main__\":\nmain()\n
"},{"location":"examples/synthesize_timeseries_data/","title":"Synthesize time-series data","text":"

Use YData's TimeSeriesSynthesizer to generate time-series synthetic data

Tabular data is the most common type of data we encounter in data problems.

When thinking about tabular data, we assume independence between different records, but this does not happen in reality. Suppose we check events from our day-to-day life, such as room temperature changes, bank account transactions, stock price fluctuations, and air quality measurements in our neighborhood. In that case, we might end up with datasets where measures and records evolve and are related through time. This type of data is known to be sequential or time-series data.

Thus, sequential or time-series data refers to any data containing elements ordered into sequences in a structured format. Dissecting any time-series dataset, we see differences in variables' behavior that need to be understood for an effective generation of synthetic data. Typically any time-series dataset is composed of the following:

Below find an example:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import TimeSeriesSynthesizer\n# Do not forget to add your token as env variable\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'\nX = get_dataset('occupancy')\n# We initialize a time series synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = TimeSeriesSynthesizer()\n# We train the synthesizer on our dataset\n# sortbykey -> variable that define the time order for the sequence\nsynth.fit(X, sortbykey='date')\n# By default it is requested a synthetic sample with the same length as the original data\n# The TimeSeriesSynthesizer is designed to replicate temporal series and therefore the original time-horizon is respected\nsample = synth.sample(n_entities=1)\n
"},{"location":"examples/synthesize_with_anonymization/","title":"Anonymization","text":"

YData Synthesizers offers a way to anonymize sensitive information such that the original values are not present in the synthetic data but replaced by fake values.

Does the model retain the original values?

No! The anonymization is performed before the model training such that it never sees the original values.

The anonymization is performed by specifying which columns need to be anonymized and how to perform the anonymization. The anonymization rules are defined as a dictionary with the following format:

{column_name: anonymization_rule}

While there are some predefined anonymization rules such as name, email, company, it is also possible to create a rule using a regular expression. The anonymization rules have to be passed to a synthesizer in its fit method using the parameter anonymize.

What is the difference between anonymization and privacy?

Anonymization makes sure sensitive information is hidden from the data. Privacy makes sure it is not possible to infer the original data points from the synthetic data points via statistical attacks.

Therefore, for data sharing anonymization and privacy controls are complementary.

The example below demonstrates how to anonymize the column Name by fake names and the column Ticket by a regular expression:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Do not forget to add your token as env variables\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined\ndef main():\n\"\"\"In this example, we demonstrate how to train a synthesizer from a pandas\n    DataFrame.\n    After training a Regular Synthesizer, we request a sample.\n    \"\"\"\nX = get_dataset('titanic')\n# We initialize a regular synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = RegularSynthesizer()\n# We define anonymization rules, which is a dictionary with format:\n# {column_name: anonymization_rule, ...}\n# while here are some predefined anonymization rules like: name, email, company\n# it is also possible to create a rule using a regular expression\nrules = {\n\"Name\": \"name\",\n\"Ticket\": \"[A-Z]{2}-[A-Z]{4}\"\n}\n# We train the synthesizer on our dataset\nsynth.fit(\nX,\nname=\"titanic_synthesizer\",\nanonymize=rules\n)\n# We request a synthetic dataset with 50 rows\nsample = synth.sample(n_samples=50)\nprint(sample[[\"Name\", \"Ticket\"]].head(3))\nif __name__ == \"__main__\":\nmain()\n

"},{"location":"examples/synthesize_with_conditional_sampling/","title":"Conditional sampling","text":"

YData Synthesizers support conditional sampling. The fit method has an optional parameter named condition_on, which receives a list of features to condition upon. Furthermore, the sample method receives the conditions to be applied through another optional parameter also named condition_on. For now, two types of conditions are supported:

The example below demonstrates how to train and sample from a synthesizer using conditional sampling:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Do not forget to add your token as env variables.\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined.\ndef main():\n\"\"\"In this example, we demonstrate how to train and\n    sample from a synthesizer using conditional sampling.\"\"\"\nX = get_dataset('census')\n# We initialize a regular synthesizer.\n# As long as the synthesizer does not call `fit`, it exists only locally.\nsynth = RegularSynthesizer()\n# We train the synthesizer on our dataset setting\n# the features to condition upon.\nsynth.fit(\nX,\nname=\"census_synthesizer\",\ncondition_on=[\"sex\", \"native-country\", \"age\"]\n)\n# We request a synthetic dataset with specific condition rules.\nsample = synth.sample(\nn_samples=500,\ncondition_on={\n\"sex\": {\n\"categories\": [\"Female\"]\n},\n\"native-country\": {\n\"categories\": [(\"United-States\", 0.6),\n(\"Mexico\", 0.4)]\n},\n\"age\": {\n\"minimum\": 55,\n\"maximum\": 60\n}\n}\n)\nprint(sample)\nif __name__ == \"__main__\":\nmain()\n
"},{"location":"examples/synthesize_with_privacy_control/","title":"Privacy control","text":"

YData Synthesizers offers 3 different levels of privacy:

  1. high privacy: the model is optimized for privacy purposes,
  2. high fidelity (default): the model is optimized for high fidelity,
  3. balanced: tradeoff between privacy and fidelity.

The default privacy level is high fidelity. The privacy level can be changed by the user at the moment a synthesizer is trained, by using the parameter privacy_level. The parameter expects a PrivacyLevel value.

What is the difference between anonymization and privacy?

Anonymization makes sure sensitive information is hidden from the data. Privacy makes sure it is not possible to infer the original data points from the synthetic data points via statistical attacks.

Therefore, for data sharing anonymization and privacy controls are complementary.

The example below demonstrates how to train a synthesizer configured for high privacy:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import PrivacyLevel, RegularSynthesizer\n# Do not forget to add your token as env variables\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined\ndef main():\n\"\"\"In this example, we demonstrate how to train a synthesizer\n    with a high-privacy setting from a pandas DataFrame.\n    After training a Regular Synthesizer, we request a sample.\n    \"\"\"\nX = get_dataset('titanic')\n# We initialize a regular synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = RegularSynthesizer()\n# We train the synthesizer on our dataset setting the privacy level to high\nsynth.fit(\nX,\nname=\"titanic_synthesizer\",\nprivacy_level=PrivacyLevel.HIGH_PRIVACY\n)\n# We request a synthetic dataset with 50 rows\nsample = synth.sample(n_samples=50)\nprint(sample)\nif __name__ == \"__main__\":\nmain()\n
"},{"location":"get-started/","title":"Get started with Fabric","text":"

This Get started guide is here to help you if you are not yet familiar with YData Fabric, or if you just want to learn more about data quality, data preparation workflows and how you can start leveraging synthetic data. Mention to YData Fabric Community

"},{"location":"get-started/#create-your-first-data-with-the-data-catalog","title":"\ud83d\udcda Create your first Data with the Data Catalog","text":""},{"location":"get-started/#create-your-first-synthetic-data-generator","title":"\u2699\ufe0f Create your first Synthetic Data generator","text":""},{"location":"get-started/#create-your-first-lab","title":"\ud83e\uddea Create your first Lab","text":""},{"location":"get-started/#create-your-first-data-pipeline","title":"\ud83c\udf00 Create your first data Pipeline","text":""},{"location":"get-started/create_lab/","title":"How to create your first Lab environment","text":"

Labs are code environments for a more flexible development of data-driven solutions while leveraging Fabric capabilities combined with already loved tools such as scikit-learn, numpy and pandas. To create your first Lab, you can use the \u201cCreate Lab\u201d from Fabric\u2019s home, or you can access it from the Labs module by selecting it on the left side menu, and clicking the \u201cCreate Lab\u201d button.

Next, a menu with different IDEs will be shown. As a quickstart select Jupyter Lab. As labs are development environments you will be also asked what language you would prefer your environment to support: R or Python. Select Python.

Bundles are environments with pre-installed packages. Select YData bundle, so we can leverage some other Fabric features such as Data Profiling, Synthetic Data and Pipelines.

As a last step, you will be asked to configure the infrastructure resources for this new environment as well as giving it a Display Name. We will keep the defaults, but you have flexibility to select GPU acceleration or whether you need more computational resources for your developments.

Finally, your Lab will be created and added to the \"Labs\" list, as per the image below. The status of the lab will be \ud83d\udfe1 while preparing, and this process takes a few minutes, as the infrastructure is being allocated to your development environment. As soon as the status changes to \ud83d\udfe2, you can open your lab by clicking in the button as shown below:

Create a new notebook in the JupyterLab and give it a name. You are now ready to start your developments!

Congrats! \ud83d\ude80 You have now successfully created your first Lab, a code environment, so you can benefit from the most advanced Fabric features as well as compose complex data workflows. Get ready for your journey of improved quality data for AI.

"},{"location":"get-started/create_pipeline/","title":"How to create your first Pipeline","text":"

Check this quickstart video on how to create your first Pipeline.

The best way to get started with Pipelines is to use the interactive Pipeline editor available in the Labs with Jupyter Lab set as IDE. If you don't have a Lab yet, or you don't know how to create one, check our quickstart guide on how to create your first lab.

Open an already existing lab.

A Pipeline comprises one or more nodes that are connected (or not!) with each other to define execution dependencies. Each pipeline node is and should be implemented as a component that is expected to manage a single task, such as reading the data, profiling the data, training a model, or even publishing a model to production environments.

In this tutorial we will build a simple and generic pipeline that uses a Dataset from Fabric's Data Catalog and profiles it to check its quality. The notebook templates are already available. For that, you need to access the \"Academy\" folder as per the image below.

Make sure to copy all the files in the folder \"3 - Pipelines/quickstart\" to the root folder of your lab, as per the image below.

Now that we have our notebooks we need to make a small change in the notebook \"1. Read dataset\". Go back to your Data Catalog, from one of the datasets in your Catalog list, select the three vertical dots and click in \"Explore in Labs\" as shown in the image below.

The following screen will be shown. Click \"Copy\".

Now that we have copied the code, let's get back to our \"1. Read data.ipynb\" notebook and replace the first code cell with the new code. This will allow us to use a dataset from the Data Catalog in our pipeline.

With our notebooks ready, we can now configure our Pipeline. For this quickstart we will be leveraging an already existing pipeline - double-click the file my_first_pipeline.pipeline. You should see a pipeline as depicted in the images below. To create a new Pipeline, you can open the lab launcher tab and select \"Pipeline Editor\".

Before running the pipeline, we need to check each component/step's properties and configurations. Right-click each one of the steps, select \"Open Properties\", and a menu will be shown on the right side. Make sure that you have \"YData - CPU\" selected as the Runtime Image, as shown below.

We are now ready to create and run our first pipeline. In the top left corner of the pipeline editor, the run button will be available for you to click.

Accept the default values shown in the run dialog and start the run

If the following message is shown, it means that you have created a run of your first pipeline.

Now that you have created your first pipeline, you can select the Pipeline from Fabric's left side menu.

Your most recent pipeline will be listed, as shown in below image.

To check the run of your pipeline, jump into the \"Run\" tab. You will be able to see your first pipeline running!

By clicking on top of the record you will be able to see the progress of the run step-by-step, and visualize the outputs of each and every step by clicking on each step and selecting the Visualizations tab.

Congrats! \ud83d\ude80 You have now successfully created your first Pipeline in a code environment, so you can benefit from Fabric's orchestration engine to create scalable, versionable and comparable data workflows. Get ready for your journey of improved quality data for AI.

"},{"location":"get-started/create_syntheticdata_generator/","title":"How to create your first Synthetic Data generator","text":"

Check this quickstart video on how to create your first Synthetic Data generator.

To generate your first synthetic data, you need to have a Dataset already available in your Data Catalog. Check this tutorial to see how you can add your first dataset to Fabric\u2019s Data Catalog.

With your first dataset created, you are now able to start the creation of your Synthetic Data generator. You can either select \"Synthetic Data\" from your left side menu, or you can select \"Create Synthetic Data\" in your project Home as shown in the image below.

You'll be asked to select the dataset you wish to generate synthetic data from and verify the columns you'd like to include in the synthesis process, validating their Variable and Data Types.

Data types are relevant for synthetic data quality

Data Types are important to be revisited and aligned with the objectives for the synthetic data, as they can highly impact the quality of the generated data. For example, let's say we have a column that is a \"Name\": while in some situations it would make sense to consider it a String, in the light of a dataset where \"Name\" refers to the name of the products purchased, it might be more beneficial to set it as a Category.

Finally, the last step of the process is the Synthetic Data specific configuration. For this particular case we only need to define a Display Name, and we can finish the process by clicking the \"Save\" button, as per the image below.

Your Synthetic Data generator is now training and listed under \"Synthetic Data\". While the model is being trained, the Status will be \ud83d\udfe1, as soon as the training is completed successfully it will transition to \ud83d\udfe2 as per the image below.

Once the Synthetic Data generator has finished training, you're ready to start generating your first synthetic dataset. You can start by exploring an overview of the model configurations and even download a PDF report with a comprehensive overview of your Synthetic Data Quality Metrics. Next, you can generate synthetic data samples by accessing the Generation tab or click on \"Go to Generation\".

In this section, you are able to generate as many synthetic samples as you want. For that, you need to define the number of rows to generate and click \"Generate\", as depicted in the image below.

A new line in your \"Sample History\" will be shown and as soon as the sample generation is completed you will be able to \"Compare\" your synthetic data with the original data, add as a Dataset with \"Add to Data Catalog\" and last but not the least download it as a file with \"Download csv\".

Congrats! \ud83d\ude80 You have now successfully created your first Synthetic Data generator with Fabric. Get ready for your journey of improved quality data for AI.

"},{"location":"get-started/fabric_community/","title":"Get started with Fabric Community","text":"

Fabric Community is a SaaS version that allows you to explore all the functionalities of Fabric first-hand: free, forever, for everyone. You\u2019ll be able to validate your data quality with automated profiling, unlock data sharing and improve your ML models with synthetic data, and increase your productivity with seamless integration:

"},{"location":"get-started/fabric_community/#register","title":"Register","text":"

To register for Fabric Community:

Once you login, you'll access the Home page and get started with your data preparation!

"},{"location":"get-started/upload_csv/","title":"How to create your first Dataset from a CSV file","text":"

Check this quickstart video on how to create your first Dataset from a CSV file.

To create your first dataset in the Data Catalog, you can start by clicking on \"Add Dataset\" from the Home section. Or click to Data Catalog (on the left side menu) and click \u201cAdd Dataset\u201d.

After that the below modal will be shown. You will need to select a connector. To upload a CSV file, we need to select \u201cUpload CSV\u201d.

Once you've selected the \u201cUpload CSV\u201d connector, a new screen will appear, enabling you to upload your file and designate a name for your connector. This file upload connector will subsequently empower you to create one or more datasets from the same file at a later stage.

Loading area Upload csv file

With the Connector created, you'll be able to add a dataset and specify its properties:

Your created Connector (\u201cCensus File\u201d) and Dataset (\u201cCensus\u201d) will be added to the Data Catalog. As soon as the status is green, you can navigate your Dataset. Click in Open Dataset as per the image below.

Within the Dataset details, you can gain valuable insights through our automated data quality profiling. This includes comprehensive metadata and an overview of your data, encompassing details like row count, identification of duplicates, and insights into the overall quality of your dataset.

Or perhaps you want to further explore, through visualization, the profile of your data with both univariate and multivariate views of your data.

Congrats! \ud83d\ude80 You have now successfully created your first Connector and Dataset in Fabric\u2019s Data Catalog. Get ready for your journey of improved quality data for AI.

"},{"location":"sdk/","title":"Overview","text":"

YData SDK for improved data quality everywhere!

ydata-sdk is here! Create a YData account so you can start using today!

Create account

"},{"location":"sdk/#overview","title":"Overview","text":"

The YData SDK is an ecosystem of methods that allows users, through a python interface, to adopt a Data-Centric approach towards AI development. The solution includes a set of integrated components for data ingestion, standardized data quality evaluation and data improvement, such as synthetic data generation, allowing an iterative improvement of the datasets used in high-impact business applications.

Synthetic data can be used as Machine Learning performance enhancer, to augment or mitigate the presence of bias in real data. Furthermore, it can be used as a Privacy Enhancing Technology, to enable data-sharing initiatives or even to fuel testing environments.

Under the YData-SDK hood, you can find a set of algorithms and metrics based on statistics and deep learning based techniques, that will help you to accelerate your data preparation.

"},{"location":"sdk/#current-functionality","title":"Current functionality","text":"

YData SDK is currently composed of the following main modules:

"},{"location":"sdk/#supported-data-formats","title":"Supported data formats","text":"TabularTime-SeriesTransactionalRelational databases

The RegularSynthesizer is perfect to synthesize high-dimensional data that is time-independent, with high quality results.

Know more

The TimeSeriesSynthesizer is perfect to synthesize both regularly and not evenly spaced time-series, from smart-sensors to stock.

Know more

The TimeSeriesSynthesizer supports transactional data, known to have highly irregular time intervals between records and directional relations between entities.

Coming soon

Know more

The MultiTableSynthesizer is perfect to learn how to replicate the data within a relational database schema.

Coming soon

Know more

"},{"location":"sdk/installation/","title":"Installation","text":"

YData SDK is generally available through both PyPI and Conda, allowing an easy installation process. This experience allows combining YData SDK with other packages such as Pandas, Numpy or Scikit-Learn.

YData SDK is available for the public through a token-based authentication system. If you don\u2019t have one yet, you can get your free license key during the installation process. You can check what features are available in the free version here.

"},{"location":"sdk/installation/#installing-the-package","title":"Installing the package","text":"

YData SDK supports Python versions greater than 3.8, and can be installed on Windows, Linux or MacOS operating systems.

Prior to the package installation, it is recommended to create a virtual or conda environment:

pyenv
pyenv virtualenv 3.10 ydatasdk\n

And install ydata-sdk

pypi
pip install ydata-sdk\n
"},{"location":"sdk/installation/#authentication","title":"Authentication","text":"

Once you've installed ydata-sdk package you will need a token to run the functionalities. YData SDK uses a token based authentication system. To get access to your token, you need to create a YData account.

YData SDK offers a free-trial and an enterprise version. To access your free-trial token, you need to create a YData account.

The token will be available here, after login:

With your account token copied, you can set a new environment variable YDATA_TOKEN at the beginning of your development session.

    import os\nos.environ['YDATA_TOKEN'] = '{add-your-token}'\n

Once you have set your token, you are good to go to start exploring the incredible world of data-centric AI and smart synthetic data generation!

Check out our quickstart guide!

"},{"location":"sdk/quickstart/","title":"Quickstart","text":"

YData SDK allows you, with an easy and familiar interface, to adopt a Data-Centric AI approach for the development of Machine Learning solutions. YData SDK features were designed to support structured data, including tabular data, time-series and transactional data.

"},{"location":"sdk/quickstart/#read-data","title":"Read data","text":"

To start leveraging the package features you should consume your data either through the Connectors or pandas.Dataframe. The list of available connectors can be found here [add a link].

From pandas dataframeFrom a connector
    # Example for a Google Cloud Storage Connector\ncredentials = \"{insert-credentials-file-path}\"\n# We create a new connector for Google Cloud Storage\nconnector = Connector(connector_type='gcs', credentials=credentials)\n# Create a Datasource from the connector\n# Note that a connector can be re-used for several datasources\nX = DataSource(connector=connector, path='gs://<my_bucket>.csv')\n
    # Load a small dataset\nX = pd.read_csv('{insert-file-path.csv}')\n# Init a synthesizer\nsynth = RegularSynthesizer()\n# Train the synthesizer with the pandas Dataframe as input\n# The data is then sent to the cluster for processing\nsynth.fit(X)\n

The synthesis process returns a pandas.DataFrame object. Note that if you are using the ydata-sdk free version, all of your data is sent to a remote cluster on YData's infrastructure.

"},{"location":"sdk/quickstart/#data-synthesis-flow","title":"Data synthesis flow","text":"

The process of data synthesis can be described in the following steps:

stateDiagram-v2\n  state read_data\n  read_data --> init_synth\n  init_synth --> train_synth\n  train_synth --> generate_samples\n  generate_samples --> [*]

The code snippet below shows how easy it can be to start generating new synthetic data. The package includes a set of example datasets for a quickstart.

    from ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# read the example data\nX = get_dataset('census')\n# Init a synthesizer\nsynth = RegularSynthesizer()\n# Fit the synthesizer to the input data\nsynth.fit(X)\n# Sample new synthetic data. The request below asks for 1000 new synthetic rows\nsynth.sample(n_samples=1000)\n

Do I need to prepare my data before synthesis?

The sdk ensures that the original behaviour is replicated. For that reason, there is no need to preprocess outlier observations or missing data.

By default all the missing data is replicated as NaN.

"},{"location":"sdk/modules/connectors/","title":"Connectors","text":"

YData SDK allows users to consume data assets from remote storages through Connectors. YData Connectors support different types of storages, from filesystems to RDBMS'.

Below the list of available connectors:

Connector Name Type Supported File Types Useful Links Notes AWS S3 Remote object storage CSV, Parquet https://aws.amazon.com/s3/ Google Cloud Storage Remote object storage CSV, Parquet https://cloud.google.com/storage Azure Blob Storage Remote object storage CSV, Parquet https://azure.microsoft.com/en-us/services/storage/blobs/ File Upload Local CSV - Maximum file size is 220MB. Bigger files should be uploaded and read from remote object storages MySQL RDBMS Not applicable https://www.mysql.com/ Supports reading whole schemas or specifying a query Azure SQL Server RDBMS Not applicable https://azure.microsoft.com/en-us/services/sql-database/campaign/ Supports reading whole schemas or specifying a query PostgreSQL RDBMS Not applicable https://www.postgresql.org/ Supports reading whole schemas or specifying a query Snowflake RDBMS Not applicable https://docs.snowflake.com/en/sql-reference-commands Supports reading whole schemas or specifying a query Google BigQuery Data warehouse Not applicable https://cloud.google.com/bigquery Azure Data Lake Data lake CSV, Parquet https://azure.microsoft.com/en-us/services/storage/data-lake-storage/

More details can be found at the Connectors API Reference Docs.

"},{"location":"sdk/modules/synthetic_data/","title":"Synthetic data generation","text":""},{"location":"sdk/modules/synthetic_data/#data-formats","title":"Data formats","text":""},{"location":"sdk/modules/synthetic_data/#tabular-data","title":"Tabular data","text":""},{"location":"sdk/modules/synthetic_data/#time-series-data","title":"Time-series data","text":""},{"location":"sdk/modules/synthetic_data/#transactions-data","title":"Transactions data","text":""},{"location":"sdk/modules/synthetic_data/#best-practices","title":"Best practices","text":""},{"location":"sdk/reference/api/common/client/","title":"Get client","text":"

Deduce how to initialize or retrieve the client.

This is meant to be a zero configuration for the user.

Create and set a client globally
from ydata.sdk.client import get_client\nget_client(set_as_global=True)\n

Parameters:

Name Type Description Default client_or_creds Optional[Union[Client, dict, str, Path]]

Client to forward or credentials for initialization

None set_as_global bool

If True, set client as global

False wait_for_auth bool

If True, wait for the user to authenticate

True

Returns:

Type Description Client

Client instance

Source code in ydata/sdk/common/client/utils.py
def get_client(client_or_creds: Optional[Union[Client, Dict, str, Path]] = None, set_as_global: bool = False, wait_for_auth: bool = True) -> Client:\n\"\"\"Deduce how to initialize or retrieve the client.\n    This is meant to be a zero configuration for the user.\n    Example: Create and set a client globally\n            ```py\n            from ydata.sdk.client import get_client\n            get_client(set_as_global=True)\n            ```\n    Args:\n        client_or_creds (Optional[Union[Client, dict, str, Path]]): Client to forward or credentials for initialization\n        set_as_global (bool): If `True`, set client as global\n        wait_for_auth (bool): If `True`, wait for the user to authenticate\n    Returns:\n        Client instance\n    \"\"\"\nclient = None\nglobal WAITING_FOR_CLIENT\ntry:\n# If a client instance is set globally, return it\nif not set_as_global and Client.GLOBAL_CLIENT is not None:\nreturn Client.GLOBAL_CLIENT\n# Client exists, forward it\nif isinstance(client_or_creds, Client):\nreturn client_or_creds\n# Explicit credentials\n''' # For the first version, we deactivate explicit credentials via string or file for env var only\n        if isinstance(client_or_creds, (dict, str, Path)):\n            if isinstance(client_or_creds, str):  # noqa: SIM102\n                if Path(client_or_creds).is_file():\n                    client_or_creds = Path(client_or_creds)\n            if isinstance(client_or_creds, Path):\n                client_or_creds = json.loads(client_or_creds.open().read())\n            return Client(credentials=client_or_creds)\n        # Last try with environment variables\n        #if client_or_creds is None:\n        client = _client_from_env(wait_for_auth=wait_for_auth)\n        '''\ncredentials = environ.get(TOKEN_VAR)\nif credentials is not None:\nclient = Client(credentials=credentials)\nexcept ClientHandshakeError as e:\nwait_for_auth = False  # For now deactivate wait_for_auth until the backend is ready\nif wait_for_auth:\nWAITING_FOR_CLIENT = True\nstart = time()\nlogin_message_printed = False\nwhile client is None:\nif not login_message_printed:\nprint(\nf\"The token needs to be refreshed - please validate your token by browsing at the following URL:\\n\\n\\t{e.auth_link}\")\nlogin_message_printed = True\nwith suppress(ClientCreationError):\nsleep(BACKOFF)\nclient = get_client(wait_for_auth=False)\nnow = time()\nif now - start > CLIENT_INIT_TIMEOUT:\nWAITING_FOR_CLIENT = False\nbreak\nif client is None and not WAITING_FOR_CLIENT:\nsys.tracebacklimit = None\nraise ClientCreationError\nreturn client\n

Main Client class used to abstract the connection to the backend.

A normal user should not have to instanciate a Client by itself. However, in the future it will be useful for power-users to manage projects and connections.

Parameters:

Name Type Description Default credentials Optional[dict]

(optional) Credentials to connect

None project Optional[Project]

(optional) Project to connect to. If not specified, the client will connect to the default user's project.

None Source code in ydata/sdk/common/client/client.py
@typechecked\nclass Client(metaclass=SingletonClient):\n\"\"\"Main Client class used to abstract the connection to the backend.\n    A normal user should not have to instanciate a [`Client`][ydata.sdk.common.client.Client] by itself.\n    However, in the future it will be useful for power-users to manage projects and connections.\n    Args:\n        credentials (Optional[dict]): (optional) Credentials to connect\n        project (Optional[Project]): (optional) Project to connect to. If not specified, the client will connect to the default user's project.\n    \"\"\"\ncodes = codes\ndef __init__(self, credentials: Optional[Union[str, Dict]] = None, project: Optional[Project] = None, set_as_global: bool = False):\nself._base_url = environ.get(\"YDATA_BASE_URL\", DEFAULT_URL)\nself._scheme = 'https'\nself._headers = {'Authorization': credentials}\nself._http_client = httpClient(\nheaders=self._headers, timeout=Timeout(10, read=None))\nself._handshake()\nself._project = project if project is not None else self._get_default_project(\ncredentials)\nself.project = project\nif set_as_global:\nself.__set_global()\ndef post(self, endpoint: str, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"POST request to the backend.\n        Args:\n            endpoint (str): POST endpoint\n            data (Optional[dict]): (optional) multipart form data\n            json (Optional[dict]): (optional) json data\n            files (Optional[dict]): (optional) files to be sent\n            raise_for_status (bool): raise an exception on error\n        Returns:\n            Response object\n        \"\"\"\nurl_data = self.__build_url(endpoint, data=data, json=json, files=files)\nresponse = self._http_client.post(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\ndef get(self, endpoint: str, params: Optional[Dict] = None, cookies: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"GET request to the backend.\n        Args:\n            endpoint (str): GET endpoint\n            cookies (Optional[dict]): (optional) cookies data\n            raise_for_status (bool): raise an exception on error\n        Returns:\n            Response object\n        \"\"\"\nurl_data = self.__build_url(endpoint, params=params, cookies=cookies)\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\ndef get_static_file(self, endpoint: str, raise_for_status: bool = True) -> Response:\n\"\"\"Retrieve a static file from the backend.\n        Args:\n            endpoint (str): GET endpoint\n            raise_for_status (bool): raise an exception on error\n        Returns:\n            Response object\n        \"\"\"\nurl_data = self.__build_url(endpoint)\nurl_data['url'] = f'{self._scheme}://{self._base_url}/static-content{endpoint}'\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\ndef _handshake(self):\n\"\"\"Client handshake.\n        It is used to determine is the client can connect with its\n        current authorization token.\n        \"\"\"\nresponse = self.get('/profiles', params={}, raise_for_status=False)\nif response.status_code == Client.codes.FOUND:\nparser = LinkExtractor()\nparser.feed(response.text)\nraise 
ClientHandshakeError(auth_link=parser.link)\ndef _get_default_project(self, token: str):\nresponse = self.get('/profiles/me', params={}, cookies={'access_token': token})\ndata: Dict = response.json()\nreturn data['myWorkspace']\ndef __build_url(self, endpoint: str, params: Optional[Dict] = None, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, cookies: Optional[Dict] = None) -> Dict:\n\"\"\"Build a request for the backend.\n        Args:\n            endpoint (str): backend endpoint\n            params (Optional[dict]): URL parameters\n            data (Optional[Project]): (optional) multipart form data\n            json (Optional[dict]): (optional) json data\n            files (Optional[dict]): (optional) files to be sent\n            cookies (Optional[dict]): (optional) cookies data\n        Returns:\n            dictionary containing the information to perform a request\n        \"\"\"\n_params = params if params is not None else {\n'ns': self._project\n}\nurl_data = {\n'url': f'{self._scheme}://{self._base_url}/api{endpoint}',\n'headers': self._headers,\n'params': _params,\n}\nif data is not None:\nurl_data['data'] = data\nif json is not None:\nurl_data['json'] = json\nif files is not None:\nurl_data['files'] = files\nif cookies is not None:\nurl_data['cookies'] = cookies\nreturn url_data\ndef __set_global(self) -> None:\n\"\"\"Sets a client instance as global.\"\"\"\n# If the client is stateful, close it gracefully!\nClient.GLOBAL_CLIENT = self\ndef __raise_for_status(self, response: Response) -> None:\n\"\"\"Raise an exception if the response is not OK.\n        When an exception is raised, we try to convert it to a ResponseError which is\n        a wrapper around a backend error. This usually gives enough context and provides\n        nice error message.\n        If it cannot be converted to ResponseError, it is re-raised.\n        Args:\n            response (Response): response to analyze\n        \"\"\"\ntry:\nresponse.raise_for_status()\nexcept HTTPStatusError as e:\nwith suppress(Exception):\ne = ResponseError(**response.json())\nraise e\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.__build_url","title":"__build_url(endpoint, params=None, data=None, json=None, files=None, cookies=None)","text":"

Build a request for the backend.

Parameters:

Name Type Description Default endpoint str

backend endpoint

required params Optional[dict]

URL parameters

None data Optional[Project]

(optional) multipart form data

None json Optional[dict]

(optional) json data

None files Optional[dict]

(optional) files to be sent

None cookies Optional[dict]

(optional) cookies data

None

Returns:

Type Description Dict

dictionary containing the information to perform a request

Source code in ydata/sdk/common/client/client.py
def __build_url(self, endpoint: str, params: Optional[Dict] = None, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, cookies: Optional[Dict] = None) -> Dict:\n\"\"\"Build a request for the backend.\n    Args:\n        endpoint (str): backend endpoint\n        params (Optional[dict]): URL parameters\n        data (Optional[Project]): (optional) multipart form data\n        json (Optional[dict]): (optional) json data\n        files (Optional[dict]): (optional) files to be sent\n        cookies (Optional[dict]): (optional) cookies data\n    Returns:\n        dictionary containing the information to perform a request\n    \"\"\"\n_params = params if params is not None else {\n'ns': self._project\n}\nurl_data = {\n'url': f'{self._scheme}://{self._base_url}/api{endpoint}',\n'headers': self._headers,\n'params': _params,\n}\nif data is not None:\nurl_data['data'] = data\nif json is not None:\nurl_data['json'] = json\nif files is not None:\nurl_data['files'] = files\nif cookies is not None:\nurl_data['cookies'] = cookies\nreturn url_data\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.__raise_for_status","title":"__raise_for_status(response)","text":"

Raise an exception if the response is not OK.

When an exception is raised, we try to convert it to a ResponseError which is a wrapper around a backend error. This usually gives enough context and provides nice error message.

If it cannot be converted to ResponseError, it is re-raised.

Parameters:

Name Type Description Default response Response

response to analyze

required Source code in ydata/sdk/common/client/client.py
def __raise_for_status(self, response: Response) -> None:\n\"\"\"Raise an exception if the response is not OK.\n    When an exception is raised, we try to convert it to a ResponseError which is\n    a wrapper around a backend error. This usually gives enough context and provides\n    nice error message.\n    If it cannot be converted to ResponseError, it is re-raised.\n    Args:\n        response (Response): response to analyze\n    \"\"\"\ntry:\nresponse.raise_for_status()\nexcept HTTPStatusError as e:\nwith suppress(Exception):\ne = ResponseError(**response.json())\nraise e\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.__set_global","title":"__set_global()","text":"

Sets a client instance as global.

Source code in ydata/sdk/common/client/client.py
def __set_global(self) -> None:\n\"\"\"Sets a client instance as global.\"\"\"\n# If the client is stateful, close it gracefully!\nClient.GLOBAL_CLIENT = self\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.get","title":"get(endpoint, params=None, cookies=None, raise_for_status=True)","text":"

GET request to the backend.

Parameters:

Name Type Description Default endpoint str

GET endpoint

required cookies Optional[dict]

(optional) cookies data

None raise_for_status bool

raise an exception on error

True

Returns:

Type Description Response

Response object

Source code in ydata/sdk/common/client/client.py
def get(self, endpoint: str, params: Optional[Dict] = None, cookies: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"GET request to the backend.\n    Args:\n        endpoint (str): GET endpoint\n        cookies (Optional[dict]): (optional) cookies data\n        raise_for_status (bool): raise an exception on error\n    Returns:\n        Response object\n    \"\"\"\nurl_data = self.__build_url(endpoint, params=params, cookies=cookies)\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.get_static_file","title":"get_static_file(endpoint, raise_for_status=True)","text":"

Retrieve a static file from the backend.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| endpoint | str | GET endpoint | required |
| raise_for_status | bool | raise an exception on error | True |

Returns:

| Type | Description |
| --- | --- |
| Response | Response object |

Source code in ydata/sdk/common/client/client.py
def get_static_file(self, endpoint: str, raise_for_status: bool = True) -> Response:
    """Retrieve a static file from the backend.

    Args:
        endpoint (str): GET endpoint
        raise_for_status (bool): raise an exception on error

    Returns:
        Response object
    """
    url_data = self.__build_url(endpoint)
    url_data['url'] = f'{self._scheme}://{self._base_url}/static-content{endpoint}'
    response = self._http_client.get(**url_data)
    if response.status_code != Client.codes.OK and raise_for_status:
        self.__raise_for_status(response)
    return response
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.post","title":"post(endpoint, data=None, json=None, files=None, raise_for_status=True)","text":"

POST request to the backend.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| endpoint | str | POST endpoint | required |
| data | Optional[dict] | (optional) multipart form data | None |
| json | Optional[dict] | (optional) json data | None |
| files | Optional[dict] | (optional) files to be sent | None |
| raise_for_status | bool | raise an exception on error | True |

Returns:

| Type | Description |
| --- | --- |
| Response | Response object |

Source code in ydata/sdk/common/client/client.py
def post(self, endpoint: str, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, raise_for_status: bool = True) -> Response:
    """POST request to the backend.

    Args:
        endpoint (str): POST endpoint
        data (Optional[dict]): (optional) multipart form data
        json (Optional[dict]): (optional) json data
        files (Optional[dict]): (optional) files to be sent
        raise_for_status (bool): raise an exception on error

    Returns:
        Response object
    """
    url_data = self.__build_url(endpoint, data=data, json=json, files=files)
    response = self._http_client.post(**url_data)
    if response.status_code != Client.codes.OK and raise_for_status:
        self.__raise_for_status(response)
    return response
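For orientation, here is a minimal sketch of how these low-level helpers are typically called. It assumes `client` is an already-initialized and authenticated `Client` instance (the SDK normally manages this for you); the POST payload below is a hypothetical, incomplete placeholder, while the `/connector` endpoints mirror the calls made elsewhere in this reference.

```python
# Sketch only: `client` is assumed to be an authenticated Client instance.

# GET: list connectors (same endpoint used by Connector.list in this reference)
response = client.get('/connector', raise_for_status=True)
print(response.json())

# POST: create a resource; a real payload is built for you by higher-level
# classes such as Connector.create, so the dict below is illustrative only
payload = {"name": "example-connector"}
response = client.post('/connector/', json=payload)
print(response.status_code)
```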
"},{"location":"sdk/reference/api/common/types/","title":"Types","text":""},{"location":"sdk/reference/api/connectors/connector/","title":"Connector","text":"

Bases: ModelFactoryMixin

A Connector allows you to connect to and access data stored in various places. The list of available connectors can be found here.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| connector_type | Union[ConnectorType, str] | Type of the connector to be created | None |
| credentials | dict | Connector credentials | None |
| name | Optional[str] | (optional) Connector name | None |
| client | Client | (optional) Client to connect to the backend | None |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| uid | UID | UID of the connector instance (created internally) |
| type | ConnectorType | Type of the connector |

Source code in ydata/sdk/connectors/connector.py
class Connector(ModelFactoryMixin):\n\"\"\"A [`Connector`][ydata.sdk.connectors.Connector] allows to connect and\n    access data stored in various places. The list of available connectors can\n    be found [here][ydata.sdk.connectors.ConnectorType].\n    Arguments:\n        connector_type (Union[ConnectorType, str]): Type of the connector to be created\n        credentials (dict): Connector credentials\n        name (Optional[str]): (optional) Connector name\n        client (Client): (optional) Client to connect to the backend\n    Attributes:\n        uid (UID): UID fo the connector instance (creating internally)\n        type (ConnectorType): Type of the connector\n    \"\"\"\ndef __init__(self, connector_type: Union[ConnectorType, str] = None, credentials: Optional[Dict] = None,  name: Optional[str] = None, client: Optional[Client] = None):\nself._init_common(client=client)\nself._model: Optional[mConnector] = self._create_model(\nconnector_type, credentials, name, client=client)\n@init_client\ndef _init_common(self, client: Optional[Client] = None):\nself._client = client\nself._logger = create_logger(__name__, level=LOG_LEVEL)\n@property\ndef uid(self) -> UID:\nreturn self._model.uid\n@property\ndef type(self) -> str:\nreturn self._model.type\n@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Get an existing connector.\n        Arguments:\n            uid (UID): Connector identifier\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            Connector\n        \"\"\"\nconnectors: ConnectorsList = Connector.list(client=client)\ndata = connectors.get_by_uid(uid)\nmodel = mConnector(**data)\nconnector = ModelFactoryMixin._init_from_model_data(Connector, model)\nreturn connector\n@staticmethod\ndef _init_connector_type(connector_type: Union[ConnectorType, str]) -> ConnectorType:\nif isinstance(connector_type, str):\ntry:\nconnector_type = ConnectorType(connector_type)\nexcept Exception:\nc_list = \", \".join([c.value for c in ConnectorType])\nraise InvalidConnectorError(\nf\"ConnectorType '{connector_type}' does not exist.\\nValid connector types are: {c_list}.\")\nreturn connector_type\n@staticmethod\ndef _init_credentials(connector_type: ConnectorType, credentials: Union[str, Path, Dict, Credentials]) -> Credentials:\n_credentials = None\nif isinstance(credentials, str):\ncredentials = Path(credentials)\nif isinstance(credentials, Path):\ntry:\n_credentials = json_loads(credentials.open().read())\nexcept Exception:\nraise CredentialTypeError(\n'Could not read the credentials. Please, check your path or credentials structure.')\ntry:\nfrom ydata.sdk.connectors._models.connector_map import TYPE_TO_CLASS\ncredential_cls = TYPE_TO_CLASS.get(connector_type.value)\n_credentials = credential_cls(**_credentials)\nexcept Exception:\nraise CredentialTypeError(\n\"Could not create the credentials. 
Verify the path or the structure your credentials.\")\nreturn _credentials\n@staticmethod\ndef create(connector_type: Union[ConnectorType, str], credentials: Union[str, Path, Dict, Credentials], name: Optional[str] = None, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Create a new connector.\n        Arguments:\n            connector_type (Union[ConnectorType, str]): Type of the connector to be created\n            credentials (dict): Connector credentials\n            name (Optional[str]): (optional) Connector name\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            New connector\n        \"\"\"\nmodel = Connector._create_model(\nconnector_type=connector_type, credentials=credentials, name=name, client=client)\nconnector = ModelFactoryMixin._init_from_model_data(\nConnector, model)\nreturn connector\n@classmethod\n@init_client\ndef _create_model(cls, connector_type: Union[ConnectorType, str], credentials: Union[str, Path, Dict, Credentials], name: Optional[str] = None, client: Optional[Client] = None) -> mConnector:\n_name = name if name is not None else str(uuid4())\n_connector_type = Connector._init_connector_type(connector_type)\n_credentials = Connector._init_credentials(_connector_type, credentials)\npayload = {\n\"type\": _connector_type.value,\n\"credentials\": _credentials.dict(by_alias=True),\n\"name\": _name\n}\nresponse = client.post('/connector/', json=payload)\ndata: list = response.json()\nreturn mConnector(**data)\n@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> ConnectorsList:\n\"\"\"List the connectors instances.\n        Arguments:\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            List of connectors\n        \"\"\"\nresponse = client.get('/connector')\ndata: list = response.json()\nreturn ConnectorsList(data)\ndef __repr__(self):\nreturn self._model.__repr__()\n
"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.connector.Connector.create","title":"create(connector_type, credentials, name=None, client=None) staticmethod","text":"

Create a new connector.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| connector_type | Union[ConnectorType, str] | Type of the connector to be created | required |
| credentials | dict | Connector credentials | required |
| name | Optional[str] | (optional) Connector name | None |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| Connector | New connector |

Source code in ydata/sdk/connectors/connector.py
@staticmethod\ndef create(connector_type: Union[ConnectorType, str], credentials: Union[str, Path, Dict, Credentials], name: Optional[str] = None, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Create a new connector.\n    Arguments:\n        connector_type (Union[ConnectorType, str]): Type of the connector to be created\n        credentials (dict): Connector credentials\n        name (Optional[str]): (optional) Connector name\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        New connector\n    \"\"\"\nmodel = Connector._create_model(\nconnector_type=connector_type, credentials=credentials, name=name, client=client)\nconnector = ModelFactoryMixin._init_from_model_data(\nConnector, model)\nreturn connector\n
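A minimal usage sketch, assuming an authenticated ydata-sdk session. Credentials may be passed as a dict or as a path to a JSON file (as handled by `_init_credentials`); the file name and connector name below are hypothetical, and the credentials structure depends on the connector type.

```python
from ydata.sdk.connectors import Connector

# Hypothetical credentials file; its keys depend on the chosen connector type.
connector = Connector.create(
    connector_type="aws-s3",            # string is coerced to ConnectorType.AWS_S3
    credentials="s3_credentials.json",  # dict or path to a JSON file
    name="my-s3-connector",
)
print(connector.uid, connector.type)
```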
"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.connector.Connector.get","title":"get(uid, client=None) staticmethod","text":"

Get an existing connector.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| uid | UID | Connector identifier | required |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| Connector | Connector |

Source code in ydata/sdk/connectors/connector.py
@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Get an existing connector.\n    Arguments:\n        uid (UID): Connector identifier\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        Connector\n    \"\"\"\nconnectors: ConnectorsList = Connector.list(client=client)\ndata = connectors.get_by_uid(uid)\nmodel = mConnector(**data)\nconnector = ModelFactoryMixin._init_from_model_data(Connector, model)\nreturn connector\n
"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.connector.Connector.list","title":"list(client=None) staticmethod","text":"

List the connector instances.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| ConnectorsList | List of connectors |

Source code in ydata/sdk/connectors/connector.py
@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> ConnectorsList:\n\"\"\"List the connectors instances.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        List of connectors\n    \"\"\"\nresponse = client.get('/connector')\ndata: list = response.json()\nreturn ConnectorsList(data)\n
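A short sketch combining `list` and `get`; the UID below is a hypothetical placeholder standing in for one returned by `Connector.list()`.

```python
from ydata.sdk.connectors import Connector

connectors = Connector.list()   # ConnectorsList of existing connectors
print(connectors)

# Fetch a single connector back by UID (placeholder value shown here)
connector = Connector.get(uid="00000000-0000-0000-0000-000000000000")
print(connector)
```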
"},{"location":"sdk/reference/api/connectors/connector/#connectortype","title":"ConnectorType","text":"

Bases: Enum

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.AWS_S3","title":"AWS_S3 = 'aws-s3' class-attribute instance-attribute","text":"

AWS S3 connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.AZURE_BLOB","title":"AZURE_BLOB = 'azure-blob' class-attribute instance-attribute","text":"

Azure Blob connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.AZURE_SQL","title":"AZURE_SQL = 'azure-sql' class-attribute instance-attribute","text":"

AzureSQL connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.BIGQUERY","title":"BIGQUERY = 'google-bigquery' class-attribute instance-attribute","text":"

BigQuery connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.FILE","title":"FILE = 'file' class-attribute instance-attribute","text":"

File connector (placeholder)

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.GCS","title":"GCS = 'gcs' class-attribute instance-attribute","text":"

Google Cloud Storage connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.MYSQL","title":"MYSQL = 'mysql' class-attribute instance-attribute","text":"

MySQL connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.SNOWFLAKE","title":"SNOWFLAKE = 'snowflake' class-attribute instance-attribute","text":"

Snowflake connector
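As `_init_connector_type` above shows, plain strings are coerced into this enum and invalid values raise `InvalidConnectorError`. A tiny sketch, assuming `ConnectorType` is importable from `ydata.sdk.connectors` as the reference anchors suggest:

```python
from ydata.sdk.connectors import ConnectorType

# Enum lookup by value: the string form used in the API maps to the enum member
assert ConnectorType("aws-s3") is ConnectorType.AWS_S3
print([c.value for c in ConnectorType])   # all supported connector type values
```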

"},{"location":"sdk/reference/api/datasources/datasource/","title":"DataSource","text":"

Bases: ModelFactoryMixin

A DataSource represents a dataset to be used by a Synthesizer as training data.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| connector | Connector | Connector from which the datasource is created | required |
| datatype | Optional[Union[DataSourceType, str]] | (optional) DataSource type | TABULAR |
| name | Optional[str] | (optional) DataSource name | None |
| wait_for_metadata | bool | If True, wait until the metadata is fully calculated | True |
| client | Client | (optional) Client to connect to the backend | None |
| **config | | Datasource specific configuration | {} |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| uid | UID | UID of the datasource instance |
| datatype | DataSourceType | Data source type |
| status | Status | Status of the datasource |
| metadata | Metadata | Metadata associated with the datasource |

Source code in ydata/sdk/datasources/datasource.py
class DataSource(ModelFactoryMixin):\n\"\"\"A [`DataSource`][ydata.sdk.datasources.DataSource] represents a dataset\n    to be used by a Synthesizer as training data.\n    Arguments:\n        connector (Connector): Connector from which the datasource is created\n        datatype (Optional[Union[DataSourceType, str]]): (optional) DataSource type\n        name (Optional[str]): (optional) DataSource name\n        wait_for_metadata (bool): If `True`, wait until the metadata is fully calculated\n        client (Client): (optional) Client to connect to the backend\n        **config: Datasource specific configuration\n    Attributes:\n        uid (UID): UID fo the datasource instance\n        datatype (DataSourceType): Data source type\n        status (Status): Status of the datasource\n        metadata (Metadata): Metadata associated to the datasource\n    \"\"\"\ndef __init__(self, connector: Connector, datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None, **config):\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(connector.type)\nself._init_common(client=client)\nself._model: Optional[mDataSource] = self._create_model(\nconnector=connector, datasource_type=datasource_type, datatype=datatype, config=config, name=name, client=self._client)\nif wait_for_metadata:\nself._model = DataSource._wait_for_metadata(self)._model\n@init_client\ndef _init_common(self, client: Optional[Client] = None):\nself._client = client\nself._logger = create_logger(__name__, level=LOG_LEVEL)\n@property\ndef uid(self) -> UID:\nreturn self._model.uid\n@property\ndef datatype(self) -> DataSourceType:\nreturn self._model.datatype\n@property\ndef status(self) -> Status:\ntry:\nself._model = self.get(self._model.uid, self._client)._model\nreturn self._model.status\nexcept Exception:  # noqa: PIE786\nreturn Status.UNKNOWN\n@property\ndef metadata(self) -> Metadata:\nreturn self._model.metadata\n@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> DataSourceList:\n\"\"\"List the  [`DataSource`][ydata.sdk.datasources.DataSource]\n        instances.\n        Arguments:\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            List of datasources\n        \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/datasource')\ndata: list = response.json()\ndata = __process_data(data)\nreturn DataSourceList(data)\n@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"DataSource\":\n\"\"\"Get an existing [`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            uid (UID): DataSource identifier\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            DataSource\n        \"\"\"\nresponse = client.get(f'/datasource/{uid}')\ndata: list = response.json()\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(\nConnectorType(data['connector']['type']))\nmodel = DataSource._model_from_api(data, datasource_type)\ndatasource = ModelFactoryMixin._init_from_model_data(DataSource, model)\nreturn datasource\n@classmethod\ndef create(cls, connector: Connector, datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None, **config) -> \"DataSource\":\n\"\"\"Create a new 
[`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            connector (Connector): Connector from which the datasource is created\n            datatype (Optional[Union[DataSourceType, str]]): (optional) DataSource type\n            name (Optional[str]): (optional) DataSource name\n            wait_for_metadata (bool): If `True`, wait until the metadata is fully calculated\n            client (Client): (optional) Client to connect to the backend\n            **config: Datasource specific configuration\n        Returns:\n            DataSource\n        \"\"\"\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(connector.type)\nreturn cls._create(connector=connector, datasource_type=datasource_type, datatype=datatype, config=config, name=name, wait_for_metadata=wait_for_metadata, client=client)\n@classmethod\ndef _create(cls, connector: Connector, datasource_type: Type[mDataSource], datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, config: Optional[Dict] = None, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None) -> \"DataSource\":\nmodel = DataSource._create_model(\nconnector, datasource_type, datatype, config, name, client)\ndatasource = ModelFactoryMixin._init_from_model_data(DataSource, model)\nif wait_for_metadata:\ndatasource._model = DataSource._wait_for_metadata(datasource)._model\nreturn datasource\n@classmethod\n@init_client\ndef _create_model(cls, connector: Connector, datasource_type: Type[mDataSource], datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, config: Optional[Dict] = None, name: Optional[str] = None, client: Optional[Client] = None) -> mDataSource:\n_name = name if name is not None else str(uuid4())\n_config = config if config is not None else {}\npayload = {\n\"name\": _name,\n\"connector\": {\n\"uid\": connector.uid,\n\"type\": connector.type.value\n},\n\"dataType\": datatype.value\n}\nif connector.type != ConnectorType.FILE:\n_config = datasource_type(**config).to_payload()\npayload.update(_config)\nresponse = client.post('/datasource/', json=payload)\ndata: list = response.json()\nreturn DataSource._model_from_api(data, datasource_type)\n@staticmethod\ndef _wait_for_metadata(datasource):\nlogger = create_logger(__name__, level=LOG_LEVEL)\nwhile datasource.status not in [Status.AVAILABLE, Status.FAILED, Status.UNAVAILABLE]:\nlogger.info(f'Calculating metadata [{datasource.status}]')\ndatasource = DataSource.get(uid=datasource.uid, client=datasource._client)\nsleep(BACKOFF)\nreturn datasource\n@staticmethod\ndef _resolve_api_status(api_status: Dict) -> Status:\nstatus = Status(api_status.get('state', Status.UNKNOWN.name))\nvalidation = ValidationState(api_status.get('validation', {}).get(\n'state', ValidationState.UNKNOWN.name))\nif validation == ValidationState.FAILED:\nstatus = Status.FAILED\nreturn status\n@staticmethod\ndef _model_from_api(data: Dict, datasource_type: Type[mDataSource]) -> mDataSource:\ndata['datatype'] = data.pop('dataType')\ndata['state'] = data['status']\ndata['status'] = DataSource._resolve_api_status(data['status'])\ndata = filter_dict(datasource_type, data)\nmodel = datasource_type(**data)\nreturn model\ndef __repr__(self):\nreturn self._model.__repr__()\n
"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.datasource.DataSource.create","title":"create(connector, datatype=DataSourceType.TABULAR, name=None, wait_for_metadata=True, client=None, **config) classmethod","text":"

Create a new DataSource.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| connector | Connector | Connector from which the datasource is created | required |
| datatype | Optional[Union[DataSourceType, str]] | (optional) DataSource type | TABULAR |
| name | Optional[str] | (optional) DataSource name | None |
| wait_for_metadata | bool | If True, wait until the metadata is fully calculated | True |
| client | Client | (optional) Client to connect to the backend | None |
| **config | | Datasource specific configuration | {} |

Returns:

| Type | Description |
| --- | --- |
| DataSource | DataSource |

Source code in ydata/sdk/datasources/datasource.py
@classmethod\ndef create(cls, connector: Connector, datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None, **config) -> \"DataSource\":\n\"\"\"Create a new [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        connector (Connector): Connector from which the datasource is created\n        datatype (Optional[Union[DataSourceType, str]]): (optional) DataSource type\n        name (Optional[str]): (optional) DataSource name\n        wait_for_metadata (bool): If `True`, wait until the metadata is fully calculated\n        client (Client): (optional) Client to connect to the backend\n        **config: Datasource specific configuration\n    Returns:\n        DataSource\n    \"\"\"\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(connector.type)\nreturn cls._create(connector=connector, datasource_type=datasource_type, datatype=datatype, config=config, name=name, wait_for_metadata=wait_for_metadata, client=client)\n
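A minimal sketch of creating a datasource from an existing connector. The connector UID and datasource name are hypothetical, and the `path` keyword is an assumption standing in for the connector-specific `**config`; with `wait_for_metadata=True` (the default) the call blocks until the metadata is computed.

```python
from ydata.sdk.connectors import Connector
from ydata.sdk.datasources import DataSource, DataSourceType

connector = Connector.get(uid="00000000-0000-0000-0000-000000000000")  # hypothetical UID

datasource = DataSource.create(
    connector,
    datatype=DataSourceType.TABULAR,
    name="census-data",                # hypothetical name
    path="s3://my-bucket/census.csv",  # assumption: config key for file-based connectors
)
print(datasource.uid, datasource.status)
```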
"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.datasource.DataSource.get","title":"get(uid, client=None) staticmethod","text":"

Get an existing DataSource.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| uid | UID | DataSource identifier | required |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| DataSource | DataSource |

Source code in ydata/sdk/datasources/datasource.py
@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"DataSource\":\n\"\"\"Get an existing [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        uid (UID): DataSource identifier\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        DataSource\n    \"\"\"\nresponse = client.get(f'/datasource/{uid}')\ndata: list = response.json()\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(\nConnectorType(data['connector']['type']))\nmodel = DataSource._model_from_api(data, datasource_type)\ndatasource = ModelFactoryMixin._init_from_model_data(DataSource, model)\nreturn datasource\n
"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.datasource.DataSource.list","title":"list(client=None) staticmethod","text":"

List the DataSource instances.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| DataSourceList | List of datasources |

Source code in ydata/sdk/datasources/datasource.py
@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> DataSourceList:\n\"\"\"List the  [`DataSource`][ydata.sdk.datasources.DataSource]\n    instances.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        List of datasources\n    \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/datasource')\ndata: list = response.json()\ndata = __process_data(data)\nreturn DataSourceList(data)\n
"},{"location":"sdk/reference/api/datasources/datasource/#status","title":"Status","text":"

Bases: StringEnum

Represent the status of a DataSource.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.AVAILABLE","title":"AVAILABLE = 'available' class-attribute instance-attribute","text":"

The DataSource is available and ready to be used.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.DELETED","title":"DELETED = 'deleted' class-attribute instance-attribute","text":"

The DataSource is to be deleted or has been deleted.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.FAILED","title":"FAILED = 'failed' class-attribute instance-attribute","text":"

The DataSource preparation or validation has failed.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.PREPARING","title":"PREPARING = 'preparing' class-attribute instance-attribute","text":"

The DataSource is being prepared.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.UNAVAILABLE","title":"UNAVAILABLE = 'unavailable' class-attribute instance-attribute","text":"

The DataSource is unavailable at the moment.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.UNKNOWN","title":"UNKNOWN = 'unknown' class-attribute instance-attribute","text":"

The DataSource status could not be retrieved.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.VALIDATING","title":"VALIDATING = 'validating' class-attribute instance-attribute","text":"

The DataSource is being validated.
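Since the `status` property re-fetches the datasource on each access, it can be polled directly. A sketch, assuming `Status` is importable from `ydata.sdk.datasources` as the anchors above indicate, and with a hypothetical UID:

```python
from time import sleep
from ydata.sdk.datasources import DataSource, Status

datasource = DataSource.get(uid="00000000-0000-0000-0000-000000000000")  # hypothetical UID
while datasource.status not in (Status.AVAILABLE, Status.FAILED, Status.UNAVAILABLE):
    sleep(5)   # each access of `status` re-queries the backend
print(datasource.status)
```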

"},{"location":"sdk/reference/api/datasources/datasource/#datasourcetype","title":"DataSourceType","text":"

Bases: StringEnum

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.DataSourceType.TABULAR","title":"TABULAR = 'tabular' class-attribute instance-attribute","text":"

The DataSource is tabular (i.e. it does not have a temporal dimension).

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.DataSourceType.TIMESERIES","title":"TIMESERIES = 'timeseries' class-attribute instance-attribute","text":"

The DataSource has a temporal dimension.

"},{"location":"sdk/reference/api/datasources/metadata/","title":"Metadata","text":"

Bases: BaseModel

The Metadata object contains descriptive information about a DataSource.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| columns | List[Column] | columns information |
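For example, once a datasource is AVAILABLE, the column metadata can be inspected directly. This is a sketch; the `datatype` and `vartype` attributes used below are the ones referenced by `_metadata_to_payload` in the synthesizer reference.

```python
# Assuming `datasource` is an AVAILABLE DataSource
for column in datasource.metadata.columns:
    print(column.name, column.datatype, column.vartype)
```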

"},{"location":"sdk/reference/api/synthesizers/base/","title":"Synthesizer","text":"

Bases: ABC, ModelFactoryMixin

Main synthesizer class.

This class cannot be directly instantiated because of the differences between the RegularSynthesizer and TimeSeriesSynthesizer sample methods.

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer--methods","title":"Methods","text":" Note

The synthesizer instance is created in the backend only when the fit method is called.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| client | Client | (optional) Client to connect to the backend | None |

Source code in ydata/sdk/synthesizers/synthesizer.py
@typechecked\nclass BaseSynthesizer(ABC, ModelFactoryMixin):\n\"\"\"Main synthesizer class.\n    This class cannot be directly instanciated because of the specificities between [`RegularSynthesizer`][ydata.sdk.synthesizers.RegularSynthesizer] and [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] `sample` methods.\n    Methods\n    -------\n    - `fit`: train a synthesizer instance.\n    - `sample`: request synthetic data.\n    - `status`: current status of the synthesizer instance.\n    Note:\n            The synthesizer instance is created in the backend only when the `fit` method is called.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    \"\"\"\ndef __init__(self, client: Optional[Client] = None):\nself._init_common(client=client)\nself._model: Optional[mSynthesizer] = None\n@init_client\ndef _init_common(self, client: Optional[Client] = None):\nself._client = client\nself._logger = create_logger(__name__, level=LOG_LEVEL)\ndef fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\ndatatype: Optional[Union[DataSourceType, str]] = None,\nsortbykey: Optional[Union[str, List[str]]] = None,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n        The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n        When the training dataset is a pandas [`DataFrame`][pandas.DataFrame], the argument `datatype` is required as it cannot be deduced.\n        The argument`sortbykey` is mandatory for [`TimeSeries`][ydata.sdk.datasources.DataSourceType.TIMESERIES].\n        By default, if `generate_cols` or `exclude_cols` are not specified, all columns are generated by the synthesizer.\n        The argument `exclude_cols` has precedence over `generate_cols`, i.e. 
a column `col` will not be generated if it is in both list.\n        Arguments:\n            X (Union[DataSource, pandas.DataFrame]): Training dataset\n            privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n            datatype (Optional[Union[DataSourceType, str]]): (optional) Dataset datatype - required if `X` is a [`pandas.DataFrame`][pandas.DataFrame]\n            sortbykey (Union[str, List[str]]): (optional) column(s) to use to sort timeseries datasets\n            entities (Union[str, List[str]]): (optional) columns representing entities ID\n            generate_cols (List[str]): (optional) columns that should be synthesized\n            exclude_cols (List[str]): (optional) columns that should not be synthesized\n            dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n            target (Optional[str]): (optional) Target for the dataset\n            name (Optional[str]): (optional) Synthesizer instance name\n            anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n            condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n        \"\"\"\nif self._is_initialized():\nraise AlreadyFittedError()\n_datatype = DataSourceType(datatype) if isinstance(\nX, pdDataFrame) else DataSourceType(X.datatype)\ndataset_attrs = self._init_datasource_attributes(\nsortbykey, entities, generate_cols, exclude_cols, dtypes)\nself._validate_datasource_attributes(X, dataset_attrs, _datatype, target)\n# If the training data is a pandas dataframe, we first need to create a data source and then the instance\nif isinstance(X, pdDataFrame):\nif X.empty:\nraise EmptyDataError(\"The DataFrame is empty\")\n_X = LocalDataSource(source=X, datatype=_datatype, client=self._client)\nelse:\nif datatype != _datatype:\nwarn(\"When the training data is a DataSource, the argument `datatype` is ignored.\",\nDataSourceTypeWarning)\n_X = X\nif _X.status != dsStatus.AVAILABLE:\nraise DataSourceNotAvailableError(\nf\"The datasource '{_X.uid}' is not available (status = {_X.status.value})\")\nif isinstance(dataset_attrs, dict):\ndataset_attrs = DataSourceAttrs(**dataset_attrs)\nself._fit_from_datasource(\nX=_X, dataset_attrs=dataset_attrs, target=target, name=name,\nanonymize=anonymize, privacy_level=privacy_level, condition_on=condition_on)\n@staticmethod\ndef _init_datasource_attributes(\nsortbykey: Optional[Union[str, List[str]]],\nentities: Optional[Union[str, List[str]]],\ngenerate_cols: Optional[List[str]],\nexclude_cols: Optional[List[str]],\ndtypes: Optional[Dict[str, Union[str, DataType]]]) -> DataSourceAttrs:\ndataset_attrs = {\n'sortbykey': sortbykey if sortbykey is not None else [],\n'entities': entities if entities is not None else [],\n'generate_cols': generate_cols if generate_cols is not None else [],\n'exclude_cols': exclude_cols if exclude_cols is not None else [],\n'dtypes': {k: DataType(v) for k, v in dtypes.items()} if dtypes is not None else {}\n}\nreturn DataSourceAttrs(**dataset_attrs)\n@staticmethod\ndef _validate_datasource_attributes(X: Union[DataSource, pdDataFrame], dataset_attrs: DataSourceAttrs, datatype: DataSourceType, target: Optional[str]):\ncolumns = []\nif isinstance(X, pdDataFrame):\ncolumns = X.columns\nif datatype is None:\nraise DataTypeMissingError(\n\"Argument `datatype` is mandatory for pandas.DataFrame training data\")\ndatatype = DataSourceType(datatype)\nelse:\ncolumns = [c.name 
for c in X.metadata.columns]\nif target is not None and target not in columns:\nraise DataSourceAttrsError(\n\"Invalid target: column '{target}' does not exist\")\nif datatype == DataSourceType.TIMESERIES:\nif not dataset_attrs.sortbykey:\nraise DataSourceAttrsError(\n\"The argument `sortbykey` is mandatory for timeseries datasource.\")\ninvalid_fields = {}\nfor field, v in dataset_attrs.dict().items():\nfield_columns = v if field != 'dtypes' else v.keys()\nnot_in_cols = [c for c in field_columns if c not in columns]\nif len(not_in_cols) > 0:\ninvalid_fields[field] = not_in_cols\nif len(invalid_fields) > 0:\nerror_msgs = [\"\\t- Field '{}': columns {} do not exist\".format(\nf, ', '.join(v)) for f, v in invalid_fields.items()]\nraise DataSourceAttrsError(\n\"The dataset attributes are invalid:\\n {}\".format('\\n'.join(error_msgs)))\n@staticmethod\ndef _metadata_to_payload(\ndatatype: DataSourceType, ds_metadata: Metadata,\ndataset_attrs: Optional[DataSourceAttrs] = None, target: str | None = None\n) -> dict:\n\"\"\"Transform a the metadata and dataset attributes into a valid\n        payload.\n        Arguments:\n            datatype (DataSourceType): datasource type\n            ds_metadata (Metadata): datasource metadata object\n            dataset_attrs ( Optional[DataSourceAttrs] ): (optional) Dataset attributes\n            target (Optional[str]): (optional) target column name\n        Returns:\n            metadata payload dictionary\n        \"\"\"\ncolumns = [\n{\n'name': c.name,\n'generation': True and c.name not in dataset_attrs.exclude_cols,\n'dataType': DataType(dataset_attrs.dtypes[c.name]).value if c.name in dataset_attrs.dtypes else c.datatype,\n'varType': c.vartype,\n}\nfor c in ds_metadata.columns]\nmetadata = {\n'columns': columns,\n'target': target\n}\nif dataset_attrs is not None:\nif datatype == DataSourceType.TIMESERIES:\nmetadata['sortBy'] = [c for c in dataset_attrs.sortbykey]\nmetadata['entity'] = [c for c in dataset_attrs.entities]\nreturn metadata\ndef _fit_from_datasource(\nself,\nX: DataSource,\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\ndataset_attrs: Optional[DataSourceAttrs] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None\n) -> None:\n_name = name if name is not None else str(uuid4())\nmetadata = self._metadata_to_payload(\nDataSourceType(X.datatype), X.metadata, dataset_attrs, target)\npayload = {\n'name': _name,\n'dataSourceUID': X.uid,\n'metadata': metadata,\n'extraData': {},\n'privacyLevel': privacy_level.value\n}\nif anonymize is not None:\npayload[\"extraData\"][\"anonymize\"] = anonymize\nif condition_on is not None:\npayload[\"extraData\"][\"condition_on\"] = condition_on\nresponse = self._client.post('/synthesizer/', json=payload)\ndata: list = response.json()\nself._model, _ = self._model_from_api(X.datatype, data)\nwhile self.status not in [Status.READY, Status.FAILED]:\nself._logger.info('Training the synthesizer...')\nsleep(BACKOFF)\nif self.status == Status.FAILED:\nraise FittingError('Could not train the synthesizer')\n@staticmethod\ndef _model_from_api(datatype: str, data: Dict) -> Tuple[mSynthesizer, Type[\"BaseSynthesizer\"]]:\nfrom ydata.sdk.synthesizers._models.synthesizer_map import TYPE_TO_CLASS\nsynth_cls = TYPE_TO_CLASS.get(SynthesizerType(datatype).value)\ndata['status'] = synth_cls._resolve_api_status(data['status'])\ndata = filter_dict(mSynthesizer, data)\nreturn mSynthesizer(**data), synth_cls\n@abstractmethod\ndef 
sample(self) -> pdDataFrame:\n\"\"\"Abstract method to sample from a synthesizer.\"\"\"\ndef _sample(self, payload: Dict) -> pdDataFrame:\n\"\"\"Sample from a synthesizer.\n        Arguments:\n            payload (dict): payload configuring the sample request\n        Returns:\n            pandas `DataFrame`\n        \"\"\"\nresponse = self._client.post(\nf\"/synthesizer/{self.uid}/sample\", json=payload)\ndata: Dict = response.json()\nsample_uid = data.get('uid')\nsample_status = None\nwhile sample_status not in ['finished', 'failed']:\nself._logger.info('Sampling from the synthesizer...')\nresponse = self._client.get(f'/synthesizer/{self.uid}/history')\nhistory: Dict = response.json()\nsample_data = next((s for s in history if s.get('uid') == sample_uid), None)\nsample_status = sample_data.get('status', {}).get('state')\nsleep(BACKOFF)\nresponse = self._client.get_static_file(\nf'/synthesizer/{self.uid}/sample/{sample_uid}/sample.csv')\ndata = StringIO(response.content.decode())\nreturn read_csv(data)\n@property\ndef uid(self) -> UID:\n\"\"\"Get the status of a synthesizer instance.\n        Returns:\n            Synthesizer status\n        \"\"\"\nif not self._is_initialized():\nreturn Status.NOT_INITIALIZED\nreturn self._model.uid\n@property\ndef status(self) -> Status:\n\"\"\"Get the status of a synthesizer instance.\n        Returns:\n            Synthesizer status\n        \"\"\"\nif not self._is_initialized():\nreturn Status.NOT_INITIALIZED\ntry:\nself = self.get(self._model.uid, self._client)\nreturn self._model.status\nexcept Exception:  # noqa: PIE786\nreturn Status.UNKNOWN\n@staticmethod\n@init_client\ndef get(uid: str, client: Optional[Client] = None) -> \"BaseSynthesizer\":\n\"\"\"List the synthesizer instances.\n        Arguments:\n            uid (str): synthesizer instance uid\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            Synthesizer instance\n        \"\"\"\nresponse = client.get(f'/synthesizer/{uid}')\ndata: list = response.json()\nmodel, synth_cls = BaseSynthesizer._model_from_api(\ndata['dataSource']['dataType'], data)\nreturn ModelFactoryMixin._init_from_model_data(synth_cls, model)\n@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> SynthesizersList:\n\"\"\"List the synthesizer instances.\n        Arguments:\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            List of synthesizers\n        \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata', 'report', 'mode']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/synthesizer')\ndata: list = response.json()\ndata = __process_data(data)\nreturn SynthesizersList(data)\ndef _is_initialized(self) -> bool:\n\"\"\"Determine if a synthesizer is instanciated or not.\n        Returns:\n            True if the synthesizer is instanciated\n        \"\"\"\nreturn self._model is not None\n@staticmethod\ndef _resolve_api_status(api_status: Dict) -> Status:\n\"\"\"Determine the status of the Synthesizer.\n        The status of the synthesizer instance is determined by the state of\n        its different components.\n        Arguments:\n            api_status (dict): json from the endpoint GET /synthesizer\n        Returns:\n            Synthesizer Status\n        \"\"\"\nstatus = Status(api_status.get('state', Status.UNKNOWN.name))\nif status == Status.PREPARE:\nif PrepareState(api_status.get('prepare', {}).get(\n'state', PrepareState.UNKNOWN.name)) 
== PrepareState.FAILED:\nreturn Status.FAILED\nelif status == Status.TRAIN:\nif TrainingState(api_status.get('training', {}).get(\n'state', TrainingState.UNKNOWN.name)) == TrainingState.FAILED:\nreturn Status.FAILED\nelif status == Status.REPORT:\nreturn Status.READY\nreturn status\n
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.status","title":"status: Status property","text":"

Get the status of a synthesizer instance.

Returns:

Type Description Status

Synthesizer status

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.uid","title":"uid: UID property","text":"

Get the status of a synthesizer instance.

Returns:

Type Description UID

Synthesizer status

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.fit","title":"fit(X, privacy_level=PrivacyLevel.HIGH_FIDELITY, datatype=None, sortbykey=None, entities=None, generate_cols=None, exclude_cols=None, dtypes=None, target=None, name=None, anonymize=None, condition_on=None)","text":"

Fit the synthesizer.

The synthesizer accepts as training dataset either a pandas DataFrame directly or a YData DataSource. When the training dataset is a pandas DataFrame, the argument datatype is required as it cannot be deduced.

The argument sortbykey is mandatory for TimeSeries.

By default, if generate_cols or exclude_cols are not specified, all columns are generated by the synthesizer. The argument exclude_cols has precedence over generate_cols, i.e. a column col will not be generated if it is in both lists; see the sketch after the source code below.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | Union[DataSource, DataFrame] | Training dataset | required |
| privacy_level | PrivacyLevel | Synthesizer privacy level (defaults to high fidelity) | HIGH_FIDELITY |
| datatype | Optional[Union[DataSourceType, str]] | (optional) Dataset datatype - required if X is a pandas.DataFrame | None |
| sortbykey | Union[str, List[str]] | (optional) column(s) to use to sort timeseries datasets | None |
| entities | Union[str, List[str]] | (optional) columns representing entities ID | None |
| generate_cols | List[str] | (optional) columns that should be synthesized | None |
| exclude_cols | List[str] | (optional) columns that should not be synthesized | None |
| dtypes | Dict[str, Union[str, DataType]] | (optional) datatype mapping that will overwrite the datasource metadata column datatypes | None |
| target | Optional[str] | (optional) Target for the dataset | None |
| name | Optional[str] | (optional) Synthesizer instance name | None |
| anonymize | Optional[str] | (optional) fields to anonymize and the anonymization strategy | None |
| condition_on | Optional[List[str]] | (optional) list of features to condition upon | None |

Source code in ydata/sdk/synthesizers/synthesizer.py
def fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\ndatatype: Optional[Union[DataSourceType, str]] = None,\nsortbykey: Optional[Union[str, List[str]]] = None,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n    The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n    When the training dataset is a pandas [`DataFrame`][pandas.DataFrame], the argument `datatype` is required as it cannot be deduced.\n    The argument`sortbykey` is mandatory for [`TimeSeries`][ydata.sdk.datasources.DataSourceType.TIMESERIES].\n    By default, if `generate_cols` or `exclude_cols` are not specified, all columns are generated by the synthesizer.\n    The argument `exclude_cols` has precedence over `generate_cols`, i.e. a column `col` will not be generated if it is in both list.\n    Arguments:\n        X (Union[DataSource, pandas.DataFrame]): Training dataset\n        privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n        datatype (Optional[Union[DataSourceType, str]]): (optional) Dataset datatype - required if `X` is a [`pandas.DataFrame`][pandas.DataFrame]\n        sortbykey (Union[str, List[str]]): (optional) column(s) to use to sort timeseries datasets\n        entities (Union[str, List[str]]): (optional) columns representing entities ID\n        generate_cols (List[str]): (optional) columns that should be synthesized\n        exclude_cols (List[str]): (optional) columns that should not be synthesized\n        dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n        target (Optional[str]): (optional) Target for the dataset\n        name (Optional[str]): (optional) Synthesizer instance name\n        anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n        condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n    \"\"\"\nif self._is_initialized():\nraise AlreadyFittedError()\n_datatype = DataSourceType(datatype) if isinstance(\nX, pdDataFrame) else DataSourceType(X.datatype)\ndataset_attrs = self._init_datasource_attributes(\nsortbykey, entities, generate_cols, exclude_cols, dtypes)\nself._validate_datasource_attributes(X, dataset_attrs, _datatype, target)\n# If the training data is a pandas dataframe, we first need to create a data source and then the instance\nif isinstance(X, pdDataFrame):\nif X.empty:\nraise EmptyDataError(\"The DataFrame is empty\")\n_X = LocalDataSource(source=X, datatype=_datatype, client=self._client)\nelse:\nif datatype != _datatype:\nwarn(\"When the training data is a DataSource, the argument `datatype` is ignored.\",\nDataSourceTypeWarning)\n_X = X\nif _X.status != dsStatus.AVAILABLE:\nraise DataSourceNotAvailableError(\nf\"The datasource '{_X.uid}' is not available (status = {_X.status.value})\")\nif isinstance(dataset_attrs, dict):\ndataset_attrs = DataSourceAttrs(**dataset_attrs)\nself._fit_from_datasource(\nX=_X, dataset_attrs=dataset_attrs, target=target, name=name,\nanonymize=anonymize, 
privacy_level=privacy_level, condition_on=condition_on)\n
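A sketch of the exclude/generate precedence described above, using the concrete `RegularSynthesizer` (this abstract class cannot be instantiated directly); the column names and data are illustrative only.

```python
import pandas as pd
from ydata.sdk.synthesizers import RegularSynthesizer

df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "city": ["Porto", "Lisbon", "Braga", "Faro"],
    "id": [1, 2, 3, 4],
})

synth = RegularSynthesizer()
# "id" appears in both lists, so exclude_cols wins and it is not synthesized
synth.fit(df, generate_cols=["age", "city", "id"], exclude_cols=["id"], name="demo-synth")
```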
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.get","title":"get(uid, client=None) staticmethod","text":"

Get a synthesizer instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| uid | str | synthesizer instance uid | required |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| BaseSynthesizer | Synthesizer instance |

Source code in ydata/sdk/synthesizers/synthesizer.py
@staticmethod\n@init_client\ndef get(uid: str, client: Optional[Client] = None) -> \"BaseSynthesizer\":\n\"\"\"List the synthesizer instances.\n    Arguments:\n        uid (str): synthesizer instance uid\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        Synthesizer instance\n    \"\"\"\nresponse = client.get(f'/synthesizer/{uid}')\ndata: list = response.json()\nmodel, synth_cls = BaseSynthesizer._model_from_api(\ndata['dataSource']['dataType'], data)\nreturn ModelFactoryMixin._init_from_model_data(synth_cls, model)\n
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.list","title":"list(client=None) staticmethod","text":"

List the synthesizer instances.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| client | Client | (optional) Client to connect to the backend | None |

Returns:

| Type | Description |
| --- | --- |
| SynthesizersList | List of synthesizers |

Source code in ydata/sdk/synthesizers/synthesizer.py
@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> SynthesizersList:\n\"\"\"List the synthesizer instances.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        List of synthesizers\n    \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata', 'report', 'mode']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/synthesizer')\ndata: list = response.json()\ndata = __process_data(data)\nreturn SynthesizersList(data)\n
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.sample","title":"sample() abstractmethod","text":"

Abstract method to sample from a synthesizer.

Source code in ydata/sdk/synthesizers/synthesizer.py
@abstractmethod
def sample(self) -> pdDataFrame:
    """Abstract method to sample from a synthesizer."""
"},{"location":"sdk/reference/api/synthesizers/base/#privacylevel","title":"PrivacyLevel","text":"

Bases: StringEnum

Privacy level exposed to the end-user.

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.PrivacyLevel.BALANCED_PRIVACY_FIDELITY","title":"BALANCED_PRIVACY_FIDELITY = 'BALANCED_PRIVACY_FIDELITY' class-attribute instance-attribute","text":"

Balanced privacy/fidelity

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_FIDELITY","title":"HIGH_FIDELITY = 'HIGH_FIDELITY' class-attribute instance-attribute","text":"

High fidelity

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_PRIVACY","title":"HIGH_PRIVACY = 'HIGH_PRIVACY' class-attribute instance-attribute","text":"

High privacy

"},{"location":"sdk/reference/api/synthesizers/regular/","title":"Regular","text":"

Bases: BaseSynthesizer

Source code in ydata/sdk/synthesizers/regular.py
class RegularSynthesizer(BaseSynthesizer):\ndef sample(self, n_samples: int = 1, condition_on: Optional[dict] = None) -> pdDataFrame:\n\"\"\"Sample from a [`RegularSynthesizer`][ydata.sdk.synthesizers.RegularSynthesizer]\n        instance.\n        Arguments:\n            n_samples (int): number of rows in the sample\n            condition_on: (Optional[dict]): (optional) conditional sampling parameters\n        Returns:\n            synthetic data\n        \"\"\"\nif n_samples < 1:\nraise InputError(\"Parameter 'n_samples' must be greater than 0\")\npayload = {\"numberOfRecords\": n_samples}\nif condition_on is not None:\npayload[\"extraData\"] = {\n\"condition_on\": condition_on\n}\nreturn self._sample(payload=payload)\ndef fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n        The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            X (Union[DataSource, pandas.DataFrame]): Training dataset\n            privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n            entities (Union[str, List[str]]): (optional) columns representing entities ID\n            generate_cols (List[str]): (optional) columns that should be synthesized\n            exclude_cols (List[str]): (optional) columns that should not be synthesized\n            dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n            target (Optional[str]): (optional) Target column\n            name (Optional[str]): (optional) Synthesizer instance name\n            anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n            condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n        \"\"\"\nBaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TABULAR, entities=entities,\ngenerate_cols=generate_cols, exclude_cols=exclude_cols, dtypes=dtypes,\ntarget=target, name=name, anonymize=anonymize, privacy_level=privacy_level,\ncondition_on=condition_on)\ndef __repr__(self):\nif self._model is not None:\nreturn self._model.__repr__()\nelse:\nreturn \"RegularSynthesizer(Not Initialized)\"\n
"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.regular.RegularSynthesizer.fit","title":"fit(X, privacy_level=PrivacyLevel.HIGH_FIDELITY, entities=None, generate_cols=None, exclude_cols=None, dtypes=None, target=None, name=None, anonymize=None, condition_on=None)","text":"

Fit the synthesizer.

The synthesizer accepts as training dataset either a pandas DataFrame directly or a YData DataSource.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | Union[DataSource, DataFrame] | Training dataset | required |
| privacy_level | PrivacyLevel | Synthesizer privacy level (defaults to high fidelity) | HIGH_FIDELITY |
| entities | Union[str, List[str]] | (optional) columns representing entities ID | None |
| generate_cols | List[str] | (optional) columns that should be synthesized | None |
| exclude_cols | List[str] | (optional) columns that should not be synthesized | None |
| dtypes | Dict[str, Union[str, DataType]] | (optional) datatype mapping that will overwrite the datasource metadata column datatypes | None |
| target | Optional[str] | (optional) Target column | None |
| name | Optional[str] | (optional) Synthesizer instance name | None |
| anonymize | Optional[str] | (optional) fields to anonymize and the anonymization strategy | None |
| condition_on | Optional[List[str]] | (optional) list of features to condition upon | None |

Source code in ydata/sdk/synthesizers/regular.py
def fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n    The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        X (Union[DataSource, pandas.DataFrame]): Training dataset\n        privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n        entities (Union[str, List[str]]): (optional) columns representing entities ID\n        generate_cols (List[str]): (optional) columns that should be synthesized\n        exclude_cols (List[str]): (optional) columns that should not be synthesized\n        dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n        target (Optional[str]): (optional) Target column\n        name (Optional[str]): (optional) Synthesizer instance name\n        anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n        condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n    \"\"\"\nBaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TABULAR, entities=entities,\ngenerate_cols=generate_cols, exclude_cols=exclude_cols, dtypes=dtypes,\ntarget=target, name=name, anonymize=anonymize, privacy_level=privacy_level,\ncondition_on=condition_on)\n
"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.regular.RegularSynthesizer.sample","title":"sample(n_samples=1, condition_on=None)","text":"

Sample from a RegularSynthesizer instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| n_samples | int | number of rows in the sample | 1 |
| condition_on | Optional[dict] | (optional) conditional sampling parameters | None |

Returns:

| Type | Description |
| --- | --- |
| DataFrame | synthetic data |

Source code in ydata/sdk/synthesizers/regular.py
def sample(self, n_samples: int = 1, condition_on: Optional[dict] = None) -> pdDataFrame:\n\"\"\"Sample from a [`RegularSynthesizer`][ydata.sdk.synthesizers.RegularSynthesizer]\n    instance.\n    Arguments:\n        n_samples (int): number of rows in the sample\n        condition_on: (Optional[dict]): (optional) conditional sampling parameters\n    Returns:\n        synthetic data\n    \"\"\"\nif n_samples < 1:\nraise InputError(\"Parameter 'n_samples' must be greater than 0\")\npayload = {\"numberOfRecords\": n_samples}\nif condition_on is not None:\npayload[\"extraData\"] = {\n\"condition_on\": condition_on\n}\nreturn self._sample(payload=payload)\n
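And a short sketch of sampling from a synthesizer that has already been fitted; the conditional call assumes the synthesizer was trained with condition_on, mirroring the conditional-sampling example later in these docs:

# `synth` is assumed to be a RegularSynthesizer that has already been fitted
sample = synth.sample(n_samples=100)
print(sample.shape)

# Conditional sampling only works if the synthesizer was fitted with `condition_on`
conditioned = synth.sample(
    n_samples=100,
    condition_on={"sex": {"categories": ["Female"]}}   # illustrative condition
)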
"},{"location":"sdk/reference/api/synthesizers/regular/#privacylevel","title":"PrivacyLevel","text":"

Bases: StringEnum

Privacy level exposed to the end-user.

"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.PrivacyLevel.BALANCED_PRIVACY_FIDELITY","title":"BALANCED_PRIVACY_FIDELITY = 'BALANCED_PRIVACY_FIDELITY' class-attribute instance-attribute","text":"

Balanced privacy/fidelity

"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_FIDELITY","title":"HIGH_FIDELITY = 'HIGH_FIDELITY' class-attribute instance-attribute","text":"

High fidelity

"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_PRIVACY","title":"HIGH_PRIVACY = 'HIGH_PRIVACY' class-attribute instance-attribute","text":"

High privacy
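A minimal sketch of how these privacy levels are used: the enum value is passed to the synthesizer's fit call (using the census example dataset; the high-privacy choice here is just illustrative, mirroring the privacy-control example later in these docs):

from ydata.sdk.dataset import get_dataset
from ydata.sdk.synthesizers import PrivacyLevel, RegularSynthesizer

X = get_dataset('census')               # example dataset bundled with the SDK
synth = RegularSynthesizer()
# Optimize the model for privacy rather than the default high fidelity
synth.fit(X, privacy_level=PrivacyLevel.HIGH_PRIVACY)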

"},{"location":"sdk/reference/api/synthesizers/timeseries/","title":"TimeSeries","text":"

Bases: BaseSynthesizer

Source code in ydata/sdk/synthesizers/timeseries.py
class TimeSeriesSynthesizer(BaseSynthesizer):\ndef sample(self, n_entities: int, condition_on: Optional[dict] = None) -> pdDataFrame:\n\"\"\"Sample from a [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] instance.\n        If a training dataset was not using any `entity` column, the Synthesizer assumes a single entity.\n        A [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] always sample the full trajectory of its entities.\n        Arguments:\n            n_entities (int): number of entities to sample\n            condition_on: (Optional[dict]): (optional) conditional sampling parameters\n        Returns:\n            synthetic data\n        \"\"\"\nif n_entities is not None and n_entities < 1:\nraise InputError(\"Parameter 'n_entities' must be greater than 0\")\npayload = {\"numberOfRecords\": n_entities}\nif condition_on is not None:\npayload[\"extraData\"] = {\n\"condition_on\": condition_on\n}\nreturn self._sample(payload=payload)\ndef fit(self, X: Union[DataSource, pdDataFrame],\nsortbykey: Optional[Union[str, List[str]]],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n        The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            X (Union[DataSource, pandas.DataFrame]): Training dataset\n            sortbykey (Union[str, List[str]]): column(s) to use to sort timeseries datasets\n            privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n            entities (Union[str, List[str]]): (optional) columns representing entities ID\n            generate_cols (List[str]): (optional) columns that should be synthesized\n            exclude_cols (List[str]): (optional) columns that should not be synthesized\n            dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n            target (Optional[str]): (optional) Metadata associated to the datasource\n            name (Optional[str]): (optional) Synthesizer instance name\n            anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n            condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n        \"\"\"\nBaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TIMESERIES, sortbykey=sortbykey,\nentities=entities, generate_cols=generate_cols, exclude_cols=exclude_cols,\ndtypes=dtypes, target=target, name=name, anonymize=anonymize, privacy_level=privacy_level,\ncondition_on=condition_on)\ndef __repr__(self):\nif self._model is not None:\nreturn self._model.__repr__()\nelse:\nreturn \"TimeSeriesSynthesizer(Not Initialized)\"\n
"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.timeseries.TimeSeriesSynthesizer.fit","title":"fit(X, sortbykey, privacy_level=PrivacyLevel.HIGH_FIDELITY, entities=None, generate_cols=None, exclude_cols=None, dtypes=None, target=None, name=None, anonymize=None, condition_on=None)","text":"

Fit the synthesizer.

The synthesizer accepts as training dataset either a pandas DataFrame directly or a YData DataSource.

Parameters:

X (Union[DataSource, DataFrame], required): Training dataset
sortbykey (Union[str, List[str]], required): column(s) to use to sort timeseries datasets
privacy_level (PrivacyLevel, default HIGH_FIDELITY): Synthesizer privacy level (defaults to high fidelity)
entities (Union[str, List[str]], default None): (optional) columns representing entities ID
generate_cols (List[str], default None): (optional) columns that should be synthesized
exclude_cols (List[str], default None): (optional) columns that should not be synthesized
dtypes (Dict[str, Union[str, DataType]], default None): (optional) datatype mapping that will overwrite the datasource metadata column datatypes
target (Optional[str], default None): (optional) Metadata associated to the datasource
name (Optional[str], default None): (optional) Synthesizer instance name
anonymize (Optional[str], default None): (optional) fields to anonymize and the anonymization strategy
condition_on (Optional[List[str]], default None): (optional) list of features to condition upon

Source code in ydata/sdk/synthesizers/timeseries.py
def fit(self, X: Union[DataSource, pdDataFrame],\nsortbykey: Optional[Union[str, List[str]]],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n    The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        X (Union[DataSource, pandas.DataFrame]): Training dataset\n        sortbykey (Union[str, List[str]]): column(s) to use to sort timeseries datasets\n        privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n        entities (Union[str, List[str]]): (optional) columns representing entities ID\n        generate_cols (List[str]): (optional) columns that should be synthesized\n        exclude_cols (List[str]): (optional) columns that should not be synthesized\n        dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n        target (Optional[str]): (optional) Metadata associated to the datasource\n        name (Optional[str]): (optional) Synthesizer instance name\n        anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n        condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n    \"\"\"\nBaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TIMESERIES, sortbykey=sortbykey,\nentities=entities, generate_cols=generate_cols, exclude_cols=exclude_cols,\ndtypes=dtypes, target=target, name=name, anonymize=anonymize, privacy_level=privacy_level,\ncondition_on=condition_on)\n
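As a quick reference, a minimal sketch of fitting a time-series synthesizer on the occupancy example dataset; the sortbykey value is specific to that dataset, as in the time-series example earlier in these docs:

from ydata.sdk.dataset import get_dataset
from ydata.sdk.synthesizers import TimeSeriesSynthesizer

X = get_dataset('occupancy')            # example time-series dataset bundled with the SDK
synth = TimeSeriesSynthesizer()
# sortbykey defines the time order of the sequences
synth.fit(X, sortbykey='date')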
"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.timeseries.TimeSeriesSynthesizer.sample","title":"sample(n_entities, condition_on=None)","text":"

Sample from a TimeSeriesSynthesizer instance.

If a training dataset was not using any entity column, the Synthesizer assumes a single entity. A TimeSeriesSynthesizer always samples the full trajectory of its entities.

Parameters:

n_entities (int, required): number of entities to sample
condition_on (Optional[dict], default None): (optional) conditional sampling parameters

Returns:

DataFrame: synthetic data

Source code in ydata/sdk/synthesizers/timeseries.py
def sample(self, n_entities: int, condition_on: Optional[dict] = None) -> pdDataFrame:\n\"\"\"Sample from a [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] instance.\n    If a training dataset was not using any `entity` column, the Synthesizer assumes a single entity.\n    A [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] always sample the full trajectory of its entities.\n    Arguments:\n        n_entities (int): number of entities to sample\n        condition_on: (Optional[dict]): (optional) conditional sampling parameters\n    Returns:\n        synthetic data\n    \"\"\"\nif n_entities is not None and n_entities < 1:\nraise InputError(\"Parameter 'n_entities' must be greater than 0\")\npayload = {\"numberOfRecords\": n_entities}\nif condition_on is not None:\npayload[\"extraData\"] = {\n\"condition_on\": condition_on\n}\nreturn self._sample(payload=payload)\n
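And a short sampling sketch for a fitted time-series synthesizer; only the number of entities is requested, since each entity is always sampled over its full trajectory:

# `synth` is assumed to be a TimeSeriesSynthesizer that has already been fitted
sample = synth.sample(n_entities=1)
print(sample.head())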
"},{"location":"sdk/reference/api/synthesizers/timeseries/#privacylevel","title":"PrivacyLevel","text":"

Bases: StringEnum

Privacy level exposed to the end-user.

"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.PrivacyLevel.BALANCED_PRIVACY_FIDELITY","title":"BALANCED_PRIVACY_FIDELITY = 'BALANCED_PRIVACY_FIDELITY' class-attribute instance-attribute","text":"

Balanced privacy/fidelity

"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_FIDELITY","title":"HIGH_FIDELITY = 'HIGH_FIDELITY' class-attribute instance-attribute","text":"

High fidelity

"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_PRIVACY","title":"HIGH_PRIVACY = 'HIGH_PRIVACY' class-attribute instance-attribute","text":"

High privacy

"},{"location":"support/help-troubleshooting/","title":"Help & Troubleshooting","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome","text":"

YData Fabric is a Data-Centric AI development platform that accelerates AI development by helping data practitioners achieve production-quality data.

Much like code quality is a must for the success of software development, Fabric accounts for the data quality requirements of data-driven applications. It introduces standards, processes, and acceleration to empower data science, analytics, and data engineering teams.

"},{"location":"#try-fabric","title":"Try Fabric","text":""},{"location":"#why-adopt-ydata-fabric","title":"Why adopt YData Fabric?","text":"

With Fabric, you can standardize the understanding of your data, quickly identify data quality issues, streamline and version your data preparation workflows and finally leverage synthetic data for privacy-compliance or as a tool to boost ML performance. Fabric is a development environment that supports a faster and easier process of preparing data for AI development. Data practitioners are using Fabric to:

"},{"location":"#key-features","title":"\ud83d\udcdd Key features","text":""},{"location":"#data-catalog","title":"Data Catalog","text":"

Fabric Data Catalog provides a centralized perspective on datasets on a per-project basis, optimizing data management through seamless integration with the organization's existing data architectures via scalable connectors (e.g., MySQL, Google Cloud Storage, AWS S3). It standardizes data quality profiling, streamlining the processes of efficient data cleaning and preparation, while also automating the identification of Personally Identifiable Information (PII) to facilitate compliance with privacy regulations.

Explore how a Data Catalog can help you through a centralized repository of your datasets, schema validation, and automated data profiling.

"},{"location":"#labs","title":"Labs","text":"

Fabric's Labs environments provide collaborative, scalable, and secure workspaces layered on a flexible infrastructure, enabling users to seamlessly switch between CPUs and GPUs based on their computational needs. Labs are familiar environments that empower data developers with powerful IDEs (Jupyter Notebooks, Visual Studio Code or H2O Flow) and a seamless experience with the tools they already love combined with YData's cutting-edge SDK for data preparation.

Learn how to use the Labs to generate synthetic data in a familiar Python interface.

"},{"location":"#synthetic-data","title":"Synthetic data","text":"

Synthetic data, enabled by YData Fabric, provides data developers with user-friendly interfaces (UI and code) for generating artificial datasets, offering a versatile solution across formats like tabular, time-series and multi-table datasets. The generated synthetic data holds the same value as the original and aligns intricately with specific business rules, contributing to machine learning model enhancement, mitigation of privacy concerns and more robust data developments. Fabric offers synthetic data that is easy to adapt and configure, allowing customization of privacy-utility trade-offs.

Learn how to create high-quality synthetic data within a user-friendly UI using Fabric\u2019s data synthesis flow.

"},{"location":"#pipelines","title":"Pipelines","text":"

Fabric Pipelines streamlines data preparation workflows by automating, orchestrating, and optimizing data pipelines, providing benefits such as flexibility, scalability, monitoring, and reproducibility for efficient and reliable data processing. The intuitive drag-and-drop interface, leveraging Jupyter notebooks or Python scripts, expedites the pipeline setup process, providing data developers with a quick and user-friendly experience.

Explore how you can leverage Fabric Pipelines to build versionable and reproducible data preparation workflows for ML development.

"},{"location":"#tutorials","title":"Tutorials","text":"

To understand how to best apply Fabric to your use cases, start by exploring the following tutorials:

You can find additional examples and use cases at YData Academy GitHub Repository.

"},{"location":"#support","title":"\ud83d\ude4b Support","text":"

Facing an issue? We\u2019re committed to providing all the support you need to ensure a smooth experience using Fabric:

"},{"location":"examples/synthesize_tabular_data/","title":"Synthesize tabular data","text":"

Use YData's RegularSynthesizer to generate tabular synthetic data

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Do not forget to add your token as env variables\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined\ndef main():\n\"\"\"In this example, we demonstrate how to train a synthesizer from a pandas\n    DataFrame.\n    After training a Regular Synthesizer, we request a sample.\n    \"\"\"\nX = get_dataset('census')\n# We initialize a regular synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = RegularSynthesizer()\n# We train the synthesizer on our dataset\nsynth.fit(X)\n# We request a synthetic dataset with 50 rows\nsample = synth.sample(n_samples=50)\nprint(sample.shape)\nif __name__ == \"__main__\":\nmain()\n
"},{"location":"examples/synthesize_timeseries_data/","title":"Synthesize time-series data","text":"

Use YData's TimeSeriesSynthesizer to generate time-series synthetic data

Tabular data is the most common type of data we encounter in data problems.

When thinking about tabular data, we assume independence between different records, but this does not happen in reality. Suppose we check events from our day-to-day life, such as room temperature changes, bank account transactions, stock price fluctuations, and air quality measurements in our neighborhood. In that case, we might end up with datasets where measures and records evolve and are related through time. This type of data is known to be sequential or time-series data.

Thus, sequential or time-series data refers to any data containing elements ordered into sequences in a structured format. Dissecting any time-series dataset, we see differences in variables' behavior that need to be understood for an effective generation of synthetic data. Typically any time-series dataset is composed of the following:

Below find an example:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import TimeSeriesSynthesizer\n# Do not forget to add your token as env variable\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'\nX = get_dataset('occupancy')\n# We initialize a time series synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = TimeSeriesSynthesizer()\n# We train the synthesizer on our dataset\n# sortbykey -> variable that define the time order for the sequence\nsynth.fit(X, sortbykey='date')\n# By default it is requested a synthetic sample with the same length as the original data\n# The TimeSeriesSynthesizer is designed to replicate temporal series and therefore the original time-horizon is respected\nsample = synth.sample(n_entities=1)\n
"},{"location":"examples/synthesize_with_anonymization/","title":"Anonymization","text":"

YData Synthesizers offers a way to anonymize sensitive information such that the original values are not present in the synthetic data but replaced by fake values.

Does the model retain the original values?

No! The anonymization is performed before the model training such that it never sees the original values.

The anonymization is performed by specifying which columns need to be anonymized and how to perform the anonymization. The anonymization rules are defined as a dictionary with the following format:

{column_name: anonymization_rule}

While there are some predefined anonymization rules such as name, email, and company, it is also possible to create a rule using a regular expression. The anonymization rules have to be passed to a synthesizer in its fit method using the parameter anonymize.
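As an illustration, a rules dictionary mixing predefined rules with a regular expression could look like the sketch below (the column names are hypothetical):

# Hypothetical column names; "name" and "email" are predefined anonymization rules,
# while the last entry is a regular expression describing the fake values to generate
rules = {
    "Name": "name",
    "Email": "email",
    "Ticket": "[A-Z]{2}-[A-Z]{4}"
}
# The rules are then passed to the synthesizer at training time:
# synth.fit(X, anonymize=rules)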

What is the difference between anonymization and privacy?

Anonymization makes sure sensitive information is hidden from the data. Privacy makes sure it is not possible to infer the original data points from the synthetic data points via statistical attacks.

Therefore, for data sharing anonymization and privacy controls are complementary.

The example below demonstrates how to anonymize the column Name by fake names and the column Ticket by a regular expression:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Do not forget to add your token as env variables\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined\ndef main():\n\"\"\"In this example, we demonstrate how to train a synthesizer from a pandas\n    DataFrame.\n    After training a Regular Synthesizer, we request a sample.\n    \"\"\"\nX = get_dataset('titanic')\n# We initialize a regular synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = RegularSynthesizer()\n# We define anonymization rules, which is a dictionary with format:\n# {column_name: anonymization_rule, ...}\n# while here are some predefined anonymization rules like: name, email, company\n# it is also possible to create a rule using a regular expression\nrules = {\n\"Name\": \"name\",\n\"Ticket\": \"[A-Z]{2}-[A-Z]{4}\"\n}\n# We train the synthesizer on our dataset\nsynth.fit(\nX,\nname=\"titanic_synthesizer\",\nanonymize=rules\n)\n# We request a synthetic dataset with 50 rows\nsample = synth.sample(n_samples=50)\nprint(sample[[\"Name\", \"Ticket\"]].head(3))\nif __name__ == \"__main__\":\nmain()\n

"},{"location":"examples/synthesize_with_conditional_sampling/","title":"Conditional sampling","text":"

YData Synthesizers support conditional sampling. The fit method has an optional parameter named condition_on, which receives a list of features to condition upon. Furthermore, the sample method receives the conditions to be applied through another optional parameter also named condition_on. For now, two types of conditions are supported:

The example below demonstrates how to train and sample from a synthesizer using conditional sampling:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Do not forget to add your token as env variables.\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined.\ndef main():\n\"\"\"In this example, we demonstrate how to train and\n    sample from a synthesizer using conditional sampling.\"\"\"\nX = get_dataset('census')\n# We initialize a regular synthesizer.\n# As long as the synthesizer does not call `fit`, it exists only locally.\nsynth = RegularSynthesizer()\n# We train the synthesizer on our dataset setting\n# the features to condition upon.\nsynth.fit(\nX,\nname=\"census_synthesizer\",\ncondition_on=[\"sex\", \"native-country\", \"age\"]\n)\n# We request a synthetic dataset with specific condition rules.\nsample = synth.sample(\nn_samples=500,\ncondition_on={\n\"sex\": {\n\"categories\": [\"Female\"]\n},\n\"native-country\": {\n\"categories\": [(\"United-States\", 0.6),\n(\"Mexico\", 0.4)]\n},\n\"age\": {\n\"minimum\": 55,\n\"maximum\": 60\n}\n}\n)\nprint(sample)\nif __name__ == \"__main__\":\nmain()\n
"},{"location":"examples/synthesize_with_privacy_control/","title":"Privacy control","text":"

YData Synthesizers offers 3 different levels of privacy:

  1. high privacy: the model is optimized for privacy purposes,
  2. high fidelity (default): the model is optimized for high fidelity,
  3. balanced: tradeoff between privacy and fidelity.

The default privacy level is high fidelity. The privacy level can be changed by the user at the moment a synthesizer is trained, by using the parameter privacy_level. The parameter expects a PrivacyLevel value.

What is the difference between anonymization and privacy?

Anonymization makes sure sensitive information is hidden from the data. Privacy makes sure it is not possible to infer the original data points from the synthetic data points via statistical attacks.

Therefore, for data sharing anonymization and privacy controls are complementary.

The example below demonstrates how to train a synthesizer configured for high privacy:

import os\nfrom ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import PrivacyLevel, RegularSynthesizer\n# Do not forget to add your token as env variables\nos.environ[\"YDATA_TOKEN\"] = '<TOKEN>'  # Remove if already defined\ndef main():\n\"\"\"In this example, we demonstrate how to train a synthesizer\n    with a high-privacy setting from a pandas DataFrame.\n    After training a Regular Synthesizer, we request a sample.\n    \"\"\"\nX = get_dataset('titanic')\n# We initialize a regular synthesizer\n# As long as the synthesizer does not call `fit`, it exists only locally\nsynth = RegularSynthesizer()\n# We train the synthesizer on our dataset setting the privacy level to high\nsynth.fit(\nX,\nname=\"titanic_synthesizer\",\nprivacy_level=PrivacyLevel.HIGH_PRIVACY\n)\n# We request a synthetic dataset with 50 rows\nsample = synth.sample(n_samples=50)\nprint(sample)\nif __name__ == \"__main__\":\nmain()\n
"},{"location":"get-started/","title":"Get started with Fabric","text":"

This Get started guide is here to help you if you are not yet familiar with YData Fabric or if you just want to learn more about data quality, data preparation workflows and how you can start leveraging synthetic data. See also: YData Fabric Community.

"},{"location":"get-started/#create-your-first-data-with-the-data-catalog","title":"\ud83d\udcda Create your first Data with the Data Catalog","text":""},{"location":"get-started/#create-your-first-synthetic-data-generator","title":"\u2699\ufe0f Create your first Synthetic Data generator","text":""},{"location":"get-started/#create-your-first-lab","title":"\ud83e\uddea Create your first Lab","text":""},{"location":"get-started/#create-your-first-data-pipeline","title":"\ud83c\udf00 Create your first data Pipeline","text":""},{"location":"get-started/create_lab/","title":"How to create your first Lab environment","text":"

Labs are code environments for a more flexible development of data-driven solutions while leveraging Fabric capabilities combined with already loved tools such as scikit-learn, numpy and pandas. To create your first Lab, you can use the \u201cCreate Lab\u201d from Fabric\u2019s home, or you can access it from the Labs module by selecting it on the left side menu, and clicking the \u201cCreate Lab\u201d button.

Next, a menu with different IDEs will be shown. As a quickstart select Jupyter Lab. As labs are development environments you will be also asked what language you would prefer your environment to support: R or Python. Select Python.

Select IDE Select language

Bundles are environments with pre-installed packages. Select YData bundle, so we can leverage some other Fabric features such as Data Profiling, Synthetic Data and Pipelines.

As a last step, you will be asked to configure the infrastructure resources for this new environment as well as giving it a Display Name. We will keep the defaults, but you have flexibility to select GPU acceleration or whether you need more computational resources for your developments.

Finally, your Lab will be created and added to the \"Labs\" list, as per the image below. The status of the lab will be \ud83d\udfe1 while preparing, and this process takes a few minutes, as the infrastructure is being allocated to your development environment. As soon as the status changes to \ud83d\udfe2, you can open your lab by clicking in the button as shown below:

Create a new notebook in the JupyterLab and give it a name. You are now ready to start your developments!

Create a new notebook Notebook created

Congrats! \ud83d\ude80 You have now successfully created your first Lab, a code environment where you can benefit from the most advanced Fabric features as well as compose complex data workflows. Get ready for your journey of improved quality data for AI.

"},{"location":"get-started/create_pipeline/","title":"How to create your first Pipeline","text":"

Check this quickstart video on how to create your first Pipeline.

The best way to get started with Pipelines is to use the interactive Pipeline editor available in the Labs with Jupyter Lab set as IDE. If you don't have a Lab yet, or you don't know how to create one, check our quickstart guide on how to create your first lab.

Open an already existing lab.

A Pipeline comprises one or more nodes that are connected (or not!) with each other to define execution dependencies. Each pipeline node is, and should be, implemented as a component that is expected to manage a single task, such as reading the data, profiling the data, training a model, or even publishing a model to production environments.

In this tutorial we will build a simple and generic pipeline that uses a Dataset from Fabric's Data Catalog and profiles it to check its quality. We have the notebook templates already available. For that you need to access the \"Academy\" folder as per the image below.

Make sure to copy all the files in the folder \"3 - Pipelines/quickstart\" to the root folder of your lab, as per the image below.

Now that we have our notebooks, we need to make a small change in the notebook \"1. Read dataset\". Go back to your Data Catalog, select the three vertical dots on one of the datasets in your Catalog list, and click on \"Explore in Labs\" as shown in the image below.

The following screen will be shown. Click on copy.

Now that we have copied the code, let's get back to our \"1. Read data.ipynb\" notebook and replace the first code cell with the new code. This will allow us to use a dataset from the Data Catalog in our pipeline.

Placeholder code Replaced with code snippet

With our notebooks ready, we can now configure our Pipeline. For this quickstart we will be leveraging an already existing pipeline - double-click the file my_first_pipeline.pipeline. You should see a pipeline as depicted in the images below. To create a new Pipeline, you can open the lab launcher tab and select \"Pipeline Editor\".

Open Pipeline My first pipeline

Before running the pipeline, we need to check each component/step properties and configurations. Right-click each one of the steps, select \"Open Properties\", and a menu will be shown on your right side. Make sure that you have \"YData - CPU\" selected as the Runtime Image, as shown below.

Open properties Runtime image

We are now ready to create and run our first pipeline. In the top left corner of the pipeline editor, the run button will be available for you to click.

Accept the default values shown in the run dialog and start the run

If the following message is shown, it means that you have created a run of your first pipeline.

Now that you have created your first pipeline, you can select the Pipeline from Fabric's left side menu.

Your most recent pipeline will be listed, as shown in the image below.

To check the run of your pipeline, jump into the \"Run\" tab. You will be able to see your first pipeline running!

By clicking on top of the record you will be able to see the progress of the run step-by-step, and visualize the outputs of each and every step by clicking on each step and selecting the Visualizations tab.

Congrats! \ud83d\ude80 You have now successfully created your first Pipeline in a code environment, so you can benefit from Fabric's orchestration engine to create scalable, versionable and comparable data workflows. Get ready for your journey of improved quality data for AI.

"},{"location":"get-started/create_syntheticdata_generator/","title":"How to create your first Synthetic Data generator","text":"

Check this quickstart video on how to create your first Synthetic Data generator.

To generate your first synthetic data, you need to have a Dataset already available in your Data Catalog. Check this tutorial to see how you can add your first dataset to Fabric\u2019s Data Catalog.

With your first dataset created, you are now able to start the creation of your Synthetic Data generator. You can either select \"Synthetic Data\" from your left side menu, or you can select \"Create Synthetic Data\" in your project Home as shown in the image below.

You'll be asked to select the dataset you wish to generate synthetic data from and verify the columns you'd like to include in the synthesis process, validating their Variable and Data Types.

Data types are relevant for synthetic data quality

Data Types should be revisited and aligned with the objectives for the synthetic data, as they can highly impact the quality of the generated data. For example, let's say we have a column that is a \"Name\": while in some situations it would make sense to consider it a String, in the context of a dataset where \"Name\" refers to the name of the products purchased, it might be more beneficial to set it as a Category.

Finally, the last step of our process is the Synthetic Data specific configuration. For this particular case we only need to define a Display Name, and we can finish the process by clicking the \"Save\" button as per the image below.

Your Synthetic Data generator is now training and listed under \"Synthetic Data\". While the model is being trained, the Status will be \ud83d\udfe1, as soon as the training is completed successfully it will transition to \ud83d\udfe2 as per the image below.

Once the Synthetic Data generator has finished training, you're ready to start generating your first synthetic dataset. You can start by exploring an overview of the model configurations and even download a PDF report with a comprehensive overview of your Synthetic Data Quality Metrics. Next, you can generate synthetic data samples by accessing the Generation tab or click on \"Go to Generation\".

In this section, you are able to generate as many synthetic samples as you want. For that you need to define the number of rows to generate and click \"Generate\", as depicted in the image below.

A new line in your \"Sample History\" will be shown, and as soon as the sample generation is completed you will be able to \"Compare\" your synthetic data with the original data, add it as a Dataset with \"Add to Data Catalog\" and, last but not least, download it as a file with \"Download csv\".

Congrats! \ud83d\ude80 You have now successfully created your first Synthetic Data generator with Fabric. Get ready for your journey of improved quality data for AI.

"},{"location":"get-started/fabric_community/","title":"Get started with Fabric Community","text":"

Fabric Community is a SaaS version that allows you to explore all the functionalities of Fabric first-hand: free, forever, for everyone. You\u2019ll be able to validate your data quality with automated profiling, unlock data sharing and improve your ML models with synthetic data, and increase your productivity with seamless integration:

"},{"location":"get-started/fabric_community/#register","title":"Register","text":"

To register for Fabric Community:

Once you login, you'll access the Home page and get started with your data preparation!

"},{"location":"get-started/upload_csv/","title":"How to create your first Dataset from a CSV file","text":"

Check this quickstart video on how to create your first Dataset from a CSV file.

To create your first dataset in the Data Catalog, you can start by clicking on \"Add Dataset\" from the Home section. Or click to Data Catalog (on the left side menu) and click \u201cAdd Dataset\u201d.

After that the below modal will be shown. You will need to select a connector. To upload a CSV file, we need to select \u201cUpload CSV\u201d.

Once you've selected the \u201cUpload CSV\u201d connector, a new screen will appear, enabling you to upload your file and designate a name for your connector. This file upload connector will subsequently empower you to create one or more datasets from the same file at a later stage.

Loading area Upload csv file

With the Connector created, you'll be able to add a dataset and specify its properties:

Your created Connector (\u201cCensus File\u201d) and Dataset (\u201cCensus\u201d) will be added to the Data Catalog. As soon as the status is green, you can navigate your Dataset. Click in Open Dataset as per the image below.

Within the Dataset details, you can gain valuable insights through our automated data quality profiling. This includes comprehensive metadata and an overview of your data, encompassing details like row count, identification of duplicates, and insights into the overall quality of your dataset.

Or perhaps you want to further explore, through visualization, the profile of your data with both univariate and multivariate views.

Congrats! \ud83d\ude80 You have now successfully created your first Connector and Dataset in Fabric\u2019s Data Catalog. Get ready for your journey of improved quality data for AI.

"},{"location":"sdk/","title":"Overview","text":"

YData SDK for improved data quality everywhere!

ydata-sdk is here! Create a YData account so you can start using it today!

Create account

"},{"location":"sdk/#overview","title":"Overview","text":"

The YData SDK is an ecosystem of methods that allows users to, through a Python interface, adopt a Data-Centric approach towards AI development. The solution includes a set of integrated components for data ingestion, standardized data quality evaluation and data improvement, such as synthetic data generation, allowing an iterative improvement of the datasets used in high-impact business applications.

Synthetic data can be used as a Machine Learning performance enhancer, to augment data or to mitigate the presence of bias in real data. Furthermore, it can be used as a Privacy Enhancing Technology, to enable data-sharing initiatives or even to fuel testing environments.

Under the YData SDK hood, you can find a set of algorithms and metrics, based on statistics and on deep-learning techniques, that will help you accelerate your data preparation.

"},{"location":"sdk/#current-functionality","title":"Current functionality","text":"

YData SDK is currently composed of the following main modules:

"},{"location":"sdk/#supported-data-formats","title":"Supported data formats","text":"TabularTime-SeriesTransactionalRelational databases

The RegularSynthesizer is perfect to synthesize high-dimensional, time-independent data with high quality results.

Know more

The TimeSeriesSynthesizer is perfect to synthesize both regularly and irregularly spaced time-series, from smart-sensor readings to stock data.

Know more

The TimeSeriesSynthesizer supports transactional data, known to have highly irregular time intervals between records and directional relations between entities.

Coming soon

Know more

The MultiTableSynthesizer is perfect to learn how to replicate the data within a relational database schema.

Coming soon

Know more

"},{"location":"sdk/installation/","title":"Installation","text":"

YData SDK is generally available through both PyPI and Conda, allowing an easy installation process. This experience allows combining YData SDK with other packages such as Pandas, Numpy or Scikit-Learn.

YData SDK is available for the public through a token-based authentication system. If you don\u2019t have one yet, you can get your free license key during the installation process. You can check what features are available in the free version here.

"},{"location":"sdk/installation/#installing-the-package","title":"Installing the package","text":"

YData SDK supports Python versions greater than 3.8, and can be installed on Windows, Linux or macOS operating systems.

Prior to the package installation, it is recommended to create a virtual or conda environment:

pyenv
pyenv virtualenv 3.10 ydatasdk\n

And install ydata-sdk

pypi
pip install ydata-sdk\n
"},{"location":"sdk/installation/#authentication","title":"Authentication","text":"

Once you've installed the ydata-sdk package, you will need a token to run its functionalities. YData SDK uses a token-based authentication system. To get access to your token, you need to create a YData account.

YData SDK offers a free-trial and an enterprise version. To access your free-trial token, you need to create a YData account.

The token will be available here, after login:

With your account token copied, you can set a new environment variable YDATA_TOKEN at the beginning of your development session.

    import os\nos.environ['YDATA_TOKEN'] = '{add-your-token}'\n

Once you have set your token, you are good to go to start exploring the incredible world of data-centric AI and smart synthetic data generation!

Check out our quickstart guide!

"},{"location":"sdk/quickstart/","title":"Quickstart","text":"

YData SDK allows you, with an easy and familiar interface, to adopt a Data-Centric AI approach for the development of Machine Learning solutions. YData SDK features were designed to support structured data, including tabular data, time-series and transactional data.

"},{"location":"sdk/quickstart/#read-data","title":"Read data","text":"

To start leveraging the package features, you should consume your data either through the Connectors or a pandas.DataFrame. The list of available connectors can be found here [add a link].

From pandas dataframe / From a connector
    # Example for a Google Cloud Storage Connector\ncredentials = \"{insert-credentials-file-path}\"\n# We create a new connector for Google Cloud Storage\nconnector = Connector(connector_type='gcs', credentials=credentials)\n# Create a Datasource from the connector\n# Note that a connector can be re-used for several datasources\nX = DataSource(connector=connector, path='gs://<my_bucket>.csv')\n
    import pandas as pd\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Load a small dataset\nX = pd.read_csv('{insert-file-path.csv}')\n# Init a synthesizer\nsynth = RegularSynthesizer()\n# Train the synthesizer with the pandas Dataframe as input\n# The data is then sent to the cluster for processing\nsynth.fit(X)\n

The synthesis process returns a pandas.DataFrame object. Note that if you are using the ydata-sdk free version, all of your data is sent to a remote cluster on YData's infrastructure.

"},{"location":"sdk/quickstart/#data-synthesis-flow","title":"Data synthesis flow","text":"

The process of data synthesis can be described in the following steps:

stateDiagram-v2\n  state read_data\n  read_data --> init_synth\n  init_synth --> train_synth\n  train_synth --> generate_samples\n  generate_samples --> [*]

The code snippet below shows how easy it can be to start generating new synthetic data. The package includes a set of example datasets for a quickstart.

    from ydata.sdk.dataset import get_dataset\nfrom ydata.sdk.synthesizers import RegularSynthesizer\n# Read the example data\nX = get_dataset('census')\n# Init a synthesizer\nsynth = RegularSynthesizer()\n# Fit the synthesizer to the input data\nsynth.fit(X)\n# Sample new synthetic data. The request below asks for 1000 new synthetic rows\nsynth.sample(n_samples=1000)\n

Do I need to prepare my data before synthesis?

The sdk ensures that the original behaviour is replicated. For that reason, there is no need to preprocess outlier observations or missing data.

By default all the missing data is replicated as NaN.

"},{"location":"sdk/modules/connectors/","title":"Connectors","text":"

YData SDK allows users to consume data assets from remote storages through Connectors. YData Connectors support different types of storages, from filesystems to RDBMSs.

Below is the list of available connectors:

| Connector Name | Type | Supported File Types | Useful Links | Notes |
| --- | --- | --- | --- | --- |
| AWS S3 | Remote object storage | CSV, Parquet | https://aws.amazon.com/s3/ | |
| Google Cloud Storage | Remote object storage | CSV, Parquet | https://cloud.google.com/storage | |
| Azure Blob Storage | Remote object storage | CSV, Parquet | https://azure.microsoft.com/en-us/services/storage/blobs/ | |
| File Upload | Local | CSV | - | Maximum file size is 220MB. Bigger files should be uploaded and read from remote object storages |
| MySQL | RDBMS | Not applicable | https://www.mysql.com/ | Supports reading whole schemas or specifying a query |
| Azure SQL Server | RDBMS | Not applicable | https://azure.microsoft.com/en-us/services/sql-database/campaign/ | Supports reading whole schemas or specifying a query |
| PostgreSQL | RDBMS | Not applicable | https://www.postgresql.org/ | Supports reading whole schemas or specifying a query |
| Snowflake | RDBMS | Not applicable | https://docs.snowflake.com/en/sql-reference-commands | Supports reading whole schemas or specifying a query |
| Google BigQuery | Data warehouse | Not applicable | https://cloud.google.com/bigquery | |
| Azure Data Lake | Data lake | CSV, Parquet | https://azure.microsoft.com/en-us/services/storage/data-lake-storage/ | |

More details can be found in the Connectors API Reference Docs.
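As an illustrative sketch of how a connector from the table above is used in code (reusing the Google Cloud Storage pattern from the quickstart; the Connector import path is assumed, and the credentials path and bucket are placeholders):

from ydata.sdk.connectors import Connector      # assumed import path
from ydata.sdk.datasources import DataSource

credentials = "{insert-credentials-file-path}"
# Create a connector for Google Cloud Storage; a connector can be re-used for several datasources
connector = Connector(connector_type='gcs', credentials=credentials)
X = DataSource(connector=connector, path='gs://<my_bucket>.csv')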

"},{"location":"sdk/modules/synthetic_data/","title":"Synthetic data generation","text":""},{"location":"sdk/modules/synthetic_data/#data-formats","title":"Data formats","text":""},{"location":"sdk/modules/synthetic_data/#tabular-data","title":"Tabular data","text":""},{"location":"sdk/modules/synthetic_data/#time-series-data","title":"Time-series data","text":""},{"location":"sdk/modules/synthetic_data/#transactions-data","title":"Transactions data","text":""},{"location":"sdk/modules/synthetic_data/#best-practices","title":"Best practices","text":""},{"location":"sdk/reference/api/common/client/","title":"Get client","text":"

Deduce how to initialize or retrieve the client.

This is meant to be a zero configuration for the user.

Create and set a client globally
from ydata.sdk.client import get_client\nget_client(set_as_global=True)\n

Parameters:

client_or_creds (Optional[Union[Client, dict, str, Path]], default None): Client to forward or credentials for initialization
set_as_global (bool, default False): If True, set client as global
wait_for_auth (bool, default True): If True, wait for the user to authenticate

Returns:

Client: Client instance

Source code in ydata/sdk/common/client/utils.py
def get_client(client_or_creds: Optional[Union[Client, Dict, str, Path]] = None, set_as_global: bool = False, wait_for_auth: bool = True) -> Client:\n\"\"\"Deduce how to initialize or retrieve the client.\n    This is meant to be a zero configuration for the user.\n    Example: Create and set a client globally\n            ```py\n            from ydata.sdk.client import get_client\n            get_client(set_as_global=True)\n            ```\n    Args:\n        client_or_creds (Optional[Union[Client, dict, str, Path]]): Client to forward or credentials for initialization\n        set_as_global (bool): If `True`, set client as global\n        wait_for_auth (bool): If `True`, wait for the user to authenticate\n    Returns:\n        Client instance\n    \"\"\"\nclient = None\nglobal WAITING_FOR_CLIENT\ntry:\n# If a client instance is set globally, return it\nif not set_as_global and Client.GLOBAL_CLIENT is not None:\nreturn Client.GLOBAL_CLIENT\n# Client exists, forward it\nif isinstance(client_or_creds, Client):\nreturn client_or_creds\n# Explicit credentials\n''' # For the first version, we deactivate explicit credentials via string or file for env var only\n        if isinstance(client_or_creds, (dict, str, Path)):\n            if isinstance(client_or_creds, str):  # noqa: SIM102\n                if Path(client_or_creds).is_file():\n                    client_or_creds = Path(client_or_creds)\n            if isinstance(client_or_creds, Path):\n                client_or_creds = json.loads(client_or_creds.open().read())\n            return Client(credentials=client_or_creds)\n        # Last try with environment variables\n        #if client_or_creds is None:\n        client = _client_from_env(wait_for_auth=wait_for_auth)\n        '''\ncredentials = environ.get(TOKEN_VAR)\nif credentials is not None:\nclient = Client(credentials=credentials)\nexcept ClientHandshakeError as e:\nwait_for_auth = False  # For now deactivate wait_for_auth until the backend is ready\nif wait_for_auth:\nWAITING_FOR_CLIENT = True\nstart = time()\nlogin_message_printed = False\nwhile client is None:\nif not login_message_printed:\nprint(\nf\"The token needs to be refreshed - please validate your token by browsing at the following URL:\\n\\n\\t{e.auth_link}\")\nlogin_message_printed = True\nwith suppress(ClientCreationError):\nsleep(BACKOFF)\nclient = get_client(wait_for_auth=False)\nnow = time()\nif now - start > CLIENT_INIT_TIMEOUT:\nWAITING_FOR_CLIENT = False\nbreak\nif client is None and not WAITING_FOR_CLIENT:\nsys.tracebacklimit = None\nraise ClientCreationError\nreturn client\n

Main Client class used to abstract the connection to the backend.

A normal user should not have to instantiate a Client by themselves. However, in the future it will be useful for power-users to manage projects and connections.

Parameters:

credentials (Optional[dict], default None): (optional) Credentials to connect
project (Optional[Project], default None): (optional) Project to connect to. If not specified, the client will connect to the user's default project.

Source code in ydata/sdk/common/client/client.py
@typechecked\nclass Client(metaclass=SingletonClient):\n\"\"\"Main Client class used to abstract the connection to the backend.\n    A normal user should not have to instanciate a [`Client`][ydata.sdk.common.client.Client] by itself.\n    However, in the future it will be useful for power-users to manage projects and connections.\n    Args:\n        credentials (Optional[dict]): (optional) Credentials to connect\n        project (Optional[Project]): (optional) Project to connect to. If not specified, the client will connect to the default user's project.\n    \"\"\"\ncodes = codes\ndef __init__(self, credentials: Optional[Union[str, Dict]] = None, project: Optional[Project] = None, set_as_global: bool = False):\nself._base_url = environ.get(\"YDATA_BASE_URL\", DEFAULT_URL)\nself._scheme = 'https'\nself._headers = {'Authorization': credentials}\nself._http_client = httpClient(\nheaders=self._headers, timeout=Timeout(10, read=None))\nself._handshake()\nself._project = project if project is not None else self._get_default_project(\ncredentials)\nself.project = project\nif set_as_global:\nself.__set_global()\ndef post(self, endpoint: str, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"POST request to the backend.\n        Args:\n            endpoint (str): POST endpoint\n            data (Optional[dict]): (optional) multipart form data\n            json (Optional[dict]): (optional) json data\n            files (Optional[dict]): (optional) files to be sent\n            raise_for_status (bool): raise an exception on error\n        Returns:\n            Response object\n        \"\"\"\nurl_data = self.__build_url(endpoint, data=data, json=json, files=files)\nresponse = self._http_client.post(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\ndef get(self, endpoint: str, params: Optional[Dict] = None, cookies: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"GET request to the backend.\n        Args:\n            endpoint (str): GET endpoint\n            cookies (Optional[dict]): (optional) cookies data\n            raise_for_status (bool): raise an exception on error\n        Returns:\n            Response object\n        \"\"\"\nurl_data = self.__build_url(endpoint, params=params, cookies=cookies)\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\ndef get_static_file(self, endpoint: str, raise_for_status: bool = True) -> Response:\n\"\"\"Retrieve a static file from the backend.\n        Args:\n            endpoint (str): GET endpoint\n            raise_for_status (bool): raise an exception on error\n        Returns:\n            Response object\n        \"\"\"\nurl_data = self.__build_url(endpoint)\nurl_data['url'] = f'{self._scheme}://{self._base_url}/static-content{endpoint}'\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\ndef _handshake(self):\n\"\"\"Client handshake.\n        It is used to determine is the client can connect with its\n        current authorization token.\n        \"\"\"\nresponse = self.get('/profiles', params={}, raise_for_status=False)\nif response.status_code == Client.codes.FOUND:\nparser = LinkExtractor()\nparser.feed(response.text)\nraise 
ClientHandshakeError(auth_link=parser.link)\ndef _get_default_project(self, token: str):\nresponse = self.get('/profiles/me', params={}, cookies={'access_token': token})\ndata: Dict = response.json()\nreturn data['myWorkspace']\ndef __build_url(self, endpoint: str, params: Optional[Dict] = None, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, cookies: Optional[Dict] = None) -> Dict:\n\"\"\"Build a request for the backend.\n        Args:\n            endpoint (str): backend endpoint\n            params (Optional[dict]): URL parameters\n            data (Optional[Project]): (optional) multipart form data\n            json (Optional[dict]): (optional) json data\n            files (Optional[dict]): (optional) files to be sent\n            cookies (Optional[dict]): (optional) cookies data\n        Returns:\n            dictionary containing the information to perform a request\n        \"\"\"\n_params = params if params is not None else {\n'ns': self._project\n}\nurl_data = {\n'url': f'{self._scheme}://{self._base_url}/api{endpoint}',\n'headers': self._headers,\n'params': _params,\n}\nif data is not None:\nurl_data['data'] = data\nif json is not None:\nurl_data['json'] = json\nif files is not None:\nurl_data['files'] = files\nif cookies is not None:\nurl_data['cookies'] = cookies\nreturn url_data\ndef __set_global(self) -> None:\n\"\"\"Sets a client instance as global.\"\"\"\n# If the client is stateful, close it gracefully!\nClient.GLOBAL_CLIENT = self\ndef __raise_for_status(self, response: Response) -> None:\n\"\"\"Raise an exception if the response is not OK.\n        When an exception is raised, we try to convert it to a ResponseError which is\n        a wrapper around a backend error. This usually gives enough context and provides\n        nice error message.\n        If it cannot be converted to ResponseError, it is re-raised.\n        Args:\n            response (Response): response to analyze\n        \"\"\"\ntry:\nresponse.raise_for_status()\nexcept HTTPStatusError as e:\nwith suppress(Exception):\ne = ResponseError(**response.json())\nraise e\n
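For power-users, a minimal sketch of instantiating a Client explicitly (the token is a placeholder; most users should rely on the YDATA_TOKEN environment variable and get_client instead):

from ydata.sdk.common.client import Client

# Explicit instantiation with a credentials token and registration as the global client
client = Client(credentials='<TOKEN>', set_as_global=True)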
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.__build_url","title":"__build_url(endpoint, params=None, data=None, json=None, files=None, cookies=None)","text":"

Build a request for the backend.

Parameters:

endpoint (str, required): backend endpoint
params (Optional[dict], default None): URL parameters
data (Optional[Project], default None): (optional) multipart form data
json (Optional[dict], default None): (optional) json data
files (Optional[dict], default None): (optional) files to be sent
cookies (Optional[dict], default None): (optional) cookies data

Returns:

Dict: dictionary containing the information to perform a request

Source code in ydata/sdk/common/client/client.py
def __build_url(self, endpoint: str, params: Optional[Dict] = None, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, cookies: Optional[Dict] = None) -> Dict:\n\"\"\"Build a request for the backend.\n    Args:\n        endpoint (str): backend endpoint\n        params (Optional[dict]): URL parameters\n        data (Optional[Project]): (optional) multipart form data\n        json (Optional[dict]): (optional) json data\n        files (Optional[dict]): (optional) files to be sent\n        cookies (Optional[dict]): (optional) cookies data\n    Returns:\n        dictionary containing the information to perform a request\n    \"\"\"\n_params = params if params is not None else {\n'ns': self._project\n}\nurl_data = {\n'url': f'{self._scheme}://{self._base_url}/api{endpoint}',\n'headers': self._headers,\n'params': _params,\n}\nif data is not None:\nurl_data['data'] = data\nif json is not None:\nurl_data['json'] = json\nif files is not None:\nurl_data['files'] = files\nif cookies is not None:\nurl_data['cookies'] = cookies\nreturn url_data\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.__raise_for_status","title":"__raise_for_status(response)","text":"

Raise an exception if the response is not OK.

When an exception is raised, we try to convert it to a ResponseError which is a wrapper around a backend error. This usually gives enough context and provides nice error message.

If it cannot be converted to ResponseError, it is re-raised.

Parameters:

response (Response, required): response to analyze

Source code in ydata/sdk/common/client/client.py
def __raise_for_status(self, response: Response) -> None:\n\"\"\"Raise an exception if the response is not OK.\n    When an exception is raised, we try to convert it to a ResponseError which is\n    a wrapper around a backend error. This usually gives enough context and provides\n    nice error message.\n    If it cannot be converted to ResponseError, it is re-raised.\n    Args:\n        response (Response): response to analyze\n    \"\"\"\ntry:\nresponse.raise_for_status()\nexcept HTTPStatusError as e:\nwith suppress(Exception):\ne = ResponseError(**response.json())\nraise e\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.__set_global","title":"__set_global()","text":"

Sets a client instance as global.

Source code in ydata/sdk/common/client/client.py
def __set_global(self) -> None:\n\"\"\"Sets a client instance as global.\"\"\"\n# If the client is stateful, close it gracefully!\nClient.GLOBAL_CLIENT = self\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.get","title":"get(endpoint, params=None, cookies=None, raise_for_status=True)","text":"

GET request to the backend.

Parameters:

Name Type Description Default endpoint str

GET endpoint

required cookies Optional[dict]

(optional) cookies data

None raise_for_status bool

raise an exception on error

True

Returns:

Type Description Response

Response object

Source code in ydata/sdk/common/client/client.py
def get(self, endpoint: str, params: Optional[Dict] = None, cookies: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"GET request to the backend.\n    Args:\n        endpoint (str): GET endpoint\n        cookies (Optional[dict]): (optional) cookies data\n        raise_for_status (bool): raise an exception on error\n    Returns:\n        Response object\n    \"\"\"\nurl_data = self.__build_url(endpoint, params=params, cookies=cookies)\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\n
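A minimal usage sketch, assuming an already-authenticated Client instance is available (how that instance is obtained is not shown here); the endpoint is one of the backend routes used elsewhere in this reference:

```python
def fetch_datasources(client) -> list:
    # Issue a GET against the backend through an existing, authenticated Client.
    # With raise_for_status=True (the default), a non-OK answer is converted
    # into a ResponseError before reaching this point.
    response = client.get('/datasource')
    return response.json()
```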
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.get_static_file","title":"get_static_file(endpoint, raise_for_status=True)","text":"

Retrieve a static file from the backend.

Parameters:

Name Type Description Default endpoint str

GET endpoint

required raise_for_status bool

raise an exception on error

True

Returns:

Type Description Response

Response object

Source code in ydata/sdk/common/client/client.py
def get_static_file(self, endpoint: str, raise_for_status: bool = True) -> Response:\n\"\"\"Retrieve a static file from the backend.\n    Args:\n        endpoint (str): GET endpoint\n        raise_for_status (bool): raise an exception on error\n    Returns:\n        Response object\n    \"\"\"\nurl_data = self.__build_url(endpoint)\nurl_data['url'] = f'{self._scheme}://{self._base_url}/static-content{endpoint}'\nresponse = self._http_client.get(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\n
"},{"location":"sdk/reference/api/common/client/#ydata.sdk.common.client.client.Client.post","title":"post(endpoint, data=None, json=None, files=None, raise_for_status=True)","text":"

POST request to the backend.

Parameters:

Name Type Description Default endpoint str

POST endpoint

required data Optional[dict]

(optional) multipart form data

None json Optional[dict]

(optional) json data

None files Optional[dict]

(optional) files to be sent

None raise_for_status bool

raise an exception on error

True

Returns:

Type Description Response

Response object

Source code in ydata/sdk/common/client/client.py
def post(self, endpoint: str, data: Optional[Dict] = None, json: Optional[Dict] = None, files: Optional[Dict] = None, raise_for_status: bool = True) -> Response:\n\"\"\"POST request to the backend.\n    Args:\n        endpoint (str): POST endpoint\n        data (Optional[dict]): (optional) multipart form data\n        json (Optional[dict]): (optional) json data\n        files (Optional[dict]): (optional) files to be sent\n        raise_for_status (bool): raise an exception on error\n    Returns:\n        Response object\n    \"\"\"\nurl_data = self.__build_url(endpoint, data=data, json=json, files=files)\nresponse = self._http_client.post(**url_data)\nif response.status_code != Client.codes.OK and raise_for_status:\nself.__raise_for_status(response)\nreturn response\n
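A small sketch of a POST call, again assuming an authenticated Client; the payload mirrors the one built internally by Connector._create_model, but the concrete values are placeholders:

```python
def register_connector(client, name: str, connector_type: str, credentials: dict) -> dict:
    # POST /connector/ with a JSON body; the payload shape follows the one used
    # by Connector._create_model (type, credentials, name). Values are placeholders.
    payload = {"type": connector_type, "credentials": credentials, "name": name}
    response = client.post('/connector/', json=payload)
    return response.json()
```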
"},{"location":"sdk/reference/api/common/types/","title":"Types","text":""},{"location":"sdk/reference/api/connectors/connector/","title":"Connector","text":"

Bases: ModelFactoryMixin

A Connector allows you to connect to and access data stored in various places. The list of available connector types can be found under ConnectorType.

Parameters:

Name Type Description Default connector_type Union[ConnectorType, str]

Type of the connector to be created

None credentials dict

Connector credentials

None name Optional[str]

(optional) Connector name

None client Client

(optional) Client to connect to the backend

None

Attributes:

Name Type Description uid UID

UID of the connector instance (created internally)

type ConnectorType

Type of the connector

Source code in ydata/sdk/connectors/connector.py
class Connector(ModelFactoryMixin):\n\"\"\"A [`Connector`][ydata.sdk.connectors.Connector] allows to connect and\n    access data stored in various places. The list of available connectors can\n    be found [here][ydata.sdk.connectors.ConnectorType].\n    Arguments:\n        connector_type (Union[ConnectorType, str]): Type of the connector to be created\n        credentials (dict): Connector credentials\n        name (Optional[str]): (optional) Connector name\n        client (Client): (optional) Client to connect to the backend\n    Attributes:\n        uid (UID): UID fo the connector instance (creating internally)\n        type (ConnectorType): Type of the connector\n    \"\"\"\ndef __init__(self, connector_type: Union[ConnectorType, str] = None, credentials: Optional[Dict] = None,  name: Optional[str] = None, client: Optional[Client] = None):\nself._init_common(client=client)\nself._model: Optional[mConnector] = self._create_model(\nconnector_type, credentials, name, client=client)\n@init_client\ndef _init_common(self, client: Optional[Client] = None):\nself._client = client\nself._logger = create_logger(__name__, level=LOG_LEVEL)\n@property\ndef uid(self) -> UID:\nreturn self._model.uid\n@property\ndef type(self) -> str:\nreturn self._model.type\n@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Get an existing connector.\n        Arguments:\n            uid (UID): Connector identifier\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            Connector\n        \"\"\"\nconnectors: ConnectorsList = Connector.list(client=client)\ndata = connectors.get_by_uid(uid)\nmodel = mConnector(**data)\nconnector = ModelFactoryMixin._init_from_model_data(Connector, model)\nreturn connector\n@staticmethod\ndef _init_connector_type(connector_type: Union[ConnectorType, str]) -> ConnectorType:\nif isinstance(connector_type, str):\ntry:\nconnector_type = ConnectorType(connector_type)\nexcept Exception:\nc_list = \", \".join([c.value for c in ConnectorType])\nraise InvalidConnectorError(\nf\"ConnectorType '{connector_type}' does not exist.\\nValid connector types are: {c_list}.\")\nreturn connector_type\n@staticmethod\ndef _init_credentials(connector_type: ConnectorType, credentials: Union[str, Path, Dict, Credentials]) -> Credentials:\n_credentials = None\nif isinstance(credentials, str):\ncredentials = Path(credentials)\nif isinstance(credentials, Path):\ntry:\n_credentials = json_loads(credentials.open().read())\nexcept Exception:\nraise CredentialTypeError(\n'Could not read the credentials. Please, check your path or credentials structure.')\ntry:\nfrom ydata.sdk.connectors._models.connector_map import TYPE_TO_CLASS\ncredential_cls = TYPE_TO_CLASS.get(connector_type.value)\n_credentials = credential_cls(**_credentials)\nexcept Exception:\nraise CredentialTypeError(\n\"Could not create the credentials. 
Verify the path or the structure your credentials.\")\nreturn _credentials\n@staticmethod\ndef create(connector_type: Union[ConnectorType, str], credentials: Union[str, Path, Dict, Credentials], name: Optional[str] = None, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Create a new connector.\n        Arguments:\n            connector_type (Union[ConnectorType, str]): Type of the connector to be created\n            credentials (dict): Connector credentials\n            name (Optional[str]): (optional) Connector name\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            New connector\n        \"\"\"\nmodel = Connector._create_model(\nconnector_type=connector_type, credentials=credentials, name=name, client=client)\nconnector = ModelFactoryMixin._init_from_model_data(\nConnector, model)\nreturn connector\n@classmethod\n@init_client\ndef _create_model(cls, connector_type: Union[ConnectorType, str], credentials: Union[str, Path, Dict, Credentials], name: Optional[str] = None, client: Optional[Client] = None) -> mConnector:\n_name = name if name is not None else str(uuid4())\n_connector_type = Connector._init_connector_type(connector_type)\n_credentials = Connector._init_credentials(_connector_type, credentials)\npayload = {\n\"type\": _connector_type.value,\n\"credentials\": _credentials.dict(by_alias=True),\n\"name\": _name\n}\nresponse = client.post('/connector/', json=payload)\ndata: list = response.json()\nreturn mConnector(**data)\n@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> ConnectorsList:\n\"\"\"List the connectors instances.\n        Arguments:\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            List of connectors\n        \"\"\"\nresponse = client.get('/connector')\ndata: list = response.json()\nreturn ConnectorsList(data)\ndef __repr__(self):\nreturn self._model.__repr__()\n
"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.connector.Connector.create","title":"create(connector_type, credentials, name=None, client=None) staticmethod","text":"

Create a new connector.

Parameters:

Name Type Description Default connector_type Union[ConnectorType, str]

Type of the connector to be created

required credentials dict

Connector credentials

required name Optional[str]

(optional) Connector name

None client Client

(optional) Client to connect to the backend

None

Returns:

Type Description Connector

New connector

Source code in ydata/sdk/connectors/connector.py
@staticmethod\ndef create(connector_type: Union[ConnectorType, str], credentials: Union[str, Path, Dict, Credentials], name: Optional[str] = None, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Create a new connector.\n    Arguments:\n        connector_type (Union[ConnectorType, str]): Type of the connector to be created\n        credentials (dict): Connector credentials\n        name (Optional[str]): (optional) Connector name\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        New connector\n    \"\"\"\nmodel = Connector._create_model(\nconnector_type=connector_type, credentials=credentials, name=name, client=client)\nconnector = ModelFactoryMixin._init_from_model_data(\nConnector, model)\nreturn connector\n
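A minimal usage sketch, assuming the public import path ydata.sdk.connectors and a credentials JSON file on disk; the file path and connector name are placeholders, and the expected credential keys depend on the connector type:

```python
from ydata.sdk.connectors import Connector, ConnectorType

# Create an AWS S3 connector from a credentials file (placeholder path and name).
connector = Connector.create(
    connector_type=ConnectorType.AWS_S3,     # the string 'aws-s3' is also accepted
    credentials="credentials/aws_s3.json",
    name="my-s3-connector",
)
print(connector.uid, connector.type)
```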
"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.connector.Connector.get","title":"get(uid, client=None) staticmethod","text":"

Get an existing connector.

Parameters:

Name Type Description Default uid UID

Connector identifier

required client Client

(optional) Client to connect to the backend

None

Returns:

Type Description Connector

Connector

Source code in ydata/sdk/connectors/connector.py
@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"Connector\":\n\"\"\"Get an existing connector.\n    Arguments:\n        uid (UID): Connector identifier\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        Connector\n    \"\"\"\nconnectors: ConnectorsList = Connector.list(client=client)\ndata = connectors.get_by_uid(uid)\nmodel = mConnector(**data)\nconnector = ModelFactoryMixin._init_from_model_data(Connector, model)\nreturn connector\n
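A short sketch, assuming a connector was created previously and its UID is known (the value below is a placeholder); with no explicit client, the default global client is used:

```python
from ydata.sdk.connectors import Connector

connector = Connector.get(uid="<connector-uid>")  # placeholder UID
print(connector.type)
```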
"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.connector.Connector.list","title":"list(client=None) staticmethod","text":"

List the connector instances.

Parameters:

Name Type Description Default client Client

(optional) Client to connect to the backend

None

Returns:

Type Description ConnectorsList

List of connectors

Source code in ydata/sdk/connectors/connector.py
@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> ConnectorsList:\n\"\"\"List the connectors instances.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        List of connectors\n    \"\"\"\nresponse = client.get('/connector')\ndata: list = response.json()\nreturn ConnectorsList(data)\n
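A one-liner sketch listing the connectors visible to the default client:

```python
from ydata.sdk.connectors import Connector

# Returns a ConnectorsList built from GET /connector.
print(Connector.list())
```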
"},{"location":"sdk/reference/api/connectors/connector/#connectortype","title":"ConnectorType","text":"

Bases: Enum

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.AWS_S3","title":"AWS_S3 = 'aws-s3' class-attribute instance-attribute","text":"

AWS S3 connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.AZURE_BLOB","title":"AZURE_BLOB = 'azure-blob' class-attribute instance-attribute","text":"

Azure Blob connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.AZURE_SQL","title":"AZURE_SQL = 'azure-sql' class-attribute instance-attribute","text":"

AzureSQL connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.BIGQUERY","title":"BIGQUERY = 'google-bigquery' class-attribute instance-attribute","text":"

BigQuery connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.FILE","title":"FILE = 'file' class-attribute instance-attribute","text":"

File connector (placeholder)

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.GCS","title":"GCS = 'gcs' class-attribute instance-attribute","text":"

Google Cloud Storage connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.MYSQL","title":"MYSQL = 'mysql' class-attribute instance-attribute","text":"

MySQL connector

"},{"location":"sdk/reference/api/connectors/connector/#ydata.sdk.connectors.ConnectorType.SNOWFLAKE","title":"SNOWFLAKE = 'snowflake' class-attribute instance-attribute","text":"

Snowflake connector
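
A small sketch showing that the ConnectorType members and their string values are interchangeable when creating connectors:

```python
from ydata.sdk.connectors import ConnectorType

# Enum members can be looked up from their string value, so Connector.create
# accepts either form (e.g. ConnectorType.BIGQUERY or 'google-bigquery').
assert ConnectorType('google-bigquery') is ConnectorType.BIGQUERY
assert ConnectorType.GCS.value == 'gcs'
```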

"},{"location":"sdk/reference/api/datasources/datasource/","title":"DataSource","text":"

Bases: ModelFactoryMixin

A DataSource represents a dataset to be used by a Synthesizer as training data.

Parameters:

Name Type Description Default connector Connector

Connector from which the datasource is created

required datatype Optional[Union[DataSourceType, str]]

(optional) DataSource type

TABULAR name Optional[str]

(optional) DataSource name

None wait_for_metadata bool

If True, wait until the metadata is fully calculated

True client Client

(optional) Client to connect to the backend

None **config

Datasource specific configuration

{}

Attributes:

Name Type Description uid UID

UID of the datasource instance

datatype DataSourceType

Data source type

status Status

Status of the datasource

metadata Metadata

Metadata associated to the datasource

Source code in ydata/sdk/datasources/datasource.py
class DataSource(ModelFactoryMixin):\n\"\"\"A [`DataSource`][ydata.sdk.datasources.DataSource] represents a dataset\n    to be used by a Synthesizer as training data.\n    Arguments:\n        connector (Connector): Connector from which the datasource is created\n        datatype (Optional[Union[DataSourceType, str]]): (optional) DataSource type\n        name (Optional[str]): (optional) DataSource name\n        wait_for_metadata (bool): If `True`, wait until the metadata is fully calculated\n        client (Client): (optional) Client to connect to the backend\n        **config: Datasource specific configuration\n    Attributes:\n        uid (UID): UID fo the datasource instance\n        datatype (DataSourceType): Data source type\n        status (Status): Status of the datasource\n        metadata (Metadata): Metadata associated to the datasource\n    \"\"\"\ndef __init__(self, connector: Connector, datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None, **config):\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(connector.type)\nself._init_common(client=client)\nself._model: Optional[mDataSource] = self._create_model(\nconnector=connector, datasource_type=datasource_type, datatype=datatype, config=config, name=name, client=self._client)\nif wait_for_metadata:\nself._model = DataSource._wait_for_metadata(self)._model\n@init_client\ndef _init_common(self, client: Optional[Client] = None):\nself._client = client\nself._logger = create_logger(__name__, level=LOG_LEVEL)\n@property\ndef uid(self) -> UID:\nreturn self._model.uid\n@property\ndef datatype(self) -> DataSourceType:\nreturn self._model.datatype\n@property\ndef status(self) -> Status:\ntry:\nself._model = self.get(self._model.uid, self._client)._model\nreturn self._model.status\nexcept Exception:  # noqa: PIE786\nreturn Status.UNKNOWN\n@property\ndef metadata(self) -> Metadata:\nreturn self._model.metadata\n@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> DataSourceList:\n\"\"\"List the  [`DataSource`][ydata.sdk.datasources.DataSource]\n        instances.\n        Arguments:\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            List of datasources\n        \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/datasource')\ndata: list = response.json()\ndata = __process_data(data)\nreturn DataSourceList(data)\n@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"DataSource\":\n\"\"\"Get an existing [`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            uid (UID): DataSource identifier\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            DataSource\n        \"\"\"\nresponse = client.get(f'/datasource/{uid}')\ndata: list = response.json()\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(\nConnectorType(data['connector']['type']))\nmodel = DataSource._model_from_api(data, datasource_type)\ndatasource = ModelFactoryMixin._init_from_model_data(DataSource, model)\nreturn datasource\n@classmethod\ndef create(cls, connector: Connector, datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None, **config) -> \"DataSource\":\n\"\"\"Create a new 
[`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            connector (Connector): Connector from which the datasource is created\n            datatype (Optional[Union[DataSourceType, str]]): (optional) DataSource type\n            name (Optional[str]): (optional) DataSource name\n            wait_for_metadata (bool): If `True`, wait until the metadata is fully calculated\n            client (Client): (optional) Client to connect to the backend\n            **config: Datasource specific configuration\n        Returns:\n            DataSource\n        \"\"\"\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(connector.type)\nreturn cls._create(connector=connector, datasource_type=datasource_type, datatype=datatype, config=config, name=name, wait_for_metadata=wait_for_metadata, client=client)\n@classmethod\ndef _create(cls, connector: Connector, datasource_type: Type[mDataSource], datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, config: Optional[Dict] = None, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None) -> \"DataSource\":\nmodel = DataSource._create_model(\nconnector, datasource_type, datatype, config, name, client)\ndatasource = ModelFactoryMixin._init_from_model_data(DataSource, model)\nif wait_for_metadata:\ndatasource._model = DataSource._wait_for_metadata(datasource)._model\nreturn datasource\n@classmethod\n@init_client\ndef _create_model(cls, connector: Connector, datasource_type: Type[mDataSource], datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, config: Optional[Dict] = None, name: Optional[str] = None, client: Optional[Client] = None) -> mDataSource:\n_name = name if name is not None else str(uuid4())\n_config = config if config is not None else {}\npayload = {\n\"name\": _name,\n\"connector\": {\n\"uid\": connector.uid,\n\"type\": connector.type.value\n},\n\"dataType\": datatype.value\n}\nif connector.type != ConnectorType.FILE:\n_config = datasource_type(**config).to_payload()\npayload.update(_config)\nresponse = client.post('/datasource/', json=payload)\ndata: list = response.json()\nreturn DataSource._model_from_api(data, datasource_type)\n@staticmethod\ndef _wait_for_metadata(datasource):\nlogger = create_logger(__name__, level=LOG_LEVEL)\nwhile datasource.status not in [Status.AVAILABLE, Status.FAILED, Status.UNAVAILABLE]:\nlogger.info(f'Calculating metadata [{datasource.status}]')\ndatasource = DataSource.get(uid=datasource.uid, client=datasource._client)\nsleep(BACKOFF)\nreturn datasource\n@staticmethod\ndef _resolve_api_status(api_status: Dict) -> Status:\nstatus = Status(api_status.get('state', Status.UNKNOWN.name))\nvalidation = ValidationState(api_status.get('validation', {}).get(\n'state', ValidationState.UNKNOWN.name))\nif validation == ValidationState.FAILED:\nstatus = Status.FAILED\nreturn status\n@staticmethod\ndef _model_from_api(data: Dict, datasource_type: Type[mDataSource]) -> mDataSource:\ndata['datatype'] = data.pop('dataType')\ndata['state'] = data['status']\ndata['status'] = DataSource._resolve_api_status(data['status'])\ndata = filter_dict(datasource_type, data)\nmodel = datasource_type(**data)\nreturn model\ndef __repr__(self):\nreturn self._model.__repr__()\n
"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.datasource.DataSource.create","title":"create(connector, datatype=DataSourceType.TABULAR, name=None, wait_for_metadata=True, client=None, **config) classmethod","text":"

Create a new DataSource.

Parameters:

Name Type Description Default connector Connector

Connector from which the datasource is created

required datatype Optional[Union[DataSourceType, str]]

(optional) DataSource type

TABULAR name Optional[str]

(optional) DataSource name

None wait_for_metadata bool

If True, wait until the metadata is fully calculated

True client Client

(optional) Client to connect to the backend

None **config

Datasource specific configuration

{}

Returns:

Type Description DataSource

DataSource

Source code in ydata/sdk/datasources/datasource.py
@classmethod\ndef create(cls, connector: Connector, datatype: Optional[Union[DataSourceType, str]] = DataSourceType.TABULAR, name: Optional[str] = None, wait_for_metadata: bool = True, client: Optional[Client] = None, **config) -> \"DataSource\":\n\"\"\"Create a new [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        connector (Connector): Connector from which the datasource is created\n        datatype (Optional[Union[DataSourceType, str]]): (optional) DataSource type\n        name (Optional[str]): (optional) DataSource name\n        wait_for_metadata (bool): If `True`, wait until the metadata is fully calculated\n        client (Client): (optional) Client to connect to the backend\n        **config: Datasource specific configuration\n    Returns:\n        DataSource\n    \"\"\"\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(connector.type)\nreturn cls._create(connector=connector, datasource_type=datasource_type, datatype=datatype, config=config, name=name, wait_for_metadata=wait_for_metadata, client=client)\n
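A usage sketch, assuming an existing connector; the UID is a placeholder, and any connector-specific options (for example a file path or SQL query passed through **config) depend on the connector type and are omitted here:

```python
from ydata.sdk.connectors import Connector
from ydata.sdk.datasources import DataSource, DataSourceType

connector = Connector.get(uid="<connector-uid>")      # placeholder UID

# Create a tabular datasource; with wait_for_metadata=True (the default) the call
# blocks until the metadata has been computed.
datasource = DataSource.create(
    connector=connector,
    datatype=DataSourceType.TABULAR,
    name="transactions",
)
print(datasource.uid, datasource.status)
```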
"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.datasource.DataSource.get","title":"get(uid, client=None) staticmethod","text":"

Get an existing DataSource.

Parameters:

Name Type Description Default uid UID

DataSource identifier

required client Client

(optional) Client to connect to the backend

None

Returns:

Type Description DataSource

DataSource

Source code in ydata/sdk/datasources/datasource.py
@staticmethod\n@init_client\ndef get(uid: UID, client: Optional[Client] = None) -> \"DataSource\":\n\"\"\"Get an existing [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        uid (UID): DataSource identifier\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        DataSource\n    \"\"\"\nresponse = client.get(f'/datasource/{uid}')\ndata: list = response.json()\ndatasource_type = CONNECTOR_TO_DATASOURCE.get(\nConnectorType(data['connector']['type']))\nmodel = DataSource._model_from_api(data, datasource_type)\ndatasource = ModelFactoryMixin._init_from_model_data(DataSource, model)\nreturn datasource\n
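A short sketch retrieving an existing datasource by UID (placeholder value) and inspecting it:

```python
from ydata.sdk.datasources import DataSource

datasource = DataSource.get(uid="<datasource-uid>")   # placeholder UID
print(datasource.status)
print([column.name for column in datasource.metadata.columns])
```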
"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.datasource.DataSource.list","title":"list(client=None) staticmethod","text":"

List the DataSource instances.

Parameters:

Name Type Description Default client Client

(optional) Client to connect to the backend

None

Returns:

Type Description DataSourceList

List of datasources

Source code in ydata/sdk/datasources/datasource.py
@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> DataSourceList:\n\"\"\"List the  [`DataSource`][ydata.sdk.datasources.DataSource]\n    instances.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        List of datasources\n    \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/datasource')\ndata: list = response.json()\ndata = __process_data(data)\nreturn DataSourceList(data)\n
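A one-liner sketch; note that the listing strips the metadata field, so use DataSource.get for the full details of a specific datasource:

```python
from ydata.sdk.datasources import DataSource

print(DataSource.list())
```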
"},{"location":"sdk/reference/api/datasources/datasource/#status","title":"Status","text":"

Bases: StringEnum

Represent the status of a DataSource.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.AVAILABLE","title":"AVAILABLE = 'available' class-attribute instance-attribute","text":"

The DataSource is available and ready to be used.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.DELETED","title":"DELETED = 'deleted' class-attribute instance-attribute","text":"

The DataSource is to be deleted or has been deleted.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.FAILED","title":"FAILED = 'failed' class-attribute instance-attribute","text":"

The DataSource preparation or validation has failed.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.PREPARING","title":"PREPARING = 'preparing' class-attribute instance-attribute","text":"

The DataSource is being prepared.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.UNAVAILABLE","title":"UNAVAILABLE = 'unavailable' class-attribute instance-attribute","text":"

The DataSource is unavailable at the moment.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.UNKNOWN","title":"UNKNOWN = 'unknown' class-attribute instance-attribute","text":"

The DataSource status could not be retrieved.

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.Status.VALIDATING","title":"VALIDATING = 'validating' class-attribute instance-attribute","text":"

The DataSource is being validated.

"},{"location":"sdk/reference/api/datasources/datasource/#datasourcetype","title":"DataSourceType","text":"

Bases: StringEnum

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.DataSourceType.TABULAR","title":"TABULAR = 'tabular' class-attribute instance-attribute","text":"

The DataSource is tabular (i.e. it does not have a temporal dimension).

"},{"location":"sdk/reference/api/datasources/datasource/#ydata.sdk.datasources.DataSourceType.TIMESERIES","title":"TIMESERIES = 'timeseries' class-attribute instance-attribute","text":"

The DataSource has a temporal dimension.

"},{"location":"sdk/reference/api/datasources/metadata/","title":"Metadata","text":"

Bases: BaseModel

The Metadata object contains descriptive information about a DataSource.

Attributes:

Name Type Description columns List[Column]

columns information
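
A small sketch, assuming an available datasource (placeholder UID): the column names exposed by the metadata are typically what you feed into arguments such as exclude_cols or dtypes when fitting a synthesizer; the identifier filter below is purely illustrative:

```python
from ydata.sdk.datasources import DataSource

datasource = DataSource.get(uid="<datasource-uid>")   # placeholder UID

# Hypothetical use of the metadata: keep every column except identifier-like ones.
column_names = [column.name for column in datasource.metadata.columns]
exclude = [name for name in column_names if name.endswith("_id")]
print(column_names, exclude)
```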

"},{"location":"sdk/reference/api/synthesizers/base/","title":"Synthesizer","text":"

Bases: ABC, ModelFactoryMixin

Main synthesizer class.

This class cannot be instantiated directly because of the differences between the RegularSynthesizer and TimeSeriesSynthesizer sample methods.

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer--methods","title":"Methods","text":" Note

The synthesizer instance is created in the backend only when the fit method is called.

Parameters:

Name Type Description Default client Client

(optional) Client to connect to the backend

None Source code in ydata/sdk/synthesizers/synthesizer.py
@typechecked\nclass BaseSynthesizer(ABC, ModelFactoryMixin):\n\"\"\"Main synthesizer class.\n    This class cannot be directly instanciated because of the specificities between [`RegularSynthesizer`][ydata.sdk.synthesizers.RegularSynthesizer] and [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] `sample` methods.\n    Methods\n    -------\n    - `fit`: train a synthesizer instance.\n    - `sample`: request synthetic data.\n    - `status`: current status of the synthesizer instance.\n    Note:\n            The synthesizer instance is created in the backend only when the `fit` method is called.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    \"\"\"\ndef __init__(self, client: Optional[Client] = None):\nself._init_common(client=client)\nself._model: Optional[mSynthesizer] = None\n@init_client\ndef _init_common(self, client: Optional[Client] = None):\nself._client = client\nself._logger = create_logger(__name__, level=LOG_LEVEL)\ndef fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\ndatatype: Optional[Union[DataSourceType, str]] = None,\nsortbykey: Optional[Union[str, List[str]]] = None,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n        The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n        When the training dataset is a pandas [`DataFrame`][pandas.DataFrame], the argument `datatype` is required as it cannot be deduced.\n        The argument`sortbykey` is mandatory for [`TimeSeries`][ydata.sdk.datasources.DataSourceType.TIMESERIES].\n        By default, if `generate_cols` or `exclude_cols` are not specified, all columns are generated by the synthesizer.\n        The argument `exclude_cols` has precedence over `generate_cols`, i.e. 
a column `col` will not be generated if it is in both list.\n        Arguments:\n            X (Union[DataSource, pandas.DataFrame]): Training dataset\n            privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n            datatype (Optional[Union[DataSourceType, str]]): (optional) Dataset datatype - required if `X` is a [`pandas.DataFrame`][pandas.DataFrame]\n            sortbykey (Union[str, List[str]]): (optional) column(s) to use to sort timeseries datasets\n            entities (Union[str, List[str]]): (optional) columns representing entities ID\n            generate_cols (List[str]): (optional) columns that should be synthesized\n            exclude_cols (List[str]): (optional) columns that should not be synthesized\n            dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n            target (Optional[str]): (optional) Target for the dataset\n            name (Optional[str]): (optional) Synthesizer instance name\n            anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n            condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n        \"\"\"\nif self._is_initialized():\nraise AlreadyFittedError()\n_datatype = DataSourceType(datatype) if isinstance(\nX, pdDataFrame) else DataSourceType(X.datatype)\ndataset_attrs = self._init_datasource_attributes(\nsortbykey, entities, generate_cols, exclude_cols, dtypes)\nself._validate_datasource_attributes(X, dataset_attrs, _datatype, target)\n# If the training data is a pandas dataframe, we first need to create a data source and then the instance\nif isinstance(X, pdDataFrame):\nif X.empty:\nraise EmptyDataError(\"The DataFrame is empty\")\n_X = LocalDataSource(source=X, datatype=_datatype, client=self._client)\nelse:\nif datatype != _datatype:\nwarn(\"When the training data is a DataSource, the argument `datatype` is ignored.\",\nDataSourceTypeWarning)\n_X = X\nif _X.status != dsStatus.AVAILABLE:\nraise DataSourceNotAvailableError(\nf\"The datasource '{_X.uid}' is not available (status = {_X.status.value})\")\nif isinstance(dataset_attrs, dict):\ndataset_attrs = DataSourceAttrs(**dataset_attrs)\nself._fit_from_datasource(\nX=_X, dataset_attrs=dataset_attrs, target=target, name=name,\nanonymize=anonymize, privacy_level=privacy_level, condition_on=condition_on)\n@staticmethod\ndef _init_datasource_attributes(\nsortbykey: Optional[Union[str, List[str]]],\nentities: Optional[Union[str, List[str]]],\ngenerate_cols: Optional[List[str]],\nexclude_cols: Optional[List[str]],\ndtypes: Optional[Dict[str, Union[str, DataType]]]) -> DataSourceAttrs:\ndataset_attrs = {\n'sortbykey': sortbykey if sortbykey is not None else [],\n'entities': entities if entities is not None else [],\n'generate_cols': generate_cols if generate_cols is not None else [],\n'exclude_cols': exclude_cols if exclude_cols is not None else [],\n'dtypes': {k: DataType(v) for k, v in dtypes.items()} if dtypes is not None else {}\n}\nreturn DataSourceAttrs(**dataset_attrs)\n@staticmethod\ndef _validate_datasource_attributes(X: Union[DataSource, pdDataFrame], dataset_attrs: DataSourceAttrs, datatype: DataSourceType, target: Optional[str]):\ncolumns = []\nif isinstance(X, pdDataFrame):\ncolumns = X.columns\nif datatype is None:\nraise DataTypeMissingError(\n\"Argument `datatype` is mandatory for pandas.DataFrame training data\")\ndatatype = DataSourceType(datatype)\nelse:\ncolumns = [c.name 
for c in X.metadata.columns]\nif target is not None and target not in columns:\nraise DataSourceAttrsError(\n\"Invalid target: column '{target}' does not exist\")\nif datatype == DataSourceType.TIMESERIES:\nif not dataset_attrs.sortbykey:\nraise DataSourceAttrsError(\n\"The argument `sortbykey` is mandatory for timeseries datasource.\")\ninvalid_fields = {}\nfor field, v in dataset_attrs.dict().items():\nfield_columns = v if field != 'dtypes' else v.keys()\nnot_in_cols = [c for c in field_columns if c not in columns]\nif len(not_in_cols) > 0:\ninvalid_fields[field] = not_in_cols\nif len(invalid_fields) > 0:\nerror_msgs = [\"\\t- Field '{}': columns {} do not exist\".format(\nf, ', '.join(v)) for f, v in invalid_fields.items()]\nraise DataSourceAttrsError(\n\"The dataset attributes are invalid:\\n {}\".format('\\n'.join(error_msgs)))\n@staticmethod\ndef _metadata_to_payload(\ndatatype: DataSourceType, ds_metadata: Metadata,\ndataset_attrs: Optional[DataSourceAttrs] = None, target: str | None = None\n) -> dict:\n\"\"\"Transform a the metadata and dataset attributes into a valid\n        payload.\n        Arguments:\n            datatype (DataSourceType): datasource type\n            ds_metadata (Metadata): datasource metadata object\n            dataset_attrs ( Optional[DataSourceAttrs] ): (optional) Dataset attributes\n            target (Optional[str]): (optional) target column name\n        Returns:\n            metadata payload dictionary\n        \"\"\"\ncolumns = [\n{\n'name': c.name,\n'generation': True and c.name not in dataset_attrs.exclude_cols,\n'dataType': DataType(dataset_attrs.dtypes[c.name]).value if c.name in dataset_attrs.dtypes else c.datatype,\n'varType': c.vartype,\n}\nfor c in ds_metadata.columns]\nmetadata = {\n'columns': columns,\n'target': target\n}\nif dataset_attrs is not None:\nif datatype == DataSourceType.TIMESERIES:\nmetadata['sortBy'] = [c for c in dataset_attrs.sortbykey]\nmetadata['entity'] = [c for c in dataset_attrs.entities]\nreturn metadata\ndef _fit_from_datasource(\nself,\nX: DataSource,\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\ndataset_attrs: Optional[DataSourceAttrs] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None\n) -> None:\n_name = name if name is not None else str(uuid4())\nmetadata = self._metadata_to_payload(\nDataSourceType(X.datatype), X.metadata, dataset_attrs, target)\npayload = {\n'name': _name,\n'dataSourceUID': X.uid,\n'metadata': metadata,\n'extraData': {},\n'privacyLevel': privacy_level.value\n}\nif anonymize is not None:\npayload[\"extraData\"][\"anonymize\"] = anonymize\nif condition_on is not None:\npayload[\"extraData\"][\"condition_on\"] = condition_on\nresponse = self._client.post('/synthesizer/', json=payload)\ndata: list = response.json()\nself._model, _ = self._model_from_api(X.datatype, data)\nwhile self.status not in [Status.READY, Status.FAILED]:\nself._logger.info('Training the synthesizer...')\nsleep(BACKOFF)\nif self.status == Status.FAILED:\nraise FittingError('Could not train the synthesizer')\n@staticmethod\ndef _model_from_api(datatype: str, data: Dict) -> Tuple[mSynthesizer, Type[\"BaseSynthesizer\"]]:\nfrom ydata.sdk.synthesizers._models.synthesizer_map import TYPE_TO_CLASS\nsynth_cls = TYPE_TO_CLASS.get(SynthesizerType(datatype).value)\ndata['status'] = synth_cls._resolve_api_status(data['status'])\ndata = filter_dict(mSynthesizer, data)\nreturn mSynthesizer(**data), synth_cls\n@abstractmethod\ndef 
sample(self) -> pdDataFrame:\n\"\"\"Abstract method to sample from a synthesizer.\"\"\"\ndef _sample(self, payload: Dict) -> pdDataFrame:\n\"\"\"Sample from a synthesizer.\n        Arguments:\n            payload (dict): payload configuring the sample request\n        Returns:\n            pandas `DataFrame`\n        \"\"\"\nresponse = self._client.post(\nf\"/synthesizer/{self.uid}/sample\", json=payload)\ndata: Dict = response.json()\nsample_uid = data.get('uid')\nsample_status = None\nwhile sample_status not in ['finished', 'failed']:\nself._logger.info('Sampling from the synthesizer...')\nresponse = self._client.get(f'/synthesizer/{self.uid}/history')\nhistory: Dict = response.json()\nsample_data = next((s for s in history if s.get('uid') == sample_uid), None)\nsample_status = sample_data.get('status', {}).get('state')\nsleep(BACKOFF)\nresponse = self._client.get_static_file(\nf'/synthesizer/{self.uid}/sample/{sample_uid}/sample.csv')\ndata = StringIO(response.content.decode())\nreturn read_csv(data)\n@property\ndef uid(self) -> UID:\n\"\"\"Get the status of a synthesizer instance.\n        Returns:\n            Synthesizer status\n        \"\"\"\nif not self._is_initialized():\nreturn Status.NOT_INITIALIZED\nreturn self._model.uid\n@property\ndef status(self) -> Status:\n\"\"\"Get the status of a synthesizer instance.\n        Returns:\n            Synthesizer status\n        \"\"\"\nif not self._is_initialized():\nreturn Status.NOT_INITIALIZED\ntry:\nself = self.get(self._model.uid, self._client)\nreturn self._model.status\nexcept Exception:  # noqa: PIE786\nreturn Status.UNKNOWN\n@staticmethod\n@init_client\ndef get(uid: str, client: Optional[Client] = None) -> \"BaseSynthesizer\":\n\"\"\"List the synthesizer instances.\n        Arguments:\n            uid (str): synthesizer instance uid\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            Synthesizer instance\n        \"\"\"\nresponse = client.get(f'/synthesizer/{uid}')\ndata: list = response.json()\nmodel, synth_cls = BaseSynthesizer._model_from_api(\ndata['dataSource']['dataType'], data)\nreturn ModelFactoryMixin._init_from_model_data(synth_cls, model)\n@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> SynthesizersList:\n\"\"\"List the synthesizer instances.\n        Arguments:\n            client (Client): (optional) Client to connect to the backend\n        Returns:\n            List of synthesizers\n        \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata', 'report', 'mode']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/synthesizer')\ndata: list = response.json()\ndata = __process_data(data)\nreturn SynthesizersList(data)\ndef _is_initialized(self) -> bool:\n\"\"\"Determine if a synthesizer is instanciated or not.\n        Returns:\n            True if the synthesizer is instanciated\n        \"\"\"\nreturn self._model is not None\n@staticmethod\ndef _resolve_api_status(api_status: Dict) -> Status:\n\"\"\"Determine the status of the Synthesizer.\n        The status of the synthesizer instance is determined by the state of\n        its different components.\n        Arguments:\n            api_status (dict): json from the endpoint GET /synthesizer\n        Returns:\n            Synthesizer Status\n        \"\"\"\nstatus = Status(api_status.get('state', Status.UNKNOWN.name))\nif status == Status.PREPARE:\nif PrepareState(api_status.get('prepare', {}).get(\n'state', PrepareState.UNKNOWN.name)) 
== PrepareState.FAILED:\nreturn Status.FAILED\nelif status == Status.TRAIN:\nif TrainingState(api_status.get('training', {}).get(\n'state', TrainingState.UNKNOWN.name)) == TrainingState.FAILED:\nreturn Status.FAILED\nelif status == Status.REPORT:\nreturn Status.READY\nreturn status\n
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.status","title":"status: Status property","text":"

Get the status of a synthesizer instance.

Returns:

Type Description Status

Synthesizer status

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.uid","title":"uid: UID property","text":"

Get the UID of a synthesizer instance.

Returns:

Type Description UID

Synthesizer UID

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.fit","title":"fit(X, privacy_level=PrivacyLevel.HIGH_FIDELITY, datatype=None, sortbykey=None, entities=None, generate_cols=None, exclude_cols=None, dtypes=None, target=None, name=None, anonymize=None, condition_on=None)","text":"

Fit the synthesizer.

The synthesizer accepts as training dataset either a pandas DataFrame directly or a YData DataSource. When the training dataset is a pandas DataFrame, the argument datatype is required as it cannot be deduced.

The argument sortbykey is mandatory for TimeSeries.

By default, if generate_cols or exclude_cols are not specified, all columns are generated by the synthesizer. The argument exclude_cols has precedence over generate_cols, i.e. a column col will not be generated if it appears in both lists.

Parameters:

Name Type Description Default X Union[DataSource, DataFrame]

Training dataset

required privacy_level PrivacyLevel

Synthesizer privacy level (defaults to high fidelity)

HIGH_FIDELITY datatype Optional[Union[DataSourceType, str]]

(optional) Dataset datatype - required if X is a pandas.DataFrame

None sortbykey Union[str, List[str]]

(optional) column(s) to use to sort timeseries datasets

None entities Union[str, List[str]]

(optional) columns representing entities ID

None generate_cols List[str]

(optional) columns that should be synthesized

None exclude_cols List[str]

(optional) columns that should not be synthesized

None dtypes Dict[str, Union[str, DataType]]

(optional) datatype mapping that will overwrite the datasource metadata column datatypes

None target Optional[str]

(optional) Target for the dataset

None name Optional[str]

(optional) Synthesizer instance name

None anonymize Optional[str]

(optional) fields to anonymize and the anonymization strategy

None condition_on Optional[List[str]]

(optional) list of features to condition upon

None Source code in ydata/sdk/synthesizers/synthesizer.py
def fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\ndatatype: Optional[Union[DataSourceType, str]] = None,\nsortbykey: Optional[Union[str, List[str]]] = None,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n    The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n    When the training dataset is a pandas [`DataFrame`][pandas.DataFrame], the argument `datatype` is required as it cannot be deduced.\n    The argument`sortbykey` is mandatory for [`TimeSeries`][ydata.sdk.datasources.DataSourceType.TIMESERIES].\n    By default, if `generate_cols` or `exclude_cols` are not specified, all columns are generated by the synthesizer.\n    The argument `exclude_cols` has precedence over `generate_cols`, i.e. a column `col` will not be generated if it is in both list.\n    Arguments:\n        X (Union[DataSource, pandas.DataFrame]): Training dataset\n        privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n        datatype (Optional[Union[DataSourceType, str]]): (optional) Dataset datatype - required if `X` is a [`pandas.DataFrame`][pandas.DataFrame]\n        sortbykey (Union[str, List[str]]): (optional) column(s) to use to sort timeseries datasets\n        entities (Union[str, List[str]]): (optional) columns representing entities ID\n        generate_cols (List[str]): (optional) columns that should be synthesized\n        exclude_cols (List[str]): (optional) columns that should not be synthesized\n        dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n        target (Optional[str]): (optional) Target for the dataset\n        name (Optional[str]): (optional) Synthesizer instance name\n        anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n        condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n    \"\"\"\nif self._is_initialized():\nraise AlreadyFittedError()\n_datatype = DataSourceType(datatype) if isinstance(\nX, pdDataFrame) else DataSourceType(X.datatype)\ndataset_attrs = self._init_datasource_attributes(\nsortbykey, entities, generate_cols, exclude_cols, dtypes)\nself._validate_datasource_attributes(X, dataset_attrs, _datatype, target)\n# If the training data is a pandas dataframe, we first need to create a data source and then the instance\nif isinstance(X, pdDataFrame):\nif X.empty:\nraise EmptyDataError(\"The DataFrame is empty\")\n_X = LocalDataSource(source=X, datatype=_datatype, client=self._client)\nelse:\nif datatype != _datatype:\nwarn(\"When the training data is a DataSource, the argument `datatype` is ignored.\",\nDataSourceTypeWarning)\n_X = X\nif _X.status != dsStatus.AVAILABLE:\nraise DataSourceNotAvailableError(\nf\"The datasource '{_X.uid}' is not available (status = {_X.status.value})\")\nif isinstance(dataset_attrs, dict):\ndataset_attrs = DataSourceAttrs(**dataset_attrs)\nself._fit_from_datasource(\nX=_X, dataset_attrs=dataset_attrs, target=target, name=name,\nanonymize=anonymize, 
privacy_level=privacy_level, condition_on=condition_on)\n
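Because this base class is abstract, a usage sketch with the concrete RegularSynthesizer illustrates the call; the CSV path and column name are placeholders, and RegularSynthesizer supplies the tabular datatype itself:

```python
import pandas as pd
from ydata.sdk.synthesizers import RegularSynthesizer

df = pd.read_csv("data/train.csv")            # placeholder training data

synth = RegularSynthesizer()
# exclude_cols takes precedence over generate_cols: a column listed in both
# will not be synthesized.
synth.fit(
    df,
    exclude_cols=["customer_id"],             # hypothetical identifier column
    name="quickstart-synth",
)
```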
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.get","title":"get(uid, client=None) staticmethod","text":"

Get a synthesizer instance by its uid.

Parameters:

Name Type Description Default uid str

synthesizer instance uid

required client Client

(optional) Client to connect to the backend

None

Returns:

Type Description BaseSynthesizer

Synthesizer instance

Source code in ydata/sdk/synthesizers/synthesizer.py
@staticmethod\n@init_client\ndef get(uid: str, client: Optional[Client] = None) -> \"BaseSynthesizer\":\n\"\"\"List the synthesizer instances.\n    Arguments:\n        uid (str): synthesizer instance uid\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        Synthesizer instance\n    \"\"\"\nresponse = client.get(f'/synthesizer/{uid}')\ndata: list = response.json()\nmodel, synth_cls = BaseSynthesizer._model_from_api(\ndata['dataSource']['dataType'], data)\nreturn ModelFactoryMixin._init_from_model_data(synth_cls, model)\n
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.list","title":"list(client=None) staticmethod","text":"

List the synthesizer instances.

Parameters:

Name Type Description Default client Client

(optional) Client to connect to the backend

None

Returns:

Type Description SynthesizersList

List of synthesizers

Source code in ydata/sdk/synthesizers/synthesizer.py
@staticmethod\n@init_client\ndef list(client: Optional[Client] = None) -> SynthesizersList:\n\"\"\"List the synthesizer instances.\n    Arguments:\n        client (Client): (optional) Client to connect to the backend\n    Returns:\n        List of synthesizers\n    \"\"\"\ndef __process_data(data: list) -> list:\nto_del = ['metadata', 'report', 'mode']\nfor e in data:\nfor k in to_del:\ne.pop(k, None)\nreturn data\nresponse = client.get('/synthesizer')\ndata: list = response.json()\ndata = __process_data(data)\nreturn SynthesizersList(data)\n
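A one-liner sketch listing the synthesizers registered for the authenticated client (the staticmethod is inherited by the concrete classes):

```python
from ydata.sdk.synthesizers import RegularSynthesizer

print(RegularSynthesizer.list())
```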
"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.synthesizer.BaseSynthesizer.sample","title":"sample() abstractmethod","text":"

Abstract method to sample from a synthesizer.

Source code in ydata/sdk/synthesizers/synthesizer.py
@abstractmethod\ndef sample(self) -> pdDataFrame:\n\"\"\"Abstract method to sample from a synthesizer.\"\"\"\n
"},{"location":"sdk/reference/api/synthesizers/base/#privacylevel","title":"PrivacyLevel","text":"

Bases: StringEnum

Privacy level exposed to the end-user.

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.PrivacyLevel.BALANCED_PRIVACY_FIDELITY","title":"BALANCED_PRIVACY_FIDELITY = 'BALANCED_PRIVACY_FIDELITY' class-attribute instance-attribute","text":"

Balanced privacy/fidelity

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_FIDELITY","title":"HIGH_FIDELITY = 'HIGH_FIDELITY' class-attribute instance-attribute","text":"

High fidelity

"},{"location":"sdk/reference/api/synthesizers/base/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_PRIVACY","title":"HIGH_PRIVACY = 'HIGH_PRIVACY' class-attribute instance-attribute","text":"

High privacy
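
A small sketch selecting a stricter privacy level at training time; the tiny DataFrame is illustrative only, real training data would be much larger:

```python
import pandas as pd
from ydata.sdk.synthesizers import PrivacyLevel, RegularSynthesizer

df = pd.DataFrame({"age": [23, 41, 35, 58], "income": [32000, 54000, 48000, 61000]})

synth = RegularSynthesizer()
synth.fit(df, privacy_level=PrivacyLevel.HIGH_PRIVACY)  # trade some fidelity for privacy
```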

"},{"location":"sdk/reference/api/synthesizers/regular/","title":"Regular","text":"

Bases: BaseSynthesizer

Source code in ydata/sdk/synthesizers/regular.py
class RegularSynthesizer(BaseSynthesizer):\ndef sample(self, n_samples: int = 1, condition_on: Optional[dict] = None) -> pdDataFrame:\n\"\"\"Sample from a [`RegularSynthesizer`][ydata.sdk.synthesizers.RegularSynthesizer]\n        instance.\n        Arguments:\n            n_samples (int): number of rows in the sample\n            condition_on: (Optional[dict]): (optional) conditional sampling parameters\n        Returns:\n            synthetic data\n        \"\"\"\nif n_samples < 1:\nraise InputError(\"Parameter 'n_samples' must be greater than 0\")\npayload = {\"numberOfRecords\": n_samples}\nif condition_on is not None:\npayload[\"extraData\"] = {\n\"condition_on\": condition_on\n}\nreturn self._sample(payload=payload)\ndef fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n        The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n        Arguments:\n            X (Union[DataSource, pandas.DataFrame]): Training dataset\n            privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n            entities (Union[str, List[str]]): (optional) columns representing entities ID\n            generate_cols (List[str]): (optional) columns that should be synthesized\n            exclude_cols (List[str]): (optional) columns that should not be synthesized\n            dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n            target (Optional[str]): (optional) Target column\n            name (Optional[str]): (optional) Synthesizer instance name\n            anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n            condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n        \"\"\"\nBaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TABULAR, entities=entities,\ngenerate_cols=generate_cols, exclude_cols=exclude_cols, dtypes=dtypes,\ntarget=target, name=name, anonymize=anonymize, privacy_level=privacy_level,\ncondition_on=condition_on)\ndef __repr__(self):\nif self._model is not None:\nreturn self._model.__repr__()\nelse:\nreturn \"RegularSynthesizer(Not Initialized)\"\n
"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.regular.RegularSynthesizer.fit","title":"fit(X, privacy_level=PrivacyLevel.HIGH_FIDELITY, entities=None, generate_cols=None, exclude_cols=None, dtypes=None, target=None, name=None, anonymize=None, condition_on=None)","text":"

Fit the synthesizer.

The synthesizer accepts as training dataset either a pandas DataFrame directly or a YData DataSource.

Parameters:

Name Type Description Default X Union[DataSource, DataFrame]

Training dataset

required privacy_level PrivacyLevel

Synthesizer privacy level (defaults to high fidelity)

HIGH_FIDELITY entities Union[str, List[str]]

(optional) columns representing entities ID

None generate_cols List[str]

(optional) columns that should be synthesized

None exclude_cols List[str]

(optional) columns that should not be synthesized

None dtypes Dict[str, Union[str, DataType]]

(optional) datatype mapping that will overwrite the datasource metadata column datatypes

None target Optional[str]

(optional) Target column

None name Optional[str]

(optional) Synthesizer instance name

None anonymize Optional[str]

(optional) fields to anonymize and the anonymization strategy

None condition_on Optional[List[str]]

(optional) list of features to condition upon

None Source code in ydata/sdk/synthesizers/regular.py
def fit(self, X: Union[DataSource, pdDataFrame],\nprivacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,\nentities: Optional[Union[str, List[str]]] = None,\ngenerate_cols: Optional[List[str]] = None,\nexclude_cols: Optional[List[str]] = None,\ndtypes: Optional[Dict[str, Union[str, DataType]]] = None,\ntarget: Optional[str] = None,\nname: Optional[str] = None,\nanonymize: Optional[dict] = None,\ncondition_on: Optional[List[str]] = None) -> None:\n\"\"\"Fit the synthesizer.\n    The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].\n    Arguments:\n        X (Union[DataSource, pandas.DataFrame]): Training dataset\n        privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)\n        entities (Union[str, List[str]]): (optional) columns representing entities ID\n        generate_cols (List[str]): (optional) columns that should be synthesized\n        exclude_cols (List[str]): (optional) columns that should not be synthesized\n        dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes\n        target (Optional[str]): (optional) Target column\n        name (Optional[str]): (optional) Synthesizer instance name\n        anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy\n        condition_on: (Optional[List[str]]): (optional) list of features to condition upon\n    \"\"\"\nBaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TABULAR, entities=entities,\ngenerate_cols=generate_cols, exclude_cols=exclude_cols, dtypes=dtypes,\ntarget=target, name=name, anonymize=anonymize, privacy_level=privacy_level,\ncondition_on=condition_on)\n
"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.regular.RegularSynthesizer.sample","title":"sample(n_samples=1, condition_on=None)","text":"

Sample from a RegularSynthesizer instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| n_samples | int | number of rows in the sample | 1 |
| condition_on | Optional[dict] | (optional) conditional sampling parameters | None |

Returns:

| Type | Description |
| --- | --- |
| DataFrame | synthetic data |

Source code in ydata/sdk/synthesizers/regular.py
def sample(self, n_samples: int = 1, condition_on: Optional[dict] = None) -> pdDataFrame:
    """Sample from a [`RegularSynthesizer`][ydata.sdk.synthesizers.RegularSynthesizer]
    instance.
    Arguments:
        n_samples (int): number of rows in the sample
        condition_on: (Optional[dict]): (optional) conditional sampling parameters
    Returns:
        synthetic data
    """
    if n_samples < 1:
        raise InputError("Parameter 'n_samples' must be greater than 0")
    payload = {"numberOfRecords": n_samples}
    if condition_on is not None:
        payload["extraData"] = {
            "condition_on": condition_on
        }
    return self._sample(payload=payload)
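For example, assuming `synth` is a fitted RegularSynthesizer from the call above:

```python
# Draw 5,000 synthetic rows; the result is a pandas DataFrame.
synthetic_df = synth.sample(n_samples=5_000)

# n_samples must be at least 1, otherwise an InputError is raised.
# If the synthesizer was fitted with condition_on columns, a dict of
# conditional sampling parameters can additionally be passed via the
# condition_on argument (its exact schema is not covered here).
```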
"},{"location":"sdk/reference/api/synthesizers/regular/#privacylevel","title":"PrivacyLevel","text":"

Bases: StringEnum

Privacy level exposed to the end-user.

"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.PrivacyLevel.BALANCED_PRIVACY_FIDELITY","title":"BALANCED_PRIVACY_FIDELITY = 'BALANCED_PRIVACY_FIDELITY' class-attribute instance-attribute","text":"

Balanced privacy/fidelity

"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_FIDELITY","title":"HIGH_FIDELITY = 'HIGH_FIDELITY' class-attribute instance-attribute","text":"

High fidelity

"},{"location":"sdk/reference/api/synthesizers/regular/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_PRIVACY","title":"HIGH_PRIVACY = 'HIGH_PRIVACY' class-attribute instance-attribute","text":"

High privacy
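These values map directly to the `privacy_level` argument of `fit`. A small sketch, reusing the hypothetical DataFrame `df` from the earlier examples:

```python
from ydata.sdk.synthesizers import PrivacyLevel, RegularSynthesizer

synth = RegularSynthesizer()
# Trade some fidelity for stronger privacy guarantees on sensitive data.
synth.fit(df, privacy_level=PrivacyLevel.HIGH_PRIVACY)
```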

"},{"location":"sdk/reference/api/synthesizers/timeseries/","title":"TimeSeries","text":"

Bases: BaseSynthesizer

Source code in ydata/sdk/synthesizers/timeseries.py
class TimeSeriesSynthesizer(BaseSynthesizer):
    def sample(self, n_entities: int, condition_on: Optional[dict] = None) -> pdDataFrame:
        """Sample from a [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] instance.
        If a training dataset was not using any `entity` column, the Synthesizer assumes a single entity.
        A [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] always sample the full trajectory of its entities.
        Arguments:
            n_entities (int): number of entities to sample
            condition_on: (Optional[dict]): (optional) conditional sampling parameters
        Returns:
            synthetic data
        """
        if n_entities is not None and n_entities < 1:
            raise InputError("Parameter 'n_entities' must be greater than 0")
        payload = {"numberOfRecords": n_entities}
        if condition_on is not None:
            payload["extraData"] = {
                "condition_on": condition_on
            }
        return self._sample(payload=payload)

    def fit(self, X: Union[DataSource, pdDataFrame],
            sortbykey: Optional[Union[str, List[str]]],
            privacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,
            entities: Optional[Union[str, List[str]]] = None,
            generate_cols: Optional[List[str]] = None,
            exclude_cols: Optional[List[str]] = None,
            dtypes: Optional[Dict[str, Union[str, DataType]]] = None,
            target: Optional[str] = None,
            name: Optional[str] = None,
            anonymize: Optional[dict] = None,
            condition_on: Optional[List[str]] = None) -> None:
        """Fit the synthesizer.
        The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].
        Arguments:
            X (Union[DataSource, pandas.DataFrame]): Training dataset
            sortbykey (Union[str, List[str]]): column(s) to use to sort timeseries datasets
            privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)
            entities (Union[str, List[str]]): (optional) columns representing entities ID
            generate_cols (List[str]): (optional) columns that should be synthesized
            exclude_cols (List[str]): (optional) columns that should not be synthesized
            dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes
            target (Optional[str]): (optional) Metadata associated to the datasource
            name (Optional[str]): (optional) Synthesizer instance name
            anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy
            condition_on: (Optional[List[str]]): (optional) list of features to condition upon
        """
        BaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TIMESERIES, sortbykey=sortbykey,
                            entities=entities, generate_cols=generate_cols, exclude_cols=exclude_cols,
                            dtypes=dtypes, target=target, name=name, anonymize=anonymize, privacy_level=privacy_level,
                            condition_on=condition_on)

    def __repr__(self):
        if self._model is not None:
            return self._model.__repr__()
        else:
            return "TimeSeriesSynthesizer(Not Initialized)"
"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.timeseries.TimeSeriesSynthesizer.fit","title":"fit(X, sortbykey, privacy_level=PrivacyLevel.HIGH_FIDELITY, entities=None, generate_cols=None, exclude_cols=None, dtypes=None, target=None, name=None, anonymize=None, condition_on=None)","text":"

Fit the synthesizer.

The synthesizer accepts as its training dataset either a pandas DataFrame directly or a YData DataSource.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | Union[DataSource, DataFrame] | Training dataset | required |
| sortbykey | Union[str, List[str]] | column(s) to use to sort timeseries datasets | required |
| privacy_level | PrivacyLevel | Synthesizer privacy level (defaults to high fidelity) | HIGH_FIDELITY |
| entities | Union[str, List[str]] | (optional) columns representing the entity ID | None |
| generate_cols | List[str] | (optional) columns that should be synthesized | None |
| exclude_cols | List[str] | (optional) columns that should not be synthesized | None |
| dtypes | Dict[str, Union[str, DataType]] | (optional) datatype mapping that overwrites the datasource metadata column datatypes | None |
| target | Optional[str] | (optional) metadata associated with the datasource | None |
| name | Optional[str] | (optional) synthesizer instance name | None |
| anonymize | Optional[dict] | (optional) fields to anonymize and the anonymization strategy | None |
| condition_on | Optional[List[str]] | (optional) list of features to condition upon | None |

Source code in ydata/sdk/synthesizers/timeseries.py
def fit(self, X: Union[DataSource, pdDataFrame],
        sortbykey: Optional[Union[str, List[str]]],
        privacy_level: PrivacyLevel = PrivacyLevel.HIGH_FIDELITY,
        entities: Optional[Union[str, List[str]]] = None,
        generate_cols: Optional[List[str]] = None,
        exclude_cols: Optional[List[str]] = None,
        dtypes: Optional[Dict[str, Union[str, DataType]]] = None,
        target: Optional[str] = None,
        name: Optional[str] = None,
        anonymize: Optional[dict] = None,
        condition_on: Optional[List[str]] = None) -> None:
    """Fit the synthesizer.
    The synthesizer accepts as training dataset either a pandas [`DataFrame`][pandas.DataFrame] directly or a YData [`DataSource`][ydata.sdk.datasources.DataSource].
    Arguments:
        X (Union[DataSource, pandas.DataFrame]): Training dataset
        sortbykey (Union[str, List[str]]): column(s) to use to sort timeseries datasets
        privacy_level (PrivacyLevel): Synthesizer privacy level (defaults to high fidelity)
        entities (Union[str, List[str]]): (optional) columns representing entities ID
        generate_cols (List[str]): (optional) columns that should be synthesized
        exclude_cols (List[str]): (optional) columns that should not be synthesized
        dtypes (Dict[str, Union[str, DataType]]): (optional) datatype mapping that will overwrite the datasource metadata column datatypes
        target (Optional[str]): (optional) Metadata associated to the datasource
        name (Optional[str]): (optional) Synthesizer instance name
        anonymize (Optional[str]): (optional) fields to anonymize and the anonymization strategy
        condition_on: (Optional[List[str]]): (optional) list of features to condition upon
    """
    BaseSynthesizer.fit(self, X=X, datatype=DataSourceType.TIMESERIES, sortbykey=sortbykey,
                        entities=entities, generate_cols=generate_cols, exclude_cols=exclude_cols,
                        dtypes=dtypes, target=target, name=name, anonymize=anonymize, privacy_level=privacy_level,
                        condition_on=condition_on)
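A minimal sketch of fitting a time-series synthesizer; the DataFrame `ts_df` and its `timestamp` and `site_id` columns are hypothetical, and the default constructor is assumed:

```python
from ydata.sdk.synthesizers import TimeSeriesSynthesizer

ts_synth = TimeSeriesSynthesizer()  # assumed default constructor
ts_synth.fit(
    ts_df,                  # pandas DataFrame or a Fabric DataSource
    sortbykey="timestamp",  # required: column(s) that order each series
    entities="site_id",     # optional: one entity (series) per site
    name="sensor-readings",
)
```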
"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.timeseries.TimeSeriesSynthesizer.sample","title":"sample(n_entities, condition_on=None)","text":"

Sample from a TimeSeriesSynthesizer instance.

If the training dataset did not use any entity column, the synthesizer assumes a single entity. A TimeSeriesSynthesizer always samples the full trajectory of its entities.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| n_entities | int | number of entities to sample | required |
| condition_on | Optional[dict] | (optional) conditional sampling parameters | None |

Returns:

| Type | Description |
| --- | --- |
| DataFrame | synthetic data |

Source code in ydata/sdk/synthesizers/timeseries.py
def sample(self, n_entities: int, condition_on: Optional[dict] = None) -> pdDataFrame:
    """Sample from a [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] instance.
    If a training dataset was not using any `entity` column, the Synthesizer assumes a single entity.
    A [`TimeSeriesSynthesizer`][ydata.sdk.synthesizers.TimeSeriesSynthesizer] always sample the full trajectory of its entities.
    Arguments:
        n_entities (int): number of entities to sample
        condition_on: (Optional[dict]): (optional) conditional sampling parameters
    Returns:
        synthetic data
    """
    if n_entities is not None and n_entities < 1:
        raise InputError("Parameter 'n_entities' must be greater than 0")
    payload = {"numberOfRecords": n_entities}
    if condition_on is not None:
        payload["extraData"] = {
            "condition_on": condition_on
        }
    return self._sample(payload=payload)
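Because entities are always generated as complete trajectories, sampling is expressed in entities rather than rows. A sketch, reusing `ts_synth` from the fit example above:

```python
# Generate the full trajectory for 10 synthetic entities.
synthetic_ts = ts_synth.sample(n_entities=10)

# n_entities must be at least 1. If the training data had no entity column,
# the synthesizer behaves as a single entity, so n_entities=1 returns one
# full synthetic trajectory.
```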
"},{"location":"sdk/reference/api/synthesizers/timeseries/#privacylevel","title":"PrivacyLevel","text":"

Bases: StringEnum

Privacy level exposed to the end-user.

"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.PrivacyLevel.BALANCED_PRIVACY_FIDELITY","title":"BALANCED_PRIVACY_FIDELITY = 'BALANCED_PRIVACY_FIDELITY' class-attribute instance-attribute","text":"

Balanced privacy/fidelity

"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_FIDELITY","title":"HIGH_FIDELITY = 'HIGH_FIDELITY' class-attribute instance-attribute","text":"

High fidelity

"},{"location":"sdk/reference/api/synthesizers/timeseries/#ydata.sdk.synthesizers.PrivacyLevel.HIGH_PRIVACY","title":"HIGH_PRIVACY = 'HIGH_PRIVACY' class-attribute instance-attribute","text":"

High privacy

"},{"location":"support/help-troubleshooting/","title":"Help & Troubleshooting","text":""}]} \ No newline at end of file diff --git a/0.6/sitemap.xml.gz b/0.6/sitemap.xml.gz index 5c190180..c2ee08d2 100644 Binary files a/0.6/sitemap.xml.gz and b/0.6/sitemap.xml.gz differ