---
title: Datasheet
has_children: false
nav_order: 2
---
# Data Sheet for DiffusionDB

Authors: Jay Wang, Evan Montoya, David Munechika, Alex Yang, Ben Hoover, Polo Chau

Organization: Georgia Institute of Technology

## Motivation

The questions in this section are primarily intended to encourage dataset creators to clearly articulate their reasons for creating the dataset and to promote transparency about funding interests.

  1. For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.

    The DiffusionDB project was motivated by pressing needs in research on diffusion models and prompt engineering. As large text-to-image models are relatively new, there is an urgent need to understand how these models work, how to write effective prompts, and how to design tools that help users generate images. To tackle these challenges, we present DiffusionDB, the first large-scale prompt dataset, with 14 million real prompt-image pairs.

  2. Who created this dataset (e.g. which team, research group) and on behalf of which entity (e.g. company, institution, organization)?

    The dataset was created by Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau at the Georgia Institute of Technology.

  3. What support was needed to make this dataset? (e.g. who funded the creation of the dataset? If there is an associated grant, provide the name of the grantor and the grant name and number, or if it was supported by a company or government agency, give those details.)

    The project was funded in part by a J.P. Morgan PhD Fellowship, NSF grant IIS-1563816, the DARPA GARD program, and gifts from Intel, Cisco, NVIDIA, Bosch, and Google.

  4. Any other comments?

    None.

## Composition

Dataset creators should read through the questions in this section prior to any data collection and then provide answers once collection is complete. Most of these questions are intended to provide dataset consumers with the information they need to make informed decisions about using the dataset for specific tasks. The answers to some of these questions reveal information about compliance with the EU’s General Data Protection Regulation (GDPR) or comparable regulations in other jurisdictions.

  1. What do the instances that comprise the dataset represent (e.g. documents, photos, people, countries)? Are there multiple types of instances (e.g. movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.

    Each instance consists of an image generated by the Stable Diffusion model, along with the prompt and the model parameters that were input into the model to generate it.

  2. How many instances are there in total (of each type, if appropriate)?

    There are 14 million instances in total in the dataset.

  3. Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g. geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g. to cover a more diverse range of instances, because instances were withheld or unavailable).

    The dataset is a sample of instances; the larger set is the full collection of images shared on the Stable Diffusion Discord server. No tests were run to determine representativeness.

  4. What data does each instance consist of? "Raw" data (e.g. unprocessed text or images) or features? In either case, please provide a description.

    Each instance consists of the raw image generated by the Stable Diffusion model (named with a unique id), along with the prompt and model parameters used to generate it, stored in a JSON file.
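    As an illustration, one might load an instance as follows, assuming a metadata file named metadata.json that maps each image's file name to its prompt and parameters; the file name and key names here are illustrative assumptions, not the dataset's guaranteed schema:

```python
import json

from PIL import Image

# Hypothetical metadata file mapping each image's unique file name to
# its prompt and model parameters; "metadata.json" and the "prompt"
# key are assumptions for illustration, not a documented schema.
with open("metadata.json", "r", encoding="utf-8") as f:
    metadata = json.load(f)

for image_name, entry in metadata.items():
    image = Image.open(image_name)  # the generated image, named by its unique id
    prompt = entry["prompt"]        # the text prompt fed to Stable Diffusion
    print(image.size, prompt[:60])
```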

  5. Is there a label or target associated with each instance? If so, please provide a description.

    The labels associated with each image are the prompt and other input parameters.

  6. Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g. because it was unavailable). This does not include intentionally removed information, but might include, e.g. redacted text.

    Everything is included. No data is missing.

  7. Are relationships between individual instances made explicit (e.g. users' movie ratings, social network links)? If so, please describe how these relationships are made explicit.

    Not applicable.

  8. Are there recommended data splits (e.g. training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.

    No. This dataset is not intended for ML model benchmarking; researchers can use any subset of it.

  9. Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.

    No. All images and prompts are extracted as-is from the Discord chat logs.

  10. Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g. websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g. licenses, fees) associated with any of the external resources that might apply to a future user? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.

    The dataset is entirely self-contained.

  11. Does the dataset contain data that might be considered confidential (e.g. data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)? If so, please provide a description.

    It is possible that some prompts contain sensitive information. However, this would be rare, as the Stable Diffusion Discord has rules against including personal information in prompts, and moderators remove messages that violate these rules.

  12. Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why.

    We collected images and their prompts from the Stable Diffusion Discord server. Even though the server has rules against sharing NSFW (not safe for work, e.g., sexual or violent content) or illegal images, it is possible that some users posted harmful images that the server moderators did not remove.

  13. Does the dataset relate to people? If not, you may skip the remaining questions in this section.

    Yes. The dataset is gathered from messages posted on the public Stable Diffusion Discord server.

  14. Does the dataset identify any subpopulations (e.g. by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.

    No.

  15. Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.

    No.

  16. Does the dataset contain data that might be considered sensitive in any way (e.g. data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description.

    No.

  17. Any other comments?

    None.

## Collection

As with the previous section, dataset creators should read through these questions prior to any data collection to flag potential issues and then provide answers once collection is complete. In addition to the goals of the prior section, the answers to questions here may provide information that allow others to reconstruct the dataset without access to it.

  1. How was the data associated with each instance acquired? Was the data directly observable (e.g. raw text, movie ratings), reported by subjects (e.g. survey responses), or indirectly inferred/derived from other data (e.g. part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.

    The data was directly observed on the Stable Diffusion Discord server. It was gathered from channels where users generate images by interacting with a bot; these channels consist of messages containing user-generated images and the prompts used to generate them.

  2. What mechanisms or procedures were used to collect the data (e.g. hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated?

    The data was gathered using DiscordChatExporter, which collected images and chat messages from each specified channel. We then extracted prompts and linked them to images using Beautiful Soup. To validate the prompt-image mapping, we randomly selected images and prompts and manually verified the pairs.
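    As an illustration, a parser over a DiscordChatExporter HTML export might look like the sketch below. The chatlog__* CSS class names are assumptions about the export format, and channel_export.html is a hypothetical file name:

```python
from bs4 import BeautifulSoup

# Parse one exported chat log and pair each message's prompt text with
# its attached image URL. The chatlog__* class names are assumed from
# DiscordChatExporter's HTML export and may differ across versions.
with open("channel_export.html", "r", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

pairs = []
for message in soup.select("div.chatlog__message"):
    content = message.select_one(".chatlog__content")            # prompt text
    attachment = message.select_one(".chatlog__attachment img")  # generated image
    if content is not None and attachment is not None:
        pairs.append((content.get_text(strip=True), attachment["src"]))

print(f"Linked {len(pairs)} prompt-image pairs")
```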

  3. If the dataset is a sample from a larger set, what was the sampling strategy (e.g. deterministic, probabilistic with specific sampling probabilities)?

    Some messages contain a collage of n images (e.g., n = 2, 4, 9) generated with identical prompts, consolidated into a single image. We split each collage into its individual images and randomly selected one of the n images, with each image having an equal probability (1/n) of being selected. This saved storage space and prioritized unique prompts.

  4. Who was involved in the data collection process (e.g. students, crowdworkers, contractors) and how were they compensated (e.g. how much were crowdworkers paid)?

    Students conducted the data collection process and were compensated with stipends or course credit.

  5. Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g. recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. Finally, list when the dataset was first published.

    All messages were generated in August 2022, and the messages were collected between October 18 and 24, 2022. The dataset was first published on October 25, 2022.

  6. Were any ethical review processes conducted (e.g. by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.

    There were no ethical review processes conducted.

  7. Does the dataset relate to people? If not, you may skip the remainder of the questions in this section.

    Yes.

  8. Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g. websites)?

    The data was directly obtained from individual messages in the Discord server.

  9. Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.

    Users of the channel were not notified about this specific gathering of data, but by using Stable Diffusion they agree to forfeit any intellectual property rights claims. In addition, users are told that the images are public domain and can be used by anyone for any purpose. The exact language is as follows:

> Note, that while users have forfeited copyright (and any/all intellectual property right claims) on these images, they are still public domain and can be used by anyone for any purpose, including by the user. Feel free to use images from DreamStudio Beta and the Stable Diffusion beta Discord service for anything, including commercial purposes.

  10. Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented.

    By using the server and its tools, users consented to the terms set by Stability AI, the company that both created Stable Diffusion and runs the Discord server; consent is thus implied by use of the tool. The exact wording is as follows:

> By your use of DreamStudio Beta and the Stable Diffusion, you hereby agree to forfeit all intellectual property rights claims, worldwide, and regardless of legal jurisdiction or intellectual property law applicable therein, including forfeiture of any/all copyright claim(s), to the Content you provide or receive through your use of DreamStudio Beta and the Stable Diffusion beta Discord service.

    This message is contained in the rules and terms of service section of the Stable Diffusion Discord. In conjunction with the previous statement that the images are public domain (CC0 1.0 license), it establishes that images made with Stable Diffusion can be used for other purposes.

  11. If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).

    Users can report harmful content or withdraw images they created through a Google Form listed on the DiffusionDB website: https://github.com/poloclub/diffusiondb.

  12. Has an analysis of the potential impact of the dataset and its use on data subjects (e.g. a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.

    No analysis has been conducted.

  13. Any other comments?

    None.

## Preprocessing / Cleaning / Labeling

Dataset creators should read through these questions prior to any pre-processing, cleaning, or labeling and then provide answers once these tasks are complete. The questions in this section are intended to provide dataset consumers with the information they need to determine whether the “raw” data has been processed in ways that are compatible with their chosen tasks. For example, text that has been converted into a “bag-of-words” is not suitable for tasks involving word order.

  1. Was any preprocessing/cleaning/labeling of the data done (e.g. discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section.

    The Discord chat logs include collage images, where each collage contains a grid of images that share the same prompt but have different seeds. We use Pillow to split each collage into individual images; then, among these images sharing the same prompt, we randomly select one to include in DiffusionDB. We sample images to reduce the dataset's storage size.
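    Below is a minimal sketch of this step, assuming a grid of same-size tiles; in practice the grid dimensions would be inferred from the collage message:

```python
import random

from PIL import Image

def sample_from_collage(path: str, rows: int, cols: int) -> Image.Image:
    """Split a rows x cols collage of same-size tiles into individual
    images and return one tile chosen uniformly at random, mirroring
    the equal-probability sampling described above."""
    collage = Image.open(path)
    tile_w = collage.width // cols
    tile_h = collage.height // rows
    tiles = [
        collage.crop((c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h))
        for r in range(rows)
        for c in range(cols)
    ]
    return random.choice(tiles)

# Example: keep one image from a hypothetical 2x2 collage (n = 4).
# chosen = sample_from_collage("collage.png", rows=2, cols=2)
```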

  2. Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g. to support unanticipated future uses)? If so, please provide a link or other access point to the "raw" data.

    Raw data was not saved due to high storage requirements.

  3. Is the software used to preprocess/clean/label the instances available? If so, please provide a link or other access point.

    All our data collection and preprocessing code is available at: https://github.com/poloclub/diffusiondb.

  4. Any other comments?

    None.

## Uses

These questions are intended to encourage dataset creators to reflect on the tasks for which the dataset should and should not be used. By explicitly highlighting these tasks, dataset creators can help dataset consumers to make informed decisions, thereby avoiding potential risks or harms.

  1. Has the dataset been used for any tasks already? If so, please provide a description.

    No.

  2. Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.

    No.

  3. What (other) tasks could the dataset be used for?

    This dataset can be used for (1) prompt autocompletion, (2) generating images through search, (3) detecting deepfakes, (4) debugging image generation, (5) explaining image generation, and more.

  4. Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g. stereotyping, quality of service issues) or other undesirable harms (e.g. financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms?

    There is minimal risk of harm: the data were already public. Personally identifiable information (e.g., Discord usernames) was removed during the collection and preprocessing phases.

  5. Are there tasks for which the dataset should not be used? If so, please provide a description.

    All tasks that use this dataset should follow the licensing policies and the regulations set by Stability AI, the company that both created Stable Diffusion and runs the official Discord server.

  6. Any other comments?

    None.

## Distribution

Dataset creators should provide answers to these questions prior to distributing the dataset either internally within the entity on behalf of which the dataset was created or externally to third parties.

  1. Will the dataset be distributed to third parties outside of the entity (e.g. company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.

    Yes, the dataset is publicly available on the internet.

  2. How will the dataset be distributed (e.g. tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?

    The dataset is distributed on the project website: https://poloclub.github.io/diffusiondb.

  3. When will the dataset be distributed?

    The dataset was released on October 25, 2022.

  4. Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.

    All images generated through the Stable Diffusion Discord service are under the CC0 1.0 license, and therefore so are the images in this dataset. In addition, the dataset is distributed under the Terms of Use set by Stability AI, the company that both created Stable Diffusion and runs the official Discord server.

  5. Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.

    All images in this dataset are under the CC0 1.0 license, and their use must follow Stability AI's Terms of Use.

  6. Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.

    No.

  7. Any other comments?

    None.

## Maintenance

As with the previous section, dataset creators should provide answers to these questions prior to distributing the dataset. These questions are intended to encourage dataset creators to plan for dataset maintenance and communicate this plan with dataset consumers.

  1. Who is supporting/hosting/maintaining the dataset?

    The authors of this paper will be supporting and maintaining the dataset.

  2. How can the owner/curator/manager of the dataset be contacted (e.g. email address)?

    The contact information of the curators of the dataset is listed on the project website: https://poloclub.github.io/diffusiondb.

  3. Is there an erratum? If so, please provide a link or other access point.

    There is no erratum for our initial release. Errata will be documented in future releases on the dataset website.

  4. Will the dataset be updated (e.g. to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g. mailing list, GitHub)?

    Yes. We will monitor the Google Form where users can report harmful images and creators can remove their images. We will update the dataset bimonthly. Updates will be posted on the project website: https://poloclub.github.io/diffusiondb.

  5. If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g. were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.

    People can use a Google Form linked on the project website to remove specific instances from DiffusionDB.

  6. Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users.

    We will continue to support older versions of the dataset.

  7. If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description.

    Anyone can extend/augment/build on/contribute to DiffusionDB. Potential collaborators can contact the dataset authors.

  8. Any other comments?

    None.