---
license: cc-by-4.0
task_categories:
- graph-ml
pretty_name: PDEBench 2D Diffusion-Reaction
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_2/Zone/CellData/activator
    list: float32
  - name: Base_2_2/Zone/CellData/activator_ic
    list: float32
  - name: Base_2_2/Zone/CellData/inhibitor
    list: float32
  - name: Base_2_2/Zone/CellData/inhibitor_ic
    list: float32
  splits:
  - name: train
    num_bytes: 26476560000
    num_examples: 1000
  download_size: 26606982307
  dataset_size: 26476560000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
```yaml
legal:
  owner: Takamoto, M et al. (https://darus.uni-stuttgart.de/dataset.xhtml?persistentId=doi:10.18419/darus-2986)
  license: cc-by-4.0
data_production:
  physics: 2D Diffusion-Reaction
  type: simulation
  script: Converted to PLAID format for standardized usage; no changes to data content.
num_samples:
  train: 1000
storage_backend: hf_datasets
plaid:
  version: 0.1.12
```
This dataset was generated with [`plaid`](https://plaid-lib.readthedocs.io/); see that documentation for details on how to extract data from `plaid_sample` objects.

The simplest way to use this dataset is to first download it:
```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"
download_from_hub(repo_id, local_folder)
```
Then, to iterate over the dataset and instantiate samples:

```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"
datasetdict, converterdict = init_from_disk(local_folder)
dataset = datasetdict[split_name]
converter = converterdict[split_name]
for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```
It is also possible to stream the data directly, without downloading it first:

```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"
datasetdict, converterdict = init_streaming_from_hub(repo_id)
dataset = datasetdict[split_name]
converter = converterdict[split_name]
for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
Plaid samples' features can be retrieved as follows:

```python
from plaid.storage import load_problem_definitions_from_disk

local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)
# or
from plaid.storage import load_problem_definitions_from_hub

repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]
plaid_sample = ...  # use a method from above to instantiate a plaid sample
for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```
For those familiar with HF's `datasets` library, raw data can be retrieved without using the `plaid` library:

```python
from datasets import load_dataset

repo_id = "channel/dataset"
datasetdict = load_dataset(repo_id)
for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```
Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
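Since each raw feature is stored as a flat list of `float32` values, a common first step is to convert it into a NumPy array before further processing. A minimal sketch, using a hypothetical in-memory sample with illustrative values rather than the actual downloaded data:

```python
import numpy as np

# Hypothetical raw sample: the values below are illustrative only; real
# samples come from iterating over the loaded dataset as shown above.
raw_sample = {"Base_2_2/Zone/CellData/activator": [0.1, 0.2, 0.3, 0.4]}

# Each feature is a flat list of float32 values; convert it to a NumPy array.
activator = np.asarray(raw_sample["Base_2_2/Zone/CellData/activator"], dtype=np.float32)
```

Any spatial reshaping of such an array depends on the grid layout of the simulation, which is recovered by the `plaid` converters shown above.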
### Dataset Sources
- **Papers:**
  - [arxiv](https://arxiv.org/pdf/2210.07182)