  - name: test
    num_bytes: 578736
    num_examples: 2494
---
# Aqueous Solubility Database (AqSolDB)

AqSolDB, created by the Autonomous Energy Materials Discovery (AMD) research group, consists of aqueous solubility values of 9,982 unique compounds curated from 9 different publicly available aqueous solubility datasets. This openly accessible dataset, the largest of its kind, will serve not only as a useful reference source of measured solubility data, but also as a much-improved and generalizable training data source for building data-driven models.

## Quickstart Usage

### Load a dataset in python

Each subset can be loaded into python using the Huggingface [datasets](https://huggingface.co/docs/datasets/index) library.
First, from the command line, install the `datasets` library

```shell
$ pip install datasets
```

then, from within python, load the `datasets` library

```python
>>> import datasets
```

and load one of the `B3DB` datasets, e.g.,

```python
>>> B3DB_classification = datasets.load_dataset("maomlab/B3DB", name = "B3DB_classification")
Downloading readme: 100%|████████████████████████| 4.40k/4.40k [00:00<00:00, 1.35MB/s]
Downloading data: 100%|██████████████████████████| 680k/680k [00:00<00:00, 946kB/s]
Downloading data: 100%|██████████████████████████| 2.11M/2.11M [00:01<00:00, 1.28MB/s]
Generating test split: 100%|█████████████████████| 1951/1951 [00:00<00:00, 20854.95 examples/s]
Generating train split: 100%|████████████████████| 5856/5856 [00:00<00:00, 144260.80 examples/s]
```

and inspect the loaded dataset

```python
>>> B3DB_classification
DatasetDict({
    test: Dataset({
        features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'],
        num_rows: 1951
    })
    train: Dataset({
        features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'],
        num_rows: 5856
    })
})
```

### Use a dataset to train a model

One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia.
First, from the command line, install the `MolFlux` library with `catboost` and `rdkit` support

```shell
pip install 'molflux[catboost,rdkit]'
```

then load, featurize, fit, and evaluate the catboost model

```python
from datasets import load_dataset
from molflux.datasets import featurise_dataset
from molflux.features import load_from_dicts as load_representations_from_dicts
from molflux.modelzoo import load_from_dict as load_model_from_dict
from molflux.metrics import load_suite

split_dataset = load_dataset('maomlab/B3DB', name = 'B3DB_classification')

split_featurised_dataset = featurise_dataset(
    split_dataset,
    column = "SMILES",
    representations = load_representations_from_dicts(
        [{"name": "morgan"}, {"name": "maccs_rdkit"}]))

model = load_model_from_dict({
    "name": "cat_boost_classifier",
    "config": {
        "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'],
        "y_features": ['BBB+/BBB-']}})

model.train(split_featurised_dataset["train"])
preds = model.predict(split_featurised_dataset["test"])

classification_suite = load_suite("classification")

scores = classification_suite.compute(
    references = split_featurised_dataset["test"]['BBB+/BBB-'],
    predictions = preds["cat_boost_classifier::BBB+/BBB-"])
```

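The classification suite returns a dictionary of metric values computed from the reference and predicted labels. As a minimal sketch of what such a metric computation does (this is not MolFlux's implementation, and the example labels below are made up for illustration), accuracy over `BBB+`/`BBB-` labels can be computed by hand:

```python
# Sketch of an accuracy metric over BBB+/BBB- labels.
# Illustrative only: not MolFlux's implementation, and the
# example labels below are fabricated for demonstration.

def accuracy(references, predictions):
    """Fraction of predictions that match the reference labels."""
    if len(references) != len(predictions):
        raise ValueError("references and predictions must be the same length")
    matches = sum(r == p for r, p in zip(references, predictions))
    return matches / len(references)

references = ["BBB+", "BBB+", "BBB-", "BBB-"]
predictions = ["BBB+", "BBB-", "BBB-", "BBB-"]
print(accuracy(references, predictions))  # 3 of 4 labels match -> 0.75
```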
## About B3DB

### Features of *B3DB*

1. The largest dataset with numerical and categorical values for Blood-Brain Barrier small molecules
   (to the best of our knowledge, as of February 25, 2021).

2. Inclusion of stereochemistry information: isomeric SMILES with chiral specifications are used when
   available; otherwise, canonical SMILES are used.

3. Characterization of the uncertainty of experimental measurements by grouping the collected molecular
   data records.

4. Extended datasets for numerical and categorical data with precomputed physicochemical properties
   using [mordred](https://github.com/mordred-descriptor/mordred).

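Regarding point 2: isomeric SMILES carry stereochemistry in dedicated symbols, `@`/`@@` for tetrahedral chirality and `/`/`\` for double-bond geometry. A crude stdlib-only check for such markers might look like the sketch below (illustrative only; a real pipeline should parse the string with RDKit rather than scan characters):

```python
# Crude check for stereochemistry markers in a SMILES string.
# Illustrative only: real code should parse with RDKit, since a
# character scan cannot validate context or detect implicit stereo.

STEREO_MARKERS = ("@", "/", "\\")

def has_stereo_markers(smiles: str) -> bool:
    """Return True if the SMILES carries chirality or cis/trans markers."""
    return any(marker in smiles for marker in STEREO_MARKERS)

print(has_stereo_markers("N[C@@H](C)C(=O)O"))  # L-alanine, chiral -> True
print(has_stereo_markers("CC(N)C(=O)O"))       # no stereo markers -> False
```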
### Data splits

The original B3DB dataset does not define train/test splits, so here we have used the `Realistic Split` method described in [(Martin et al., 2018)](https://doi.org/10.1021/acs.jcim.7b00166).

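The key idea behind a realistic, cluster-aware split is that whole clusters of similar molecules are assigned to either train or test, so near-duplicate structures never straddle the boundary. The sketch below illustrates that idea in plain Python; it is not the Martin et al. procedure itself, and the `ClusterNo` field merely mirrors the B3DB column of the same name:

```python
import random

def cluster_split(records, cluster_key="ClusterNo", test_frac=0.25, seed=0):
    """Assign whole clusters to train or test so similar molecules stay together.

    A sketch of the cluster-aware idea only, not the Realistic Split
    procedure of Martin et al. (2018).
    """
    clusters = sorted({r[cluster_key] for r in records})
    random.Random(seed).shuffle(clusters)
    n_test = max(1, int(len(clusters) * test_frac))
    test_clusters = set(clusters[:n_test])
    train = [r for r in records if r[cluster_key] not in test_clusters]
    test = [r for r in records if r[cluster_key] in test_clusters]
    return train, test

# Toy records standing in for B3DB rows (SMILES values are placeholders).
records = [{"SMILES": f"mol{i}", "ClusterNo": i % 4} for i in range(12)]
train, test = cluster_split(records)
# No cluster appears in both splits:
assert not ({r["ClusterNo"] for r in train} & {r["ClusterNo"] for r in test})
```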
### Citation

```
```