---
configs:
  - config_name: default
    data_files:
      - split: train
        path: Prepared Dataset for ML/train-*
dataset_info:
  features:
    - name: >-
        Catalyst_CC(C)c1cc(C(C)C)c(-c2ccccc2[PH-](C(C)(C)C)C(C)(C)C)c(C(C)C)c1.O=S(=O)([O-])C(F)(F)F.[NH-]c1ccccc1-c1ccccc1[Pd+3]
      dtype: bool
    - name: >-
        Catalyst_CC(C)c1cc(C(C)C)c(-c2ccccc2[PH-](C2CCCCC2)C2CCCCC2)c(C(C)C)c1.O=S(=O)([O-])C(F)(F)F.[NH-]c1ccccc1-c1ccccc1[Pd+3]
      dtype: bool
    - name: >-
        Catalyst_COc1ccc(OC)c([PH-](C(C)(C)C)C(C)(C)C)c1-c1c(C(C)C)cc(C(C)C)cc1C(C)C.O=S(=O)([O-])C(F)(F)F.[NH-]c1ccccc1-c1ccccc1[Pd+3]
      dtype: bool
    - name: >-
        Catalyst_COc1ccc(OC)c([PH-](C23CC4CC(CC(C4)C2)C3)C23CC4CC(CC(C4)C2)C3)c1-c1c(C(C)C)cc(C(C)C)cc1C(C)C.O=S(=O)([O-])C(F)(F)F.[NH-]c1ccccc1-c1ccccc1[Pd+3]
      dtype: bool
    - name: Aryl Halide_Brc1ccccn1
      dtype: bool
    - name: Aryl Halide_Brc1cccnc1
      dtype: bool
    - name: Aryl Halide_CCc1ccc(Br)cc1
      dtype: bool
    - name: Aryl Halide_CCc1ccc(Cl)cc1
      dtype: bool
    - name: Aryl Halide_CCc1ccc(I)cc1
      dtype: bool
    - name: Aryl Halide_COc1ccc(Br)cc1
      dtype: bool
    - name: Aryl Halide_COc1ccc(Cl)cc1
      dtype: bool
    - name: Aryl Halide_COc1ccc(I)cc1
      dtype: bool
    - name: Aryl Halide_Clc1ccccn1
      dtype: bool
    - name: Aryl Halide_Clc1cccnc1
      dtype: bool
    - name: Aryl Halide_FC(F)(F)c1ccc(Br)cc1
      dtype: bool
    - name: Aryl Halide_FC(F)(F)c1ccc(Cl)cc1
      dtype: bool
    - name: Aryl Halide_FC(F)(F)c1ccc(I)cc1
      dtype: bool
    - name: Aryl Halide_Ic1ccccn1
      dtype: bool
    - name: Aryl Halide_Ic1cccnc1
      dtype: bool
    - name: Base_CCN=P(N=P(N(C)C)(N(C)C)N(C)C)(N(C)C)N(C)C
      dtype: bool
    - name: Base_CN(C)C(=NC(C)(C)C)N(C)C
      dtype: bool
    - name: Base_CN1CCCN2CCCN=C12
      dtype: bool
    - name: Additives_CCOC(=O)c1cc(C)no1
      dtype: bool
    - name: Additives_CCOC(=O)c1cc(C)on1
      dtype: bool
    - name: Additives_CCOC(=O)c1cc(OC)no1
      dtype: bool
    - name: Additives_CCOC(=O)c1ccon1
      dtype: bool
    - name: Additives_CCOC(=O)c1cnoc1
      dtype: bool
    - name: Additives_CCOC(=O)c1cnoc1C
      dtype: bool
    - name: Additives_COC(=O)c1cc(-c2ccco2)on1
      dtype: bool
    - name: Additives_COC(=O)c1cc(-c2cccs2)on1
      dtype: bool
    - name: Additives_COC(=O)c1ccno1
      dtype: bool
    - name: Additives_Cc1cc(-c2ccccc2)on1
      dtype: bool
    - name: Additives_Cc1cc(-n2cccc2)no1
      dtype: bool
    - name: Additives_Cc1cc(C)on1
      dtype: bool
    - name: Additives_Cc1ccno1
      dtype: bool
    - name: Additives_Cc1ccon1
      dtype: bool
    - name: Additives_Fc1cccc(F)c1-c1ccno1
      dtype: bool
    - name: Additives_c1ccc(-c2ccno2)cc1
      dtype: bool
    - name: Additives_c1ccc(-c2ccon2)cc1
      dtype: bool
    - name: Additives_c1ccc(-c2cnoc2)cc1
      dtype: bool
    - name: Additives_c1ccc(-c2ncno2)cc1
      dtype: bool
    - name: Additives_c1ccc(CN(Cc2ccccc2)c2ccno2)cc1
      dtype: bool
    - name: Additives_c1ccc(CN(Cc2ccccc2)c2ccon2)cc1
      dtype: bool
    - name: Additives_c1ccc2nocc2c1
      dtype: bool
    - name: Additives_c1ccc2oncc2c1
      dtype: bool
    - name: yield
      dtype: float64
  splits:
    - name: train
      num_bytes: 58751
      num_examples: 4312
  download_size: 83432
  dataset_size: 58751
---

BIOINF595 W2025 Bioactivity Project Dataset

Author: Carl Mauro

The reaction data used in this project are from the following publication, accessed through the Open Reaction Database (https://open-reaction-database.org/). The original data are used under an MIT license and are copyrighted by the original authors (see the LICENSE.txt file for details).

Ahneman, D. T.; Estrada, J. G.; Lin, S.; Dreher, S. D.; Doyle, A. G. 
Predicting Reaction Performance in C–N Cross-Coupling Using Machine Learning. 
Science 2018, 360 (6385), 186–190. https://doi.org/10.1126/science.aar5169.

This repository includes the Python scripts used to download the dataset, sanitize the molecular SMILES strings, and train an H2O AutoML model on the data to predict reaction yields.

The original, unchanged dataset downloaded directly from the ORD data repository is stored under the "Original Dataset" directory. The processed data with sanitized SMILES strings are stored under the "Sanitized Dataset" directory. The dataset prepared using one-hot encoding (to enable training of the H2O AutoML model) is stored under the "Prepared Dataset for ML" directory.
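
For illustration, the one-hot feature names in the prepared dataset can be decomposed back into a reaction-component category and a SMILES string. The `parse_feature` helper below is hypothetical (not part of the repository's scripts) and assumes the `Category_SMILES` naming pattern shown in the metadata above:

```python
# Sketch: recover (component category, SMILES) pairs from the one-hot
# column names in the prepared dataset. Names follow "Category_SMILES";
# SMILES strings never contain "_", so splitting on the first
# underscore is safe. The column list below is a small sample.
columns = [
    "Aryl Halide_Brc1ccccn1",
    "Base_CN1CCCN2CCCN=C12",
    "Additives_Cc1cc(C)on1",
    "yield",
]

def parse_feature(name):
    """Split a one-hot column name into (category, SMILES); the
    target column 'yield' has no underscore and maps to (name, None)."""
    if "_" not in name:
        return (name, None)
    category, smiles = name.split("_", 1)
    return (category, smiles)

parsed = [parse_feature(c) for c in columns]
# parsed[0] == ("Aryl Halide", "Brc1ccccn1")
```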

Scripts are stored in the src/ directory and should be run in numerical order by name. The purpose of each script is described below:

01.install_packages.py      --> This script lists all Python packages used across the scripts. 
                                The user should check which packages are not yet installed and install any missing ones.
                                It is recommended that all packages be installed into a dedicated Conda environment set up for this dataset and the associated ML model.
                           
02.download_dataset.py      --> This script downloads the dataset directly from the ORD data repository on GitHub. 
                                Further details can be found at https://github.com/open-reaction-database 
    
03.sanitize_data.py         --> This script uses the MolVS package to convert the molecular SMILES strings in the original dataset into canonical SMILES strings 
                                (i.e., to perform 'sanitization'). The user should input the original dataset saved as a .csv file (here, "Ahneman_ORD_Data.csv"). 
                                The script outputs a new .csv file ("Sanitized_Ahneman_ORD_Data.csv") that is identical in structure to the original 
                                but contains the sanitized SMILES strings. 
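
The CSV-rewriting shape of this step can be sketched as below. `sanitize_csv` is a hypothetical helper, and `str.strip` stands in for the actual MolVS canonicalization (e.g. `molvs.standardize_smiles`) so the sketch runs without RDKit installed:

```python
import csv
import io

# Sketch: apply a per-cell transform to the SMILES columns of a CSV and
# write out a structurally identical copy. The real script would pass a
# MolVS canonicalization function as `transform` instead of str.strip.
def sanitize_csv(in_text, transform, smiles_cols):
    reader = csv.DictReader(io.StringIO(in_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for col in smiles_cols:
            row[col] = transform(row[col])  # canonicalize SMILES in place
        writer.writerow(row)
    return out.getvalue()

raw = "Aryl Halide,yield\n Brc1ccccn1 ,0.5\n"
cleaned = sanitize_csv(raw, str.strip, ["Aryl Halide"])
```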

04.prepare_data_for_ML.py   --> This script takes the sanitized dataset as input and performs one-hot encoding to prepare the data for use in the 
                                H2O AutoML model. A new .csv file ("Prepared_Data.csv") is created to save the one-hot encoded dataset. 
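
One-hot encoding of this kind can be done with pandas' `get_dummies`, which produces boolean indicator columns named `<column>_<value>`, matching the feature names in the metadata above. The two-row frame below is illustrative, not taken from the actual data:

```python
import pandas as pd

# Sketch: one-hot encode categorical reaction-component columns while
# leaving the numeric target ("yield") untouched. Each distinct SMILES
# value becomes its own boolean indicator column.
df = pd.DataFrame(
    {
        "Aryl Halide": ["Brc1ccccn1", "Clc1cccnc1"],
        "Base": ["CN1CCCN2CCCN=C12", "CN1CCCN2CCCN=C12"],
        "yield": [0.53, 0.21],
    }
)
encoded = pd.get_dummies(df, columns=["Aryl Halide", "Base"])
# encoded has columns like "Aryl Halide_Brc1ccccn1" plus "yield"
```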
                        
05.run_autoML_updated.py    --> This script takes the one-hot encoded reaction data and splits it into training and test sets (70%/30%). 
                                The data are used to train an H2O AutoML model (maximum 8 models, omitting stacked ensemble models). After training,
                                the best-performing model suggested by AutoML is selected and analyzed with SHAP. A loss curve is also generated
                                for the model, along with a plot comparing the predicted reaction yields on the test set to the actual yields
                                from the original dataset.
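
The 70%/30% split can be sketched on row indices with the standard library. This is only illustrative: the actual script hands the data to H2O, which has its own frame-splitting utilities, and the seed here is arbitrary:

```python
import random

# Sketch: a reproducible 70%/30% train/test split over row indices,
# one index per reaction in the dataset (4312 examples per the
# metadata above).
rows = list(range(4312))
rng = random.Random(42)   # fixed seed so the split is reproducible
rng.shuffle(rows)
cut = int(len(rows) * 0.7)
train_idx, test_idx = rows[:cut], rows[cut:]
# len(train_idx) == 3018, len(test_idx) == 1294
```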
                                
06.upload_to_huggingface.py --> This script was used to upload the datasets used and generated in this project to this Hugging Face repository. 
                                The datasets package must be installed to run this script.