---
language:
- en
license: mit
tags:
- chemistry
- SMILES
- yield
datasets:
- ORD
metrics:
- r_squared
---

# Model Card for ReactionT5v2-yield

This is a ReactionT5 model pre-trained to predict the yields of reactions. You can use the demo [here](https://huggingface.co/spaces/sagawa/ReactionT5_task_yield).

### Model Sources

- **Repository:** https://github.com/sagawatatsuya/ReactionT5v2
- **Paper:** https://arxiv.org/abs/2311.06708
- **Demo:** https://huggingface.co/spaces/sagawa/ReactionT5_task_yield

## Uses

The model takes a reaction encoded as SMILES strings of its reactants, reagents, and products and predicts the reaction yield as a percentage.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5ForConditionalGeneration, AutoConfig, PreTrainedModel


class ReactionT5Yield(PreTrainedModel):
    config_class = AutoConfig

    def __init__(self, config):
        super().__init__(config)
        self.config = config
        # T5 backbone; the regression head below pools its encoder and decoder states.
        self.model = T5ForConditionalGeneration.from_pretrained(self.config._name_or_path)
        self.model.resize_token_embeddings(self.config.vocab_size)
        self.fc1 = nn.Linear(self.config.hidden_size, self.config.hidden_size // 2)
        self.fc2 = nn.Linear(self.config.hidden_size, self.config.hidden_size // 2)
        self.fc3 = nn.Linear(self.config.hidden_size // 2 * 2, self.config.hidden_size)
        self.fc4 = nn.Linear(self.config.hidden_size, self.config.hidden_size)
        self.fc5 = nn.Linear(self.config.hidden_size, 1)

        self._init_weights(self.fc1)
        self._init_weights(self.fc2)
        self._init_weights(self.fc3)
        self._init_weights(self.fc4)
        self._init_weights(self.fc5)

    def _init_weights(self, module):
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=0.01)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=0.01)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)

    def forward(self, inputs):
        encoder_outputs = self.model.encoder(**inputs)
        encoder_hidden_states = encoder_outputs[0]
        # Run the decoder for a single step from the decoder start token.
        outputs = self.model.decoder(
            input_ids=torch.full(
                (inputs['input_ids'].size(0), 1),
                self.config.decoder_start_token_id,
                dtype=torch.long,
                device=inputs['input_ids'].device,  # keep the dummy decoder input on the model's device
            ),
            encoder_hidden_states=encoder_hidden_states,
        )
        last_hidden_states = outputs[0]
        # Combine the decoder state with the first encoder token's state,
        # then regress the yield and scale it to a percentage.
        output1 = self.fc1(last_hidden_states.view(-1, self.config.hidden_size))
        output2 = self.fc2(encoder_hidden_states[:, 0, :].view(-1, self.config.hidden_size))
        output = self.fc3(torch.hstack((output1, output2)))
        output = self.fc4(output)
        output = self.fc5(output)
        return output * 100


model = ReactionT5Yield.from_pretrained('sagawa/ReactionT5v2-yield')
tokenizer = AutoTokenizer.from_pretrained('sagawa/ReactionT5v2-yield')
inp = tokenizer(['REACTANT:CC(C)n1ncnc1-c1cn2c(n1)-c1cnc(O)cc1OCC2.CCN(C(C)C)C(C)C.Cl.NC(=O)[C@@H]1C[C@H](F)CN1REAGENT: PRODUCT:O=C(NNC(=O)C(F)(F)F)C(F)(F)F'], return_tensors='pt')
print(model(inp))  # tensor([[19.1666]], grad_fn=<MulBackward0>)
```
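
The model expects each reaction as a single string that concatenates the reactant, reagent, and product SMILES under the `REACTANT:`, `REAGENT:`, and `PRODUCT:` headers, with multiple species in one field joined by `.` (the example above has an empty `REAGENT:` field). The helper below is a minimal sketch for building such strings; `to_input_string` is a hypothetical convenience whose delimiter conventions are inferred from the single example above, so check them against the repository before relying on it.

```python
# Hypothetical helper (inferred from the example above, not part of the
# released code): build the input string the tokenizer expects.
def to_input_string(reactants, reagents, products):
    return (
        'REACTANT:' + '.'.join(reactants)
        + 'REAGENT:' + '.'.join(reagents)
        + ' PRODUCT:' + '.'.join(products)
    )

# Reproduces the example input above (note the empty reagent field).
inputs = tokenizer([to_input_string(
    reactants=['CC(C)n1ncnc1-c1cn2c(n1)-c1cnc(O)cc1OCC2',
               'CCN(C(C)C)C(C)C', 'Cl', 'NC(=O)[C@@H]1C[C@H](F)CN1'],
    reagents=[],
    products=['O=C(NNC(=O)C(F)(F)F)C(F)(F)F'],
)], return_tensors='pt', padding=True)
print(model(inputs))
```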

## Training Details

### Training Procedure

We used the Open Reaction Database (ORD) dataset for model training. The command used for training is shown below; for more information, please refer to the paper and the GitHub repository.

```bash
python train.py \
    --train_data_path='/home/acf15718oa/ReactionT5_neword/data/all_ord_reaction_uniq_with_attr20240506_v3_train.csv' \
    --valid_data_path='/home/acf15718oa/ReactionT5_neword/data/all_ord_reaction_uniq_with_attr20240506_v3_valid.csv' \
    --test_data_path='/home/acf15718oa/ReactionT5_neword/data/all_ord_reaction_uniq_with_attr20240506_v3_test.csv' \
    --CN_test_data_path='/home/acf15718oa/ReactionT5_neword/data/C_N_yield/MFF_Test1/test.csv' \
    --epochs=100 \
    --batch_size=32 \
    --output_dir='./'
```

### Results

| **R^2**        | **DFT** | **MFF**       | **Yield-BERT** | **T5Chem**    | **CompoundT5** | **ReactionT5** (without finetuning) | **ReactionT5** |
| -------------- | ------- | ------------- | -------------- | ------------- | -------------- | ----------------------------------- | -------------- |
| Random 70/30   | 0.92    | 0.927 ± 0.007 | 0.951 ± 0.005  | 0.970 ± 0.003 | 0.971 ± 0.002  | 0.831 ± 0.012                       | 0.947 ± 0.003  |
| Test 1         | 0.80    | 0.851         | 0.838          | 0.811         | 0.855          | 0.846                               | 0.872          |
| Test 2         | 0.77    | 0.713         | 0.836          | 0.907         | 0.852          | 0.869                               | 0.917          |
| Test 3         | 0.64    | 0.635         | 0.738          | 0.789         | 0.712          | 0.779                               | 0.811          |
| Test 4         | 0.54    | 0.184         | 0.538          | 0.627         | 0.547          | 0.843                               | 0.830          |
| Avg. Tests 1–4 | 0.69 ± 0.104 | 0.596 ± 0.251 | 0.738 ± 0.122 | 0.785 ± 0.094 | 0.741 ± 0.126 | 0.834 ± 0.034                 | 0.857 ± 0.041  |
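
The values above are coefficients of determination (R²) between predicted and experimentally measured yields. If you evaluate the model on your own data, predictions can be scored the same way; a minimal sketch, assuming scikit-learn is available and using hypothetical `labels`/`preds` arrays of measured and predicted yields in percent:

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical placeholder values; substitute measured and predicted yields (%).
labels = np.array([19.2, 45.0, 78.5])
preds = np.array([17.8, 49.3, 75.1])
print(f'R^2 = {r2_score(labels, preds):.3f}')
```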

## Citation

arXiv link: https://arxiv.org/abs/2311.06708

```bibtex
@misc{sagawa2023reactiont5,
      title={ReactionT5: a large-scale pre-trained model towards application of limited reaction data},
      author={Tatsuya Sagawa and Ryosuke Kojima},
      year={2023},
      eprint={2311.06708},
      archivePrefix={arXiv},
      primaryClass={physics.chem-ph}
}
```