Upload README.md with huggingface_hub
README.md
CHANGED
````diff
@@ -42,7 +42,7 @@ print(logits.shape) # (batch_size, num_labels), (2, 2)
 ESM++ weights are fp32 by default. You can load them in fp16 or bf16 like this:
 ```python
 import torch
-model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_large', trust_remote_code=True,
+model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_large', trust_remote_code=True, dtype=torch.float16) # or torch.bfloat16
 ```
 
 ## Embed entire datasets with no new code
@@ -156,9 +156,9 @@ We look at various ESM models and their throughput on an H100. Adding efficient
 If you use any of this implementation or work please cite it (as well as the ESMC preprint).
 
 ```
-@misc {
+@misc {FastPLMs,
 author = { Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
-title = {
+title = { FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel.},
 year = {2024},
 url = { https://huggingface.co/Synthyra/ESMplusplus_small },
 DOI = { 10.57967/hf/3726 },
````
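The half-precision change in the first hunk can be illustrated without downloading ESM++. This is a minimal sketch using plain `torch` only; the `nn.Linear` layer is a stand-in (an assumption, not the model's actual loading path) for the model's fp32-by-default weights, and the conversions mirror what passing a half-precision dtype to `from_pretrained` achieves.

```python
import torch
import torch.nn as nn

# Weights are fp32 by default, just like ESM++ checkpoints.
layer = nn.Linear(4, 4)
print(layer.weight.dtype)  # torch.float32

# Convert to fp16 (what dtype=torch.float16 requests at load time).
layer_fp16 = nn.Linear(4, 4).half()
print(layer_fp16.weight.dtype)  # torch.float16

# Or to bf16, which keeps fp32's exponent range at reduced precision.
layer_bf16 = nn.Linear(4, 4).to(torch.bfloat16)
print(layer_bf16.weight.dtype)  # torch.bfloat16
```

Either conversion halves weight memory relative to fp32; bf16 is generally the safer choice on hardware that supports it, since it avoids fp16 overflow.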