lhallee committed
Commit 049ba28 · verified · 1 Parent(s): 64d62e2

Upload README.md with huggingface_hub

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -42,7 +42,7 @@ print(logits.shape) # (batch_size, num_labels), (2, 2)
 ESM++ weights are fp32 by default. You can load them in fp16 or bf16 like this:
 ```python
 import torch
-model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_large', trust_remote_code=True, torch_dtype=torch.float16) # or torch.bfloat16
+model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_large', trust_remote_code=True, dtype=torch.float16) # or torch.bfloat16
 ```

 ## Embed entire datasets with no new code
@@ -156,9 +156,9 @@ We look at various ESM models and their throughput on an H100. Adding efficient
 If you use any of this implementation or work please cite it (as well as the ESMC preprint).

 ```
-@misc {ESM++,
+@misc {FastPLMs,
 author = { Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
-title = { ESM++: Efficient and Hugging Face compatible versions of the ESM Cambrian models},
+title = { FastPLMs: Fast, efficient protein language model inference from Huggingface AutoModel.},
 year = {2024},
 url = { https://huggingface.co/Synthyra/ESMplusplus_small },
 DOI = { 10.57967/hf/3726 },
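For context, a minimal sketch of what the updated snippet does end to end. It assumes a recent transformers release in which `dtype` supersedes the older `torch_dtype` keyword on `from_pretrained` (older releases still expect `torch_dtype`); the dtype check at the end is illustrative, not part of the README:

```python
import torch
from transformers import AutoModelForMaskedLM

# Load ESM++ directly in half precision instead of casting after load.
# `dtype=` assumes a recent transformers version; on older versions,
# pass `torch_dtype=torch.float16` instead.
model = AutoModelForMaskedLM.from_pretrained(
    'Synthyra/ESMplusplus_large',
    trust_remote_code=True,   # ESM++ ships custom modeling code on the Hub
    dtype=torch.float16,      # or torch.bfloat16
)

# Confirm the weights were actually loaded in half precision.
print(next(model.parameters()).dtype)  # torch.float16
```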