---
{}
---

# Model Card for Dayhoff

In this work, we combined genome-derived protein sequences, metagenomic sequences, structure-based synthetic sequences, and multiple sequence alignments (MSAs) to create the Dayhoff Atlas of protein data and language models. We first created a large-scale natural protein dataset, GigaRef, by combining and reclustering sequences from metagenomic databases with UniRef100. With 3.3B sequences in 1.7B clusters, GigaRef is the largest open dataset of natural proteins to date. To infuse the benefits of protein structure information into sequence space, we generated the first large-scale structure-based synthetic dataset, called BackboneRef, by sampling 240,830 backbone structures from a structure-based generative model and then using them to design a total of 46M synthetic sequences. Using UniRef, GigaRef, BackboneRef, and 16M MSAs from OpenProteinSet, we then trained the Dayhoff series of protein language models (PLMs), which use a hybrid state-space-model (SSM) and transformer architecture along with a mixture-of-experts (MoE) mechanism to enable the long context lengths needed to combine single sequences and MSAs at scale. Dayhoff models make accurate zero-shot predictions of mutation effects, generate sequences conditioned on aligned or unaligned homologs, and generate shorter Cas9s that preserve the functional domain architecture. Larger models, metagenomic sequences, and structure-based augmentation all increased the expression rates of unconditional generations in *E. coli*. Finally, we generated, characterized, and released 16M synthetic sequences as DayhoffRef.

Dayhoff is described in this [preprint](preprint); if you use the code from this repository or the results, please cite the preprint.

## Model Details

### Model Description

- **Developed by:** Kevin K. Yang, Sarah Alamdari, Alex J. Lee, Kaeli Kaymak-Loveless, Samir Char, Garyk Brixi, Carles Domingo-Enrich, Chentong Wang, Suyue Lyu, Nicolo Fusi, Neil Tenenholtz, Ava P. Amini
- **Model type:** Hybrid state-space-model transformer architecture with mixture-of-experts
- **License:** MIT

### Model Sources

- **Repository:** https://github.com/microsoft/dayhoff

## Uses

### Downstream Use

* Protein Language Model Training: training protein language models to generate new protein sequences, predict mutation effects, and design functional proteins.
* Zero-shot Prediction: predicting the functional impact of mutations (a hedged scoring sketch follows the limitations note below).
* Sequence Generation: generating new protein sequences, either unconditionally or conditioned on homologs, to design proteins with desired properties.
* Synthetic Sequence Generation: exploring novel protein structures and functions using synthetic sequences.

## Bias, Risks, and Limitations

The software and models described in this repository are provided for research and development use only. They are not intended for use in clinical decision-making or for any other clinical use, and their performance for clinical use has not been established. You bear sole responsibility for any use of this software and these models, including incorporation into any product intended for clinical use.
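As a companion to the generation example in the getting-started section below, here is a minimal sketch of the zero-shot mutation-effect workflow listed under Downstream Use. It scores a point mutation by comparing summed autoregressive log-probabilities of the mutant and wild-type sequences; the example sequence, the mutation, and this scoring convention are illustrative assumptions rather than the evaluation protocol used in the preprint.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: score a point mutation by comparing autoregressive
# log-likelihoods of the mutant and wild-type sequences.
model = AutoModelForCausalLM.from_pretrained('microsoft/Dayhoff-170m-UR50-BRu')
tokenizer = AutoTokenizer.from_pretrained('microsoft/Dayhoff-170m-UR50-BRu', trust_remote_code=True)
model.eval()

def log_likelihood(sequence: str) -> float:
    """Summed log-probability of a sequence under the model."""
    input_ids = tokenizer(sequence, return_tensors="pt", return_token_type_ids=False)["input_ids"]
    with torch.no_grad():
        logits = model(input_ids=input_ids).logits
    # Next-token log-probabilities for every position except the last.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    return log_probs.gather(-1, targets.unsqueeze(-1)).sum().item()

wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # illustrative sequence
mutant = wild_type[:10] + "A" + wild_type[11:]  # hypothetical point mutation at position 11

# A higher (less negative) difference suggests the mutation is better tolerated.
print(log_likelihood(mutant) - log_likelihood(wild_type))
```

Scores of this kind are only meaningful relative to one another; absolute values depend on tokenization details such as any special tokens the tokenizer adds.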
## How to Get Started with the Model

Sample protein generation code:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(0)
torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained('microsoft/Dayhoff-170m-UR50-BRu')
tokenizer = AutoTokenizer.from_pretrained('microsoft/Dayhoff-170m-UR50-BRu', trust_remote_code=True)

# Sample a sequence starting from the beginning-of-sequence token
inputs = tokenizer(tokenizer.bos_token, return_tensors="pt", return_token_type_ids=False)
outputs = model.generate(inputs['input_ids'], max_length=50, do_sample=True)
sequence = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(sequence)
```

For detailed instructions on package usage, please refer to the README in the model repository.

## Evaluation

### Results

Dayhoff models make accurate zero-shot predictions of mutation effects, generate sequences conditioned on aligned or unaligned homologs, and generate shorter Cas9s that preserve the functional domain architecture. Larger models, metagenomic sequences, and structure-based augmentation all increased the expression rates of unconditional generations in *E. coli*.

## Technical Specifications

### Compute Infrastructure

* 170M-parameter models: trained on 8 NVIDIA A100 or 8 NVIDIA H100 GPUs using Distributed Data Parallel.
* 3B-parameter models: trained on 176 NVIDIA H100 GPUs using Fully Sharded Data Parallel in hybrid-shard mode.

## Citation

If you use this model in your work, please cite it as follows:

**BibTeX:**

## Model Card Authors

Samir Char, Sarah A. Alamdari