Text Generation · Transformers · Safetensors · zaya · conversational
yury-zyphra committed · Commit ba27dd5 · verified · 1 parent: 4b860cb

Update README.md (#1)

- Update README.md (eba9d41e73fee205ee911e6660f4a06b092fb30c)

Files changed (1): README.md (+19 −2)
README.md CHANGED
@@ -43,12 +43,29 @@ ZAYA1-reasoning-base also performs extremely strongly on many extremely challeng
 
 ### Prerequisites
 
-TODO instructions to use our branch
+To use ZAYA1, install the `zaya` branch of our fork of the `transformers` library, which is based on `transformers` v4.57.1:
+```bash
+pip install "transformers @ git+https://github.com/Zyphra/transformers.git@zaya"
+```
+
+The command above assumes that the requirements for `transformers` v4.57.1 are already installed in your environment. If you are installing into a fresh Python environment, you may want to specify an extra such as `[dev-torch]` to install all the dependencies:
+```bash
+pip install "transformers[dev-torch] @ git+https://github.com/Zyphra/transformers.git@zaya"
+```
 
 
 ### Inference
 
-TODO
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
+tokenizer = AutoTokenizer.from_pretrained("Zyphra/ZAYA1-reasoning-base")
+model = AutoModelForCausalLM.from_pretrained("Zyphra/ZAYA1-reasoning-base", device_map="cuda", dtype=torch.bfloat16)
+
+input_text = "What factors contributed to the fall of the Roman Empire?"
+input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+outputs = model.generate(**input_ids, max_new_tokens=100)
+print(tokenizer.decode(outputs[0]))
+```
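The install command added in this diff uses a PEP 508 direct reference of the form `name[extra] @ vcs+URL@branch`. As a rough sketch of how pip reads that string — the parsing helper below is purely illustrative, not part of the model card or of pip's API:

```python
# Decompose the PEP 508 direct reference from the README's pip command.
# (Illustrative string handling only; pip does the real parsing internally.)
spec = "transformers[dev-torch] @ git+https://github.com/Zyphra/transformers.git@zaya"

# Split "name[extras]" from the VCS URL at the " @ " separator.
name_part, url = (s.strip() for s in spec.split(" @ ", 1))

# Separate the distribution name from its optional extras.
name, _, extras_part = name_part.partition("[")
extras = extras_part.rstrip("]").split(",") if extras_part else []

# The trailing "@zaya" on the URL selects the git branch to install from.
branch = url.rsplit("@", 1)[1]

print(name)    # transformers
print(extras)  # ['dev-torch']
print(branch)  # zaya
```

Because the fork installs under the normal `transformers` distribution name, a later upgrade from PyPI can silently replace it; pin or reinstall the branch after environment changes.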