jerryzh168 committed on
Commit 9de4d92 · verified · 1 Parent(s): 9cdb596

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -132,6 +132,9 @@ pip install accelerate
 
 Use the following code to get the quantized model:
 ```Py
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
+
 model_id = "google/gemma-3-27b-it"
 model_to_quantize = "google/gemma-3-27b-it"
 from torchao.quantization import Int4WeightOnlyConfig, quantize_, ModuleFqnToConfig
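For context, the three added lines complete the imports for the README's quantization snippet, whose remainder is not shown in this hunk. Below is a minimal sketch of how the snippet might continue once the imports are in place; the `Int4WeightOnlyConfig` arguments and the `from_pretrained` call are illustrative assumptions, and the imported `quantize_` / `ModuleFqnToConfig` helpers (presumably used later in the README for per-module configs) are left unexercised here.

```Py
# Sketch only: everything past the imports is an assumption about how the
# README's example likely proceeds, not part of this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig, quantize_, ModuleFqnToConfig

model_id = "google/gemma-3-27b-it"
model_to_quantize = "google/gemma-3-27b-it"

# Build an int4 weight-only quantization config (group_size is an assumed value)
# and pass it to transformers via TorchAoConfig so the weights are quantized on load.
quant_config = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_to_quantize,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```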