dlouapre (HF Staff) committed
Commit fa3ad1a · 1 Parent(s): c5681ae

working on requirements

Browse files
Files changed (4)
  1. PROJECT.md +214 -0
  2. pyproject.toml +13 -0
  3. requirements.txt +0 -485
  4. uv.lock +0 -0
PROJECT.md ADDED
@@ -0,0 +1,214 @@
# Project Overview: Steered LLM Generation with SAE Features

## What This Project Does

This project demonstrates **activation steering** of large language models using Sparse Autoencoder (SAE) features. It modifies the internal activations of Llama 3.1 8B Instruct during text generation to control the model's behavior and output characteristics.

## Core Concept

Sparse Autoencoders (SAEs) decompose neural network activations into interpretable features. By extracting specific feature vectors from SAEs and adding them to the model's hidden states during generation, we can "steer" the model toward desired behaviors without fine-tuning.

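As a toy illustration of this idea, the snippet below pulls one column out of a stand-in "decoder matrix" and normalizes it (plain Python with hypothetical names; the project itself works with torch tensors):

```python
import math

def extract_steering_vector(decoder_columns, feature_idx):
    """Return the unit-norm decoder column for one SAE feature.

    decoder_columns: list of columns, each a list of floats -- a toy
    stand-in for the [hidden_dim, n_features] SAE decoder weight matrix.
    """
    col = decoder_columns[feature_idx]
    norm = math.sqrt(sum(x * x for x in col))
    return [x / norm for x in col]

# Two toy "features" in a 2-dimensional activation space
decoder = [[3.0, 4.0], [0.0, 2.0]]
vec = extract_steering_vector(decoder, 0)  # -> [0.6, 0.8]
```

Scaled by a `strength` and added to the hidden states, this unit vector becomes the steering signal described below.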
## Architecture

```
User Input → Tokenizer → Model with Forward Hooks → Steered Generation → Output
                                  ↑
                          Steering Vectors
                      (from pre-trained SAEs)
```

## Key Components

### 1. **Steering Vectors** (`steering.py`, `extract_steering_vectors.py`)

**Source**: SAE decoder weights from `andyrdt/saes-llama-3.1-8b-instruct`

**Extraction Process**:
- SAEs are trained to reconstruct model activations: `x ≈ decoder @ encoder(x)`
- Each decoder column represents a feature direction in activation space
- We extract specific columns (features) that produce desired behaviors
- Vectors are normalized and stored in `steering_vectors.pt`

**Functions**:
- `load_saes()`: Downloads SAE files from the HuggingFace Hub and extracts features
- `load_saes_from_file()`: Fast loading from pre-extracted vectors (preferred)

### 2. **Steering Implementation** (`steering.py`)

**Two Backends**:

#### A. **NNsight Backend** (for research/analysis)
- Uses `generate_steered_answer()` with NNsight's intervention API
- Modifies activations during generation using context managers
- Good for: experimentation, debugging, understanding interventions

#### B. **Transformers Backend** (for production/deployment)
- Uses `stream_steered_answer_hf()` with PyTorch forward hooks
- Direct hook registration on transformer layers
- Good for: deployment, streaming, efficiency

**Steering Mechanism** (`create_steering_hook()`):

```python
def hook(module, input, output):
    hidden_states = output[0]  # Shape: [batch, seq_len, hidden_dim]

    for steering_component in layer_components:
        # Match the activations' dtype/device before steering
        vector = steering_component['vector'].to(
            dtype=hidden_states.dtype, device=hidden_states.device
        )                                          # Direction to steer
        strength = steering_component['strength']  # How much to steer

        if clamp_intensity:
            # Remove the existing projection onto the (unit-norm) direction
            # to prevent over-steering: the final component along it then
            # equals `strength` instead of growing with the activation
            coeff = hidden_states @ vector                  # [batch, seq_len]
            hidden_states = hidden_states - coeff.unsqueeze(-1) * vector

        # Add steering to each token position (broadcasts over batch/seq)
        hidden_states = hidden_states + strength * vector

    return (hidden_states,) + output[1:]
```

**Key Insight**: Hooks are applied at specific layers during the forward pass, modifying activations before they propagate to subsequent layers.

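The clamping arithmetic can be checked on a tiny worked example. Below is a toy 1-D version in plain Python (the project itself operates on torch tensors; the names are illustrative):

```python
def clamp_along(h, v, strength):
    """Return h with its component along unit vector v set to `strength`."""
    coeff = sum(a * b for a, b in zip(h, v))   # current projection h . v
    # Subtract the existing component, then add the target one
    return [a + (strength - coeff) * b for a, b in zip(h, v)]

# h has component 2.0 along v; clamping to 3.0 replaces it outright
steered = clamp_along([2.0, 1.0], [1.0, 0.0], 3.0)  # -> [3.0, 1.0]
```

Without the clamp, repeated additions stack on top of whatever projection the activation already has; with it, the component along the steering direction is pinned to exactly `strength`.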
### 3. **Configuration** (`demo.yaml`)

```yaml
features:
  - [layer, feature_idx, strength]
  # Example: [11, 74457, 1.03]
  # Applies feature 74457 from layer 11 with strength 1.03
```

**Parameters**:
- `layer`: Which transformer layer to apply steering to (0–31 for Llama 3.1 8B)
- `feature_idx`: Which SAE feature to use (0–131071 for a 128k-feature SAE)
- `strength`: Multiplicative factor for steering intensity
- `clamp_intensity`: If true, removes the existing projection before adding steering

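Since hooks are registered per layer, the `[layer, feature_idx, strength]` rows need to be grouped by layer before registration. A minimal sketch, using hypothetical rows in the format above:

```python
from collections import defaultdict

# Hypothetical rows in the demo.yaml format: [layer, feature_idx, strength]
features = [[11, 74457, 1.03], [11, 2213, 0.50], [20, 901, -0.80]]

# One hook per layer, each carrying all of that layer's steering components
by_layer = defaultdict(list)
for layer, feature_idx, strength in features:
    by_layer[layer].append({"feature_idx": feature_idx, "strength": strength})

layers_to_steer = sorted(by_layer)  # -> [11, 20]
```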
### 4. **Applications**

#### A. **Console Demo** (`demo.py`)
- Interactive chat interface in the terminal
- Supports both NNsight and Transformers backends (configurable via `BACKEND`)
- Real-time streaming with the Transformers backend
- Color-coded output for better UX

#### B. **Web App** (`app.py`)
- Gradio interface for web deployment
- Streaming generation with `TextIteratorStreamer`
- Multi-turn conversation support
- ZeroGPU compatible for HuggingFace Spaces

## Implementation Details

### Device Management

**ZeroGPU Compatible**:
```python
# Model loaded with device_map="auto"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Steering vectors on CPU initially (Spaces mode)
load_device = "cpu" if SPACES_AVAILABLE else device

# Hooks automatically move vectors to GPU during inference
vector = vector.to(dtype=hidden_states.dtype, device=hidden_states.device)
```

### Streaming Generation

Uses threading to enable real-time token streaming:
```python
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)
thread = Thread(target=lambda: model.generate(..., streamer=streamer))
thread.start()

for token_text in streamer:
    yield token_text  # Send to UI as tokens arrive
```

### Hook Registration

```python
# Register hooks on specific layers
for layer_idx in layers_to_steer:
    hook_fn = create_steering_hook(layer_idx, steering_components)
    handle = model.model.layers[layer_idx].register_forward_hook(hook_fn)
    hook_handles.append(handle)

# Generate with steering
model.generate(...)

# Clean up
for handle in hook_handles:
    handle.remove()
```

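The register/remove pattern above should clean up even when generation raises, otherwise hooks leak into later requests. One way to guarantee that is a context manager; this is a sketch, not the project's actual API, and `FakeLayer`/`FakeHandle` are toy stand-ins for torch modules and hook handles:

```python
from contextlib import contextmanager

@contextmanager
def steering_hooks(layers, make_hook):
    """Register one hook per layer and guarantee removal, even on error.

    Assumes each layer exposes register_forward_hook() returning a handle
    with .remove(), as PyTorch modules do.
    """
    handles = [layer.register_forward_hook(make_hook(i))
               for i, layer in enumerate(layers)]
    try:
        yield handles
    finally:
        for handle in handles:
            handle.remove()

# Minimal stand-ins for a torch module and hook handle, to exercise the pattern
class FakeHandle:
    def __init__(self, layer, fn):
        self.layer, self.fn = layer, fn
    def remove(self):
        self.layer.hooks.remove(self.fn)

class FakeLayer:
    def __init__(self):
        self.hooks = []
    def register_forward_hook(self, fn):
        self.hooks.append(fn)
        return FakeHandle(self, fn)

layers = [FakeLayer(), FakeLayer()]
with steering_hooks(layers, lambda i: (lambda m, inp, out: out)):
    active = [len(layer.hooks) for layer in layers]   # hooks installed
after = [len(layer.hooks) for layer in layers]        # hooks removed on exit
```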
## Technical Advantages

1. **No Fine-tuning Required**: Steers pre-trained models without retraining
2. **Interpretable**: SAE features are more interpretable than raw activations
3. **Composable**: Multiple steering vectors can be combined
4. **Efficient**: Only modifies the forward pass; no backward pass needed
5. **Dynamic**: Different steering per generation, configurable at runtime

## Limitations

1. **SAE Dependency**: Requires pre-trained SAEs for the target model
2. **Manual Feature Selection**: Finding effective features requires experimentation
3. **Strength Tuning**: Steering strength needs calibration per feature
4. **Computational Overhead**: Small overhead from hook execution during generation

## File Structure

```
eiffel-demo/
├── app.py                        # Gradio web interface
├── demo.py                       # Console chat interface
├── steering.py                   # Core steering implementation
├── extract_steering_vectors.py   # SAE feature extraction
├── demo.yaml                     # Configuration (features, params)
├── steering_vectors.pt           # Pre-extracted vectors (generated)
├── print_utils.py                # Terminal formatting utilities
├── requirements.txt              # Dependencies
├── README.md                     # User documentation
└── PROJECT.md                    # This file
```

## Dependencies

**Core**:
- `transformers`: Model loading and generation
- `torch`: Neural network operations
- `gradio`: Web interface
- `nnsight`: Alternative intervention framework (optional)
- `sae-lens`: SAE utilities (for extraction only)

**Deployment**:
- `spaces`: HuggingFace Spaces ZeroGPU support
- `hf-transfer`: Fast model downloads

## Usage Flow

1. **Setup**: Extract steering vectors once
   ```bash
   python extract_steering_vectors.py
   ```

2. **Configure**: Edit `demo.yaml` to select features and strengths

3. **Run**: Launch the console or web interface
   ```bash
   python demo.py   # Console
   python app.py    # Web app
   ```

4. **Deploy**: Upload to HuggingFace Spaces with ZeroGPU

## References

- SAE Repository: `andyrdt/saes-llama-3.1-8b-instruct`
- Base Model: `meta-llama/Llama-3.1-8B-Instruct`
- Technique: Activation steering via learned SAE features
pyproject.toml ADDED
@@ -0,0 +1,13 @@
[project]
name = "eiffel-demo"
version = "0.1.0"
description = "Steered LLM demo using SAE features with Gradio interface"
requires-python = ">=3.11"
dependencies = [
    "torch>=2.8.0",
    "transformers>=4.56.2",
    "gradio>=4.0.0",
    "pyyaml>=6.0",
    "accelerate>=0.20.0",
    "spaces==0.28.3",
]
requirements.txt DELETED
@@ -1,485 +0,0 @@
# This file was autogenerated by uv via the following command:
#    uv pip compile pyproject.toml -o requirements.txt
accelerate==1.11.0  # via eiffel-demo (pyproject.toml), nnsight, transformer-lens
aiofiles==24.1.0  # via gradio
aiohappyeyeballs==2.6.1  # via aiohttp
aiohttp==3.13.2  # via fsspec
aiosignal==1.4.0  # via aiohttp
annotated-doc==0.0.3  # via fastapi
annotated-types==0.7.0  # via pydantic
anyio==4.11.0  # via gradio, httpx, starlette
astor==0.8.1  # via nnsight
asttokens==3.0.0  # via stack-data
attrs==25.4.0  # via aiohttp
babe==0.0.7  # via sae-lens
beartype==0.14.1  # via transformer-lens
better-abc==0.0.3  # via transformer-lens
bidict==0.23.1  # via python-socketio
brotli==1.1.0  # via gradio
certifi==2025.10.5  # via httpcore, httpx, requests, sentry-sdk
charset-normalizer==3.4.4  # via requests
click==8.3.0  # via nltk, typer, uvicorn, wandb
cloudpickle==3.1.2  # via nnsight
config2py==0.1.42  # via py2store
datasets==4.4.0  # via sae-lens, transformer-lens
decorator==5.2.1  # via ipython
dill==0.4.0  # via datasets, multiprocess
docstring-parser==0.17.0  # via simple-parsing
dol==0.3.31  # via config2py, graze, py2store
einops==0.8.1  # via transformer-lens
executing==2.2.1  # via stack-data
fancy-einsum==0.0.3  # via transformer-lens
fastapi==0.121.0  # via gradio
ffmpy==0.6.4  # via gradio
filelock==3.20.0  # via datasets, huggingface-hub, torch, transformers
frozenlist==1.8.0  # via aiohttp, aiosignal
fsspec==2025.10.0  # via datasets, gradio-client, huggingface-hub, torch
gitdb==4.0.12  # via gitpython
gitpython==3.1.45  # via wandb
gradio==5.49.1  # via eiffel-demo (pyproject.toml)
gradio-client==1.13.3  # via gradio
graze==0.1.39  # via babe
groovy==0.1.2  # via gradio
h11==0.16.0  # via httpcore, uvicorn, wsproto
hf-transfer==0.1.9  # via eiffel-demo (pyproject.toml)
hf-xet==1.2.0  # via huggingface-hub
httpcore==1.0.9  # via httpx
httpx==0.28.1  # via datasets, gradio, gradio-client, safehttpx
huggingface-hub==0.36.0  # via accelerate, datasets, gradio, gradio-client, tokenizers, transformers
i2==0.1.58  # via config2py
idna==3.11  # via anyio, httpx, requests, yarl
importlib-resources==6.5.2  # via py2store
ipython==9.6.0  # via nnsight
ipython-pygments-lexers==1.1.1  # via ipython
jaxtyping==0.3.3  # via transformer-lens
jedi==0.19.2  # via ipython
jinja2==3.1.6  # via gradio, torch
joblib==1.5.2  # via nltk
markdown-it-py==4.0.0  # via rich
markupsafe==3.0.3  # via gradio, jinja2
matplotlib-inline==0.2.1  # via ipython
mdurl==0.1.2  # via markdown-it-py
mpmath==1.3.0  # via sympy
multidict==6.7.0  # via aiohttp, yarl
multiprocess==0.70.18  # via datasets
narwhals==2.10.1  # via plotly
networkx==3.5  # via torch
nltk==3.9.2  # via sae-lens
nnsight==0.5.10  # via eiffel-demo (pyproject.toml)
numpy==1.26.4  # via accelerate, datasets, gradio, pandas, patsy, plotly-express, scipy, statsmodels, transformer-lens, transformers
nvidia-cublas-cu12==12.8.4.1  # via nvidia-cudnn-cu12, nvidia-cusolver-cu12, torch
nvidia-cuda-cupti-cu12==12.8.90  # via torch
nvidia-cuda-nvrtc-cu12==12.8.93  # via torch
nvidia-cuda-runtime-cu12==12.8.90  # via torch
nvidia-cudnn-cu12==9.10.2.21  # via torch
nvidia-cufft-cu12==11.3.3.83  # via torch
nvidia-cufile-cu12==1.13.1.3  # via torch
nvidia-curand-cu12==10.3.9.90  # via torch
nvidia-cusolver-cu12==11.7.3.90  # via torch
nvidia-cusparse-cu12==12.5.8.93  # via nvidia-cusolver-cu12, torch
nvidia-cusparselt-cu12==0.7.1  # via torch
nvidia-nccl-cu12==2.27.5  # via torch
nvidia-nvjitlink-cu12==12.8.93  # via nvidia-cufft-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, torch
nvidia-nvshmem-cu12==3.3.20  # via torch
nvidia-nvtx-cu12==12.8.90  # via torch
orjson==3.11.4  # via gradio
packaging==25.0  # via accelerate, datasets, gradio, gradio-client, huggingface-hub, plotly, statsmodels, transformers, wandb
pandas==2.3.3  # via babe, datasets, gradio, plotly-express, statsmodels, transformer-lens
parso==0.8.5  # via jedi
patsy==1.0.2  # via plotly-express, statsmodels
pexpect==4.9.0  # via ipython
pillow==11.3.0  # via gradio
platformdirs==4.5.0  # via wandb
plotly==6.3.1  # via plotly-express, sae-lens
plotly-express==0.4.1  # via sae-lens
prompt-toolkit==3.0.52  # via ipython
propcache==0.4.1  # via aiohttp, yarl
protobuf==6.33.0  # via wandb
psutil==7.1.3  # via accelerate
ptyprocess==0.7.0  # via pexpect
pure-eval==0.2.3  # via stack-data
py2store==0.1.22  # via babe
pyarrow==22.0.0  # via datasets
pydantic==2.11.10  # via fastapi, gradio, nnsight, wandb
pydantic-core==2.33.2  # via pydantic
pydub==0.25.1  # via gradio
pygments==2.19.2  # via ipython, ipython-pygments-lexers, rich
python-dateutil==2.9.0.post0  # via pandas
python-dotenv==1.2.1  # via sae-lens
python-engineio==4.12.3  # via python-socketio
python-multipart==0.0.20  # via gradio
python-socketio==5.14.3  # via nnsight
pytz==2025.2  # via pandas
pyyaml==6.0.3  # via eiffel-demo (pyproject.toml), accelerate, datasets, gradio, huggingface-hub, sae-lens, transformers, wandb
regex==2025.11.3  # via nltk, transformers
requests==2.32.5  # via datasets, graze, huggingface-hub, python-socketio, transformers, wandb
rich==14.2.0  # via nnsight, transformer-lens, typer
ruff==0.14.3  # via gradio
sae-lens==6.21.0  # via eiffel-demo (pyproject.toml)
safehttpx==0.1.7  # via gradio
safetensors==0.6.2  # via accelerate, sae-lens, transformers
scipy==1.16.3  # via plotly-express, statsmodels
semantic-version==2.10.0  # via gradio
sentencepiece==0.2.1  # via transformer-lens
sentry-sdk==2.43.0  # via wandb
shellingham==1.5.4  # via typer
simple-parsing==0.1.7  # via sae-lens
simple-websocket==1.1.0  # via python-engineio
six==1.17.0  # via python-dateutil
smmap==5.0.2  # via gitdb
sniffio==1.3.1  # via anyio
stack-data==0.6.3  # via ipython
starlette==0.49.3  # via fastapi, gradio
statsmodels==0.14.5  # via plotly-express
sympy==1.14.0  # via torch
tenacity==9.1.2  # via sae-lens
tokenizers==0.22.1  # via transformers
toml==0.10.2  # via nnsight
tomlkit==0.13.3  # via gradio
torch==2.9.0  # via eiffel-demo (pyproject.toml), accelerate, nnsight, transformer-lens
tqdm==4.67.1  # via datasets, huggingface-hub, nltk, transformer-lens, transformers
traitlets==5.14.3  # via ipython, matplotlib-inline
transformer-lens==2.16.1  # via sae-lens
transformers==4.57.1  # via eiffel-demo (pyproject.toml), nnsight, sae-lens, transformer-lens, transformers-stream-generator
transformers-stream-generator==0.0.5  # via transformer-lens
triton==3.5.0  # via torch
typeguard==4.4.4  # via transformer-lens
typer==0.20.0  # via gradio
typing-extensions==4.15.0  # via aiosignal, anyio, fastapi, gradio, gradio-client, huggingface-hub, ipython, pydantic, pydantic-core, sae-lens, simple-parsing, starlette, torch, transformer-lens, typeguard, typer, typing-inspection, wandb
typing-inspection==0.4.2  # via pydantic
tzdata==2025.2  # via pandas
urllib3==2.5.0  # via requests, sentry-sdk
uvicorn==0.38.0  # via gradio
wadler-lindig==0.1.7  # via jaxtyping
wandb==0.22.3  # via transformer-lens
wcwidth==0.2.14  # via prompt-toolkit
websocket-client==1.9.0  # via python-socketio
websockets==15.0.1  # via gradio-client
wsproto==1.2.0  # via simple-websocket
xxhash==3.6.0  # via datasets
yarl==1.22.0  # via aiohttp

# HuggingFace Spaces ZeroGPU support
spaces==0.28.3  # via eiffel-demo (for ZeroGPU deployment)
uv.lock ADDED
The diff for this file is too large to render. See raw diff