Update README with CUDA library setup instructions and correct NumPy version
README.md
CHANGED
@@ -25,7 +25,7 @@ Complete PyTorch ML stack with all dependencies - no conflicts, easy installation
 - **Python:** 3.10 compatible
 - **PyTorch:** 2.7.1 + CUDA 12.6
 - **Transformers:** 4.52.3
-- **NumPy:**
+- **NumPy:** 2.0.2 (compatible version)
 - **SciPy:** 1.15.2
 - **All Dependencies:** 80+ wheels, fully tested together
 
@@ -53,11 +53,24 @@ subprocess.run(["pip", "install"] + [f"{wheel_path}/*.whl"], shell=True)
 # 1. Download repository
 git clone https://huggingface.co/RDHub/pytorch_python_310
 
-# 2. Install everything
+# 2. Install everything with requirements file for correct versions
 cd pytorch_python_310
-pip install lib_wheel
-
-# 3.
+pip install -r lib_wheel/requirements.txt --find-links lib_wheel --no-index
+
+# 3. Set up CUDA libraries (for conda environments)
+# Create activation script for automatic library path setup
+mkdir -p $CONDA_PREFIX/etc/conda/activate.d
+cat > $CONDA_PREFIX/etc/conda/activate.d/pytorch_cuda_libs.sh << 'EOF'
+#!/bin/bash
+# Set up NVIDIA CUDA library paths for PyTorch
+NVIDIA_LIB_PATH=$(find $CONDA_PREFIX -path "*/nvidia/*/lib" -type d 2>/dev/null | tr '\n' ':')
+CUSPARSELT_LIB_PATH=$(find $CONDA_PREFIX -path "*/cusparselt/lib" -type d 2>/dev/null | tr '\n' ':')
+export LD_LIBRARY_PATH="${NVIDIA_LIB_PATH}${CUSPARSELT_LIB_PATH}${LD_LIBRARY_PATH}"
+EOF
+chmod +x $CONDA_PREFIX/etc/conda/activate.d/pytorch_cuda_libs.sh
+
+# 4. Reactivate environment and test
+conda deactivate && conda activate your_env_name
 python -c "import torch; print(f'PyTorch {torch.__version__} - CUDA: {torch.cuda.is_available()}')"
 ```
 
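The activation hook added in step 3 only takes effect once the environment is reactivated (step 4). A minimal check that it ran, assuming a conda environment and the script path used above; this is an editor's sketch, not part of the commit:

```bash
# Sketch (not in the commit): confirm the activation hook put the bundled
# NVIDIA / cuSPARSELt library directories onto the runtime search path.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -E 'nvidia|cusparselt'

# PyTorch should report its CUDA build; on a GPU machine the runtime should initialize.
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```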
@@ -67,7 +80,7 @@ python -c "import torch; print(f'PyTorch {torch.__version__} - CUDA: {torch.cuda.is_available()}')"
 |---------|---------|---------|
 | PyTorch | 2.7.1 | 3.10 |
 | Transformers | 4.52.3 | 3.10 |
-| NumPy |
+| NumPy | 2.0.2 | 3.10 |
 | CUDA | 12.6 | - |
 
 ## 🎯 Use Cases
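To compare an installed environment against the matrix above, the version strings can be printed in one shot; a sketch assuming the listed packages import cleanly in the active environment:

```bash
# Sketch: print installed versions for comparison with the compatibility matrix.
python -c "import torch, transformers, numpy, scipy; \
print('PyTorch     ', torch.__version__); \
print('Transformers', transformers.__version__); \
print('NumPy       ', numpy.__version__); \
print('SciPy       ', scipy.__version__); \
print('CUDA build  ', torch.version.cuda)"
```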
@@ -83,8 +96,9 @@ Perfect for:
 
 - **No dependency conflicts** - all versions tested together
 - **Offline ready** - no internet needed after download
-- **CUDA included** - ready for GPU training
+- **CUDA included** - ready for GPU training with library path setup
 - **Linux x86_64** compatible
+- **Requires conda environment** - for automatic CUDA library path management
 
 ---
 
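The "Offline ready" and "80+ wheels" bullets above depend on lib_wheel containing every wheel the pinned requirements resolve to. A hypothetical sketch of how such a cache could be rebuilt on an internet-connected machine; the flags and target platform are assumptions, not taken from the commit:

```bash
# Hypothetical sketch: rebuild an offline wheel cache like lib_wheel
# for Python 3.10 on Linux x86_64 (manylinux wheels only).
pip download -r lib_wheel/requirements.txt -d lib_wheel \
    --only-binary=:all: \
    --python-version 3.10 \
    --platform manylinux2014_x86_64
```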