# Corpus Data Directory

## Location
`/data/adaptai/corpus-data`

## Purpose
This directory is the central storage location for all corpus data pulled from Nebius S3 and other sources, and the primary input directory for the bleeding-edge ETL pipeline.

## Data Organization

### Directory Structure
```
/data/adaptai/corpus-data/
├── nebius-oscar/              # OSCAR corpus from Nebius S3
│   ├── unsharded/             # Unsharded multilingual data
│   ├── wikipedia/             # Wikipedia dumps
│   └── commoncrawl/           # Common Crawl data
├── mounted-s3/                # Symlinks to mounted S3 buckets
│   ├── oscar-corpus -> /mnt/s3/oscar-corpus
│   └── other-buckets/         # Additional S3 buckets
├── processed/                 # Processed data ready for analysis
│   ├── flowetl-transformed/   # FlowETL-processed files
│   └── cleaned/               # Cleaned and normalized data
└── backups/                   # Corpus data backups
    └── YYYY-MM-DD/            # Date-based backup folders
```
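To bootstrap this layout on a fresh host, the fixed directories can be created in one step (a minimal sketch; the dated backup folders are created by the backup jobs as needed):

```bash
# Create the fixed parts of the corpus-data skeleton (bash brace expansion)
mkdir -p /data/adaptai/corpus-data/{nebius-oscar/{unsharded,wikipedia,commoncrawl},mounted-s3/other-buckets,processed/{flowetl-transformed,cleaned},backups}
```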
## Data Sources

### Primary Sources
1. **Nebius S3 - OSCAR Corpus**
   - Open Super-large Crawled Aggregated coRpus
   - 100+ languages
   - Petabyte-scale multilingual data
   - Real-time streaming capability
2. **Wikipedia Dumps**
   - Multilingual Wikipedia articles
   - Structured text data
   - Regular updates
3. **Common Crawl**
   - Web crawl data
   - Diverse content types
   - Massive scale
### Integration Methods

#### Direct Mount (Recommended)
```bash
# Mount Nebius S3 buckets (credentials read from /etc/passwd-s3fs)
s3fs oscar-corpus /mnt/s3/oscar-corpus -o url=https://storage.yandexcloud.net -o passwd_file=/etc/passwd-s3fs

# Access data through symlinks
ls -la /data/adaptai/corpus-data/mounted-s3/
```
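To make the mount persist across reboots, it can be declared in `/etc/fstab` (a sketch assuming the same endpoint and credential file as above):

```bash
# /etc/fstab entry for the OSCAR bucket; _netdev defers mounting until the network is up
oscar-corpus /mnt/s3/oscar-corpus fuse.s3fs _netdev,url=https://storage.yandexcloud.net,passwd_file=/etc/passwd-s3fs 0 0
```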
#### Automated Pull Script
```bash
# Required credentials (set before running the puller)
export NEBIUS_ACCESS_KEY=your_access_key
export NEBIUS_SECRET_KEY=your_secret_key

# Use the automated puller
python3 /data/adaptai/bleeding-edge-etl/nebius_s3_mount.py
```
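For unattended pulls, the script can be scheduled via cron; the env-file path below is a hypothetical location for the exported keys, shown only for illustration:

```bash
# Illustrative crontab entry (crontab -e): nightly pull at 02:00.
# /etc/adaptai/nebius.env is a hypothetical file exporting the NEBIUS_* keys.
0 2 * * * . /etc/adaptai/nebius.env && python3 /data/adaptai/bleeding-edge-etl/nebius_s3_mount.py >> /var/log/corpus-pull.log 2>&1
```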
#### Manual Download
```bash
# For specific file downloads (point the AWS CLI at the same endpoint used for mounting)
aws s3 sync s3://oscar-corpus/unsharded/ /data/adaptai/corpus-data/nebius-oscar/unsharded/ \
  --endpoint-url https://storage.yandexcloud.net
```
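Before committing disk space to a large sync, `aws s3 sync` can preview the transfer with `--dryrun`:

```bash
# List what would be copied without writing anything
aws s3 sync --dryrun s3://oscar-corpus/unsharded/ /data/adaptai/corpus-data/nebius-oscar/unsharded/ \
  --endpoint-url https://storage.yandexcloud.net
```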
## Processing Pipeline

### FlowETL Integration
Corpus data in this directory is automatically processed by:
1. **FlowETL** - Autonomous transformations
2. **Apache NiFi** - Orchestration and flow management
3. **Apache Drill** - Schema-free querying (see the example query after this list)
4. **CWB/ANNIS** - Linguistic analysis
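As an example of schema-free querying, a Drill query can be issued over its REST API (a sketch assuming Drill runs locally on its default port 8047 with the stock `dfs` storage plugin):

```bash
# Query cleaned corpus files in place; no schema registration needed
curl -s -X POST http://localhost:8047/query.json \
  -H 'Content-Type: application/json' \
  -d '{"queryType": "SQL", "query": "SELECT * FROM dfs.`/data/adaptai/corpus-data/processed/cleaned` LIMIT 10"}'
```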
### Data Flow
```
Nebius S3 → /data/adaptai/corpus-data/ → FlowETL → Processed Data → Analysis
```
## Storage Requirements

### Capacity
- **Initial**: 10TB+ for sample datasets
- **Production**: 100TB+ for full corpus processing
- **Scalable**: Designed for petabyte-scale growth

### Performance
- **SSD Storage**: /data partition optimized for high I/O
- **Memory Caching**: DragonFly/Redis for frequently accessed data
- **Network**: High-throughput connectivity to Nebius S3
## Access Patterns

### Read Access
- FlowETL transformation engine
- Apache Drill for SQL queries
- CWB/ANNIS for linguistic analysis
- Research and development tools

### Write Access
- Automated S3 sync processes
- Manual data ingestion
- Processing pipeline outputs
- Backup and archival systems
## Backup Strategy

### Automated Backups
```bash
# Daily mirror backup (rsync transfers only changes; --delete propagates removals)
rsync -av --delete /data/adaptai/corpus-data/ /backup/corpus-data/daily/

# Weekly full backups
tar -czf /backup/corpus-data/weekly/$(date +%Y-%m-%d).tar.gz /data/adaptai/corpus-data/
```
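Both jobs can be scheduled from cron; note that `%` must be escaped as `\%` inside a crontab line:

```bash
# Illustrative schedule (crontab -e): daily mirror at 01:00, weekly tar on Sundays at 03:00
0 1 * * * rsync -av --delete /data/adaptai/corpus-data/ /backup/corpus-data/daily/
0 3 * * 0 tar -czf /backup/corpus-data/weekly/$(date +\%Y-\%m-\%d).tar.gz /data/adaptai/corpus-data/
```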
### Cloud Backup
- Regular sync to Nebius S3 for disaster recovery (see the sketch below)
- Versioned backups for data recovery
- Geographic redundancy
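A minimal disaster-recovery sync might look like the following; the `corpus-data-dr` bucket name is hypothetical:

```bash
# Push processed data to a dedicated DR bucket (bucket name is illustrative)
aws s3 sync /data/adaptai/corpus-data/processed/ s3://corpus-data-dr/processed/ \
  --endpoint-url https://storage.yandexcloud.net
```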
## Security

### Access Control
- Role-based permissions (see the ACL sketch below)
- Audit logging
- Encryption at rest and in transit
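Role-based permissions can be enforced with POSIX ACLs; a minimal sketch, assuming a hypothetical `etl` group that owns the pipeline:

```bash
# Grant the etl group read/write on the whole tree, recursively
setfacl -R -m g:etl:rwX /data/adaptai/corpus-data
# Default ACL on directories so newly created files inherit the same access
find /data/adaptai/corpus-data -type d -exec setfacl -d -m g:etl:rwX {} +
```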
### Data Protection
- Anonymization where required
- Compliance with usage agreements
- Regular security audits
## Monitoring

### Health Checks
```bash
# Disk space monitoring
df -h /data/adaptai/corpus-data

# Data integrity check: verify every JSONL file parses as JSON (via jq)
find /data/adaptai/corpus-data -name "*.jsonl" \
  -exec sh -c 'jq . "$1" > /dev/null || echo "invalid JSON: $1"' _ {} \;

# Access monitoring (live filesystem events)
inotifywait -m -r /data/adaptai/corpus-data
```
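For unattended capacity monitoring, a small threshold check can run from cron (the 90% threshold and alert address are examples, and a configured `mail` command is assumed):

```bash
# Emit an alert when the corpus partition crosses 90% usage
usage=$(df --output=pcent /data/adaptai/corpus-data | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge 90 ]; then
  echo "corpus-data at ${usage}% capacity" | mail -s "disk alert: corpus-data" etl-team@example.com
fi
```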
### Performance Metrics
- Throughput: GB/s processed
- Latency: end-to-end processing time
- Quality: data validation results
- Utilization: storage capacity metrics
## Troubleshooting

### Common Issues
1. **Permission Denied**
   ```bash
   sudo chown -R $(whoami):$(whoami) /data/adaptai/corpus-data
   ```
2. **Disk Space Full**
   ```bash
   # Clean up temporary files
   find /data/adaptai/corpus-data -name "*.tmp" -delete
   ```
3. **S3 Mount Failed**
   ```bash
   # Check credentials
   cat /etc/passwd-s3fs

   # Remount
   sudo umount /mnt/s3/*
   sudo s3fs oscar-corpus /mnt/s3/oscar-corpus -o url=https://storage.yandexcloud.net -o passwd_file=/etc/passwd-s3fs
   ```
## Related Components

### ETL Pipeline
- **FlowETL**: `/data/adaptai/bleeding-edge-etl/flowetl/`
- **Apache NiFi**: `/data/adaptai/bleeding-edge-etl/nifi/`
- **Apache Drill**: `/data/adaptai/bleeding-edge-etl/drill/`
- **CWB/ANNIS**: `/data/adaptai/bleeding-edge-etl/corpus-analysis/`

### Infrastructure
- **Nebius S3**: Cloud object storage
- **DragonFly**: High-performance cache
- **Redis**: Traditional caching
- **Qdrant**: Vector database for analysis

---
**Maintained by**: ETL Team - Bleeding-Edge Corpus Aggregation
**Last Updated**: August 24, 2025
**Status**: ACTIVE - Ready for Data Ingestion