Corpus Data Directory
Location
/data/adaptai/corpus-data
Purpose
This directory serves as the central storage location for all corpus data pulled from Nebius S3 and other sources. It is the primary input directory for the bleeding-edge ETL pipeline.
Data Organization
Directory Structure
/data/adaptai/corpus-data/
├── nebius-oscar/            # OSCAR corpus from Nebius S3
│   ├── unsharded/           # Unsharded multilingual data
│   ├── wikipedia/           # Wikipedia dumps
│   └── commoncrawl/         # Common Crawl data
├── mounted-s3/              # Symlinks to mounted S3 buckets
│   ├── oscar-corpus -> /mnt/s3/oscar-corpus
│   └── other-buckets/       # Additional S3 buckets
├── processed/               # Processed data ready for analysis
│   ├── flowetl-transformed/ # FlowETL processed files
│   └── cleaned/             # Cleaned and normalized data
└── backups/                 # Corpus data backups
    └── YYYY-MM-DD/          # Date-based backup folders
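The static part of this layout can be created in one step. The sketch below uses a hypothetical helper name (`make_corpus_skeleton`); the dated `backups/YYYY-MM-DD/` folders are created later by the backup job, and the `oscar-corpus` symlink by the mount step under Integration Methods.

```shell
# Create the corpus-data directory skeleton under a given root.
# Usage: make_corpus_skeleton /data/adaptai/corpus-data
make_corpus_skeleton() {
  root="$1"
  mkdir -p \
    "$root/nebius-oscar/unsharded" \
    "$root/nebius-oscar/wikipedia" \
    "$root/nebius-oscar/commoncrawl" \
    "$root/mounted-s3/other-buckets" \
    "$root/processed/flowetl-transformed" \
    "$root/processed/cleaned" \
    "$root/backups"
}
```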
Data Sources
Primary Sources
Nebius S3 - OSCAR Corpus
- Open Super-large Crawled Aggregated coRpus
- 100+ languages
- Petabyte-scale multilingual data
- Real-time streaming capability
Wikipedia Dumps
- Multilingual Wikipedia articles
- Structured text data
- Regular updates
Common Crawl
- Web crawl data
- Diverse content types
- Massive scale
Integration Methods
Direct Mount (Recommended)
# Mount Nebius S3 buckets
s3fs oscar-corpus /mnt/s3/oscar-corpus -o url=https://storage.yandexcloud.net
# Access data through symlinks
ls -la /data/adaptai/corpus-data/mounted-s3/
Automated Pull Script
# Use the automated puller
python3 /data/adaptai/bleeding-edge-etl/nebius_s3_mount.py
# Environment variables required:
export Nebius_ACCESS_KEY=your_access_key
export Nebius_SECRET_KEY=your_secret_key
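Since the puller fails mid-run if credentials are absent, it helps to check them up front. A minimal sketch, assuming only standard POSIX shell (`require_env` is a hypothetical helper name, not part of the ETL tooling):

```shell
# Fail fast if any required environment variable is unset or empty.
# Prints the first missing variable name to stderr and returns 1.
require_env() {
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing required variable: $v" >&2
      return 1
    fi
  done
  return 0
}
```

Typical use: `require_env Nebius_ACCESS_KEY Nebius_SECRET_KEY && python3 /data/adaptai/bleeding-edge-etl/nebius_s3_mount.py`.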
Manual Download
# For specific file downloads
aws s3 sync s3://oscar-corpus/unsharded/ /data/adaptai/corpus-data/nebius-oscar/unsharded/
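Before moving real data it can be useful to see the exact sync command that will run. A hedged wrapper sketch (`sync_prefix` is an illustrative name; the bucket and target paths are the ones documented above):

```shell
# Build and run the manual sync for one prefix of the OSCAR bucket.
# With DRY_RUN=1 the command is only printed, not executed.
sync_prefix() {
  prefix="$1"
  cmd="aws s3 sync s3://oscar-corpus/$prefix /data/adaptai/corpus-data/nebius-oscar/$prefix"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}
```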
Processing Pipeline
FlowETL Integration
Corpus data in this directory is automatically processed by:
- FlowETL - Autonomous transformations
- Apache NiFi - Orchestration and flow management
- Apache Drill - Schema-free querying
- CWB/ANNIS - Linguistic analysis
Data Flow
Nebius S3 → /data/adaptai/corpus-data/ → FlowETL → Processed Data → Analysis
Storage Requirements
Capacity
- Initial: 10TB+ for sample datasets
- Production: 100TB+ for full corpus processing
- Scalable: Designed for petabyte-scale growth
Performance
- SSD Storage: /data partition optimized for high I/O
- Memory Caching: DragonFly/Redis for frequent access
- Network: High-throughput connectivity to Nebius S3
Access Patterns
Read Access
- FlowETL transformation engine
- Apache Drill for SQL queries
- CWB/ANNIS for linguistic analysis
- Research and development tools
Write Access
- Automated S3 sync processes
- Manual data ingestion
- Processing pipeline outputs
- Backup and archival systems
Backup Strategy
Automated Backups
# Daily mirror backup (rsync transfers only changed files; --delete removes files gone from the source)
rsync -av --delete /data/adaptai/corpus-data/ /backup/corpus-data/daily/
# Weekly full backups
tar -czf /backup/corpus-data/weekly/$(date +%Y-%m-%d).tar.gz /data/adaptai/corpus-data/
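Note that the weekly command above archives absolute paths (tar strips the leading `/` with a warning). A sketch of a wrapper that stores relative paths via `-C` and drops the archive into the dated filename used above (`weekly_backup` is a hypothetical helper name):

```shell
# Archive a directory into out_dir/YYYY-MM-DD.tar.gz with relative paths,
# so restores do not need to unpack into /. Prints the archive path.
weekly_backup() {
  src="$1"
  out_dir="$2"
  mkdir -p "$out_dir"
  out="$out_dir/$(date +%Y-%m-%d).tar.gz"
  tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
  echo "$out"
}
```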
Cloud Backup
- Regular sync to Nebius S3 for disaster recovery
- Versioned backups for data recovery
- Geographic redundancy
Security
Access Control
- Role-based permissions
- Audit logging
- Encryption at rest and in transit
Data Protection
- Anonymization where required
- Compliance with usage agreements
- Regular security audits
Monitoring
Health Checks
# Disk space monitoring
df -h /data/adaptai/corpus-data
# Data integrity checks (verify each .jsonl file parses as a stream of JSON values)
find /data/adaptai/corpus-data -name "*.jsonl" -exec jq empty {} \;
# Access monitoring
inotifywait -m -r /data/adaptai/corpus-data
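The `df` check above can be turned into an alert. A minimal sketch (the function name and the default 90% threshold are illustrative, not mandated settings):

```shell
# Warn when a filesystem's usage reaches the given percentage threshold.
# Parses the capacity column of portable df output; returns 1 on WARN.
check_disk_usage() {
  path="$1"
  limit="${2:-90}"
  used=$(df -P "$path" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
  if [ "$used" -ge "$limit" ]; then
    echo "WARN: $path at ${used}% (limit ${limit}%)"
    return 1
  fi
  echo "OK: $path at ${used}%"
}
```

Typical use: `check_disk_usage /data/adaptai/corpus-data 90` in a cron job, alerting on a nonzero exit.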
Performance Metrics
- Throughput: GB/s processed
- Latency: End-to-end processing time
- Quality: Data validation results
- Utilization: Storage capacity metrics
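For the throughput metric, a quick back-of-the-envelope calculation is bytes moved divided by elapsed time. A sketch using shell integer arithmetic (reported in whole MB/s; `throughput_mbs` is an illustrative name):

```shell
# Estimate throughput in MB/s from bytes processed and elapsed seconds.
# Integer arithmetic only, so the result is truncated toward zero.
throughput_mbs() {
  bytes="$1"
  seconds="$2"
  echo $(( bytes / seconds / 1048576 ))
}
```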
Troubleshooting
Common Issues
Permission Denied
sudo chown -R $(whoami):$(whoami) /data/adaptai/corpus-data
Disk Space Full
# Clean up temporary files
find /data/adaptai/corpus-data -name "*.tmp" -delete
S3 Mount Failed
# Check credentials
cat /etc/passwd-s3fs
# Remount
sudo umount /mnt/s3/*
sudo s3fs oscar-corpus /mnt/s3/oscar-corpus -o url=https://storage.yandexcloud.net
Related Components
ETL Pipeline
- FlowETL: /data/adaptai/bleeding-edge-etl/flowetl/
- Apache NiFi: /data/adaptai/bleeding-edge-etl/nifi/
- Apache Drill: /data/adaptai/bleeding-edge-etl/drill/
- CWB/ANNIS: /data/adaptai/bleeding-edge-etl/corpus-analysis/
Infrastructure
- Nebius S3: Cloud object storage
- DragonFly: High-performance cache
- Redis: Traditional caching
- Qdrant: Vector database for analysis
Maintained by: ETL Team - Bleeding-Edge Corpus Aggregation
Last Updated: August 24, 2025
Status: ACTIVE - Ready for Data Ingestion