# Corpus Data Directory
## Location
`/data/adaptai/corpus-data`
## Purpose
This directory serves as the central storage location for all corpus data pulled from Nebius S3 and other sources. It is the primary input directory for the bleeding-edge ETL pipeline.
## Data Organization
### Directory Structure
```
/data/adaptai/corpus-data/
├── nebius-oscar/              # OSCAR corpus from Nebius S3
│   ├── unsharded/             # Unsharded multilingual data
│   ├── wikipedia/             # Wikipedia dumps
│   └── commoncrawl/           # Common Crawl data
├── mounted-s3/                # Symlinks to mounted S3 buckets
│   ├── oscar-corpus -> /mnt/s3/oscar-corpus
│   └── other-buckets/         # Additional S3 buckets
├── processed/                 # Processed data ready for analysis
│   ├── flowetl-transformed/   # FlowETL processed files
│   └── cleaned/               # Cleaned and normalized data
└── backups/                   # Corpus data backups
    └── YYYY-MM-DD/            # Date-based backup folders
```
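The layout can be recreated on a fresh host with a single `mkdir -p`, using the directory names from the tree above:
```bash
# Recreate the documented directory layout in one step
mkdir -p /data/adaptai/corpus-data/{nebius-oscar/{unsharded,wikipedia,commoncrawl},mounted-s3,processed/{flowetl-transformed,cleaned},backups}
```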
## Data Sources
### Primary Sources
1. **Nebius S3 - OSCAR Corpus**
- Open Super-large Crawled Aggregated coRpus
- 100+ languages
- Petabyte-scale multilingual data
- Real-time streaming capability
2. **Wikipedia Dumps**
- Multilingual Wikipedia articles
- Structured text data
- Regular updates
3. **Common Crawl**
- Web crawl data
- Diverse content types
- Massive scale
### Integration Methods
#### Direct Mount (Recommended)
```bash
# Mount the Nebius S3 bucket (credentials are read from /etc/passwd-s3fs)
s3fs oscar-corpus /mnt/s3/oscar-corpus -o passwd_file=/etc/passwd-s3fs -o url=https://storage.yandexcloud.net

# Access data through symlinks
ls -la /data/adaptai/corpus-data/mounted-s3/
```
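The mount above does not survive a reboot. A minimal persistence sketch, assuming the `/etc/passwd-s3fs` credentials file referenced under Troubleshooting and the `fuse.s3fs` fstab type supported by current s3fs releases:
```bash
# One-time credential setup (s3fs expects "ACCESS_KEY:SECRET_KEY")
echo "your_access_key:your_secret_key" | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs

# Persist the mount across reboots
echo "oscar-corpus /mnt/s3/oscar-corpus fuse.s3fs _netdev,allow_other,url=https://storage.yandexcloud.net 0 0" | sudo tee -a /etc/fstab
```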
#### Automated Pull Script
```bash
# Credentials expected by the puller
export NEBIUS_ACCESS_KEY=your_access_key
export NEBIUS_SECRET_KEY=your_secret_key

# Run the automated puller
python3 /data/adaptai/bleeding-edge-etl/nebius_s3_mount.py
```
#### Manual Download
```bash
# Download specific prefixes directly (note the non-AWS endpoint)
aws s3 sync s3://oscar-corpus/unsharded/ /data/adaptai/corpus-data/nebius-oscar/unsharded/ --endpoint-url https://storage.yandexcloud.net
```
## Processing Pipeline
### FlowETL Integration
Corpus data in this directory is automatically processed by:
1. **FlowETL** - Autonomous transformations
2. **Apache NiFi** - Orchestration and flow management
3. **Apache Drill** - Schema-free querying
4. **CWB/ANNIS** - Linguistic analysis
### Data Flow
```
Nebius S3 → /data/adaptai/corpus-data/ → FlowETL → Processed Data → Analysis
```
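As a concrete illustration of the first hop, the sketch below pre-cleans raw OSCAR shards into `processed/cleaned/` before FlowETL picks them up. It assumes OSCAR-style JSONL records with a `content` field; the shard glob is hypothetical:
```bash
#!/usr/bin/env bash
SRC=/data/adaptai/corpus-data/nebius-oscar/unsharded
DST=/data/adaptai/corpus-data/processed/cleaned
mkdir -p "$DST"

for f in "$SRC"/*.jsonl; do
  # Drop records with empty content and collapse runs of whitespace
  jq -c 'select((.content // "") | length > 0)
         | .content |= gsub("\\s+"; " ")' "$f" > "$DST/$(basename "$f")"
done
```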
## Storage Requirements
### Capacity
- **Initial**: 10TB+ for sample datasets
- **Production**: 100TB+ for full corpus processing
- **Scalable**: Designed for petabyte-scale growth
### Performance
- **SSD Storage**: /data partition optimized for high I/O
- **Memory Caching**: DragonFly/Redis for frequently accessed data (see the sketch after this list)
- **Network**: High-throughput connectivity to Nebius S3
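As an example of the caching layer, expensive per-shard statistics can be memoized; a sketch using `redis-cli` (DragonFly speaks the Redis protocol, so the same commands work against either; the shard name is a placeholder):
```bash
# Memoize a shard's record count for one hour
SHARD=/data/adaptai/corpus-data/nebius-oscar/unsharded/shard-00001.jsonl
KEY="corpus:linecount:$(basename "$SHARD")"

COUNT=$(redis-cli GET "$KEY")
if [ -z "$COUNT" ]; then
  COUNT=$(wc -l < "$SHARD")
  redis-cli SET "$KEY" "$COUNT" EX 3600
fi
echo "$COUNT"
```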
## Access Patterns
### Read Access
- FlowETL transformation engine
- Apache Drill for SQL queries (example below)
- CWB/ANNIS for linguistic analysis
- Research and development tools
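For example, Drill can query the JSONL files in place, with no load step; a sketch using Drill's REST API, assuming a Drillbit on the default port 8047 with the `dfs` storage plugin enabled:
```bash
# Count records across all files under processed/cleaned without any ETL
curl -s -X POST http://localhost:8047/query.json \
  -H "Content-Type: application/json" \
  -d '{"queryType": "SQL",
       "query": "SELECT COUNT(*) AS records FROM dfs.`/data/adaptai/corpus-data/processed/cleaned`"}'
```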
### Write Access
- Automated S3 sync processes
- Manual data ingestion
- Processing pipeline outputs
- Backup and archival systems
## Backup Strategy
### Automated Backups
```bash
# Daily mirror backup (incremental transfer; --delete prunes removed files)
rsync -av --delete /data/adaptai/corpus-data/ /backup/corpus-data/daily/
# Weekly full backups
tar -czf /backup/corpus-data/weekly/$(date +%Y-%m-%d).tar.gz /data/adaptai/corpus-data/
```
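To run these on the stated cadence, register them in cron (the times below are assumptions; note that `%` must be escaped in crontab entries):
```bash
# crontab -e: daily at 02:00, weekly on Sunday at 03:00
0 2 * * * rsync -av --delete /data/adaptai/corpus-data/ /backup/corpus-data/daily/
0 3 * * 0 tar -czf /backup/corpus-data/weekly/$(date +\%Y-\%m-\%d).tar.gz /data/adaptai/corpus-data/
```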
### Cloud Backup
- Regular sync to Nebius S3 for disaster recovery (sketch below)
- Versioned backups for data recovery
- Geographic redundancy
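A minimal disaster-recovery sync sketch, reusing the endpoint from the mount instructions (`corpus-backups` is a placeholder bucket name):
```bash
# Push processed outputs back to object storage for off-site recovery
aws s3 sync /data/adaptai/corpus-data/processed/ s3://corpus-backups/processed/ \
  --endpoint-url https://storage.yandexcloud.net
```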
## Security
### Access Control
- Role-based permissions
- Audit logging
- Encryption at rest and in transit
### Data Protection
- Anonymization where required
- Compliance with usage agreements
- Regular security audits
## Monitoring
### Health Checks
```bash
# Disk space monitoring
df -h /data/adaptai/corpus-data
# Data integrity checks (every line of a .jsonl file must parse as JSON)
find /data/adaptai/corpus-data -name "*.jsonl" -exec jq empty {} \;
# Access monitoring
inotifywait -m -r /data/adaptai/corpus-data
```
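The disk check can be turned into a cron-able alert; a sketch assuming GNU `df` and a local `mail` command (the threshold and address are placeholders):
```bash
#!/usr/bin/env bash
# Alert when the corpus partition crosses 90% usage
USAGE=$(df --output=pcent /data/adaptai/corpus-data | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge 90 ]; then
  echo "corpus-data at ${USAGE}% capacity" | mail -s "corpus-data disk alert" etl-team@example.com
fi
```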
### Performance Metrics
- Throughput: GB/s processed
- Latency: End-to-end processing time
- Quality: Data validation results
- Utilization: Storage capacity metrics
## Troubleshooting
### Common Issues
1. **Permission Denied**
```bash
sudo chown -R $(whoami):$(whoami) /data/adaptai/corpus-data
```
2. **Disk Space Full**
```bash
# Clean up temporary files
find /data/adaptai/corpus-data -name "*.tmp" -delete
```
3. **S3 Mount Failed**
```bash
# Check credentials
cat /etc/passwd-s3fs
# Remount with explicit credentials file
sudo umount /mnt/s3/*
sudo s3fs oscar-corpus /mnt/s3/oscar-corpus -o passwd_file=/etc/passwd-s3fs -o url=https://storage.yandexcloud.net
```
## Related Components
### ETL Pipeline
- **FlowETL**: `/data/adaptai/bleeding-edge-etl/flowetl/`
- **Apache NiFi**: `/data/adaptai/bleeding-edge-etl/nifi/`
- **Apache Drill**: `/data/adaptai/bleeding-edge-etl/drill/`
- **CWB/ANNIS**: `/data/adaptai/bleeding-edge-etl/corpus-analysis/`
### Infrastructure
- **Nebius S3**: Cloud object storage
- **DragonFly**: High-performance cache
- **Redis**: Traditional caching
- **Qdrant**: Vector database for analysis
---
**Maintained by**: ETL Team - Bleeding-Edge Corpus Aggregation
**Last Updated**: August 24, 2025
**Status**: ACTIVE - Ready for Data Ingestion