# Hugging Face Dataset Upload Instructions
## Files to Upload

### Core Dataset Files
- `README.md` - Complete dataset card with metadata, description, and usage examples
- `data.csv` - Clean CSV file with 516 scenarios and their misery scores
- `load_dataset.py` - Python script for easy dataset loading and exploration
- `requirements.txt` - Dependencies needed to use the dataset (a plausible sketch follows this list)
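For reference, `requirements.txt` plausibly needs only the loading dependencies used in the examples later in this document. This is a sketch, not the authoritative file, so verify it against the actual imports:

```text
pandas
datasets
huggingface_hub
```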
### Supporting Files (Optional)
- `misery_index.py` - Advanced `datasets` library loading script
- `UPLOAD_INSTRUCTIONS.md` - This file (for reference)
## Upload Steps

### Method 1: Using the Hugging Face Hub (Recommended)
1. Install the Hugging Face Hub client:

   ```bash
   pip install huggingface_hub
   ```

2. Log in to Hugging Face:

   ```bash
   huggingface-cli login
   ```

3. Create and upload the dataset:

   ```python
   from huggingface_hub import HfApi, create_repo

   # Create the dataset repository
   repo_id = "your-username/misery-index"
   create_repo(repo_id, repo_type="dataset")

   # Upload each core file
   api = HfApi()
   for filename in ["README.md", "data.csv", "load_dataset.py", "requirements.txt"]:
       api.upload_file(
           path_or_fileobj=filename,
           path_in_repo=filename,
           repo_id=repo_id,
           repo_type="dataset",
       )
   ```
### Method 2: Using Git
1. Clone the dataset repository:

   ```bash
   git clone https://huggingface.co/datasets/your-username/misery-index
   cd misery-index
   ```

2. Copy your files into the repository:

   ```bash
   cp /path/to/your/files/* .
   ```

3. Push to Hugging Face:

   ```bash
   git add .
   git commit -m "Add Misery Index Dataset"
   git push
   ```
### Method 3: Web Interface

1. Go to [Hugging Face Datasets](https://huggingface.co/datasets)
2. Create a new dataset repository
3. Upload files using the web interface
4. Edit `README.md` directly in the browser if needed
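The web interface is the simplest route for one-off edits. If you later want to script the same upload, recent versions of `huggingface_hub` also ship a CLI upload command. A minimal sketch, assuming `huggingface_hub` >= 0.17 and that you run it from the directory containing the files:

```bash
# Upload the current directory's contents to the dataset repo in one command
huggingface-cli upload your-username/misery-index . . --repo-type dataset
```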
## Usage After Upload

Once uploaded, users can load the dataset in several ways:

### Using the Datasets Library
```python
from datasets import load_dataset

# Load from the Hugging Face Hub
dataset = load_dataset("your-username/misery-index")
print(dataset["train"][0])
```
### Using Pandas (Direct CSV)
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download and load the CSV
file_path = hf_hub_download(
    repo_id="your-username/misery-index",
    filename="data.csv",
    repo_type="dataset"
)
df = pd.read_csv(file_path)
```
### Using the Provided Script
```python
from huggingface_hub import hf_hub_download
import importlib.util

# Download the load_dataset.py script from the Hub
script_path = hf_hub_download(
    repo_id="your-username/misery-index",
    filename="load_dataset.py",
    repo_type="dataset"
)

# Import the script as a module and use its helpers
spec = importlib.util.spec_from_file_location("load_dataset", script_path)
load_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(load_module)

# Assumes data.csv is available locally (e.g., downloaded as in the previous example)
df = load_module.load_misery_dataset("data.csv")
stats = load_module.get_dataset_statistics(df)
```
## Dataset Configuration

The dataset uses these configurations (mirrored in the front-matter sketch below):

- License: CC-BY-4.0 (Creative Commons Attribution)
- Language: English (en)
- Tasks: text regression, sentiment analysis, emotion prediction
- Size: 516 samples (`n<1K` size category)
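On the Hub, this metadata lives in YAML front matter at the top of `README.md`. A hedged sketch of what it might look like; the `task_categories` and `tags` values here are assumptions and should match whatever the actual dataset card declares:

```yaml
---
license: cc-by-4.0
language:
  - en
task_categories:
  - text-classification  # assumed; choose the closest supported task category
tags:
  - sentiment-analysis
  - emotion-prediction
size_categories:
  - n<1K  # 516 samples
---
```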
## Validation Checklist

Before uploading, ensure the following (a script that automates several of these checks appears after the list):
- `README.md` contains a comprehensive dataset card
- `data.csv` has all 516 rows with proper formatting
- No missing values in critical columns (`scenario`, `misery_score`)
- `load_dataset.py` runs without errors
- `requirements.txt` includes all necessary dependencies
- License is properly specified (CC-BY-4.0)
- Dataset tags are appropriate for discoverability
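A minimal sketch of such a pre-upload check, assuming the column names `scenario` and `misery_score` from the checklist:

```python
import pandas as pd

# Pre-upload sanity check for data.csv
df = pd.read_csv("data.csv")

assert len(df) == 516, f"expected 516 rows, found {len(df)}"
for col in ("scenario", "misery_score"):
    assert col in df.columns, f"missing column: {col}"
    assert df[col].notna().all(), f"missing values in {col}"

print("data.csv passed the checks")
```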
## Post-Upload Tasks

1. Test the uploaded dataset:

   ```python
   from datasets import load_dataset

   ds = load_dataset("your-username/misery-index")
   print(f"Dataset loaded successfully with {len(ds['train'])} samples")
   ```

2. Update the dataset card if needed using the web interface
3. Share the dataset with relevant research communities
4. Consider creating a model trained on this dataset (a baseline sketch follows)
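For that last task, a minimal text-regression baseline sketch; scikit-learn is an assumed extra dependency, and the column names come from the checklist above:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Predict misery scores from scenario text with a TF-IDF + ridge baseline
df = pd.read_csv("data.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["scenario"], df["misery_score"], test_size=0.2, random_state=0
)

vectorizer = TfidfVectorizer()
model = Ridge().fit(vectorizer.fit_transform(X_train), y_train)
print("held-out R^2:", model.score(vectorizer.transform(X_test), y_test))
```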
## Support
If you encounter issues:

- Check the [Hugging Face documentation](https://huggingface.co/docs)
- Visit the Hugging Face Discord
- Open a discussion on the dataset repository's Community tab