Narrow AI: Experimental Model Repository
This repository contains experimental model checkpoints and data from the paper "On the creation of narrow AI: hierarchy and nonlocality of neural network skills" by Eric Michaud, Asher Parker-Sartori, and Max Tegmark.
Repository Contents
This dataset provides the hard-to-reproduce LLM experimental artifacts that support the paper's key figures, particularly training curves and model performance data for scaling analysis and pruning studies.
Experiments Included
1. trainscratch01/ - LLMs Trained from Scratch
- Purpose: Training small to medium LLMs from scratch for scaling analysis
- Models: 9 architectures ranging from 23M to 1.6B parameters
- Architecture format: `d{hidden_size}_l{num_layers}_h{num_heads}`
- Key models included:
  - `d768_l12_h12/` - 338M parameters (representative medium model)
  - `d2048_l32_h32/` - 1.6B parameters (large model for scaling)
- Training: 100K steps on GitHub code dataset
- Paper figures: Figure 6, Figure 12
2. pruneandtrain01/ - Attribution-Based Pruning
- Purpose: Pruning LLaMA-3.2-1B using gradient attribution, then recovery training
- Base model: NousResearch/Llama-3.2-1B
- Configurations: Various neuron and residual sparsity levels
- Key configurations included:
  - `n0.50_r0.50/` - 50% neuron, 50% residual pruning (moderate)
  - `n0.90_r0.50/` - 90% neuron, 50% residual pruning (aggressive)
- Unique files:
  - `pruning_mask.pt` - Binary masks indicating pruned neurons
  - `pruning_stats.json` - Detailed attribution scores and pruning decisions
  - `experiment_metadata.json` - Sparsity levels and run metadata
- Paper figures: Figure 6, Figure 12, Figure 13
3. pruneandtrainrandom00/ - Random Pruning Baseline
- Purpose: Random pruning comparison for attribution-based methods
- Configuration: `n0.50_r0.20/` for direct comparison with attribution methods
- Paper figures: Figure 13
4. distillscratch00/ - Knowledge Distillation (Selected)
- Purpose: Training small models via knowledge distillation
- Teacher models: Meta-Llama-3.1-8B, Llama-3.2-3B
- Student: `d768_l12_h12/` architecture for comparison (a generic distillation-loss sketch follows this list)
- Paper figures: Figure 6, Figure 12
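The distillation objective itself is not spelled out in this card. As a rough reference, a standard soft-label distillation loss between teacher and student next-token logits looks like the sketch below; the temperature, weighting, and function name are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic soft-label KD loss (illustrative; the paper's objective may differ)."""
    # Soft targets: KL divergence between temperature-scaled teacher and student distributions
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary next-token cross-entropy against the data labels
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    return alpha * kd + (1 - alpha) * ce
```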
5. tuneprune15-redo/ - Group-Sparsity Regularized Training
- Purpose: Training Llama-3.2-1B on Python code with group-sparsity penalty to induce structured sparsity
- Base model: NousResearch/Llama-3.2-1B (1.2B parameters)
- Method: L1 norm of the per-neuron L2 norms of the MLP parameters, which encourages entire neurons to go to zero (a minimal sketch of this penalty follows this list)
- Dataset: `codeparrot/github-code` (Python subset)
- Training: 70,000 steps with various regularization strengths
- Configurations:
  - `lambda_0.0003_bs_18_acc_6/` - Light regularization (λ=0.0003)
  - `lambda_0.0005_bs_18_acc_6/` - Moderate regularization (λ=0.0005)
  - `lambda_0.001_bs_18_acc_6/` - Strong regularization (λ=0.001)
- Unique files:
  - `experiment_metadata.json` - Complete training setup and regularization details
  - `trainer_state.json` - Full training curves, including data loss and regularization loss
- Training script: located in `$HOME/narrow/experiments/tuneprune15-redo`
- Key feature: subdistribution training (Python only) with explicit sparsity induction
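For concreteness, the group-sparsity (group-lasso) penalty described above can be written as the sum over MLP neurons of each neuron's parameter L2 norm, scaled by λ. The sketch below assumes a LLaMA-style module layout and a particular neuron grouping (rows of `gate_proj`/`up_proj`, columns of `down_proj`); the paper's code may group parameters differently.

```python
import torch

def group_sparsity_penalty(model, lam):
    """L1-of-L2 (group-lasso) penalty over MLP neurons; illustrative sketch only."""
    penalty = 0.0
    for layer in model.model.layers:          # LLaMA-style decoder layers
        mlp = layer.mlp
        # Neuron i owns row i of gate_proj/up_proj and column i of down_proj
        group_sq = (mlp.gate_proj.weight.pow(2).sum(dim=1)
                    + mlp.up_proj.weight.pow(2).sum(dim=1)
                    + mlp.down_proj.weight.pow(2).sum(dim=0))
        penalty = penalty + group_sq.sqrt().sum()   # L1 norm of per-neuron L2 norms
    return lam * penalty

# The total training objective would then be: loss = data_loss + group_sparsity_penalty(model, lam)
```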
Model Architecture Details
Parameter Scaling
| Model | Hidden Size | Layers | Heads | Intermediate | Parameters |
|---|---|---|---|---|---|
| d256_l4_h4 | 256 | 4 | 4 | 1024 | ~23M |
| d512_l8_h8 | 512 | 8 | 8 | 2048 | ~92M |
| d768_l12_h12 | 768 | 12 | 12 | 3072 | ~338M |
| d2048_l32_h32 | 2048 | 32 | 32 | 8192 | ~1.6B |
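The parameter counts above can be roughly sanity-checked by instantiating a LLaMA-style config with the listed sizes and counting parameters. The vocabulary size below is an assumption based on the Llama-3.1 tokenizer; the authoritative settings are in each checkpoint's `config.json`.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Illustrative only: the real hyperparameters live in each checkpoint's config.json
config = LlamaConfig(
    vocab_size=128256,        # assumed Llama-3.1 tokenizer vocabulary
    hidden_size=768,          # d768_l12_h12 row of the table
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
)
model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```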
Pruning Configurations
| Config | Neuron Sparsity | Residual Sparsity | Description |
|---|---|---|---|
| n0.50_r0.50 | 50% | 50% | Moderate pruning |
| n0.90_r0.50 | 90% | 50% | Aggressive neuron pruning |
| n0.50_r0.20 | 50% | 20% | Light residual pruning |
File Structure
Each model directory contains:
Standard Checkpoints
- `final_model/` - Final trained model
- `checkpoint-{step}/` - Intermediate checkpoints (every 5K steps)
- `model_stats.json` - Parameter counts and architecture info
Files per Checkpoint
- `model.safetensors` - Model weights in SafeTensors format
- `config.json` - Model configuration
- `tokenizer.json` - Tokenizer configuration
- `trainer_state.json` - Training history and loss curves
- `training_args.bin` - Training arguments
Pruning-Specific Files
- `pruning_mask.pt` - Binary masks for pruned parameters (~5GB)
- `pruning_stats.json` - Attribution scores and pruning decisions (~8MB)
- `experiment_metadata.json` - Run metadata and sparsity settings
Usage Examples
Loading a Model
```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a trained-from-scratch model (this is a dataset repo, so download the files locally first)
local_dir = snapshot_download("ericjm/narrow-data", repo_type="dataset",
                              allow_patterns="trainscratch01/d768_l12_h12/final_model/*")
model = AutoModelForCausalLM.from_pretrained(f"{local_dir}/trainscratch01/d768_l12_h12/final_model")
tokenizer = AutoTokenizer.from_pretrained(f"{local_dir}/trainscratch01/d768_l12_h12/final_model")
```
Loading Pruning Data
```python
import json
import torch
from huggingface_hub import hf_hub_download

# Download the pruning mask and statistics for one configuration
config = "n0.50_r0.50"
mask_path = hf_hub_download("ericjm/narrow-data", f"pruneandtrain01/{config}/pruning_mask.pt",
                            repo_type="dataset")
stats_path = hf_hub_download("ericjm/narrow-data", f"pruneandtrain01/{config}/pruning_stats.json",
                             repo_type="dataset")
mask = torch.load(mask_path)
with open(stats_path) as f:
    stats = json.load(f)
```
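As a quick check, the mask can be compared against the sparsity encoded in the configuration name. The snippet below assumes `pruning_mask.pt` holds binary tensors with 1 marking kept units, stored either as a single tensor or as a dict of per-module tensors; the actual layout may differ.

```python
# Hypothetical sanity check: assumes binary masks where 1 = kept, 0 = pruned
masks = mask if isinstance(mask, dict) else {"mask": mask}
kept = sum(m.float().sum().item() for m in masks.values())
total = sum(m.numel() for m in masks.values())
print(f"overall fraction pruned: {1 - kept / total:.2f}")
```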
Analyzing Training Curves
```python
import json
from huggingface_hub import hf_hub_download

# Load training history (trainer_state.json) for a trained-from-scratch model
state_path = hf_hub_download("ericjm/narrow-data",
                             "trainscratch01/d768_l12_h12/final_model/trainer_state.json",
                             repo_type="dataset")
with open(state_path) as f:
    trainer_state = json.load(f)
training_loss = [entry['train_loss'] for entry in trainer_state['log_history'] if 'train_loss' in entry]
```
Loading Group-Sparsity Models (tuneprune15-redo)
```python
import json
from huggingface_hub import hf_hub_download, snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a model trained with group-sparsity regularization
lambda_config = "lambda_0.0005_bs_18_acc_6"
subdir = f"tuneprune15-redo/{lambda_config}/checkpoint-70000"
local_dir = snapshot_download("ericjm/narrow-data", repo_type="dataset",
                              allow_patterns=f"{subdir}/*")
model_path = f"{local_dir}/{subdir}"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Load experiment metadata
metadata_path = hf_hub_download("ericjm/narrow-data", "tuneprune15-redo/experiment_metadata.json",
                                repo_type="dataset")
with open(metadata_path) as f:
    metadata = json.load(f)

# Analyze training curves, including the regularization loss
with open(f"{model_path}/trainer_state.json") as f:
    trainer_state = json.load(f)
data_loss = [x['data_loss'] for x in trainer_state['log_history'] if 'data_loss' in x]
reg_loss = [x['reg_loss'] for x in trainer_state['log_history'] if 'reg_loss' in x]
```
Reproducing Paper Figures
Figure 6 & 12: LLM Training Frontiers
- Data: Training curves from `trainscratch01/`, `distillscratch00/`, and `pruneandtrain01/`
- Analysis: Compare training efficiency and final performance across methods
- Notebook: See paper repository for analysis code
Figure 13: Attribution vs Random Pruning
- Data: Recovery curves from `pruneandtrain01/` vs. `pruneandtrainrandom00/`
- Key comparison: the `n0.50_r0.20` configuration in both experiments (see the sketch below)
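A minimal plotting sketch for this comparison is below. The checkpoint name (`checkpoint-20000`, based on the 20K recovery steps listed under Technical Details) and the `loss` logging key are assumptions; adjust them to the actual files in the repository. The same pattern works for the Figure 6/12 frontier plots by pointing at the other experiment directories.

```python
import json
import matplotlib.pyplot as plt
from huggingface_hub import hf_hub_download

# Assumed paths and logging key; adjust to the actual checkpoint names and log format
runs = {
    "attribution": "pruneandtrain01/n0.50_r0.20/checkpoint-20000/trainer_state.json",
    "random": "pruneandtrainrandom00/n0.50_r0.20/checkpoint-20000/trainer_state.json",
}
for label, repo_path in runs.items():
    local = hf_hub_download("ericjm/narrow-data", repo_path, repo_type="dataset")
    with open(local) as f:
        state = json.load(f)
    entries = [e for e in state["log_history"] if "loss" in e]
    plt.plot([e["step"] for e in entries], [e["loss"] for e in entries], label=label)
plt.xlabel("recovery training step")
plt.ylabel("training loss")
plt.legend()
plt.show()
```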
Technical Details
Training Setup
- Dataset: `codeparrot/github-code` (Python subset); a rough data-loading sketch follows this list
- Sequence length: 1024 tokens
- Tokenizer: Meta-Llama-3.1-8B tokenizer
- Training steps: 100K for scratch training, 20K for pruning recovery
- Learning rate: 5e-4 (scratch), 5e-5 (pruning recovery)
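A rough sketch of this preprocessing, under stated assumptions: the Python subset is selected by filtering on the dataset's `language` column (the training code may instead use the dataset's built-in language configs), the gated `meta-llama/Meta-Llama-3.1-8B` tokenizer is used, and depending on your `datasets` version the dataset's loading script may require `trust_remote_code=True`.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumptions: streaming plus a manual language filter; Llama-3.1 tokenizer (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
ds = load_dataset("codeparrot/github-code", split="train", streaming=True)
python_only = ds.filter(lambda ex: ex["language"] == "Python")

def tokenize(example):
    # Truncate to the 1024-token sequence length used for training
    return tokenizer(example["code"], truncation=True, max_length=1024)

tokenized = python_only.map(tokenize)
```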
Pruning Method
- Attribution: Gradient-based neuron importance scoring (a generic sketch follows this list)
- Sparsity: Separate control of neuron and residual stream dimensions
- Recovery: Fine-tuning with masked gradients to recover performance
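The exact scoring rule is not given in this card. One common instantiation of gradient-based attribution is an activation-times-gradient score per MLP neuron; the sketch below illustrates that pattern for a LLaMA-style model. The hook placement, score definition, and function name are assumptions, not necessarily the paper's method.

```python
import torch

def neuron_importance(model, batch):
    """Illustrative |activation * gradient| attribution for LLaMA-style MLP neurons."""
    acts, handles = {}, []

    def make_pre_hook(name):
        def hook(module, inputs):
            act = inputs[0]          # input to down_proj: per-neuron MLP activations
            act.retain_grad()        # keep gradients for this intermediate tensor
            acts[name] = act
        return hook

    for i, layer in enumerate(model.model.layers):
        handles.append(layer.mlp.down_proj.register_forward_pre_hook(make_pre_hook(f"layer_{i}")))

    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()

    # Importance per neuron: |a * dL/da| summed over batch and sequence positions
    importance = {name: (a.detach() * a.grad).abs().sum(dim=(0, 1)) for name, a in acts.items()}
    for h in handles:
        h.remove()
    return importance
```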
Computational Requirements
- Training: NVIDIA A100 80GB
- Storage: ~50GB for essential models, ~1TB for complete archive
- Memory: Models range from 23M to 1.6B parameters
Citation
If you use this data in your research, please cite:
```bibtex
@article{michaud2024narrow,
  title={On the creation of narrow AI: hierarchy and nonlocality of neural network skills},
  author={Michaud, Eric and Parker-Sartori, Asher and Tegmark, Max},
  journal={arXiv preprint},
  year={2024}
}
```
License
This dataset is released under the same license as the paper. Please see the paper repository for detailed licensing information.
Contact
For questions about this dataset, please contact Eric Michaud or open an issue in the paper's repository.