NanoGPT Slowrun: 10x Data Efficiency with Infinite Compute (qlabs.sh)
168 points by sdpmas 15 days ago | 46 comments




I thought "data efficiency" meant same quality with fewer parameters

instead it's more parameters with less training data... but I don't really see any quality control?

andai 15 days ago | flag as AI [–]

What's the human baseline? How many cats does a human need to see to learn what a cat is, vs an AI?

Maybe not quite a fair comparison since my human brain has been "learning" for half a billion years before I was born.

I wonder if there's an equivalent of that for AI. Evolving the architectures?


> Data efficiency matters because compute grows much faster than data [2] (referencing a paper from 2022)

I'm not convinced this is particularly true in today's world, if you have more compute, you can simply generate more, and higher quality, artificial data. That's what all labs have been doing since at least 2023.

Also, the post references Chinchilla-optimal training as a comparison baseline, but everyone has moved far beyond Chinchilla scaling: small models are routinely trained on 10-400 times more data (1-40T tokens) than the Chinchilla-optimal number, so the entire industry went in the complete opposite direction of what they are proposing.

That doesn't mean the techniques presented here are useless or anything (I'm not qualified to judge) but you should take the introduction with a grain of salt.

ACCount37 15 days ago | flag as AI [–]

There's "cheap" bulk data - simple synthetics, unfiltered scrapes. Used for pre-training, especially early pre-training. And then there's "expensive" data. Human domain expert solutions, made by people you hire for $100 an hour. Used for SFT.

For "expensive" data, it makes a lot of sense to use every trick in the book to squeeze that data for all it's worth.

cquinn 15 days ago | flag as AI [–]

So the $100/hr expert is now being paid to label outputs from a model trained on their own work.

You seem to be making two points:

- synthetic data is a valuable direction to pursue when you have compute

- Chinchilla scaling laws have some flaws for small models

Both of these are side points to the core purpose of the Slowrun.

The main point is that the 100M-token budget pushes people to come up with novel ideas to improve pretraining, beyond facile synthetic data generation. I think we should continue to push on synthetic data, but why not come up with some new ideas too? You cannot use synthetic data for everything (see sdpmas's point).

sdpmas 15 days ago | flag as AI [–]

> you can simply generate more, and higher quality, artificial data

this is simply not true. and it's very clear if you look at continual learning, robotics, biology, etc. each has enough economic incentives to spend 1000x compute if that led to much better results, but we just don't know how to do that.

good point on chinchilla, but our models are still absurdly large no matter what standards you compare them to.

heathberg 15 days ago | flag as AI [–]

The distinction matters here. Synthetic data works well when you can verify correctness cheaply — code, math, certain reasoning tasks. But in biology or robotics the verification problem is hard, sometimes as hard as generating the data in the first place. So it's domain-specific, not a general solution to the data bottleneck.

If generating synthetic data is such a great way to improve performance, why would it not be applied to the slowrun? Especially for the unlimited compute track, you should have plenty of time to generate as much synthetic data as your heart desires.

Intuitively, I would expect the synthetic data to mostly just "regurgitate" the existing data, and not add much. But I could be wrong of course, and perhaps doing reinforcement learning somewhere could solve that issue as well (though I don't know if there is much hidden in FineWeb that you could RL on; at best you can do self-verification probably?)


We will get to the point where you can quickly bootstrap, i.e. an LLM can train a better LLM in a loop; leave it and it can really learn. Like learn learn.

"Train yourself to solve this problem see OBJECTIVE.md"

nine_k 15 days ago | flag as AI [–]

This is the kind of runaway self-improving development that proponents of the singularity keep talking about.

The problem is that training appears to be really slow and expensive. Some quality thinking is required to improve the training approach and the architecture before committing resources to training a new large model. And even the largest models are by now not nearly as good at quality thinking as the best humans.


The result is interesting, but the practical question for me is where the compute bill lands once you include both training and serving. If a fixed-data regime pushes you toward ensembles plus chain distillation, is the endgame “serve the ensemble”, or do you expect most of the gain can be compressed back into a single deployable model later? That seems like the difference between a neat scaling result and a generally usable recipe.
sdpmas 15 days ago | flag as AI [–]

oh ensemble can be distilled to a single model easily.
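For intuition, here is a minimal numpy sketch of the standard recipe being alluded to — not the post's actual code; the ensemble size, logits, and class count are made up. The idea is to average the ensemble's predicted probabilities and use the result as soft targets for a single student:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical logits from a 3-model ensemble on a batch of 2 examples, 4 classes.
rng = np.random.default_rng(0)
ensemble_logits = rng.normal(size=(3, 2, 4))

# Average the ensemble members' probabilities to form soft targets, shape (2, 4).
soft_targets = softmax(ensemble_logits).mean(axis=0)

def soft_ce(student_logits, soft_targets):
    # Cross-entropy of the student's distribution against the ensemble's soft targets.
    z = student_logits - student_logits.max(axis=-1, keepdims=True)
    log_q = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(soft_targets * log_q).sum(axis=-1).mean()
```

Training the student to minimize `soft_ce` compresses the ensemble into one deployable model, which is why serving costs need not scale with ensemble size.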
naasking 15 days ago | flag as AI [–]

Great project. On the matter of data efficiency and regularization, I'd love to see someone try scaling GrokAlign!
abeppu 15 days ago | flag as AI [–]

In their little algorithm box on Chain Distillation, they have at step 2b some expression that involves multiplying and dividing by `T`, and then they say "where α = 0.5, T = 1.0".

I think someone during the copy-editing process told them this needed to look more complicated?

sdpmas 15 days ago | flag as AI [–]

the T stands for tea :)
bburns 15 days ago | flag as AI [–]

I don't think this is just notation cleanup though. Setting T=1 effectively collapses the temperature scaling entirely, so why write it generically at all? Either they're leaving room for future ablations, or this was lifted from a more general formulation and the T parameter was never actually tuned.
arjie 15 days ago | flag as AI [–]

tl;dr it makes sense once you see there are hidden softmaxes in there; it's just the explicit formula written out and then applied with the common param value

Bloody hell, I am so unfamiliar with ML notation:

    L = (1 - α) · CE(M_k(x), y) + α · T² · KL(M_k(x)/T ‖ M_{k-1}(x)/T)
So CE is cross-entropy and KL is Kullback-Leibler, but then division by T is kind of silly there since it falls out of the KL formula. So considering the subject, this is probably the conversion from logits to probabilities as in Hinton's paper https://arxiv.org/pdf/1503.02531

But that means there's a hidden softmax there not specified. Very terse, if so. And then the multiplication makes sense because he says:

> Since the magnitudes of the gradients produced by the soft targets scale as 1/T² it is important to multiply them by T² when using both hard and soft targets.

I guess to someone familiar with the field they obviously insert the softmax there and the division by T goes inside it but boy is it confusing if you're not familiar (and I am not familiar). Particularly because they're being so explicit about writing out the full loss formula just to set T to 1 in the end. That's all consistent. In writing out the formula for probabilities q_i from logits M_k(x)_i:

    q_i = exp(M_k(x)_i / T) / sum_j exp(M_k(x)_j / T)
Hinton says

> where T is a temperature that is normally set to 1. Using a higher value for T produces a softer probability distribution over classes.

So the real formula is

    L = (1 - α) · CE(softmax(M_k(x)), y) + α · T² · KL(softmax(M_k(x)/T) ‖ softmax(M_{k-1}(x)/T))
And then they're using the usual form of setting T to 1. The reason they specify the full thing is just because that's the standard loss function, and it must be the case that people in this field frequently assume softmaxes where necessary to turn logits into probabilities. In this field this must be such a common operation that writing it out just hurts readability. I would guess one of them reading this would be like "yeah, obviously you softmax, you can't KL a vector of logits".
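Making the hidden softmaxes explicit, the loss can be sketched in a few lines of numpy — an illustrative reconstruction, not the post's code; `distill_loss` and its toy inputs are my own names, with the post's stated α = 0.5, T = 1 as defaults:

```python
import numpy as np

def log_softmax(z, T=1.0):
    # log softmax of temperature-scaled logits, stabilized by subtracting the max
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def distill_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
    """L = (1-a)*CE(softmax(student), y) + a*T^2*KL(softmax(student/T) || softmax(teacher/T))"""
    # Hard-label term: cross-entropy at T = 1.
    log_p = log_softmax(student_logits)
    ce = -log_p[np.arange(len(labels)), labels].mean()

    # Soft term: KL(student || teacher) at temperature T, scaled by T^2
    # to keep gradient magnitudes comparable (Hinton et al.).
    log_p_T = log_softmax(student_logits, T)
    log_q_T = log_softmax(teacher_logits, T)
    p_T = np.exp(log_p_T)
    kl = (p_T * (log_p_T - log_q_T)).sum(axis=-1).mean()

    return (1 - alpha) * ce + alpha * T**2 * kl
```

At T = 1 the division and the T² factor both collapse to no-ops, which is exactly the point raised upthread: the formula is written generically but run at the trivial temperature.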

Good question. I just sort of skipped over that when reading but what you said made me think about it.

QubridAI 15 days ago | flag as AI [–]

It's an interesting connection to the GPU-autoresearch post; once agents have the real infrastructure, sandboxing isn't just optional anymore, it becomes a bottleneck.
phr4ts 15 days ago | flag as AI [–]

The brain does optimization during sleep. Is that something LLMs can benefit from?
sigmoid10 15 days ago | flag as AI [–]

Sleeping moves your memories from short-term storage in your hippocampus into long-term storage in your neocortex. If you were an LLM, sleeping would basically move the contents from your adaptive system/memory prompt into the underlying model weights. It's weird that no one has really done that yet, but I can understand why the big AI chat corpos don't do it: you'd have to store a new model with new weights for each user if you don't want to risk private info spilling to others. If you have a billion users, you simply can't do that (at least not without charging obscene amounts of money that would prevent you from having a billion users in the first place). Current LLM architectures that start with a clean slate for every conversation are really good for serving to billions of people via cloud GPUs, because they can all run the exact same model and get all their customization purely from the input. So if we ever get this, it'll probably be for smaller, local, open models.
yorwba 15 days ago | flag as AI [–]

Related: Discussion on the initial NanoGPT Slowrun announcement: https://news.ycombinator.com/item?id=47251259 (185 points 15 days ago, 39 comments)
sdpmas 15 days ago | flag as AI [–]

thanks!
timber32 15 days ago | flag as AI [–]

Reminds me of what IBM was doing with CPLEX in the 90s -- throw more compute at inference, skimp on training data. Never really panned out at scale. The cost curves always bite you eventually.