Nice idea. Essentially, adding differentiability to the best-of-N choice lets them encourage models to add some diversity “naturally”. The Gemma 2B results indicate it’s probably worth trying this on larger models.
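For concreteness, here's a minimal sketch of the general trick as I read it (not the paper's exact objective; the function name and temperature are mine): hard best-of-N picks the argmax-reward sample, which has no gradient, whereas a softmax over the N rewards gives a smooth stand-in you can train against.

    import torch

    def soft_best_of_n_loss(rewards, logprobs, tau=0.1):
        # rewards:  (N,) reward-model scores for N sampled completions
        # logprobs: (N,) policy log-probs of those completions
        # Hard BoN would take rewards[rewards.argmax()], which has no gradient
        # w.r.t. the policy. A softmax over rewards is a smooth
        # "which sample wins" distribution instead; tau -> 0 recovers argmax.
        weights = torch.softmax(rewards / tau, dim=0)
        # REINFORCE-style surrogate: push probability mass toward the samples
        # that carry most of the soft-BoN weight.
        return -(weights.detach() * logprobs).sum()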
That said, I’m unclear how much this helps in practice; we don’t usually sift through, say, 32 responses from our 2B-parameter models. I guess if you instrumented parallel reasoning processes in batch this might be helpful. Perhaps that’s what o1-pro is doing in the background, actually.
Anyway, this one seems to me like it might make its way onto the “good idea” list when RL is available in the training pipeline.
karmasimida 15 hours ago [-]
Isn't the BoN RL formulation similar to DeepSeek's GRPO algorithm? The latter seems to have already captured this implicitly?
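For comparison, GRPO's group-relative advantage (per the DeepSeekMath paper) boils down to normalizing each sample's reward against its group — roughly the sketch below, with per-token and KL details omitted:

    import numpy as np

    def grpo_advantages(rewards, eps=1e-8):
        # rewards: (G,) scores for G completions of the same prompt
        r = np.asarray(rewards, dtype=np.float64)
        return (r - r.mean()) / (r.std() + eps)

    # grpo_advantages([1.0, 0.0, 0.0, 1.0]) -> [ 1., -1., -1.,  1.] (approx.)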
Johnyhar 14 hours ago [-]
Wouldn't RL training, with the goal of aligning the LLM with the reward function R(x, y), result in the outputs of the trained LLM maximizing said reward function? How different are the rewards of the N outputs in BoN sampling, to justify its cost?
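One way to check empirically: score N completions of the same prompt with R(x, y) and compare the best score to the average one. A trivial helper (the example numbers are made up):

    def bon_gain(rewards):
        # How much best-of-N buys over an average single sample,
        # given reward-model scores for N completions of one prompt.
        return max(rewards) - sum(rewards) / len(rewards)

    print(bon_gain([0.81, 0.83, 0.82, 0.80]))  # ~0.015: scores cluster, BoN buys little
    print(bon_gain([0.95, 0.40, 0.10, 0.70]))  # ~0.41: diverse scores, BoN pays off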
padolsey 13 hours ago [-]
I wish they had some example completions in the paper and not just eval results. It would be really useful to see if there are any emergent linguistic tilts to the newly diverse responses...
justanotheratom 20 hours ago [-]
Is Best-of-N sampling standard practice in inference these days? It sounds expensive on the face of it. I am surprised because I thought the trend was towards cheaper inference.
diwank 20 hours ago [-]
For reasoning models, this would actually improve exploration efficiency and hence possibly allow higher performance for the same compute budget. As in, if you want to sample multiple rollouts for the same prompt, it's more efficient if the model is able to produce diverse thought directions and consider them to find the best response, as opposed to going down similar trajectories and wasting compute.
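A cheap way to see the "similar trajectories" problem is to measure overlap across the rollouts for one prompt; if pairwise similarity is high, most of the parallel compute is re-deriving the same chain. A rough sketch using Jaccard similarity over word trigrams (purely illustrative):

    from itertools import combinations

    def trigrams(text):
        toks = text.split()
        return {tuple(toks[i:i + 3]) for i in range(len(toks) - 2)}

    def mean_pairwise_overlap(rollouts):
        # Average Jaccard similarity of word trigrams across sampled rollouts.
        # High values mean the samples mostly retread the same reasoning.
        sims = []
        for a, b in combinations(rollouts, 2):
            ta, tb = trigrams(a), trigrams(b)
            if ta or tb:
                sims.append(len(ta & tb) / len(ta | tb))
        return sum(sims) / len(sims) if sims else 0.0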
Cerebras has used optillm for optimising inference with techniques like CePO and LongCePO.
peepeepoopoo114 19 hours ago [-]
Almost all of the efficiency gains have come from shedding bit precision, but the problem is that AI labs are now running out of bits to shed. The move to reduced precision inference has been masking the insane unsustainability of compute scaling as a model improvement paradigm.
nullc 3 hours ago [-]
Is there really a limit on bits to shed? I suspect not.
Take N gates, normalize them, represent them as points on the surface of a hypersphere. Quantize the hypersphere as coarsely as you need to get the precision you want. Want less precision but your quantization is getting too coarse? Increase N.
Fast algebraic codes exist to convert positions on hyperspheric-ish surfaces to indexes and vice versa.
Perhaps spherical VQ isn't ideal-- though I suspect it is, since groups of weights often act as rotations naturally-- but some other geometry should be good if not.
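A toy version of that kind of spherical scheme, just to make it concrete (nothing production-grade; a real scheme would index the sphere with an algebraic code rather than this per-coordinate grid): keep one scale per group and quantize only the direction on the unit hypersphere.

    import numpy as np

    def quantize_group(w, levels=4):
        # Toy spherical quantization of one group of weights.
        scale = np.linalg.norm(w)
        if scale == 0:
            return np.zeros_like(w)
        direction = w / scale                     # point on the unit sphere
        grid = np.linspace(-1.0, 1.0, levels)     # coarse per-coordinate grid
        q = grid[np.abs(direction[:, None] - grid[None, :]).argmin(axis=1)]
        q /= np.linalg.norm(q) + 1e-12            # snap back onto the sphere
        return scale * q

    # e.g. quantize a tensor in groups of 8 weights:
    w = np.random.randn(1_000_000).reshape(-1, 8)
    wq = np.apply_along_axis(quantize_group, 1, w)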