Marginalium
A note in the margins
December 13, 2024
My commentary on something from elsewhere on the web.
AI model capabilities don't just scale with size but with efficiency. I don't pretend to understand much of this, but I did see:
In other words, in around three months, it is possible to achieve performance comparable to current state-of-the-art LLMs using a model with half the parameter size.
Assuming anything like that is true, we should see much better performance from smaller models, maybe even ones you can run on your own PC. It'd be nice to see the back of OpenAI et al. See also the paper.
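If the claim holds, the arithmetic compounds quickly. A quick sketch of the implication, assuming (purely for illustration) a steady halving of required parameters every three months and a hypothetical 70B starting point:

```python
# Hypothetical illustration: if matching today's state of the art needs
# half the parameters every ~3 months, how small could a comparable
# model be after a given number of months? (The trend may not hold.)
def equivalent_params(initial_params_b: float, months: float,
                      halving_months: float = 3.0) -> float:
    """Parameters (in billions) needed to match the same performance
    after `months` have passed, under the halving assumption."""
    return initial_params_b * 0.5 ** (months / halving_months)

# Starting from a hypothetical 70B model:
for m in (3, 6, 12):
    print(f"{m} months: ~{equivalent_params(70, m):.2f}B params")
```

After a year that's four halvings, so roughly a sixteenth of the parameters, which is the kind of shrinkage that would put frontier-class models within reach of consumer hardware.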