An extended metaphor posted by /u/500_Shames on 2023-03-15 about the challenges of competing with OpenAI.
Elsewhere in this thread, commenters referenced The Bitter Lesson, which touches on similar topics.
/u/redlow0992:
Panic inside NLP orgs of big tech companies? What about the panic at NLP departments in universities? I have watched friends who poured years of work into their PhDs go into despair after ChatGPT and now GPT-4. Quite literally, the majority of research topics in NLP are slowly becoming obsolete before our eyes.
/u/ChatumTannin:
Could someone knowledgeable explain this to me? Why isn't this an exciting new basis for further research rather than a dead end?
/u/500_Shames:
Because if you don’t have access to the same resources that OpenAI has, you can’t compete.
The best metaphor I can come up with is that we've all spent years practicing and refining the perfect combat art. "New state-of-the-art punching performance achieved with this slight modification to our stance. By planting the foot very carefully and turning while striking, we can break 8 boards rather than just 7, the limit of the previous gold standard." Quickly we graduated to swords, so everyone had to get good at smelting and blacksmithing at home. Still accessible, but now a lot of people had to redirect their research from biomechanics to metallurgy.
Anyone with a GPU or two could iterate on some aspect of the status quo at home, trying to find minor modifications or make a breakthrough. Dropout is a really cool, groundbreaking approach to addressing overfitting that anyone could have come up with, applied, and published a paper on if they had the idea and the skill to implement it on consumer hardware.
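To make the "anyone could implement it" point concrete: the mechanism really is just a few lines. Here is a minimal sketch of the inverted-dropout variant in plain NumPy; the function name and the default drop probability are illustrative, not from the original paper.

```python
import numpy as np

def dropout(x, p_drop=0.5, training=True):
    """Inverted dropout: zero each activation with probability p_drop
    during training, scaling the survivors so the expected value is unchanged."""
    if not training or p_drop == 0.0:
        return x  # at inference time the layer is a no-op
    mask = np.random.rand(*x.shape) >= p_drop  # True = keep this unit
    return x * mask / (1.0 - p_drop)

# Example: apply it to a batch of hidden activations.
h = np.random.randn(4, 8)
h_train = dropout(h, training=True)    # roughly half the units zeroed out
h_eval = dropout(h, training=False)    # identity at evaluation time
```

That accessibility is the whole point: the idea was the hard part, not the compute.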
Then we started scaling. Scaling hard. Think of this as introducing guns, vehicles, and mass production to the equation. Again, you can try to make iterative improvements, but now you need much bigger capital investments to make this happen. Several years ago, trying to push the limits in NLP often meant having access to a supercluster at a university. Still doable, but the groundbreaking katana design you were working on, one 5% sharper than the previous gold standard, is sorta irrelevant now that we have armor-piercing rounds that get the job done through brute force. Now you need to figure out how to push the envelope once again.
Last week, we were working on very nuanced challenges in armor penetration. Why does the bullet go through these materials but not those? Even if we couldn't build a new gun altogether, we could still push for iterative improvements. If you had worked on the biomechanics of punching, then the biomechanics of swinging a sword, you could still do proper firing-stance research.
Yesterday, they revealed they had achieved nuclear fission, and GPT-4 is the atom bomb. All of the problems we were working on were rendered irrelevant by the sheer size and power of GPT-4. This is exciting as a giant leap forward, but concerning in that it makes going any other direction far harder. No one cares about work on armor-piercing bullets when the state of the art is vaporizing a city block.

We worry that internal inefficiencies don't matter if you have enough data and computing power to make the model big and strong enough to compensate. Now if we want to "iterate" on this new gold standard, we have to ask OpenAI nicely to use their tool. If we want to try anything new, it will be with the knowledge that there's no way we will come close to the performance of GPT-4, not because our approach is wrong, but because we lack the same resources. NLP journals will likely be "The Journal of GPT-4" for the next few years.
I’m being hyperbolic here, but I hope the concept I’m trying to explain makes sense.