Most of the groundbreaking results in Machine Learning that we have seen lately come from enormous neural networks containing hundreds of millions of parameters. Training such a model is not only time-consuming but also carries a significant environmental cost. Moreover, training ChatGPT, for example, is estimated to have cost around one million dollars.
These issues make it impossible for an individual or even a smaller company to conduct research on such models, slowing the overall pace of technological advancement below what it could be.
Therefore, at TechnoLynx, we believe that lowering the barrier to entry for these technologies is of utmost importance, and optimising the training process is part of that.
Luckily, research is being done on this topic, and we welcome LiGO, a new method for training by model growth: instead of starting from scratch, a larger network is initialised from a smaller, already-trained one. LiGO is shown to outperform every previous model-growth technique, as well as training from scratch. As a bonus, surprisingly, networks trained using LiGO often achieve better results than those trained with traditional techniques.
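To give a feel for the idea, here is a minimal sketch of the core ingredient of model growth as LiGO frames it: the weights of the larger network are produced from the smaller network's weights by a learned linear map. The dimensions and the random initialisation below are illustrative assumptions, not the paper's actual setup; in LiGO the growth operators would themselves be trained during a short warm-up phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of a small, already-trained layer: maps 4 inputs to 3 outputs.
d_in, d_out = 4, 3
W_small = rng.normal(size=(d_out, d_in))

# Target dimensions of the grown (larger) layer.
D_in, D_out = 8, 6

# Learnable linear growth operators (randomly initialised here purely
# for illustration; LiGO learns them rather than fixing them by hand).
A = rng.normal(size=(D_out, d_out))  # expands the output dimension
B = rng.normal(size=(D_in, d_in))    # expands the input dimension

# The large layer's weights are a linear function of the small ones:
# W_large = A @ W_small @ B^T, a factorised linear growth map.
W_large = A @ W_small @ B.T

print(W_large.shape)  # (6, 8): the grown layer's weight matrix
```

Because the large weights depend linearly on the small ones through a small number of growth parameters, the expensive part of training starts from an informed initialisation instead of random noise.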
Credits: MIT News