I believe we'll eventually have AGI, and it'll run on a PC at home, even on a CPU if you have enough RAM and are willing to wait. Training takes massive amounts of compute, but that work can be distributed. I also believe the power required to run LLMs can be reduced significantly with alternative chip architectures.
Even if the bubble pops and no new data centers get filled with masses of GPUs, I still think we'll eventually have AGI. Enough people have enough of a clue about how to build it, not to mention the publicly available training data and open models, to make it a near-certain outcome.