Given that language is a powerful tool for learning (indeed, we use language to teach much, if not most, of what humans are taught), how would we decide whether a neural net we trained is "just" an LLM or is actually AGI? I imagine the training would look similar in both cases: dump the entirety of human writing in, stir a little, and hope for the best.
Now, I don't believe current LLMs are AGI, but I do wonder how one would tell if AGI suddenly, perhaps even accidentally, emerged from attempts to create LLMs. Or the other way around: someone claims AGI, but it looks like just another LLM - how could that claim be disproven, if the path to both could look so similar?