Theoretically, if you had a very large corpus of only Minoan, you could train an LLM on it.
The LLM would be able to generate Minoan texts, either from scratch, or as completions of supplied starting texts.
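The key point is that generative training needs only the raw symbol stream, no translations or labels. As a toy illustration (a character-level bigram model over a made-up placeholder corpus, not a neural LLM, but trained and sampled the same way in principle):

```python
import random
from collections import defaultdict, Counter

# Placeholder stand-in for an undeciphered corpus: just raw symbol strings.
# No meanings, glosses, or translations are ever supplied.
corpus = ["ab ra ka da bra", "ka da bra ab ra", "da bra ka ab"]

# "Training": count which symbol follows which in the raw text.
counts = defaultdict(Counter)
for text in corpus:
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1

def complete(prefix, length=10, seed=0):
    """Continue a supplied starting text by sampling the learned statistics."""
    rng = random.Random(seed)
    out = list(prefix)
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # symbol never seen mid-text; nothing to sample
        symbols, weights = zip(*followers.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return "".join(out)

print(complete("ka"))
```

Every transition in the generated text is one observed in the corpus, so the output is locally plausible; a real transformer LLM does the same thing with far longer context and learned representations rather than counts.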
Its other abilities would depend heavily on the nature of the training material. It would only emulate conversation, write poetry, or perform logic or translation if the corpus contained a significant number of examples of those things. You would not be able to create a chatbot from it, because you would have no way to construct a valid "system prompt" or to perform fine-tuning: the approaches used to create ChatGPT and other AI assistants all require human understanding of the inputs and outputs.
If your imaginary training corpus included images or other media genuinely related to the text (i.e. not just added by a trainer without reference to it), then, provided you built an architecture to support that, it might be possible to create a multi-modal model that learned deeper meanings for some words (e.g. associating a Minoan noun for "cat" with images of cats). Of course, human researchers could do the same, and I expect experts in poorly understood ancient languages dream of finding illustrated teaching materials from which they could decipher the language.
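The statistical core of that word-to-image association can be shown without any neural network at all. This sketch uses invented placeholder tokens and image labels (nothing here is real Minoan) and scores word/label pairs by pointwise mutual information, which is one simple way co-occurrence can surface the "cat" association:

```python
import math
from collections import Counter

# Hypothetical paired data: each made-up caption was found alongside an
# image, represented here only by a label a vision model might assign.
pairs = [
    ("mi-ka to-sa", "cat"),
    ("mi-ka du-ru", "cat"),
    ("to-sa du-ru", "pot"),
    ("du-ru pa-na", "pot"),
]

word_counts = Counter()
label_counts = Counter()
joint = Counter()
for caption, label in pairs:
    label_counts[label] += 1
    for w in caption.split():
        word_counts[w] += 1
        joint[(w, label)] += 1

n = len(pairs)

def pmi(word, label):
    """Pointwise mutual information: log p(word, label) / (p(word) p(label))."""
    if joint[(word, label)] == 0:
        return float("-inf")
    return math.log(joint[(word, label)] * n / (word_counts[word] * label_counts[label]))

# "mi-ka" only ever co-occurs with cat images, so it scores highest there.
print(pmi("mi-ka", "cat"), pmi("du-ru", "cat"))
```

A real multi-modal model (CLIP-style contrastive training, say) learns a much richer version of this from pixels rather than labels, but the information it exploits is the same co-occurrence signal.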
LLMs "understand" language in a cyclic, self-referential manner: every symbol is defined only in terms of other symbols, all constrained within the training data.
Without some equivalent of the Rosetta Stone, your imaginary LLM might be able to converse in an ancient language, but it would be hard to claim it had "deciphered" it. Its output would be just as obscure as the source material to any human not already familiar with the language, and it would not even be possible to tell whether it was correct.
There is no internal deciphering layer inside an LLM. Although there may be logical sub-models within the larger model, they ultimately relate only to valid symbol manipulations, and there is currently no way to map these to real-world concepts. If something like that could be achieved in future, it might be possible to build a very rough map that compared one LLM to another and suggested parallels. That would not be deciphering as such, but it could be broadly helpful towards it, e.g. "this piece of text is likely to be poetry" or "that piece of text is some sort of record, like a tax audit" (things that human experts can already guess with some accuracy from the locations of finds and the layout of the writing).
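One concrete (and optimistic) version of "comparing one LLM to another" is aligning their embedding spaces. The sketch below uses synthetic embeddings and orthogonal Procrustes via SVD; note the cheat flagged in the comments, which is exactly what an undeciphered language denies you:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical word embeddings from two separately trained models.
# Optimistic assumption: model B's space is a rotation of model A's, plus noise.
X = rng.normal(size=(50, 8))                        # embeddings from model A
R_true = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # unknown orthogonal map
Y = X @ R_true + 0.01 * rng.normal(size=(50, 8))    # embeddings from model B

# Orthogonal Procrustes: the rotation W minimising ||X W - Y|| comes from
# the SVD of X^T Y.  NOTE the cheat: rows of X and Y are already paired
# word-for-word.  For a truly undeciphered language no such pairing exists,
# which is why this could only ever suggest rough parallels, not meanings.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

residual = np.linalg.norm(X @ W - Y)  # on the order of the injected noise
print(residual)
```

Unsupervised versions of this alignment exist for living language pairs, where results can at least be checked against bilingual speakers; for Minoan there would be no such check.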
Similar issues apply to deciphering animal communication through machine learning, although in that case observations of the environment and behaviour concurrent with the signals might help.