I was reading about corporations and trusts and came to wonder how one could use existing legal tools to allow a robot to be as "autonomous" as possible.
I imagine a foundation or non-profit owned by an irrevocable trust that owns an AI and the associated data. If the AI were vested with management of most key parts of the entity's operations, and the board of directors were docile, could the AI itself buy property, get contracts to "work", sell things, etc.? If so, under what conditions? Would it be comparable to the autonomy of a person under guardianship (theoretically and legally I know it's not the same thing, but in practice)?
I have found some information on other stacks, but nothing that constitutes a complete answer to my queries.
Thank you for your help!
EDIT: I know that an AI is not a legal entity. By saying that the AI "buys, sells, etc." I didn't mean the AI as a legal entity; I meant the AI as the operating principle of a legal entity, like a non-profit. My question is about autonomy in practice. This is a very speculative thought exercise. I guess the real question is: what imaginative detours could be taken to get around the legal impossibility of an AI being a legal entity?
I thought a trust might be a good place to start, since in Canada there are data trusts and trusts that protect forests in perpetuity. So I thought, why not an AI?