AIs are generally biased.
However, in machine learning, the word bias doesn't necessarily have a negative connotation: it refers to the prior knowledge you inject into the learner. Maybe you've already heard of regularization techniques (which are a form of bias). These are based on the idea that, if you restrict the set of functions the learner can pick from, the learner might find a function that generalises better to unseen examples, rather than one that just performs well on the training dataset. You can also check my other answers here and here for more details.
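To make the regularization idea concrete, here is a minimal sketch (the synthetic data, the polynomial degree, and the alpha value are illustrative assumptions, not anything from a specific product): an L2 penalty (ridge regression) shrinks the weights of a high-degree polynomial model, which biases the learner towards smoother functions and typically generalises better than the unregularized fit of the same model class.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Noisy samples of a sine wave: a small training set that is easy to overfit.
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 15)).reshape(-1, 1)
y_train = np.sin(2 * np.pi * x_train).ravel() + rng.normal(0, 0.2, 15)
x_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_test = np.sin(2 * np.pi * x_test).ravel()

# Unregularized degree-12 polynomial: free to fit the training noise.
unrestricted = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())
# Same model class with an L2 penalty: the "bias" that restricts the weights.
regularized = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=1e-3))

for name, model in [("no regularization", unrestricted), ("ridge (L2)", regularized)]:
    model.fit(x_train, y_train)
    test_mse = np.mean((model.predict(x_test) - y_test) ** 2)
    print(f"{name}: test MSE = {test_mse:.3f}")
```

On a run like this, the regularized model usually has the lower test error, even though both models have exactly the same capacity: the only difference is the prior preference (the bias) injected by the penalty.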
Now, even if you restrict the set of models the learner can choose from, the AI might still be biased and produce false or misleading statements, or statements that favour certain things over others, because, for example, the dataset contains misleading information, or doesn't contain enough information for the learner to pick the right function, or the model simply hallucinates and makes up facts. Here is a famous example.
Having said that, the companies that develop these generative AIs, like Microsoft and Google, might have principles that attempt to guide the development of safe AI, but AIs will most likely always be biased, and there's also censorship. We could argue that some censorship might be needed, but that also depends on the context. For example, Microsoft writes
> Our goal for Copilot is to be helpful to users. By leveraging best practices from other Microsoft generative AI products and services, we aim to limit Copilot from generating problematic content and increase the likelihood of a safe and positive user experience. While we have taken steps to mitigate risks, generative AI models like those behind Copilot are probabilistic and can make mistakes, meaning mitigations may occasionally fail to block harmful user prompts or AI-generated responses.
So, in conclusion, you should never blindly trust AIs, just as you probably shouldn't blindly trust an unknown or unreliable person.