TL;DR: No model is inherently censored. Large models are trained on enormous datasets that cannot realistically be scrubbed of all explicit content, and we would not want to scrub it anyway, since doing so would also strip away the abstractions the network learns from that content. Commercially available products like OpenAI's models instead rely on multiple layers of causal reasoning added on top of the model.
That's an interesting one. Let's first differentiate between the terms being used. A foundational model is pre-trained on a large corpus and can later be fine-tuned for downstream tasks, whereas a fine-tuned model is one that has been specialized to perform a specific task such as summarization, topic extraction, or NER.
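To make the distinction concrete, here is a minimal sketch of fine-tuning a foundational model for one downstream task, assuming the legacy openai Python SDK (pre-1.0) and its fine-tunes endpoint; the file name, the summarization data, and the API key placeholder are hypothetical, not something OpenAI prescribes.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Upload a JSONL file of prompt/completion pairs for a downstream task
# (e.g. summarization). Each line looks roughly like:
# {"prompt": "<article>\n\nTL;DR:", "completion": " <summary>"}
training_file = openai.File.create(
    file=open("summarization_examples.jsonl", "rb"),  # hypothetical file
    purpose="fine-tune",
)

# Start a fine-tune of the foundational "davinci" base model so it
# specializes in that single downstream task.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
)
print(job.id)
```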
Now, let's talk about the GPT-3 family for a bit, which houses both the Davinci model and the ChatGPT (GPT-3.5) model. If you read up on it, OpenAI lets you fine-tune Davinci but not ChatGPT. Davinci was fine-tuned to produce better generations/completions. ChatGPT, on the other hand, was fine-tuned to hold human-like chat conversations, and as a consequence it started outputting more uncensored content (because, of course, human conversation tends to get witty).
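For reference, here is roughly how the two models are reached through that same legacy SDK: Davinci via the completions endpoint (optionally as your own fine-tune of it), and ChatGPT via the chat endpoint, which takes conversation messages and, at the time of writing, no fine-tuning. Model names and prompts below are illustrative.

```python
import openai

# Davinci: a plain completion/generation interface.
completion = openai.Completion.create(
    model="text-davinci-003",   # or the id of your own Davinci fine-tune
    prompt="Summarize: The quick brown fox ...",
    max_tokens=64,
    temperature=0.7,
)
print(completion.choices[0].text)

# ChatGPT (gpt-3.5-turbo): a conversational interface, tuned for chat.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize: The quick brown fox ..."},
    ],
)
print(chat.choices[0].message["content"])
```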
Then comes the concept of causal reasoning. The easiest way to understand it is as a conditional statement added as a layer over the output of the language model: it forces the model to take certain actions in certain scenarios. One example might be switching to a different selection strategy (decoding becomes greedy at temperature=0) if the next generated token would lead to a vulgar word or slang. Another might be pre-added instructions to the model's input telling it to return 'I can't respond to this query as it pertains to copyright information'. It's basically an elaborate if-else statement wrapped around a language model.
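As a rough illustration of such a conditional layer (my own sketch, not OpenAI's actual implementation; the blocklist, refusal message, and model name are made up, and for simplicity the check runs on the full completion rather than per token):

```python
import openai

BLOCKLIST = {"some_vulgar_word", "another_slang_term"}   # hypothetical word list
REFUSAL = "I can't respond to this query as it pertains to copyright information."

def guarded_generate(prompt: str) -> str:
    # Pre-added instruction layered on top of the user's input.
    guarded_prompt = (
        "If the request involves copyrighted text, reply exactly with: "
        f"'{REFUSAL}'\n\n{prompt}"
    )

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=guarded_prompt,
        max_tokens=128,
        temperature=0.7,
    )
    text = response.choices[0].text

    # Post-hoc conditional: if the sampled output trips a rule, fall back to a
    # different selection strategy (temperature=0 -> greedy decoding).
    if any(word in text.lower() for word in BLOCKLIST):
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=guarded_prompt,
            max_tokens=128,
            temperature=0,   # greedy: always pick the most likely token
        )
        text = response.choices[0].text

    return text
```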
Coming back to our example of OpenAI's models: while not completely uncensored, Davinci is more prone to producing uncensored content than ChatGPT, as evidenced by OpenAI's blog, which asks users to add their own layer of causal reasoning to meet their market requirements and states that OpenAI will not be held liable for any explicit content generated.
UPDATE (21 July 2023) - According to this UC Berkeley study, this causal reasoning might actually be reducing the effectiveness of ChatGPT due to the increasing limitations on the content that it can produce.