In Newtonian mechanics for $N$ particles, so long as you avoid "pathological" situations like Norton's Dome and don't introduce non-determinism by hand through a stochastic force, as in the Langevin equation, the time evolution of the system is fully determined once you know the positions and momenta of all of the particles at one instant in time.
The same is generally true for fluid mechanics, as described by the Navier-Stokes equations, and for relativistic field theories like electromagnetism and general relativity (GR). There are scenarios in GR, and possibly in the Navier-Stokes equations, where singularities form that make the equations ill defined. At least in GR, this isn't a big problem in practice; in numerical approaches to GR evolving colliding binary systems, for example, one can use a so-called "puncture method" where the singularity is dealt with analytically and matched onto a numerical solution. However, in both GR and Navier-Stokes, the appearance of a singularity generally indicates that the equations are being pushed beyond their regime of validity, and understanding what happens at the singularity requires a more complete theory. Perhaps, in this more complete theory, there is a way to time evolve the system deterministically.
So, at least within classical (non-quantum) physics, assuming you perfectly know the initial state and don't introduce any pathologies, generally the equations are deterministic, modulo some edge cases that at least don't cause issues in practice when we need to do calculations to compare to experiment.
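To make this concrete, here is a minimal sketch of what classical determinism means operationally (the two-particle spring force and all parameters are arbitrary choices for illustration, not any particular physical system): fixing the exact positions and momenta at one instant fixes the entire trajectory, so running the same evolution twice gives identical results.

```python
# Sketch: deterministic evolution of a 1D two-particle system coupled by
# a spring-like force. Given the exact initial state, the trajectory is
# reproduced identically on every run.

def evolve(x, p, m=1.0, k=1.0, dt=0.01, steps=1000):
    """Symplectic (semi-implicit) Euler for two particles and a spring."""
    x = list(x)
    p = list(p)
    for _ in range(steps):
        f = k * (x[1] - x[0])      # force on particle 0; -f on particle 1
        p[0] += f * dt
        p[1] += -f * dt
        x[0] += p[0] / m * dt
        x[1] += p[1] / m * dt
    return x, p

# Identical initial data -> identical final state, bit for bit.
state_a = evolve([0.0, 1.0], [0.0, 0.0])
state_b = evolve([0.0, 1.0], [0.0, 0.0])
assert state_a == state_b
```

Any uncertainty in the output can only come from uncertainty fed into the input, which is exactly the loophole discussed below.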
Here I will discuss two ways in which non-determinism (or at least, "non-determinism for all practical purposes") can enter into physics:
- Quantum mechanics. The outcome of an experiment on a state that is a superposition of different eigenstates of the observable you want to measure cannot be predicted in advance. However, quantum mechanics does satisfy a kind of "generalized determinism": if you know the initial state perfectly, you can at least predict the probabilities of any future observation.
- Uncertainty in the initial state. In my opinion, the most relevant source of "non-determinism" in the laws of physics, and the one that leads to complexity and behavior in the world around us that we can't easily predict, is uncertainty in specifying initial conditions. You can try to wave this away as "mathematically trivial," but it does seem to be an important feature of reality as experienced by human beings. Even in a purely classical model, there are simply too many degrees of freedom to specify all their states or store them in a computer. And for any given particle, there is always some measurement error in its position and momentum. Because many real-world systems are chaotic, any small uncertainty in the initial conditions can quickly balloon into an inability to make predictions. Michael Berry estimated that, even with a supercomputer, it is impossible to predict the state of motion of a billiard ball after several collisions, because its motion becomes sensitive to increasingly minute details of the initial state. Here I quote from an article on anecdote.com, which includes a quote from Taleb's The Black Swan discussing Berry's paper:
> The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table (modestly, Berry’s computations use a weight of less than 150 pounds). And to compute the fifty-sixth impact, every single elementary particle in the universe needs to be present in your assumptions! An electron at the edge of the universe, separated from us by 10 billion light-years, must figure in the calculations, since it exerts a meaningful effect on the outcome.
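The runaway sensitivity Berry describes can be demonstrated with far less machinery than a billiard table. Below is a sketch using the logistic map at parameter 4, a standard chaotic toy system (chosen purely for illustration, not because it models billiards): two initial conditions differing by one part in $10^{12}$ become completely unrelated within a few dozen iterations.

```python
# Sketch: sensitive dependence on initial conditions in the chaotic
# logistic map x -> 4 x (1 - x). Small errors roughly double each
# iteration, so a difference of 1e-12 reaches order 1 after ~40 steps.

def trajectory(x, n):
    """Iterate the logistic map n times, recording every state."""
    traj = [x]
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        traj.append(x)
    return traj

ta = trajectory(0.3, 80)
tb = trajectory(0.3 + 1e-12, 80)   # nearly identical initial condition
seps = [abs(a - b) for a, b in zip(ta, tb)]

# The separation starts at ~1e-12 and grows until the two "predictions"
# are as different as two unrelated states.
```

The same exponential error growth is what makes the billiard problem hopeless: each collision amplifies the uncertainty, so ever more remote details of the initial state (and of the environment) come to matter.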
I realize you're willing to assume that the initial state can be perfectly specified, so you can consider this answer a frame challenge: that assumption is incredibly strong in practice, and it hides much of the interesting complexity of the real world. In some sense, it is not a "stable" assumption, since a small amount of uncertainty in the initial state tends to explode into a much larger amount in realistic systems, rather than staying "close" to the idealized solution you would have found with perfect initial conditions.
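Returning briefly to the quantum bullet above, the "generalized determinism" there can also be made concrete: a perfectly known state does not determine individual outcomes, but it exactly determines their probabilities via the Born rule. A toy sketch (the two-state superposition and its amplitudes are arbitrary choices for illustration):

```python
import random

# Sketch of the Born rule: the state's amplitudes fix the measurement
# probabilities exactly, even though each individual outcome is random.
amps = [3 / 5, 4j / 5]                 # arbitrary normalized superposition
probs = [abs(a) ** 2 for a in amps]    # Born rule: P_i = |amplitude_i|^2
assert abs(sum(probs) - 1.0) < 1e-9    # normalization is preserved

def measure():
    """One simulated measurement; the outcome cannot be predicted."""
    return random.choices([0, 1], weights=probs)[0]

# Outcome frequencies converge to the predicted probabilities, which is
# the weaker, probabilistic form of determinism quantum mechanics keeps.
counts = [0, 0]
for _ in range(100_000):
    counts[measure()] += 1
```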