Stabilizer simulation can be done in $O(gq + rq^2)$ time, where $g$ is the total number of operations, $q$ is the number of qubits, and $r$ is the number of measurements with random results. But that's just for the first sample: you can generate each additional sample in $O(g)$ time by using the first sample as a template. Getting more samples is way, way cheaper.
The hard part of stabilizer simulation is figuring out the relationships between the measurements: which measurements have to be 0, which measurements have to agree, and so on. But if you already have a sample, those relationships are all implicit in the sample, and you don't need to figure them out again. So the cost drops enormously.
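The standard way to exploit this is Pauli frame simulation: instead of tracking the full state, each new shot only tracks how it differs from the reference run by X/Z flips, and XORs those flips against the reference sample at each measurement. Here's a minimal pure-Python sketch for a toy circuit format of `("H", [q])`, `("CNOT", [c, t])`, and `("M", [q])` instructions (these names and the list-of-tuples circuit representation are illustrative, not any particular library's API):

```python
import random

def frame_sample(circuit, num_qubits, reference, rng):
    """One cheap O(g) sample, given a reference sample produced by a
    full stabilizer simulation of the same noiseless Clifford circuit."""
    # x[q] / z[q]: does this shot's Pauli frame differ from the
    # reference run by an X / Z flip on qubit q?
    x = [0] * num_qubits
    # Every qubit starts in a reset state; a 50/50 Z flip there seeds
    # the randomness that later gates can turn into measurement flips.
    z = [rng.randrange(2) for _ in range(num_qubits)]
    result = []
    for gate, qubits in circuit:
        if gate == "H":
            (q,) = qubits
            x[q], z[q] = z[q], x[q]  # H exchanges X and Z
        elif gate == "CNOT":
            c, t = qubits
            x[t] ^= x[c]  # X on control propagates to target
            z[c] ^= z[t]  # Z on target propagates to control
        elif gate == "M":
            (q,) = qubits
            # An X component of the frame flips the Z-basis result
            # relative to the reference sample.
            result.append(reference[len(result)] ^ x[q])
            # Re-randomize the part of the frame that anticommutes
            # with the measurement, covering both collapse outcomes.
            z[q] ^= rng.randrange(2)
    return result

# Bell pair: measurements are individually random but must agree.
bell = [("H", [0]), ("CNOT", [0, 1]), ("M", [0]), ("M", [1])]
rng = random.Random(1)
shots = [tuple(frame_sample(bell, 2, [0, 0], rng)) for _ in range(200)]
```

Each shot comes out as either `(0, 0)` or `(1, 1)`, never mismatched: the "these two measurements must agree" relationship was never computed explicitly, it just rode along in the XOR against the reference.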
In the context of converting between measurement-based and non-measurement-based computations, this means you don't necessarily need some high-level understanding of the circuits. I don't know exactly what information is required, but you probably don't need to be told "this is a surface code" and write specialized surface code conversion methods. You probably just need something simpler and more universal, like an example of a sample, or a hint about the location of the logical observable. Almost certainly, any kind of real-time system will require these hints to be included, so that any necessary conversions can be performed and verified efficiently.