Lattice surgery on surface codes, and recent generalisations to QLDPC codes, perform computation via sequences of joint logical measurements.
Considerable effort has gone into reducing the space-time overhead of such logical measurements, and results are typically stated in these terms, e.g. the time cost of measuring a weight-$d$ logical Pauli operator is, say, $O(d)$ rounds.
Joint measurements may then be compiled into Clifford gates; e.g. the standard CNOT implementation uses an ancilla together with $M_{ZZ}$ and $M_{XX}$ measurements, plus Pauli fix-ups conditioned on the measurement outcomes.
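For concreteness, here is the measurement-based CNOT as a small statevector sketch (plain numpy on bare qubits, not an encoded implementation; the function names, qubit ordering and the particular correction rule $Z_C$ if $b=1$, $X_T$ if $a \oplus c = 1$ are my own conventions):

```python
import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def measure_pauli(state, pauli):
    """Projective measurement of a +/-1 Pauli observable.
    Returns (outcome bit, normalised post-measurement state)."""
    proj = (np.eye(len(state)) + pauli) / 2            # projector onto +1 eigenspace
    p_plus = np.real(np.vdot(state, proj @ state))
    if rng.random() < p_plus:
        return 0, proj @ state / np.sqrt(p_plus)
    return 1, (state - proj @ state) / np.sqrt(1 - p_plus)

def measurement_based_cnot(psi_ct):
    """CNOT on (control, target) via joint measurements; qubit order (C, A, T)."""
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ancilla prepared in |+>
    state = np.einsum('ct,a->cat', psi_ct.reshape(2, 2), plus).reshape(8)
    a, state = measure_pauli(state, kron(Z, Z, I2))    # M_ZZ on (C, A)
    b, state = measure_pauli(state, kron(I2, X, X))    # M_XX on (A, T)
    c, state = measure_pauli(state, kron(I2, Z, I2))   # read out ancilla in Z basis
    if b:                                              # Pauli fix-ups from outcomes
        state = kron(Z, I2, I2) @ state
    if a ^ c:
        state = kron(I2, I2, X) @ state
    return state.reshape(2, 2, 2)[:, c, :].reshape(4)  # discard measured ancilla

CNOT = np.eye(4)[:, [0, 1, 3, 2]].astype(complex)      # |c,t> -> |c, t XOR c>

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
out = measurement_based_cnot(psi)
fidelity = abs(np.vdot(CNOT @ psi, out))
print(round(fidelity, 6))  # ~1.0: output matches CNOT up to global phase
```

The Pauli corrections never need to be applied physically in practice; they can be tracked in a classical Pauli frame, which is part of why the interesting cost is the number and weight of the joint measurements themselves.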
But how does one compile an average- or worst-case Clifford operator in this framework? In principle we can simply combine gates such as CNOT, H and S using the best known compilation methods for unencoded operators, but does this give favourable time scaling? Are there known bounds for compiling Clifford operators from logical measurements of weight $\leq W$, say?
Any references would be appreciated.