There are multiple factors that affect an embedding's performance, including what Davide mentions. Depending on your background, the following interpretation of Davide's answer might be easier for you to understand:
Early in the anneal, the Ising (classical/user-input/final) Hamiltonian has no effect, which means that two spins in a chain are not compelled to agree.
As the anneal progresses, the chain couplings become increasingly influential, and so do the other Ising terms (the h's and J's).
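To make that concrete, here is the annealing Hamiltonian in the usual D-Wave notation (not part of the original point, just for reference):

$$H(s) = -\frac{A(s)}{2}\sum_i \sigma_x^{(i)} + \frac{B(s)}{2}\left(\sum_i h_i\,\sigma_z^{(i)} + \sum_{i<j} J_{ij}\,\sigma_z^{(i)}\sigma_z^{(j)}\right)$$

where $s$ runs from 0 to 1 over the anneal, $A(0) \gg B(0)$ and $A(1) \ll B(1)$. Chain couplings are just large negative $J_{ij}$ entries in the second sum, so they (like your problem's own h's and J's) only start to matter once $B(s)$ becomes appreciable.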
You want the chain terms to become important just before the other Ising terms become important. This is a fine balance that can be influenced by the strength of the chains (relative to the other terms), and by anneal offsets, which can make some qubits anneal slightly in advance of others. This D-Wave whitepaper shows an example where anneal offsets can be effective.
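If it helps, here is a rough Ocean SDK sketch of those two knobs, chain strength and anneal offsets. It assumes a configured D-Wave account and a QPU that reports anneal offset ranges; the problem (h, J), the chain strength of 2.0, the "long chain" cutoff, and the 0.05 offset are all placeholders, not recommendations, and whether advancing or delaying a heavy chain helps is problem-dependent (see the whitepaper).

```python
from dwave.system import DWaveSampler, FixedEmbeddingComposite
from minorminer import find_embedding

# Placeholder Ising problem: a frustrated triangle.
h = {0: 0.5, 1: -0.5, 2: 0.25}
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): 1.0}

qpu = DWaveSampler()                      # assumes a configured dwave.conf
embedding = find_embedding(list(J), qpu.edgelist)
sampler = FixedEmbeddingComposite(qpu, embedding)

# Knob 1: chain strength, set relative to the largest |h| / |J| in the problem.
params = {"num_reads": 1000, "chain_strength": 2.0}   # placeholder value

# Knob 2: anneal offsets -- shift the anneal of qubits in "heavy" (long) chains.
# Positive offsets advance a qubit's anneal, negative ones delay it; values
# must stay inside the per-qubit ranges reported by the solver.
ranges = qpu.properties.get("anneal_offset_ranges")
if ranges is not None:
    offsets = [0.0] * qpu.properties["num_qubits"]
    offset_value = 0.05                   # placeholder; sign and size are yours to tune
    for chain in embedding.values():
        if len(chain) > 2:                # arbitrary "long chain" cutoff
            for q in chain:
                offsets[q] = max(ranges[q][0], min(offset_value, ranges[q][1]))
    params["anneal_offsets"] = offsets

sampleset = sampler.sample_ising(h, J, **params)
print(sampleset.first)
```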
More generally, I would start "debugging" an embedding by answering, in order, the following questions (a diagnostic sketch covering all three follows the list):
1. Are one or more chains frequently (or almost always) broken in the output? If a chain frequently comes out partly up and partly down, it suggests that one or all of the couplings in that chain need to be stronger.
2. If you find multiple embeddings of the same problem (heuristically, using sapiFindEmbedding or similar), do they all "fail" in the same way?
3. Does your embedding have some chains that are much larger than others? If so, the larger chains will have slower dynamics (they will be "heavier") and perhaps should be adjusted with anneal offsets as described in the link.

Note that if your output (without postprocessing) doesn't have any broken chains, it may suggest that your chains are too strong. Because the whole problem is rescaled to fit the hardware's coupling range, very strong chains make the non-chain terms very small, so the problem becomes sensitive to noise and has a high effective temperature, which is bad. The final sketch at the end of this answer scans a range of chain strengths to look for that balance.
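Here is a hedged sketch of how I would check those three questions with the current Ocean SDK (minorminer.find_embedding is the modern counterpart of sapiFindEmbedding). The problem (h, J), the chain strength of 2.0, and the three-seed loop are placeholders.

```python
from collections import Counter

from dwave.embedding import chain_break_frequency, embed_ising
from dwave.system import DWaveSampler
from minorminer import find_embedding

# Placeholder Ising problem.
h = {0: 0.5, 1: -0.5, 2: 0.25}
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): 1.0}

qpu = DWaveSampler()

for seed in range(3):                     # question 2: try several embeddings
    embedding = find_embedding(list(J), qpu.edgelist, random_seed=seed)

    # Question 3: how uneven are the chain lengths in this embedding?
    lengths = [len(chain) for chain in embedding.values()]
    print(f"embedding {seed}: chain lengths = {sorted(Counter(lengths).items())}")

    # Embed by hand so we can inspect the raw per-qubit samples.
    th, tJ = embed_ising(h, J, embedding, qpu.adjacency, chain_strength=2.0)
    raw = qpu.sample_ising(th, tJ, num_reads=1000)

    # Question 1: how often does each chain come out broken?
    print(f"embedding {seed}: break frequency = {chain_break_frequency(raw, embedding)}")
```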
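And a second sketch for that last point: sweep the chain strength and watch both the broken-chain fraction and the best energy found. A common rule of thumb (my phrasing, not from the whitepaper) is to use the weakest chains that still break only rarely. The problem and the sweep values below are placeholders.

```python
import numpy as np

from dwave.system import DWaveSampler, FixedEmbeddingComposite
from minorminer import find_embedding

# Same placeholder problem as above.
h = {0: 0.5, 1: -0.5, 2: 0.25}
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): 1.0}

qpu = DWaveSampler()
embedding = find_embedding(list(J), qpu.edgelist)
sampler = FixedEmbeddingComposite(qpu, embedding)

for cs in (0.5, 1.0, 2.0, 4.0, 8.0):      # placeholder sweep of chain strengths
    ss = sampler.sample_ising(h, J, num_reads=500, chain_strength=cs)
    broken = np.mean(ss.record.chain_break_fraction)   # field added by the composite
    print(f"chain_strength={cs}: best energy = {ss.first.energy:.3f}, "
          f"mean broken-chain fraction = {broken:.3f}")
```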
Good luck!