In many test benches I see the following pattern for clock generation:
process
begin
    clk <= '0';
    wait for 10 ns;
    clk <= '1';
    wait for 10 ns;
end process;
In other cases I see:
clk <= not clk after 10 ns;
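One practical detail with the concurrent form: clk needs a defined initial value in its declaration, because for std_logic the expression not 'U' is still 'U', so an uninitialized clock would never start toggling. A minimal self-contained sketch (the entity and architecture names are just illustrative):

library ieee;
use ieee.std_logic_1164.all;

entity tb_clk_sketch is
end entity;

architecture sim of tb_clk_sketch is
  -- the ":= '0'" matters: with the default 'U', "not clk" stays 'U' forever
  signal clk : std_logic := '0';
begin
  clk <= not clk after 10 ns;  -- 20 ns period, same as the process version above
end architecture;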
The latter is said to be better because the new clock value is scheduled before any process executes, so signals that change synchronously with the clk edge are handled properly. The following sections from the LRM may seem to support this theory:
Page 169, 12.6.4 "The simulation cycle": A simulation cycle consists of the following steps:
- b) Each active explicit signal in the model is updated. (Events may occur on signals as a result.)
These should be the signals with a new projected value, such as signals delayed by an after clause.
- d) For each process P, if P is currently sensitive to a signal S and if an event has occurred on S in this simulation cycle, then P resumes.
That would be most of the logic being simulated.
- e) Each nonpostponed process that has resumed in the current simulation cycle is executed until it suspends.
And only now are the processes that were suspended by a wait for (and whose timeout has expired) executed (see the sketch below).
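Independently of which clock generator is used, a stimulus process can sidestep the ordering question by waiting for the clock edge itself rather than counting time with wait for (as also suggested in the comments below). A minimal sketch of that approach; the entity, architecture, and signal names here are illustrative, not from the question:

library ieee;
use ieee.std_logic_1164.all;

entity tb_stimulus_sketch is
end entity;

architecture sim of tb_stimulus_sketch is
  signal clk  : std_logic := '0';
  signal data : std_logic := '0';
begin
  clk <= not clk after 10 ns;

  -- The assignment to data only takes effect one delta cycle after
  -- rising_edge(clk), so any clocked process sampling data at that same edge
  -- still sees the old value, just like a registered input in hardware.
  stimulus : process
  begin
    wait until rising_edge(clk);
    data <= '1';
    wait until rising_edge(clk);
    data <= '0';
    wait;  -- stop driving; the clock keeps running
  end process;
end architecture;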
TL;DR:
- Is the after method always superior to the wait for method?
- Does it help to prevent problems with synchronously set input signals?
Comments:
- "… wait fors) or signals which are triggered by the edge of the clk and hence are a delta-cycle later?" – Martin Thompson Jan 10 '15 at 16:34
- "… the after version is better? I've never been taught it's better, I just prefer it as it's a single line :)" – Martin Thompson Jan 10 '15 at 16:35
- "… wait until or wait for." – Karsten Becker Jan 10 '15 at 22:30
- "… wait for, you should always wait for edges of the clock (like the real world does) - otherwise you are simulating something different, and should expect trouble!" – Martin Thompson Jan 11 '15 at 20:11