With a baud rate of 600 kbit/s, you have a nominal bit time of 1.67 μs.
The nominal bit time is divided into three segments: SYNC_SEG, BS1, and BS2.
- SYNC_SEG = Tq = (BRP[9:0] + 1) x Tpclk
- BS1 = Tq x (TS1[3:0] + 1)
- BS2 = Tq x (TS2[3:0] + 1)
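Since the nominal bit time is simply the sum of the three segments, a useful sanity-check formula (derived from nothing beyond the definitions above) is:
- Nominal bit time = SYNC_SEG + BS1 + BS2 = Tq x (3 + TS1[3:0] + TS2[3:0])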
A 30 MHz CAN clock gives us Tpclk of 33.33 ns.
1.67 μs / 33.33 ns gives us almost exactly 50 clock ticks per bit time. A good rule of thumb is to put your sampling point at 70-80% of the nominal bit time; the higher the baud rate, the closer to 80% it should be. So let's set BRP[9:0] to 4, which gives us:
Tq = (4 + 1) x 33.33 ns ≈ 166.67 ns
So now Tq is 1/10th of the nominal bit time, which gives us easy numbers to work with. For an 80% sampling point:
- SYNC_SEG = 1 Tq (Tq set by BRP[9:0] = 4)
- BS1 = 7 Tq (TS1[3:0] = 6)
- BS2 = 2 Tq (TS2[3:0] = 1)
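If you want to double-check the arithmetic, here is a minimal C sketch that recomputes Tq, the resulting bit time, and the sample point from the values above. The 30 MHz clock and 600 kbit/s target are just this example's assumptions, and brp/ts1/ts2 simply mirror the register fields from the formulas; this is not any vendor's driver API.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const double f_pclk = 30e6;   /* 30 MHz CAN peripheral clock (assumed)  */
    const double baud   = 600e3;  /* 600 kbit/s target baud rate (assumed)  */

    const uint32_t brp = 4;       /* BRP[9:0]: prescaler value from the text */
    const uint32_t ts1 = 6;       /* TS1[3:0]: BS1 = 7 Tq                    */
    const uint32_t ts2 = 1;       /* TS2[3:0]: BS2 = 2 Tq                    */

    /* Tq = (BRP + 1) x Tpclk, bit time = SYNC_SEG + BS1 + BS2 */
    double tq       = (brp + 1) / f_pclk;
    double bit_time = tq * (1 + (ts1 + 1) + (ts2 + 1));
    double sample   = (1.0 + (ts1 + 1)) / (1 + (ts1 + 1) + (ts2 + 1));

    printf("Tq           = %.2f ns\n", tq * 1e9);
    printf("Bit time     = %.3f us (target %.3f us)\n",
           bit_time * 1e6, 1e6 / baud);
    printf("Sample point = %.0f %%\n", sample * 100.0);
    return 0;
}
```

With these numbers it prints a 1.667 μs bit time and an 80% sample point, matching the hand calculation.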
A Word of Warning:
I strongly recommend against running at 1 Mbit/s. I had a project where the customer specified that the system run at 1 Mbit/s, and it caused no end of headaches. It was also completely unnecessary, as we were only sending one message every 500 ms.
We played with everything: different transceiver chips, termination schemes, board layout, slew rates, sampling points, common-mode filters, and so on. It was a nightmare, and not only for us: every subsystem had issues communicating. After four years of begging and pleading for a lower baud rate, they finally dropped it to a more standard 125 kbit/s.
If you have any control over what baud rate is chosen, lower is better. If it is a customer specification, tell them as often as you can that it's a bad idea.