
I'd like to conduct some measurements with my pupils at the high school where I teach, but I've run into some conceptual problems. I'd like to measure the (approximately constant) speed $v$ of an electric train running on a circular track by timing how long the train takes to cover different distances.

The setup is this:

  • each distance and time measurement is taken only once, i.e. each is a single direct measurement;
  • for each pair $(d_i, t_i)$ of measurements I can calculate the speed $v_i = d_i/t_i$, i.e. I obtain an indirect measurement.

My question is this: how can I correctly (following error theory) evaluate the "true" speed $v$? Can I take the mean of the $v_i$'s? What about the errors? I suppose that the single measurements have an error related to the measuring tool: if the stopwatch displays centiseconds, the times should have an error of 0.01 s, and if the ruler has millimeter precision, the distances should have an error of 1 mm.
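To make the question concrete, here is a small Python sketch of what I *think* the standard first-order propagation rule gives for a single pair (the numerical values are invented; the uncertainties are the instrument resolutions mentioned above):

```python
import math

# One hypothetical pair of direct measurements (made-up numbers).
d, sigma_d = 1.250, 0.001   # distance in metres, ruler resolution 1 mm
t, sigma_t = 2.37, 0.01     # time in seconds, stopwatch resolution 0.01 s

# Indirect measurement: v = d / t.
v = d / t

# First-order propagation for a quotient: relative errors add in quadrature,
#   sigma_v / v = sqrt( (sigma_d / d)^2 + (sigma_t / t)^2 )
sigma_v = v * math.sqrt((sigma_d / d) ** 2 + (sigma_t / t) ** 2)

print(f"v = {v:.4f} +/- {sigma_v:.4f} m/s")
```

Is this quadrature rule the correct way to attach an error to each $v_i$, or should I be using the simpler maximum-error (linear) estimate instead?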

I'm sorry, but I don't have a sound background in error theory, so I don't know how, in principle, to handle the errors of indirect measurements obtained from single direct measurements.
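For the "can I take the mean of the $v_i$'s?" part, my best guess is the inverse-variance weighted mean, which I sketch below (the $v_i$ values and their uncertainties are invented; in practice each $\sigma_i$ would come from propagating the ruler and stopwatch resolutions as above). Is this the right way to combine them?

```python
import math

# Hypothetical individual speeds v_i = d_i / t_i and their propagated
# uncertainties sigma_i (all values invented for illustration).
v_vals = [0.527, 0.531, 0.524, 0.529]       # m/s
sigmas = [0.0023, 0.0021, 0.0025, 0.0022]   # m/s

# Inverse-variance weighted mean: each v_i is weighted by 1 / sigma_i^2,
# so the more precise measurements count more.
weights = [1.0 / s ** 2 for s in sigmas]
v_best = sum(w * v for w, v in zip(weights, v_vals)) / sum(weights)

# Uncertainty of the weighted mean.
sigma_best = 1.0 / math.sqrt(sum(weights))

print(f"v = {v_best:.4f} +/- {sigma_best:.4f} m/s")
```

One thing I notice is that the combined uncertainty comes out smaller than any individual $\sigma_i$, which seems plausible but is exactly the kind of thing I'd like confirmed.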
