a point particle is moving on a straight line.
we want to estimate its speed over a 100-second period.
we can measure the particle's position at chosen timestamps.
how do we best estimate the speed?
assumption 1: the particle moves at constant speed.
assumption 2: the timestamps of the position measurements are exact.
assumption 3: the position-measurement errors are i.i.d. with a known distribution (say a zero-mean normal with known variance).
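one way to write these assumptions as a measurement model (p0, v, sigma are just names I'm introducing for the unknown initial position, the constant speed, and the known noise level): measured position x(t_i) = p0 + v * t_i + e_i, where the e_i are i.i.d. N(0, sigma^2).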
setting 1: the number of position measurements is fixed, but the measurement timestamps can be chosen anywhere in the 100-second window.
intuitively, we should spend half of the measurements at t=0 and the other half at t=100,
and calculate the speed as the position change over the 100 s (difference of the two averaged endpoint positions) divided by 100 s.
is this the best way, and if so, how do we prove it?
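one way to check this (a monte-carlo sketch, not a full proof; sigma, v_true, n below are made-up numbers): under assumptions 1-3 the least-squares slope of position vs time is the maximum-likelihood speed estimate, its variance is sigma^2 / sum((t_i - t_bar)^2), and over [0, 100] that sum is maximized by putting half the points at each endpoint. that is essentially the argument that the endpoint design is optimal.

# sketch: compare speed-estimate variance for two measurement-time designs
# sigma, v_true, n, trials are made-up numbers for illustration
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0      # known std of position noise (assumption 3)
v_true = 3.0     # true constant speed (assumption 1)
n = 20           # fixed number of measurements (setting 1)
trials = 20000

def ols_speed(t, x):
    # least-squares slope of position x against time t
    t_bar, x_bar = t.mean(), x.mean()
    return ((t - t_bar) * (x - x_bar)).sum() / ((t - t_bar) ** 2).sum()

designs = {
    "half at t=0, half at t=100": np.array([0.0] * (n // 2) + [100.0] * (n // 2)),
    "evenly spaced in [0, 100]":  np.linspace(0.0, 100.0, n),
}

for name, t in designs.items():
    est = np.empty(trials)
    for k in range(trials):
        x = v_true * t + rng.normal(0.0, sigma, size=n)
        est[k] = ols_speed(t, x)
    # theoretical variance of the least-squares slope: sigma^2 / sum((t - t_bar)^2)
    var_theory = sigma ** 2 / ((t - t.mean()) ** 2).sum()
    print(f"{name}: empirical var {est.var():.2e}, theory {var_theory:.2e}")

note that for the endpoint design the least-squares slope reduces to exactly the intuitive estimate: (average position at t=100 minus average position at t=0) / 100.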
setting 2: the position measurements are already given, taken every 1 second (t = 0, 1, ..., 100).
a rough estimate is the position change over the 100 seconds divided by 100 s; this uses only the positions at t=0 and t=100.
one "stupid" calculation is to compute the speed between t=n and t=n+1, then average those 100 one-second speeds over n = 0..99. the result is exactly the same as the rough estimate from t=0 and t=100 alone, because the sum telescopes: (1/100) * [ (x(1)-x(0)) + (x(2)-x(1)) + ... + (x(100)-x(99)) ] = (x(100) - x(0)) / 100. it looks like we are using all the measurements, but we are not, which is kind of counter-intuitive to me.
intuitively, the measurements at t=1, 2, 3, ... should help, but how do we best use them?
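the intermediate points do help, but only if you fit position against time instead of averaging one-second speeds. a minimal sketch under assumptions 1-3 (sigma and v_true below are made up): fit a straight line to all 101 (t, position) pairs by least squares; the slope is the maximum-likelihood speed estimate, and its variance sigma^2 / sum((t_i - t_bar)^2) comes out roughly 17x smaller than the 2*sigma^2 / 100^2 of the endpoints-only estimate.

# sketch: use all 101 measurements via a least-squares line fit (setting 2)
# sigma and v_true are made-up numbers for illustration
import numpy as np

rng = np.random.default_rng(1)
sigma, v_true = 1.0, 3.0
t = np.arange(101.0)        # measurements every 1 s from t=0 to t=100
trials = 20000

rough = np.empty(trials)    # (x(100) - x(0)) / 100, endpoints only
ols = np.empty(trials)      # least-squares slope using all 101 points
for k in range(trials):
    x = v_true * t + rng.normal(0.0, sigma, size=t.size)
    rough[k] = (x[-1] - x[0]) / 100.0
    # np.polyfit degree-1 leading coefficient == ordinary least-squares slope
    ols[k] = np.polyfit(t, x, 1)[0]

print("endpoints-only variance:", rough.var())  # ~ 2*sigma^2 / 100^2
print("all-points OLS variance:", ols.var())    # ~ sigma^2 / sum((t-50)^2), much smaller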
setting 3: what can we learn from settings 1 and 2, and how can we do this better in practice?
assumption 1 will not hold exactly in practice, but in quite a few cases, like a moving car, we can assume a "reasonably constant" or steady speed.
assumptions 2 and 3 are also not satisfied, but maybe that is less of a concern?