SprintTimer accuracy

The most important feature of a timing app is, of course, its accuracy and precision. To be able to test SprintTimer under controlled but realistic circumstances, a special test rig with millisecond accuracy was built (described at the end of this post). More than 150 “races” were run on different iPhones and iPads under different conditions and setups. The overall result was very satisfying.

The 10-second race
The “race” was started by a sound from the test rig that triggered the clock in SprintTimer. The Photo Finish recording was started after 9 s and the finish occurred after around 10 s. The difference between the time measured by SprintTimer and the time reported by the test rig was calculated (e.g. 0.003 s in the image above). Seven different devices were tested, ranging from a new iPhone 11 to a 7-year-old iPad Air. In each case, 10 tests were run.
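
As a rough illustration of the bookkeeping (with made-up numbers, not the actual measurements), the per-run error and its spread can be computed like this in Python:

    from statistics import mean, stdev

    # Hypothetical numbers, not real test data.
    sprinttimer_times = [10.012, 10.047, 9.988, 10.023, 10.005]  # read off in the app (s)
    test_rig_times    = [10.009, 10.044, 9.991, 10.020, 10.003]  # reference from the rig (s)

    errors = [app - rig for app, rig in zip(sprinttimer_times, test_rig_times)]
    print("errors:", [round(e, 3) for e in errors])
    print("mean error: %.3f s, std dev: %.3f s" % (mean(errors), stdev(errors)))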

All newer devices were well within the 0.01 s accuracy I claim for SprintTimer. The two older devices with a lower frame rate had a slightly higher spread, but were still OK.

Longer races
To check the accuracy over time, a number of tests were run on the iPhone 7+. In the first, the clock ran for 5 min (300 s) before a short recording was started. In the second case, the recording was started at once and ran for 5 min before the object appeared. The last case was like the first, except that the clock ran for 1 h before the event was timed.

The first two cases are close to the results for the short race, so there is no obvious effect for a 5 min race. In the 1-hour race, we begin to see some clock drift. This is not unexpected (as discussed here), but it is usually not significant since you seldom need 0.01 s accuracy for a 1-hour race.
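
To get a feel for what clock drift means in practice, assume, purely as an illustration, a drift rate of 10 parts per million (an assumed figure, not a measured SprintTimer value). The accumulated error then grows linearly with the length of the race:

    # Illustrative only: 10 ppm is an assumed drift rate.
    drift_ppm = 10
    for race_seconds in (10, 300, 3600):
        error = race_seconds * drift_ppm * 1e-6
        print("%5d s race -> roughly %.4f s of accumulated drift" % (race_seconds, error))

At that assumed rate the error is negligible at 10 s and at 5 min, but reaches a few hundredths of a second after an hour, which matches the pattern seen here.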

Start Sender
Using Start Sender is usually accurate since it synchronizes the clocks, but it might still introduce a small uncertainty. Four tests were therefore run. The first two used direct mode: one over a modern high-capacity WiFi network, the other over an old, battery-powered router placed 40 m away. In addition, two tests were run in cloud mode over a mobile network (WiFi turned off): the first with standard settings, the second with 4G turned off.
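
SprintTimer's own synchronization protocol is not documented here, but a common way to align two clocks over a network is an NTP-style round-trip exchange. The sketch below is a hypothetical illustration of the idea, not SprintTimer's code:

    import time

    def estimate_offset(request_remote_time, local_clock=time.monotonic):
        """Estimate a remote clock's offset relative to the local clock.

        request_remote_time() is a hypothetical call that returns the remote
        device's clock reading (in seconds) for one request/response exchange.
        """
        t_sent = local_clock()              # request goes out
        remote = request_remote_time()      # remote timestamps it
        t_received = local_clock()          # response comes back
        round_trip = t_received - t_sent
        # Assume the remote timestamp was taken halfway through the round trip.
        offset = remote - (t_sent + round_trip / 2)
        return offset, round_trip

The uncertainty of such an estimate is bounded by the round-trip time, which fits with the higher-latency cloud mode showing a somewhat larger spread than the direct mode.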

Not surprisingly, there is a slightly larger spread in the times compared to running on one device, but I think the result is very satisfying. The cloud mode in particular, with its higher latency, is better than expected. It should be noted, though, that I live in a country with very good cellular networks, so your experience might vary.

Video Finish
Since Video Finish uses the same start and video recording routines as Photo Finish, it should in theory give the same accuracy. To test this, two series of 10 s tests were run, one at 10 frames per second and the other at 30 fps. The “dual-mode” was used to get times with millisecond precision.

The results are as good as those for Photo Finish, which means that you can use the mode that you find most convenient. Using 30 fps doesn’t improve accuracy when you are using dual-mode. But if you are not, 30 fps will increase the chance that there is an image where the competitor is close to the finish line.
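
I have not looked inside dual-mode, but the general way to get sub-frame times from video is to interpolate between the last frame before the finish line and the first frame after it. A hypothetical sketch (the function and its parameters are my own illustration):

    def interpolate_crossing(t_before, x_before, t_after, x_after, finish_x):
        """Estimate when the object crossed finish_x, assuming it moved at
        constant speed between the two frames (all names are illustrative)."""
        fraction = (finish_x - x_before) / (x_after - x_before)
        return t_before + fraction * (t_after - t_before)

    # At 10 fps the frames are 0.1 s apart, yet the estimate lands between them.
    print(interpolate_crossing(t_before=10.0, x_before=0.92,
                               t_after=10.1, x_after=1.05, finish_x=1.00))

With this kind of interpolation the frame rate matters less for the time itself, which is consistent with 30 fps not improving accuracy in dual-mode.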

Hand start
In all the cases above, SprintTimer was started by a sound from the test rig. The next graph instead shows results from a hand start, where the test rig played the start sound through a speaker. The first group of results is from the Hand/Mic mode; the second is from a regular hand start, but with the start calibrated to an offset of 0.17 s.

The hand/mic start should theoretically give the same accuracy as the sound start, but finding the right point in the sound graph adds some uncertainty. The hand start is a test of the user rather than of the app. Still, the results above are evenly distributed around 0, which indicates that the 0.17 s offset was a good choice. It should be noted that these tests were done indoors without any disturbances, so you should probably expect somewhat higher variation at a real event.
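
Conceptually, the calibrated offset just shifts the start back by the operator's average reaction delay. The snippet below is my own illustration of that correction, not SprintTimer's implementation:

    # Illustrative numbers: the tap comes roughly 0.17 s after the actual start signal.
    reaction_offset = 0.17   # calibrated start offset in seconds (user-specific)

    def corrected_time(tap_timestamp, finish_timestamp, offset=reaction_offset):
        """Treat the real start as `offset` seconds before the tap was registered."""
        return finish_timestamp - (tap_timestamp - offset)

    print(corrected_time(tap_timestamp=0.17, finish_timestamp=10.00))  # -> 10.0 s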

The test rig
The design criteria for the test rig were:

  • A real physical object that moves with a speed comparable to that of a sprinter 5 m away.
  • A start sound that could be sent to SprintTimer.
  • A reference time with millisecond accuracy.
  • A stable and adjustable setup.

I therefore built a very stable pendulum with a “tower” made of a rigid aluminum bar and a pendulum arm of another aluminum bar mounted on two ball bearings. Below the pendulum, I placed a very fast photosensor that is triggered when the pendulum passes. The sensor sits on an adjusting screw so it can be moved to exactly the point where the pendulum is vertical. The sensor is connected to a microcontroller that registers the pass. The microcontroller is also connected to a very accurate real-time clock (RTC) that provides the time reference. The iPhone fixture can be adjusted so that the finish line in the camera view lies exactly over the edge of the pendulum when it hangs vertically.

The test rig.
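
As an illustration of the timing logic, here is a MicroPython-style sketch with assumed pin numbers and wiring; it is not the rig's actual firmware, and it uses the microcontroller's internal microsecond counter where the real rig relies on the external real-time clock:

    # MicroPython-style sketch; pin numbers and wiring are assumptions.
    from machine import Pin
    import time

    start_time_us = None

    def on_start_button(pin):
        global start_time_us
        start_time_us = time.ticks_us()   # reference start, taken when the start sound is triggered

    def on_photosensor(pin):
        elapsed = time.ticks_diff(time.ticks_us(), start_time_us) / 1000000
        print("reference time: %.3f s" % elapsed)

    Pin(12, Pin.IN, Pin.PULL_UP).irq(trigger=Pin.IRQ_FALLING, handler=on_start_button)
    Pin(14, Pin.IN, Pin.PULL_UP).irq(trigger=Pin.IRQ_FALLING, handler=on_photosensor)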

I started a test by pushing the red button on the control box. The microcontroller registered the time and sent a sound pulse that started the clock in SprintTimer. After a while (depending on the test), I started the recording and dropped the pendulum. The microcontroller measured the passing time and sent it to the computer. In SprintTimer I scrolled the image to the pendulum and read off the time. The process is shown in this video: