This post is a bit technical, but it can help you understand how a smartphone or computer measures time.
Every smartphone and computer has a clock, so measuring time shouldn't be a problem, right? Well, that is true if you want to know the time of day to within a second or so. But if you need millisecond precision, things become trickier. The clocks in smartphones are controlled by an oscillator, e.g. a crystal, just like in an electronic watch. Unfortunately, manufacturers prioritize cost and energy consumption over very high accuracy when picking oscillators (Apple is no better or worse than others in this respect).
Still, they are generally accurate enough for everyday use. The accuracy is usually around 10–20 ppm, where ppm means "parts per million". A 20 ppm error equals 20 seconds in 1 million seconds, or about 2 s in 24 h. That should make a clock noticeably off within a week or two, so how come it isn't? The answer is that the phone now and then calls highly accurate time servers on the internet and adjusts its time to compensate for the drift of the internal clock.
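The drift arithmetic is simple enough to sketch. This is not SprintTimer code, just the ppm calculation from the paragraph above:

```python
def drift_seconds(ppm: float, elapsed_s: float) -> float:
    """Worst-case drift of an oscillator with the given ppm error."""
    return ppm * 1e-6 * elapsed_s

print(drift_seconds(20, 24 * 3600))      # one day  -> 1.728 s
print(drift_seconds(20, 7 * 24 * 3600))  # one week -> 12.096 s
```

After a week a 20 ppm clock can be off by roughly 12 seconds, which is why the periodic server corrections matter.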
This means that I, as a developer, have access to two different types of time, each with its advantages and disadvantages. One is the "uptime", i.e. the time in seconds since the device was last restarted. It runs on the internal clock and is not affected by any time adjustments. But that clock stops when you put the device to sleep. The other is the "time of day" (TOD), e.g. 10:22:12.435. That time also runs on the internal clock, but is occasionally shifted to compensate for the drift.
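Most platforms expose this same pair of clocks. SprintTimer itself is written in Swift, but the distinction can be illustrated with Python's standard library, where `time.monotonic()` plays the role of the uptime and `time.time()` is the TOD:

```python
import time

uptime_like = time.monotonic()  # monotonic clock: never adjusted, no jumps
wall_clock = time.time()        # "time of day": can be shifted by NTP sync

time.sleep(0.1)  # pretend some timing work happens here

# The monotonic clock is the right tool for measuring intervals,
# because a time-server adjustment can never make it jump mid-race.
elapsed = time.monotonic() - uptime_like
print(f"elapsed about {elapsed:.3f} s")
```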
So which clock to use? The uptime is subject to drift. But if we look at the 20 ppm again, that is 0.2 ms in 10 s and 0.02 s in 1000 s (about 17 min). So it is insignificant for a 100 m race and of little consequence even for longer races. The TOD, on the other hand, is more accurate over an extended period. BUT you never know when a time shift occurs, so if you are really unlucky, you will lose 1 s in the middle of a 100 m race. The choice is therefore easy, and the uptime is the basis for almost all timing in SprintTimer. To handle background timing, however, the TOD must be used. SprintTimer therefore has routines that synchronize with the online time servers much more frequently than iOS does by default.
Using multiple devices for timing adds to the complexity. The clocks must first be synchronized, or rather compared. When you synchronize the clocks while setting up Start Sender in direct mode, the finish device sends a call to the Start Sender device, which replies with its current uptime. When the reply is received, it is compared to the uptime on the finish device. The app now knows the difference between the clocks of the two devices and can use it when a start signal is sent.
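Ignoring network latency for a moment, the comparison boils down to a single subtraction. The numbers and variable names below are made up for illustration, not taken from SprintTimer:

```python
finish_uptime = 250.0  # finish device's uptime when the reply arrives
sender_uptime = 812.4  # uptime reported back by the Start Sender device

# How far the sender's clock is ahead of the finish device's clock:
offset = sender_uptime - finish_uptime
print(offset)  # -> 562.4
```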
Unfortunately, there is always some latency (delay) in every network, and it is usually highly variable (as can be seen on the latency indicator on the sync page). This makes the process a little more involved: the times when the sync call is sent and when the reply is received are both noted. The mean of these two times is then compared to the Start Sender time. Taking the mean amounts to assuming that the sync call took the same amount of time in each direction. This is, of course, an assumption, but one that is correct on average. SprintTimer therefore makes a lot of sync calls to find this average. And to further increase the accuracy, only calls with low latency are used. In the example below the latency is 0.1 s in each direction.
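The latency-compensated comparison can be sketched like this. All numbers are made up, the function names are mine, and the low-latency filter is a plausible guess at the kind of selection the post describes, not SprintTimer's actual code:

```python
def estimate_offset(t_send, t_recv, sender_time):
    """Offset of the sender's clock relative to ours, assuming the call
    took equally long in each direction (true on average)."""
    midpoint = (t_send + t_recv) / 2   # our clock when the sender replied
    return sender_time - midpoint

def best_offset(samples, keep=0.25):
    """Average the offsets from the lowest-latency fraction of sync calls."""
    samples = sorted(samples, key=lambda s: s[1] - s[0])  # sort by round trip
    best = samples[: max(1, int(len(samples) * keep))]
    return sum(estimate_offset(*s) for s in best) / len(best)

# One call with 0.1 s latency in each direction: sent at our uptime 100.0,
# reply received at 100.2, and the sender reported its uptime as 562.5.
print(estimate_offset(100.0, 100.2, 562.5))  # -> 462.4
```

With many such samples, `best_offset` keeps only the calls with the shortest round trips, where the equal-latency assumption is least likely to be badly wrong.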
When you use Start Sender in cloud mode, the devices cannot send calls directly to each other and must rely on updating a database instead. This process is much too slow to be used for synchronization. SprintTimer therefore relies on its ability to sync with time servers on the internet. The two devices are consequently synchronized independently, and the TOD is sent with the start signal. This procedure carries a small accuracy penalty, since the errors in the two synchronizations may add up. An additional drawback is that there is no absolute reference, so if the synchronization on one of the devices goes wrong, there is no way to tell.
The main point of the clock comparison (in both direct and cloud mode) is that once the difference is established, it doesn't matter if there are delays when sending the start signal, since the start time can be converted to the time scale of the finish device.
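This last step is just the offset subtraction in reverse. Again, illustrative numbers only:

```python
offset = 462.4           # sender clock minus finish clock, from the sync step
start_on_sender = 565.0  # sender's uptime when the start signal was triggered

# Convert the start time to the finish device's time scale:
start_on_finish = start_on_sender - offset
print(start_on_finish)   # -> 102.6 on the finish device's clock
```

Even if the start signal itself arrives a couple of seconds late, the finish device still knows the race began at 102.6 on its own clock, so the elapsed time it measures is unaffected by the delay.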