[LEAPSECS] Lets get REAL about time.
phk at phk.freebsd.dk
Mon Jan 23 03:38:44 EST 2012
In message <FCE2D26567AF40FCB0B4DF4A6A039C32 at pc52>, "Tom Van Baak" writes:
>> So I can't do operations on UTC time stamps that are more than
>> 6 months in the future?
>The false assumption is that timestamps have infinite precision.
I pretty arbitrarily set the "default precision" to 0.1 second, based
on my reading of a number of technical documents for various system
designs in the transport sector.
NTP sets the precision at 128msec and is therefore technically "not
enough to ensure compliance" in some of these systems.
Only by coarsening the default precision, to as much as 6 sec, can we
reach 6 months into the future and remain compliant with TF.460, as
currently ratified.
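To make the precision argument concrete, here is a minimal sketch (hypothetical names, not my actual API) of a UTC duration that carries an error bar: every leap-second insertion opportunity (end of June, end of December) beyond the announcement horizon contributes +/- 1 s, because those seconds are simply not defined yet.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical illustration: a UTC duration with an error bar,
 * because leap seconds beyond the TF.460 announcement horizon
 * are not yet defined. */
struct utc_duration {
	int64_t seconds;     /* nominal difference, assuming no new leaps */
	int     uncertainty; /* +/- this many seconds */
};

static struct utc_duration
utc_diff(int64_t t0, int64_t t1, int64_t horizon)
{
	struct utc_duration d = { t1 - t0, 0 };
	int64_t later = t1 > t0 ? t1 : t0;

	if (later > horizon) {
		/* Crude: two insertion opportunities per year, about
		 * 15778800 s apart; a real implementation would walk
		 * the calendar to the June/December boundaries. */
		d.uncertainty = (int)((later - horizon) / 15778800) + 1;
	}
	return d;
}
```

The half-year constant is a crude stand-in for walking the calendar; the point is only that the error term grows with distance past the horizon.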
>One solution is to somehow carry a measure of precision along with
>the timestamp. Note that comparing file mod times of files 10 or 20
>years ago does not require leap second tables.
As long as you only compare for order, in the same timescale, there
is no problem, provided you correctly determine whether DST applies.
It's the calculation of durations between timestamps on UTC in
the future which is impossible, simply because they are undefined.
Adding error-bars throughout all calculations is not going to
fly with 99% of programmers, and if implemented correctly it will
lead to a lot of fun in the "stupid pop-up-boxes" category.
>But where do you draw the line? The SI second was different 20
>years ago compared to today. Are you going to include TAI rate
>adjustment tables along with your new UTC library?
That one is TBD. It is only a matter of implementation; it's not
important for the API.
>daily GPS time corrections available on the web; should those
>be included too? What good is a nanosecond timestamp if the
>server that generates it has microseconds of unknown error?
>I guess I object to the whole notion of mixing pico or attosecond
>precision with years or decades or centuries of range.
Except scientists increasingly want that resolution. Think CERN/LHC,
Gran Sasso, think timing of gamma ray bursts and supernovae.
Most users will probably continue with the ~0.05 second typical
precision of NTP.
The point here is to define _one_ API which does things right
inside the domain which might occur on a typical desk. Since
this domain includes both dates in early history of humanity
and benchmarking 4GHz processors, we get the big mantissa.
That obviously does not mean you can benchmark a 4GHz processor
with full precision over the history of humanity.
As long as you stay inside one computer, you can use a local
timescale with a recent epoch ("start of program") and you
only need a small mantissa.
But as soon as you take timestamps across multiple computers,
be it for atomic physics or flight control, you need to reference
it to a common absolute epoch, and as time_t has shown, those
don't take very long to grow to 32 bits, thus outrunning 64
bits for your benchmarks.
112 bits is overkill, but there are no standardized datatypes
between 64 and 112.
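Lacking a standardized wider type, one obvious composite is two 64-bit words: integer seconds plus a binary fraction, in the spirit of NTP's 128-bit date format. A hypothetical sketch, not the actual realtime_t definition:

```c
#include <stdint.h>

/* 128-bit fixed-point timestamp: 64 integer bits of seconds and
 * 64 fractional bits (units of 2^-64 s, i.e. sub-attosecond).
 * Hypothetical layout for illustration. */
struct ts128 {
	int64_t  sec;  /* seconds relative to some absolute epoch */
	uint64_t frac; /* fraction of a second, in 2^-64 s units */
};

static struct ts128
ts128_add(struct ts128 a, struct ts128 b)
{
	struct ts128 r;

	r.frac = a.frac + b.frac;
	/* unsigned wrap-around signals a carry into the seconds */
	r.sec = a.sec + b.sec + (r.frac < a.frac);
	return r;
}
```

The carry trick is the only subtlety; everything else is ordinary two-word arithmetic.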
>Another possible solution is to recall how denormalized floating
>point numbers work. There is an implicit trade-off here between
>precision and range. I wonder if the same concept could apply
>somehow to timestamps.
As I said earlier, one of the advantages of FP is that you can
throw away precision in a structured fashion by using a smaller
mantissa. You could exploit this, and never return more than a
float's or double's worth of precision when doing subtractions,
unless you use the "CERN" functions.
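As a sketch of that structured precision-loss (hypothetical names): return the difference as a double, whose 53-bit mantissa silently rounds once the span exceeds 2^53 ns (about 104 days), and reserve an exact-integer variant for the "CERN" users.

```c
#include <stdint.h>

/* Timestamps here are 64-bit nanosecond counts on one timescale.
 * The default subtraction returns a double: exact for short spans,
 * silently rounded past ~104 days (2^53 ns). */
static double
ts_diff(int64_t a, int64_t b)
{
	return ((double)(a - b));
}

/* The "CERN" variant: full 64-bit precision, no rounding. */
static int64_t
ts_diff_exact(int64_t a, int64_t b)
{
	return (a - b);
}
```

A century in nanoseconds needs ~62 bits, so ts_diff() rounds there while ts_diff_exact() does not; that is precisely the structured loss described above.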
The trouble with that is that no matter what radix you pick, there
are seven radixes in timekeeping, the majority of them time-variant,
so the introduced error is going to look strange to normal people.
Strange as in: 2100-01-01 00:00:00 - 2012-01-01 00:00:00 = 87y11m27d
It's not a bad idea, but would you want to write the man-page to
explain this, so programmers and their users can understand the
error term? Not me.
If we want 2100-01-01 - 2012-01-01 to return 88 years, there is
no way around doing the math in "broken down form".
My API forces that behaviour, by refusing to do that job.
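A minimal sketch of what "broken down form" means here (a hypothetical helper, not part of my API): the year count comes from the calendar fields, never from dividing a count of SI seconds.

```c
#include <time.h>

/* Whole calendar years from a to b, computed field-wise on the
 * broken-down form.  2100-01-01 minus 2012-01-01 is exactly 88
 * years this way; no division of a second count can promise that
 * (leap days, leap seconds). */
static int
years_between(struct tm a, struct tm b)
{
	int y = b.tm_year - a.tm_year;

	/* borrow a year if the later date is before the anniversary */
	if (b.tm_mon < a.tm_mon ||
	    (b.tm_mon == a.tm_mon && b.tm_mday < a.tm_mday))
		y--;
	return (y);
}
```

Month and day differences work the same way, borrowing from the next field up; the seconds-based shortcut simply does not exist.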
>If you find yourself
>needing lots of bits, like more than 32 or 48 or 64, that is already
>a warning sign that you're doing something very wrong.
No, you are just running a numerical benchmark a few minutes long
between two computers with a standardized general purpose datatype
for timestamps on absolute timescales.
The requirement for 64+ bits does not come from any single user,
it comes from the data type being general purpose for all users.
My API allows us to have "only" two data types: the realtime_t
and "struct tm". You can only do away with realtime_t at great cost.
>If you're writing a time library that won't work with an Arduino that
>too is a warning sign.
I fully agree with you, but this is an issue for later.
If we cannot make an API that works correctly when we have enough
CPU-power, we have no chance in hell on the Arduino.
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.