[LEAPSECS] Lets get REAL about time.

Michael Sokolov msokolov at ivan.Harhan.ORG
Sun Jan 22 13:52:31 EST 2012

Keith Winstein <keithw at mit.edu> wrote:

> Hmm, in practice I think the plan to simply fail with an error is
> going to be a non-starter. Plenty of applications need to record dates
> more than six months in the future; e.g. in a calendar program, the
> user will want to schedule a meeting for August 1, 2012, from 9 a.m.
> EDT to 10 a.m. EDT. The program will want to do all the normal things
> -- calculate the duration of the meeting, how far in the future it is
> (so it can put it in sorted order along with the other events of that
> day), etc. In a subscription service, we might want to say that the
> user's subscription lasts until January 22, 2013 at 12:21 p.m. (one
> year hence) and give them a countdown (264 days remaining) as that
> timestamp approaches.

This is exactly why "REAL" time is simply wrong for most "civil"
applications. That is also one of the reasons why I have rejected the
idea of switching the time_t scale on my personal non-POSIX UNIX
systems to a TAI-style one.

What people like PHK fail to grasp is that a whole ton of applications
absolutely DO NOT CARE how many Cs-133 transitions happen to occur in
a given *civil* time interval, all they care about is a bijective
mapping between their timestamps and *official civil time*.

The SI second is the root of the problem. It should NOT be used
outside of highly specialized scientific/technical contexts, i.e., it
should NOT be used in civil contexts. In a civil application a 1 s
timestamp increment is NOT an SI second; it represents the position of
the hands of the official clock on the government building advancing
by a certain angle, regardless of how many times a Cs-133 atom happens
to flip while the hands of the official clock advance by an angle
meaning 1 s.

Each (micro-)nation should indicate its official time with an analog
clock (i.e., one with rotating hands, not digital) on the wall of a
government building specifically to drive the point home that notations
like 23:59:60 are not acceptable. This non-scalar notation is the
real fundamental problem with UTC in my eyes, *not* the length of
advance notice for leap seconds (6 months is *far* more than should be
necessary IMO), and that is why UTC should not be used directly by
"normal" applications. UTC should be rubberized in the way of UTC-SLS
or Google's leap smear before being presented to "normal" applications
and non-real-time operating systems such as 4.3BSD-Quasijarus.
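For concreteness, here is a minimal sketch of such rubberizing: a linear smear in the spirit of UTC-SLS (not Google's exact algorithm), with the function names and epoch conventions being purely illustrative. The extra second of a positive leap is absorbed gradually over a window, so the presented clock never needs a 23:59:60 label.

```python
# Minimal sketch of "rubberizing" a positive leap second in the
# spirit of UTC-SLS: the extra second is smeared linearly over the
# WINDOW true seconds preceding the end of the leap, so the presented
# clock stays continuous and monotonic and never shows 23:59:60.
# Function names and epoch conventions here are illustrative only.

WINDOW = 1000.0   # UTC-SLS smears over the last 1000 s before the leap

def smear_fraction(t, leap_end, window=WINDOW):
    """Fraction of the leap second already absorbed at true time t
    (t and leap_end in seconds on the true, leap-bearing scale)."""
    if t <= leap_end - window:
        return 0.0
    if t >= leap_end:
        return 1.0
    return (t - (leap_end - window)) / window

def smeared(t, leap_end, window=WINDOW):
    """Presented (rubber) time: runs slightly slow during the window
    and ends up exactly 1 s behind the true elapsed-seconds count."""
    return t - smear_fraction(t, leap_end, window)
```

A "normal" application fed `smeared()` time sees nothing unusual, only seconds that were briefly a little longer than an SI second, which is exactly the point.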

Back to the calendar and subscription applications, there is
absolutely no reason why they can't store their timestamps with 1 s or
finer precision indefinitely far in the future *and* have these
timestamps be absolutely correct when that future moment arrives.
BUT, these timestamps need to be reckoned on a NON-REAL time scale,
i.e., they should not pretend to have any relation to time-as-in-physics
and should merely represent particular points in the course of
"analog" civil time, i.e., particular angular positions of the
rotating hands of the official clock on the wall of a government
building. An indication that Mary Q. Public's subscription expires at
2022-07-25T19:41:42 UT1 is perfectly precise and unambiguous
regardless of how many leap seconds occur between now and then.
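A sketch of what this looks like in practice, using the dates from Keith's example (the variable names are illustrative, and the point is what is absent: no epoch-seconds conversion and no leap second table anywhere in sight):

```python
# Sketch: the subscription expiry is stored as a *civil* reading, a
# position of the clock hands, and the countdown is plain calendar
# arithmetic.  Leap seconds between now and then are irrelevant.
# Dates follow Keith's example; variable names are illustrative.
from datetime import datetime

expiry = datetime(2013, 1, 22, 12, 21)   # exact whenever that moment arrives
now    = datetime(2012, 5, 3, 12, 21)    # some later "now" as the date nears

remaining = expiry - now                 # naive calendar delta
print(remaining.days, "days remaining")  # -> 264 days remaining
```

The countdown is off by at most a rubber second or two from any "physical" reckoning, which no subscriber will ever notice.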

The current non-POSIX UNIX definition of time_t (identical to the
POSIX definition except that it makes absolutely no reference to
"UTC"), which measures the angular position of the hands of a civil
analog clock with no reference to physical time, is perfect for
most "normal" civil applications. Forcing such systems to
maintain TAI-style time in the kernel and converting to civil time in
the userland via leap second tables (which is what PHK's REAL time
proposal does in essence) is nothing but an unnecessary burden. Why
burden a system with leap second tables if it doesn't need them? If a
system needs interval (as opposed to civil or time-of-day) timekeeping
only in a very very crude sense, as most "normal" systems do, it is
much simpler to make the system _explicitly not care_ about "atomic"
time and maintain its notion of time solely as a representation of the
civil clock-hands angle, which is also usable as a crude measure of
interval time.

A typical example of what I mean by a system needing interval time
only in a very very crude sense: consider a secondary DNS server
periodically contacting the primary master to see if its zones need an
update (that's a typical use of interval timekeeping on the kind of
computer systems I run). This refresh interval is specified in the
DNS SOA record in units called "seconds", without further
qualification. On the one hand, this ought to mean interval time
rather than time-of-day: there doesn't seem to be any sensible reason
why the interval between DNS zone refreshes should depend on Earth's
rotation. Hence the natural interpretation of RFC 1035 would be to
take the times in the SOA record as being in SI seconds. But the OS
on which my DNS servers run has no knowledge of SI seconds; it only
knows rubber civil seconds. So what's the big deal? Absolutely
nothing bad will happen if a DNS zone refresh occurs one second
earlier or one second later. That's what I mean by interval
timekeeping requirements on most "normal" systems being very crude.
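The secondary's refresh test boils down to something like the following sketch (the helper name and refresh value are illustrative; the field itself is the SOA "refresh" interval of RFC 1035). The only thing it asks of the clock is "have roughly this many seconds elapsed?", which a rubber civil-seconds clock answers perfectly well:

```python
# Sketch of a secondary's crude interval test for zone refresh.
# REFRESH is the SOA "refresh" field, in unqualified "seconds";
# the helper name is illustrative, not any real resolver API.

REFRESH = 10800   # e.g. refresh every 3 hours

def due_for_refresh(last_check, now, refresh=REFRESH):
    """True once the crude interval has elapsed; an error of a
    second either way is completely harmless here."""
    return now - last_check >= refresh
```

Whether that comparison fires a second early or a second late because the underlying clock was smeared simply does not matter.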

On the other hand, the *civil* timekeeping requirements can be very
stringent. The example of expiration of subscriptions that Keith has
brought up is a very good one: I like the idea of the moment of
subscription expiration far in the future being defined very precisely
*in relation to official civil time*, which for the Republic of New
Poseidia is currently UT1.

One of the big problems in this whole "leap second" debate is that
some of the participants have worked so much with specialized systems
needing high-precision interval timekeeping that they forget that
_not everyone_ has the high-precision requirements that they do, and
then they go ahead and impose their high-end requirements on the rest
of us who don't need them. Requiring every BSD system to maintain
TAI-style time in the kernel, drag around a leap second table and
worry about keeping it up to date is pure evil when most of those
systems only need a civil UT1-modeling time_t and would be just fine
without REAL time.
