[LEAPSECS] Fundamental change in semantics
seaman at noao.edu
Tue Nov 15 12:53:02 EST 2011
On Nov 15, 2011, at 9:44 AM, Warner Losh wrote:
> On Nov 15, 2011, at 2:21 AM, Nero Imhard wrote:
>> On 2011-11-15, at 04:43, Doug Calvert wrote:
>>> Why is redefinition of UTC / end of leap seconds not just another routine change?
>> Because it is not simply a refinement of how UTC is kept near UT, but a rather fundamental change in semantics.
> The debate really is about whether those semantics matter or not.
The actual debate that matters at the ITU appears to be "about" nothing more than a turf battle, pure and simple. The debate here is about (re)building a consensus on a coherent conceptual model of civil timekeeping.
Underneath the semantics is science and systems engineering.
> Some say yes and point to telescopes and sextants. Others say no and point to computers that do regular things well, but irregular things like leap seconds poorly.
Just to emphasize for the hundredth time: astronomical use cases generate the most stringent requirements of both kinds. Telescopes are connected to computers.
> It all depends, really, on what "near" means.
Well said, and I share many of the remaining points Warner makes in the message. When I say that civil timekeeping *is* mean solar time, this question of identifying acceptably near approximations is the heart of the matter. What else could "near enough" mean in the first place? Near enough to what? But when some stalwart on the opposing side rejects my assertion equating the two, they are rejecting a coherent conceptual model for civil timekeeping.
I'll happily debate "near enough". Debating whether civil timekeeping could be based solely on some random clock having nothing to do with time-of-day is a non-starter.
> Of course, the delta will grow more in the future, but a few hundred years after that the slowing rate of the earth will mean a growing delta. That's where people start to think that this definition of "near" might not be good enough, so why even go down this path.
Because a few hundred years is a snap of the fingers. And because it is the rate that matters, not just the offset. And because the rate and offset will be wrong for all those intervening centuries. And because for some purposes the error will be significant in a year or two, not a century or two. And because there simply are two different kinds of time underlying civil timekeeping and pretending otherwise has not been vetted to understand the costs and risks.
> It is disheartening that the middle ground remains unexplored.
> Most of the difficulty of the current system could be solved by allowing DUT1 to grow as large as 10 s, while still keeping it bounded. If we know there will be about 60 leap seconds, then schedule one every 18 months for the next 10-20 years. On average we'll stay in sync, and computers will know well enough in advance to update tables. Exceptions could be announced 10 years in advance if they are needed, say if the rate turned out to be really 55 or 65, since the earth's rotation is slowing on average but also sometimes speeds up a bit.
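The scheduled-leap idea above can be sketched numerically. This is a toy simulation, not anything from the proposal itself: the excess length-of-day rate (0.6 s of accumulated divergence per year) is a made-up illustrative number, and the bookkeeping is deliberately crude.

```python
# Toy model of the scheduled-leap proposal: insert one positive leap second
# every 18 months regardless of short-term earth rotation, and check whether
# DUT1 = UT1 - UTC stays within the relaxed bound.
# The default rate (0.6 s/yr) is an illustrative assumption, not a measurement.

def simulate(lod_excess_s_per_yr=0.6, years=20, interval_months=18, bound_s=10.0):
    dut1 = 0.0
    months_since_leap = 0
    for month in range(years * 12):
        dut1 -= lod_excess_s_per_yr / 12.0   # earth slow: UT1 falls behind UTC
        months_since_leap += 1
        if months_since_leap == interval_months:
            dut1 += 1.0                       # scheduled positive leap second
            months_since_leap = 0
        if abs(dut1) > bound_s:
            return False                      # bound violated; exception needed
    return True

print(simulate())     # rate roughly matches the schedule: stays bounded
print(simulate(0.0))  # earth not slowing at all: scheduled leaps overshoot
```

With the assumed 0.6 s/yr the fixed schedule (0.67 s/yr of inserted leaps) keeps DUT1 bounded; with a rate of zero the unneeded leaps accumulate past the 10 s bound, which is the case the "exceptions announced 10 years in advance" clause is meant to cover.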
We don't know if this is a possible solution. To identify acceptable trade-offs for a solution, first characterize the problem.
> Heck, even without relaxing DUT1 very much, studies have shown that we can predict, at the 95% level of certainty, the leap seconds we'll need to stay under the 1 s limit out to 3 years. Predicting out 2 years can be done quite a bit better (to around 200 ms). Exact numbers are in the archives.
The best methods recovered UT1-UTC to better than 50 ms over 500 days. Extrapolation is dangerous, but the trend appeared well-behaved over that entire interval. A simple-minded heuristic for predicting a leap second requires confidence that |UT1-UTC| > 0.1 s at the epoch in question (so that after adding or subtracting 1 s the magnitude remains < 0.9 s). This gives an 800 ms error budget. Taking 50 ms per 500 days as a (very naive) linear error-growth rate of 0.1 ms/day, that budget corresponds to a horizon of 8000 days, about two decades. In real life the target will be smaller and the coherence of the trend will collapse at some lookahead distance much shorter than that. The question is how many 9's of confidence can be achieved in practice, and how far out. It certainly appears from the EOP PCC that the current state of the art could deliver several years.
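The back-of-envelope arithmetic above can be written out explicitly. This is only the naive linear extrapolation described in the text, under the stated (and admittedly optimistic) assumption that prediction error keeps growing at the 50 ms per 500 days rate:

```python
# Naive leap-second prediction horizon, assuming the UT1-UTC prediction error
# grows linearly with lookahead. Real error growth is worse at long horizons,
# so this is an upper bound on the back-of-envelope argument, not a forecast.

error_ms = 50.0               # best-method UT1-UTC recovery error (ms)...
over_days = 500.0             # ...accumulated over this prediction interval
rate = error_ms / over_days   # ~0.1 ms/day of error growth

limit_ms = 900.0              # stay under the 1 s limit with 0.1 s to spare
margin_ms = 100.0             # confidence margin on |UT1-UTC|
target_ms = limit_ms - margin_ms   # 800 ms error budget

horizon_days = target_ms / rate    # ~8000 days, about two decades
print(f"naive horizon: {horizon_days:.0f} days (~{horizon_days / 365.25:.0f} years)")
```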
> But even a 2-3 year time frame would allow easier updating of tables and such, giving systems a better chance of working, and would also increase the testability of the leap second. This would make the costs of leap seconds more predictable for business. Right now, many businesses have unexpected costs associated with leap second compliance when a leap is announced. They need emergency budget on a sub-year time scale, which is disruptive. If we know there's one in June 2012, the appropriate managers can put that into their budgets in the normal process, rather than making it an emergency (and possibly causing them to say screw it, we'll take our chances). True, most businesses don't worry about this, but if we're talking about improving the current system, a change like this would give businesses, mostly government contractors, a more predictable cost structure.
A cost accounting would be welcome. There are also costs associated with loosening the tolerance. And the costs of historical intercalary adjustments don't go away under any scenario.
> There's nothing magical about the current leap seconds. They are but one of many ways to realize a mean solar time (as opposed to the one true way of realizing Mean Solar Time from Newcomb's Equations of Time). It isn't a great system, but it is the one we have today.
Rather, it is an excellent system and it sets a high bar for other alternatives to surpass. It is no mean achievement for such a widely deployed standard to serve so well for so long with so few issues. There is nothing magical about abruptly ceasing leap seconds to suggest that widespread and dramatic problems won't result. They certainly will for astronomy and aerospace applications.