[LEAPSECS] When did computer timekeeping get good enough for leap seconds to matter?
    Hal Murray 
    hmurray at megapathdsl.net
       
    Thu Jan  9 06:03:53 EST 2014
    
    
  
The IBM 360 systems, starting in 1964, used the power line frequency for 
timekeeping.  (A location in low memory got bumped 300 times per second: 
5 counts per cycle on 60 Hz power, 6 per cycle on 50 Hz.)  I wonder how 
much power-line timekeeping wandered back then relative to today.
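For anyone who hasn't worked with line-frequency clocks, the bookkeeping is 
simple.  Here is a minimal sketch in C; the names are invented, and the real 
thing was a hardware-updated word in low memory, not a function call:

/* Sketch of S/360-style line-frequency timekeeping.  The hardware bumps
 * a counter 5 times per cycle on 60 Hz power or 6 times per cycle on
 * 50 Hz, so the rate is 300 counts/second either way. */
#include <stdio.h>

#define COUNTS_PER_SECOND 300       /* 5 * 60 == 6 * 50 */

static unsigned long timer_counts;  /* the "location in low memory" */

/* Called on every mains-derived tick. */
void timer_tick(void) { timer_counts++; }

double elapsed_seconds(void) {
    return (double)timer_counts / COUNTS_PER_SECOND;
}

int main(void) {
    /* Simulate one nominal hour of ticks. */
    for (long i = 0; i < 3600L * COUNTS_PER_SECOND; i++)
        timer_tick();
    printf("elapsed: %.1f s\n", elapsed_seconds());
    return 0;
}

The catch is that the clock is only as good as the grid: run 0.1% slow for 
an hour and the clock loses 3.6 seconds.  Utilities have historically 
steered frequency to cancel accumulated time error, which is why such 
clocks were decent over days even when sloppy over minutes.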
Does anybody know what the guys in the power company control rooms do about 
leap seconds?
------------
Leap seconds started in 1972.
I was at Xerox in the late 1970s.  At boot time, Altos got the time from a 
local time server.  Altos used the system crystal (5.88 MHz) for timekeeping. 
Personal Altos were rebooted frequently, so it didn't matter if their clocks 
drifted a bit.  The time server was packaged with the routers.  (We called 
them gateways.)  On the few systems that stayed up a long time (file servers, 
routers), we hand-tweaked a fudge factor to adjust the clock rate.  It wasn't 
hard to get to within a second per week.  I think the units for the fudge 
factor (from a config file) were seconds per day, but it accepted at least 
one digit past the decimal point.  I don't remember any mention of leap 
seconds.
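To make that mechanism concrete, here is a sketch in C of a rate correction 
expressed in seconds per day; the names and the sample numbers are invented, 
and this is not the actual Alto code:

#include <stdio.h>

#define SECONDS_PER_DAY 86400.0

/* Apply a rate "fudge factor" given in seconds per day to a raw,
 * crystal-derived elapsed time.  fudge > 0 means the crystal runs
 * slow, so time is added. */
double corrected_seconds(double raw_seconds, double fudge_sec_per_day) {
    return raw_seconds * (1.0 + fudge_sec_per_day / SECONDS_PER_DAY);
}

int main(void) {
    double fudge = 2.5;                    /* hypothetical config value */
    double raw_week = 7.0 * SECONDS_PER_DAY;
    printf("added over one week: %+.1f s\n",
           corrected_seconds(raw_week, fudge) - raw_week);
    return 0;
}

One digit past the decimal point gives 0.1 s/day of resolution, which is 
0.7 s/week, so a second per week is about what that knob could deliver.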
When were there enough (Unix?) boxes on the net running NTP and keeping good 
enough time to notice things like leap seconds?
I should go browse the old RFCs and see when the API for telling the kernel 
about pending leap seconds was published.  But somebody may have good stories 
or folklore.
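For reference: the leap indicator bits were in NTP from RFC 958 (1985), and 
the kernel interface for arming a pending leap second was written up in RFC 
1589 (1994).  Its descendant is what Linux still exposes via adjtimex(2). 
Here is a sketch of arming an insertion with the modern API; it's shown only 
to make the mechanism concrete, and it needs CAP_SYS_TIME (root) to stick:

#include <stdio.h>
#include <sys/timex.h>

int main(void) {
    struct timex tx = {0};

    /* modes == 0 is a pure read of the current kernel clock status. */
    if (adjtimex(&tx) == -1) { perror("adjtimex read"); return 1; }

    /* OR in STA_INS so the other status bits survive, then write it
     * back.  This asks the kernel to insert one second at the end of
     * the current UTC day. */
    tx.modes = ADJ_STATUS;
    tx.status |= STA_INS;
    int state = adjtimex(&tx);
    if (state == -1) { perror("adjtimex write"); return 1; }

    /* The return value is the clock state; it should report TIME_INS
     * once the insertion is armed. */
    printf("clock state = %d (TIME_INS = %d)\n", state, TIME_INS);
    return 0;
}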
-- 
These are my opinions.  I hate spam.
    
    