Son-O-MERMAID is a concept that evolved from MERMAID, a freely drifting seismometer equipped with a hydrophone that captures acoustic signals generated by distant seismic activity. MERMAID carries sensitive on-board acoustics, a battery life measured in months, and the latest in seismic event detection and discrimination technology. It floats at depth, but surfaces upon event detection to acquire its GPS position and relay the seismic data via the IRIDIUM satellite constellation. MERMAID was developed by Guust Nolet, formerly of Princeton University and now at the University of Nice. The Son-O-MERMAID instrument is a next-generation drifting prototype, jointly developed at the University of Rhode Island (URI) by Harold Vincent and at Princeton University by Frederik Simons, that combines a surface buoy with instruments dangling from an untethered cable. The surface unit keeps the GPS and IRIDIUM capabilities engaged at all times. The submerged portion of the device consists of a vertical array of three hydrophones and electronics located at a depth of about 750 meters. The purpose of the vertical array of hydrophones is to separate non-propagating noise from seismic arrivals by removing surface reverberations. Figure 1-1 below displays a complete view of Son-O-MERMAID as envisioned by its inventors. In 2012 a first version of Son-O-MERMAID was built; it was designed to record acoustic data and store it in the submerged unit, keeping at the surface only the GPS and IRIDIUM communications modules that report its position and time to a land-based station. Data could be analyzed only after the device was pulled out of the water at the conclusion of a planned test.
This design carried two major risks: first, a wiring disconnect between the submerged and surface units caused by a storm or other unforeseen event; second, a data failure in the submerged unit that could not be detected until the system was pulled out of the water at the conclusion of the test.
The objectives of this project are as follows. First, select the hardware components, based on cost and power efficiency, for a prototype implementation of Son-O-MERMAID. Second, design a telemetry algorithm that reliably transfers the acoustic data from the submerged unit to a computer at the surface in real time, which additionally enables data transmission via the IRIDIUM communications system to a land-based station for real-time data analysis. Third, synchronize the system time of the surface unit to a GPS receiver to provide data timestamps accurate to within one millisecond. Fourth, design, build and test the prototype.
2 REVIEW OF LITERATURE
2.1 THE NETWORK TIME PROTOCOL: FEATURES AND ALGORITHMS
2. 1. 1 NETWORK TIME PROTOCOL DAEMON
The ntpd program is an operating system daemon that synchronizes the system clock to remote NTP time servers or local reference clocks. It is a complete implementation of NTP version 4, defined by RFC 5905, but retains compatibility with version 3, defined by RFC 1305, and versions 1 and 2, defined by RFC 1059 and RFC 1119, respectively. The program can operate in any of several modes, including client/server, symmetric and broadcast modes, and with both symmetric-key and public-key cryptography. Ordinarily, ntpd requires a configuration file containing the configuration commands described in the previously cited documentation; this is covered in detail in the section on the NTP configuration file. Clients can also discover remote servers and configure them automatically without prior configuration details.
The ntpd program normally operates continuously while adjusting the system time and frequency; however, the user can control how ntpd works by selecting the desired command-line options. The next section presents a full description of NTP, what it is and how it works.
2.1.2 NETWORK TIME PROTOCOL (NTP)
The Network Time Protocol (NTP) is an Internet protocol used to synchronize the clocks of computers to a time reference. The standard protocol was developed by Professor David L. Mills at the University of Delaware. Time synchronization across a network is very important when communicating programs run on different computers: if the time is not synchronized then, from the perspective of an external observer, switching between these systems would cause time to jump forward and back, an undesirable effect. As a consequence, isolated networks may run under their own wrong time, but the effects become visible as soon as a connection to the Internet is established. Using available technology with existing workstations and Internet paths, it has been demonstrated that computers can be reliably synchronized to better than a millisecond on LANs and to within a few tens of milliseconds in most places on the global Internet. Most if not all of the references used in this section are to Professor Mills's work; he created the NTP protocol more than two decades ago and continues to work on improving its performance.
2.1.2.1 BASIC FEATURES OF NTP
a. NTP needs a reference clock that defines the true time to operate. All clocks in the network will be set towards that true time.
b. NTP uses Coordinated Universal Time (UTC) as its reference time. UTC is the official standard for current time and evolved from the former Greenwich Mean Time (GMT). It is independent of time zones and is based on a quantum resonance of the cesium atom, making it more accurate than GMT, which is based on mean solar time.
c. NTP is a fault-tolerant protocol that will automatically select the best of several available time sources to synchronize to. Insane time sources will be detected and avoided.
d. NTP forms a highly scalable synchronization network in which nodes exchange time information. The flow of time information from node to node forms a hierarchical graph with reference clocks at the top.
e. NTP selects the best candidates for its time out of many available sources. It uses a highly accurate protocol with a resolution of less than a nanosecond.
f. When a network connection is temporarily unavailable, NTP uses measurements from the past to estimate current time.
g. NTP works on most popular UNIX operating systems and on Windows. As of December 2013 there are two versions of NTP available: version 3 is the official Internet standard, and version 4 is the current development version, with specification RFC 5905 describing NTP specifics and summarizing information useful for its implementation. In addition, some operating system vendors customize and deliver their own versions of NTP. For the Son-O-MERMAID prototype, NTP version 4 was used, and its installation and configuration were customized to run efficiently on the Raspberry Pi and to synchronize with a GPS receiver as its time source.
2.1.2.2 NEW FEATURES OF NTP V4
According to the NTP v4 release notes, the new features of version four as compared to version three are:
a. Use of floating-point arithmetic instead of fixed-point (integer) arithmetic.
b. Redesigned clock discipline algorithm that improves accuracy and the handling of network jitter and polling intervals.
c. Support for nanokernel kernel implementation that provides nanosecond precision.
d. Public-Key cryptography known as autokey that avoids having common secret keys.
e. Automatic server discovery (manycast mode).
f. Fast synchronization at startup and after network failures (burst mode).
g. New and revised drivers for reference clocks.
h. Support for new platforms and operating systems.
2.1.2.3 HOW NTP WORKS
NTP time synchronization services are widely available in the public Internet with several thousand servers distributed in most countries. The NTP subnet operates with a hierarchy of levels where each level is assigned a number called the “stratum”. Stratum 1 (primary) servers are at the lowest level and directly synchronized to national time services via satellite, radio or telephone modem. Stratum 2 (secondary) servers are at the next higher level synchronized to stratum 1 servers and so on. Clients, on the other hand, in order to provide the most accurate, reliable service, typically operate with several redundant servers over diverse network paths.
2.1.2.4 NTP TIMESCALE AND DATA FORMATS
NTP clients and servers synchronize to the UTC timescale used by national laboratories and disseminated by radio, satellite and telephone modem; corrections for time zone or daylight saving time are performed by the operating system. This timescale is determined by the rotation of the Earth about its axis, and since Earth's rotation is gradually slowing relative to International Atomic Time (TAI), a leap second is inserted into UTC at intervals of about 18 months, as determined by the International Earth Rotation Service (IERS), to correct UTC with respect to TAI. There are three approaches to implementing a leap second in NTP. The first is to increment the system clock during the leap second and continue incrementing following the leap; one problem with this approach is that conversion to UTC requires knowledge of all past leap seconds and their epochs of insertion. The second is to increment the system clock during the leap second and step the clock backward one second at the end of it; the problem is that the resulting timescale is discontinuous, and a reading during the leap is repeated one second later. The third is to freeze the clock during the leap second, allowing the time to catch up at the end of it; this is the approach taken by the NTP conventions. Leap second warnings are disseminated by the national laboratories in the broadcast time-code format, and these warnings are propagated from the NTP primary servers via other servers to the clients by the NTP on-wire protocol. The leap second itself is implemented by the operating system kernel. About every eighteen months the IERS issues a bulletin announcing the insertion of a leap second into the UTC timescale.
This normally happens at the end of the last day of June or December. Although the bulletin is available on the Internet at “www.iers.org”, advance notice of leap seconds is also given in signals broadcast from national time and frequency stations, in GPS signals and in telephone modem services. Many but not all reference clocks recognize these signals, and many but not all drivers can decode them and set the leap bits in the time code accordingly. This means that many but not all primary servers can pass these bits in the NTP packet header to dependent secondary servers and clients; secondary servers pass the bits to their dependents, and so on throughout the NTP subnet. When no means are available to determine the leap bits from a reference clock or lower-stratum server, a leap-seconds file can be downloaded from “time.nist.gov” and installed. If precision time kernel support is available and enabled at the beginning of the day of the leap event, the leap bits are set by the Unix ntp_adjtime() system call to arm the kernel for the leap at the end of the day; the kernel then automatically inserts one second exactly at the time of the leap, after which the leap bits are turned off. If kernel support is not available or is disabled, the leap is implemented by setting the clock back one second using the Unix settimeofday() system call, which repeats the last second. However, setting the time backwards by one second does not actually set the system clock backwards; it effectively stalls the clock for one second.
There are two time formats used by NTP, a 64-bit timestamp format and a 128-bit datestamp format. The datestamp format is used internally, while the timestamp format is used in packet headers exchanged between clients and servers. These time formats are shown in Figure 2-1 below. The timestamp format spans 136 years, called an era; the current NTP era began on 1 January 1900, and the next one will begin in 2036.
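As a concrete illustration of the timestamp format, the sketch below (the author's illustration, not taken from the ntpd sources) packs a Unix time into a 64-bit NTP timestamp using the 2,208,988,800-second offset between the 1900 NTP epoch and the 1970 Unix epoch; the 32-bit seconds field wraps at the 2036 era boundary.

```python
# Illustrative sketch: 64-bit NTP timestamp = 32 bits of seconds since
# 1 Jan 1900 plus 32 bits of binary fraction of a second.
NTP_UNIX_DELTA = 2_208_988_800  # seconds from 1 Jan 1900 to 1 Jan 1970

def unix_to_ntp(unix_time):
    """Pack a Unix time (seconds since 1970) into a 64-bit NTP timestamp."""
    secs = (int(unix_time) + NTP_UNIX_DELTA) & 0xFFFFFFFF  # wraps at era end
    frac = int((unix_time - int(unix_time)) * (1 << 32))
    return (secs << 32) | frac

def ntp_to_unix(ts):
    """Unpack a 64-bit NTP timestamp back to Unix time (current era only)."""
    secs = (ts >> 32) - NTP_UNIX_DELTA
    frac = (ts & 0xFFFFFFFF) / (1 << 32)
    return secs + frac
```

The round trip loses only sub-microsecond fraction bits; resolving the era ambiguity after 2036 requires outside knowledge of the approximate date, which is why the 128-bit datestamp carries an explicit era field.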
2.1.2.5 ARCHITECTURE AND ALGORITHMS
Figure 2-2 shows the overall organization of the NTP architecture, as both a client of upstream lower-stratum servers and a server for downstream higher-stratum clients. The figure shows three servers as the remote synchronization source, where each server communicates with a pair of peer/poll processes. Packets are exchanged between the client and server using the on-wire protocol described later in this document. The poll process sends NTP packets at intervals ranging from 8 seconds to 36 hours, managed so as to maximize accuracy while minimizing network load. The peer process receives NTP packets, performs the packet sanity tests and discards packets that fail them. For the packets that pass, the peer process runs the on-wire protocol, which uses four raw timestamps: the origin timestamp T1 upon departure of the client request, the receive timestamp T2 upon arrival at the server, the transmit timestamp T3 upon departure of the server reply, and the destination timestamp T4 upon arrival at the client. These timestamps are recorded by the “rawstats” option of the “filegen” command and are used to calculate the clock offset and roundtrip delay samples:
θ = [(T2 – T1) + (T3 – T4)] / 2,
δ = (T4 – T1) – (T3 – T2),
where (T4 – T1) is the time elapsed on the client side between the emission of the request packet and the reception of the response packet, and (T3 – T2) is the time the server waited before sending the answer.
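The two formulas can be checked with a small worked example; the timestamp values below are invented for illustration (a server clock 50 ms ahead of the client, 10 ms one-way delay and 20 ms server turnaround):

```python
def offset_delay(t1, t2, t3, t4):
    """Clock offset (theta) and roundtrip delay (delta) from the four
    on-wire timestamps, all expressed in seconds."""
    theta = ((t2 - t1) + (t3 - t4)) / 2.0
    delta = (t4 - t1) - (t3 - t2)
    return theta, delta

# Hypothetical exchange: client sends at 0.000, server receives at 0.060
# (its clock is 0.05 s ahead), replies at 0.080, client receives at 0.040.
theta, delta = offset_delay(0.000, 0.060, 0.080, 0.040)
# theta ≈ 0.05 s (server ahead by 50 ms), delta ≈ 0.02 s (20 ms roundtrip)
```

Note that the server's turnaround time (T3 – T2) is subtracted out of the delay, so only the actual network transit time contributes to δ.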
The offset and delay statistics are processed by a set of mitigation algorithms: the offset and delay samples most likely to produce accurate results are selected, and the servers that pass the sanity tests are declared selectable. Statistics from the selectable population are then used by the clock select algorithm to determine a number of truechimers according to Byzantine agreement and correctness principles. Another set of algorithms combines the survivor offsets, designates one of them as the system peer and produces the final offset used by the clock discipline algorithm to adjust the system clock time and frequency. The following sections describe these algorithms in more detail.
The NTP software operates in each server and client as an independent daemon process. The architecture of the NTP daemon is illustrated in Figure 2-3. At designated intervals, a client sends a request to each of a set of configured servers and expects a response at some later time. The exchange yields four timestamp readings, which the client uses to calculate the clock offset and roundtrip delay relative to each server separately. The clock filter algorithm discards offset “outlyers” associated with large delays, which can produce large errors. The clock offsets produced by the clock filter algorithm for each server are then processed by the intersection algorithm to detect and discard misbehaving servers called “falsetickers”. The remaining “truechimers” are processed by the clustering algorithm to discard outlyers, and the survivors are weighted by synchronization distance and combined to produce the clock correction used by the clock discipline algorithm to discipline the computer clock. These algorithms are described in more detail in the following sections.
2.1.2.5.1 CLOCK FILTER ALGORITHM
The clock filter algorithm processes the offset and delay samples produced by the on-wire protocol for each peer process separately. It uses a sliding window of eight samples and picks out the sample with the least expected error. As the delay increases, the offset variation increases, so the best samples are those with the lowest delay: the sample with the lowest delay also has the least offset variation and is the best candidate to synchronize the system clock. The clock filter algorithm works best when delays are statistically identical in the reciprocal directions between server and client. When delays are not reciprocal, or when the transmission delays in the two directions are traffic dependent, this may not be the case. A common example is downloading or uploading a large file over a DSL link, where the delays in the two directions typically differ significantly, resulting in large errors.
In the clock filter algorithm, the offset and delay samples from the on-wire protocol are inserted as the youngest stage of an eight-stage shift register, thus discarding the oldest stage. Each time an NTP packet is received from a source, a dispersion sample is initialized as the sum of the precisions of the server and client, where precision is defined by the latency to read the system clock and varies from 1000 nanoseconds (ns) to 100 milliseconds (ms) in modern machines. The dispersion sample is inserted in the shift register along with the associated offset and delay samples, and the dispersion sample in each stage is then increased at a fixed rate of 15 µs/s, representing the worst-case error due to skew between the server and client clock frequencies. In each peer process the clock filter algorithm selects the stage with the smallest delay, which generally represents the most accurate data. The peer jitter statistic is then computed as the root mean square (RMS) of the differences between the offset samples and the offset of the selected stage. The peer dispersion statistic is determined as a weighted sum of the dispersion samples in the shift register; as samples enter the register, the peer dispersion drops from 16 s to 8 s, 4 s, 2 s, and so forth. When a source becomes unreachable, the poll process inserts a dummy infinity sample in the shift register for each poll sent, and after eight polls the register returns to its original state. Once a sample is selected, it remains selected until a newer sample with lower delay is available, which typically occurs when an older selected sample is discarded from the shift register. The result can be the loss of up to seven samples in the shift register, so the output sample rate can never be less than one in eight input samples; the clock discipline algorithm is designed to operate at this rate.
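The sliding-window selection can be sketched as follows. This toy version (the author's illustration, not the ntpd implementation) keeps only the minimum-delay selection and peer jitter, omitting the dispersion aging and the dummy infinity samples:

```python
from collections import deque

class ClockFilter:
    """Toy clock filter: keep the last eight (offset, delay) samples and
    select the one with the lowest delay, which generally carries the
    least offset variation."""
    def __init__(self, stages=8):
        self.reg = deque(maxlen=stages)  # oldest stage falls off automatically

    def add(self, offset, delay):
        self.reg.append((offset, delay))  # insert as the youngest stage

    def select(self):
        return min(self.reg, key=lambda s: s[1])  # lowest-delay sample

    def jitter(self):
        """Peer jitter: RMS difference between each offset in the register
        and the offset of the selected stage."""
        sel = self.select()[0]
        return (sum((o - sel) ** 2 for o, _ in self.reg) / len(self.reg)) ** 0.5
```

The `deque` with `maxlen=8` reproduces the shift-register behavior: inserting a ninth sample silently discards the oldest one.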
2.1.2.5.2 CLOCK SELECT ALGORITHM
The clock select algorithm determines from a set of sources which are correct (truechimers) and which are not (falsetickers), based on a set of formal correctness assertions. To begin with, a number of sanity checks are performed to sift the selectable candidates from the source population.
a. A stratum error occurs if the source had never been synchronized, or if the stratum of the source is below the floor option or not below the ceiling option of the “tos” command. The default values for these options are 0 and 15, respectively. It is important to note that 15 is a valid stratum for a server, but a server operating at that stratum cannot synchronize clients.
b. A distance error occurs for a source if the root distance (also known as synchronization distance) of the source is not below the distance threshold set by the “maxdist” option of the “tos” command. The default value for this option is 1.5 seconds.
c. A loop error occurs if the source is synchronized to the client. This can occur if two peers are configured with each other in symmetric modes.
d. An unreachable error occurs if the source is unreachable or if the server or peer command for the source includes the “noselect” option.
Sources showing one or more of these errors are considered non-selectable; only the selectable candidates are considered in the following algorithm. Given the measured offset θ₀ and root distance λ, the correctness interval is defined as [θ₀ – λ, θ₀ + λ], and the true value of θ lies somewhere in this interval. The problem now consists in determining, from a set of correctness intervals, which represent truechimers and which represent falsetickers. To solve it a new interval is defined: the intersection interval is the smallest interval containing points from the largest number of correctness intervals. A candidate whose correctness interval contains points in the intersection interval is a truechimer, and the best offset estimate is the midpoint of its correctness interval; a candidate whose correctness interval contains no points in the intersection interval is a falseticker. In summary, the midpoint sample produced by the clock filter algorithm is the maximum likelihood estimate and thus best represents the truechimer time.
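The intersection interval can be found with a simple sweep over the interval endpoints, in the spirit of Marzullo's algorithm. The sketch below is the author's simplification of what ntpd actually does (the real algorithm also weights candidates and tracks midpoints):

```python
def intersection_interval(intervals):
    """Sweep over interval edges to find the smallest range covered by the
    largest number of correctness intervals (Marzullo-style sketch)."""
    edges = sorted([(lo, 1) for lo, hi in intervals] +
                   [(hi, -1) for lo, hi in intervals])
    best = count = 0
    lo_best = hi_best = None
    for i, (x, step) in enumerate(edges):
        count += step  # +1 entering an interval, -1 leaving one
        if count > best:
            best = count
            lo_best = x
            hi_best = edges[i + 1][0] if i + 1 < len(edges) else x
    return lo_best, hi_best, best

def truechimers(candidates):
    """candidates: list of (theta, lam). Return those whose correctness
    interval [theta - lam, theta + lam] touches the intersection interval."""
    ivals = [(t - l, t + l) for t, l in candidates]
    lo, hi, _ = intersection_interval(ivals)
    return [c for c, (a, b) in zip(candidates, ivals) if b >= lo and a <= hi]
```

A candidate whose interval misses the intersection interval entirely is flagged as a falseticker and plays no further role in synchronization.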
2.1.2.5.3 CLOCK CLUSTER ALGORITHM
The clock cluster algorithm processes the truechimers produced by the clock select algorithm to produce a list of survivors, which are used by the mitigation algorithms to discipline the system clock. The cluster algorithm operates in a series of rounds; in each round the truechimer furthest from the offset centroid is pruned from the population, and the rounds continue until a specified termination condition is met. First, the truechimer associations are saved on an unordered list, with each candidate entry identified by an index i (i = 1, …, n), where n is the number of candidates. Let θ(i) be the offset and λ(i) the root distance of the ith entry; recall that the root distance is equal to the root dispersion plus half the root delay. For the ith candidate, a statistic called the select jitter relative to the ith candidate is calculated as follows. Let dᵢ(j) = |θ(j) – θ(i)|·λ(i), where θ(i) is the peer offset of the ith entry and θ(j) the peer offset of the jth entry, both produced by the clock filter algorithm. The metric used by the cluster algorithm is the select jitter φs(i), computed as the root mean square (RMS) of the dᵢ(j) as j ranges from 1 to n. The objective in each round is to prune the entry with the largest metric until the termination condition is met. The select jitter must be recomputed at each round, but the peer jitter does not change. The termination condition has two parts. First, if the number of survivors is not greater than the threshold set by the “minclock” option of the “tos” command, the pruning process terminates; “minclock” defaults to 3 but can be changed to fit special conditions. The second termination condition is more intricate. Figure 2-4 below shows a round in which the candidates of (a) are pruned to yield the candidates of (b). Let φmax be the maximum select jitter and φmin the minimum peer jitter over all candidates. In (a), candidate 1 has the highest select jitter, so φmax = φs(1).
Candidate 4 has the lowest peer jitter, so φmin = φʀ(4). Since φmax > φmin, select jitter dominates peer jitter so the algorithm prunes candidate 1. In (b), φmax = φs(3) and φmin = φʀ(4). Since φmax < φmin, pruning additional candidates does not reduce select jitter, and the algorithm terminates with candidates 2, 3 and 4 as survivors. The survivor list is passed on to the mitigation algorithms, which combine the survivors, select a system peer, and compute the system statistics passed on to dependent clients.
2.1.2.5.4 CLOCK DISCIPLINE ALGORITHM (NTP V4)
The clock discipline algorithm adjusts the computer clock time as determined by NTP, compensates for the intrinsic frequency error, and adjusts the poll interval and loop time constant dynamically in response to measured network jitter and oscillator stability. The algorithm functions as a hybrid of two different feedback control systems. In a phase-lock loop (PLL) design, the measured time errors discipline a type-II feedback loop that controls the phase and frequency of the clock oscillator. In a frequency-lock loop (FLL) design, the measured time and frequency errors are used separately to discipline type-I feedback loops, one controlling the phase and the other the frequency. The system process polls the peer processes at intervals from a few seconds to over a day, depending on peer type. When a new sample of offset, delay and dispersion is available in a peer process, a bit is set in its state variables; the system process, upon noticing this bit, clears it and calls the clock selection, clustering and combining algorithms. The clock discipline algorithm adjusts the clock oscillator time and frequency with the aid of the clock adjust process, which runs at intervals of one second.
The clock discipline algorithm is implemented as the feedback control loop shown in Figure 2-5. The variable θr represents the reference phase provided by NTP, and θc the control phase produced by the variable-frequency oscillator (VFO), which controls the computer clock. The phase detector produces a signal Vd representing the instantaneous phase difference between θr and θc. The clock filter functions as a tapped delay line, with the output Vs taken at the sample selected by the algorithm. The loop filter, with impulse response F(t), produces a correction Vc, which controls the VFO frequency and thus its phase θc. The characteristic behavior of this model, determined by F(t) and the various gain factors, is studied in many textbooks on control theory.
The redesigned clock discipline algorithm used in NTP v4 is implemented using two sub-algorithms, one based on a linear, time-invariant PLL and the other on a nonlinear, predictive FLL. Both predict a time correction x as a function of the phase error θ, represented by Vs in Figure 2-6.
The PLL predicts a frequency adjustment yPLL as an integral of past time offsets, while the FLL predicts a frequency adjustment yFLL directly from the difference between the last time correction and the current one. The two adjustments are combined and added to the current clock frequency y, as shown in Figure 2-6. The x and y are then used by the clock adjust process to adjust the VFO frequency and close the feedback loop, as shown in Figure 2-5. A complete mathematical derivation of the clock discipline algorithm is given in Mills's published work.
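The interplay of phase and frequency corrections can be illustrated with a toy simulation. The gains, intervals and initial conditions below are invented for the example and are not ntpd's actual constants; the point is only that amortizing the measured offset while steering the frequency with a damped term drives the residual frequency error to zero:

```python
def discipline(freq_error, updates=200, poll=16.0):
    """Toy hybrid discipline loop (illustrative gains, not ntpd's).
    freq_error is the oscillator's intrinsic frequency error (dimensionless,
    e.g. 50e-6 for 50 PPM). Returns (residual offset, learned frequency adj)."""
    clock_off = 0.010  # start 10 ms off true time
    freq_adj = 0.0     # learned frequency correction
    for _ in range(updates):
        clock_off += (freq_error + freq_adj) * poll  # drift between polls
        x = -clock_off                # measured phase correction
        clock_off += x                # apply the phase correction in full
        freq_adj += x / (4.0 * poll)  # damped frequency steer (FLL-style)
    return clock_off, freq_adj

off, adj = discipline(50e-6)  # hypothetical 50 PPM oscillator error
```

After a couple of hundred updates the learned adjustment cancels the intrinsic 50 PPM error almost exactly, which is the steady state the real PLL/FLL hybrid also reaches, with time constants adapted to the measured jitter.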
2.1.2.5.5 NTP POLL PROCESS
The poll process sends NTP packets at intervals determined by the clock discipline algorithm. The process is designed to provide an update rate sufficient to maximize accuracy while minimizing network overhead. The rate is determined by a poll exponent (a power of 2) ranging from 3 (8 seconds) to 17 (36 hours). The minimum and maximum poll exponents within this range can be set using the “minpoll” and “maxpoll” options of the server command, with defaults of 2^6 (64 seconds) and 2^10 (1024 seconds) respectively in NTP v3; in NTP v4 these values can be set to a minimum of 2^4 (16 seconds) and a maximum of 2^17 (131,072 seconds). Within this range, the clock discipline algorithm automatically manages the poll interval based on current network jitter and oscillator wander. The poll interval is managed by a heuristic algorithm developed over several years of experimentation and depends on an exponentially weighted average of clock offset differences, called clock jitter, and a jiggle counter. As an option of the server command, instead of a single packet the poll process can send a burst of several packets at 2-s intervals; this is intended to reduce the time to synchronize the clock at initial startup (iburst) and/or to reduce the phase noise at the longer poll intervals (burst). For the iburst option, six packets are sent in the burst, which is the number normally needed to synchronize the clock; for the burst option, the number of packets in the burst is determined by the difference between the current poll exponent and the minimum poll exponent as a power of 2. For example, with the default minimum poll exponent of 6 (64 seconds), only one packet is sent for every poll, while the full number of eight packets is sent at a poll exponent of 9 (512 seconds). This ensures that the average headway never exceeds the minimum headway. In addition, when iburst or burst is enabled, the first packet of the burst is sent immediately, but the remaining packets are sent only when the reply to the first packet is received; this means that even if a server is unreachable, the network load is no more than at the minimum poll interval. A key statistic used to control the poll interval is the RMS error measured by the clustering algorithm, which sifts the best subset of clocks from the current peer population; this statistic is called the select dispersion, expressed as “eSEL”.
The squares of these samples are held in an n-stage shift register, with n = 4 chosen by experiment. The system dispersion “eSYS” is then calculated as the RMS sum of “eSEL” and the peer dispersion “ePEER” of the selected peer. If |θ| > Y·eSYS, where Y = 5 is experimentally determined, the oscillator frequency is deviating too fast and the poll interval is reduced in stages to the minimum. If the opposite case holds for some number of updates, the poll interval is slowly increased in steps to the maximum. Under typical operating conditions the interval hovers close to the maximum, but on occasions when the oscillator frequency wanders more than about 1 PPM, it quickly drops to lower values until the wander subsides.
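The gating rule can be sketched as follows. The function name and the one-exponent-per-update steps are the author's illustration; ntpd's actual heuristic moves through a jiggle counter rather than fixed steps:

```python
def adjust_poll(poll_exp, offset, sys_dispersion,
                gate=5, min_exp=4, max_exp=17):
    """Toy poll-interval manager: back off quickly when |offset| exceeds
    gate * system dispersion, creep upward otherwise. Returns the new
    poll exponent and the corresponding interval in seconds."""
    if abs(offset) > gate * sys_dispersion:
        poll_exp = max(min_exp, poll_exp - 1)  # oscillator wandering: poll faster
    else:
        poll_exp = min(max_exp, poll_exp + 1)  # quiet: stretch the interval
    return poll_exp, 2 ** poll_exp
```

Because the interval is a power of two, each step halves or doubles the polling rate, which matches the staged behavior described above.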
2.1.2.5.6 NTP CLOCK STATE MACHINE
The NTP algorithms work well to sift good data under light to moderate network and server loads, but under conditions of extreme network congestion, operating system latencies and oscillator wander, both linear time-invariant systems (PLL) and predictive systems (FLL) may fail. The result can be frequent time steps and large time and frequency errors. To cope with such large transients, the clock discipline algorithm in NTP v4 is managed by the state machine shown in Figure 2-7.
Initially, the machine is unsynchronized and in the UNSET state. If the minimum poll interval is 1,024 s or greater, the first update received sets the clock and the machine transitions to the HOLD state. If the interval is less than 1,024 s, these actions are deferred for several updates, allowing the synchronization distance to be reduced below 1 s and the algorithms to accumulate reliable error estimates.
In the HOLD state, the sanity checks, spike detectors and tolerance clamps are disabled, and the clock discipline algorithm is forced to operate in FLL mode only, to allow the fastest adaptation to the particular oscillator frequency. The machine remains in this state for at least five updates; after that, once the nominal clock offset has decreased below 128 milliseconds, the machine transitions to the SYNC state and remains there barring unusual conditions.
In SYNC state, the sanity checks, spike detectors and tolerance clamps are operative. To protect against frequency spikes in FLL predictions at small update intervals, the frequency adjustments are clamped at 1 PPM, and to protect against runaway frequency offsets in FLL predictions at large update intervals, the frequency estimate is clamped at 500 PPM, and finally, to protect against disruptions due to severe network congestion, frequency adjustments are disabled if system dispersion exceeds 128 milliseconds.
2.2 THE GLOBAL POSITIONING SYSTEM DAEMON (GPSD)
GPSD is a software program that monitors one or more GPS or AIS receivers attached to a host computer through serial or USB ports. AIS (Automatic Identification System) is a device installed on some vessels to transmit their position, speed and course, among other information. The gpsd program makes all of the location, course and velocity data from the sensors available to be queried on TCP port 2947 of the host computer. With gpsd, multiple time- and location-aware client applications such as NTP can easily share access to these receivers without contention or loss of data.
2.3 PULSE PER SECOND (PPS)
PPS is an electrical signal with a width of less than a second and a sharply rising or abruptly falling edge that repeats accurately once per second. The signal is output by radio beacons, GPS receivers and other types of precision oscillators, and it can be used to discipline the local clock oscillator to a high degree of precision, typically to less than 10 ms in time and 0.01 parts-per-million (PPM) in frequency. The PPS signal can be connected via the data carrier detect (DCD) pin of a serial port or via the acknowledge (ACK) pin of a parallel port.
Both connections require operating system support, which is available in Linux, FreeBSD and Solaris, and on an experimental basis for several other systems. The PPS application program interface defined in RFC-2783 (PPSAPI) is the only PPS interface supported; older versions are no longer supported.
2.4 SYNCHRONIZATION OF THE NTP SERVER WITH GPS AND PPS
The objective of this section is to have a Global Positioning System (GPS) receiver drive a pulse-per-second (PPS) signal into the Network Time Protocol daemon (NTPD) to build a highly accurate time reference server. This section has two parts: the first describes the devices, drivers and daemons necessary to synchronize the system time with the GPS receiver; the second, covered in the next chapter, summarizes the configuration steps and configuration files necessary to get the system up and running, synchronized to a GPS receiver with an accuracy that depends on the receiver type.
2.4.1 THE GPS DEVICE
The author has found two ways to propagate the PPS signal to the ntpd server, each presenting its own variants. In either case, the GPS receiver must be a device capable of sourcing two different types of data: the absolute date and time, and the 1-Hz clock signal (PPS). The first provides the complete current date and time, but with poor accuracy, since this information is sent over the data line of the serial port (TxD/pin 2) and encoded using some protocol, e.g. NMEA. The PPS, on the other hand, provides a very accurate clock (1 µs in the GPS 18LVC receiver) but no reference at all to the absolute time. This signal is wired to the Data Carrier Detect (DCD) pin (pin 1) of the serial port. PPS indicates with good precision when each second begins, but it does not tell us which second it is; this timing information must therefore be combined with the protocol messages sent by the GPS receiver to obtain both precision and a complete timestamp. The GPS device must speak the NMEA protocol, which sends out NMEA messages every second.
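A minimal sketch of the NMEA side of this combination: recovering the coarse absolute timestamp from a $GPRMC sentence (after verifying its checksum) so that it can label the second marked by the PPS edge. The sentence below is constructed programmatically so the checksum is self-consistent; its values are illustrative, not captured data.

```python
import datetime
from functools import reduce

def nmea_checksum(body):
    """XOR of all characters between '$' and '*', as two hex digits."""
    return "%02X" % reduce(lambda c, ch: c ^ ord(ch), body, 0)

def parse_rmc(sentence):
    """Extract a UTC datetime from a $GPRMC sentence, verifying the checksum."""
    body, _, given = sentence.lstrip("$").partition("*")
    if nmea_checksum(body) != given.strip().upper():
        raise ValueError("bad NMEA checksum")
    f = body.split(",")
    hhmmss, ddmmyy = f[1][:6], f[9]      # field 1: UTC time, field 9: date
    yy = int(ddmmyy[4:6])
    year = 1900 + yy if yy >= 80 else 2000 + yy
    return datetime.datetime(year, int(ddmmyy[2:4]), int(ddmmyy[0:2]),
                             int(hhmmss[0:2]), int(hhmmss[2:4]), int(hhmmss[4:6]))

# Build a self-consistent example sentence (values are illustrative):
BODY = "GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W"
SENTENCE = "$%s*%s" % (BODY, nmea_checksum(BODY))
```

The PPS edge timestamp itself carries no date, so the second it marks would be labeled with the datetime recovered here.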
2.4.2 NTPD REFERENCE CLOCKS
The NTPD server supports several types of drivers. These drivers are low-level callback functions registered within the NTPD core that implement access to several types of local clocks, such as GPS receivers. Each driver is identified by a pseudo-IP address and listed in the NTP configuration file located at “/etc/ntp.conf”. Two drivers are involved in GPS/NTP time synchronization: 127.127.20.x, the NMEA reference clock driver, and 127.127.28.x, the SHM (shared memory) driver. The NMEA reference clock driver expects a GPS device sending NMEA messages to the system via a serial port named “/dev/gpsX”, with the PPS signal wired through the DCD pin and accessible from a “/dev/gpsppsX” device. Figure 2-8 below shows the wiring of the GPS receiver to interface with the serial port.
The “/dev/gpsX” device appears in the system as a link to the “/dev/ttyS0” serial device, and “/dev/gpsppsX” as a link to the “/dev/pps0” device provided by the kernel PPS API. This API collects and distributes precision kernel clock information from/to userland programs and supports a DCD pin connected to an 8250 UART. The DCD pin is sensed using a serial line discipline named PPS, which is an extension of the TTY line discipline. The sensing occurs at interrupt time, so it provides very precise time stamping of the DCD events. This PPS API, also known as LinuxPPS, is available as a module in Linux kernel version 2.6.34 or later; for Linux systems with an older kernel, LinuxPPS is not yet available and a patch must be applied. For ntpd to synchronize the system clock with the GPS receiver, using the PPS signal for precision timing, the following two lines must be added to the “/etc/ntp.conf” file:
server 127.127.20.0 mode 1 minpoll 4 prefer
fudge 127.127.20.0 flag3 1 flag2 0 time1 0.0
where:
mode 1 means that only the $GPRMC messages of the NMEA protocol will be analyzed;
flag3 1 tells ntpd to use the PPS line discipline of the kernel;
flag2 0 tells the driver to use the rising edge of the DCD signal to indicate the start of each second;
time1 specifies the PPS time offset calibration factor, in seconds and fraction, with default 0.0.
To activate the PPS line discipline on the serial port connected to the GPS, it is necessary to run the “ldattach” utility, part of the “util-linux-ng” package (v2.14 and up), which runs in the background to keep the serial port open and the line discipline active. Ldattach was provided with the Linux distribution used in our test; for any system where this tool is not provided, it must be built from source, available at kernel.org (http://www.kernel.org/pub/linux/utils/util-linux-ng/), or obtained from your distribution provider as an updated util-linux package.
2.4.3 SHM REFERENCE CLOCK
The Shared Memory (SHM) driver accepts timing information from a System V IPC (Inter-Process Communication) shared memory segment; this timing information can be observed in the ntpd logs. The information is written there by some external process, which reads it from the GPS and writes it to the shared memory so that ntpd can process it. One user-space utility that performs this task is gpsd, a general-purpose daemon designed to talk to most types of GPS modules; according to its documentation, it is also capable of processing the PPS signal and sending timing information to ntpd via shared memory. Gpsd feeds two devices to ntpd: one with the absolute timestamp parsed from the NMEA messages, and another fed by the PPS. Ntpd sees these as two different SHM devices, so the “ntp.conf” file must include these lines:
server 127.127.28.0 minpoll 4
fudge 127.127.28.0 refid GPS
server 127.127.28.1 minpoll 4 prefer
fudge 127.127.28.1 refid PPS
where:
refid is simply a string that specifies the driver reference identifier.
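The handoff through the SHM segment can be modeled in miniature as follows. A plain Python dict stands in for the real System V shared-memory segment, and the count/valid handshake mirrors the mode-1 protocol described in ntpd's SHM driver documentation; everything else is an illustrative stand-in, not gpsd's actual implementation.

```python
# Simplified model of the ntpd SHM handoff (mode 1): the writer bumps a
# counter before and after updating the payload so the reader can detect
# a torn read, and the reader consumes the sample by clearing "valid".

def shm_write(seg, clock_ts, receive_ts):
    """Publish a (GPS time, system time) pair the way a mode-1 writer would."""
    seg["valid"] = 0          # invalidate while updating
    seg["count"] += 1         # bump before writing the payload...
    seg["clockTimeStampSec"] = clock_ts      # time parsed from the GPS
    seg["receiveTimeStampSec"] = receive_ts  # local system time of receipt
    seg["count"] += 1         # ...and after, so readers can detect a race
    seg["valid"] = 1

def shm_read(seg):
    """Return (clock, receive) only if the segment was stable and valid."""
    if not seg["valid"]:
        return None
    before = seg["count"]
    pair = (seg["clockTimeStampSec"], seg["receiveTimeStampSec"])
    if seg["count"] != before or not seg["valid"]:
        return None           # writer raced us; discard this sample
    seg["valid"] = 0          # consume the sample, as ntpd does
    return pair
```

Each valid sample is read at most once: after ntpd consumes it, the segment stays invalid until the writer publishes the next second's pair.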
In our approach to building this new Son-O-MERMAID prototype, the design of Figure 1-1 was modified as described in Figure 3-1 below. Building Son-O-MERMAID can be divided into two major components. One is the mechanical part, which includes building the instrumentation housing, array assembly and suspension cable; this task is out of the scope of this work and has been finalized. The other consists of the hardware integration and processing, which constitute the focus of the author’s work.
3.1 Son-O-MERMAID SYSTEM DESIGN
Figure 3-1 shows a high-level architecture of Son-O-MERMAID on the right and a test bed for development of the telemetry algorithms on the left. As this figure shows, Son-O-MERMAID can be divided into three subcomponents. First, a surface component, which receives and stores the acoustic data sent from the submerged unit for further analysis; this unit runs a very accurate system clock, synchronized to a GPS receiver and used to time stamp the acoustic data. Second, a submerged component, immersed at a depth of ~750 meters, which collects, samples and digitizes acoustic data. Third, the interface that connects the submerged and surface components, which in this figure consists of the data lines.
3.1.1 THE SUBMERGED COMPONENT
The submerged component includes the following parts: a three-hydrophone array, an analog-to-digital converter (ADC), a central processing unit (CPU) and one RS-485 adapter. All these parts, with the exception of the hydrophones, are placed inside a pressure vessel. Figure 3-2 shows this component; a description of its parts and their integration follows.
3.1.1.1 HYDROPHONE ARRAY
The hydrophones arranged in this array are manufactured by High Tech Inc., and their technical specifications are listed in Table 1 in the Appendix, section A.2. This three-hydrophone array was built at the Equipment Development Laboratory (EDL) at URI’s Narragansett Bay Campus by Catherine Cipolla and Gary Savoie, whose combined experience in scientific instrument design and fabrication exceeds 50 years. The array is displayed in Figure 3-2 above and Figure 3-3 below, and it is expected to be used in the next deployment of Son-O-MERMAID (version 2).
During the prototype design of Son-O-MERMAID version 2, rather than using the array, the array’s acoustic data were most of the time simulated by a function generator, as shown in Figures 3-4a and 3-4b below. The waveform from the function generator was split into three channels to simulate the three inputs from the array into the ADC board. At the ADC board, the analog data were digitized and read in by the Phidget SBC in the submerged unit. The data were then sent to the surface unit either as a complete file or sample by sample, depending on the approach selected, as described later.