Reuters technical development chronology 1991-1998: Introduction
Wednesday 26 April 2017
The first part of the chronology covered 1964 to 1990. At the end of that era, the founders of the financial services business, Glen Renfrew and Michael Nelson, were leaving the scene. Nelson had retired in 1989, Renfrew would do so in 1991.
A new CEO, Peter Job, was appointed who inherited a reasonable revenue-generating machine but one with plenty of string and sealing wax behind the glossy exterior.
IDN was well established as an architecture but the tricky task of closing down Monitor loomed large and news delivery over IDN was still problematic. The dealing service had been technically overhauled and apart from the old PDP-8 terminals little remained of the original infrastructure. Automated matching and GLOBEX were awaited. The trading room systems business was healthy enough and the world of open systems and data feeds beckoned.
Finally, a pioneering move was well under way to replace custom-designed terminals with Personal Computer platforms. Behind the scenes discussions started on how to construct a database to replace the Toronto-based historical information mainframe.
The previous era also concluded with the almost complete disappearance of engineering development and special products activities. Once, they had been a strong competitive advantage in developing products for data feeds and for trading room systems and, before that, in communications initiatives. They had mostly been displaced by standard offerings in a largely digital world.
Unsurprisingly, the subject of development efficiency - how to get more productivity out of software developers - had made little progress. On the other hand, a much better grasp of project management and of “how to put the ball over the line” had helped enormously in keeping up with the demands placed on development. Project management encouraged more efficient use of resources and better problem definition.
Unseen work such as keeping up with the routine of data collection and distribution still consumed a large percentage of overall development effort. It may be described as the “curse of legacy”, or “upgrading the engines whilst the plane is still flying”.
Enormous amounts of development time had to be allocated to consolidating products and keeping up with unavoidable external change. This was at the expense of moving forward. The generic requirement to deliver all data everywhere despite geographical communications infrastructure shortcomings continued to be a technical headache.
Significant acquisitions up to 1990 were as follows:
- RICH, initially a supplier of video switching systems for the client site. Pioneered networked trading room systems and gave Reuters market share and an entry into the big trading room systems market.
- Instinet, a successful automated trading system for equities.
- Wyatts, a developer and vendor of trading systems for brokers who dealt mainly by voice. Complemented the trading room systems product line.
- Schwarzatron, a vendor of a rival quotes system.
- Finsbury Data Services, a vendor of interactive databases of company information.
- IPSharp, a company with many technical assets, including equity databases. Provided Reuters with an infrastructure for operating, maintaining and creating large historical databases. Not technically compatible with Reuters' largely DEC-based infrastructure due to its use of IBM mainframes and APL, an arcane programming language.
- Hovland Business Systems, a vendor of a client financial position-keeping system.
A recap of the approximate status of systems supporting mainstream products at 1990 is as follows:
Distinct data collection systems existed for exchange data and contributed data.
Exchange data was typically in well-defined "logical" form rather than as loosely defined pages of information. It came from sources over which Reuters had little control in terms of data rate and protocol. Each data source typically required application-level processing in order to apply the relevant market rules. There were main quotations databases in each technical centre (London, Geneva and Hauppauge), each storing the data from a number of ticker processing systems. Usually these processed exchange tickers but occasionally took other forms of data from sources other than exchanges. Local area management controlled the ticker systems, as they were seen as the most knowledgeable on the significance of the data.
Contributed data had historically been collected in paginated form via the Monitor system, though an increasing amount was provided in logical form. The flow of data from a contributor was often controlled to avoid flooding the network. Usually Reuter-designed protocols were used and the data was collected from contribution processing systems connected via HPSN or directly from the client terminals. The data was processed and stored by an area Monitor Contributed Database (MCD) which also accepted contributions from the residual Monitor network.
Thus both data collection mechanisms were largely centralised and assumed that the same data was to be made available everywhere with no variation.
IDN also carried news and time series data, prepared by the Toronto mainframe.
The IDN data content consisted of a large database of records. There were different record types for each class of market instrument, e.g. stock market quote or government treasury bond. For each record type there was a record template which defined the order of fields in the record and their encoding for transmission. There was also a display template which told the terminal device how to present the information to the customer. This was all very sensible. However, there was an early implementation decision, aimed at speeding product launch, to defer the real time distribution of templates. Any IDN component which needed to interpret, store or display market information was forced to store the templates statically and this made subsequent change very difficult.
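The template mechanism just described, and the cost of storing templates statically, can be illustrated with a small sketch. All names, field orders and layouts below are invented for illustration; they are not the actual IDN definitions.

```python
# Hypothetical record template: defines the order of fields in a record
# and (implicitly here) their encoding for transmission.
RECORD_TEMPLATES = {
    "EQUITY_QUOTE": ["RIC", "BID", "ASK", "LAST", "VOLUME"],  # invented layout
}

# Hypothetical display template: tells the terminal how to present
# the fields of a record to the customer.
DISPLAY_TEMPLATES = {
    "EQUITY_QUOTE": "{RIC:<8} bid {BID:>8} ask {ASK:>8} last {LAST:>8}",
}

def decode_record(record_type, raw_fields):
    """Pair transmitted field values with names using the record template.

    Because templates were stored statically in every component that had
    to interpret, store or display data (rather than distributed in real
    time), changing a layout meant changing every such component."""
    template = RECORD_TEMPLATES[record_type]
    return dict(zip(template, raw_fields))

def render_record(record_type, record):
    """Format a decoded record for display using the display template."""
    return DISPLAY_TEMPLATES[record_type].format(**record)

quote = decode_record("EQUITY_QUOTE",
                      ["IBM.N", "101.25", "101.50", "101.30", "12000"])
print(render_record("EQUITY_QUOTE", quote))
```

The design choice of separating the wire layout (record template) from the presentation (display template) was sound; the pain came solely from deferring real-time template distribution.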
From the quotes and contributed data central databases, updates were distributed to the core IDN network where they updated full copy databases known as Data Retrieval Systems (DRS). A number of DRSs at each of the technical centres were used to service client retrieval requests. The DRS design ensured that extra systems could be added as needed to satisfy demand.
Beneath the DRSs there were two main distribution mechanisms: a terrestrial point-to-point network which covered the bulk of subscribers and a small dish broadcast system (SDS).
In the point-to-point network all updates were distributed via LAN bridges to Secondary Data Centres (SDCs) whose DRS systems cached only those records that had been retrieved recently (say a few days). There could also be another layer of smaller centres, DRS mini-cache centres, which just stored records displayed or stored by terminals local to the cache.
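The retrieval hierarchy above - full-copy DRSs at the technical centres, recency-based caches at the SDCs - can be sketched as follows. Class names, the time-based retention policy and the three-day figure are assumptions for illustration only.

```python
import time

class FullCopyDRS:
    """Technical-centre DRS: holds a full copy of the database."""
    def __init__(self):
        self.records = {}

    def update(self, key, record):
        self.records[key] = record

    def retrieve(self, key):
        return self.records.get(key)

class CachingDRS:
    """SDC-style DRS: caches only records retrieved recently."""
    def __init__(self, upstream, ttl_seconds=3 * 24 * 3600):  # "a few days"
        self.upstream = upstream
        self.ttl = ttl_seconds
        self.cache = {}  # key -> (record, last_retrieved)

    def update(self, key, record):
        # Updates flow down from the core network, but are retained
        # only for records the cache is already holding.
        if key in self.cache:
            self.cache[key] = (record, self.cache[key][1])

    def retrieve(self, key):
        now = time.time()
        hit = self.cache.get(key)
        if hit and now - hit[1] < self.ttl:
            self.cache[key] = (hit[0], now)  # refresh recency on retrieval
            return hit[0]
        record = self.upstream.retrieve(key)  # cache miss: ask the centre
        if record is not None:
            self.cache[key] = (record, now)
        return record
```

A mini-cache centre would simply be another `CachingDRS` layered beneath an SDC, holding only the records displayed or stored by its local terminals.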
Finally IDN concentrators provided connection to the client terminal controllers. These terminals were still mainly the original Reuter-designed 700MRVs but an increasing number of Personal Computer-based terminals were appearing.
As well as collecting and distributing data, IDN could enhance the raw data in various ways to produce new records of so-called “value added” data. This was done by systems designed for the purpose: the value-added systems (which produced a new logical record from other logical records), the logicisers (which produced logical records from paginated data) and the pagination systems (which produced paginated data from logical data, to maintain backwards compatibility for some products).
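The three kinds of transformation can be sketched as three functions. The record and page layouts below are invented to show the shape of each transformation, not the actual IDN formats.

```python
def value_add(records):
    """Value-added system: derive a new logical record from other logical
    records - here, a hypothetical best bid/ask composite across quotes."""
    return {
        "BID": max(r["BID"] for r in records),
        "ASK": min(r["ASK"] for r in records),
    }

def logicise(page_line):
    """Logiciser: extract a logical record from loosely formatted page
    text (assumes a whitespace-delimited line for illustration)."""
    fields = page_line.split()
    return {"RIC": fields[0], "BID": float(fields[1]), "ASK": float(fields[2])}

def paginate(record):
    """Pagination system: render a logical record back into fixed-width
    page text for products that still expected Monitor-style pages."""
    return f"{record['RIC']:<10}{record['BID']:>10.2f}{record['ASK']:>10.2f}"
```

Usage: `paginate(logicise("IBM.N 101.25 101.50"))` round-trips a page line through logical form and back, which is essentially what the logiciser and pagination systems did in opposite directions.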
News on IDN
News2000 started life in 1986 as part of the original design to provide a simple news service over IDN. News was collected from editorial systems by News Processing Systems (NPS) in New York and Tokyo. From there news items and headlines were passed through the IDN network. News items were stored for roughly 24 hours in the network and were available for retrieval in the normal way. Headlines were sent separately in the form of broadcast messages. The basic function of News2000 was to receive news headlines, alerts and stories in such a way as to make them available to applications for processing as well as for display. The client used a simple set of indices through which news stories could be retrieved, rather like Monitor. The design predated the widespread installation of PC-based client terminals and so had to be as simple as possible for the 700MRV generation of terminals. In spite of the efforts to remain simple, the indexing system proved troublesome to maintain and corrections were badly handled. In addition, the basic product was not particularly user-friendly and had many deficiencies in recovering from temporary failures, such as a line disconnect. Much needed to be done.
A number of major improvements had been delivered such that little remained of the original design. First, a new subscriber line protocol was implemented to increase the efficiency of the client connection to the data centre. Second, addresses of frequently used counter-parties were cached locally to the terminal controller (both PDP-8 and D2000-1), which avoided constant requests back to the centrally-based service machine. At the same time, network capacity was dramatically increased by the introduction of extra network nodes and inter-node connections. This also helped guard against the common occurrence of poor-quality international circuits in many geographies.
Network nodes were being converted from PDP-11s to VAXes but the Dealing concentrators were so far unchanged from the original PDP-11 design, although top-of-the-range processors were now used. Ethernet was used to connect central system components together within the data centres, eliminating the tangle of circuits. LAN bridges were introduced where feasible to replace inter-data centre connections, thus increasing speed and reliability. Two bridges were used to provide diverse routing, requiring special software from the manufacturer. Once again, this was a technical first. Finally, the limit of 4000 terminal controllers, inherent in the design of the original service machines, was removed by the introduction of a completely new VAX-based design. In this approach the functionality was divided between a service machine database, containing all the subscriber information of record as well as other essential administration data, and a number of front ends which held the precise operational configurations and dealt with all routine requests. The database was not required for normal running of the service and was only needed to introduce network changes, such as adding a new client. The service machine (SDB) ran the standard VMS operating system but the newly designed front ends ran DEC's lightweight VAXELN operating system.
At the start of the decade Reuters was preparing to launch its automated matching product, Dealing 2000-2. Valid objections were raised concerning the handling of "broken" trades. A broken trade could occur when a match was found centrally between a bid and an offer, but communication was lost to one or other counter-party, leaving the status of the trade badly defined. The worry was that Reuters would bear the financial exposure. It is interesting to note that the same situation had delayed the launch of Dealing 2000-1.
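The broken-trade hazard can be sketched as a tiny state machine: a match is found centrally, confirmations are sent to both counter-parties, and a lost link leaves the trade in limbo. The states, names and message flow below are assumptions for illustration, not the actual Dealing design.

```python
# Possible outcomes of a centrally matched trade (hypothetical states).
MATCHED, CONFIRMED, BROKEN = "MATCHED", "CONFIRMED", "BROKEN"

def settle_match(send_confirmation, bid_party, offer_party):
    """Central host finds a match, then tries to confirm it to both
    counter-parties; returns the resulting trade status."""
    acks = [send_confirmation(p) for p in (bid_party, offer_party)]
    if all(acks):
        return CONFIRMED
    # Communication lost to one side: that party cannot know whether its
    # deal happened, and the vendor may bear the financial exposure.
    return BROKEN

# Hypothetical scenario: the link to one counter-party drops mid-confirmation.
connectivity = {"BANK_A": True, "BANK_B": False}
status = settle_match(lambda p: connectivity[p], "BANK_A", "BANK_B")
print(status)  # BROKEN
```

This is essentially a two-phase confirmation problem: no amount of retrying removes the window in which one side has acknowledged and the other has not, which is why a creative treatment of the exposure, rather than a purely technical fix, was needed.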
A creative technical solution would be found, though launch of Dealing 2000-2 would be delayed until April 1992. In parallel, automated trading systems were under development for the Chicago Mercantile Exchange, the Chicago Board of Trade and the French Matif Exchange. This effort was originally seen as a way of reusing the core Dealing 2000-2 matching technology. However as the technical requirements diverged, completely separate development teams were required. Here Reuters was entering into the software services arena - a new and unknown venture.
The network components and communication lines had been fully replicated, thus easing capacity problems. Monitor was still the main highway for contributions and news, and many important products still relied on Monitor features which had not yet been reproduced on IDN. The main issue was phasing Monitor out in order to gain operational and field cost savings. Several attempts were made to close down Monitor over the following years, but it was not finally switched off until 1999.
Special products development had all but ceased. PRISM was the only video switching product still in active development. Triarch was now on a proper architectural footing and the Triarch Source/Sink library for connecting applications to data sources was a financial services industry standard. The Reuter Terminal also existed in the Triarch environment as the Reuters Intelligent Workstation (RIW). It provided access to Reuters data alongside information from other market data vendors. Pressure was increasing for clients to run their own applications within the RIW and integrate them with Reuter data. This required a proper open interface standard and support of an open systems platform where issues might originate in the non-Reuter portion of the workstation. Development costs were an issue and the huge increase in 1989 could not be sustained as the business of trading room upgrades fell away from the peaks reached in the late 1980s. Development was starting on a course of decentralisation as more applications development was located closer to the client. There was still much duplication between the semi-closed world of the standard IDN terminal and the semi-open world of trading room systems.
The original quotations delivery system was still in existence - only just - and shrinking fast as IDN displaced the ageing technology.
Martin Davids joined Reuters in 1979 with a degree in applied mathematics and a background in military real time and commercial message switching systems. His first role was as part of a team responsible for getting the Dealing system live. Later he managed Reuters European technical development and headed transactions development in New York where he was also head of information products development. When he left Reuters in 2002 he was head of legacy developments, field and technical operations. ■