Newsletter Section 8

From the Net




Year 2000 Apocalypse


(Stephen L. Talbott)

It's becoming clear (even to formerly blasé types like me) that strange and awful possibilities are constellating themselves round the “year 2000 problem”. The more closely experts look at large, time-sensitive software systems, the more they are reporting back (as Bank-Boston's chief technology officer did in the March 8 Economist) that “what we found was terrifying”. That particular bank's information systems, according to the Economist, reveal a problem “far more complex than anyone had imagined.”

It turns out that the number of ways programmers can conceive, represent, and manipulate dates – both explicitly and implicitly – is unlimited. (You are probably familiar with Julian dates, for example, where the day of the year is represented as a number from 1 - 366. But some programmers have found it convenient to use 1 - 1461, representing the number of days in a leap year cycle.) These dates may in turn be represented in programs by names betraying nothing of their time-relatedness. How do you find the relevant names and data structures among millions of lines of code, in order to fix them?
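
To make the search problem concrete, here is a purely illustrative C sketch (the record layout, field names, and arithmetic are invented for this illustration, not drawn from any of the systems discussed): the date information hides behind names that say nothing about time, so a scan of millions of lines of code for “date” or “year” would pass straight over it.

    /* Hypothetical example: date data hiding behind unrevealing names.
     * "ctl1" holds a two-digit year (97 means 1997); "seq" holds a day
     * number from 1 to 1461 within a four-year leap-year cycle.         */
    #include <stdio.h>

    struct policy_rec {
        long acct;   /* account number                                   */
        int  ctl1;   /* actually the year, stored as two digits          */
        int  seq;    /* actually a day offset, 1-1461, in a 4-year cycle */
    };

    /* The sort of naive elapsed-years arithmetic that breaks in 2000:
     * when the two-digit year wraps from 99 to 00 the result goes
     * negative, and everything computed from it goes wrong with it.     */
    static int years_since(const struct policy_rec *p, int current_yy)
    {
        return current_yy - p->ctl1;
    }

    int main(void)
    {
        struct policy_rec r = { 123456L, 97, 400 };
        printf("in 1999: %d years\n", years_since(&r, 99)); /* prints  2  */
        printf("in 2000: %d years\n", years_since(&r, 0));  /* prints -97 */
        return 0;
    }

Multiply that one obscure field by thousands of programs written over several decades, and the difficulty of merely finding the code that needs fixing becomes apparent.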

When Bank-Boston did fix certain programs, it was plagued by system failures as soon as it linked its computers to those of newly acquired BayBank. The problems had to be solved a second time. Current estimates in the banking industry are that, overall, the fixes will cost $1 per line of code.

A widely cited estimate of the long-term, global cost of the year 2000 problem comes to $1.6 trillion. The trend in such estimates is upward, not downward, and surely few corporations or government agencies have reason to announce prematurely any gathering sense of panic they may feel about their own emerging prognoses. What we have so far is an increasing number of carefully phrased statements by official spokesmen to the effect that “if such-and-such a huge task cannot be managed successfully, this or that company or agency or industry faces grave risks.” Or else just the deadpan announcement of fact, such as this one, offered by Jack K. Horner of the Los Alamos National Laboratory: “Various well-calibrated software estimation models (SLAM, REVIC, PRICE-S) predict that fixing the Y2K problem in systems of about 500,000 lines of code or larger will take more time than is available between now and the year 2000, regardless of how many programmers are thrown at the job. Most of the US's military command-and-control systems contain more than 500,000 lines of code” (Risks-Forum Digest, 18.96).

What all this still omits is the larger, social dimension. At some point, perhaps very soon, some inescapable system failures – or last-resort work-arounds with unacceptably high cost – will become matters of public record. With public confidence shaken and the press beginning its predictable feeding frenzy, there is no telling where events might lead. The issues here are not merely technical ones, and a public whose primary education in technological assessment has so far consisted of little more than a diet of Internet hype will not likely prove wise and considered in its responses.

Which brings me to a newsletter I was recently shown. It's by the financial adviser and professional doomsayer Gary North. I had forgotten that one could produce a newsletter with such shameful disregard for the intelligence of one's readers. But in these 24 pages of sensationalism North embeds enough warnings from well-placed officials to make reasonable people start worrying. Among his various observations and claims:

• While the usual estimate for code-fixing is $1 per line, “in some applications, such as military applications, it can be almost $9 a line.” He cites a headhunter who believes that, as recruiting pressure rises, the hourly wage for fix-it programmers will hit $300 or $400. “How many companies will survive this kind of capital drain?”

• Regarding the older, Gartner Group estimate of $300 - $600 billion to fix the year 2000 problem worldwide: “This overly optimistic forecast assumes that there are enough programmers available who can read and understand the 400 [?] different mainframe computer languages, most of them unknown to today's younger programmers.” It also assumes “there will be a pre-repair agreement among all these isolated programmers: a single standard that all computers will recognize after they are repaired.”

• As one example of the year 2000 problem: software keeps track of the millions of railroad cars owned by different companies and scattered all over the nation's tracks. Upon running into a year 2000 glitch, Union Pacific officials discovered that “over 82% of [the company's] programs are sensitive to date-related fields. It has 7000 programs totalling 12 million lines of code. Estimated cost of conversion: 200,000 man-hours or 100 staff years.”

• “Last October Peter de Jager, an expert on the Year 2000 Problem, published a summary of two meetings at which he had just spoken. He said that 300 representatives from government agencies were in attendance at his first lecture. He asked how many of them were actively engaged in a compliance project. Three hands went up. A week later, he spoke before 140 representatives of Canadian public utilities. He asked them the same question. Six hands went up. When I read that, I knew: the economy is going to crash. It's too late to stop it from happening.”

• Allstate Insurance (America's second-largest insurance company) has 40,000 programs operating as a single, complex system on a mainframe. There are 40 million lines of code. In 1995 Allstate employed 100 programmers and budgeted $40 million to fix the code, with completion scheduled for late 1998. “But all experts in this field say that at least 40% of a repair project must be devoted to testing....Do you really believe that a team of 100 programmers will go through 40 million lines of code and not make a single mistake the first time through?” (Typically, the information North cites leaves the reader unable to determine whether the testing time was already included in the 1998 schedule.)

All this leads North to posit scenarios whereby, for example, Allstate looks to be in trouble in 1999, holders of cash-value policies start demanding their money, other policyholders stop sending in their premiums, bankruptcies occur, people start selling off their mortgages, stocks and bonds, the markets collapse....

But North's primary scenarios involve failures at the Social Security Administration and Internal Revenue Service. The challenges for these bureaucracies are undeniably huge, the history of failure massive, and the time short. North interviewed Shelley Davis, former historian of the IRS:

• I asked her point-blank if the IRS would be flying blind if the revision of its code turns out to be as big a failure as the last 11 years' worth of revisions. She said that “flying blind” describes it perfectly....Then she made an amazing statement: the figure of 11 years is an underestimate. She said that the IRS has been trying to update its computers for 30 years. Each time, the update has failed. She said that by renaming each successive attempt, the IRS has concealed a problem that has been going on for 30 years.

North claims that system failures affecting Social Security checks are virtually certain, leading again to the bank and market collapse scenario. The IRS in turn depends upon the Social Security computers for data about taxpayers. As the problems ripple from Social Security through the IRS, citizens will stop providing correct information on their tax forms. The government will collapse.

Well, the point is that North's alarmism is as much a part of the total picture as the purely technical work to be done. Would it take more than one or two high-profile failures to push events along one or another out-of-control trajectory? There don't seem to be many left who are willing to deny the chaotic possibilities outright – in which case it's hard to justify the term “alarmism” above. Given the scale of the potential disasters, however remote their likelihood, what can one be if not alarmed?

How did we get here? That is the question upon which I hope to offer some commentary in the future. There will undoubtedly be much finger-pointing throughout society, but the interesting thing to me is how hard (and unprofitable) that exercise turns out to be if one wants to identify real guilt. We need, rather, to look at the overall relation between technology and society, colored as it is by attitudes in which we all participate. Clearly there is something amiss in the casual way we have been marrying social structure to programming technique, and we need to understand just what this is.

Unfortunately, the social atmosphere in coming days may not be very conducive to clear-headed analysis.

This piece first appeared in NETFUTURE Issue No. 44, April 1997.



Do Computers Kill People?


“Everything that happens anywhere in society,” according to Phil Agre (Red Rock Eater News Service), “happens on the Internet too, but everything that happens on the Internet is news, and when something bad happens on the Internet, the 'line' instantly arises that the bad thing in question is a property of the Internet.”

Agre is commenting on the Internet-assisted spread of a presumably silly rumour about comet Hale-Bopp. The rumour, he points out, was also effectively countered by means of the Internet. He goes on to offer some useful advice about from-the-hip characterizations of the Net: “Let's not let anyone essentialize the Internet and say 'the Internet does this' and 'the Internet does that' and 'the Internet spreads rumors' and 'the Internet causes social hierarchies to collapse and brings an era of peaceableness and decentralization to the world forever and ever amen,' because those are not things that the Internet itself is capable of doing. Those are things that people do, or don't do, as they collectively see fit.”

All such statements of the “guns don't kill people” variety (or of the opposite, “guns do kill people” variety) are likely to provoke yet another instalment of my periodic harangue about technological neutrality. This one is no exception.

The argument that “guns don't kill people; people do” is unassailably correct – and comes down nicely on the side of human freedom to use technology as we choose. The theme of freedom – along with its correlate, responsibility – is one I've pressed repeatedly in NETFUTURE.

But there's another side to the story. Every technology already embodies certain human choices. It expresses meanings and intentions. A gun, after all, was pretty much designed to kill living organisms at a distance, which gives it an “essentially” different nature from, say, a pair of binoculars.

If all technology bears human meanings and intentions, the networked computer carries the game to an entirely different level. Its whole purpose is to carry our meanings and intentions with a degree of explicitness, subtlety, intricacy, and completeness unimaginable in earlier machines. Every executing program is a condensation of certain human thinking processes. At a more general level, the computer embodies our resolve to approach much of life with a programmatic or recipe-like (algorithmic) mindset. That resolve, expressed in the machinery, is far from innocent or neutral when, for example, we begin to adapt group behavior to programmed constraints.

Putting it in slightly different terms: Yes, our choices individually and collectively are the central thing. But a long history of choices is already built into the technology. We meet ourselves – our deepest tendencies, whether savory or unsavory, conscious or unconscious – in the things we have made. And, as always, the weight of accumulated choices begins to bind us. Our freedom is never absolute, but is conditioned by what we have made of ourselves and our world so far. The toxic materials I spread over my yard yesterday restrict my options today.

It is true, then, that everything comes down to human freedom and responsibility. But the results of many free choices – above all today – find their way into technology, where they gain a life and staying power of their own. We need, on the one hand, to recognize ourselves – pat, formulaic, uncreative – in our machines even as, on the other hand, we allow that recognition to spur us toward mastery of the machine.

It is not, incidentally, that the effort to develop the latest software and hardware was necessarily “pat and formulaic”. It may have been extremely creative. But once the machine is running and doing its job, it represents only that past, creative act. Now it all too readily stifles the new, creative approaches that might arise among its users. Every past choice, so far as it pushes forward purely on the strength of its old impetus, so far as it remains automatically operative and thereby displaces new choices – so far, that is, as it discourages us from creatively embracing all the potentials of the current moment – diminishes the human being. And the computer is designed precisely to remain operative – to keep running by itself – as an automaton dutifully carrying out its program.

The only way to keep our balance is to recognize what we have built into the computer and continually assert ourselves against it, just as you and I must continually assert ourselves against the limitations imposed by our pasts and expressed in our current natures.

It is not my primary purpose here to comment on the Internet as a rumour mill, but it is worth pointing out that there is a certain built-in Net bias to worry about. It may not be an “essential” bias, but it is a bias of the Net we happen to have built. For in the Net – from our design of its underlying structures to the deep grooves cut by our habits of use – we have nearly perfected the tendency, already partially expressed in printing technology, radio, and television, to decontextualize the word. More and more the Net presents us with words detached from any known speaker and from any very profound meeting of persons. At the same time, the Net offers us a wonderfully privatized blank screen against which to project our fantasies, personas, wishes.

Obviously, there is much more to say. But not even the whole of it would be to argue that rumour must triumph over truth on the Net. It would only be to acknowledge that the Net has been constructed in accordance with certain tendencies of ours – not many of them wakeful, and therefore not many of them safe to hand ourselves over to without full alertness. The Net does have a given nature, even if that nature is, finally, our own. Not all things our own can easily be waved away upon a moment's new resolve, even if the will is there. “The spirit is willing, but the flesh is weak” – and the programs are hard to change.

I need only add that Phil Agre, whose remarks stimulated this harangue, is shouldering more than his share of responsibility for cultivating a proper alertness among Net users.

Finally, in the spirit of provocation I challenge anyone out there who is bumping up against the question of technological neutrality: rumour mills aside, demonstrate how the analysis I have offered is fundamentally inadequate as a first-order breakdown of the question.

This piece first appeared in NETFUTURE Issue No. 37, January 1997.



Railway Modeller bans the Internet


They are among Britain's most uncontroversial hobby enthusiasts; people whose idea of a good argument is debating rolling stock gauges, and whose interest in technology extends no further than the Hornby Zero One computer control console they bought in 1982.

But from the sedate quarter of railway modelling, a storm is brewing which could form the unlikeliest challenge yet to the relentless development of cyberspace.

Faced with the incoming tide of global communications, Railway Modeller, the hobby's bible, has decided to take on the role of Canute. Without warning, the magazine told advertisers, many of whom had previously included Internet addresses, that the publication of URLs was banned. The reason, according to the powers that be? They “felt like it”.

The result has been a row which has seen the hobby's establishment voice pitted against a small band of young Turks determined to place it at the cutting edge of technology.

To the rebels, the ban was nothing short of a declaration of open warfare on new technology and free speech by the magazine. “Smacks of 1930s Germany, and a Thatcherite Tory party 'There will be no criticism',” steamed one angry contribution among many posted on the otherwise uneventful newsgroup backwater, uk.railway.

To add to the sense of farce surrounding the affair, few critics are willing to put their name to the criticism. “I don't want them cancelling my subscription,” explained one nervous reader who has a complete collection of Railway Modellers dating back to the early 1980s.

“I am so fed up about this that I am even considering not renewing my subscription – and that is a decision I wouldn't take lightly,” he added. “It's ridiculous because the Internet has really taken off for modellers as a means of communication. I am based away from the mainstream clubs, but with the Internet I can put up questions and they will be answered within the hour, and the magazine should be supporting that.”

An advertiser who also asked not to be named agreed. “In their Luddite way I can understand up to a point that they might fear the competition. But that is not a realistic scenario. It reminds me of railway modelling shops refusing to stock catalogues for mail order companies – they ended up losing so many customers they were forced to shut down.”

There is another twist to the tale: far from turning its back on the Net, Railway Modeller's Devon-based publisher, Peco, runs its own Web site, where it publishes preview articles and tasters for forthcoming issues. “It's not that they don't want to embrace new technology; they're just playing at being awkward buggers in the same way they have done for years,” says another reader.

Such criticism runs off the back of Charles Pritchard, the magazine's managing editor. “It does seem odd, but it's a gut feeling we have. We don't want to rush into something that nobody knows very much about – and we can do what we like. Our job is to further the interests of the hobby, not the interests of the Internet. That's just another electronic game, and when people are playing with that they are not modelling railways.”

Pritchard is equally dismissive of the rebels' opinions. “I like to think we're an up-and-coming hobby. These people are not railway modellers, they are just out to stir things up. Some of the statements they have made about us have made us wonder whether we really want to be associated with them in the first place.”

From the Online Guardian of 26 March 1997.
http://online.guardian.co.uk/one.html

Top Ten Reasons Why Microsoft Loves Java


(CNET Digital Dispatch)

At a recent JavaOne conference in San Francisco, Microsoft was all over the Java programming language. Why? We know of ten reasons:

10    They're just plain scared.
9    Java runs really badly on the Mac.
8    Runs really, really badly on Windows 3.1.
7    Easier to comply with a standard when you own it.
6    Can distribute Java code at Seattle Starbucks franchises (along with the printed edition of Slate).
5    Never really liked ActiveX anyway.
4    Big mix-up: Gates trying to buy sun-drenched island in South Pacific, got programming language instead.
3    Been looking to replace MS COBOL for a while.
2    Can blame all bugs on Sun.
1    Microsoft programmers get to hang around with Kim Polese at Java developers' conferences.

To subscribe or unsubscribe to Digital Dispatch or to find out more about our member services, point your browser here:
http://www.cnet.com/Community/Mservice/faq.html?dd. To tell us what you think of CNET Digital Dispatch, send mail to: dispatch-comments@cnet.com


