Copyright © 1995-2004 UKUUG Ltd



Soap Box

The Trouble with Ubiquitous Technology Pushers (Part 2)


Why We'd Be Better Off without the MIT Media Lab

(Steve Talbot)

In part 1 of this series I voiced my first complaint against the ubiquitous technology pushers: by letting their work develop out of a one-sided preoccupation with the technological milieu rather than immersion in the meaningful contexts affected by their inventions, they inflict technological "answers" upon us without any serious reference to the supposed problems.

I don't mean to suggest that the bearers of technological wonders are shy about telling us how their inventions will solve this or that problem. They are all too eager. When you are convinced you have a nifty answer, everything begins to look like a problem demanding your answer.

This leads to my second complaint: technology pushers too often fail to recognize the difference between solving a problem and contributing to the health of society. Solving problems is, in fact, one of the easiest ways to sicken society. A technical device or procedure can solve problem X while worsening an underlying condition much more serious than X. Here are a few examples:

But what is it that makes one alone? Doesn't the widespread use of cell phones, in our cultural milieu, tend to thicken a little further that mutual insulation between us by which society becomes a less hospitable and less safe place? Each of us becomes less inclined to seek help from those immediately around us, and the habit of offering help weakens. For people who pass each other with cell phone attached to ear, the important items of business -- including the sources of help -- always seem to be elsewhere, and there is not much room for attention to the immediately surrounding social context. The question, "Who is my neighbor?" becomes harder and harder to answer.

The Basic Choice

None of this should be controversial. You might even say that these examples make the trivial and universally recognized point that social problems are complex. But what isn't so widely recognized -- or is too often forgotten -- is that the technological mindset, so excellently trained to think in terms of discrete solutions, bugs, fixes, precise "specs", and well-defined syntaxes, is not inclined toward a reckoning with organic complexity.

But this is exactly what is needed. With an organism, or a society of organisms, changing one "spec" implies changes to everything. While (with some justification) we make it the engineer's task to frame problems that are as "well-behaved" and as rigorously specifiable as possible, we face social problems that can be fully understood only with the fluid, pictorial, category-blurring, whole-encompassing finesse of the imagination.

Or, putting it a little differently: society presents us with conversations we must enter into, not problems to be solved, however much we find the reduction to manageable problems a necessary, temporary expedient. Only when we remain aware of what we are doing and continually allow the larger context to discipline, dissolve, and re-shape our narrowly focused problem solving do we remain on safe ground.

But let me clarify what I am and am not saying. I'm not saying that you shouldn't give your daughter a cell phone. I can imagine situations where I would do it. This would have the immediate (and substantial!) virtue of contributing to the safety of a loved one. But if I were not also working consciously against the unhealthy tendencies of the larger context that necessitated the phone, and to which the phone itself all too naturally contributes, then I would be adding my small share to the miseries of society. I would be making society safer only in the sense that exclusive, gated communities may make a society safer -- for some people, and for now.

Seeking clarity at this point is crucial because what the technology critic seems to be saying can easily provoke a justified incredulity in those who, with all good faith, are working to put more sophisticated technical resources at our disposal. "Do you really mean that, in terms of our underlying social problems, we'd be better off without cell phones -- and computers, and GPS locators, and space probes, and genetic engineering techniques? And even if this were true, can you possibly believe that, outside the dreams of madmen, the world's vast apparatus of technological advance could be dismantled?"

No, I believe none of those things. What I do believe is that, with our technologies in hand, we are given the freedom to construct a hellish, counter-human, machine-like society, or else a humane society in which the machine, by being held in its place, reflects back to us our own inner powers of mastery. And the difference between these antithetical movements is the difference between focusing more on the human dimensions of whatever domain we are concerned with, or on the technological dimensions. In the former case, we will recognize that the primary challenges always have to do with the development of character, insight, volitional strength, imagination, and so on; our technical activities will be valued above all for the way they can help us develop these capacities. The other, gravely misdirected approach is to focus on technological developments as if they themselves held solutions.

So, no, I don't suggest that we ban cell phones. But our society's fixation upon technological development as the very substance and marrow of human evolution has become ferocious. There is a grotesque disproportion within American culture between the terms in which we see our billion-dollar investments and the real needs around us. This distortion is dangerous and needs healing -- a prospect that admittedly appears as unlikely today as a broad, public consciousness of recycling, pollution, and environmental issues must have seemed in the Fifties.

A Paradoxical Reversal

I pointed out above that solving problem X is not necessarily to contribute to society's health. This can be stated more strongly and paradoxically: to the extent we believe we have a rigorous technological solution, that solution will probably worsen the very problem it was intended to solve.

You can already see this reversal in the bulleted examples listed above. For example, devices helping to "guarantee" our safety may, in the end, work against safety itself. But we need to take clear hold of the dynamic at work here.

The automobile, an early-twentieth-century driver might well have thought, will bind us into closer communities. The distance between us is overcome and we can connect more easily with each other. Yet the automobile's effect on our communities was quite otherwise. One can in fact argue -- I often do so in my public lectures -- that all distance-collapsing technologies, by their very nature, end up inserting greater distance between us. I have no space to develop this thought here, but I think you can see the force of the claim easily enough.

Look at it this way: the whole idea of a distance-collapsing technology is to enable us to get more quickly from point A to point B. But getting more quickly from A to B means having less time and opportunity for attending to any of the points between A and B. Moreover, as the influence of distance-collapsing technologies spreads, A and B themselves become intermediary points in an ever-expanding net of one-time destinations that are now mere waystations. If we're to cover those spaces efficiently, we have no more time for A and B than for any of the points between. And so we find ourselves in a world where we're all just passing through.

How can people who are just passing through -- determined to criss-cross each other's paths at ever more dizzying speeds -- come closer together? The easiest result -- not an absolutely necessary one, but the result we can most naturally fall into -- is the one that only seemed at first glance to be paradoxical: we find ourselves flying further and further apart rather than coming together. As abstract spatial distance yields to our technological prowess, the qualitative nooks and corners of particular places -- places where significant meetings can occur -- disappear into the quantitative vastnesses of that abstract space.

Clearly I am distinguishing here between two different senses of "coming together." And that is the crux of the matter. Technology can indeed overcome those physical spaces, but if this is how we frame the problem (and we must frame it this way if we want a perfectly effective technological "solution") then we have turned our eyes away from the much less easily defined problems that really matter. This is how the new and wondrous technology becomes guaranteed to make the real problem worse. If you falsely believe that X will achieve Y, then you've not only lost sight of how Y can really be achieved, but you're also turning your attention in unpromising directions.

The certainty of the unhappy reversal, in other words, is a direct result of a technological fixation that encourages a subtle but disastrous shift in what we imagine our problems to be. The engineer, of course, can always say, "Hey, I was just trying to overcome the problem of spatial distance. What people do with this opportunity is their choice." There's profound truth in that. But the disclaimer is more than a little disingenuous in a society -- and an engineering culture -- where the exercise of the technical machinery for connecting persons is chronically confused with personal connections.

The Machine and I

In summary: There's nothing easier than to find problems your new gadget will solve. It's so easy that it has encouraged a standard formula of journalism: "Dr. Jones' new discovery (or invention) could lead in time to [your choice of solved problems here]". How standard this formula has become is a good measure of how technocentric our society has become. The technical achievement just must, it seems, translate into a social good. There is no equivalent standard formula that routinely acknowledges the risks of the new development. There is no recognition of the historical logic of reversal I've discussed here -- and therefore the prevailing formula becomes part of this logic, helping to guarantee a destructive result.

I don't know of any truth more worthy of contemplation in our society today than this one, startling as it may appear: No problem for which there is a well-defined technical solution is a human problem. It has not yet been raised through imagination and will and self-understanding into the sphere of the human being. And what is this sphere? It is, above all, the domain of the "I", or self. The "I", as Jacques Lusseyran remarks,

nourishes itself exclusively on its own activity. Actions that others take in its stead, far from helping, serve only to weaken it. If it does not come to meeting things halfway out of its own initiative, the things will push it back; they will overpower it and will not rest until it either withdraws altogether or dies.
Against the Pollution of the I, Parabola, 1999
All problems of society are, in the end, weaknesses of the "I", and it is undeniable that technologies, by substituting for human effort, invite the "I" toward a numbing passivity. But by challenging us with less-than-fully-human problems and solutions, technologies also invite the "I" to assert itself. This assertion, this grace bestowed by technology, always requires us to work, in a sense, against the technology, countering it with an activity of our own -- countering it, that is, with something more than technological. Then the technology becomes part of a larger redemptive development. When, on the other hand, technology itself is seen to bear "solutions", the disastrous reversal has already occurred.

What we should ask of the technology pushers, whether they reside as engineers at the MIT Media Lab or as employees at high-tech companies or as consumers in our own homes, is a recognition that the primary danger today is the danger of this reversal, where the strengthening activity of the "I" is sacrificed to the automatisms around us. For every technology we embrace, we should require of ourselves an answer to the question, "What counter-force does this thing require from me in order to prevent it from diminishing both me and the social contexts in which I live?"

I spoke a moment ago of technologies inviting us toward passivity, or else inviting us toward self-assertion. But this is not quite the same thing as saying that technologies present us with choices and we are equally free to go to the right or to the left. The choices aren't symmetrical. It takes an inner wrench, a difficult, willful arousing of self, to accept active responsibility for what technologies do to us. Passivity, on the other hand, is easy. It's the choice we can make, so to speak, without bothering to choose. It's also the predominant stance toward technology in our society today. Many a massive PR and sales apparatus is aimed at dressing up the choices of passivity to make them as titillating and irresistible as possible. And, by many accounts, our yielding to the titillation is what drives the "new economy".

The subtitle of this series of articles is "Why We'd Be Better Off without the MIT Media Lab". Let me broaden that here. What we'd be better off without is every organization that pushes purely technological "solutions" as if they were what could make us better off. The Media Lab has done its best to make itself the reigning symbol of this push -- and I think would proudly lay claim to the crown. But it remains true that the pathology infects our society as a whole.

In part 3 of this series I will look at the prospects for labor-saving and time-saving devices.

Steve Talbot is editor of Netfuture, a freely distributed newsletter dealing with technology and human responsibility.

Part 3 will be published in the September newsletter.
