
Soap Box

Understanding the Linux Kernel

(Josette Garcia)

Linux was once seen as a kind of counter-culture hacker experiment. But as Linux has become a mission-critical part of many organizations, deep knowledge of it is increasingly valued as a sophisticated display of programming skill. In order to really understand Linux, you must understand the kernel.

Linux was developed by Linus Torvalds at the University of Helsinki in Finland. To complete the operating system, Torvalds and other team members made use of system components developed by members of the Free Software Foundation for the GNU project. Thus, the only software to which the term "Linux" strictly applies is the kernel. The Linux kernel is responsible for the sophisticated memory management of the whole system, and it is the force behind Linux's efficiency.

The kernel is the essential center of Linux, providing the basic services for all other parts of the operating system. Typically, the kernel handles requests from programs and completed I/O operations, and it determines which programs will share the processor's time and in what order.
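
To make the idea of a program "requesting" a kernel service a little more concrete, here is a minimal sketch (my own illustration, not an example from the book): a small C program hands the kernel a buffer through the write() system call, and the kernel performs the I/O on the program's behalf while scheduling it alongside every other runnable process.

    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from user space\n";

        /* write() traps into the kernel: the kernel validates the file
         * descriptor, copies the bytes out, and returns how many were
         * written -- the program itself never touches the hardware. */
        ssize_t written = write(STDOUT_FILENO, msg, strlen(msg));

        return written == (ssize_t) strlen(msg) ? 0 : 1;
    }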

"Linux source code for all supported architectures is contained in about 4500 C and Assembly files stored in about 270 subdirectories. It consists of about 2 million lines of code, which occupy more than 58 megabytes of disk space," says Daniel P. Bovet, co-author of the latest O'Reilly release Understanding the Linux Kernel. "After reading this book, you should be able to find your way through the code, distinguishing between crucial data structures and secondary ones -- in short, you'll become a true Linux hacker."

If you have ever wondered why Linux is so efficient, or whether its performance will suit some unusual application of your own, O'Reilly's latest release, Understanding the Linux Kernel, should be on your radar. It provides a guided tour of the Linux kernel along with valuable insights.

Josette is Marketing Manager for O'Reilly's UK operations.


The Trouble with Ubiquitous Technology Pushers (Part 3)

To Automate, or Re-enflesh?

(Steve Talbott)

Part 1 of this series appeared in Volume 9, Number 1. Part 2 appeared in Volume 9, Number 2.

Some while back a reader urged upon me this principle: "Anything we do that can be automated should be automated". It's a principle that appeals to the common sense of many people today, and complements the notion that machines can unburden us of the more tedious and mechanized work, leaving us free to occupy ourselves with "higher" and more "human" tasks.

Appealing as the reader's suggestion is, I'm convinced that it readily promotes an unhealthy relation to technology. Here's why: First, it obscures the truth that nothing we do can be automated. Sure, I know that a computer can "add two plus two", but what it does is not what we do. It does not bring consciousness to the act. It is not exercising and therefore strengthening certain skills and cognitive capacities. It requires no attention, no will power, no motivation, no supportive metabolism, no memory, no imagination, and no sympathetic muscle movements. Nor is it engaged in any larger purpose when it carries out the computation -- or any purpose at all. It is amazing to see how readily we forget these things today and equate a computer's action with human performance.

When a machine "does" what we do, we typically mean that something about the structure of the machine's activity can be mapped (by us) to a narrow set of formal features abstracted from our own activity. For example, a pattern of electrical pulses can be seen as analogous to the formal structure of a problem in arithmetic addition. This sort of mapping happens to be very useful, but no worthwhile effort to assess the usefulness can begin with the false notion that the machine is doing what we do.

Actually, the more relevant fact is that the machine displaces and eliminates from the situation much that we do, leaving us to consider (1) how we might compensate for the disuse of our own capacities, and (2) how the entire context and significance of the work has been altered by its reduction to those few formal features.

It's all too easy for the facile calculations of the spreadsheet software to begin narrowing a business' conception of its own work, even though the business may have begun with a richly meaningful and idealistic set of intentions. Intention doesn't enter into the software's calculations, and as that software plays an ever greater role in the business, the question is, "Where will the guiding intentions come from -- or will we simply allow them to disappear as we yield to the machine's empty guidance?"

There Is No Stopping Place

If the first problem with our reader's formulation is that nothing we do can be automated, the second problem is that everything can be automated. That is, once you equate the kind of reduction I've been talking about with "automating human activity", there's no line separating things that can be automated from those that cannot. So "automate whatever can be automated" provides no guidance whatever. In the reduced sense that applies, everything can be automated.

As many have pointed out, you can abstract some sort of formal structure from any activity you can describe (the description itself embodies a syntactic structure) and this structure can be impressed upon a machine. So as soon as you are convinced you have automated the simplest human activity, you are climbing a ladder possessing no special rung to mark a stopping place. If a calculator "does what we do", then a computer can in one sense or another do what a judge or composer or physicist does. If we do not pay attention to the difference between the computational abstraction and the human reality in the simple cases, nothing will require our attention to those differences in the "higher" cases.

Further, the more you automate, the more you tend to reduce the affected contexts to the terms of your automation, so that the next "higher" activity looks more and more like an automatic one that should be handed over to a machine. When, finally, the supervisor is supervising only machines, there's no reason for the supervisor himself not to become a machine.

So the idea that automation relieves us from grunt work in order to concentrate on higher things looks rather like the opposite of the truth. Automation tends continually to reduce the higher work to mechanical and computational terms. At least, it does this when we lose sight of the full reality of the work, reconceiving it as if its entire significance lay in the few decontextualized structural features we can analogize in a machine. (In a machine-driven world, we are always pressured toward this reconceptualization.) But if, on the other hand, we do not lose sight of the full reality of the work, then the "lower-level" stuff may look just as much worth doing ourselves as the "higher" -- in which case we have to ask, "What, really, is the rationale for automating it?"

This is not to say that, for example, endless hours spent manually adding columns of numbers would prove rewarding to most people. But where we typically run into such tasks is precisely where reductive technologies (such as those involved in the machinery of bookkeeping and accounting) have already shaped the work to be done. In general, the grunt work we want to get rid of is the result of automation, and while additional automation may relieve us of that particular work, it also recasts a yet wider sphere of work in terms seemingly fit only for automation. After all, the ever more sophisticated accounting software requires ever more extensive inputs, so more and more people in the organization find themselves caught up in paper-shuffling (or electronic file-shuffling).

It's where automation has not already destroyed the meaningfulness of the low-level work that we discover how high-level it can really be. The farmer may choose not to abandon his occasional manual hoeing -- not because he is a hopeless romantic, but because there is satisfaction in the simple rhythms, good health in the exercise, and essential knowledge of soil and crop conditions in the observations made along the way. What will provide these benefits when he resides in a sealed, air-conditioned cab fifteen feet off the ground?

A Strengthened Inner Activity

You may ask, then, "Should nothing be automated?" I didn't say that! I've only suggested that we avoid deluding ourselves about automation freeing us for higher things. Have we in fact been enjoying such a release? Any investigation of the matter will reveal that the machine's pull is most naturally downward. It's hard to relate to a machine except by becoming machine-like in some part of ourselves.

When we yield ourselves to automatisms, we become sleepwalkers. But if instead they serve as foils for our own increased wakefulness, then they will have performed a high service. After all, downward forces, too, can be essential to our health. We couldn't walk upright without the force of gravity to work against, and our muscles would atrophy without the effort.

It is, I think, inescapable that we should automate many things -- and, of course, there are many pleasures to be had in achieving this. When I said above that an automating mentality will not find any clear stopping place, I did not mean to imply that there should be such a stopping place -- certainly not in any absolute sense. In fact, I think it's wrong to imagine a stopping place defined in terms of the "objective" nature of the work.

Everything is potentially automatable in the restricted sense I have indicated, and pretending there is a natural stopping place only encourages the kind of mindless automation that is the real problem. What is crucial is for us to be aware of what we're doing and to find within ourselves the necessary compensations. We have to struggle ever more determinedly to hold on to the realities and meanings our automated abstractions were originally derived from. That is, we must learn to bring the abstractions alive again through a strengthened inner activity -- a tough challenge when the machine continually invites us to let go of our own activity and accept the task in reduced terms!

The limits of our compensatory capacities will always suggest wise stopping places, if we are willing to attend to those limits. But not absolute stopping places; they will shift as our capacities grow.

Are we currently setting the bounds of automation wisely? You tell me. Have the accounting software and the remarkable automation of global financial transactions been countered by our resolve to impose our own conscious meanings upon those transactions? Or, rather, does the entire financial system function more and more like a machine, merely computing an abstract bottom line?

Well, if you're looking at the dominant institutions, I imagine your answer will be pessimistic. But perhaps the most important developments for the future are the less conspicuous ones -- for example, the alternative food and health systems, the growing interest in product labeling, the investing-with-a-conscience movement. What's essential in these is the determination to restore the automated abstraction -- for example, the nutrient in the processed food, the number in the accountant's spreadsheet -- to the meaningful context it was originally ripped out of.

Holding the Balance

I guess the sum of the matter is that the restoration entails a gesture exactly opposite to the one expressed in, "if it can be automated, it should be". It's more like, "if it can be re-enfleshed, it should be". As long as these two movements are held in balance, we're probably okay. We should automate only where we can, out of our inner resources, re-enliven. For example, we should substitute written notes and email for face-to-face exchanges only so far as we have learned the higher and more demanding art of revivifying the written word so that it reveals the other person as deeply as possible and gives us something of his "presence". Of course, this is not the way most of us relate to email -- not even when the frenetic, email-influenced pace of work would allow it.

I suppose few would quarrel with the proposition that our society is much more gripped by the imperative to automate than the imperative to re-enflesh. Certainly this is ground for worry, given that the push for automation alone is a push to eradicate the human being.

The threat of eradication was Bill Joy's concern in his notorious Wired article. I share his concern, but it seems to me that the effort to define a fixed stopping place is inherently untenable; it just can't be done with any consistency. Nor would we expect that it could be done if we had grown accustomed to thinking organically and imaginatively, in terms of movement, balance, tension, polarity (exactly what our machines train us away from!).

It seems to me that some such awareness as I have tried to adumbrate here is the prerequisite for our avoiding the eventual loss of ourselves. It must be an awareness of our own, machine-transcending capacities. We must exercise these in a living, tensive balance as we counter the pull of all the mechanisms around us.

This is exactly the awareness that many of Joy's critics have refused. It seems obvious to Ray Kurzweil (author of The Age of Spiritual Machines) that digital technologies will transform human consciousness, and not at all obvious that a transformed human consciousness is the only thing that can sustain future technologies -- just as transformations of human consciousness have been required to generate and sustain all earlier technologies. There's something self-fulfilling in Kurzweil's prophecies; when you lose sight of the machine-transcending qualities of your own mind, it is not surprising that you find yourself increasingly susceptible to machine-like influences.

In other words, one way for us to transform our powers of consciousness is to abdicate them. Then it really does become reasonable to see ourselves in a competition, perhaps even a desperate competition, with our machines. This is the inevitable conclusion of the single-minded drive to automate everything.

Joy's alarm is justified. But our core response, while it will certainly touch policy domains, must arise first of all in that place within ourselves where we are inspired to re-enflesh whatever can be re-enfleshed. To focus instead merely on stopping automation is already to have accepted that the machine, rather than our own journey of self-transformation, is the decisive shaper of our future. Yes, we urgently need to find the right place for our machines, but we can do so only by finding the right place for ourselves.

Steve Talbott is the author of The Future Does Not Compute: Transcending the Machines in Our Midst and Editor of NetFuture, where this article originally appeared.

NetFuture is supported by freely given user contributions, and could not survive without them. For details and special offers, see http://www.netfuture.org/support.html.

