Newsletter Section 2

Reviews




The UKUUG Winter Conference 1996

(Matt Finlay)

The UK UNIX Users Group Winter 1996 conference, entitled “Building and maintaining secure open systems”, ran across Tuesday 17 and Wednesday 18 December. Its speakers covered topics ranging from hackers (their tools and methods) to firewalls. This article is (hopefully!) an informative review of the conference, both of what was said and of the points raised by those present during the panel sessions and the questioning of the speakers.

Registration was at 1:30pm on the Tuesday, a sensible time which gave me (and others!) plenty of time to travel to Manchester, and the first session didn't start until 2:00pm. This gave me a bit of time to “mingle” and have a look round. There was an extensive O'Reilly book stand with a large number of their books to look through, though I did notice a copy of “Inside the Win95 Registry” hanging around... probably a bit out of place at a conference organised and attended by UNIX users. There were plenty of copies of the launch issue of Linux World too, and, although I haven't finished reading it yet, it seems a fairly well written magazine, helping to fill the gap in published Linux documentation and general information. Mail info@eurodream.co.uk if you would like more information on Linux World.

From general conversation, it seemed that of the 50 or so attending the conference, about half represented academic institutions, with the other half representing companies of varying size and a few banks. It's understandable that universities are so security conscious, since they cater for a few thousand or so users, some of whom can be very knowledgeable about UNIX security.

The first speaker was Eddie Bleasdale, his talk entitled “The effects of Intranet technology on the design of enterprise client-server systems”. He covered the technologies available, where we are now, where we want to be and how to get there. The first part of his presentation covered hardware technologies such as the increase in processing power and the perennially falling price of disk storage. This led into the current state of systems, focusing on the domination of the PC and the evolution of the common operating systems for it. He then focused on client/server architecture and the proposed network computer (NC). This consisted largely of an overview of the Java Virtual Machine (JVM), which is fundamentally more important than the Java language itself, since it separates the application from the underlying software and hardware. That is to say, an application developed for the JVM is totally system independent. This is the basis of NCs, but Eddie indicated that people view NCs as inferior to PCs, and will therefore be reluctant to replace a PC with an NC. Finally, server architecture (eg NUMA [non-uniform memory architecture] computers and MPP [massively parallel processor] computers) and design were covered, followed by new enterprise architecture and a view of future opportunities.

Following this first talk was a short coffee break, after which Steve Bailey of Reflex Magnetics Ltd gave his presentation on “What you could do without a secure system”. He gave an overview of what system administrators are trying to protect themselves from. He started with why UNIX is so vulnerable: it has no default security, the source code is open for all to play with, and there are many bugs and holes in the operating system. Next he indicated what resources are available to hackers. These include e-zines, such as Phrack, along with published magazines, 2600 and Blacklisted being the only two available in the UK. There are regular 2600 meetings on the first Friday of every month in several cities around the UK, the strongest being in London and Manchester. There are also annual meetings, namely Defcon in America and Access All Areas ( http://www.access.org.uk ) in the UK. A few books detail information from a hacker's point of view, eg Computer Hacking: detection and protection by Imtiaz Malik. Steve then listed the methods of attack hackers can use, such as trojan horses, logic bombs, time bombs and worms. UNIX viruses are also a threat, although currently far less so than DOS or Windows viruses; the best known are X21 and X23 (both shell scripts) and the snoopy virus (which modifies /etc/passwd ). He also briefly covered the misuse of UNIX features, such as the r-protocols (eg rsh , rlogin etc), telnet, the subversion of email (eg fakemail by telnetting to port 25, or sniffing) and IP spoofing. Also covered were Denial of Service attacks, such as spawning multiple processes or the recently much-publicised SYN-ACK attack.
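To illustrate the fakemail point above: anyone who can reach a mail server's SMTP port can type a message in by hand and claim any sender they like. The hosts and addresses in this sketch are made up, and a well-run mail server will at least log the connecting machine:

$ telnet mailhost.example.com 25
220 mailhost.example.com Sendmail ready
HELO anywhere.example.org
250 mailhost.example.com Hello anywhere.example.org
MAIL FROM:<director@victim.example.com>
250 Sender ok
RCPT TO:<clerk@victim.example.com>
250 Recipient ok
DATA
354 Enter mail, end with "." on a line by itself
Subject: Urgent - please action

(forged message body goes here)
.
250 Message accepted for delivery
QUIT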

In the questions posed at the end of the presentation, it was pointed out that in the experience of some members of the audience, most “hackers” were simply running scripts available on the Internet to gain root access. This seemed contrary to what Steve pointed out during his talk: that many hackers were simply after the challenge of hacking root, and that malicious hackers out to do damage were few and far between. It is the author's opinion that Steve's comment was largely true, but with the growth in popularity of the Internet many people are experimenting with tools they can download. I believe that if a system administrator is generally security conscious, by updating utilities as patches are released, and by being aware of the main security holes in UNIX, then these merely curious people will pose no problem. Although most of the information presented was of a basic nature, it was interesting to see the issue of security from a hacker's viewpoint and the resources available to them.

The discussion at the end of this talk led into a panel session, “Are modern systems really secure?”, although the natural flow of the session tended to draw the conversation a little off-topic. Discussion started with the main routes to the subversion of systems. The overall solutions and procedures to follow were seen to be:

.     Server security
.     User authentication
.     User authorisation
.     Encryption/decryption of data
.     Integrity of data

Systems are sold by vendors in an insecure state, since this is the easiest to use. A balance must be struck between risk and adequate security - there is no point in wasting time and money securing a system to a far greater degree than is reasonably necessary.

Passwords are weak - people choose simple, easy to guess passwords, and if they are forced to use a complex password they usually defeat the purpose of it by writing it down somewhere. The pros and cons of the various possible alternatives were discussed, such as cards generating one-time passwords, or biometrics.

Kerberos was suggested as offering some user authorisation, as well as user authentication.

Encryption is a good, strong solution to the subversion of data, but is generally difficult to use and can be time consuming. There is a lot of choice as regards encryption, with PGP being a popular way of securing data from interception.

Verifying data integrity is linked to its encryption, and is performed by the use of digital signatures. Here again, PGP is a good choice.
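The panel did not go into commands, but as a rough sketch of how PGP covers both points, using the PGP 2.6.x command line (the file name and user ID here are made up):

$ pgp -esa report.txt recipient_userid
(encrypts and signs report.txt, producing the ASCII-armoured file report.asc)
$ pgp report.asc
(the recipient decrypts it, and the signature is checked automatically)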

Following this panel session was a short break, followed by a visit to the bar before the evening meal.

The second day opened with “Network security without firewalls” by Ian Batten. This presentation documented Ian's experiences and opinions of firewalls, and why he believes they give no advantage for the system set-up he uses. He covered the advantages (firewalls do work, and can be very beneficial if administered properly) and disadvantages (complex to administer, can breed complacency) of firewall systems, and the alternatives to them. The complacency issue was particularly stressed: there is a general attitude of “we've got a firewall, so we're OK as regards security”, which is terribly naive. For most systems, the greatest threat is from within. The proportion of adept penetrations from outside the system is very low in comparison with the proportion of data loss caused by users taking software and information off the system. The alternative proposed was a screening router coupled with host security. All packets that are anomalous (eg arriving from outside but claiming an internal source address) or unnecessary (eg ICMP redirects) are denied, and internal measures are taken to help counter any external attack. These include using encrypted ident as an auditing tool, using ssh for remote access to the system, and extensive system logging, among others. Logging was stressed at length, though care must be taken to ensure the log files are resistant to tampering.
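As a rough sketch of the screening idea on a Linux box of that era, using ipfwadm (the interface name and network number are made up, and the exact options should be checked against the ipfwadm manual page):

# drop, and log (-o), packets arriving on the outside interface eth1
# that claim to come from the internal network
/sbin/ipfwadm -I -a deny -o -W eth1 -S 192.168.1.0/24

# drop, and log, ICMP redirects (ICMP type 5) from anywhere
/sbin/ipfwadm -I -a deny -o -P icmp -S 0.0.0.0/0 5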

After this talk, the questions focused on possible measures to counteract employee abuse of data and the punishments available on the basis of log files. Many universities have a system by which students must sign an agreement not to mistreat the system as part of their introduction to it, although it was suggested that this agreement is legally unenforceable.

The second presentation was given by Shaun Lowry of March Systems Ltd. It was titled “Practical experience of implementing security using firewalls”, and was similar to the previous talk in that it was based on evaluating a firewall, but in this case the decision had been to use a firewall rather than any of the possible alternatives. The company worked on the security of systems, so the system they ran was intrinsically insecure (since they needed to test security models). Further, they had a customer who required them to set up a firewall, and who supplied some basic equipment to do so (ie a PC and a modem). The company built the firewall with Linux and the TIS Firewall Toolkit (from Trusted Information Systems, which provides several extensively logged proxies: SMAP [a secure mail application server, which helps to stop the external exploitation of sendmail], an FTP gateway, an HTTP gateway, a telnet gateway and a plug gateway for other services; see http://www.tis.com for more information). There were several generations of firewall: the first used the set-up just described with a dial-up UUCP account at an ISP. This was fairly secure, but only allowed access to email services. The second generation ran with a faster modem and used a PPP connection to the ISP. This was, like the previous incarnation, cheap, and allowed access to many more Internet services (ie USENET, FTP and WWW), but still had low bandwidth and required close monitoring of all incoming connections. The final generation ran through an ISDN packet-filtering router outside the firewall. This, though not as cheap as the previous systems, was cost effective, and the firewall and router complement each other well to give a secure model. However, no incoming connections are allowed (the FTP and WWW sites are held at the ISP).

Following on from this was a coffee break before the final presentation by Jim Reid, titled “Open system security: Traps and Pitfalls”. This talk focused largely on the weaknesses inherent in open systems and some of the many routes to subverting security in them. Again, the lack of security in vendor-distributed systems was stressed. The permissions and ownerships of many important files are too lax. It was pointed out that all files should be installed according to the principle of least privilege: the lowest level of permissions and ownerships necessary for the file to be used normally. Granting any greater privileges is a potential security risk. Another huge source of compromise is the misuse of setuid and setgid files: many are set as such unnecessarily. The insecurity of passwords was once again pointed out. Unfortunately, due to a slightly late start and a restriction on the finishing time, Jim was unable to completely finish his presentation. However, an extensive synopsis is given in the documentation, including the insecurity of X Windows, problems with MIME, Java and the WWW, along with some security solutions such as firewalls, SSL and SSH. Finally, there is a complete breakdown of a security hole: the infamous BSD lpd hole, which allows any file on the system to be overwritten.
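A simple starting point for the least-privilege audit Jim described is to list every setuid and setgid file on the system and question each one:

# list all setuid and setgid files for review; anything that does not
# obviously need the privilege is a candidate for chmod u-s / g-s
find / -type f \( -perm -4000 -o -perm -2000 \) -exec ls -ld {} \;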

The documentation also contains slides and a synopsis of Eddie Bleasdale's presentation, along with slides from Steve Bailey's and Shaun Lowry's talks. An overview of Ian Batten's talk was provided separately. These provide good material and important considerations when trying to set up a secure multi-user system.

A final panel session centred on “Are firewalls just smoke screens?” Here, the fact that a firewall is a help but not a complete solution was once again stressed, and that using a firewall did not mean a sysadmin could just forget about security issues.

There was a final lunch at 1:00pm, before departing. The choice of the Manchester Conference Centre for the conference was a sound one: the food was good (as was the wine!) and the rooms were comfortable and well kept. It was close to the station, and the fact that it was smack in the middle of the UMIST campus meant that all the university representatives felt at home!

On a final note, I'm sure I speak on behalf of everyone present in thanking those who organised the conference and those who spoke at it. On a personal note, I would like to thank Mick and Sue for allowing me to attend the conference - I thoroughly enjoyed the opportunity to talk to those involved in system security. If anyone has any comments on this article or wants to ask me anything about the conference, please feel free to mail me on bras0036@sable.ox.ac.uk .


Secure Shell – the Tele-worker's dream!

(Martin Houston)

One of the biggest problems with the Internet as it grows is security. The Internet is by far the easiest way for people and corporations to communicate on a global scale. As the nature of that communication goes beyond the academic and what is harmless public knowledge, the need for assurance that communications are private becomes vital.

At the moment the Internet is as open as a postal service where everyone is obliged to write everything on the backs of post-cards.

If all you are saying is “Weather is lovely - Wish you were here” then this is no problem. It would be much less fun if the postman came strolling up the path laughing at the current state of your bank account!

The Internet is like that because anyone with access to a network, over which Internet packets flow, can 'snoop' on those packets and save for later any with an interesting content (like credit card numbers). The software to do this is readily available on the net itself.

Secure shell is a solution to the problem of wanting to use remote computers without fear that the information you pass to them will be revealed to others just because the transfer takes place over the Internet.

I first encountered secure shell on the UK UNIX User Group central system. Here it has been operating for many months, controlling access to the box so that only authorized users could change information on the UKUUG web pages ( http://www.ukuug.org ).

I fetched secure shell, compiled it for Linux and managed to get secure and fast communications between my home Linux machine and the UKUUG Sun Sparc.

The success of ssh has opened up the opportunity of working from home without a client's confidential data being plastered all over the Internet. All that is needed is a secure shell installation and my home Linux machine becomes a fully functional and secure X terminal.

In the current issue of Linux World we bring you secure shell and full instructions on how to get it working on the Red Hat 3.03 Linux that was on the cover CD of the last issue of Linux World.

One note of caution: using strong encryption may or may not be legal in your country. Don't assume that it is just because you have the software to do it!

Installing ssh on the Red Hat 3.03 Release

This article assumes that you have a fully functioning Red Hat 3.03 Linux release set up, like the one on the cover CD of issue 1 of Linux World. The instructions here will probably apply to other Linux distributions as well, but there may be small differences.

Log into the system with a normal user account (not the root account).

Copy the file ssh1214.taz from the CD into your home directory.

Type tar xvzf ssh1214.taz . This will create a directory called ssh-1.2.14 .

Change directory to ssh-1.2.14 . Read the README and INSTALL files there.

Type ./configure ; this will automatically work out the correct configuration for ssh under Linux.

Typing make will build the ssh program.

If you are happy that ssh has been made correctly then type su root and run make install . By default this will install ssh into the /usr/local area and generate a unique key for your machine. It should also install ssh documentation into /usr/local/man . Type man ssh and you should see the on-line documentation if the installation was successful.
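Putting the steps so far together, the whole build and install looks like this ($ is an ordinary user's prompt, # is root's):

$ tar xvzf ssh1214.taz
$ cd ssh-1.2.14
$ ./configure
$ make
$ su root
# make install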

While you are still root, type /usr/local/sbin/sshd to start up the ssh daemon process, then test that ssh is working over the local loop-back by typing ssh localhost . You will be asked for your password, and then ssh will sign you into the local machine through an encrypted channel. If you want sshd to run every time the machine is restarted, this command has to be put in the /etc/rc.d/rc.local file.
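A minimal transcript of those commands (# is the root prompt; the echo line simply appends the start-up command to rc.local so that sshd comes up at every boot):

# /usr/local/sbin/sshd
# echo '/usr/local/sbin/sshd' >> /etc/rc.d/rc.local
# ssh localhost
(asked for the password, then signed in over the encrypted loop-back)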

If you have been doing this while running under X Windows, you should now be able to test the X server re-direction facility by starting an xterm session from your secure shell. You can tell you are going through a re-directed X server because your DISPLAY variable ( echo $DISPLAY ) will end in :1.0 instead of :0.0 .

If you repeat these steps on a second Linux machine, you will be able to ssh between the two machines and run private X sessions. It makes no difference to your privacy whether the machines are on the same LAN or connected over the Internet from the other side of the world. An added bonus of the ssh X re-director is that the X traffic is compressed. This can boost X performance considerably if the network connection between the machines is slow.
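For example, with sshd running on a second machine (the host name here is made up), a remote X session is simply:

$ ssh otherhost.example.com
(password prompt, then a shell on otherhost)
$ echo $DISPLAY
otherhost.example.com:1.0
$ xterm &
(the xterm appears on the local screen, carried over the encrypted, compressed channel)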

For example, it is just about bearable to use multiple X terminals and an xload process over a 14.4Kbps modem connection.

You can also set up ssh so it acts as a replacement for rsh and rcp . This will be covered briefly later in this article. Even in its simplest form ssh gives you two really great advantages:

1.    You know that anything that you type in an interactive session to a remote machine will not be read across the Internet by some snooper. It makes tele-working on sensitive projects over the Internet possible.

2.    The extra bonus of gzip compression of the encrypted data stream means that working with ssh can even be faster than a raw telnet session. You gain much and lose nothing from the better privacy.

Making a remote system trust you

Sometimes it is inconvenient to have to give a password to the remote machine every time you connect to it. Bear in mind that under ssh the password is never transmitted across the network except in strongly encrypted form. However, it is sometimes better to set up the remote machine so that it trusts your local machine. This means that ssh command execution and scp file transfer can be worked into shell or Perl scripts.
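A minimal sketch of the kind of script this makes possible (the host and file names are made up); once trust, or the ssh-agent described below, is set up it runs without any password prompts:

#!/bin/sh
# copy today's work to the office machine, then list it remotely to confirm
scp project.tar.gz me@office.example.com:/home/me/incoming/
ssh me@office.example.com "ls -l /home/me/incoming"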

A pre-requisite for this stage is that the remote system has been set up with ssh and you have an account on it.

At this stage you should be able to use ssh to log into the remote machine, although you will be required to give your system password each time.

On both the local and remote systems you must run the ssh-keygen command. This will generate you a public/private key pair and ask you for a pass phrase, so that your stored key is protected. As well as being memorable to you, it is very important that the pass phrase cannot be guessed by other people. The best pass phrases are pieces of nonsense that you find memorable, but that cannot be connected with you. If you are a Bank Manager then “Wibble flobble ploop” would be a much better choice than “I've got the cash”.

What ssh-keygen will do on each machine is put files called identity and identity.pub in a subdirectory called .ssh of your home directory. The actual contents of the private key are protected by the pass-phrase.

The first thing that you must do is let the target system have a copy of your public key. This is stored in the file ~/.ssh/identity.pub and needs to be transferred by some means to a file called ~/.ssh/authorized_keys .
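One convenient way to do the transfer is over ssh itself; at this stage you will still be asked for your remote system password (the account and host names are made up):

$ cat ~/.ssh/identity.pub | \
    ssh me@remote.example.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"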

In fact the authorized_keys file can contain more than one key; however, this will only occur if you are setting up a system where multiple remote users have trusted access to the same account on the remote system.

So, you have now copied the identity.pub file from your local system into authorized_keys on the remote system.

Now when you try ssh, instead of asking for your password it wants the pass phrase for your local private key. The pass phrase is needed so that your local private key can be unlocked and used to prove your identity to the remote system. The remote system then knows it must be you, because only the holder of the private key matching the public key you placed in authorized_keys can pass that test.

This is still no more convenient than the remote system asking for a password. However, it is more secure, because the option of allowing people to sign on with a system password can then be turned off in the ssh configuration file.
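As a sketch of that last point, and assuming the configuration file went into /etc during the install, the relevant lines in /etc/sshd_config would be (restart sshd after changing them):

# refuse plain system passwords; only key-based authentication remains
PasswordAuthentication no
# and refuse old-style rhosts trust as well
RhostsAuthentication no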

As mentioned earlier, it is possible to dispense with the need to give the remote system a password every time you connect to it. So far, this intermediate stage has involved more work, not less! The secret to removing the prompts altogether is another part of the ssh software called ssh-agent . It is an in-memory process that you give your pass-phrase to once, and which then uses your private key to authenticate you to remote systems.

As communications between ssh-agent and ssh processes need to be secure, ssh-agent must be an ancestor process of ssh processes. The command ssh-agent sh will start a sub-shell from which ssh commands can be run. The first thing we need to do, however, is tell this ssh-agent process what our pass phrase is:

$ ssh-agent sh
<<< the next prompt comes from ssh-agent's sub-shell
$ ssh-add
Need passphrase for /home/mhouston/.ssh/identity (mhouston@mh01.demon.co.uk).
Enter passphrase:

When you start using ssh you enter your pass-phrase just once. Now you will be able to enter ssh and scp commands on any system that trusts you, without being asked for any more passwords.
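From the sub-shell started by ssh-agent, commands to machines that trust you then run without any prompting (the host and file names are made up):

$ ssh server.example.com uptime
$ scp notes.txt server.example.com: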

Security Considerations

Setting up ssh trust on your Linux machine means that if security is breached on that machine, it is breached for your account on all machines that trust you. The machine that ssh originates from, and that stores your private key, should ideally be a personal system that only you use. A personal laptop computer that you can take with you would be better than an account on a big system with a large (and inquisitive) user population.

Ssh-agent is not daft enough to leave your pass-phrase lying around in memory, but a determined hacker with root privilege on your system might just be able to catch the input as it is typed. Such a hacker would then have the ability to become you and cause havoc!

Once secure shell is in place, we can remove the ability of systems to receive insecure service requests such as telnet and ftp .
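On a typical Linux system of the time this means commenting out the relevant lines in /etc/inetd.conf and telling inetd to re-read it (the exact lines vary between distributions):

# in /etc/inetd.conf, comment out the insecure services:
#telnet  stream  tcp  nowait  root  /usr/sbin/tcpd  in.telnetd
#ftp     stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a

# then make inetd re-read its configuration:
killall -HUP inetd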

Where SSH is going

As freely distributed software, ssh has found its way onto many thousands of computers in at least 40 countries. The basic ssh protocol will remain free, in the author's hope that it will become one of the mainstay standard protocols of the Internet.

For those who need commercial support for ssh, or enhancements to it such as a secure shell client for Microsoft Windows, Data Fellows Ltd has a commercially licensed version called “F-Secure SSH”. More information can be found on the Data Fellows web site:
http://www.datafellows.com/ .

Secure shell is, after the World Wide Web, the second “killer” application of the Internet. Whereas the web has revolutionised the public exchange of information on a global scale, secure shell will enable private data exchange to allow virtual corporations and whole trans-national virtual economies to develop. I can see why some governments are so opposed to people exercising their right to privacy!

Martin Houston is 33 years old, married with a daughter and four cats. He has been a UNIX systems programmer and system administrator for 13 years, and organiser of the UKUUG Linux SIG since 1994.
Reprinted with permission from Linux World Magazine


