Copyright © 2003-4 UKUUG Ltd
UKUUG LISA/Winter Conference
Security & Networks
24th & 25th February 2005
SPADE (Statistical Packet Anomaly Detection Engine) is a plugin to the open source intrusion detection system Snort. At its simplest, it performs a statistical analysis of network traffic which allows it to detect "unusual" activity. It complements the traditional signature-based IDS methodology, which fails to detect anything that does not match a known signature; this offers the hope of "zero-day" detection of new attacks. I will quickly cover installation, configuration and simple running of the system on UNIX, and highlight the benefits of running it alongside a traditional IDS.
The Network Time Protocol, NTP, allows computers to synchronize their clocks with high precision. The standard implementation, which has been around for years, has a slight problem with its license, is huge, runs as root, is hard to configure and use, and has a poor security record. As a result, far too many machines have unsynchronized clocks.
That is where OpenNTPD kicks in - a lightweight and secure NTP implementation which is extremely easy to use. The design prerequisites, the actual design, and the implementation will be looked at, with a focus on security.
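As a taste of how easy it is meant to be, a minimal OpenNTPD configuration can be just a couple of lines; the addresses below are examples, not a recommended setup:

```conf
# ntpd.conf: sync against a pool of servers, and answer
# NTP queries on the loopback address only
servers pool.ntp.org
listen on 127.0.0.1
```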
The Border Gateway Protocol, BGP, is the standard protocol for exchanging routing information. So-called full mesh BGP peers build up a table describing the entire Internet. When something goes wrong with BGP, such as a TCP session being lost or the BGP daemon dying, the affected router loses all its routes and the affected site may disappear from the Internet until the problem is fixed. If an attacker is able to insert routes remotely, the implications are even worse. The basics of the BGP protocol will be examined from a security point of view, with particular attention to the TCP session, and to what has been done in OpenBSD and its included bgpd to secure BGP.
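One concrete way to protect the TCP session is to sign it with TCP MD5 signatures (RFC 2385), which OpenBSD's bgpd can be told to do per neighbour. This is a minimal sketch; the AS numbers, address and password are made-up examples:

```conf
# bgpd.conf sketch: MD5-sign the TCP session to one peer
AS 65001
neighbor 192.0.2.1 {
	remote-as 65002
	descr "upstream"
	tcp md5sig password "examplesecret"
}
```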
I discovered a new mission: I have to clue people in on writing signal handlers. This is a small thing; I need about 10-15 minutes. If there's room for such a thing I'd be happy to use it.
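The core of that clue can be shown in a few lines: do almost nothing inside the handler itself. This sketch uses Python's signal module to illustrate the pattern; in C the same rule is stricter still (set a volatile sig_atomic_t flag and return, since most library functions are not async-signal-safe):

```python
# The "do almost nothing in the handler" rule: the handler only
# records that the signal arrived; all real work happens in the
# normal flow of the program, where it is safe.
import os
import signal

got_signal = False

def handler(signum, frame):
    # Only touch a simple flag here. Calling printf()/malloc()
    # (or their equivalents) from inside a handler is unsafe in C.
    global got_signal
    got_signal = True

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

# Back in normal context, it is safe to do the real work.
if got_signal:
    print("signal handled safely")
```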
In distributed applications, security aspects are often neglected during design and development. Developers are typically experts in areas such as particle physics or medicine; security expertise is normally not available right from the start of their software development projects. When such scientific applications scale up to large distributed compute clusters, privacy and secrecy issues suddenly appear. Security then has to be bolted on to existing software, which is very difficult to do and often not very successful.
In this presentation I want to give an introduction to programming methods that make it possible to completely separate a distributed application's security model from its executable code, in particular its publicly exposed services. Such a software framework allows application developers to focus purely on their domain problems, and even encourages them to ignore security issues and leave it to security experts to solve these problems for them. I will give various examples using the JBoss application server.
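One established form of such separation is declarative security in standard EJB deployment descriptors, where access rules live in XML the security expert maintains, not in the bean's code. The bean and role names below are invented for illustration:

```xml
<!-- Fragment of ejb-jar.xml: only callers in the "physicist"
     role may invoke AnalysisService.runAnalysis -->
<assembly-descriptor>
  <security-role>
    <role-name>physicist</role-name>
  </security-role>
  <method-permission>
    <role-name>physicist</role-name>
    <method>
      <ejb-name>AnalysisService</ejb-name>
      <method-name>runAnalysis</method-name>
    </method>
  </method-permission>
</assembly-descriptor>
```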
One of the more vexing problems with the lack of security in Internet email is "collateral spam": the backscatter of bounces from forged email that is sent to the forged address. The effect is that users ignore bounces as just another kind of spam, and some email system administrators even disable bounces altogether. So as well as being a nuisance, it is making email less reliable.
This talk will describe a technique (known as Bounce Address Tag Validation or Signed Envelope Sender) for determining whether a bounce is in response to a legitimate message or is collateral spam. I'll describe the difficulties which make the technique complicated to implement and deploy, and my approach to a solution for Cambridge University. I'll also describe how the technique can be extended to detect the original forgeries as well as the backscatter.
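As a rough sketch of the idea, a BATV-style "prvs" tag embeds a key number, an expiry-day counter and a truncated HMAC in the envelope sender, so a returning bounce can be checked against a secret only the sending site knows. The exact layout and key handling below are simplified assumptions, not the deployed Cambridge implementation:

```python
# Hedged sketch of a BATV-style signed envelope sender.
# Tag layout assumed here: 1 key digit + 3-digit day + 6 hex
# digits of HMAC, i.e. prvs=KDDDSSSSSS=local@domain
import hashlib
import hmac
import time

SECRET = b"site-secret"  # example key; real deployments rotate keys

def sign_sender(local, domain, key_digit=0, now=None):
    """Return a signed envelope sender for local@domain."""
    now = time.time() if now is None else now
    day = int(now // 86400) % 1000           # 3-digit day counter
    msg = f"{key_digit}{day:03d}{local}@{domain}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha1).hexdigest()[:6]
    return f"prvs={key_digit}{day:03d}{sig}={local}@{domain}"

def check_sender(addr):
    """True if addr carries a tag this site could have generated.
    (A real validator would also reject expired day counters.)"""
    if not addr.startswith("prvs="):
        return False
    tag, rest = addr[5:].split("=", 1)
    key_digit, day, sig = tag[0], tag[1:4], tag[4:]
    msg = f"{key_digit}{day}{rest}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha1).hexdigest()[:6]
    return hmac.compare_digest(sig, good)
```

A bounce addressed to an untagged (or wrongly tagged) sender can then be rejected as backscatter from a forgery.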
It is possible to make one LDAP directory serve many applications in an organisation. This has the advantage of reducing the effort required to maintain the data, but it does mean that the design must be thought out very carefully before implementation starts.
LDAP directories are structured as a tree of entries, where each entry consists of a set of attribute-value pairs describing one object. The objects are often people, organisations, and departments, but can be anything at all. Schema is the term used to describe the shape of the directory and the rules that govern its content.
A hypothetical organisation is described, with requirements for a 'white pages' directory service as well as a wide range of authentication, authorisation, and application-specific directory needs. The issues arising from the LDAP standards are discussed, along with the problems of maintaining compatibility with a range of existing LDAP clients.
Some options are examined for the layout of the directory tree, with particular emphasis on avoiding the need to re-organise it later. This involves careful separation of the data describing people, departments, groups, and application-specific objects. A simple approach to entry design is proposed, based on the use of locally-defined auxiliary object classes. The effects of schema design on lookup performance are discussed. Some design tricks and pitfalls are presented, based on recent consulting experience.
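A person entry built this way might look like the following LDIF sketch, where `exampleOrgPerson` and its attribute are hypothetical locally-defined schema elements layered over the standard `inetOrgPerson` class:

```ldif
# Person entry mixing standard and local auxiliary classes
dn: uid=jbloggs,ou=People,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: exampleOrgPerson
uid: jbloggs
cn: Joe Bloggs
sn: Bloggs
mail: joe.bloggs@example.org
# attribute defined by the local auxiliary class
exampleOrgPayrollNumber: 12345
```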
In this lightning talk, Ivo will describe a number of security vulnerabilities in the GLOBUS GRID middleware software.
The Digital Certificate Operation in a Complex Environment (DCOCE) project has spent two years looking into the use of client/end user digital certificates as a means of authentication in a university environment. We have developed a system of distributing digital certificates - in a fairly user-friendly way - to end users, and have been looking into people's experiences as they obtain their certificates and go on to use them. This talk explores the advantages and disadvantages of certificates used in this way and whether there is a future in such a technology: are certificates only useful on the server (SSL)? Would certificates be useful as the front-end authentication tokens for single sign on systems? Does it matter that most users will never fully understand how they work?
The DCOCE project went further than most feasibility studies in this area. We recruited scores of users, with a spectrum of technical abilities. However, our most interesting user group was always going to be the technophobes: if they were happy - it would work.
Whatever your thoughts about client digital certificates, there is much to learn from the (human) methodologies of public key infrastructure (PKI) and how these can be made to scale. For instance, the approval of an applicant's request for a certificate is carried out by a registration authority (RA). An RA should not necessarily be a technical person, but should be a trusted person. If you can trust a person to give out the keys to the building, a membership card or an access pass, then they should be able to verify a user's identity and play a dominant role in establishing user accounts within the organisation. The sysadmins should police the RAs, but the RAs should do the majority of the work: they are the ones that know if the applicant is bogus or for real. This 'scalable' situation should be the goal of account creation mechanisms within large organisations. At present, most organisations either use sysadmins for these tasks, or - at best - semi-technical registration staff. Can we improve on this situation? Would the complete separation of authentication and authorisation help to achieve this?
We have a lot to learn from the philosophy of PKI, especially in the way in which a certification authority can trust a less technically-literate, but reliable, registration authority.
And the question is still open: will we be able to use client digital certificates for our end users, one day?
For any network administrator, monitoring network traffic is vital to identify data flow and potential abuse. With the increases in both usage and bandwidth, more data passes through routers than ever before, so dealing with the information can pose interesting problems. One of the most common formats for the traffic data produced is Netflow. This technology was developed by Cisco for use with their routers and has been adopted by many network systems.
However, system administrators have encountered bottlenecks when dealing with Netflow data generated by large, busy networks. A commonly desired approach is to parse the data received from the routers and insert it into an SQL database.
This talk looks at the development of a system that allows many routers to have their Netflow data parsed and organised into multiple DBMSs. The project utilises JavaSpaces, a service that is part of the Java/Jini infrastructure. By using the concept of a global shared memory, data from routers can be easily passed between agents, allowing the processing of the data to be distributed. Individual connections can also be maintained to databases, reducing the load of establishing multiple connections.
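The parsing step the agents perform can be sketched independently of JavaSpaces: unpack a NetFlow v5 export datagram (a 24-byte header followed by fixed 48-byte records, per Cisco's published v5 layout) into rows ready for a database. The choice of fields kept here is illustrative:

```python
# Hedged sketch of NetFlow v5 parsing for database insertion.
import socket
import struct

V5_HEADER = struct.Struct("!HHIIIIBBH")                 # 24 bytes
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")   # 48 bytes

def parse_v5(packet):
    """Yield one dict per flow record in a NetFlow v5 datagram."""
    version, count, *_ = V5_HEADER.unpack_from(packet, 0)
    if version != 5:
        raise ValueError("not a NetFlow v5 packet")
    for i in range(count):
        f = V5_RECORD.unpack_from(packet,
                                  V5_HEADER.size + i * V5_RECORD.size)
        # f = (srcaddr, dstaddr, nexthop, input, output, dPkts,
        #      dOctets, first, last, srcport, dstport, pad, flags,
        #      proto, tos, src_as, dst_as, src_mask, dst_mask, pad)
        yield {
            "src": socket.inet_ntoa(f[0]),
            "dst": socket.inet_ntoa(f[1]),
            "packets": f[5],
            "octets": f[6],
            "srcport": f[9],
            "dstport": f[10],
            "proto": f[13],
        }
```

Each dict maps directly onto an SQL INSERT, which is the unit of work an agent would take from the shared space.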
Allison Randal, President of the Perl Foundation and project manager for Perl 6, will deliver a Perl 6 workshop. This will cover Perl 6 syntax and the Parrot interpreter. Delegates will receive a copy of Allison's book, Perl 6 and Parrot Essentials.
The traditional OSI networking model includes seven layers, ranging from the Physical layer up to the Application layer. Integration naturally occurs at the Application layer, but this layer has to be expanded to express the different types of integration that are needed: data integration, application integration, and business process integration. Each of these integration types represents a layer, built on top of the previous one, which, as in the OSI model, uses and hides the features of the layer below. These integration layers have their own standards and protocols, which are at different stages of evolution and stabilisation. The topmost one, the business process layer, is today strongly influencing the way software systems are designed by introducing the concept of Service Oriented Architecture. Complex systems are described as collections of services, each accessible through a well-defined interface. These services are distributed in a computing environment which is not limited to the enterprise but can include the Internet. Activating these services implies the existence of an infrastructure called the Enterprise Service Bus, which is the highest level of network sophistication. This presentation introduces and positions new concepts such as RosettaNet, adapters, SOAP, XML and BPEL, and shows how existing technologies make these high-level concepts work in a business environment.
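As a concrete example of a service accessible through a well-defined interface, a minimal SOAP 1.1 request might look like this; the operation name and namespace are invented for illustration:

```xml
<!-- A SOAP 1.1 request invoking a hypothetical getQuote service -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuote xmlns="http://example.org/stock">
      <symbol>ACME</symbol>
    </getQuote>
  </soap:Body>
</soap:Envelope>
```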
Wietse discusses lessons learned from the software that he released over the years. This includes how the software came into being, the widely varying publicity that his work received, and the impact his work had on open source and security.
Wietse analyzes a very small program that is obviously correct, yet completely fails to perform as expected, for a multitude of reasons. The audience is expected to have some programming experience, but deep knowledge of C or UNIX is not required.