This page is entirely specific to the department for which this computation-server site was written. It is unlikely to interest outside readers!
There are three main computers open for shared use. Later in this page are descriptions of their hardware and operating system.
Each computer belongs to a particular group, but all are openly available to the whole division until competition over resources forces a discussion of time-planning or a new purchase -- I don't expect such serious competition to arise often, as most people's high demand for computing is quite transient.
There are several places where users can store files; these have very different properties with regard to speed, available space, cross-system visibility, and volatility! Copying files to and from other computers can be done in several ways.
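For example (not the only way, and the directory and user names here are only placeholders), scp and rsync over ssh will copy files between any of these machines:

  # copy one file to your home directory on diagsim
  scp results.dat diagsim:~/

  # mirror a whole directory into /local on diagsim; rsync only sends changes
  rsync -av mydata/ diagsim:/local/yourname/mydata/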
User accounts are the same (login name, password, home directory) on all these systems, and on the webserver, fileserver and desktop Linux computers.
There are several ways to use the servers remotely. The Remote Access section describes them and their relative merits; the best choice will probably vary with each user's needs.
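In the simplest case (just a hedged example -- the Remote Access section is the real reference), a session can be opened with ssh from any departmental Linux machine:

  # plain command-line login
  ssh yourname@diagsim

  # with X11 forwarding, so graphical programs display on your desktop
  ssh -X yourname@diagsim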
(The diagnostics group's simulation server.)
This has two quad-core Intel Xeon E5345 2.33GHz processors (CPUs), which have the recent `Core-2' architecture that does much more work per CPU MHz than the earlier Pentium4-based ones. They support the `amd64' extended memory instructions (known to Intel as EM64T): old 32-bit programs still run, but 64-bit programs can also be used, giving a single program access to all the available memory. There is 16 GB of DDR2-667 memory (RAM), and three 80 GB SATA disks on which a total of ~32 GB of striped (equal-priority) swap space is available. The system and the /local storage space are separate RAID5 devices across the remainder of the disks (swap comes first, on the fast part of the disks). A fourth, 10krpm SATA disk is mounted directly on /vmware as a place for virtual-machine images, for users whose programs don't run directly on the main host operating system.
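This layout can be checked from the running system itself; the commands below are just illustrative ways of reading the standard /proc and swap information:

  grep -c 'model name' /proc/cpuinfo   # number of logical CPUs
  grep MemTotal /proc/meminfo          # total RAM
  swapon -s                            # the swap areas and their (equal) priorities
  cat /proc/mdstat                     # the software-RAID (RAID5) devices for system and /local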
diagsim should be used for most work, as it is much less likely than the other, smaller machines to be slowed down by multiple users' CPU and memory use.
(The magnetics group's simulation server.)
Costing the same in 2004 as diagsim did in 2007, this has two single-core hyperthreaded 3.00GHz Intel Xeon processors based on the Pentium4. It has 2 GB of RAM, and three 146 GB SCSI (10000rpm) disks on which ~24 GB of striped swap space is available and on which RAID5 devices hold the system and /local.
magsim should be used mainly by the magnetics group, but others can use it as a further couple of processors if diagsim is already fully loaded. Amicable agreements can doubtless be achieved even when some projects demand a spurt of intensive use! The old webpages specifically about magsim are here.
(A desktop computer open for general use.)
This has a dual-core 3.0 GHz AMD Athlon64, 2 GB of RAM, and ~12 GB of striped swap over three 160 GB SATA disks, on which (again) the system and /local also live as RAID5 devices.
It is suggested that this machine should not normally be used for logins and interactive work (partly because it is more likely to be kicked accidentally, turned off to move a desk, etc.). However, anyone wanting to run multiple jobs over as many CPUs as possible (see the optimisation page) will probably want to add this computer to the list, as will anyone wanting to compare the (sometimes large) difference between AMD and Intel processors on certain tasks.
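As a rough sketch of what spreading jobs over several machines might look like (the job and file names are invented for illustration; the optimisation page is the proper reference):

  # start a long job politely (low priority, survives logout) on this machine
  nohup nice -n 19 ./myjob input1 > job1.log 2>&1 &

  # and another instance on diagsim, without keeping a login open there
  ssh diagsim 'cd ~/runs && nohup nice -n 19 ./myjob input2 > job2.log 2>&1 < /dev/null &'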
The kernel on each system is a recent Linux. The base system of C libraries and commands is GNU, as is pretty much universal in systems based on the Linux kernel. All our shared computers have 64-bit processors that can also run 32-bit programs; the main system and libraries are all 64-bit, but some support libraries for 32-bit programs are provided.
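For example, gcc on these systems produces 64-bit binaries by default; a 32-bit build needs the -m32 flag and those 32-bit support libraries (the program name here is just an illustration):

  gcc -O2 -o prog64 prog.c        # 64-bit, the default
  gcc -O2 -m32 -o prog32 prog.c   # 32-bit, against the 32-bit compatibility libraries
  file prog64 prog32              # reports ELF 64-bit and ELF 32-bit respectively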
A GNU/Linux system was chosen because it is good and stable (we run these machines for a year or more between reboots for new kernels, despite heavy loading), it has lots of useful native scripting languages and several compilers easily available, it can host even the proprietary programs that some of us use (see the emulation page for possibilities if you need programs not available for GNU/Linux), and it offers several ways for an unrestricted number of users to access the computer remotely.
The particular distribution (i.e. the collection of Linux, GNU, and loads of other programs, libraries and documentation) that is used is Gentoo. The main advantage of Gentoo over simpler alternatives such as RedHat is that it makes it very easy to install lots of rather unusual programs (extra python and perl modules, blas-atlas, R, Scilab, texmacs, kile, etc. etc.) with single commands rather than by fetching packages manually (the available packages can be seen as the subdirectories of /usr/portage/*). Gentoo also allows all kinds of options and optimisations to be applied to the packages, as can be seen in the file /etc/make.conf.
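For instance (the package name is only an example; emerge is Gentoo's standard package tool):

  emerge --search scilab   # search the Portage tree for a package
  emerge --ask scilab      # install it (as root), confirming first what will be built
  ls /usr/portage/         # browse the available package categories directly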
Information sources
Information about the hardware and present loading is available from several sources.
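For example, from a shell on any of the machines:

  uptime    # load averages over the last 1, 5 and 15 minutes
  top       # live per-process CPU and memory use
  free -m   # memory and swap in use, in megabytes
  who       # who else is logged in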
Page started: 2007-11-xx
Last change: 2008-12-01