First ideas for a computer cluster (during data analysis for
the DDoSVax project)
KIM (Kommission für Informatik-Mittel) accepts the 22-node cluster proposal
Inauguration of our new 22-node "Scylla" cluster (Athlon XP 2.8
GHz, 1 GB RAM; Debian)
Our cluster was shown on the Swiss national TV channel SF1's daily news
"Tagesschau" in a report on the MyDoom worm analysis done for DDoSVax.
What is Scylla?
An experimental research computer cluster of 22 identical nodes and one additional
gateway node, operated by TIK at ETH
This cluster is used for network traffic analysis (DDoSVax), massive distributed
simulations (P2P nodes), experimental kernel installations, Gigabit/s network
testing, and other computer-security and network-related research.
Computer Hardware Configuration
22 cluster nodes:
Athlon XP "Barton" 2.8 GHz
1 GB RAM, 120 GB hard disk
One single gateway node:
Athlon XP "Barton" 2.8 GHz
1 GB RAM
2x 200 GB hard disks (RAID-1 mirrored, 200 GB net)
1 Gbit/s Ethernet internally; 100 Mbit/s externally.
Why is it named Scylla?
In Greek mythology, Scylla was an attractive nymph and a daughter
of Phorcys. One day, the sea-god Glaucus fell passionately in love with her.
However, she rejected him. Therefore, Glaucus asked the sorceress Circe to
create a love potion for him. Bad enough, Circe instantly fell in love with
Glaucus, who rejected her in turn. Filled with jealousy of Scylla, Circe
poured a potion of herbs into the water in which Scylla was bathing, and then
cast her spell. Suddenly, six monstrous dog heads grew out of Scylla's lower
body.
We named the cluster Scylla because the cluster's possibilities are
fascinating at first sight; controlling it efficiently, however, is a
delicate task.
Cables and Networking
In total ca. 260 m of cable, of which:
70 m of Ethernet Cat5e Cable,
130 m of video and keyboard cables, and
60 m of power cables. And approx. 200 cable ties.
24-port full-duplex switch at 1 Gbit/s per port; 20 Gbit/s aggregate bandwidth
Cluster nodes have private IP addresses in 10.0.0.0/24;
Gateway node: scylla.ethz.ch (external interface);
Ethernet star topology.
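With the nodes on a private 10.0.0.0/24 network and only the gateway externally reachable, the gateway presumably forwards and masquerades the nodes' outbound traffic. A minimal sketch with Linux 2.4-era iptables follows; the interface names, and the choice of NAT at all, are assumptions for illustration, not the documented Scylla setup:

```shell
#!/bin/sh
# Hypothetical gateway NAT setup (a sketch, not the actual Scylla
# configuration). Assumes eth0 = external 100 Mbit/s link and
# eth1 = internal 1 Gbit/s cluster network.

EXT_IF=eth0          # external interface (assumption)
INT_NET=10.0.0.0/24  # private cluster network

# Enable IPv4 forwarding on the gateway
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic leaving the private cluster network
iptables -t nat -A POSTROUTING -s $INT_NET -o $EXT_IF -j MASQUERADE
```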
Default Software Installation
- Nodes n01-n22:
Debian Linux with kernel 2.4.22 and the openMosix cluster software extension
- Gateway node:
Debian Linux with kernel 2.4.24; also acts as a Debian-testing mirror for the nodes
Software distribution to the nodes (from the gateway) with FAI
(Fully Automatic Installation):
Nodes boot via Etherboot (floppy); the kernel is fetched via tftp from the
gateway. This allows simple, individual changes of each node's boot kernel
from the gateway, and (re-)installation of all nodes in less than 10 minutes.
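Concretely, the Etherboot-to-tftp chain might be wired up on the gateway with a DHCP stanza roughly like the following; the MAC address, IP assignment, and kernel path are illustrative placeholders, not the real Scylla configuration. Pointing filename at a per-node kernel image is what makes changing a node's boot kernel a one-line edit on the gateway.

```
# /etc/dhcpd.conf fragment on the gateway (sketch; values are made up)
subnet 10.0.0.0 netmask 255.255.255.0 {
    option routers 10.0.0.254;

    host n01 {
        hardware ethernet 00:00:00:00:00:01;   # placeholder MAC
        fixed-address 10.0.0.1;
        filename "kernels/n01/vmlinuz";        # per-node kernel via tftp
    }
    # ... one host entry per node n02 through n22
}
```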
The clustering software openMosix
is a Linux kernel extension for single-system image clustering, which turns
a network of ordinary computers into a supercomputer.
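Because openMosix migrates ordinary Linux processes transparently, no special API is needed: simply starting more CPU-bound processes than the local node has CPUs lets the kernel balance them across the cluster. A hypothetical illustration (the dummy workload and worker count are made up; any process would do):

```shell
#!/bin/sh
# Sketch: spawn N ordinary CPU-bound processes. On an openMosix cluster
# the kernel would transparently migrate surplus processes to idle nodes;
# on a plain Linux box they simply run locally.
spawn_workers() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        # dummy CPU-bound workload: a counting loop
        sh -c 'j=0; while [ "$j" -lt 10000 ]; do j=$((j+1)); done' &
        i=$((i+1))
    done
    wait   # migrated processes return transparently
    echo "$n workers finished"
}

spawn_workers "${1:-4}"
```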
Electrical Power Consumption
Single node: approx. 150 W power consumption
Full cluster (gateway + 22 nodes):
Approx. 3.5 kW power consumption (and heat generation!)
Cluster has its own dedicated
3 * 25A power line and circuit breaker
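The full-cluster figure is consistent with the per-node figure: 23 machines at roughly 150 W each give about 3.45 kW, which rounds to the stated 3.5 kW. As a quick shell check:

```shell
# Sanity check of the cluster power figure:
# 22 nodes + 1 gateway at roughly 150 W each.
nodes=23
watts_per_node=150
total=$((nodes * watts_per_node))
echo "Total: ${total} W"   # 3450 W, i.e. about 3.5 kW
```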
- Hard disk failures:
SMART monitoring of the hard disk drive in each node by the gateway; detects
HDDs that are about to fail.
- Temperature alerts:
Temperature monitoring of each node by the gateway, and automatic shutdown
of individual nodes or the whole cluster in case of overtemperature.
- n01 is a test node with remote power switch and serial console
- As this is an experimental cluster, the OS installation and software
configuration can be changed
- 100 GB persistent data storage per node; a new installation only changes
the 10 GB system/swap partition
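The SMART and temperature monitoring described above could be sketched as follows. The smartctl health line, the temperature source, and the shutdown threshold are assumptions for illustration, not the actual Scylla scripts; the decision logic is factored into functions so it can be exercised without real hardware:

```shell
#!/bin/sh
# Sketch of gateway-side node monitoring (hypothetical, not the actual
# Scylla scripts).

# disk_ok: read `smartctl -H` output on stdin, succeed if healthy.
# (smartctl's ATA health line typically reads
# "SMART overall-health self-assessment test result: PASSED".)
disk_ok() {
    grep -q "PASSED"
}

# temp_action TEMP LIMIT: decide what to do for a node at TEMP degrees C.
temp_action() {
    if [ "$1" -ge "$2" ]; then
        echo "shutdown"   # overtemperature: power the node down
    else
        echo "ok"
    fi
}

# Example gateway loop (commented out; needs ssh access to the nodes,
# and the temperature path is an assumption):
# for n in $(seq -f "n%02g" 1 22); do
#     ssh "$n" smartctl -H /dev/hda | disk_ok || echo "$n: disk failing"
#     t=$(ssh "$n" cat /proc/cpu_temp)
#     [ "$(temp_action "$t" 70)" = "shutdown" ] && ssh "$n" shutdown -h now
# done
```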
Cluster Administration and Support
Please contact Arno Wagner.
On January 22nd, 2004, the inauguration of the new TIK
experimental cluster "Scylla" took place. After a technical presentation,
Prof. Dr. Bernhard Plattner,
head of the Computer Engineering and Networks Laboratory (TIK) cut the ribbon
and switched on the power for our cluster. After all 22 nodes had successfully
started and were ready for use, the inauguration was celebrated.