TIK Experimental Cluster “Scylla”

News

July 2003

First ideas for a computer cluster (during data analyses for the DDoSVax project)

20.10.2003   

KIM (Kommission für Informatik-Mittel, the ETH commission for IT resources) accepts the 22-node cluster proposal

22.1.2004

Inauguration of our new 22-node "Scylla" cluster (Athlon 2.8 GHz, 1 GB RAM; Debian)

1.2.2004

Our cluster was shown on the Swiss National TV SF1 daily news "Tagesschau" in a report
on the MyDoom worm analysis done for DDoSVax

What is “Scylla”?

An experimental research computer cluster of 22 identical nodes and one additional gateway node, operated by TIK at ETH Zürich.
This cluster is used for network traffic analysis (DDoSVax), massive distributed simulations (P2P nodes), experimental kernel installations, Gigabit/s network testing, and other research in computer security and networking.


Computer Hardware Configuration

22 cluster nodes:
Athlon XP "Barton" 2.8 GHz
1 GB RAM, 120 GB Harddisk
CD-ROM Drive
1 Gbit/s-Ethernet

One single gateway node:
Athlon XP "Barton" 2.8 GHz
1 GB RAM
2× 200 GB Harddisk (200 GB usable, RAID-1 mirrored)
CD-ROM Drive
1 Gbit/s-Ethernet internal; 100 Mbit/s external.

Why is it named “Scylla”?

In Greek mythology, "Scylla" was an attractive nymph and a daughter of Phorcys. One day, the sea-god Glaucus fell passionately in love with her, but she rejected him. Glaucus therefore asked the sorceress Circe to brew him a love potion. Unfortunately, Circe instantly fell in love with Glaucus, who in turn rejected her.
Filled with jealousy of Scylla, Circe poured a potion of herbs into the water where Scylla was bathing and cast her spell. Suddenly, six monstrous dog heads grew out of the lower half of Scylla's body.

We named the cluster "Scylla" because its possibilities are fascinating at first, but controlling it efficiently is a delicate task.

Cables and Networking


In total approx. 260 m of cables, of which:
70 m of Ethernet Cat5e cable,
130 m of video and keyboard cables, and
60 m of power cables; plus approx. 200 cable ties.

Networking
24-port full-duplex switch at 1 Gbit/s; 20 Gbit/s aggregate bandwidth
Cluster nodes have private IP addresses in 10.0.0.0/24;
Gateway node: scylla.ethz.ch (external interface);
Ethernet star topology.
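The mapping of nodes to addresses inside 10.0.0.0/24 is not spelled out above. Purely as an illustration, the following Python sketch assumes the (hypothetical) convention that node nXX answers at 10.0.0.XX and pings all 22 nodes from the gateway over the internal network.

    #!/usr/bin/env python
    # Minimal reachability check for the cluster's private /24 network.
    # Assumption (not from the original page): node nXX answers at 10.0.0.XX.
    import subprocess

    def ping(host):
        """Return True if a single ICMP echo request to `host` is answered."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def main():
        down = []
        for i in range(1, 23):                      # nodes n01 .. n22
            node, addr = "n%02d" % i, "10.0.0.%d" % i
            up = ping(addr)
            if not up:
                down.append(node)
            print("%s (%s): %s" % (node, addr, "up" if up else "DOWN"))
        print("%d of 22 nodes reachable" % (22 - len(down)))

    if __name__ == "__main__":
        main()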

Default Software Installation

  • Nodes n01-n22:
    Debian Linux (kernel 2.4.22) with the openMosix cluster software extension

  • Gateway:
    Debian Linux (kernel 2.4.24); also serves as a Debian-testing mirror for the nodes

Software is distributed from the gateway to the nodes with FAI (Fully Automatic Installation):
the nodes boot via Etherboot (floppy) and fetch their kernel via TFTP from the gateway. This allows the boot kernel of each node to be changed individually from the gateway (see the sketch below), and all nodes can be (re-)installed in less than 10 minutes.
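The concrete FAI/Etherboot layout on the gateway is not documented on this page. As a rough sketch of the per-node kernel switch mentioned above, the following assumes a hypothetical /tftpboot directory in which each node's boot image name is simply a symlink that the gateway can repoint at a different kernel image; the directory, node names, and kernel file names are illustrative assumptions.

    #!/usr/bin/env python
    # Sketch: repoint a node's TFTP boot image so it fetches a different kernel
    # on the next Etherboot. Paths and file names are assumptions, not the
    # actual FAI configuration used on the Scylla gateway.
    import os

    TFTP_ROOT = "/tftpboot"          # hypothetical TFTP root on the gateway

    def set_boot_kernel(node, kernel_image):
        """Make <TFTP_ROOT>/<node> a symlink to <TFTP_ROOT>/<kernel_image>."""
        target = os.path.join(TFTP_ROOT, kernel_image)
        link = os.path.join(TFTP_ROOT, node)
        if not os.path.exists(target):
            raise IOError("kernel image not found: " + target)
        if os.path.islink(link) or os.path.exists(link):
            os.remove(link)          # drop the old boot image link
        os.symlink(target, link)

    # Example: boot n05 into a test kernel, all other nodes into the default one.
    if __name__ == "__main__":
        for i in range(1, 23):
            set_boot_kernel("n%02d" % i, "vmlinuz-2.4.22-openmosix")
        set_boot_kernel("n05", "vmlinuz-2.4.24-test")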

The clustering software openMosix is a Linux kernel extension for single-system image clustering, which turns a network of ordinary computers into a supercomputer.
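To illustrate the single-system-image idea: a program running on Scylla needs no cluster-specific code at all; it simply forks ordinary Linux processes, and the openMosix kernel extension migrates them to less-loaded nodes by itself. The sketch below just starts 22 CPU-bound worker processes on one node; nothing in the code refers to openMosix, and the process placement is assumed to be handled transparently by the kernel.

    #!/usr/bin/env python
    # Sketch of the openMosix usage model: spawn plain Linux processes and let
    # the kernel extension balance them across the 22 nodes. Nothing here is
    # openMosix-specific; the migration happens transparently in the kernel.
    from multiprocessing import Pool

    def burn(n):
        """A CPU-bound dummy job (a candidate for migration to another node)."""
        total = 0
        for i in range(10**7):
            total += (i * n) % 1000003
        return total

    if __name__ == "__main__":
        # One worker per cluster node; workers are ordinary processes.
        with Pool(processes=22) as pool:
            results = pool.map(burn, range(22))
        print("all %d workers finished" % len(results))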

Electrical Power Consumption


One node:
Approx. 150 W power consumption

Full cluster (gateway + 22 nodes):
Approx. 3.5 kW power consumption (and heat generation!)

The cluster has its own dedicated 3 × 25 A power line and circuit breaker.
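As a quick plausibility check of these figures, here is a small Python sketch; it assumes the Swiss 230 V mains voltage and an even spread of the load over the three phases, neither of which is stated above.

    #!/usr/bin/env python
    # Back-of-the-envelope check of the cluster's power figures.
    # Assumptions (not from the original page): 230 V mains, load spread
    # evenly over the three phases of the 3 x 25 A feed.
    NODES = 23                 # 22 compute nodes + 1 gateway
    WATTS_PER_NODE = 150.0     # approx. draw per node
    MAINS_VOLTS = 230.0
    PHASES, AMPS_PER_PHASE = 3, 25.0

    total_watts = NODES * WATTS_PER_NODE                    # ~3450 W, i.e. ~3.5 kW
    amps_per_phase = total_watts / PHASES / MAINS_VOLTS     # ~5 A per phase
    capacity_watts = PHASES * AMPS_PER_PHASE * MAINS_VOLTS  # ~17.25 kW available

    print("total draw:    %.2f kW" % (total_watts / 1000.0))
    print("per phase:     %.1f A of %.0f A" % (amps_per_phase, AMPS_PER_PHASE))
    print("line capacity: %.2f kW" % (capacity_watts / 1000.0))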

Hardware Monitoring

  • Harddisk failures:
    SMART monitoring of the harddisk drive in each node by the gateway; detects HDDs that are about to fail.
  • Temperature alerts:
    Temperature monitoring of each node by the gateway, with automatic shutdown of individual nodes or the whole cluster in case of overtemperature (see the sketch below).
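The actual monitoring scripts are not published here. The following is a minimal sketch of the idea, assuming smartmontools (smartctl) is installed on a node and that the temperature can be read from a proc file; the sensor path, the device name, and the 70 °C shutdown threshold are illustrative assumptions.

    #!/usr/bin/env python
    # Sketch of per-node health checks: SMART disk health via smartmontools
    # and a temperature read with automatic shutdown on overtemperature.
    # The sensor path, device name, and threshold are illustrative assumptions.
    import subprocess

    TEMP_FILE = "/proc/acpi/thermal_zone/THRM/temperature"   # hypothetical path
    SHUTDOWN_AT_C = 70

    def disk_healthy(device="/dev/hda"):
        """Return True if smartctl reports the overall health as PASSED."""
        out = subprocess.run(["smartctl", "-H", device],
                             capture_output=True, text=True)
        return "PASSED" in out.stdout

    def node_temperature():
        """Read the temperature in degrees Celsius from the (assumed) proc file."""
        with open(TEMP_FILE) as f:
            # typical format: "temperature:             55 C"
            return int(f.read().split()[1])

    if __name__ == "__main__":
        if not disk_healthy():
            print("WARNING: SMART predicts disk failure on this node")
        temp = node_temperature()
        if temp >= SHUTDOWN_AT_C:
            print("overtemperature (%d C), shutting node down" % temp)
            subprocess.run(["shutdown", "-h", "now"])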

Miscellaneous

  • n01 is a test node with remote power switch and serial console
  • “Experimental” cluster means that OS installation and software configurations can be changed
  • 100 GB persistent data storage per node; a new installation only changes the 10 GB system/swap partition

Cluster Administration and Support

Please contact Arno Wagner.

Inauguration Party

On January 22nd, 2004, the inauguration of the new TIK experimental cluster "Scylla" took place. After a technical presentation, Prof. Dr. Bernhard Plattner, head of the Computer Engineering and Networks Laboratory (TIK), cut the ribbon and switched on the power for our cluster. After all 22 nodes had successfully started and were ready for use, the inauguration was celebrated.

[Photos: Prof. Plattner cuts the red ribbon and switches the power on; students watch the 22 nodes starting up; all 22 nodes in state "ready"; inauguration celebration with champagne.]

(c) 2004  DDoSVax at TIK CSG ETH Zurich, Thomas Dübendorfer, Arno Wagner, last change: 4th May 2004