Delivering free software at 2Gbit/s

This is a brief overview of how the Academic Computer Club at Umeå University managed to set up a system that could sustain 2Gbit/s of downloads to the general public for the latest Debian and Ubuntu releases.

Our goal

When a popular Linux distribution or other content is released, there is a huge peak in demand during the first day or week. We wanted a system that could handle that load, up to the limits set for us by the university network here at Umeå University.

In this text, B means byte and b means bit, as in MB/s and Gbit/s (megabytes per second and gigabits per second).

Our means

The regular ftp.acc.umu.se cluster (an old IBM SP) manages to sustain about 70MB/s, but it is fairly load tolerant. Even when we are at maximum capacity, adding a few hundred more users does not have a big effect beyond making every single download slightly slower.

For the Debian release, we also have the resources of cdimage.debian.org, a modern PC server with an external disk array (HP DL360 + MSA30) donated by HP.

In addition to this we borrowed two HP DL145s (dual Opteron, 8 GB of RAM each) from the local supercomputing center at the university (HPC2N) as temporary machines for just the release. They only have a slow IDE drive internally, so delivering data from disk is not an option; delivering data from RAM, however, they can easily saturate a gigabit ethernet uplink. These would serve as http offload hosts.
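Delivering from RAM here simply means keeping the images resident in the Linux page cache. As a minimal sketch (with a made-up mirror path), pre-warming the cache on an offload host can be as simple as reading every file once:

    # Read each image once so it ends up in the page cache;
    # later downloads are then served from RAM instead of the
    # slow IDE disk. /srv/mirror is a hypothetical path.
    for iso in /srv/mirror/*.iso; do
        cat "$iso" > /dev/null
    done

As long as the working set stays smaller than RAM, the kernel keeps the files cached and the disk is never touched on the hot path.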

And of course, the network. Thanks to the university, the computer club has several gigabit ethernet ports available, from a switch with a dual gigabit uplink to the campus network and in turn to the Swedish university network (SUNET).

Software-wise, we run AIX on ftp.acc.umu.se and Linux (Debian/Ubuntu) on the other hosts, with apache2 patched for LFS (large file support) and using the worker MPM. We typically tune Apache to allow 2000-5000 concurrent downloads.
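As an illustration, a worker MPM configuration in that range might look roughly like the following; the exact values are assumptions for the sketch, not our production settings:

    <IfModule worker.c>
        # 64 server processes x 64 threads each = 4096 download slots
        ServerLimit          64
        ThreadLimit          64
        StartServers          4
        ThreadsPerChild      64
        MinSpareThreads      64
        MaxSpareThreads     256
        # upper bound on concurrent connections
        MaxClients         4096
        # never recycle processes mid-download
        MaxRequestsPerChild   0
    </IfModule>

The worker MPM matters here because thousands of mostly-idle download connections are much cheaper as threads than as full prefork processes.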

We tune the TCP settings according to my network performance guide to get reasonable performance.
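On the Linux hosts this kind of tuning mostly comes down to a few sysctl settings; an illustrative /etc/sysctl.conf fragment (the values are examples, see the guide for actual recommendations):

    # Allow large socket buffers so a single TCP stream can fill
    # a path with a high bandwidth-delay product:
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # min / default / max TCP buffer sizes, in bytes:
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216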

Our setup, take one - Debian Sarge

For the Debian Sarge release, we set up the DL145s as partial mirrors of the i386 iso tree. They each got a DVD image and 3-4 CD images, totalling about 6 GB; the rest of the CD images were split up between different hosts elsewhere on the network. The primary site was the cdimage.debian.org host, and in addition ftp.acc.umu.se had a full mirror.

Before the release, we put in http redirect rules on cdimage.debian.org, pointing all downloaders to ftp.acc.umu.se, to leave cdimage.debian.org free to act as master node for all the mirrors that needed to get their copy.
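The rules themselves were nothing fancy; a minimal sketch using mod_alias, assuming the images lived under /debian-cd/:

    # Send everyone fetching images to the public archive, keeping
    # this host free to feed the official mirrors. "temp" gives a
    # 302 redirect, so the rule is easy to retire after the rush.
    RedirectMatch temp ^/debian-cd/(.*)$ http://ftp.acc.umu.se/debian-cd/$1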

After we had built all the images, we triggered mirrors and copied the data out to the temporary hosts, then started distributing the load by means of http redirects to the different hosts. This worked pretty well, but as we had promised not to use more than 2Gbit/s, we pushed more and more traffic off-campus during the day.

Soon there was also a rebuild with fixed images, due to a bug in the first set. This made the distribution hard to coordinate, and our bandwidth graphs were not as impressive as they could have been.

Network statistics from stats.sunet.se for the entire university; "in" means traffic in towards the core network, that is, downloads from our archive:

Lessons learned

Enough RAM to handle the "hot" set of data. Almost all accesses were to the i386 set, but this was too large to fit into RAM on the two DL145s together. Had it fit, handling the load would have been much easier.

After talking to the network people, it turned out they were fine with us actually pushing the limit on the 2Gbit/s uplink to the computer club. So we did not need to push traffic off-campus as we approached it; saturation was fine.

We also learnt the importance of verifying that the isos you redirect to are actually there, in current versions, before adding the redirects. Mistakes were made, late that night.
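A small pre-flight check would have caught this. A hedged sketch of the idea, with made-up hostnames and filenames:

    # Verify with an HTTP HEAD request that every redirect target
    # actually exists before enabling the redirect for it.
    for url in \
        http://offload1.example.org/debian-cd/sarge-i386-binary-1.iso \
        http://offload2.example.org/debian-cd/sarge-i386-binary-2.iso
    do
        if ! curl -sIf "$url" > /dev/null; then
            echo "MISSING: $url" >&2
        fi
    done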

Our setup, take two - Ubuntu Breezy

In this case, the setup was pretty similar to Sarge, with the exception that cdimage.debian.org was not involved at all beyond providing some background load on the network. As ftp.acc.umu.se also handles a large share of the traffic for the releases.ubuntu.com DNS name, we had plenty of users coming our way for their downloads.

One thing that made the Ubuntu release easier than Debian's is that they only released two to four CD images per architecture (counting Kubuntu), and only two architectures (i386 and amd64) were in high demand. This meant that the data in high demand easily fit into the 16 GB of RAM we had available.

Because I slept late, ftp.acc.umu.se at first had to take all the load itself. It managed reasonably well to start with, but after a while the load on some of the cluster nodes became high enough to make them rather unusable. Once we introduced the http redirects, the load on the cluster quickly fell to manageable levels.

The load balancing was ad hoc: we looked at how many concurrent downloads of the different CD images there were on ftp.acc.umu.se and the two HP servers, and added http redirects as needed. This ramped the network load up rapidly, and we were soon hitting the 2Gbit/s limit on our network uplink.
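A sketch of what such a per-image redirect might look like in Apache's mod_alias, with hypothetical offload hostnames and paths:

    # Pin specific hot images to specific offload hosts. Using
    # "temp" (302) means the mapping can be changed at any time.
    Redirect temp /ubuntu/breezy/ubuntu-5.10-install-i386.iso http://offload1.example.org/ubuntu/breezy/ubuntu-5.10-install-i386.iso
    Redirect temp /ubuntu/breezy/ubuntu-5.10-live-i386.iso http://offload2.example.org/ubuntu/breezy/ubuntu-5.10-live-i386.iso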

The day was spent making small tuning adjustments to keep the load even, avoiding putting too much load on ftp.acc.umu.se while making sure that the offload hosts had a small enough working set to keep it all in RAM. The network load stayed pinned at our uplink capacity all day, until it dropped off in the evening. While the load picked up again in the days that followed, we did not saturate our capacity again.

Network statistics from stats.sunet.se for the entire university; "in" means traffic in towards the core network, that is, downloads from our archive:

Our network statistics for the offload hosts and cdimage.debian.org (Farbror in the graph; it was serving Debian rather than Ubuntu CD images):

The 42TB of total network traffic over the week around the Breezy release shown in this last graph is equivalent to about 70 thousand cd-images (at roughly 600 MB per image). We estimate that about 10-15 thousand cd-images were downloaded during the first day, and about 100 thousand cd-images (60TB) during the week following the release.

As we found out later, our activities had both filled an uplink two "hops" up from our university, at the Nordic university network level, to the point of warnings going off, and triggered some DoS alarms at the national level. All part of a fun release.

Our thanks

To Umeå University and SUNET for the network that made this possible.

To HP and HPC2N and PDC for the equipment used.

To Debian and Ubuntu for the eager users wanting downloads.

Us

I am Mattias Wadenstein, an admin at the Academic Computer Club at Umeå University. "We" here generally refers to me and my co-admins at the club.