Given that the majority of IXs have opted for a switched Ethernet architecture, the information in this section concentrates on this type of equipment. However, the general principles will largely apply to other architectures. It is also assumed that the IX will be on a single site; the issues surrounding the building and operation of a multi-site IX are outside the scope of this document.
The most fundamental, and single most important, part of the IX is the switch equipment. It is understood that many start-up IXs will have tight financial constraints, and this will impose limitations on the type and amount of equipment available. A new IX may have little or no choice in the equipment used, and the only option for getting the IX started may be to use whatever equipment 'comes to hand'. This has been the case for many established IXs. To provide a 'proof of concept' and get an IX running, the priority must be to build an infrastructure with networks connected as early as possible. It is probably better to have something in place, even though it may fall short of the ultimate aims of the participants, than to delay whilst the funds are found (or not found!) to build the ideal IX.
However, the switch equipment is essentially the IX, and it is vital that careful consideration is given to it, if only to plan a road map for future growth and migration. Consideration at an early stage could reduce the risk of making premature decisions that result in major disruption when upgrade or growth takes place. One decision will be whether to have single or multiple switch equipment. Financial constraints may mean that the IX can only afford a single switch, and indeed many successful IXs have started this way. As mentioned above, this may be the only solution initially available to the IX; however, some observations on the use of multiple switches are discussed here. It is hoped that this will provide some assistance in planning for the future growth of an IX, even where the IX has had to be established with modest resources.
The major benefit of building a multiple switch IX is redundancy. This may be purely physical redundancy, by using more than one switch from a single vendor, or physical and 'genetic' redundancy, by using switches from two or more vendors. Single vendor redundancy can help to ensure the operation of the IX in the case of a hardware failure, but not in the case of certain software (or firmware) failures that may be common to all devices from that vendor. Multiple vendor redundancy can potentially help in both failure modes; it is unlikely that a software bug causing the failure of one vendor's equipment will also affect another vendor's equipment. Against this, some other factors should be considered: equipment from a single vendor may provide better economies of scale in purchasing, spares holding, maintenance and management than that of two or more vendors; interoperability issues are also more likely with equipment from multiple vendors.
In both single and multiple vendor situations, multiple switch IXs can offer better continuity of service than a single switch IX. IXs with multiple switches often offer, allow, or require members/customers to have two or more connections to two or more physically separate switches. In this situation, should one switch fail, the connected member/customer network has an alternative route to the IX. Continuity of service can also be provided during routine maintenance or upgrade of any one switch device. This does, however, result in increased hardware costs, greater management overhead, and more cost for members/customers, who will require a second router interface and more cabling from their infrastructure to the IX.
The phenomenal, and sometimes unexpected, growth that many IXs have experienced suggests that scalability is a very important factor. Fortunately, modular 'chassis and blade' switch equipment is available. By purchasing a switch chassis and adding interface blades when required, an IX can allow for a reasonable amount of growth and expansion whilst limiting the initial investment. A facet of switch equipment that has changed since the inception of the older IXs is the availability of higher speed ('fast', 100Mbps and 'GigE', 1Gbps) interfaces. Most, if not all, equipment is currently capable of 100Mbps, and this is often the basic interface standard; these interfaces can usually also support 10Mbps. Much current equipment is GigE capable, but to support this speed extra optical interface hardware and other options are usually required. The cost of these is often high in comparison with the switches and standard interfaces themselves, but the modular design of switches allows an IX to start with, say, one or two 10/100Mbps blades and add GigE blades to the same chassis when traffic levels demand higher speed interfaces.
In conclusion, the critical nature of the switch infrastructure means that it is advisable for the IX to invest in the best and most expandable equipment that its financial circumstances allow. Whilst a full technical review of switch equipment is outside the scope of this document, some items to consider are: modularity/upgradeability, power supply redundancy, management processor redundancy, software update mechanisms, stability of software, interoperability and out-of-band access. In addition to the main switch infrastructure, there is a range of ancillary equipment that is not essential to the core function of the IX. Since it is not 'core', the start-up IX may not wish to (or may not be able to) invest in this equipment, and in some cases the function of the equipment can be adequately carried out, and often is in the start-up phase, by members themselves.
To assist the IX and members in troubleshooting, some IXs provide a router with which all members peer and announce their routes. The router listens to, or 'collects' these announcements, but does not announce any routes itself, hence some IXs use the term 'collector' router for this equipment. IX staff and member ISPs have user accounts on this router, enabling them to have a central 'view' of the IX, independent of the 'view' through their own connection.
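The essence of a collector is a BGP speaker that accepts routes but never announces any. As an illustration only, the sketch below shows how this might look in the open-source BIRD routing daemon (version 2 syntax); the ASNs and addresses are placeholder documentation values, not taken from any real IX.

```
# Hypothetical route collector configuration sketch (BIRD 2 syntax).
# AS64496, AS64500 and the 192.0.2.0/24 addresses are placeholders.

router id 192.0.2.1;

protocol bgp member_a {
    local as 64496;                # the collector's own ASN (assumed)
    neighbor 192.0.2.10 as 64500;  # a member's peering interface
    ipv4 {
        import all;                # collect everything the member announces
        export none;               # announce no routes back
    };
}
```

A session like this would be defined per member; the key design choice is the `export none` filter, which makes the router a passive observer of the exchange rather than a participant in forwarding.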
Where an IX has server equipment hosting, for example, their web site and email, and possibly some staff requiring Internet access, a router with full Internet connectivity is obviously required. (With care in configuration, this function can be combined with that of a 'collector' router.) Connectivity for this router, and therefore for the IX's own network infrastructure, is often provided by member ISPs with little or no formal agreement. Whilst in a start-up scenario this sort of arrangement can be a quick and convenient solution, it is advisable that these relationships are formalised by contract as soon as possible.
Web and email servers
An IXP will, of course, require equipment to host its web site and email. This is, however, one of the easiest elements for a volunteer member or third party to provide. In many start-up IXPs the people running and managing the IXP are staff of a member ISP, so it is practical for that member ISP to provide this. As the IXP grows and more people become involved in running the IXP (perhaps being employed by the IXP itself), there will come a point when the advantage of having this equipment under the direct control of the IXP will outweigh the relatively small cost of server equipment.
Whilst IP address space is not physical equipment, it is worthy of some comment. It is quite possible to operate the IXP using a block of addresses from a member ISP, or even a third party, but for the orderly management and administration of the IXP it is preferable for the IXP to have its own address allocation.
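For illustration only, an IXP's own allocation is typically split between the shared peering LAN and the IXP's management and services network. The sketch below uses documentation prefixes (RFC 5737); a real IXP would number from space assigned to it, and all names are hypothetical.

```
# Illustrative IXP addressing plan (documentation prefixes, not real space)

Peering LAN:          192.0.2.0/24
  192.0.2.1           IXP collector router
  192.0.2.10          Member A peering interface
  192.0.2.11          Member B peering interface

Management/services:  198.51.100.0/24
  198.51.100.1        Router with full Internet connectivity
  198.51.100.10       Web and email server
```

Keeping the peering LAN in a single dedicated prefix, separate from the IXP's own services, simplifies renumbering and makes it clear which addresses belong to members and which to the IXP itself.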