Networks and IT Security
Sciences and Exploration Directorate - NASA's Goddard Space Flight Center

Key Initial Features of GSFC's L–Net Design

GSFC's L–Net design is best described in three parts, each of which is summarized below.

GSFC Local Network Part (i.e., within GSFC)

The initial GSFC L–Net will have an inter–building 10–Gigabit Ethernet (10–GE) backbone built from 10–GE–interconnected Force10 E600 and E300 switches. These in turn will provide 10–GE up/down links to Extreme Networks Summit 400–48t 1–GE switches, each with up to two 10–GE up/down links to various computer, display, and/or storage area network clusters. The backbone also will include up to four 1–GE connections to GSFC's Science and Engineering Network.

The 10–GE connected clusters minimally will include:

  • Thunderhead Beowulf compute cluster (512 CPUs): L–Net will connect two sets of 16 CPUs, each with its own 10–GE uplink, to enable local tests of distributed cluster processing
  • Tiled display cluster in the Scientific Visualization Studio: the CPUs of either the existing 2x3 or the new 3x3 tiled display
  • Network test stations in, and inter–building connection between, GSFC's High End Computer Network and EOSDIS Network Prototyping Lab
  • Optical switch from UMBC/Ray Chen
Other GSFC–based computer, display, and/or storage area network clusters can be easily added as funding becomes available for the L–Net's extension. A specific set of such extensions is described in Pat Gary's FY05 proposal for IRAD funding (scrubbed of specific building info) (pdf 1.9 MB).

Others can easily be planned, as the L–Net design is readily extendable.

Regional Network Part (i.e., between GSFC in Greenbelt, MD, & Level3 POP in McLean, VA)

In cooperation with the NSF–funded Dynamic Resource Allocation via GMPLS Optical Networks (DRAGON) Project (see Related Links), the L–Net Project is arranging to have one dedicated and one shared 10–GE link across the DRAGON network between a Force10 E600 10–GE switch at GSFC and a Force10 E300 10–GE switch that the L–Net Project will place in the Level3 POP at McLean. These two 10–GE links will ride over DRAGON's Movaz–based dense wavelength division multiplexing (DWDM) infrastructure (pdf 40KB).

DRAGON, with partial funding from the L–Net Project, will lease colocation space in the Level3 POP in McLean and a ring of fiber pairs from the Level3 POP to GWU to ISI–East and back to the Level3 POP. Several diagrams of the planned DRAGON network are available: an overview (pdf 44KB), an alternate overview (pdf 100KB), and a map–overlaid overview with inter–site fiber distances (pdf 276KB).

In the Level3 POP, DRAGON will deploy a Movaz RAYexpress OADM; and GSFC will deploy its Force10 E300 10–GE switch plus a couple of PC–based network performance test workstations with 10–GE network interface cards (NICs), quad–GE NICs, and/or 1–GE NICs.

The first two PC workstations to be placed at McLean will have dual 3.06 GHz Intel Xeon processors with 1 GB of memory and 80 GB disks, running Red Hat Linux 8.0 with a 2.4.18 SMP Linux kernel. Each will have one Intel PRO/10GbE LR Server Adapter 10–GE NIC (capable of streaming 5 Gbps), one Intel PRO/1000 MT Quad Port Server Adapter quad–GE NIC (with each 1–GE interface capable of full 1–GE line rate), and one Intel PRO/1000 MT Server Adapter 1–GE NIC (capable of full 1–GE line rate). Each PC can thus sustain 5 Gbps of network throughput, either via a single stream on its 10–GE interface or via five streams across its five 1–GE interfaces; so the pair of PCs in tandem can fully saturate a 10–GE connection, either via two 5–Gbps streams or via ten (2x5) 1–Gbps streams.
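The kind of memory–to–memory throughput test these workstations will run (e.g., with tools such as iperf) can be sketched in Python. This is an illustrative sketch only: the loopback address, ephemeral port, and small 10 MiB transfer size below are assumptions for demonstration, not part of the L–Net configuration.

```python
import socket
import threading
import time

def start_sink(results):
    """Accept one connection and count received bytes (an iperf-style sink)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # loopback stand-in for a test workstation
    srv.listen(1)
    port = srv.getsockname()[1]  # let the OS pick a free port

    def serve():
        conn, _ = srv.accept()
        total = 0
        while True:
            data = conn.recv(65536)
            if not data:
                break
            total += len(data)
        conn.close()
        srv.close()
        results["received"] = total

    t = threading.Thread(target=serve)
    t.start()
    return port, t

def stream(port, nbytes):
    """Send nbytes as fast as possible and report achieved throughput in Gbps."""
    payload = b"\x00" * 65536
    sent = 0
    start = time.perf_counter()
    with socket.create_connection(("127.0.0.1", port)) as cli:
        while sent < nbytes:
            cli.sendall(payload)
            sent += len(payload)
    elapsed = time.perf_counter() - start
    return sent, sent * 8 / elapsed / 1e9  # bits per second -> Gbps

results = {}
port, sink = start_sink(results)
sent, gbps = stream(port, 10 * 2**20)  # 10 MiB: small, for illustration only
sink.join()
print(f"sent {sent} bytes, received {results['received']} bytes, {gbps:.2f} Gbps")
```

On the actual workstations, five such 1–Gbps streams per PC (or one 5–Gbps stream on the 10–GE interface) would correspond to the 5–Gbps–per–PC figure above, and the pair of PCs in tandem then accounts for the full 10–GE saturation.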

Transcontinental Network Part (i.e., the NLR and the GSFC 10–GE switch and workstations in the Level3 POP in McLean, VA)

The NLR (see Related Links) has planned extensive Layer1/Physical network deployments (pdf 52KB) and an initial set of four network–wide 10–Gbps lambdas (pdf 48KB). Phase 1 deployment of NLR's Layer1/Physical network was completed on 08/28/04.

The L–Net Project will arrange GSFC membership in the Mid–Atlantic Terascale Partnership (MATP) (see Related Links), through which GSFC will obtain Class A member rights to use the assets and services of the NLR, including full rights to use the NLR's 10–GE Shared IP lambda and 1–GE VLAN lambda.

An L–Net Project Force10 E300 10–GE switch will be placed in the Level3 POP in McLean, initially with two 10–GE LAN PHY ports for connecting with NLR lambdas, in addition to its four other 10–GE ports for connecting across DRAGON or with the network performance test workstations GSFC will place there. The two 10–GE ports for NLR are planned initially to interface with MATP's (pdf 44KB) and HOPI's (pdf 36KB) equipment, respectively; temporarily, however, at least one of these ports could be used for an OptIPuter (see Related Links) interconnection.

An illustration (pdf 452KB) provides an overview of the 10–Gbps connections between the computational, storage, visualization, and network test clusters at OptIPuter/UCSD, OptIPuter/UIC, and GSFC.

A diagram (pdf 48KB) summarizing the GSFC L–Net design is provided in the following figure.

GSFC L-Net Configuration at McLean and Greenbelt