Networks and IT Security
Sciences and Exploration Directorate - NASA's Goddard Space Flight Center

FY04 IRAD-FUNDED GSFC LAMBDA NETWORK (L-Net)

Notable Progress on L-Net's Implementation

As with GSFC's L-Net design, notable progress on the L-Net's implementation is best described in three parts.

Each of these parts is summarized below.

GSFC Local Network Part (i.e., within GSFC)

  • On March 2, 2004, funding for the L-Net Project was awarded by GSFC's Executive Council through the Goddard Technology Management Office.
  • From March through August 2004, prototype intra- and inter-building 1-Gigabit Ethernet (GE) and 10-GE L-Net connections were tested and demonstrated with 1-GE and 10-GE switches borrowed from vendors.
  • During the week of March 8, 2004, while stress testing the throughput performance of the 10-GE interfaces of Force10's E300 10-GE switch, GSFC's Bill Fink and Paul Lang (ADNET) transferred over 1.1 petabytes of data in three days (pdf, 124KB) using standard IP protocols; a back-of-envelope check of the sustained rate this implies appears in the arithmetic sketch following this list. A diagram illustrating the test configuration (pdf, 112KB) is provided.
  • During the week of April 12, 2004, GSFC's 10-GE L-Net efforts also appeared in the press, in a Federal Computer Week April 12, 2004 article and an HPCwire April 16, 2004 article (107454) (pdf, 16KB).
  • On April 20, 2004, Bill Fink produced a new local "high water mark" by measuring greater than 5.0 gigabits per second (Gbps) in end-user-to-end-user, TCP/IP-carried, single-stream, half-duplex (one direction), memory-to-memory data transfers in the High End Computer Network (HECN) R&D testbed; a minimal socket-level sketch of this kind of memory-to-memory test appears after this list. The testing was conducted between two dual 3 GHz Xeon PCs with 133 MHz PCI-X buses in a NIC-to-NIC interconnection configuration with Intel PRO/10GbE LR Server Adapter network interface cards (NICs). Fink also measured greater than 5.7 Gbps in tests involving two (bi-directional) data streams across that same interconnection. These improvements over the previous local measurements of 3.2 Gbps for a single stream and 4.5 Gbps bi-directional were made possible primarily by Fink's enabling 4096-byte (rather than the default 512-byte) burst transfers on the PCI-X buses. Additional details (pdf, 12KB) are available.
  • On August 3, 2004, GSFC's first inter-building 10-GE L-Net link was implemented between respective Force10 E300 10-GE switches in the HECN lab in one building at GSFC and the EOSDIS Network Prototyping Lab in another building at GSFC.
  • On August 3, 2004, two 10-GE uplinks from a single Extreme Network Summit 400 were added to the GSFC-local 10-Gbps L-Net network. Each 10-GE uplink aggregated sixteen 1-GE connections associated with a 16-CPU cluster separated from the Thunderhead cluster, and each CPU in each cluster had a 1-GE connection to the Extreme Network Summit 400.
  • The September 16, 2004 email from Bill Fink with September 17, 2004 introductory info from Pat Gary (pdf, 28KB) describes both the type of 10-GE inter-cluster connectivity that has been planned via GSFC's L-Net efforts and the end-to-end user/application-level throughput performance that has been targeted. A diagram illustrating the inter-cluster connectivity (pdf, 32KB) is provided. A subset of the throughput performance (pdf, 40KB) is provided.
  • During the week of November 1, 2004, on the Thunderhead cluster, the borrowed 10-GE uplink switch (with two sets of sixteen 1-GE connections respectively to two sets of sixteen CPUs in the Thunderhead cluster) was replaced with an L-Net project-acquired Extreme Network Summit 400-48t switch. A set of two photos taken by GSFC's Kevin Kranacs showing the Extreme Network Summit 400-48t switch connected to the Thunderhead cluster is provided.
  • During the week of November 1, 2004, Shujia Zhou (Northrop Grumman/TASC) demonstrated a prototype of the Grid-enabled ESMF, based on DOE's CCA/XCAT framework, across a 10-GE link between two nodes in the Thunderhead cluster connected via the L-Net. This implementation is a follow-up to the effort Zhou described (pdf, 255KB).
  • On November 22, 2004, a diagram illustrating Zhou's prototype effort described above was included in the presentation by Josephine Palencia (pdf, 708KB) at GSFC's Thunderhead Beowulf Cluster Workshop organized by GSFC's Commodity Computing Cluster Project (see Related Links).
  • On November 29, 2004, Shujia Zhou provided a diagram illustrating his more recent prototype effort (pdf, 12KB).
  • During the week of December 13, 2004, the "High speed networking and Grid computing for large-scale simulation in geodynamics" poster (ppt, 1.3MB) by GSFC's Weijia Kuang et al was presented at the 2004 AGU Fall Meeting. The accepted abstract (pdf, 12KB) submitted for the Meeting is provided. A full-size view of the GSFC 10-GE L-Net diagram (pdf, 28KB) shown as Figure 5 in the poster is provided.
  • On February 9, 2005, a Force10 E600 10-GE switch/router with Terascale architecture and twelve 10-GE ports was installed at GSFC as a new 10-GE GSFC local hub for the L-Net. A set of three photos taken by GSFC's Kevin Kranacs showing this set up at GSFC is provided.
  • In May 2005, the 10-GE ports active in the Force10 E600 10-GE switch/router at GSFC included connections with: ENPL's Force10 E300 10-GE switch/router, GSFC's Geophysical and Astronomical Observatory, two test clusters extracted from Thunderhead (1 port each), NCCS's loaner Force10 E300 10-GE switch/router, and up/down links from/to two Extreme Network Summit 400 1-GE switches (1 port each).
  • On June 22, 2005, Shujia Zhou presented "High-Speed Network and Grid Computing for High-End Computation: Application in Geodynamics Ensemble Simulations"(pdf, 784KB) at the CompFrame 2005 Workshop in Atlanta.
  • On July 8, 2005, in addition to the seven active 10-GE ports noted above in the May 2005 entry, the following 10-GE ports also were active in the L-Net's Force10 E600 10-GE switch/router at GSFC: two Finisar alpha-version 80-km DWDM XFPs (1 port each), four 10-GE NICs (two Intel Pro/1000 (1 port each) and two Chelsio T110 (1 port each), which predates the now-released Chelsio T210) connecting two Pentium/Linux-based network test workstations (2 NICs in each workstation), and one Juniper beta-version NetScreen 5400 firewall (2 ports). This brought the total number of active 10-GE ports on that switch/router to 15. On July 12, 2005, Juniper Networks concluded its NetScreen SPM 2XGE Beta Program, and the L-Net project subsequently submitted its post-beta feedback and returned the two loaner NetScreen 5400s.
  • On July 20, 2005, S. Zhou presented "Grid Computing in Distributed High-End Computing Applications: Coupled Climate Models and Geodynamics Ensemble Simulations" at the ESMF on the GRID Workshop held as the first day of the 4th Annual ESMF Community Meeting.
  • During the week of August 1, 2005, one Extreme Network Summit 400 1-GE switch was moved to GSFC's SVS and used for connecting both the 9-tile/cpu 'hydra' Hyperwall cluster and the 2-tile/cpu 'lambda' cluster with a 10-GE uplink to the L-Net. A photo showing the Hyperwall cluster displaying two sets of 1-km resolution Land Information System data together with real-time HD video streamed from UCSD is provided. A closer photo of the Hyperwall is provided; and an even closer photo is provided.
  • On August 8, 2005, a demo of "21st Century National-Scale Team Science", involving UCSD and GSFC linked using OptIPuter technologies over a 10-Gbps lambda across the NLR and DRAGON networks, was presented in GSFC's SVS to NASA Associate Administrator of Science Alphonso Diaz and ~30 other attendees. The demo's Program Guide/Agenda (pdf, 564KB) is provided. A news release prepared by UCSD about the demo is provided. A partially representative set of GSFC photos taken during the demo is provided. A montage of amateur photos (pdf, 11.1MB) taken during the demo is provided.
  • On October 19, 2005, Bill Fink arranged data flow paths between thunder2 and thunder3 clusters to loop across the DRAGON through McLean and back to enable S. Zhou's demonstration prototypes of cross-organization coupling of climate models over a high speed network as referenced in the ESMF-related poster "Cross-Organization Coupling of Climate Models through ESMF." A diagram (pdf, 356KB) illustrating the looped 10-Gbps data flow path across the DRAGON network between thunder2 and thunder3 is available.
  • During the week of November 14-18, 2005, as part of the NASA exhibit (booth 1810) at the SC|05 International Conference for High Performance Computing, Networking, Storage and Analysis held November 12-18, 2005 in Seattle, Shujia Zhou (610.3/Northrop Grumman) demonstrated, for the GSFC ESTO/CT Earth System Modeling Framework (ESMF) project, a prototype of ESMF-based cross-organization coupling of climate models over a high speed network using the thunder2 and thunder3 clusters at GSFC. The data flows between those clusters are intentionally looped across GSFC's 10-Gbps Lambda Network connection with the NSF-funded Dynamic Resource Allocation via GMPLS Optical Networks (DRAGON) regional optical network to allow preliminary study of the WAN-oriented latency aspects of distributed processing. A diagram illustrating the looped 10-Gbps data flow path across the DRAGON network between thunder2 and thunder3 is referenced in the October 19 bullet above. Mrtg-generated timeline graphs (pdf, 64KB) of the "weekly" 30-minute average data rates of the traffic flows supported during SC|05 to/from thunder2, thunder3, and the loop port on the 10-GE switch at McLean are available. The slides (pdf, 6.1MB) used by Zhou at the exhibit booth to describe the prototyping of ESMF-based cross-organization coupling of climate models over a high speed network are available.
  • On April 9, 2006, Bill Fink released some nuttcp-based throughput performance results he obtained while testing "back-to-back" Myricom 10-GigE PCI-Express NICs in Northrop Grumman-provided dual 2.2 GHz Opteron CPUs and in DRAGON Project-provided dual Xeon64 3.2GHz processors (the latter tests were conducted on March 3, 2006).
  • On April 19, 2006, the NCCS began pre-operational end-to-end readiness tests of its 10-Gbps network infrastructure, which includes a near-10-Gbps firewall capability designed and implemented by the NCCS's Lee Sheridan (CSC), multiple 10-GE ports on its new Force10 E600 10-GE switch/router, and 10-GE network interface cards (NICs) for several NCCS supercomputer platforms. The NCCS's firewall capability is essentially an Intel-based Linux box, with Neterion 10-GE NICs in PCI-X2 I/O buses, running IP Tables stateful rulesets. Bill Fink's stress testing of that new firewall capability demonstrated that it supports up to 6.7 Gbps of network throughput performance. Fink's stress testing was accomplished using locally generated single-stream Transmission Control Protocol (TCP)-based IP packets between a pair of high-performance workstations with Myricom 10-GE NICs in their PCI-Express I/O buses. End-to-end single-stream TCP-based IP packet flow testing between an NCCS SGI Origin 3800 and a not-fully-tuned high-performance workstation at ARC across the 10-Gbps NREN demonstrated up to 1.5-Gbps throughput performance, and testing with two simultaneous TCP-based streams demonstrated up to 3.0-Gbps throughput performance.
  • On June 7, 2006, the NCCS operationally cut over the entire set of NCCS supercomputer platforms to its 10-GE network infrastructure, so that all high performance computer (HPC) data flows to hosts on the local SEN or CNE or the wide-area NREN now cross the new near-10-Gbps firewall capability. The more-than-doubled throughput performance rates now achieved by all HPC data flows to and/or from the NCCS are illustrated in the NREN-provided graph of 5-9 Jun 06 data flows from the 1-GE-connected NCCS datastage to the NAS-based lou file server (pdf, 572KB).
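
The highest-rate results in this list (for example, the 5.0-Gbps single-stream measurement and the 6.7-Gbps firewall result) were obtained with memory-to-memory tests, so that disk I/O does not limit what the network can show. The following Python sketch illustrates the idea only; it is not nuttcp, and the port number, buffer size, and duration are illustrative assumptions:

    #!/usr/bin/env python3
    # Minimal memory-to-memory TCP throughput sketch in the spirit of the
    # nuttcp tests described above.  This is NOT nuttcp; the port, buffer
    # size, and duration below are illustrative assumptions only.
    import socket
    import sys
    import time

    PORT = 5101               # hypothetical test port
    BUF = 4 * 1024 * 1024     # 4 MiB in-memory buffer; no disk I/O involved
    DURATION = 10             # seconds to transmit

    def serve():
        """Receiver: count bytes until the sender closes the connection."""
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                total, start = 0, time.time()
                while chunk := conn.recv(BUF):
                    total += len(chunk)
                elapsed = time.time() - start
            print(f"received {total / 1e9:.2f} GB in {elapsed:.1f} s "
                  f"= {8 * total / elapsed / 1e9:.2f} Gbps")

    def send(host):
        """Sender: stream a fixed in-memory buffer for DURATION seconds."""
        payload = bytes(BUF)
        total, end = 0, time.time() + DURATION
        with socket.create_connection((host, PORT)) as conn:
            while time.time() < end:
                conn.sendall(payload)
                total += len(payload)
        print(f"sent {total / 1e9:.2f} GB")

    if __name__ == "__main__":
        serve() if len(sys.argv) == 1 else send(sys.argv[1])

nuttcp itself adds UDP mode, rate pacing, window-size control, and interval and loss reporting that this sketch omits.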
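
A little arithmetic also helps put the quoted volumes and link counts in perspective; the figures below simply take the durations and counts in the entries above at face value:

    # Illustrative arithmetic only; durations and counts are taken at face
    # value from the entries above.
    def avg_gbps(total_bytes, seconds):
        """Average rate needed to move total_bytes in the given time."""
        return 8 * total_bytes / seconds / 1e9

    # "over 1.1 petabytes ... in three days" implies a sustained average of
    # roughly 34 Gbps, consistent with several 10-GE ports driven in parallel.
    print(avg_gbps(1.1e15, 3 * 24 * 3600))   # ~34 Gbps

    # A single 10-GE uplink carrying sixteen 1-GE cluster connections is
    # oversubscribed 1.6:1 if all sixteen nodes transmit at line rate.
    print(16 * 1 / 10)                       # 1.6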

Regional Network Part (i.e., between GSFC in Greenbelt, MD and the Level3 POP in McLean, VA)

  • On September 1, 2004, the DRAGON Project (see Related Links) set up its first inter-campus DWDM links between GSFC, the University of Maryland College Park, and the University of Southern California's Information Sciences Institute in Arlington, VA. Three 2.4-Gbps DWDM channels were set up end-to-end using Movaz iWSS optical switch and RAYexpress optical add/drop multiplexer (OADM) components. A set of nine photos taken by GSFC's Kevin Kranacs showing this set up at GSFC is provided.
  • During the week of October 18, 2004, Bill Fink and Paul Lang deployed a 10-Gbps L-Net connection between select GSFC buildings to support greater-than-1-Gbps real-time data flows from the GGAO facility for the e-VLBI Project (see Related Links). The data then flow from GSFC on 2.4-Gbps optical wavelength links across the DRAGON and BOSSNET networks to MIT/Haystack, where they are correlated with similar data acquired in Westford, MA. Two high-level network diagrams showing these links (pdf, 92KB) are provided.
  • During the week of November 8, 2004, live demonstrations of the above described e-VLBI real-time flows/correlations were shown in the Xnet booth at SC2004. Additional information highlighting the role of the DRAGON and end site networks in implementing the e-VLBI real-time flows/correlations is available. An Internet2-provided summary of eVLBI's data flows during SC2004 also is available.
  • In April 2005, DRAGON, with partial funding from the L-Net Project, contracted to lease colocation space in the Level3 POP in McLean and a ring of fiber pairs from the Level3 POP to GWU to ISI-East and back to the Level3 POP.
  • On April 27, 2005, with the installation at GSFC of a set of DRAGON-provisioned standard 10-Gbps optical transponders from Movaz Networks, the L-Net's first 10-Gbps lambda across the DRAGON RON became operational. However, this ITU-channel36-compliant lambda only connected GSFC with College Park, because between DCNE and MCLN through ARLG the same ITU-channel36-compliant lambda was temporarily "loaned out" by DRAGON to the HOPI project to effect a 10-Gbps connection for them between the Abilene network at DCNE and the NLR/HOPI 10-Gbps at MCLN. (A sketch converting these ITU channel numbers to center wavelengths follows this list.)
  • During May through July 2005, the 10-Gbps ITU-channel36-compliant lambda between GSFC and CLPK was used by DRAGON's Chris Tracy and GSFC's Paul Lang for conducting a series of distributed tests of Raptor Network Technology's new Ether-Raptor ER-1010 stackable 10-Gbps switches.
  • In May 2005, in the new DRAGON/MAX/HOPI/NASA suite in the Level3 POP, DRAGON deployed a Movaz RAYexpress OADM initially with only one 2.4-Gbps DWDM channel for L-Net use; and Bill Fink, Paul Lang, and Jeff Martz relocated the L-Net's Force10 E300 10-GE switch plus its two network performance test workstations from the NLR suite to the new DRAGON/MAX/HOPI/NASA suite in the Level3 POP.
  • On May 26, 2005, DRAGON's Chris Tracy connected their Movaz RAYexpress OADM in the Level3 POP in McLean to the fiber pairs between the Level3 POP and DRAGON at GWU; and Bill Fink conducted 1-GE end-to-end nuttcp tests pair-wise among network test workstations both between GSFC and McLean and between GSFC and San Diego. A summary of the throughput performance obtained (pdf, 28KB) is provided.
  • On July 27, 2005, using DRAGON's 10-Gbps ITU-channel36-compliant lambda between GSFC and College Park, Physical Optics Corporation (POC) demonstrated at GSFC a holographic true-3D HDTV video display system that does not require goggles or other special head-gear, using live high definition stereoscopic video feeds from cameras placed in College Park, MD. A description of the display device is provided. Photos taken during the demo (pdf, 60KB) are provided.
  • On July 27, 2005, DRAGON's Chris Tracy reconfigured the HOPI connection across DRAGON to use the ITU-channel39-compliant lambda, freeing DRAGON's ITU-channel36-compliant lambda for other use. Tracy and GSFC's Bill Fink then tried to establish 10-Gbps links across DRAGON between GSFC and MCLN using the ~115-km end-to-end fiber path which traversed via GWU. Both standard transponders from Movaz and pre-commercial DWDM XFPs from Finisar were tried but, since both were rated only for up to 80-km use, neither was successful at that distance.
  • On July 28, 2005, DRAGON's Chris Tracy and Tom Lehman re-connected their Movaz RAYexpress OADM in the Level3 POP in McLean to the then newly available fiber pair between the Level3 POP and DRAGON at ISI-E in Arlington. This shortened the DRAGON end-to-end fiber path between GSFC and McLean from ~115 km to ~80 km and permitted the successful setup of two 10-Gbps lambdas between GSFC and McLean – one being ITU-channel49-compliant using Finisar pre-commercial 80-km DWDM XFPs and the other being ITU-channel36-compliant with Movaz standard transponders. GSFC's Bill Fink then successfully conducted GSFC's first 10-Gbps tests across DRAGON between the L-Net's 10-GE-connected workstations at GSFC and MCLN, and then coast-to-coast across DRAGON and the inter-connected NLR-based WASH-STAR and CAVEwave lambdas between the L-Net 10-GE-connected workstations at GSFC and UCSD. A diagram illustrating the GSFC-UCSD coast-to-coast connection across DRAGON and the inter-connected NLR-based WASH-STAR and CAVEwave lambdas is provided. A news article (pdf, 60KB) summarily describing these connections is provided.
  • On September 27, 2005, Curt Tilmes summarized plans for the Service Oriented Atmospheric Radiances (SOAR) Project (pdf, 1MB), which will utilize a new high-speed link on the DRAGON between UMBC and GSFC's L-Net to interconnect a UMBC IBM BladeCenter cluster to a smaller IBM cluster at GSFC.
  • On February 21 and 22, 2006, the L-Net and DRAGON projects provided collaborative support to Northrop Grumman Corporation (NGC)'s TASC IRAD effort in High-Performance Networking Architectures. A summary of that effort's objectives and a live data transmission demo were presented by NGC's Robert Link et al during NGC TASC's by-invitation-only Open House. With L-Net and DRAGON collaboration, the demo included multiple parallel data transmissions from GSFC's 10-GE-connected chance5 network-test-workstation in MCLN across the L-Net-provisioned channel49 lambda on DRAGON respectively to 4 different 1-GE-connected compute nodes on GSFC's thunder2 cluster at GSFC in Greenbelt. To contrast the end-to-end throughput performance between chance5 and the thunder2 nodes when the data flows were across DRAGON, the same flows were also transmitted across a commodity Internet connection between those same end hosts. MRTG graphs of the transmit and receive interfaces of chance5, thunder2, and the channel49 lambda on DRAGON, illustrating their respective throughputs on 23Feb06, are available.
  • From mid-April until June 14, 2006, GSFC's Ben Kobler and the HECN Team assisted NGC in planning and readiness tests for using the L-Net's 10-Gbps lambda49 connection across DRAGON to MCLN and 10-GE connection with the NGC BICC Lab at Colshire in a set of NGC-led Wide Area Storage (WAS) demos. The WAS demos utilize features of McData's Eclipse 1620s and YottaYotta's GSX2400s in a multi-site configuration to show continuity of operations, disaster recovery, and high availability during a real-time video-playback application/scenario.
  • On June 14, 2006, NGC provided their first live demonstration of their WAS in a three-site configuration (pdf, 232KB).
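
The lambdas in this section are identified by ITU grid channel numbers (36, 39, 49, and later 26). As a rough reference, the sketch below converts channel numbers to center frequencies and wavelengths, assuming the common 100-GHz C-band convention in which channel n sits at (190.0 + n/10) THz; the actual Movaz/Finisar channel plan is not specified in these notes:

    # A minimal sketch, assuming the common 100-GHz C-band convention in
    # which channel n is centered at (190.0 + n/10) THz; the actual
    # Movaz/Finisar channel plan is not specified in these notes.
    C_M_PER_S = 299_792_458  # speed of light in vacuum

    def itu_channel(n):
        freq_thz = 190.0 + n / 10                            # assumed center frequency
        wavelength_nm = C_M_PER_S / (freq_thz * 1e12) * 1e9
        return freq_thz, wavelength_nm

    for ch in (26, 36, 39, 49):
        f, w = itu_channel(ch)
        print(f"channel {ch}: {f:.1f} THz ~ {w:.2f} nm")
    # e.g. channel 36 -> 193.6 THz ~ 1548.5 nm; channel 49 -> 194.9 THz ~ 1538.2 nm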

Transcontinental Network Part (i.e., the NLR and the GSFC 10-GE switch and workstations in the Level3 POP in McLean, VA)

  • Before June 30, 2004, with cooperation from ARC's NREN Project, the L-Net Project arranged GSFC membership in MATP (see Related Links), which made GSFC a first-year MATP member and avoided fee increases scheduled for new members joining MATP after its first year. Particularly because of GSFC's participation in DRAGON, GSFC's prospects for being able to experimentally use the 10-Gbps HOPI lambda are also in good shape.
  • During the week of November 1, 2004, with cooperation from the NLR (see Related Links) and the UCSD and UIC OptIPuter teams, Paul Lang, Mike Stefanelli (CSC), and Bill Fink enabled NASA's first use of the NLR by installing, in the NLR suite in Level3's POP in McLean, a Force10 E300 10-GE switch and two Pentium/Linux-based workstations, each with one 10-GE NIC, one quad-GE NIC, and one 1-GE NIC, connected with the NLR. On November 7, 2004 (just before the start of SC2004), GSFC's workstations in McLean transmitted "first ping" with similar UIC workstations in Chicago and in the NLR booth at SC2004 in Pittsburgh, using new NLR 10-Gbps wavelengths set up for this purpose.
  • During the week of November 8, 2004, Earth science data sets pre-provided by GSFC's Scientific Visualization Studio (SVS) were retrieved across the NLR in real time from UIC and UCSD OptIPuter servers in Chicago and San Diego, respectively, and from the GSFC servers in McLean, and displayed at SC2004 in Pittsburgh. A set of six photos, provided by Randall Jones (GST) of the SVS, showing GSFC's presence in the NLR booth with the OptIPuter-provided 15-screen tiled display cluster during SC2004 is provided. An informal announcement (pdf, 36KB) identifying NASA GSFC as one of the first 10 users of the NLR is also provided.
  • Post-SC2004 during November 2004, with cooperation from the NLR, the UCSD and UIC OptIPuter teams, and the VaTech-led MATP, Pat Gary arranged for the GSFC L-Net equipment described above to remain in the NLR suite in Level3's POP in McLean, Virginia. This has been to facilitate: 10-Gbps IP-based test data flows between workstations in McLean and clusters either at UIC or UCSD for network R&D; early checkout, with real user data flows, of NLR lambdas between the above-referenced sites even before the rest of NLR's Cisco routers/switches are deployed for Layer 3 and Layer 2 services/experiments; and early knowledge gained by users working with NLR and with one another.
  • On November 16, 2004, continuing plans for science-enablement via 10-Gbps lambda networking, UCSD's Larry Smarr et al submitted the "MAP Core Integration LambdaGrid Infrastructure" proposal to NASA's MAP NRA. Summaries (pdf, 6.2MB) of the science drivers and evaluators identified in that proposal are provided.
  • On December 2, 2004, NLR's Layer 1 Network Operations Center activated a cross-connect between what were previously two separate 10-Gbps lambda-based OptIPuter circuits into SC2004, one from the Chicago-based StarLight facility and the other from the McLean-based GSFC 10-GE switch/router/servers. The newly created cross-connected circuit NLR-WASH-STAR-10GE-22 directly connects the McLean-based GSFC 10-GE switch/router and servers with the StarLight facility. Additional details (pdf, 12KB) are available.
  • On December 9, 2004, using OptIPuter/University of Illinois at Chicago (UIC) addresses and static routes assigned and set up by Alan Verlo (UIC) and Linda Winkler (Argonne National Laboratory), from a McLean-based GSFC workstation/server Bill Fink pinged UIC's Force10 E1200 10-GE switch/router and some attached UIC workstations/servers at StarLight across the NLR-WASH-STAR-10GE-22 circuit. Between the McLean-based GSFC workstation/server and the Chicago-based E1200, round trip times (RTTs) averaged 18.8 milliseconds (msec) for 56-byte pings and 22.0 msec for 8192-byte pings in jumbo frames; RTTs of 56-byte pings with UIC's workstation/server averaged 16.9 msec. A diagram illustrating the then-current network connectivity (pdf, 12KB) is provided.
  • On December 15, 2004, using OptIPuter/UCSD addresses and additional static routes across the OptIPuter CAVEwave between Chicago and San Diego, from a McLean-based workstation/server Bill Fink pinged two of UCSD's Extreme Network Summit 400 10-GE switches/routers, a San Diego Supercomputer Center (SDSC) cluster, and a National Center for Microscopy and Imaging Research (NCMIR) cluster in San Diego. Between the McLean-based workstation/server and either of the two San Diego-based clusters, RTTs of 56-byte pings averaged 94.8 msec and required 2 and 3 router hops, respectively, over the entire path; an RTT-versus-fiber-length sanity check appears in a sketch following this list. An OptIPuter-provided diagram illustrating UCSD's local OptIPuter connections (pdf, 152KB) is available; the two Summit 400s and the clusters Fink pinged are shown in the upper half of that diagram.
  • During December 2004 and January 2005, various nuttcp-based end-to-end memory-to-memory throughput performance tests were conducted between GSFC's 10-GE-connected network test workstations in McLean and OptIPuter's 1-GE-connected clusters/workstations at StarLight/Chicago and UCSD, as indicated in the diagram provided. GSFC also made plans, indicated in the same diagram, to deploy 10-GE-connected network test workstations at StarLight/Chicago and UCSD.
  • During February through June 2005, nuttcp-based tests continued with varying results (now thought to depend largely on the versions of Linux used in the workstations tested). Some representative test results are provided.
  • On May 19, 2005, continuing plans for science-enablement via 10-Gbps lambda networking, GSFC's Ramapriyan et al submitted the "Brokering and Chaining Distributed Services and Data Using OptIPuter and the National Lambda Rail" proposal to NASA's ROSES NRA. A diagram (pdf, 128KB) illustrating the networking concepts identified in that proposal is provided.
  • On June 9, 2005, with the progress achieved to date, Pat Gary emailed GSFC's input to OptIPuter's Annual Report (pdf, 78KB) in response to a request from OptIPuter Co-PI Tom DeFanti.
  • On June 16, 2005, continuing plans for science-enablement via 10-Gbps lambda networking, CUNY's Ibrahim Habib et al submitted the "Enabling NASA Applications Across Heterogeneous High Performance Networks" proposal to NASA's NNH05ZDA001N-Applied Information Systems Research (a.k.a. ROSES:D3) NRA. A diagram (pdf, 36KB) illustrating the networking concepts identified in that proposal is provided.
  • On July 28, 2005, with the 10-Gbps lambdas across DRAGON completed as noted in the July 28, 2005 entry for the Regional Network section and its news article (pdf, 60KB), Bill Fink conducted 10-second-duration GSFC-UCSD nuttcp-enabled UDP-based flow tests in both directions. He measured greater than 5-Gbps in each direction and no packet losses. A diagram illustrating the GSFC-UCSD coast-to-coast connection across DRAGON and the inter-connected NLR-based WASH-STAR and CAVEwave lambdas is provided.
  • On July 29, 2005, Bill Fink conducted 15-minute-duration GSFC-UCSD nuttcp-enabled UDP-based flow tests in both directions. He measured greater than 5-Gbps in each direction and no-to-negligible packet losses. An MRTG graph of the flow across the NLR-based WASH-STAR lambda (pdf, 44KB) is provided. An MRTG graph of the flow across the DRAGON-based DWDM XFP-enabled ITU-channel49-compliant lambda (pdf, 44KB) is provided.
  • On August 5, 2005, Bill Fink simultaneously conducted two 15-minute-duration nuttcp-enabled UDP-based 4.5-Gbps flow tests, with one flow between GSFC-UCSD and the other between GSFC-StarLight/Chicago. This filled both the NLR/WASH-STAR and DRAGON/channel49 lambdas to 90% of capacity. Flows were also tested in both directions. He measured greater than 9-Gbps aggregate in each direction and no-to-negligible packet losses. A montage of MRTG graphs (pdf, 60KB) for the NLR/WASH-STAR and DRAGON/channel49 lambdas and the two network test workstations at GSFC used during these tests is provided.
  • On August 8, 2005, a demo of "21st Century National-Scale Team Science", involving UCSD and GSFC linked using OptIPuter technologies over a 10-Gbps lambda across the NLR and DRAGON networks, was presented in GSFC's SVS to NASA Associate Administrator of Science Alphonso Diaz and ~30 other attendees. The demo's Program Guide/Agenda (pdf, 564KB) is provided. A news release prepared by UCSD about the demo is provided. A partially representative set of GSFC photos (pdf, 4.5MB) taken during the demo is provided. A montage of amateur photos (pdf, 11.1MB) taken during the demo is provided.
  • On August 24, 2005, ARC's Dave Hartzell and Steve Shultz installed and began configuration setup of NASA Research and Engineering Network (NREN) 10-Gbps network equipment at GSFC. This equipment is planned to be interconnected locally with GSFC's 10-Gbps L-Net/Scientific and Engineering Network and transcontinentally with ARC across a lambda of the DRAGON and NLR multiwavelength optical networks. A montage of photos (pdf, 5.7 MB) of the installation is provided.
  • During September 12-14, 2005, L-Net rep Pat Gary participated in the Optical Network Testbeds 2 Workshop hosted at NASA/ARC. During that meeting, NREN Deputy PM and Engineering Group Lead Kevin Jones (ARC) provided the first "public" presentation of NREN's plans to upgrade to 10 Gbps via the NLR, with whom NREN contracted on August 31, 2005. GSFC's L-Net/SEN 10-Gig network is planned as NREN's first 10-Gbps end user site connection.
  • During September 26-29, 2005, the L-Net supported two "live" demonstrations requiring real-time data flows from GSFC to the iGrid 2005 Conference, which was hosted by the California Institute for Telecommunications and Information Technology at UCSD in San Diego and drew ~450 attendees. The L-Net-supported demonstrations were the US122/Dynamic_Provisioning_&_VLBI and US130/Real-Time_True-3D_Visualization exhibits, which were among a total of 49 from 20 countries that coordinated to accelerate the use of multi-10-Gbps international and national networks, to advance scientific research, and to educate decision makers, academicians, and industry researchers on the value of a new generation of connection-oriented network paradigms dubbed "Light Paths." For the US122/Dynamic_Provisioning_&_VLBI exhibit, the L-Net enabled real-time radio-antenna-acquired 512-Mbps IP-encapsulated data flows from the GSFC Geophysical and Astronomical Observatory (GGAO) to MIT/Haystack, where GGAO's data were correlated in real time with concurrently acquired data obtained and streamed from telescopes in Sweden, the Netherlands, Japan, and the United Kingdom. MIT/Haystack's description of this demonstration is available. The presentation given by J. Sobieski describing this demonstration is available. For the US130/Real-Time_True-3D_Visualization exhibit, the L-Net enabled 1-Gbps "live" high definition IP-encapsulated video from each of two stereoscopically-aligned HDTV cameras, set up in the HECN network lab, to be transmitted over the L-Net's 10-Gbps connections with the DRAGON and NLR networks for viewing at iGrid 2005 on Physical Optics Corporation's 35" x 35" holographic 3-D HDTV video display system, which does not require goggles or other special head gear for 3D viewing. MRTG graphs (pdf, 56KB) of the L-Net-supported data flows for iGrid 2005 are provided. A 5-page, 6-photos/page montage (pdf, 2.4MB) of "amateur" photos of selected aspects of the iGrid 2005 Conference taken by Pat Gary is available. A 1-page, 5-photos/page montage (pdf, 536KB) of photos of selected views of the two stereoscopically-aligned HDTV cameras in the HECN network lab supporting the US130/Real-Time_True-3D_Visualization exhibit, taken by Kevin Kranacs, is available. More comprehensive photos and information about iGrid 2005 are available at its website.
  • On September 29-30, 2005, L-Net rep Pat Gary participated in the annual meeting of the Global Lambda Integrated Facility (GLIF) and its Control Plane Working Group. Through L-Net's participation in the GLIF, GSFC-based users of the L-Net now can conduct up to 10-Gbps data flows internationally through GLIF's global Light Paths.
  • On October 3, 2005, the CISTO News Summer 2005 was released containing the L-Net related article "Large-Scale Team Science Demonstrated Over 10 Gbps Coast-to-Coast Network" by Jarrett Cohen (GST).
  • On October 4, 2005, Bill Fink and Paul Lang assisted the NREN in enabling their 10-GE connection across DRAGON/channel26 between NREN's Cisco 6506's at MCLN and GSFC.
  • On October 26, 2005, NREN's Mark Foster (ARC) tested the readiness for production use of NREN's new 1-Gbps end-to-end link between ARC and GSFC. Besides being 1-Gbps end-to-end, a key difference of this new link (versus NREN's present production link between ARC and GSFC, which traverses Qwest's ATM-based Layer 2 service at ~100 Mbps) is that it traverses the NLR's GE/VLAN-based Layer 2 service between NREN's Cisco 6506's in Sunnyvale and McLean. Using Bill Fink's nuttcp throughput performance testing software for test data generation and UDP transmission, Foster transmitted over 1.5 terabytes (TB) of data in each direction in 4 hours with single-direction throughputs of ~1 Gbps. An mrtg-generated timeline graph of the first few minutes of the traffic flow between Pittsburgh and McLean on NLR's Layer 2 service, measured by NLR's NOC, is provided.
  • On October 27, 2005, Bill Fink separately tested the readiness for production use of NREN's new 1-Gbps end-to-end link between ARC and GSFC, transmitting a total of 2 TB of data (1 TB simultaneously in each direction) with zero packet losses; a minimal sequence-numbering sketch of how such loss counts can be verified follows this list. Using Fink's nuttcp throughput performance testing software, the data were generated and UDP-transmitted in one approximately 2.5-hour-duration test with single-direction throughputs of ~1 Gbps.
  • On October 28, 2005, Bill Fink tested the readiness for production use of NREN's 10-GE link across DRAGON's ITU-channel26-compliant lambda between NREN's Cisco 6506's in McLean and GSFC, transmitting a total of 6 TB of data (3 TB in each direction) with zero packet losses. Using Fink's nuttcp throughput performance testing software, the data were generated and UDP-transmitted in three approximately 0.5-hour-duration tests (each flowing two 1-TB sets of test data either in the same or opposite directions) with individual-test aggregate simultaneous throughputs measured at ~10 Gbps. Mrtg-generated timeline graphs of the aggregate throughputs of the three test data transfers, measured at McLean and at GSFC, are provided.
  • On November 3, 2005, with assistance from GSFC's Bill Fink (606.1) and Paul Lang (ADNET), the ARC-based NASA Research and Engineering Network (NREN) project switched the routing of its production data flows between ARC's NAS supercomputer facilities and computers connected to GSFC's Scientific and Engineering Network (SEN). Such flows no longer use NREN's previous production pathway implemented over Qwest's ATM-based Layer 2 service, which provided only ~100 Mbps throughput performance for data flows between the NAS and SEN-connected computers at GSFC; instead they now use the 26Oct05-initiated 1-Gbps end-to-end pathway that traverses the NLR's GE/VLAN-based Layer 2 service between NREN's Cisco 6506's in Sunnyvale and McLean and then the DRAGON's ITU-channel26-compliant lambda between NREN's Cisco 6506's in McLean and GSFC. Data flows between ARC's NAS and computers connected to all other networks, including GSFC's Center Network Environment, remain unchanged. Hence the key result of this change is that computers connected to the SEN, particularly including those of the NCCS, can exchange data with those of the NAS, particularly including the Project Columbia supercomputer, at up to 1 Gbps, or ~10 times the throughput rate they could previously. Mrtg-generated timeline graphs showing the end of data flows with NAS on the SEN's former interface with NREN and the start of data flows with NAS on the SEN's new interface with NREN are provided.
  • During the week of November 14-18, 2005, as part of the NASA exhibit (booth 1810) at the SC|05 International Conference for High Performance Computing, Networking, Storage and Analysis held November 12-18, 2005 in Seattle, Bill Putman (610.3) demonstrated, for the NASA MAP'05 project, hurricane data sets exchanged between ARC's Project Columbia supercomputer and GSFC's NCCS across the recently-upgraded-to-1-Gbps NREN path across the NLR between ARC's NAS supercomputer facilities and computers connected to the SEN. Interactive views of the data shown by Putman are available. An mrtg-generated timeline graph (pdf, 52KB) of the "weekly" 30-minute average data rates of the traffic flows supported during SC|05 across the NREN between the NAS and the SEN is available.
  • During the week of November 14-18, 2005, as part of the Internet2 exhibit (booth 2435) at SC|05, for the international eVLBI project, DRAGON Principal Investigator Jerry Sobieski (UMCP), together with other DRAGON representatives including MIT/Haystack's Chester Ruszczyk, demonstrated real-time radio-antenna-acquired 512-Mbps IP-encapsulated data flows from the GSFC Geophysical and Astronomical Observatory (GGAO) to MIT/Haystack, where GGAO's data were correlated in real time with concurrently acquired data obtained and streamed from three other telescopes in Westford, Massachusetts; Onsala, Sweden; and Kashima, Japan. Mrtg-generated timeline graphs (pdf, 52KB) of the "weekly" 30-minute average data rates of the GGAO traffic flows supported during the three weeks before and during SC|05 are available. MIT/Haystack's description of essentially the same demonstration given at iGrid 2005 on September 26-29, 2005, is available. Internet2's November 16, 2005, I2-NEWS article covering this demonstration is available.
  • On February 1, 2006, to conduct initial user testing of NREN's new 10-GigE lambda (NLR-SUNN-WASH-10GE-92) on the NLR between Sunnyvale and McLean, Bill Fink and Paul Lang arranged for NLR NOC engineers to enable a client-side soft loop on the Cisco 15454 at Sunnyvale. Fink and Lang were then able to conduct nuttcp network performance testing across the McLean-to-Sunnyvale NLR path, which was then looped back to McLean, using the special configuration they planned and set up. They ran the nuttcp network performance test by sending one 4.5-Gbps UDP stream from chance4 at McLean to chance5 at McLean via the looped NLR-SUNN-WASH-10GE-92 circuit, and simultaneously sent a second 4.5-Gbps UDP stream from chance at GSFC to chance2 at GSFC, also traversing the looped NLR-SUNN-WASH-10GE-92 circuit (utilizing the 10-GigE DRAGON network between GSFC and McLean to reach the looped NLR circuit). The UDP transfers were sent from chance4 to chance5, and from chance to chance2, to ensure the data actually traversed the looped NLR-SUNN-WASH-10GE-92 circuit and not just the local jumper cable between the 2 ports on our McLean-based E300. This test filled the NLR-SUNN-WASH-10GE-92 10-GigE circuit to 90% of capacity in both directions simultaneously; the capacity-fill and data-volume arithmetic for these looped tests is sketched after this list. The test was run for 20 minutes, which resulted in over 1.28 TB of data being transmitted simultaneously in each direction. There were no errors reported on either the transmit or receive 10-GigE interfaces. MRTG graphs of the transmit and receive 10-GigE interfaces on our McLean-based E300 that connect to the NLR-SUNN-WASH-10GE-92 circuit are available for the testing period.
  • On February 2, 2006, Bill Fink conducted additional nuttcp network performance tests of the same type as described above across the McLean to Sunnyvale NLR path, which was then looped back to McLean, using the special configuration also described above. This test was run for 6 hours, which resulted in over 23.75 TB of data being transmitted simultaneously in each direction. There were no errors reported on either the transmit or receive 10-GigE interfaces. MRTG graphs of the transmit (pdf, 312KB) and receive (pdf, 316KB) 10-GigE interfaces on our McLean-based E300 that connect to the NLR-SUNN-WASH-10GE-92 circuit are available for the testing period. MRTG graphs of the 10-GigE interfaces (pdf, 76KB) of chance4, chance5, chance, and chance2 are available for the testing period.
  • On March 21, 2006, to conduct initial user testing of OptIPuter's new 10-GigE lambda (NLR-STAR-WASH-10GE-103) on the NLR between Chicago and McLean, Bill Fink and Paul Lang arranged for NLR NOC engineers to enable a client-side soft loop on the Cisco 15454 at Chicago. Fink and Lang were then able to conduct nuttcp network performance testing across the McLean-to-Chicago NLR path, which was then looped back to McLean, using the special configuration (pdf, 20KB) planned and set up by Fink and Lang. They ran the nuttcp network performance test by sending one 4.6-Gbps UDP stream from chance4 at McLean to chance5 at McLean via the looped NLR-STAR-WASH-10GE-103 circuit, and simultaneously sent a second 4.6-Gbps UDP stream from chance at GSFC to chance2 at GSFC, also traversing the looped NLR-STAR-WASH-10GE-103 circuit (utilizing the 10-GigE DRAGON network between GSFC and McLean to reach the looped NLR circuit). The UDP transfers were sent from chance4 to chance5, and from chance to chance2, to ensure the data actually traversed the looped NLR-STAR-WASH-10GE-103 circuit and not just the local jumper cable between the 2 ports on our McLean-based E300. This test filled the NLR-STAR-WASH-10GE-103 10-GigE circuit to 92% of capacity in both directions simultaneously. The test was run for 6 hours, which resulted in over 22.68 TB of data being transmitted simultaneously in each direction. There were no errors reported on either the transmit or receive 10-GigE interfaces, and no packets were lost or dropped. MRTG graphs (pdf, 80KB) of the transmit and receive 10-GigE interfaces on our McLean-based E300 that connect to the NLR-STAR-WASH-10GE-103 circuit are available for the testing period. MRTG graphs of the 10-GigE interfaces (pdf, 76KB) of chance4, chance5, chance, and chance2 also are available for the testing period, but were captured the following day, so only the data on the far left of the graphs applies.
  • On April 3, 2006, in collaboration with Dr. Robert Grossman (UIC) and the TeraFlow Testbed (TFT) project, the HECN Team connected a TFT-provided cluster of four dual-Opteron 265 servers to the L-Net's Force10 E600 10-GE switch at GSFC. Each TFT server is 10-GE connected via an Intel Pro1000 NIC to a TFT-provided SMC SMC8708L2 10-GE switch, which connects via 10-GE with the L-Net's Force10 E600 at GSFC. The TFT cluster at GSFC will conduct "teraflows" with other TFT clusters in Chicago, Illinois; Kingston, Ontario; Amsterdam, The Netherlands; Geneva, Switzerland; Tokyo, Japan; and London, England. The TFT cluster at GSFC will use the L-Net's 10-Gbps pathway across the L-Net's channel49 lambda on the DRAGON and the WASH-STAR lambda on the NLR to connect with the OptIPuter's Force10 E1200 10-GE switch at the StarLight facility in Chicago. TFT's network diagram, which shows GSFC's connection in the TFT, is available. GSFC's L-Net and its users are hoping to leverage TFT's connection into Tokyo to enable high performance access to Coordinated Enhanced Observing Period (CEOP) project data sets hosted at the University of Tokyo.
  • On April 4, 2006, in cooperation with OptIPuter's Alan Verlo (UIC) and after re-configuring from the special loop-back test configuration used on 21Mar06, Bill Fink set up and successfully tested a tagged 10-Gbps VLAN between StarLight's Force10 E1200 10-GE switch in Chicago and the L-Net's Force10 E300 10-GE switch in McLean across the OptIPuter's new NLR-STAR-WASH-10GE-103 10-GE lambda on the NLR. After additional network connections are made across DRAGON, this new OptIPuter 10-Gbps pathway will be used by the Calit2/OptIPuter and the J. Craig Venter Institute for the Community Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA) project, and for some telescience experiments with GSFC.
  • On June 21, 2006, in response to a request from OptIPuter Project Manager Maxine Brown (UIC), Pat Gary emailed GSFC's input (pdf, 64KB) to OptIPuter's FY2006 Annual Progress Report and FY2007 Program Plan, and received the reply from OptIPuter PI Larry Smarr: "I really appreciate your pro-active attitude!"
  • During the week of November 12-16, 2006, GSFC's High End Computer Network (HECN) Team supported four real-time high performance networking data flow demonstrations onto the show floor of the International Conference for High Performance Computing, Networking and Storage, a.k.a. SC2006, hosted in Tampa, FL. Further information (pdf, 612KB) about the projects supported, their SC06 demos, and the data flows across the relevant network infrastructure used is provided.
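
The ping round-trip times in the December 2004 entries can be sanity-checked against fiber propagation delay. The sketch below assumes a group velocity of roughly c/1.47 (about 204,000 km/s) in single-mode fiber and ignores switching and serialization delay; the actual route mileage of the WASH-STAR and CAVEwave paths is not given here:

    # Propagation-delay sanity check; assumes light travels at roughly
    # c/1.47 (~204,000 km/s) in single-mode fiber and ignores switching and
    # serialization delay.  Actual WASH-STAR/CAVEwave route mileage is not
    # given in these notes.
    V_FIBER_KM_PER_S = 299_792 / 1.47   # ~204,000 km/s

    def implied_fiber_km(rtt_ms):
        """One-way fiber length that pure propagation delay alone would imply."""
        return (rtt_ms / 1000 / 2) * V_FIBER_KM_PER_S

    print(implied_fiber_km(18.8))   # ~1900 km, McLean<->Chicago
    print(implied_fiber_km(94.8))   # ~9700 km, McLean<->San Diego via Chicago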
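
The looped-circuit tests of February and March 2006 load both directions of a 10-GigE lambda at once because each UDP stream travels out to the remote soft loop and back. The arithmetic below, illustrative only, reproduces the quoted fill percentages and approximate per-direction volumes from the stream rates and durations; the reported payload totals come in slightly under these line-rate ceilings because of protocol and framing overhead:

    # Line-rate arithmetic for the looped-circuit tests described above.
    def fill_and_volume(streams, gbps_each, minutes):
        load_gbps = streams * gbps_each                  # offered load per direction
        tb_moved = load_gbps / 8 * minutes * 60 / 1e3    # TB per direction
        return load_gbps / 10 * 100, tb_moved            # (% of 10-GigE, TB)

    print(fill_and_volume(2, 4.5, 20))    # Feb 1:  ~90% fill, ~1.35 TB per direction
    print(fill_and_volume(2, 4.5, 360))   # Feb 2:  ~90% fill, ~24.3 TB per direction
    print(fill_and_volume(2, 4.6, 360))   # Mar 21: ~92% fill, ~24.8 TB per direction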
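
Several entries above report UDP flows completing with zero packet losses. One minimal way to verify such a claim is to number every datagram at the sender and count gaps at the receiver, as sketched below; the port and datagram size are illustrative assumptions, and this is not the nuttcp implementation:

    #!/usr/bin/env python3
    # Minimal UDP loss-count sketch: number every datagram at the sender and
    # count gaps at the receiver.  The port and datagram size are illustrative
    # assumptions; this is not the nuttcp implementation.
    import socket
    import struct
    import sys

    PORT = 5201
    PAYLOAD = 8192            # jumbo-frame-sized payload (4-byte sequence + filler)

    def send(host, count=100_000):
        """Send `count` sequence-numbered datagrams, then an END marker."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        filler = bytes(PAYLOAD - 4)
        for seq in range(count):
            sock.sendto(struct.pack("!I", seq) + filler, (host, PORT))
        sock.sendto(b"END", (host, PORT))

    def receive():
        """Count received datagrams; report losses by comparing with sequence numbers."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        got, highest = 0, -1
        while True:
            data, _ = sock.recvfrom(65535)
            if data == b"END":               # loops forever if END itself is lost
                break
            got += 1
            highest = max(highest, struct.unpack("!I", data[:4])[0])
        print(f"received {got} of {highest + 1} datagrams "
              f"({highest + 1 - got} lost)")

    if __name__ == "__main__":
        receive() if len(sys.argv) == 1 else send(sys.argv[1])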