Activities of the SAACC Meeting April 21-22, 2020

This report documents the key updates from the AmLight SAACC Meeting held on April 21-22, 2020. The meeting gathered participants from universities, organizations, and research institutions in the USA, Latin America, and Europe; for the first time, the SAACC also included a participant from South Africa. The meeting comprised two sessions: Science Requirements & Activities Updates, and Providers Updates.

The Science Requirements & Activities Updates session opened with welcome remarks by the Co-Chairs (Julio Ibarra and Jeff Kantor) and an introduction, followed by presentations from AURA, NRAO, ALMA, and the Vera Rubin Observatory, and ended with open discussion and coordination. The Providers Updates session consisted of network update presentations, including AmLight-ExP, REUNA, RNP, RedCLARA, and Internet2, followed by open discussion and coordination.

The AmLight SAACC meeting participants hailed from the following universities, organizations, and research institutions:

  • Brazilian e-science/astronomy virtual institute (LIneA)
  • Brazilian National Research and Education Network (Rede Nacional de Ensino e Pesquisa – RNP)
  • Cerro Tololo Inter-American Observatory (CTIO)
  • CIARA at Florida International University (FIU)
  • Energy Sciences Network (ESnet)
  • European Research and Educational Network (GÉANT)
  • European Southern Observatory (ESO)
  • Fermilab (FNAL)
  • Florida LambdaRail (FLR)
  • French National Institute of Nuclear and Particle Physics (IN2P3)
  • Gemini Observatory-NOIRLab
  • Giant Magellan Telescope Observatory (GMTO)
  • Information Science Institute (ISI) at the University of Southern California
  • Internet2
  • Latin American Advanced Networks Cooperation (Cooperación Latino Americana de Redes Avanzadas – RedCLARA)
  • National Center for Supercomputing Applications (NCSA)
  • National Radio Astronomy Observatory (NRAO)
  • National Science Foundation (NSF)
  • National University Network of Chile (Red Universitaria Nacional – REUNA)
  • Next-Generation Very Large Array (ngVLA)
  • NSF’s National Optical-Infrared Astronomy Research Laboratory (NSF’s NOIRLab)
  • Simons Observatory
  • Tertiary Education and Research Network of South Africa (TENET)
  • Vera Rubin Observatory

There were 42 participants attending the first day of the meeting and 47 the second day. The meeting was organized into two sessions of presentations.

 

Program for the SAACC Meeting
Tuesday, April 21, 2020
11:00 – Welcome | Presentation
Session I: Science Requirements & Activities Updates

11:10 – Vera C. Rubin Observatory Construction, US-ELT (Jeffrey Kantor) | Presentation
11:30 – Vera C. Rubin Observatory Operations (Bob Blum) | Presentation
11:50 – NOIRLab (Mauricio Rojas) | Presentation
12:10 – NOIRLab-Gemini (Eduardo Toro) | Presentation
12:30 – Refreshment Break
13:00 – NRAO (David Halstead, Adele Plunkett) | Presentation
13:20 – GMTO (Mauricio Pilleux, Sam Chan) | Presentation
13:40 – Simons (Simone Aiola) | Presentation
14:00 – ngVLA (Rob Selina) | Presentation
14:20 – Open Discussion/Coordination

Wednesday, April 22, 2020
11:00 – Welcome

Session II: Providers Updates
11:10 – AmLight 1: International links (Jeronimo Bezerra) | Presentation
11:30 – AmLight 2: Performance monitoring (Renata Frez) | Presentation
11:50 – REUNA (Albert Astudillo) | Presentation
12:10 – RedCLARA (Luis Eliécer Cadenas) | Presentation
12:30 – Refreshment Break
13:10 – RNP (Michael Stanton, Eduardo Grizendi) | Presentation
13:30 – NCSA (Matt Kollross) | Presentation
13:50 – Internet2 (John Hicks, Dale Finkelson) | Presentation
14:10 – ESnet (Paul Wefel) | Presentation
14:30 – Open Discussion/Coordination
15:00 – Adjourn

 

List of Participants:

Day 1 (April 21, 2020):

  • Adele Plunkett (NRAO)
  • Adil Zahir (FIU-AmLight)
  • Albert Astudillo (REUNA)
  • Aluizio Hazin (RNP)
  • Andres Vinet (ESO)
  • Bob Blum (Rubin Observatory)
  • Claudia Inostroza (REUNA)
  • Cristian Silva (Rubin Observatory)
  • Dale Finkelson (Internet2)
  • David Halstead (NRAO)
  • Eduardo Toro (NOIRLab)
  • Edward Ajhar (NSF)
  • Giorgio Filippi (ESO)
  • Heidi Morgan (AmLight Co-PI)
  • Inder Monga (ESnet)
  • Jeferson Souza (RNP/LIneA)
  • Jeff Kantor (Rubin Observatory)
  • Jeronimo Bezerra (AmLight)
  • John Hicks (Internet2)
  • Julio Constanzo (Rubin Observatory)
  • Julio Ibarra (FIU-AmLight)
  • Kevin Thompson (NSF)
  • Luis Cadenas (RedCLARA)
  • Luiz da Costa (LIneA)
  • Marco Teixeira (RedCLARA)
  • Matt Zekauskas (Internet2)
  • Matthew Kollross (NCSA)
  • Mauricio Pilleux (GMTO)
  • Mauricio Rojas (NOIRLab)
  • Michael Stanton (RNP)
  • Nadine Neyroud (IN2P3)
  • Paul Wefel (ESnet)
  • Phil DeMar (Fermilab)
  • Renata Frez (AmLight/RNP)
  • Richard Hughes-Jones (GÉANT)
  • Rob Selina (NRAO)
  • Ronald Lambert (Rubin Observatory)
  • Sam Chan (GMTO)
  • Sergio Cofré (REUNA)
  • Simone Aiola (CCA)
  • Vasilka Chergarova (FIU-AmLight)
  • William O’Mullane (Rubin Observatory)

Day 2 (April 22, 2020):

  • Adele Plunkett (NRAO)
  • Adil Zahir (FIU-AmLight)
  • Albert Astudillo (REUNA)
  • Aluizio Hazin (RNP)
  • Andrey Bobyshev (Fermilab)
  • Chris Griffin (FLR)
  • Claudia Inostroza (REUNA)
  • Cristian Silva (Rubin Observatory)
  • Dale Finkelson (Internet2)
  • David Halstead (NRAO)
  • Eduardo Grizendi (RNP)
  • Eduardo Toro (NOIRLab)
  • Edward Ajhar (NSF)
  • Giorgio Filippi (ESO)
  • Heidi Morgan (AmLight Co-PI)
  • Inder Monga (ESnet)
  • Italo Valcy (FIU-AmLight)
  • Jeferson Souza (RNP/LIneA)
  • Jeff Kantor (Rubin Observatory)
  • Jeronimo Bezerra (AmLight)
  • John Hicks (Internet2)
  • Jorge Dupeyron (ESO)
  • Julio Constanzo (Rubin Observatory)
  • Julio Ibarra (FIU-AmLight)
  • Kate Robinson (ESnet)
  • Kevin Thompson (NSF)
  • Len Lotz (TENET)
  • Luis Cadenas (RedCLARA)
  • Luiz da Costa (LIneA)
  • Marco Teixeira (RedCLARA)
  • Matt Kollross (NCSA)
  • Matt Zekauskas (Internet2)
  • Mauricio Rojas (NOIRLab)
  • Michael Stanton (RNP)
  • Nadine Neyroud (CTAO)
  • Paul Wefel (ESnet)
  • Phil DeMar (Fermilab)
  • Ray Pasetes (Fermilab)
  • Renata Frez (AmLight/RNP)
  • Richard Hughes-Jones (GÉANT)
  • Rob Selina (NRAO)
  • Ronald Lambert (Rubin Observatory)
  • Sam Chan (GMTO)
  • Sergio Cofré (REUNA)
  • Simone Aiola (CCA)
  • Vasilka Chergarova (FIU-AmLight)
  • William O’Mullane (Rubin Observatory)

 

Science Requirements & Activities Updates

Vera Rubin Observatory

The Vera Rubin Observatory’s data production includes nightly data products (alerts, difference images, and catalogs), delivered with 60-second latency from camera readout to the US Data Facility, and accumulated annual data products, which are produced and distributed first in the US and then in France. Distributed computing supports the delivery of community-developed data products to the computing and storage Data Access Centers (the US Data Facility and IN2P3) using software that includes middleware, pipelines, algorithms, and tools.

The nightly data flows have a 60-second image delivery requirement from the Base site to the Archive site. Raw images are estimated at 24-30 TB/night (6.4 GB per image, 18-bit uncompressed) and are compressed at the Base site. The nightly data management traffic of up to 39 GB breaks down as follows: northbound flows include raw images, wavefront images, and raw calibration images; southbound flows include the DIAObject catalog, the DIASource catalog, and calibration images.
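As a rough back-of-envelope illustration (not part of the presentation), the quoted image size and latency budget already bound the bandwidth the Base-to-Archive path must sustain; the 5-second transfer window below is a hypothetical assumption:

    # Illustrative Python arithmetic on the quoted Rubin numbers.
    image_size_gb = 6.4   # raw image size (GB), from the presentation
    budget_s = 60         # end-to-end image delivery requirement (s)

    # Absolute floor if the entire 60 s budget were spent on transfer alone:
    print(f"{image_size_gb * 8 / budget_s:.2f} Gbps")   # ~0.85 Gbps

    # Readout and processing consume most of the budget; a hypothetical
    # 5-second transfer window implies a much higher sustained rate:
    window_s = 5
    print(f"{image_size_gb * 8 / window_s:.1f} Gbps")   # ~10.2 Gbps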

The non-nightly data flows and distributed computing include L2 calibration data products sent from the US Archive site to the Base site and from the US to the French archive site. Data management between the US and France is as follows: eastbound flows include raw images, calibration products, the engineering facility database, and half of the L2 data products (coadds, catalogs, and Science Data Quality Assessment – SDQA); westbound flows will include the other half of the L2 data products.

The Vera Rubin Observatory Long-Haul Network (LHN) Links for FY 2020 include:

Primary path:

  • Cerro Pachon to La Serena (40x10G and 2x10G AURA, and 10x10G non-LSST AURA)
  • La Serena to Santiago (1x100G REUNA)
  • Santiago to Sao Paulo (100G spectrum over AmLight)
  • Sao Paulo to Boca Raton (100G spectrum over AmLight)
  • Boca Raton to Miami (1x100G shared path)
  • Miami to Jacksonville (1x100G AMPATH/FLR shared)
  • Jacksonville to Atlanta (1x100G I2 shared AL2S)
  • Atlanta to Chicago (1x20G ESnet shared)
  • Chicago to Champaign (1x100G NCSA/ICCN)
  • Chicago to CC-IN2P3 France (2x10G, path not yet defined)
  • Chicago to Tucson (10G AmLight/I2)
  • Chicago to SLAC and other DOE facilities (10G ESnet)

Secondary path:

  • La Serena to Santiago (1x40G or 100G REUNA link)
  • Santiago to Panama to Miami (1x100G over AmLight/ANSP/RedCLARA)
  • Santiago to Sao Paulo to Miami (1x100G over AmLight/ANSP/RedCLARA)
  • Miami to Jacksonville (1x100G AMPATH/FLR shared)
  • Atlanta to Chicago (1x20G I2)
  • Chicago to Champaign (2x20G ESnet/I2 shared)

The LHN Links for FY 2022 include the following updates:

Primary path

  • Boca Raton to Atlanta (1x100G spectrum)
  • Atlanta to Chicago (2x100G ESnet dedicated)

Secondary path:

  • Boca Raton to Miami (1x100G shared diverse path)
  • Miami to Jacksonville (2x100G AMPATH/FLR shared)

Vera Rubin Observatory Operations

The Vera C. Rubin Observatory’s mission is to create a vast astronomical dataset and web-based analysis environment for unprecedented discovery of the deep and dynamic universe.

The Legacy Survey of Space and Time (LSST) will provide the community with the data to address some of the most fundamental questions in astrophysics, advance the field of astronomy, and engage the public in the discovery process.

The survey is scheduled to begin operations in 2022 (pending a COVID-19 rebaseline of the construction project) and will run for ten years, observing 40 billion objects, detecting 10 million transient events per night, producing approximately 20 TB per night plus annual data releases, and yielding a final catalog of 15 PB and 500 PB of image data. The telescope, the camera, and the auxiliary/calibration telescope are located at the Summit. Office space, meeting rooms, the remote control room, the computing facility, the Chilean Data Access Center (DAC), and a backup data archive are located at the Base Facility in La Serena.

Four organizations will manage the observatory from construction through the operations phase, including the NSF’s National Optical-Infrared Astronomy Research Laboratory (NSF’s NOIRLab), the SLAC National Accelerator Laboratory, and the US data facility, which will be connected with other international data facilities (e.g., France, UK). About 41 international (non-US, non-Chile) groups are seeking data rights for 580 PIs in return for in-kind contributions to Rubin and LSST.

NSF’s National Optical-Infrared Astronomy Research Laboratory (NSF’s NOIRLab)

NOIRLab is the NSF’s National Optical-Infrared Astronomy Research Laboratory. It comprises all of the NSF’s nighttime ground-based optical programs: Gemini, the Kitt Peak National Observatory (KPNO), the Cerro Tololo Inter-American Observatory (CTIO 4-m telescopes), the Community Science and Data Center (formerly the National Optical Astronomy Observatory – NOAO), and the Vera C. Rubin Observatory. The Rubin construction project will finish separately and is not part of NOIRLab. NOIRLab is a matrix organization in which engineers and scientists belong to central service pools and are matrixed to programs like Tololo, Gemini, or Rubin. The head of IT is in Chile, but the IT group includes people in Chile, Hawaii, and Arizona (the same applies to facility support and to education and public outreach).

The network backbone users include large tenants, such as Gemini[1], Victor Blanco[2], SOAR[3], the Vera Rubin Observatory[4], Las Campanas Observatory[5], and ALMA/NRAO[6], and smaller tenants such as SMARTS[7], PROMPT[8], GONG[9], ALO[10], WHAM[11], LCOGT[12], KASI[13], ASAS-SN[14], MEarth[15], Evryscope[16], and T80[17]. The original Harris microwave links have been removed; the current primary link is fiber, with a backup link over wireless equipment from vendors such as Cambium and Ubiquiti. The plans for 2020, once the COVID-19 situation is resolved, include a cooperation project between AURA and Entel[18] to provide new backup links from La Serena to Tololo and from La Serena to Pachon; installation is in progress, and the links should be up and running in the second half of 2020.

On the network backbone from La Serena to Santiago, REUNA has installed two 100 Gbps DWDM optical networks, and a 100 Gbps wave has been configured over them. REUNA provides a backup IP service with a capacity of 4 Gbps to add redundancy to the Vera Rubin 100 Gbps wave. This backup service will be upgraded to 40/100 Gbps by FY2023 but will be transitioned to supporting the Rubin Observatory at that time; no NOIRLab backup is currently baselined after that transition.

From Santiago to Miami, there are two paths via CenturyLink: a primary on the Pacific (west) side and a secondary backup on the Atlantic (east) side. A tertiary link over RedCLARA is used as backup, in addition to the Vera Rubin Observatory’s 100G from Chile to the US. Equipment, including connections to ISPs (data and voice), the border router, firewalls, middle routers, the core switch, fibers, call managers, servers, and patch panels, has been moved to the new Base Data Center (BDC) and is currently operational.

Project Lyra[19] is a feasibility study of a mission to the interstellar object ʻOumuamua, initiated in 2017 by the Initiative for Interstellar Studies (i4is). A 200 Mbps link to support the Lyra project was tested in July 2019 and is technically operational. The link serves as a backup for data movement between La Silla and Santiago and will also be a backup link for Tololo and Pachon in case of disaster. NOIRLab IT Operations (ITOps) will support Mid-Scale Observations (MSO), the Community Science & Data Center (CSDC), the Gemini Observatory, and the Vera Rubin Observatory in the future.

NOIRLab – Gemini Observatory

The Gemini Observatory’s mission is to advance our knowledge of the Universe by providing the international Gemini community with forefront access to the entire sky. The Gemini Observatory consists of two sites: Gemini North on Maunakea, Hawaii, and Gemini South on Cerro Pachon, Chile. The Gemini partners are the USA, Canada, Chile, Brazil, Argentina, and Korea.

Gemini ITS is now part of a single unit, the ITOps Department of NOIRLab. The new group is creating synergy and coordination among all the IT groups to deliver efficient and effective IT throughout the organization.

Gemini has four data centers. The Gemini North data centers are located at the Hilo Base Facility (HBF) in Hilo and at Maunakea Operations (MKO), at 4,200 m. The Gemini South data centers are located at the La Serena Base Facility (SBF) and at Cerro Pachon Operations (CPO), at 2,700 m. The South and North facilities are nearly identical.

The key use cases include:

  • High QoS for Base facility operations – the network is used for remote observing and instrumentation
  • High bandwidth for summit-to-base data transfer – currently a 4x10 Gb/s primary channel and a 300 Mb/s backup link
  • Multiple paths for high availability – including data center redundancy, fiber optics, and a microwave link
  • Cross-site coordination with low latency between Hilo and La Serena (~226 ms) and from La Serena to Tucson (~167 ms)
  • High reliability for cloud data archiving – a fast, low-cost observatory data archive

During 2019, next-generation firewalls were implemented, replacing old end-of-life equipment at four locations; bandwidth to the AURA border router was increased; high-availability usage was improved; and other new features were incorporated. Current NOIRLab projects include a centralized authentication and authorization service, an email, calendar, and collaboration project, a NOIRLab DNS project, VoIP system alignment, and centralized video conferencing.

Data transfer from Atacama Large Millimeter/Submillimeter Array (ALMA) to North America

ALMA is a multinational project with many partners and three ALMA Regional Centers (ARCs): North America (NRAO, Charlottesville, VA), Europe (ESO, Garching/Munich), and Asia (NAOJ, Mitaka/Tokyo, Japan). ALMA is the largest mm/submm telescope ever built; the interferometer combines signals from 66 antennas to form an image and can observe three projects at once. ALMA is operated in a “space mission” style, where the data is processed and archived at each ARC site (and in Chile). The current Cycle 7 observations began in October 2019, but ALMA has been shut down since March 22 and is not observing amid the COVID-19 pandemic. Data transport within Chile from ALMA to Santiago (2.5 Gb/s) now comprises fiber from ALMA to Calama, commercial fiber from Calama to Antofagasta, and EVALSO/REUNA from Antofagasta to Santiago; a redundant fiber loop via Argentina is planned. From Chile to Charlottesville (NAASC), the typical rate obtained during peak data transfer periods is 200-300 Mb/s, with bursts up to 600 Mb/s. The ALMA team is currently working on establishing network monitoring and improving its understanding of how the link performs under typical load (~1 TB/day). The estimated data volume for the next three years (including product size mitigation) is around 200-300 TB/year (raw data and products roughly equal).
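For scale, an illustrative calculation (not from the talk) shows what sustained rates the quoted daily and yearly volumes imply, compared with the observed transfer speeds:

    # Average rates implied by the quoted ALMA volumes (illustrative).
    def mbps(total_bytes, seconds):
        """Sustained rate in Mb/s needed to move total_bytes in seconds."""
        return total_bytes * 8 / seconds / 1e6

    DAY = 86_400
    YEAR = 365 * DAY

    print(f"~1 TB/day   -> {mbps(1e12, DAY):.0f} Mb/s sustained")    # ~93 Mb/s
    print(f"250 TB/year -> {mbps(250e12, YEAR):.0f} Mb/s sustained") # ~63 Mb/s
    # Both sit below the observed 200-300 Mb/s, which is why typical load is
    # manageable while bulk reprocessing motivates the 10 Gb/s request below.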

The ALMA team would like to establish a link with 10 Gb/s of available bandwidth (out of a 100 Gb/s pipe) within the next 1-2 years to improve transfer speeds to and from Chile for bulk reprocessing and to help with occasional large data and metadata transfers. Most of the new developments (e.g., the next-generation correlator) on a 5-10 year timescale can probably be accommodated without increasing the data rate significantly.

All 50 antennas of the 12-meter array have been used together, and at least some projects appear to have used 50 x 12-meter antennas plus some (~10) of the 7-meter antennas. ALMA does not formally offer science projects with all 66 antennas together, because the antennas are generally operated as three parallel arrays (the 12-meter interferometry array, the 7-meter array, and the total-power “single dish” mode). However, some tests have been done with ~64 antennas connected.

Giant Magellan Telescope Observatory (GMTO)

The GMTO[20] will be the largest telescope in the world when completed, 25 m in diameter and 62 m high. The cost of the telescope is $1,950 million, and first light is scheduled for 2029. The project’s research targets include exoplanets and their atmospheres, dark matter, distant objects, and the unknown. Currently, the site has completed residences with 92 rooms and a capacity of 228 people; wide roads are ready; hard-rock excavation for the telescope enclosure and auxiliary buildings is complete; and water and telescope/enclosure cooling utilities are in place.

The IT cyberinfrastructure for GMTO includes data centers in Pasadena (main) and Las Campanas (backup). The use of Amazon Web Services (AWS) is being considered, along with WiFi 6 and 5G. The IT communication strategy is currently under discussion.

Simons Observatory

The Simons Observatory (SO)[21] is a forthcoming polarization-sensitive Cosmic Microwave Background (CMB) experiment located in the high Atacama Desert of northern Chile, inside the Chajnantor Science Preserve at 5,200 meters (17,000 ft). Its goals are to study how the universe began, what it is made of, and how it evolved to its current state. The Atacama Cosmology Telescope (ACT)[22], POLARBEAR[23]/Simons Array, and the Cosmology Large Angular Scale Surveyor (CLASS)[24] are currently making observations of the CMB. Project construction began in 2017 and will finish in 2022. The Simons Foundation’s investment is $62.5M, with an additional $10M institutional commitment. The SO instrumentation consists of 70,000 dichroic detectors, one Large-Aperture Telescope (LAT), and three Small-Aperture Telescopes (SATs). Scientific observation will run from 2021 until 2026, with periodic and timely data releases delivering CMB maps, lensing maps, and catalogs to the community. The SAT survey will search for the cosmic signature of inflation (high-risk/high-reward science), while the LAT survey will study primordial perturbations, neutrino mass, relativistic species, reionization, dark energy, galaxy evolution, and transient events (covering the same sky at the same time as LSST). Transferring data from the summit site to the US is a key requirement for transient events.

The observatory control system for data management and the pipelines for data reduction and simulation are under development. The SO site will hold ~1 month of data (plus one copy) and move the data from the site to North America within 24 hours. At least three copies of the raw dataset (collocated with analysis centers) will be stored. The total 5-year survey data volume will be ~3 PB (~500 TB/year), corresponding to a data rate of 132 Mbps. Most of the computational reduction and simulation pipeline requires MPI+OpenMP[25], and atmospheric noise correlations often require matrix operations on large data volumes. For example, the LAT’s ~5,400 detectors read out for 15 minutes at 200 Hz yield ~4 GB per chunk, plus 300-megapixel maps and metadata; the SATs’ ~12,000 detectors read out for 2 hours at 30 Hz yield ~11 GB per chunk, plus small maps and metadata. The estimated computational cost to reduce one year of SO data is 3M CPU hours; time-domain simulations would require more.
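The quoted figures are internally consistent, as a quick check shows (assuming 4-byte samples and a 365-day year; illustrative only):

    # Cross-checking the quoted SO numbers (assumption: 4 bytes/sample).
    YEAR_S = 365 * 86_400

    # ~500 TB/year expressed as a continuous rate:
    print(f"{500e12 * 8 / YEAR_S / 1e6:.0f} Mbps")  # ~127 Mbps vs. 132 Mbps quoted

    # LAT chunk: ~5,400 detectors, 15 min at 200 Hz:
    lat_bytes = 5_400 * (15 * 60) * 200 * 4
    print(f"LAT chunk ~{lat_bytes / 1e9:.1f} GB")   # ~3.9 GB ("~4 GB/chunk")

    # SAT chunk: ~12,000 detectors, 2 h at 30 Hz:
    sat_bytes = 12_000 * (2 * 3_600) * 30 * 4
    print(f"SAT chunk ~{sat_bytes / 1e9:.1f} GB")   # ~10.4 GB ("~11 GB/chunk")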

From the SO site to the ALMA REUNA PoP, SO uses a fiber connection; a fully functional fiber connection from the SO site to the USA will come online along with science operations. The data will be transferred to the San Diego Supercomputer Center (SDSC) (full raw dataset, data reduction) and to the National Energy Research Scientific Computing Center (NERSC) (full dataset, data reduction, simulations); from NERSC and SDSC, the data will be copied to Princeton University (full dataset copy, data reduction). The primary path will be from the SO site to NERSC, with an alternative/backup to SDSC. A 1 Gbps connection between the ALMA PoP and NERSC has been tested (one-month performance >700 Mbps). The near-term goals are to test NERSC-to-Princeton-to-SDSC data transfers, automated transfers, and computing hardware at the University of Pennsylvania.

R&E network providers have a significant impact on the quality and novelty of the science that can be done. Having substantial computational power at the site is not cost-effective, and moving data from the site to the U.S. reliably has already been demonstrated.

Next Generation Very Large Array (ngVLA)

The Next Generation Very Large Array (ngVLA)[26] is a development project of the National Radio Astronomy Observatory (NRAO), a facility of the National Science Foundation (NSF) operated under cooperative agreement by Associated Universities, Inc. Inspired by dramatic discoveries from the Jansky VLA[27] and ALMA, the astronomy community has initiated discussion of a future large-area radio array optimized for imaging thermal emission down to milli-arcsecond (mas) scales, opening new discovery space from proto-planetary disks to distant galaxies. Engagement of the science community (Canada, Mexico, Japan, Germany, Netherlands, Taiwan), definition of the key science goals, and the system reference design began in 2015. Operations are expected to begin in 2034.

The frequency coverage will be 1.2-116 GHz. The main array will consist of 214 x 18 m offset Gregorian antennas, and the short baseline array of 19 x 6 m offset Gregorian antennas. The Long Baseline Array (LBA) will consist of 30 x 18 m antennas located across the continent, for baselines up to 8,860 km. The system will correlate all 244 array elements in real time, with up to 20 GHz of instantaneous bandwidth per polarization. The main array’s fiber-optic network consists of dedicated point-to-point fiber links for ~196 antennas in New Mexico, within a ~300 km radius of the core; the array will use R&E networks to connect elements beyond the inner stations. The LBA sites will be connected via fiber-optic networks, with leased fiber versus leased bandwidth currently under discussion. The data processing design includes storage of the raw visibilities at an average data rate of 8 GB/s and a peak of 128 GB/s, and researchers will be able to use “Science Ready Data Products”. The challenges discussed were cost-performance optimization, manufacturability, and reliability.
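To put those rates in network terms, here is an illustrative conversion (not from the presentation):

    # Converting the quoted ngVLA visibility rates into link terms.
    avg_rate_bps = 8e9 * 8      # 8 GB/s average -> bits per second
    peak_rate_bps = 128e9 * 8   # 128 GB/s peak  -> bits per second

    print(f"Average: {avg_rate_bps / 1e9:.0f} Gbps sustained")   # 64 Gbps
    print(f"Peak:    {peak_rate_bps / 1e12:.1f} Tbps")           # ~1.0 Tbps

    # Daily volume at the average rate:
    print(f"~{8e9 * 86_400 / 1e15:.2f} PB/day")                  # ~0.69 PB/day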

R&E Provider Updates

Americas Lightpaths Express and Protect (AmLight-ExP[28]) International Links

AmLight-ExP is a reliable, leading-edge infrastructure for research and education, with significant investments from the National Science Foundation (NSF award #1451018), the Academic Network of São Paulo (ANSP), the Rede Nacional de Ensino e Pesquisa (RNP), and the Association of Universities for Research in Astronomy (AURA).

The AmLight Protect 100G ring (Miami-Fortaleza, Fortaleza-Sao Paulo, Sao Paulo-Santiago, Santiago-Panama, and Panama-Miami) is operational. A 10G ring Miami-Sao Paulo-Miami and a 10G Miami-Santiago link are also in place for protection. The 100G and 10G rings are diverse, operating on multiple submarine cables. By September 2019, using the Monet cable system, the AmLight team had activated 2x100G from Boca Raton to Sao Paulo, 2x100G from Boca Raton to Fortaleza, and 2x100G from Sao Paulo to Fortaleza. Using new dark fiber from Boca Raton to Miami, 2x400 Gbps transponders were installed and 6x100G links were activated. By February 2020, the AmLight team had activated 1x100G between the South America Exchange point (SAX) in Fortaleza and the ZAOXI exchange point in Cape Town, using the SACS and WACS cable systems. The total network capacity is presently 1.2 Tbps. The network has been evaluated through several Supercomputing Conference capacity demonstrations and tests.

Plans for 2020 include activation of a 100 Gbps Miami-Jacksonville (Internet2) link and a 200 Gbps (RNP & RedCLARA) Sao Paulo-Santiago link (100 Gbps to support the Vera Rubin Observatory and 100 Gbps for AmLight users). For 2021, a 300 Gbps Boca Raton-Atlanta (ESnet) link is planned (100 Gbps to support the Vera Rubin Observatory, 100 Gbps for AmLight users, and 100 Gbps for the FABRIC testbed).

The AmLight data plane refresh includes an increase in the number of 100G ports for users and links, and the addition of new NoviFlow Tofino-based 100G switches as the new switching fabric (32x100G, 64x100G, and 32x400G port configurations), supporting SDN, programmable data planes, and In-band Network Telemetry (INT). The AmLight control plane refresh includes the new Kytos[29] SDN controller, focused on the Vera Rubin Observatory’s and AmLight’s needs, plus an integrated solution for intra-domain (SDN) and inter-domain (SDX) provisioning and INT.

AmLight-ExP Performance Monitoring updates

Network performance tools store historical data, support a large number of tests, and help identify potential points of improvement. The perfSONAR open-source toolkit is well known in the academic community, with more than 2,000 measurement points deployed. AmLight testing includes network throughput (TCP, 4-hour interval, 8 parallel streams), network packet loss (OWAMP, runs continuously), and network delay (OWAMP, runs continuously). Currently, there are 10G perfSONAR nodes installed in Miami, Panama, Santiago, Sao Paulo, La Serena (one for LSST and one for AURA), and Cerro Pachon. The perfSONAR MaDDash presents the performance data as a 2D visual grid, accessible at https://dashboard.ampath.net.

The perfSONAR tests can verify the behavior of international links before and after equipment replacements (e.g., the replacement of a Brocade switch with a Dell switch in Santiago). AmLight has also integrated the perfSONAR environment with its NMS (Zabbix[30]): Zabbix raises an alarm as soon as a perfSONAR test reports poor performance, providing early notice when network performance deteriorates. Such visibility matters at this scale: the Vera Rubin end-to-end path runs 10,703 miles (17,125 km) from La Serena to Champaign, IL, of which AmLight is responsible for 9,551 miles (15,281 km), and there are over 22 paths between Santiago and Atlanta and over 15 data centers along the path (over 20 cross-connects in the shortest path).
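A minimal sketch of that kind of integration is shown below, assuming a perfSONAR measurement archive with an esmond-style REST interface; the archive host, metadata key, and loss threshold are hypothetical, and a real deployment would feed the result into Zabbix rather than print it:

    # Hypothetical glue: poll a perfSONAR archive for recent one-way packet
    # loss and flag values that should raise a Zabbix alarm.
    import requests

    ARCHIVE = "https://ps-archive.example.net/esmond/perfsonar/archive"  # hypothetical host
    METADATA_KEY = "abc123"   # hypothetical key identifying one OWAMP test pair
    LOSS_THRESHOLD = 0.001    # assumed policy: alarm above 0.1% loss

    def recent_loss(window_s=900):
        """Return packet-loss-rate samples from the last window_s seconds."""
        url = f"{ARCHIVE}/{METADATA_KEY}/packet-loss-rate/base"
        resp = requests.get(url, params={"time-range": window_s}, timeout=30)
        resp.raise_for_status()
        return [point["val"] for point in resp.json()]

    if __name__ == "__main__":
        worst = max(recent_loss(), default=0.0)
        if worst > LOSS_THRESHOLD:
            # A real integration would push this into Zabbix (e.g., a trapper item).
            print(f"ALARM: packet loss {worst:.4%} exceeds {LOSS_THRESHOLD:.2%}")
        else:
            print(f"OK: worst loss in window is {worst:.4%}")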

Additionally, the AmLight team is adding In-band Network Telemetry (INT)[31], a framework designed to allow the collection and reporting of network state by the data plane itself. INT overcomes limitations imposed by legacy technology: it offers more metrics at finer granularity, sub-second data gathering (useful for microburst detection and for monitoring queue utilization at sub-second intervals), and a complete view of the network state along a flow’s path.

Future plans include adding perfSONAR nodes to the San Juan, PR and Fortaleza, Brazil AmLight sites, adding TENET’s perfSONAR node[32] to the AmLight MaDDash dashboard, creating more end-to-end tests to broaden the connectors’ perspective, and studying whether other test types available in the perfSONAR toolkit could improve daily operations.

REUNA

The Chilean Academic Network (REUNA) has been connecting organizations for more than 25 years and today links 37 institutions with over 300,000 students. Approximately 80% of the research done in Chile is carried out over REUNA’s network, which spans 8,500 km. REUNA manages 15 PoPs (7 of them with 100G capacity) located in the main cities. The goals of REUNA’s current 20-year project include upgrading the bandwidth capacity in northern Chile (Arica-Iquique-Antofagasta) from 1G to 10G and connecting the farthest institutions to the backbone, as well as upgrading Antofagasta-La Serena to 2x100G and La Serena-Santiago to a 100G fiber link. Enhancements on the southern backbone for 2021 include increased capacity to 100G (or multiple 10G) from Santiago to Temuco, 10G from Temuco to Puerto Montt, and new PoPs where necessary. The Fibra Óptica Austral (FOA) project is a Chilean government initiative to connect the southern part of the country. Low-orbit satellite connectivity is also being explored, given its greater capacity and lower latency compared with current satellite connections.

REUNA is installing a PoP at ALMA to provide connectivity to the observatories and telescopes located at Chajnantor. The Universidad Adolfo Ibáñez Data Observatory is a newly added institution.

Latin American Cooperation of Advanced Networks (RedCLARA)

The RedCLARA network was created in 2003 with the support of the European Union and the ALICE project; the ALICE2 project followed in 2008. In 2016, the BELLA project was funded as a collaboration between the EU and the Latin American R&E networks. BELLA’s goal is to provide long-term interconnecting infrastructure for the region and a direct link to Europe. BELLA consists of two sub-projects: BELLA-S, a long-term submarine spectrum link between Latin America and Europe, and BELLA-T, a Latin American terrestrial backhaul network with access to that spectrum. Currently, BELLA-S includes large-scale spectrum (45x37.5 GHz) acquired on the EllaLink cable, to be delivered by 2021. BELLA-T uses network infrastructure provided by Brazil, Chile, Colombia, and Ecuador, with an ongoing tender for five (spectrum) links for delivery in 2019.

Additional updates on BELLA-T include a newly signed 15-year contract for six channels (100 Gbps backup) from Buenos Aires to Santiago, RFS in 2020, at a cost of US$1.8M. Another newly signed contract covers two fibers for 20 years from Tulcán (Ecuador) to Ipiales (Colombia), at a cost of €60K. Final negotiation of a 12-year, $2.1M contract from Buenos Aires to Porto Alegre is underway and close to being signed, while negotiation of the Fortaleza-Barranquilla connection is pending funding. Several routing equipment contracts have been signed, and bidding for optical equipment is in process.

New capacity links are being implemented in Guatemala (3 Gbps), Honduras (3 Gbps), Nicaragua (6 Gbps), Costa Rica (5 Gbps), and Mexico (3 Gbps). An ongoing conversation with the Mexican government, the IDB, and the European Commission seeks funding for optical rings and a new backbone agreement with a total cost of $25M. To improve the connection from South to Central America, RedCLARA is looking to connect Panama to Colombia under a 12-year, $7M contract, with possible multiple paths to Miami and Panama. By the end of 2020, a 10 Gbps connection from Peru to Bolivia will be in place.

In September 2019, RedCLARA and the Advanced Computing System of Latin America and the Caribbean (SCALAC) signed a Memorandum of Understanding (MoU) committing them to establish cooperation guidelines that enhance access to high-performance computing (HPC) resources in the region. The new RedCLARA board president, Eduardo Grizendi (RNP, Brazil), was elected at the beginning of 2020.

Brazil’s academic network – Rede Nacional de Ensino e Pesquisa (RNP)

RNP continues to support the upgrade activities on the international links of the AmLight Express and Protect network rings. The RNP backbone upgrade to 100G is underway and will be completed in 2022. The expansion of the Brazilian R&E network is based on agreements with several electrical power companies: CHESF in 2016 for the Northeast region, Furnas in 2017 for the Southeast & Midwest region, Furnas and Eletrosul in 2018 for the South & Midwest region, and Taesa, Telebras, and regional ISPs (swaps) in 2019 for the North & Midwest region, plus a collaboration with the South America Exchange (SAX) point in Fortaleza and Angola Cables in 2018. Alternative dark fiber routes are being discussed as an option for the South region of Brazil, along with a bidding process for swapping optical paths.

New patterns of international traffic are beginning to emerge in the Atlantic Ocean. Several southern transatlantic cable systems have been installed since the ATLANTIS-2 cable system connected Brazil, Argentina, West Africa, and Europe in 2000. The South Atlantic Cable System (SACS), from Fortaleza, Brazil to Sangano, Angola, and the South Atlantic Inter Link (SAIL), from Fortaleza, Brazil to Kribi, Cameroon, have been ready for service (RFS) since 2018. EllaLink, from Fortaleza, Brazil to Sines, Portugal, will be RFS in 2021.

Current collaborations include the construction of the Vera Rubin Observatory, for which RNP will provide connectivity between Sao Paulo and Santiago in exchange for several 37.5 GHz “slots” on the Monet cable between Florida and São Paulo until 2032. Additionally, RNP collaborated on the AmLight-SACS project interconnecting the US, Brazil, and Africa.

National Center for Supercomputing Applications (NCSA)

NCSA is a unit of the University of Illinois at Urbana-Champaign and one of the original NSF-funded HPC centers, established in 1986. NCSA provides HPC resources to researchers nationwide through a variety of NSF-funded grants and private funds (e.g., XSEDE, FABRIC, CILogon). Currently, NCSA runs approximately 10-12 clusters, ranging from hundreds of CPU cores to close to 1,000,000 CPU and GPU cores. NCSA accommodates a wide range of science domains, such as medicine (e.g., genome sequencing), industry (e.g., fluid dynamics, modeling), satellite geography mapping (e.g., DoD, U of Minn), and general science (e.g., virus, tornado, and galaxy modeling).

NCSA currently operates 420G of combined WAN connectivity to the major research networks in Chicago via the Inter-Campus Communications Network (ICCN)[33]. NCSA has allocated over 250 servers to the Rubin Observatory, connected at 10G or 40G; the servers include Kubernetes clusters, DTN nodes, a DAQ test stand, transfer nodes (forwarders), and Slurm. Rubin Observatory storage currently stands at 6.5 PB, with all storage nodes connected at 40G, to be expanded to 100G later in 2020.

Planned upgrades for 2020 include the replacement of both exit routers (currently MX960s). On the Rubin Observatory side, the Rubin core was replaced with a multi-100G chassis switch that will eventually support 100x100 Gbps ports; future top-of-rack (TOR) switches will all connect at 100 Gbps, and all storage nodes will be 100G. A dedicated wave from NCSA to ESnet will be configured for prompt processing.

Internet2 (I2)

The I2 Next Generation Infrastructure (NGI) program is a full set of activities to review and update the services, value, and supporting technology of the Internet2 infrastructure portfolio (and its relationships in the larger ecosystem). The program includes service and service-model infrastructure upgrade projects and new features driven primarily by software and system virtualization. The program’s goals are to support data-intensive research, software-driven infrastructure, and cloud for research & administration, to readily enable ecosystem-wide solutions, and to reset economies of scale.

I2 Cloud Services provide a resilient national footprint for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, campus networks, and other collaborators, along with an automated Cloud Connect portal (API provisioning at L2 & L3). The Rapid Private Interconnect (RPI) offers 10G and 100G options, and perfSONAR support for the cloud is also provided.

The I2-PX peering capacity grew from 980 Gbps in 2019 to over 3,110 Gbps in 2020. The actions taken to achieve this growth include lifting cap restrictions, increasing peering capacity, creating a new portal, introducing I2-RUSH Esports[34] peers, and holding Mutually Agreed Norms for Routing Security (MANRS)[35] office hours. Infrastructure improvements include new implementations of Segment Routing, Ethernet VPN (EVPN), a standards-based (MP-BGP) control plane, simplified MPLS L2 tunnels, and NGI optical upgrades. Additionally, I2 provides community support for telemetry and Open Science Grid cache services.

To simplify operations, coordination, and service activities across the Atlantic and Pacific exchange points, an initiative between PacificWave and Internet2 is in place. The goal is to closely align operations, capabilities, and services at MANLAN, WIX, and PacificWave (e.g., 100G interconnects and shared backup paths).

Energy Science Network (ESnet)

ESnet is the US Department of Energy’s high-performance network. Its mission is to provide a science-network user facility designed to accelerate scientific research and discovery; its vision is scientific progress entirely unconstrained by the physical location of instruments, people, computational resources, or data. The yearly aggregate traffic carried by ESnet in FY 2019 was over 1,078 PB, and ESnet traffic increases by about 10x every four years.
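That growth rule compounds quickly, as the illustrative projection below shows (it simply extrapolates the quoted FY2019 figure; actual traffic will differ):

    # "10x every four years" expressed as an annual factor and projected
    # forward from the quoted FY2019 aggregate (illustrative only).
    traffic_pb = 1_078                 # FY2019 aggregate, PB/year
    annual_factor = 10 ** (1 / 4)      # ~1.78x growth per year

    print(f"Implied annual growth: {annual_factor:.2f}x")
    for year in range(2020, 2024):
        traffic_pb *= annual_factor
        print(f"FY{year}: ~{traffic_pb:,.0f} PB/year projected")
    # FY2023 lands near ~10,780 PB/year, an order of magnitude above FY2019:
    # the capacity pressure motivating the ESnet6 upgrade described below.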

The next-generation ESnet6 includes upgrades to the optical core (L0 & L1, the dedicated optical transmission medium), a low-touch service-edge packet core (L2 & L3: Ethernet switching, MPLS, and IP routing), and a high-touch service edge (L3 and above). The high-touch service edge includes QoS, automated management and dynamic creation of user-requested services, per-flow monitoring, and high-speed per-packet filtering and forwarding to enforce security policies, with programmable interfaces to support new and emerging Software-Defined Networking (SDN) functions and the possible use of AI for automated error detection and flow characterization.

The current version, ESnet5.5, includes the existing ESnet packet layer plus optical-core upgrades to the transponders, providing more capacity (400G client-side interfaces) and optimized circuits (~11 Tbps of circuits).

The optical core is a Layer 1 open line system that decouples line equipment from transponders, with no restrictions on using components from other vendors. It provides point-to-point Ethernet circuits between backbone routers and service-edge devices and within the backbone, using Dense Wavelength Division Multiplexing (DWDM) to provide static, unprotected, high-capacity links to the packet core (L2 and up, using SR-MPLS and a commercial Path Computation Engine (PCE) for managing overlays). However, due to current COVID-19 restrictions, the ESnet6 implementation schedule has been interrupted.

ESnet’s key implementation strategies include purchasing commercial off-the-shelf hardware and software (developing from the ground up only when required) and relying on subcontractors for installation and on colocation operators’ staff for logistics and physical installations, all coordinated by ESnet staff.

[1] Gemini Observatory http://www.gemini.edu/

[2] Victor Blanco Telescope http://www.ctio.noao.edu/noao/node/9

[3] Southern Astrophysical Research Telescope (SOAR) http://www.ctio.noao.edu/soar/

[4] Vera Rubin Observatory https://www.lsst.org/

[5] Las Campanas Observatory http://www.lco.cl/

[6] Atacama Large Millimeter/Submillimeter Array (ALMA) https://www.almaobservatory.org/en/home/

[7] Small and Moderate Aperture Research Telescope System (SMARTS) http://www.ctio.noao.edu/noao/node/10

[8] PROMPT-Chile https://skynet.unc.edu/introastro/ourtelescopes/

[9] Global Oscillation Network Group (GONG) https://gong.nso.edu/

[10] Andes Lidar Observatory (ALO) http://lidar.erau.edu/

[11] Wisconsin H-Alpha Mapper (WHAM) http://www.astro.wisc.edu/wham/description-technical.html

[12] Las Cumbres Observatory Global Telescope Network (LCOGT) https://lco.global/

[13] Korea Astronomy and Space Science Institute (KASI) https://www.kasi.re.kr/eng/pageView/88

[14] All Sky Automated Survey for SuperNovae (ASAS-SN) http://www.astronomy.ohio-state.edu/asassn/index.shtml

[15] MEarth https://www.cfa.harvard.edu/MEarth/Telescopes.html

[16] EvryScope https://evryscope.astro.unc.edu/

[17] JAST/T80 Telescope http://www.j-plus.es/news/telescope

[18] Empresa Nacional de Telecomunicaciones S.A. is the largest Chilean telecommunications company: https://entel.cl/

[19] Project Lyra, a Mission to Chase Down that Interstellar Asteroid https://www.universetoday.com/137960/project-lyra-mission-chase-interstellar-asteroid-1/

[20] Giant Magellan Telescope Observatory https://www.gmto.org/

[21] Simons Observatory https://simonsobservatory.org/

[22] Atacama Cosmology Telescope ACT (2007-2022): 1 large-aperture telescope. https://act.princeton.edu/

[23] POLARBEAR PB/SA (2012-2022): 3 small-aperture telescopes. https://cosmology.ucsd.edu/Polarbear/

[24] Cosmology Large Angular Scale Surveyor (CLASS) (2016- ): 2 small-aperture telescopes https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20140017646.pdf

[25] MPI (Message Passing Interface) and OpenMP https://stackoverflow.com/questions/32464084/what-are-the-differences-between-mpi-and-openmp

[26] Next Generation Very Large Array (ngVLA) https://ngvla.nrao.edu/system/media_files/binaries/130/original/ngVLA-Project-Summary_Jan2019.pdf?1548895473

[27] Karl G. Jansky Very Large Array https://science.nrao.edu/facilities/vla/

[28] NSF Award #ACI-1451018 – IRNC: Backbone: AmLight Express and Protect (ExP), https://www.nsf.gov/awardsearch/showAward?AWD_ID=1451018&HistoricalAwards=false

[29] Kytos SDN Platform https://kytos.io/

[30] Zabbix https://www.zabbix.com/

[31] AmLight INT https://www.amlight.net/?page_id=3525

[32] Tertiary Education & Research Network of South Africa (TENET) https://www.tenet.ac.za/

[33] Inter-Campus Communications Network (ICCN) https://iccn.illinois.edu/

[34] Esports infrastructure requirements: https://meetings.internet2.edu/2019-technology-exchange/detail/10005493/

[35] Mutually Agreed Norms for Routing Security (MANRS) https://www.internet2.edu/communities-groups/security/manrs/