The NextGen track is designed to equip participants with a thorough understanding of NextGen, spanning fundamental concepts to advanced experimental work. Over the course of three days, attendees will explore the framework's core principles, tools, and experimental elements.
NextGen Workshops
CIROH Cloud Infrastructure and Case Studies
(Cross-listed with the Cross-Cutting Track)
Day 1 Session 1
Arpita Patel
James Halgren
Purushotham Bangalore
Jeff Carver
The objective of this workshop is to delve into the P29 case studies conducted on CIROH Cloud for NextGen-related applications. Attendees will receive an overview of the CIROH Cloud and on-premise infrastructure and glean insights from practical cloud and on-premise case studies. Please note that this is not a hands-on workshop; it focuses on presentations and understanding the CIROH IT infrastructure. PIs are encouraged to join this workshop. The session will also cover CIROH DocuHub (docs.ciroh.org). Existing PIs using cloud or on-premise resources for their projects will each present for 5-10 minutes.
The NextGen Hydrofabric: What is it, how to get it, and how to make your own?
(Cross-listed with the Hydroinformatics Track)
Day 1 Session 1
Mike Johnson
Participants will be introduced to the NextGen hydrofabric, how it is created, and how to subset model domains for their own experiments. Part one will focus on describing the data model, attributes, and concepts of the hydrofabric and will walk users through subsetting a basin of their own from the official NextGen data products. Part two will walk users through the creation of the NextGen hydrofabric and the tools available to make their own NextGen-compliant hydrofabric for experimentation and application.
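For orientation, the sketch below shows the kind of subsetting operation part one walks through, using geopandas against a locally downloaded hydrofabric GeoPackage. The layer name, column name, and identifiers are assumptions made for illustration; the official subsetting tools covered in the workshop are the recommended route.

```python
# Minimal sketch (not the official tooling): pull a few divides out of a
# local hydrofabric GeoPackage. Layer name, column name, and IDs are
# placeholders and may differ from the actual data model.
import geopandas as gpd

GPKG = "conus_nextgen.gpkg"        # placeholder path to a hydrofabric product
KEEP = {"cat-123", "cat-456"}      # placeholder divide identifiers

divides = gpd.read_file(GPKG, layer="divides")
subset = divides[divides["divide_id"].isin(KEEP)]
subset.to_file("subset.gpkg", layer="divides", driver="GPKG")
print(f"Wrote {len(subset)} divides to subset.gpkg")
```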
Get your model ready for NextGen with BMI
(Cross-listed with the Cross-Cutting Track)
Day 1 Session 2
Keith Jennings
Nels Frazier
The Next Generation Water Resources Modeling Framework (NextGen) uses the Basic Model Interface (BMI) to control model run time and pass data from one model to another. Although NextGen is model-agnostic, any model plugged into the framework must have a complete, functional BMI implementation. In this workshop, we will review the basics of how NextGen utilizes BMI, detail its most important functions, and demonstrate how to take a simple Python model from its initial, non-BMI-compliant state to its finished, NextGen-ready state with BMI. We will also discuss the importance of modularization and separation of concerns so you can apply the lessons from this workshop to your own model.
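As a preview of the hands-on portion, here is a minimal sketch of a toy Python model exposing a few BMI-style control and data functions. A real NextGen-ready model would subclass the Bmi abstract base class from the bmipy package and implement the full interface; the model logic and variable names here are purely illustrative.

```python
# Toy model exposing a handful of BMI-style functions. A complete
# implementation would subclass bmipy.Bmi and provide the full interface
# (variable metadata, grids, setters, etc.).
import numpy as np


class ToyBmiModel:
    def initialize(self, config_file: str = "") -> None:
        # A real model would parse config_file; values are hard-coded here.
        self._storage = np.array([10.0])   # mm
        self._k = 0.1                      # recession coefficient per step
        self._dt = 3600.0                  # seconds
        self._time = 0.0

    def update(self) -> None:
        # Advance the model state by one time step.
        self._storage -= self._k * self._storage
        self._time += self._dt

    def get_current_time(self) -> float:
        return self._time

    def get_value(self, name: str, dest: np.ndarray) -> np.ndarray:
        # BMI returns data through a caller-supplied buffer.
        if name == "storage":
            dest[:] = self._storage
            return dest
        raise KeyError(f"unknown variable: {name}")

    def finalize(self) -> None:
        self._storage = None


# Minimal driver, standing in for the framework's control loop.
model = ToyBmiModel()
model.initialize()
buf = np.empty(1)
for _ in range(3):
    model.update()
    print(model.get_current_time(), model.get_value("storage", buf)[0])
model.finalize()
```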
NGIAB, or NextGen In a Box, is a containerized version of the Next Generation Water Resources Modeling Framework (NextGen). This workshop aims to provide participants with a comprehensive understanding of NextGen and how to effectively run NGIAB. Participants will learn about the methodologies and technologies involved, as well as where to find documentation for running NGIAB with Docker and Singularity.
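For context, running NGIAB typically amounts to mounting a prepared data directory into the container and launching the framework. The sketch below shows that pattern via Python's subprocess module; the image name and mount paths are placeholders, and the NGIAB documentation provides the canonical commands and helper scripts.

```python
# Illustrative only: launch an NGIAB container against a prepared data
# directory. Image name and mount paths are placeholders; consult the
# NGIAB documentation for the actual commands and helper scripts.
import subprocess
from pathlib import Path

data_dir = Path("~/ngen-run").expanduser()   # preprocessed forcings + configs
image = "example/ngiab:latest"               # placeholder image tag

subprocess.run(
    ["docker", "run", "--rm", "-v", f"{data_dir}:/data", image],
    check=True,
)
```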
To prepare simple baseline NextGen framework configurations for exploratory simulation, and to manage complex, longer-term simulations for deep analysis in an efficient and reproducible fashion, we have created a pair of complementary, community-accessible tools: NextGen_data_preprocessor and ngen-datastream. These tools automate the process of collecting and formatting input data for NextGen, orchestrating the NextGen run through NextGen In a Box (NGIAB), and handling outputs for these two different but related cases. The preprocessor is designed for simple, one-off explorations. The datastream supports more in-depth metadata services that track configuration changes across a global network of NextGen runs; these metadata also inform operating cost through compute optimization. The datastream is designed to scale within cloud-based HPC architecture while still being deployable on a laptop.
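To make the metadata idea concrete, the sketch below fingerprints a run's configuration directory so that any configuration change yields a different hash, which is the kind of signal a datastream-style service can use to track runs. This illustrates the concept only; it is not ngen-datastream's actual implementation.

```python
# Concept illustration (not ngen-datastream itself): fingerprint a run's
# configuration directory so configuration changes can be detected and
# tracked across many NextGen runs.
import hashlib
from pathlib import Path


def config_fingerprint(config_dir: str) -> str:
    digest = hashlib.sha256()
    for path in sorted(Path(config_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()


# Runs with identical configurations share a fingerprint; any edit to a
# config file produces a new one.
print(config_fingerprint("./ngen-run/config"))
```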
Building and Executing Cloud Workflows to Support NGEN Modeling Applications
Day 3 Session 1
Tony Castronova
Irene Garousi-Nejad
This workshop introduces scientists to a cloud-based workflow tool for preparing, configuring, executing, and archiving model simulations in the cloud. It builds on prior workshops and demonstrates how CIROH-developed tools, libraries, and datasets can be leveraged to create reproducible data and modeling workflows. The workshop will introduce participants to the Argo workflow engine and will consist of hands-on activities, including model input data pre-processing, model simulation, and model output post-processing. Participants will learn how to leverage the cloud for working with existing NGEN tools and software libraries, as well as best practices for sharing data and models within CIROH.
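For orientation only, the sketch below lays out the three stages the hands-on activities cover (pre-processing, simulation, post-processing) as a plain Python chain; in the workshop, each stage would instead become a step in an Argo workflow so it can run and scale in the cloud. Function names and paths are hypothetical.

```python
# Plain-Python sketch of the pipeline shape only; in Argo, each function
# below would become a workflow step. Names and paths are hypothetical.
from pathlib import Path


def preprocess(raw_dir: Path, run_dir: Path) -> Path:
    run_dir.mkdir(parents=True, exist_ok=True)
    # ... build forcings and model configuration from raw inputs ...
    return run_dir


def simulate(run_dir: Path) -> Path:
    # ... launch the NGEN simulation against run_dir ...
    return run_dir / "outputs"


def postprocess(output_dir: Path) -> None:
    # ... summarize and archive results for sharing ...
    print(f"archived results from {output_dir}")


postprocess(simulate(preprocess(Path("raw"), Path("run"))))
```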
Day 3 Session 2
Matt Denno
Katie van Werkhoven
Sam Larmont
In this workshop, attendees will complete an end-to-end evaluation of multiple hydrologic model formulations, comparing a baseline (retrospective 3.0) to two other research formulations as they relate to observations. This will include processing and formatting model output, exploring the model output, setting up a TEEHR database, adding attributes and user-defined fields, building metric queries, and building visualizations that compare the performance of each formulation to the baseline. The TEEHR documentation will be referenced heavily.
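As a preview of the kind of comparison the workshop builds with TEEHR, the sketch below scores two hypothetical formulations against observations using the Kling-Gupta Efficiency in plain pandas/numpy. It does not use the TEEHR API, and the column names and values are made up.

```python
# Plain pandas/numpy illustration of scoring model formulations against
# observations with Kling-Gupta Efficiency (KGE). Not the TEEHR API;
# column names and values are invented for the example.
import numpy as np
import pandas as pd


def kge(sim: pd.Series, obs: pd.Series) -> float:
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)


df = pd.DataFrame(
    {
        "obs": [1.0, 2.0, 4.0, 3.0, 2.0],
        "baseline": [1.1, 2.2, 3.8, 2.9, 2.1],
        "formulation_a": [0.8, 1.5, 4.5, 3.5, 1.7],
    }
)

for column in ("baseline", "formulation_a"):
    print(column, round(kge(df[column], df["obs"]), 3))
```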
An introduction to using the Actor Model of concurrent computation to parallelize hydrological simulations and increase their fault tolerance.
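As a rough illustration of the pattern (not the implementation used with NextGen), the sketch below builds a minimal actor in Python: each actor owns its state, is driven only by messages placed in its mailbox, and runs alongside other actors, so independent catchment simulations can proceed concurrently.

```python
# Minimal actor sketch: each actor owns private state and reacts only to
# messages in its mailbox, so many actors (e.g., one per catchment) can
# run concurrently without shared state. Illustrative only.
import queue
import threading


class CatchmentActor:
    def __init__(self, name: str):
        self.name = name
        self.storage = 10.0                       # private state, never shared
        self.mailbox: queue.Queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self) -> None:
        while True:
            message = self.mailbox.get()
            if message == "stop":
                break
            if message == "step":
                self.storage *= 0.9               # toy recession step
                print(f"{self.name}: storage={self.storage:.2f}")

    def send(self, message: str) -> None:
        self.mailbox.put(message)

    def join(self) -> None:
        self._thread.join()


actors = [CatchmentActor(f"cat-{i}") for i in range(3)]
for _ in range(2):                                # advance each catchment two steps
    for actor in actors:
        actor.send("step")
for actor in actors:
    actor.send("stop")
    actor.join()
```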