AMDADOS: Adaptive Meshing and Data Assimilation for the Deepwater Horizon Oil Spill (IBM)

The Deepwater Horizon oil spill is the largest accidental spill in the history of the petroleum industry. The BP blow-out lasted for 87 days, releasing approximately 4.9 million barrels (780,000 m³) of oil into the surrounding environment. Authorities collected huge volumes of data on the extent and evolution of the oil spill. While previous research has made use of some of these data, a system that harnesses the full potential of the dataset by integrating it with a set of highly accurate, adaptive models and meta-models has yet to be put in place. AMDADOS is a pilot application that combines data assimilation and adaptive meshing techniques with complex models to simulate the Deepwater Horizon accident at an unprecedented level of detail and precision.

Data assimilation (DA) is a mathematical technique for incorporating physical observations into numerical models. DA combines our knowledge of the physical process, as encapsulated in a physics-based model, with information from observations describing the current state of the system. Information from both the model and the observations is imperfect, and the goal of DA is to use the two in combination to obtain a more accurate solution. Figure 1 encapsulates the fundamental basis of the approach: in each simulation step, observation data is combined with output from the model, yielding what is considered the best estimate of the current state of the system. Adaptive Meshing (AM), on the other hand, is a method of dynamically changing the precision of a model by targeted refinement of the numerical grid resolution. Based on predefined triggers, the resolution of the model grid is amended at specific locations to provide greater accuracy in the regions where it is required.
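As a purely illustrative sketch (not part of the AMDADOS assimilation library, and with assumed names and numbers), the C++ snippet below shows the basic weighting idea behind a DA update for a single scalar state: the corrected value combines the model forecast and the observation, weighted by how uncertain each is.

#include <iostream>

// Corrected (analysis) value from a model forecast and an observation,
// weighted by their error variances: the gain tends towards 1 when the model
// is uncertain and towards 0 when the observation is uncertain.
double assimilate(double forecast, double forecast_var,
                  double observation, double observation_var) {
	double gain = forecast_var / (forecast_var + observation_var);
	return forecast + gain * (observation - forecast);
}

int main() {
	// hypothetical values: the model predicts 0.8 (variance 0.04),
	// while a sensor reports 0.5 (variance 0.01)
	double corrected = assimilate(0.8, 0.04, 0.5, 0.01);
	std::cout << "corrected state: " << corrected << "\n";  // lies closer to the observation
	return 0;
}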


Figure 1: Numerical models contain errors that increase with time due to model imperfections and uncertainties in initial and boundary conditions. Data assimilation minimizes these errors by correcting the model state using new observations (from “Approaches to Disaster Management – Examining the Implications of Hazards, Emergencies and Disasters”)

 

In the AMDADOS pilot application, DA and AM are used jointly, embedded in a modelling implementation of the advection-diffusion equations to simulate the Deepwater Horizon accident (Figure 2). With AM, the system autonomously increases the resolution at targeted locations, while DA incorporates observations into the model forecast. Advection-diffusion codes for transport phenomena such as oil spills exist and are well developed, but a code that embeds AM-DA and scales to harness all the available data is unprecedented, and it significantly improves resolution and accuracy so that the impact of the Deepwater Horizon accident can be better understood. The proposed environmental study is not possible at the current Petascale level of compute performance.
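As a hypothetical sketch of what such a refinement trigger might look like (the structure, names and thresholds below are assumptions, not the AMDADOS implementation), a sub-domain could be flagged for fine resolution when the local concentration gradient, the density of available observations, or an ecological-sensitivity flag exceeds a threshold:

enum class Resolution { Coarse, Fine };

// per-sub-domain quantities a refinement trigger might inspect (illustrative)
struct SubdomainState {
	double max_gradient;            // steepest oil-concentration gradient
	int    num_observations;        // observations falling inside the sub-domain
	bool   ecologically_sensitive;  // flagged as an important ecological location
};

// decide whether a sub-domain should be simulated at fine resolution
Resolution selectResolution(const SubdomainState& s) {
	const double gradient_threshold    = 0.1;  // assumed tuning parameters
	const int    observation_threshold = 10;
	if (s.max_gradient > gradient_threshold ||
	    s.num_observations > observation_threshold ||
	    s.ecologically_sensitive) {
		return Resolution::Fine;    // oil front, dense data, or sensitive area
	}
	return Resolution::Coarse;      // otherwise keep the base grid resolution
}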


Figure 2: Details of the region for DA-AM (black contour) and sources of observations (dots)

 

The AMDADOS pilot application presents a major challenge from a data and scientific computation perspective. The configuration of the application, encapsulating both DA and AM components, generates more than 1 Petabyte of data per day. Consequently, and as mandated by the requirements of the Deepwater Horizon use-case, the total volume of modelling outputs over a two-month time window will require processing of data on the order of 100 Petabytes. These data-intensive processing requirements are coupled with the extreme-scale scientific compute operations required to solve the AM-DA modelling system, discretised on a non-uniform adaptive mesh. Depending on the state of the hydro-environmental system and the availability of observational data, the AM-DA scheme covers a range of implementations, from simulating and/or assimilating only the global model to including all AM regions (targeting the oil trajectory, high-density observations, important ecological locations, etc.). Across the various options, the computational cost varies from 1 ExaFLOP to 10⁴ ExaFLOPs per time step. The amount of computational resources required to process any of these combinations demands Exascale systems.

AMDADOS plans to achieve the following objectives: (A) a modelling resolution of 4 metres in the key affected areas of Deepwater Horizon; and (B) the assimilation of the entire NOAA dataset collected during the event into the AM-DA system. The resulting prototype will constitute a benchmark for years to come in terms of both operational response planning and the volume of modelling datasets for detailed environmental analysis. Beyond the Deepwater Horizon incident itself, the planned work, through the use of a coupled DA and AM modelling approach, will create a novel paradigm for operational oil spill response systems, particularly in areas with complex and highly sensitive ecosystems.

 

AllScale Potential

AMDADOS consists of a transport model based on a linear advection-diffusion partial differential equation (PDE) that assimilates observations from the environment with the help of a library of data assimilation techniques. The feedback model is discretised with the Finite Element Method (FEM) and makes use of Domain Decomposition Methods (DDM) to distribute the numerical computation across distributed-memory HPC architectures via MPI.
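For illustration only, the snippet below advances a 1D linear advection-diffusion equation, ∂c/∂t + u ∂c/∂x = D ∂²c/∂x², by explicit finite-difference steps. AMDADOS itself uses a 2D FEM discretisation with domain decomposition; the parameter names and values here are assumptions chosen for a stable toy example.

#include <vector>

// one explicit finite-difference step of dc/dt + u*dc/dx = D*d2c/dx2
std::vector<double> step(const std::vector<double>& c,
                         double u, double D, double dt, double dx) {
	std::vector<double> next = c;
	for (std::size_t i = 1; i + 1 < c.size(); ++i) {
		double advection = -u * (c[i + 1] - c[i - 1]) / (2.0 * dx);              // central difference
		double diffusion =  D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / (dx * dx);  // second derivative
		next[i] = c[i] + dt * (advection + diffusion);
	}
	// boundary values are left untouched here; in a domain-decomposed setting
	// they are filled from neighbouring sub-domains, as in the boundary
	// synchronization example shown later in this article
	return next;
}

int main() {
	std::vector<double> c(100, 0.0);
	c[50] = 1.0;                                   // initial point release
	for (int t = 0; t < 100; ++t)
		c = step(c, /*u=*/0.5, /*D=*/0.1, /*dt=*/0.01, /*dx=*/0.1);
	return 0;
}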

Implementing AMDADOS within the AllScale environment exploits the domain decomposition paradigm of the application to leverage recursive parallelism. Namely, parallelism is achieved by distributing the individual sub-domains of the application across compute cores, with synchronization and latency hidden from the user. In contrast to an MPI parallel application, where synchronization must be handled by the user via repeated MPI calls to exchange information between sub-domains, the AllScale prototype implementation has a much closer feel to a serial application.

The code segment below demonstrates the utility of this approach. It presents a simplified overview of how the code is structured to implement boundary synchronization, the main parallel paradigm within a traditional domain decomposition MPI implementation. The first step is the identification of neighbouring sub-domains in each direction (if they exist), done via Boolean structures in the model initialization routines (not presented). Next, the flow direction at each boundary (defined relative to the sub-domain, with negative flow indicating flow into the domain and positive flow outwards) determines whether values are to be updated. If flow is into the domain, boundary values are replaced with those of the direct neighbour; if flow is positive, boundary values remain unchanged (and in fact will serve to update the direct neighbour's boundary nodes). What is apparent from this structure is the increased simplicity of the code versus an MPI parallelization. In a traditional domain decomposition MPI code, the user is tasked with curating data packing for transfer, repeated MPI calls to send and receive data, and careful orchestration to ensure each MPI send is met with a corresponding MPI receive from the appropriate neighbour. This demonstrates one of the primary strengths of the AllScale environment: a significant reduction in the development effort needed to deploy an application in parallel.

Example of code for computing boundary synchronization across sub-domains:

// compute next time step => store it in B
utils::pfor(zero, size, [&](const utils::Coordinate<2>& idx) {

	// current solution state of this sub-domain from the previous time step
	const auto& cur = A[idx];

	// init result with the current solution state
	auto res = cur;

	// 1) update boundaries
	for (Direction dir : { Up, Down, Left, Right }) {
		// obtain the local boundary of this sub-domain
		auto local_boundary = cur.getBoundary(dir);

		// obtain the corresponding boundary of the neighbouring sub-domain
		// (handling of global domain edges, where no neighbour exists, is omitted)
		auto remote_boundary =
			(dir == Up)   ? A[idx + utils::Coordinate<2>{-1, 0}].getBoundary(Down) :
			(dir == Down) ? A[idx + utils::Coordinate<2>{ 1, 0}].getBoundary(Up)   :
			(dir == Left) ? A[idx + utils::Coordinate<2>{ 0,-1}].getBoundary(Right):
			                A[idx + utils::Coordinate<2>{ 0, 1}].getBoundary(Left);

		// flow_boundary holds the flow direction across this boundary:
		// a negative value means inflow, so the local boundary values are
		// overwritten with those of the direct neighbour
		if (flow_boundary < 0) {
			for (size_t i = 0; i < local_boundary.size(); i++) {
				local_boundary[i] = remote_boundary[i];
			}
		}

		// update boundary in result
		res.setBoundary(dir, local_boundary);
	}

	// 2) ... remainder of the time step (advection-diffusion update, DA) ...

	// store the updated sub-domain state in B
	B[idx] = res;
});

 

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 671603

Contact Details

General Coordinator

Thomas Fahringer

Scientific Coordinator

Herbert Jordan