I'm opening this issue following the detailed description below, provided to me by @amolod. This is a crucial step for computational performance improvements in the Coupled DA runs and for testing different OCEAN_DT and HEARTBEAT_DT combinations.
The model’s heartbeat is the fastest time step at which the GCM advances and updates its prognostic fields. Submodels can be called either more frequently or less frequently than the heartbeat.
When we talk about “calling the ocean” we mean calling the ocean infrastructure, which includes the communication between the ocean and all the other components and entails all the needed grid interpolations. This is, for instance, the frequency at which the ocean and the sea ice see each other. It is referred to in the model as OCEAN_DT, and MOM needs to know this number as well: in MOM5 we supply it in one of MOM’s .data files; it is not clear how MOM6 learns it. It is also not clear how much of the total “call the ocean” cost is in the communication versus inside MOM. I am going to distinguish this from the “internal” ocean time step that MOM needs to take for accuracy and stability.
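To make the timing relationship concrete, here is a minimal Python sketch. It is purely illustrative and is not GEOS or MAPL code; the specific values (a 450 s heartbeat and a 3600 s coupling interval) are assumptions chosen only for the example.

```python
# Illustrative only (not GEOS/MAPL code); the time step values are assumed.
HEARTBEAT_DT = 450    # model heartbeat, seconds
OCEAN_DT = 3600       # atm-ocean coupling interval, seconds

if OCEAN_DT % HEARTBEAT_DT != 0:
    raise ValueError("OCEAN_DT must be an integer multiple of HEARTBEAT_DT")

ratio = OCEAN_DT // HEARTBEAT_DT   # ocean infrastructure is invoked every `ratio` heartbeats

for step in range(1, 25):          # 24 heartbeats = 3 hours with these numbers
    # ... advance the atmosphere and surface every heartbeat ...
    if step % ratio == 0:
        # call the ocean infrastructure: regrid, exchange with sea ice, run MOM
        pass
```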
In order to call the ocean less frequently than the model heartbeat, we need two things to happen correctly, neither of which was working in the code as of the last time we checked.
The first is needed for energy conservation: if we are going to call the ocean less frequently than the atmosphere surface (which is always called at the highest frequency the model uses, i.e., the heartbeat), we need to accumulate the fluxes (radiative and turbulent) over the longer interval at which we call the ocean. If we call it once an hour, we need the TOTAL of what the surface computed over that hour; this is what the surface gave to and got from the atmosphere. There is functionality in MAPL to do this, but it is not working properly. It needs to be fixed and checked for conservation: what the ocean “gave” in latent heat over an hour had better be exactly what the atmosphere got.
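A minimal sketch of the accumulation and the conservation check, assuming a hypothetical per-heartbeat flux function and example time steps; this is not the MAPL implementation, only an illustration of the bookkeeping that has to hold.

```python
def surface_latent_heat_flux(step):
    """Stand-in for the per-heartbeat surface flux computation (W m-2)."""
    return 100.0 + 5.0 * step

HEARTBEAT_DT = 450        # seconds (assumed)
STEPS_PER_COUPLE = 8      # 8 heartbeats x 450 s = one hour of OCEAN_DT

ocean_receives = 0.0      # energy handed to the ocean over the coupling interval (J m-2)
atmos_gives = 0.0         # energy the atmosphere saw leaving through the surface (J m-2)

for step in range(STEPS_PER_COUPLE):
    flux = surface_latent_heat_flux(step)     # computed every heartbeat
    atmos_gives += flux * HEARTBEAT_DT        # atmosphere side of the exchange
    ocean_receives += flux * HEARTBEAT_DT     # accumulated total passed to the ocean

# conservation check: the two integrals must match to round-off
assert abs(ocean_receives - atmos_gives) <= 1e-12 * abs(atmos_gives)
```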
The second is needed to make sure the ocean can run with the required accuracy and stability: if we call the ocean (i.e., the atm-ocean communication) once an hour, and the ocean model, or part of the ocean dynamics, needs to be called more frequently, we need to “subcycle” the ocean, i.e., run a sequence of steps that advance the ocean to the end of the OCEAN_DT interval. In addition, the fields that go back to the atmosphere or the sea ice need to be averaged or accumulated over the subcycling. I am thinking, for example, of the information related to the ocean’s behavior near freezing that the sea-ice thermodynamics needs to know. It is my understanding that this subcycling is not functioning properly.
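A minimal sketch of the subcycling idea, with a toy stand-in for the ocean step and assumed time steps; the point is only that the field handed back to the sea ice should be the average over the subcycle, not the last instantaneous value.

```python
OCEAN_DT = 3600           # atm-ocean coupling interval, seconds (assumed)
OCEAN_INTERNAL_DT = 900   # ocean's internal step for accuracy/stability, seconds (assumed)

assert OCEAN_DT % OCEAN_INTERNAL_DT == 0
nsub = OCEAN_DT // OCEAN_INTERNAL_DT

def step_ocean(sst, dt):
    """Stand-in for one internal ocean step; cools the surface slightly."""
    return sst - 0.01 * (dt / 3600.0)

sst = -1.5                # degC, toy near-freezing surface temperature
sst_sum = 0.0
for _ in range(nsub):     # subcycle the ocean through one OCEAN_DT interval
    sst = step_ocean(sst, OCEAN_INTERNAL_DT)
    sst_sum += sst

# the sea ice should see the subcycle average, not the final instantaneous value
sst_for_seaice = sst_sum / nsub
```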
@Dooruk The code to do the subcycling is already in place (it has been there for at least a few years). What is missing is the ability to do the appropriate time-averaging and/or accumulation. It is on my todo list to add the missing functionality, and I hope to get to it in the very near future.
FYI @wmputman