A skill assessment framework for the Fisheries and Marine Ecosystem
Model Intercomparison Project
Abstract
Understanding climate change impacts on global marine ecosystems and
fisheries requires complex marine ecosystem models, forced by global
climate projections, that can robustly detect and project changes. The
Fisheries and Marine Ecosystem Model Intercomparison Project (FishMIP)
addresses this need through an ensemble modelling approach. Yet
FishMIP does not have a standardised skill assessment framework to
quantify the ability of member models to reproduce past observations and
to guide model improvement. In this study, we apply a comprehensive
model skill assessment framework to a subset of global FishMIP models
that produce historical fisheries catches. We consider a suite of
metrics and assess their utility in illustrating the models’ ability to
reproduce observed fisheries catches. Our findings reveal an improvement in
model performance at both global and regional (Large Marine Ecosystem)
scales from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to
the Phase 6 (CMIP6) simulation round. Our analysis underscores the importance of employing
easily interpretable, relative skill metrics to estimate the capability
of models to capture temporal variations, alongside absolute error
measures to characterise shifts in the magnitude of these variations
between models and across simulation rounds. The skill assessment
framework developed and tested here provides a first objective
assessment and a baseline of the FishMIP ensemble’s skill in reproducing
historical catch at global and regional scales. This assessment can be
further refined and systematically applied to test the reliability of the
full FishMIP ensemble in future simulation rounds, and extended to
additional variables such as fish biomass or production.
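
To illustrate the distinction drawn above between relative skill metrics (which capture temporal variation) and absolute error measures (which capture the magnitude of differences), the sketch below computes a Pearson correlation and a mean absolute error between an observed and a modelled catch time series. This is a minimal, hypothetical example in Python using NumPy; the variable names, data values, and choice of metrics are assumptions for illustration and do not reproduce the exact metric suite applied in this study.

```python
import numpy as np

def correlation_skill(observed, modelled):
    """Relative skill: Pearson correlation between observed and modelled
    catch series, reflecting agreement in temporal variation."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return np.corrcoef(observed, modelled)[0, 1]

def mean_absolute_error(observed, modelled):
    """Absolute error: mean absolute difference in catch magnitude."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return np.mean(np.abs(modelled - observed))

# Hypothetical annual catch series (e.g., million tonnes), for illustration only.
observed_catch = np.array([80.1, 82.3, 81.0, 84.5, 86.2, 85.0])
modelled_catch = np.array([78.0, 80.5, 80.2, 83.0, 84.1, 84.8])

print(f"Correlation skill: {correlation_skill(observed_catch, modelled_catch):.2f}")
print(f"Mean absolute error: {mean_absolute_error(observed_catch, modelled_catch):.2f}")
```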