I’m glad that our study has prompted Colin to start blogging about climate change – or, more precisely, about bioclimatic envelope models and their uncertainties under climate change. There has been much discussion around these models and the conceptual frameworks that underpin them. But we all (seem to) agree that these models have various uncertainties, and that their results and the use of these results are contingent on a series of assumptions.
The focus of the study is on uncertainties
First of all, the focus of the study was to explore model uncertainties. We examine three sources of uncertainty in projections of potential climatic suitability for African vertebrates (more on the projections themselves below). Just as Colin and many other researchers argue, it is important to try to identify and quantify the uncertainty surrounding the results from models. Doing that helps us to know where we should put our efforts to reduce the uncertainty, and can guide the interpretation and use of results.
Our study explores the variability in projections from the choice of modelling technique, climate model and emissions scenario. It quantifies the relative importance of each, and maps them spatially. Of these three sources of uncertainty, the modelling technique was the major one, but the importance of climate projections increases with longer time horizons. The study further cautions against projections onto novel climates, where these models are less reliable. For the variables used in our study, climates beyond the current climatic range were pervasive in northern sub-Saharan Africa by the end of the century.
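For readers curious how partitioning variability among technique, climate model and scenario can work in practice, here is a minimal sketch with made-up numbers (my illustration, not the paper's actual method or data): a main-effects sum-of-squares decomposition over a small factorial ensemble of suitability values for one grid cell.

```python
import numpy as np

# Hypothetical ensemble for a single grid cell, indexed by
# (modelling technique, climate model, emissions scenario).
# Dimensions and values are illustrative only.
rng = np.random.default_rng(0)
proj = rng.random((4, 3, 2))  # 4 techniques x 3 GCMs x 2 scenarios

grand = proj.mean()  # grand mean over the whole ensemble

def main_effect_ss(axis):
    """Sum of squares attributable to one factor's main effect."""
    other = tuple(i for i in range(proj.ndim) if i != axis)
    level_means = proj.mean(axis=other)          # mean per factor level
    n_per_level = proj.size / proj.shape[axis]   # cells per level
    return n_per_level * ((level_means - grand) ** 2).sum()

total_ss = ((proj - grand) ** 2).sum()
for name, ax in [("technique", 0), ("climate model", 1), ("scenario", 2)]:
    print(f"{name}: {main_effect_ss(ax) / total_ss:.1%} of total variance")
```

Repeating this per grid square is what lets the relative importance of each source be mapped spatially.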
If we cannot identify the most appropriate model or climate scenario, the safest is to use a range of alternatives. How to then interpret the ensuing range of projections has been a topic of debate. We too caution that the simple averages depicted in Colin’s cartoon may imply a loss of information. But one of the consensus techniques we apply overcomes this problem, to some extent, by first grouping similar projections and then averaging each group. Our study compares these and other techniques for combining projections. The aim was to contribute to the debate around building consensus as one possible way to explore the uncertainty that surrounds ensembles of projections – another way being a probabilistic approach.
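The "group similar projections, then average each group" idea can be sketched in a few lines. This is my illustration, not the paper's implementation: grouping by correlation with a group's seed projection stands in for a proper clustering step.

```python
import numpy as np

# Six hypothetical projections (flattened suitability maps) forming two
# dissimilar "families" of three. Values are synthetic.
rng = np.random.default_rng(1)
base = rng.random(100)
projections = [base + rng.normal(0, 0.05, 100) for _ in range(3)] + \
              [1 - base + rng.normal(0, 0.05, 100) for _ in range(3)]

groups = []  # each group holds projections judged similar to its first member
for p in projections:
    for g in groups:
        if np.corrcoef(p, g[0])[0, 1] > 0.7:  # similar to the group's seed
            g.append(p)
            break
    else:
        groups.append([p])  # no similar group found: start a new one

consensus = [np.mean(g, axis=0) for g in groups]  # one average per group
print(f"{len(projections)} projections -> {len(consensus)} consensus maps")
```

Averaging within groups keeps genuinely divergent projections visible as separate consensus maps, instead of blending them into one grand mean.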
What are we modelling?
Now to the projections we used to explore uncertainties. If we ignore model uncertainties or assumptions, it is easy (I should say inevitable) to interpret model outputs as saying something that they can’t say, or to use them for something that they cannot be used for. If we closed our eyes to the assumptions behind our study, we would say that we are forecasting the fate of African vertebrates by the end of the century, and showing on the map where species will remain or disappear. Reading Colin’s blog, it may seem this is what we were saying. But it was not. The assumptions are clearly stated, reflected in the interpretation of the results, and revisited in the discussion. Our models are not built to provide forecasts of actual distributions but rather potential areas of climatic suitability for species.
And the two, just as Colin argues, can be very different indeed. At the large scale of our study, climate is one determinant of species distributions. But climatically suitable areas and actual distributions differ because of a range of factors operating at different scales. We mention some factors highlighted by Colin, such as dispersal, fire, and biotic interactions. These factors were not accounted for in our projections of climatic suitability, yet they would surely need to be considered in projections of species distributions. When assessing shifts in climatic suitability, there is uncertainty also in the species data used: past history and biogeography can limit the extent to which the available species data reflect the full climatic tolerances of the species. We also mention the contingency of the results on the climatic variables selected. Given the large number of species, we compiled a set of bioclimatic variables (including important seasonal variables, such as precipitation seasonality or precipitation of the driest quarter), and selected uncorrelated variables that explain a large part of the variation over the study area.
Colin bases most of his discussion about the plausibility of the projections on the maps of climatic suitability retention shown in the blog (Figure 7 in the paper). Colin assumed that these maps depict local persistence as a proportion of local species richness. But in fact what they show is local persistence in relation to total richness (as the caption in the paper explains). Persistence calculated in this manner is thus much lower than otherwise, and emphasises areas of high current richness. Maps with the more commonly used metric of in situ persistence, as we present in an addendum available on the list of publications of our group (http://www.ibiochange..../Garcia-et-al-2012-Addendum.pdf), show a very different picture for African vertebrates.
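The distinction between the two persistence metrics matters numerically. A toy example with made-up numbers (not from the paper) for a species-poor square where most resident species retain suitability:

```python
# Illustrative values only: a single grid square.
total_richness = 1000    # species modelled across the whole study area
local_richness = 40      # species currently present in this square
local_persisting = 30    # of those, how many retain climatic suitability

in_situ = local_persisting / local_richness   # persistence vs local richness
vs_total = local_persisting / total_richness  # persistence vs total richness
print(f"in situ persistence: {in_situ:.0%}, vs total richness: {vs_total:.0%}")
```

The same square looks fairly stable by the in situ metric (75%) but near-total-loss by the total-richness metric (3%), which is why the latter emphasises areas of high current richness.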
The blog also reproduces the maps of turnover that we used to show projected changes over time for all species. Our percentage of species turnover, for which a reference is provided in the text, was calculated as the ratio of the sum of local gains and losses (of climatic suitability) to the sum of local baseline richness and gains (of climatic suitability). Evaluating the projections is difficult, among other reasons, because the models might rightly predict that a site is suitable when the data indicate absence. What our study does is simply to select one measure of accuracy and compare the relative accuracy of the different projections. The focus is on relative, not absolute terms.
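For concreteness, the turnover measure as described above can be written as a small function (the example numbers are illustrative, not from the paper):

```python
def percent_turnover(gains, losses, baseline_richness):
    """Turnover as described in the reply above:
    (local gains + local losses) / (baseline richness + gains), as a %."""
    return 100 * (gains + losses) / (baseline_richness + gains)

# e.g. a square with 50 resident species, projected to lose climatic
# suitability for 10 of them and gain it for 5 others:
print(percent_turnover(gains=5, losses=10, baseline_richness=50))
```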
One thing to remember also, when interpreting projections like these, is that the patterns shown are mostly for wide-ranging species (the ones that could be statistically modelled). Our projections refer to about two thirds of the species and do not necessarily reflect the patterns for small-ranging species.
Fit for the purpose?
Only highly complex models could begin to tell us where African vertebrate species might die off or survive under changing climates. Until we have the knowledge and data to build these super-models for thousands of species, we are left with imperfect models explaining part of the reality. If we are clear about what part of the reality the models are explaining (assumptions) and where the imperfections are (uncertainties), they will serve a purpose. And the purpose in our paper was to explore the challenges posed by the variety of climatic projections and modelling techniques available when examining broad patterns of climatic suitability for these species.
The new figure (here) is also worth looking at, so I've posted it here on the right too. The previous Figure 7 showed the proportion of the total species of each group in all of Africa that lost climate space (which is going to be very strongly related to species richness, and much less informative about the local impacts of climate change). This one shows the proportion of the species present in each square that are expected to retain climate suitability. And, as you can see, with only two significant exceptions, almost everywhere is retaining 75% or more of the current species. That's much more in keeping with my gut feeling on the issue, so I'm already much happier!

But let's look at the exceptions. The first is in the southern Sahel region, and that's an area where we know the predictions are unreliable - there's no analogue of the predicted future conditions in that area, so we shouldn't read much into these results. The second, and much more interesting, region is in Namibia, where at least under the more extreme climate scenarios we're still predicting all species to lose climate space. I think this is a good example of where we're likely to see the statistical problems I mentioned originally: the Namib is hot and dry, but not quite as hot and dry as the Sahara. Like the Sahara, it's expected to get hotter and drier - but whilst the Sahara will become a no-analogue region (where, if we forecast distributions at all, we know it's just a guess), the Namib will become more like the Sahara is now. For that reason it's not flagged as a no-analogue region for the modelling, but since there are few species in common between the Sahara and the Namib (they're a continent apart, after all!), all the hottest, driest current locations are associated with absence of the Namib species. All the models fitted to these data in this paper are therefore guaranteed to predict loss of climate space for species in the Namib, simply because they're not found in the Sahara and regardless of whether such loss is actually likely. So it's not that these species are certainly safe - far from it, in fact - but that the uncertainty in the prediction is vastly underestimated: we simply don't know if this is a statistical artifact or not.

Somewhat more interesting is the higher level of change predicted for Mozambique: there's no obvious statistical artifact I can think of here, so that's clearly the area to watch if we want to see how well the models are working, which is potentially very useful information.
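The no-analogue screening behind this argument can be sketched simply: flag a future cell when any climate variable falls outside the range seen in the current (calibration) data. This is my minimal illustration with made-up values, not the exact screening used in the paper; real analyses use more refined measures of extrapolation.

```python
import numpy as np

# Rows: grid cells; columns: climate variables (e.g. temperature, precip).
# Values are invented for illustration.
current = np.array([[10.0, 200.0],
                    [25.0,  50.0],
                    [30.0,  20.0]])
future = np.array([[28.0, 40.0],    # within the calibrated range
                   [38.0,  5.0]])   # hotter and drier than anything seen

lo, hi = current.min(axis=0), current.max(axis=0)  # observed range per variable
no_analogue = ((future < lo) | (future > hi)).any(axis=1)
print(no_analogue)  # only the second cell is flagged
```

The Namib case shows the limit of this check: a future Namib that looks like today's Sahara passes it, even though the models have never seen Namib species under Saharan conditions.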
So, what do I think about Raquel's other points in the comment above? Well, I think there are three further issues where we'll probably disagree. Firstly, this line in Raquel's post is extremely important:
If we cannot identify the most appropriate model or climate scenario, the safest is to use a range of alternatives.

This caveat - that the use of ensemble models is appropriate if and only if we can't identify the most appropriate model - is very important, because if we can identify better or worse methods - and particularly when the models make different assumptions about the form of the relationships between variables - then the value of using a range of alternative models (an ensemble) is lost. This is, of course, why the cartoon I included in the original post is funny: it's obvious which model is the most appropriate, and therefore that the other models are unimportant. It might not seem so immediately obvious which models are better than others for the types of models ecologists are using, but my opinion is that whilst we're still at such a basic stage in modelling, instead of spending time creating more and more sophisticated ensembles, we'd be better off working out which models are better. And there are lots of ways to do that (start by asking a statistician!). This is quite different from the field of climate modelling, where there is currently no way to choose between the latest models, and most ensembles are built from models with different parameter values, not different functional forms (though note that climate modellers don't include their old 1990s models in the ensemble any more, because it's obvious that these have been improved upon).
Next, Raquel gives a clear explanation of why climate suitability might not be the same as species distribution - with which I agree completely. In other papers the authors have gone to lengths to explain how climate suitability (as a continuous probability) can be used to make predictions (a good example here), but here they chose to do what many others do and classify a continuous probability into areas described as 'suitable' or 'unsuitable'. I think that once we divide a map up into discrete areas like this, we're saying that an organism can or can't live in each square. Even so, Raquel is correct to say that there's a difference between suitability and distribution: if we identify a square as suitable, it doesn't mean that the species will be found there, for any number of reasons - I'm certain Europe would be suitable for a lot of N American species, but they don't live there. So a change in climate suitability from not-suitable to suitable doesn't necessarily mean the square will be colonised. But there's an asymmetry with the places we say are initially suitable but become non-suitable. Here, I think, we are talking about local extinction of one form or another - it might not be immediate, and if there are suitable source populations all around it might not actually occur, but in some demographic sense local extinction has happened nonetheless. Taken to the extreme where everywhere that is currently suitable becomes unsuitable, and nowhere else becomes suitable, that is usually read as a sign of imminent extinction by plenty of people in the literature (e.g. here). So whilst I agree that focusing on the gains in climate space is not so useful when predicting actual range change, the focus on losses is far more important and, I would contend, if the predictions of loss in climate space don't also hold for extinction, then the model of climate space is flawed.
Or more simply, if unsuitable doesn't mean that the square is not suitable for that species, it means nothing.
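For what it's worth, the classification step being discussed is just this (my illustration; the suitability values and the threshold are arbitrary here, and thresholds are often chosen to balance omission and commission errors):

```python
import numpy as np

# Continuous suitability output from a model, one value per grid cell
# (invented numbers for illustration).
suitability = np.array([0.05, 0.2, 0.45, 0.6, 0.9])
threshold = 0.5  # an arbitrary cut-off for this sketch

binary = suitability >= threshold  # the discrete suitable/unsuitable map
print(binary)  # the last two cells come out as 'suitable'
```

Everything downstream - persistence, turnover, gains and losses - is computed from this binary map, so the information in the continuous probabilities is discarded at this point.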
Finally, now I understand how percent turnover was calculated (for each square it is (colonisations + extinctions) / (continued presences + colonisations + extinctions)), I recognise the index as one of the ones that fails several basic tests of what an index of turnover should do. It tends to emphasise changes in richness rather than turnover per se, producing the same sort of richness bias seen in Figure 7: it gives an identical score (0.9) to a square where initial richness is low (say 1) but 9 species are expected to colonise as it does to a square where initial richness was 55 but 45 species go extinct and 45 more colonise, which is clearly a much bigger real change. I doubt that's going to make a huge difference to the overall picture, but it does mean I'll take the turnover results with a pinch of salt too.
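A couple of lines confirm the identical scores for the two squares described above:

```python
def turnover(colonisations, extinctions, continued):
    """The index as described: (col + ext) / (continued + col + ext)."""
    return (colonisations + extinctions) / (continued + colonisations + extinctions)

# Square A: 1 resident species, it persists, and 9 species colonise.
a = turnover(colonisations=9, extinctions=0, continued=1)
# Square B: 55 residents, 45 go extinct (10 persist) and 45 colonise.
b = turnover(colonisations=45, extinctions=45, continued=10)
print(a, b)  # both 0.9, despite very different amounts of real change
```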
And that, with much thanks to Raquel for joining in the discussion, is all for now. Anyone else want to make some comments?
Garcia, R., Burgess, N., Cabeza, M., Rahbek, C., & Araújo, M. (2012). Exploring consensus in 21st century projections of climatically suitable areas for African vertebrates. Global Change Biology, 18(4), 1253-1269. DOI: 10.1111/j.1365-2486.2011.02605.x