
Deriving mangrove cover from Hamilton 2016 global data-sets #34

Open
jules32 opened this issue Aug 23, 2018 · 4 comments
Comments

@jules32
Contributor

jules32 commented Aug 23, 2018

Question from OHI+ Kenya @jmbugua:

We downloaded the tiff files for the years 2008-2012, derived the mangrove cover in m2 (see datasets in github), and summed up the values for each region. However, we have noticed the following:

  1. The national mangrove cover, and consequently the cover for each county, is quite low going by national reports and by comparison with the newly released 2010 mangrove baseline layer from GFW.
  2. Physically, there are visible gaps in mangrove cover that are quite conspicuous on Pate Island.

From these observations, could you please comment on the Hamilton data-set and also on our method of deriving the cell values? Note that I have used an alternative method in ArcGIS and still get similar results.

From your analysis, you indicate that you modified the global data-set from Hamilton, and also that summing the raster cells in a region provides km2 of mangrove forest.

My questions are:

  • Are we supposed to sum the cells or the cell values? (A small sketch of the two readings follows after this list.)
  • Based on your evaluation of the Hamilton 30x30 m raster data-sets, how should we proceed? I.e., should we use your modified layers, and if yes, where can they be accessed?
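
To make the distinction concrete, here is a minimal R sketch of the two interpretations (the file name is a placeholder, and whether option (b) equals area depends on what the Hamilton cell values actually encode):

```r
library(raster)

# placeholder file name for one Hamilton 30 m tile
r <- raster("hamilton_mangrove_2010.tif")

# (a) counting cells: number of mangrove cells times the area of one cell
cell_area_m2 <- prod(res(r))              # e.g. 30 * 30 = 900 m2 if the CRS is in metres
area_by_count <- cellStats(r > 0, "sum") * cell_area_m2

# (b) summing cell values: this only gives area if each value already stores
#     the m2 (or fractional cover) of mangrove within that cell
area_by_values <- cellStats(r, "sum")
```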

Looking forward to hearing from you.

@Melsteroni
Contributor

I don't have a good sense of how accurate the Hamilton data are, so I can't be of much assistance on that account. Based on the papers, it seems like they did a fairly good job of checking the data, but that doesn't mean every region is accurate.

For the analysis, we converted the 30m rasters to 500m to make them easier to deal with at the global scale. We then summed the values in the raster cells within each region to get an estimate of total mangrove area (km2).

At the scale you are working at, it would be better to use the 30 m rasters, but it might be easier to explore the data at the 500 m scale. These data, and more information, are available here:
https://mazu.nceas.ucsb.edu/data/#mangrove_data
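
In case it helps, the per-region sum was along these lines (the file and layer names here are placeholders, not the exact global script, and the unit of the summed values depends on what each 30 m cell encodes):

```r
library(raster)

# placeholder inputs: one Hamilton 30 m raster and the region polygons
mang_30m <- raster("hamilton_mangrove_2010.tif")
regions  <- shapefile("ohi_regions.shp")

# coarsen 30 m to roughly 500 m (16 x 16 cells), summing so the regional total
# is unchanged by the coarsening
mang_500m <- aggregate(mang_30m, fact = 16, fun = sum, na.rm = TRUE)

# sum the cell values falling inside each region polygon
area_by_region <- extract(mang_500m, regions, fun = sum, na.rm = TRUE)
```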

I've also attached a snippet of the Hamilton data describing km2 of mangrove for each country, so you can see whether your values align. It also provides some generalized trend data for the entire country.

[attached image: snippet of Hamilton per-country mangrove data]

@jmbugua

jmbugua commented Aug 24, 2018

@Melsteroni @jules32 @mishal089
Many thanks Melsteroni for your response. From the snippet, the national value, i.e. 230.04 km2 (about 23,000 hectares), is still very low. Lamu county alone has an estimated mangrove cover of 33,000 ha, and the national estimate is between 52,000 and 55,000 hectares. The 2010 GFW mangrove layer (now available on the WCMC website) gives a more accurate estimate that is in line with what has been quoted before.
From this, I tend to think that the Hamilton data might not be accurate for our region, and this is a gap that we will document.
To move forward, I intend to use the 2010 layer from GFW and perform a simple overlay analysis using the Global Forest Watch dashboard to quantify the amount of mangrove. The assumption here will be that any loss/gain below the 2010 layer /mask is mangrove. We will then deduct the amount of forest cover loss from the 2010 layer to find cover for the subsequent year.
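Roughly, the bookkeeping I have in mind is something like this (a sketch only: the file names are placeholders, both rasters are assumed to share the same grid, and I am assuming a Hansen-style lossyear band where a value of 11 means loss in 2011):

```r
library(raster)

mangrove_2010 <- raster("gfw_mangrove_2010.tif")   # 2010 baseline mask (1 = mangrove, NA elsewhere)
loss_year     <- raster("hansen_lossyear.tif")     # year of forest loss (0 = no loss)

# keep only the loss that falls inside the 2010 mangrove mask
loss_in_mangrove <- mask(loss_year, mangrove_2010)

cell_km2   <- prod(res(mangrove_2010)) / 1e6       # assumes a metre-based CRS
cover_2010 <- cellStats(mangrove_2010, "sum") * cell_km2

# cover in a later year = 2010 baseline minus cumulative loss since 2010,
# e.g. for 2012 (lossyear values 11 and 12 = loss in 2011 and 2012)
cover_2012 <- cover_2010 -
  cellStats((loss_in_mangrove >= 11) & (loss_in_mangrove <= 12), "sum") * cell_km2
```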
Please let me know if this is a sound idea.
Thanks,
James.

@Melsteroni
Contributor

Hi @jmbugua

I agree that using the GFW data is best for evaluating extent.

I don't quite understand this: "The assumption here will be that any loss/gain below the 2010 layer /mask is mangrove. We will then deduct the amount of forest cover loss from the 2010 layer to find cover for the subsequent year."

How will you determine loss/gain below the 2010 layers/mask? What exactly are you comparing? (I poked around on the Global Forest Watch Dashboard but it wasn't clear how to get "below" the 2010 layer/mask)

And, what is the ultimate goal of finding the cover for the subsequent year? Are you aiming to use this info to calculate trend? Or are you trying to get a more current estimate of extent? If the second, I think the 2010 data are probably adequate for extent (unless you know of events that have had large effects on mangrove between 2010 and now).

@jmbugua

jmbugua commented Sep 4, 2018

Hi @Melsteroni
Thanks for your feedback on this issue. Actually, we have decided to abandon that approach altogether due to the numerous assumptions it entails. Furthermore, I found out that the GFW query tools are no longer functional.
At the moment, we are exploring other avenues, i.e. conducting a quick classification of mangrove cover using cloud-free Landsat images from the Global Land Analysis & Discovery archive. We have gone ahead and classified the 2011-2014 images and derived the mangrove cover extent for each year. Unfortunately, there is a high variation in mangrove cover between 2012 and 2013 (a difference of approx. 1,900 ha nationally), which is making us hesitant to use the data. The trend is, however, visible and is generally declining. I feel like we are now exhausting all our options and might come back to you later for professional advice on the best way forward.

In addition to this, I asked Julia to comment on some questions that I think you are in a better position to handle. This is an issue with using the crop function versus the mask function in R to delineate the region of interest (ROI): when clipping a raster layer using a coastal buffer of some width (e.g. 25 miles), the crop function clips the data to the extent (bounding box) and not to the exact polygon. This technique has been used in the OHI raster extraction analysis. The mask function seems to clip the data to the exact shape, and I was wondering which of the two methods you would consider more accurate. Please advise on this as well.
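
For context, the difference I mean is roughly this (the input files are placeholders):

```r
library(raster)

# placeholder inputs: a mangrove raster and a 25-mile coastal buffer polygon
r      <- raster("mangrove_cover.tif")
buffer <- shapefile("coastal_buffer_25mi.shp")

# crop() only trims the raster to the rectangular extent (bounding box) of the buffer
r_crop <- crop(r, buffer)

# mask() keeps the original grid but sets cells outside the polygon to NA
r_mask <- mask(r, buffer)

# common pattern: crop first (smaller raster, faster), then mask to the exact shape
r_roi <- mask(crop(r, buffer), buffer)

# summing over r_roi then only counts cells inside the buffer polygon
total <- cellStats(r_roi, "sum")
```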
