Degurba Example #29

Draft: wants to merge 36 commits into main

Changes from 1 commit (of 36 commits)

Commits
8734791  update docs (elbeejay, Feb 28, 2024)
0bcd030  Merge branch 'main' of github.com:worldbank/GOSTurban into update-docs (elbeejay, Feb 29, 2024)
2eb4db3  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 29, 2024)
9f660ff  remove docs/notebooks (elbeejay, Feb 29, 2024)
c6ce3ca  Merge branch 'update-docs' of github.com:worldbank/GOSTurban into upd… (elbeejay, Feb 29, 2024)
be10311  Revert "[pre-commit.ci] auto fixes from pre-commit.com hooks" (elbeejay, Feb 29, 2024)
9420f8d  Revert "remove docs/notebooks" (elbeejay, Feb 29, 2024)
545ab38  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 29, 2024)
5a8a7fa  modify toc to only include notebook tutorials, clear notebook outputs (elbeejay, Feb 29, 2024)
ebeb0bb  Merge branch 'update-docs' of github.com:worldbank/GOSTurban into upd… (elbeejay, Feb 29, 2024)
ed08bff  fix deps, imports, and ci (elbeejay, Feb 29, 2024)
c9f852f  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 29, 2024)
419648a  generate conf.py as part of docs build workflow (elbeejay, Feb 29, 2024)
5e3578d  Merge branch 'update-docs' of github.com:worldbank/GOSTurban into upd… (elbeejay, Feb 29, 2024)
555845a  have workflow clean notebooks (elbeejay, Feb 29, 2024)
861c786  add extensions, options etc to _config.yml (elbeejay, Feb 29, 2024)
7c60f12  cleaned notebooks (elbeejay, Feb 29, 2024)
7ec9b65  remove non-functional exclusions (elbeejay, Feb 29, 2024)
4ebf1f5  build docs on prs, only deploy on main (elbeejay, Feb 29, 2024)
081bf9a  require nbconvert for docs (elbeejay, Mar 1, 2024)
a66f8c5  doc-strings for urbanraster.py (elbeejay, Mar 12, 2024)
0306d5e  doc-strings for step2 (elbeejay, Mar 12, 2024)
3ec3ebd  lei.py docstrings (elbeejay, Mar 12, 2024)
522c775  urb doc-strings (elbeejay, Mar 12, 2024)
594004e  country_helper.py docstrings (elbeejay, Mar 12, 2024)
9f2f8ad  ignore words (elbeejay, Mar 12, 2024)
f5fa9c0  ignore more words (elbeejay, Mar 12, 2024)
88872dd  fix spellings (elbeejay, Mar 12, 2024)
8a66c39  fix spellings (elbeejay, Mar 12, 2024)
41ec26b  ignore another word (elbeejay, Mar 12, 2024)
66eb8cf  tomli to codepsell (elbeejay, Mar 12, 2024)
8773323  clean up doc-strings (elbeejay, Mar 13, 2024)
29bc9c8  clean up notebooks (elbeejay, Mar 13, 2024)
4bfd5f3  exclude implementation notebooks from ruff (elbeejay, Mar 16, 2024)
8b7c61f  finish cleaning up doc-strings (elbeejay, Mar 16, 2024)
9590759  new degurba example (elbeejay, Mar 23, 2024)
fix spellings
elbeejay committed Mar 12, 2024
commit 88872dd95a221efee9903633e8891be1c71bcdfa
@@ -67,7 +67,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vizualize raster data - GHSL"
"# Visualize raster data - GHSL"
]
},
{
[next file]
@@ -5,7 +5,7 @@
"metadata": {},
"source": [
"# Combine All Urban Metrics\n",
"Run the 4 seperate notebooks seperately, and then this notebook last to combine all of the results into one shapefile\n",
"Run the 4 separate notebooks separately, and then this notebook last to combine all of the results into one shapefile\n",
"\n",
"Check to see that all of the results have the same number of rows"
]
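The combination step that cell describes is a column-wise concatenation of per-notebook outputs. A minimal sketch, assuming each of the four metric notebooks wrote a shapefile with features in the same order (the file names here are hypothetical, not the repository's):

```python
import geopandas as gpd
import pandas as pd

# Hypothetical outputs of the four separate metric notebooks
metric_files = [
    "metrics_fullness.shp",
    "metrics_shape.shp",
    "metrics_structure.shp",
    "metrics_sprawl.shp",
]
gdfs = [gpd.read_file(f) for f in metric_files]

# Check to see that all of the results have the same number of rows
assert len({len(g) for g in gdfs}) == 1, "row counts differ between notebooks"

# Keep geometry from the first result, append attribute columns from the rest
combined = gdfs[0]
for g in gdfs[1:]:
    combined = pd.concat([combined, g.drop(columns="geometry")], axis=1)

combined = gpd.GeoDataFrame(combined, geometry="geometry")
combined.to_file("combined_urban_metrics.shp")
```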
[next file]
@@ -138,7 +138,7 @@
"\n",
" input_shapes_gpd = gpd.read_file(shpName)\n",
"\n",
" # psuedocode\n",
" # pseudocode\n",
" # For each Shape:\n",
" # Select all built-up pixels that are mostly within shape\n",
" # Area of shape = sum of all pixels * area of each pixel\n",
@@ -197,7 +197,7 @@
" # print(\"print metrics_scalar\")\n",
" # print(metrics_scalar)\n",
"\n",
" # and concatinate it with the row's shape\n",
" # and concatenate it with the row's shape\n",
" new_temp_gdf = pd.concat([temp_gdf.reset_index(drop=True), metrics_df], axis=1)\n",
"\n",
" # print(\"print new_temp_gdf\")\n",
[next file]
@@ -379,7 +379,7 @@
" metrics_scalar[k] = [metrics[k]]\n",
" metrics_df = pd.DataFrame(metrics_scalar)\n",
"\n",
" # and concatinate it with the row's shape\n",
" # and concatenate it with the row's shape\n",
" new_temp_gdf_proj = pd.concat(\n",
" [temp_gdf_proj.reset_index(drop=True), metrics_df], axis=1\n",
" )\n",
[next file]
@@ -134,7 +134,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Psuedocode\n",
"# Pseudocode\n",
"\n",
"# pop_values = []\n",
"# For each Shape/FUA:\n",
@@ -193,7 +193,7 @@
" # geometry\n",
" d[\"geometry\"] = d.apply(lambda row: Point(row[\"x\"], row[\"y\"]), axis=1)\n",
"\n",
" # exlude pixels with value less than 77\n",
" # exclude pixels with value less than 77\n",
" print(len(d))\n",
"\n",
" # print(d)\n",
@@ -238,7 +238,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Psuedocode\n",
"# Pseudocode\n",
"\n",
"# for each Shape/FUA:\n",
"# pixel_count_below_median = 0\n",
@@ -288,7 +288,7 @@
"\n",
" d = gpd.GeoDataFrame({\"col\": col, \"row\": row, \"val\": val})\n",
"\n",
" # exlude pixels with value less than 77\n",
" # exclude pixels with value less than 77\n",
" d = d[d.val > 77]\n",
" d_count = len(d)\n",
" # print(f\"d_count is {d_count}\")\n",
@@ -322,7 +322,7 @@
" # print(\"print metrics_scalar\")\n",
" # print(metrics_scalar)\n",
"\n",
" # and concatinate it with the row's shape\n",
" # and concatenate it with the row's shape\n",
" new_temp_gdf = pd.concat([temp_gdf.reset_index(drop=True), metrics_df], axis=1)\n",
"\n",
" # print(\"print new_temp_gdf\")\n",
[next file]
@@ -158,7 +158,7 @@
" # print(\"print metrics_scalar\")\n",
" # print(metrics_scalar)\n",
"\n",
" # and concatinate it with the row's shape\n",
" # and concatenate it with the row's shape\n",
" new_temp_gdf = pd.concat([temp_gdf.reset_index(drop=True), metrics_df], axis=1)\n",
"\n",
" # print(\"print new_temp_gdf\")\n",
@@ -185,7 +185,7 @@
" # metrics_scalar['intersection_density_km'] = 0\n",
" # metrics_scalar['street_density_km'] = 0\n",
" # metrics_df = pd.DataFrame(metrics_scalar)\n",
" # and concatinate it with the row's shape\n",
" # and concatenate it with the row's shape\n",
" # new_temp_gdf = pd.concat([temp_gdf.reset_index(drop=True), metrics_df], axis=1)\n",
" # output_new_temp_gdf = output_new_temp_gdf.append(new_temp_gdf, ignore_index=True)\n",
" continue"
8 changes: 4 additions & 4 deletions notebooks/Implementations/Slum_Mapping/slumML/STEP1.ipynb
@@ -106,7 +106,7 @@
"# Prepare the original shape file\n",
"original = gpd.read_file(f) # Read ESEI shapefile\n",
"if original.crs != WGS:\n",
" original = original.to_crs(WGS) # Convert the spatial referenct to WGS if it is not\n",
" original = original.to_crs(WGS) # Convert the spatial referenced to WGS if it is not\n",
"\n",
"original[\"PID\"] = original.index + 1\n",
"\n",
@@ -115,7 +115,7 @@
"fil = original.copy()\n",
"\n",
"fil = fil.to_crs(UTM) # Convert the spatial reference to UTM\n",
"# Adding attributes to the shapefile: area, geomerty, and PID (unique IDs)\n",
"# Adding attributes to the shapefile: area, geometry, and PID (unique IDs)\n",
"fil[\"area\"] = fil.area\n",
"fil[\"centroid\"] = fil[\"geometry\"].centroid\n",
"\n",
@@ -262,8 +262,8 @@
" print(\"%s rows completed at %s\" % (counter, time.ctime()))\n",
"\n",
" \"\"\"\n",
" # this functionality saves progress in case the process cannot be finished in one sitting. \n",
" # ideally, finish the processing in one sitting. \n",
" # this functionality saves progress in case the process cannot be finished in one sitting.\n",
" # ideally, finish the processing in one sitting.\n",
" old = 0\n",
" if counter % save_thresh == 0:\n",
" saver = pd.DataFrame(bundle)\n",
2 changes: 1 addition & 1 deletion notebooks/Implementations/Slum_Mapping/slumML/STEP2.ipynb
@@ -98,7 +98,7 @@
"metadata": {},
"outputs": [],
"source": [
"pth = \"/content/drive/MyDrive/Colab Notebooks/slumML/data/Yaounde/\" # Directory to save model, ouputs\n",
"pth = \"/content/drive/MyDrive/Colab Notebooks/slumML/data/Yaounde/\" # Directory to save model, outputs\n",
"building_file = \"/content/drive/MyDrive/Colab Notebooks/slumML/data/Yaounde/Yaounde_DA_morphology.shp\" # Specify the processed building footprint data\n",
"sample_file = \"/content/drive/MyDrive/Colab Notebooks/slumML/data/Yaounde/Yaounde_sample_data.shp\" # Specify the sample data"
]
[next file]
@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Exploring DEGURBA\n",
"The Degree of Urbanization methodology developed by the European Commission provides a consistent definition of urban through application of population density and total population theresholds to gridded population data."
"The Degree of Urbanization methodology developed by the European Commission provides a consistent definition of urban through application of population density and total population thresholds to gridded population data."
]
},
{
@@ -253,7 +253,7 @@
" xx = shape(cShape)\n",
" xx = Polygon(xx.exterior)\n",
" cShape = xx.__geo_interface__\n",
" # If the shape is urban, claculate total pop\n",
" # If the shape is urban, calculate total pop\n",
" mask = rasterize(\n",
" [(cShape, 0)],\n",
" out_shape=data[0, :, :].shape,\n",
[next file]
@@ -98,7 +98,7 @@
" dou_urban_1k_files.append(os.path.join(root, f))\n",
" if f.endswith(\"_cc.tif\") or f.endswith(\"_co.tif\") or f.endswith(\"_ur.tif\"):\n",
" db_urban_1k_files.append(os.path.join(root, f))\n",
" \n",
"\n",
"pop_files = list(set([\"_\".join(os.path.basename(x).split(\"_\")[:2]) + \".tif\" for x in dou_urban_files]))\n",
"pop_files = [os.path.join(urban_folder, x) for x in pop_files]"
]
@@ -168,10 +168,9 @@
" cur_name = os.path.basename(urban_file).replace(\".tif\", \"\")\n",
" cur_res_2018 = [x[0] for x in list(curR.sample(hh_2018_pairs))]\n",
" out_hh_2018[cur_name] = cur_res_2018\n",
" \n",
"\n",
" cur_res_2022 = [x[0] for x in list(curR.sample(hh_2022_pairs))]\n",
" out_hh_2022[cur_name] = cur_res_2022\n",
" "
" out_hh_2022[cur_name] = cur_res_2022\n"
]
},
{
@@ -268,17 +267,17 @@
"source": [
"final_res = adm1_bounds.copy()\n",
"for pop_layer in pop_files:\n",
" # zonal stats on DOU filess\n",
" pop_name = os.path.basename(pop_layer)[:-4] \n",
" # zonal stats on DOU files\n",
" pop_name = os.path.basename(pop_layer)[:-4]\n",
" dou_urban_file = os.path.join(urban_folder, f'{pop_name}_urban.tif')\n",
" dou_hd_urban_file = os.path.join(urban_folder, f'{pop_name}_urban_hd.tif')\n",
" \n",
"\n",
" help_xx = helper.summarize_population(pop_layer, adm1_bounds, dou_urban_file, dou_hd_urban_file)\n",
" zonal_res = help_xx.calculate_zonal()\n",
" zonal_res = zonal_res.loc[:,[x for x in zonal_res.columns if \"SUM\" in x]]\n",
" for col in zonal_res.columns:\n",
" final_res[col] = zonal_res[col]\n",
" \n",
"\n",
" # zonal stats on DB files\n",
" db_cc_file = os.path.join(urban_folder, f'{pop_name}d10b3000_cc.tif')\n",
" db_co_file = os.path.join(urban_folder, f'{pop_name}d10b3000_co.tif')\n",
@@ -297,7 +296,7 @@
" final_res[col] = zonal_res[col]\n",
" else:\n",
" tPrint(f\"Cannot process {pop_name} for DB\")\n",
" \n",
"\n",
" tPrint(pop_name)"
]
},
[next file]
@@ -143,7 +143,7 @@ def generate_combo_layer(self, pop_type="gpo", res="", debug=False):
print(p)

if len(sel_rasters) > 0:
- # Open all the ratser files and covert to pixel-level summary numbers
+ # Open all the ratser files and convert to pixel-level summary numbers
idx = 0
for cur_raster in sel_rasters:
curR = rasterio.open(cur_raster)
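The loop in that hunk opens each selected raster and reduces the stack to pixel-level summary numbers. A hedged sketch of the same idea outside the class, assuming the rasters share grid, extent, and CRS (file names are hypothetical):

```python
import numpy as np
import rasterio

sel_rasters = ["pop_2018.tif", "pop_2022.tif"]  # hypothetical inputs

# Read band 1 of each raster into a (n_rasters, rows, cols) stack
arrays = [rasterio.open(path).read(1) for path in sel_rasters]
stack = np.stack(arrays)

# One summary number per pixel, e.g. the sum across rasters
pixel_summary = stack.sum(axis=0)
```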
10 changes: 8 additions & 2 deletions notebooks/Tutorials/LEI_Example.ipynb
@@ -8,7 +8,7 @@
"\n",
"More details on the wiki - https://github.com/worldbank/GOST_Urban/wiki/Landscape-Expansion-Index\n",
"\n",
"The Landscape Expansion Index measures the nature of urbanization, quantifying the new urban landscape as one of the following three categories. The process works by isolating the areas of new urban footprint in your study area, buffering those by a set amount (300 m) and intersecting the buffer donut with the original urban area. LEI is calculated as the ratio of the area of the buffer to the area of the old built area within the buffer (the threshold for each class is customizeable). \n",
"The Landscape Expansion Index measures the nature of urbanization, quantifying the new urban landscape as one of the following three categories. The process works by isolating the areas of new urban footprint in your study area, buffering those by a set amount (300 m) and intersecting the buffer donut with the original urban area. LEI is calculated as the ratio of the area of the buffer to the area of the old built area within the buffer (the threshold for each class is customizable). \n",
"\n",
"| Expansion Type | Description | \n",
"| --- | --- |\n",
@@ -255,8 +255,14 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "worldbank",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python"
"name": "python",
"version": "3.10.13"
}
},
"nbformat": 4,