On a Windows 10 computer, the following piece of code leads to a process running wild with a never-ending stream of error messages:
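The snippet itself did not survive in this thread. Judging from the traceback below, it was roughly the following. This is a hypothetical reconstruction: `RiverFloodInundation` is my guess at the class behind `rf`, and the arguments of the call are not visible in the traceback.

```python
# Hypothetical reconstruction of the failing script (xxx.py in the traceback).
# The class name is an assumption; the actual arguments to
# download_forecast() are not shown in the traceback.
from climada_petals.hazard.rf_glofas import RiverFloodInundation

rf = RiverFloodInundation()
rf.download_forecast(
    # arguments elided -- the traceback does not show them
)
```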
The error messages look something like this, repeated forever:
...\climada\lib\site-packages\xarray\core\dataset.py:271: UserWarning: The specified chunks separate the stored chunks along dimension "latitude" starting at index 1377. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(
...\climada\lib\site-packages\xarray\core\dataset.py:271: UserWarning: The specified chunks separate the stored chunks along dimension "longitude" starting at index 3480. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(
...\climada\lib\site-packages\dask\dataframe\_pyarrow_compat.py:17: FutureWarning: Minimal version of pyarrow will soon be increased to 14.0.1. You are using 11.0.0. Please consider upgrading.
  warnings.warn(
...\climada\lib\site-packages\xarray\core\dataset.py:271: UserWarning: The specified chunks separate the stored chunks along dimension "latitude" starting at index 1377. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(
...\climada\lib\site-packages\xarray\core\dataset.py:271: UserWarning: The specified chunks separate the stored chunks along dimension "longitude" starting at index 3480. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "...\climada\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "...\climada\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "...\climada\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "...\climada\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "...\climada\lib\runpy.py", line 288, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "...\climada\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "...\climada\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\me\git\climada\climada_petals\xxx.py", line 4, in <module>
    rf.download_forecast(
  File "C:\Users\me\git\climada\climada_petals\climada_petals\hazard\rf_glofas\river_flood_computation.py", line 354, in download_forecast
    forecast = download_glofas_discharge(
  File "C:\Users\me\git\climada\climada_petals\climada_petals\hazard\rf_glofas\transform_ops.py", line 318, in download_glofas_discharge
    files = glofas_request(
  File "C:\Users\me\git\climada\climada_petals\climada_petals\hazard\rf_glofas\cds_glofas_downloader.py", line 289, in glofas_request
    return glofas_request_multiple(
  File "C:\Users\me\git\climada\climada_petals\climada_petals\hazard\rf_glofas\cds_glofas_downloader.py", line 173, in glofas_request_multiple
    with mp.Pool(num_proc) as pool:
  File "...\climada\lib\multiprocessing\context.py", line 119, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild,
  File "...\climada\lib\multiprocessing\pool.py", line 212, in __init__
    self._repopulate_pool()
  File "...\climada\lib\multiprocessing\pool.py", line 303, in _repopulate_pool
    return self._repopulate_pool_static(self._ctx, self.Process,
  File "...\climada\lib\multiprocessing\pool.py", line 326, in _repopulate_pool_static
    w.start()
  File "...\climada\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "...\climada\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "...\climada\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "...\climada\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "...\climada\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
(Execution of the above snippet cannot even be stopped with Ctrl-C, no matter how often it is pressed.)
I think this is due to Jupyter Notebooks not supporting multiprocessing properly. It's quite curious that I haven't run into these problems lately. One quick solution is to not rely on multiprocessing when downloading with only a single process, which is the default. I'll come up with a fix.
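A minimal sketch of what that quick fix could look like in `glofas_request_multiple()`. The helper `_download_one` is a placeholder, not the actual function in `cds_glofas_downloader.py`:

```python
import multiprocessing as mp

def _download_one(request):
    """Placeholder for the download of a single CDS request."""
    ...

def glofas_request_multiple(requests, num_proc=1):
    # Sketch: with a single process (the default), run sequentially in
    # the current process. No child is spawned, so the Windows spawn
    # bootstrapping check never triggers.
    if num_proc == 1:
        return [_download_one(request) for request in requests]
    with mp.Pool(num_proc) as pool:
        return pool.map(_download_one, requests)
```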
That's what I thought at first, too, but the code snippet above yields this very error when run as a Python script from the command line, not from within a notebook. So it's probably not the notebook's support of multiprocessing but the system's itself...
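For reference, the usual workaround on Windows (where `multiprocessing` starts children with the `spawn` method and re-imports the main module) is the guard that the RuntimeError itself suggests. A sketch applied to the hypothetical snippet from above:

```python
from climada_petals.hazard.rf_glofas import RiverFloodInundation

def main():
    rf = RiverFloodInundation()
    rf.download_forecast(
        # arguments elided, as above
    )

if __name__ == "__main__":
    # On Windows, every spawned child re-imports this module. The guard
    # keeps the children from re-running the download and spawning
    # further children in an endless cascade.
    main()
```

This guards the user's script; the library-side change of skipping the pool for single-process downloads (sketched above) would remove the need for it in the default case.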