
Dimension-mismatch error when generating heatmaps from GREW-pose-pkl #270

Open

Niannian-Li opened this issue Feb 28, 2025 · 0 comments
Hello, professors (developers). I followed the steps in https://github.com/ShiqiYu/OpenGait/tree/086f9f0129aa07c7b31df73593f07326c0ada26d/datasets/GREW exactly: I rearranged the silhouette data and the pose data, then preprocessed each into pkl files:

```
python datasets/GREW/rearrange_GREW_pose.py --input_path grew --output_path GREW-pose-rearranged
python datasets/pretreatment.py --input_path GREW-pose-rearranged --output_path GREW-pose-pkl --pose --dataset GREW
```

I then ran heatmap generation on the resulting GREW-pose-pkl:

```
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 datasets/pretreatment_heatmap.py --pose_data_path=GREW-pose-pkl --save_root=GREW-heatmap-pkl --dataset_name=GREW
```

However, the run fails with the errors below. What step am I missing?


```
/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
  0%|          | 0/31722 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "datasets/pretreatment_heatmap.py", line 709, in <module>
    for _, tmp in tqdm(enumerate(dataloader), total=len(dataloader)):
  File "/usr/local/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "datasets/pretreatment_heatmap.py", line 634, in __getitem__
    pose_data = pose_data[:,2:].reshape(-1, 17, 3)
ValueError: cannot reshape array of size 4320 into shape (17,3)
```

The other three launched processes raise the same ValueError at the same line (`datasets/pretreatment_heatmap.py`, line 634), differing only in the reported array size (5445, 2655, and 2610). The run then aborts:
```
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 82894) of binary: /usr/local/miniconda3/bin/python
Traceback (most recent call last):
  File "/usr/local/miniconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/miniconda3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 195, in <module>
    main()
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 191, in main
    launch(args)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 176, in launch
    run(args)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
datasets/pretreatment_heatmap.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2025-02-28_13:19:59
  host      : I1fc5afc8ff00201d7c
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 82895)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2025-02-28_13:19:59
  host      : I1fc5afc8ff00201d7c
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 82896)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2025-02-28_13:19:59
  host      : I1fc5afc8ff00201d7c
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 82897)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-02-28_13:19:59
  host      : I1fc5afc8ff00201d7c
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 82894)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
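For context: the failing line `pose_data[:,2:].reshape(-1, 17, 3)` drops 2 metadata columns per frame and expects the rest to be 17 keypoints × 3 values (x, y, score), i.e. the remaining element count must be divisible by 51. None of the four reported sizes (4320, 5445, 2655, 2610) is divisible by 51 (all four happen to be divisible by 45), which suggests the rearranged pose files carry a per-frame layout other than 2 + 17×3 columns. Below is a minimal diagnostic sketch I can run on one of the generated files to confirm the column count; it assumes each pkl holds a single 2-D array of shape (frames, columns) as written by `pretreatment.py`, and the helper name `check_pose_pkl` is my own, not part of OpenGait:

```python
import pickle

import numpy as np


def check_pose_pkl(path, n_joints=17):
    """Report whether a pose pkl matches the layout pretreatment_heatmap.py
    assumes: each row = 2 metadata columns + n_joints * 3 keypoint values."""
    with open(path, "rb") as f:
        pose_data = np.asarray(pickle.load(f))
    n_frames, n_cols = pose_data.shape
    keypoint_cols = n_cols - 2  # columns left after dropping the 2 metadata columns
    ok = keypoint_cols == n_joints * 3  # reshape(-1, 17, 3) only succeeds if this holds
    print(f"{path}: {n_frames} frames x {n_cols} cols -> "
          f"{keypoint_cols} keypoint values per frame ({'OK' if ok else 'MISMATCH'})")
    return ok
```

Running this over a few files in GREW-pose-pkl should show whether the column count is wrong everywhere (pointing at the txt-to-pkl conversion) or only for some sequences.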