The number of defacing and refacing* algorithms keeps growing, data acquisition keeps changing, new analysis approaches keep appearing, etc.
Publications investigating the effects of current defacing pipelines on some datasets already exist, and, as with alignment algorithms, we don't know which one works best in general or in specific use cases. It would be nice to establish a fully automated benchmarking pipeline which would grow with newer algorithms and possibly newer datasets. Evaluation:

- could be done without knowing ground truth -- just measuring the impact (difference) each defacer has on the results of a downstream pipeline (e.g. simple BET or some other morphometrics); see the sketch after this list
- could use some curated dataset(s) with known ground truth, e.g. the pipeline + annotated datasets at https://mindboggle.info/, to judge the impact
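As a concrete starting point for the first option, here is a minimal sketch in Python. It assumes FSL and nibabel are available; the defacer command lines, tool names, and file names are hypothetical placeholders, not any real tool's CLI (pydeface, mri_deface, quickshear, etc. each have their own invocations):

```python
# Hedged sketch of the "no ground truth" comparison: deface the same T1w image
# with several tools, run a simple downstream step (FSL's BET) on each result,
# and compare the derived brain-mask volumes across defacers.
import subprocess
from pathlib import Path

import nibabel as nib
import numpy as np

T1W = Path("sub-01_T1w.nii.gz")  # hypothetical input image

# Placeholder defacer invocations: each maps (input, output) to a command line.
DEFACERS = {
    "defacer_a": lambda i, o: ["deface_tool_a", str(i), str(o)],
    "defacer_b": lambda i, o: ["deface_tool_b", "--in", str(i), "--out", str(o)],
}


def brain_volume_mm3(mask: Path) -> float:
    """Volume of the nonzero voxels of a mask image, in mm^3."""
    img = nib.load(str(mask))
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    return float(np.count_nonzero(img.get_fdata())) * voxel_mm3


volumes = {}
for name, make_cmd in DEFACERS.items():
    defaced = T1W.with_name(f"{name}_defaced.nii.gz")
    subprocess.run(make_cmd(T1W, defaced), check=True)

    # FSL BET: the -m flag also writes a binary brain mask (<output>_mask).
    brain = T1W.with_name(f"{name}_brain.nii.gz")
    subprocess.run(["bet", str(defaced), str(brain), "-m"], check=True)
    volumes[name] = brain_volume_mm3(T1W.with_name(f"{name}_brain_mask.nii.gz"))

# Report each defacer's brain volume and its deviation from the group mean:
# a large spread means the choice of defacer matters for this measure.
mean_vol = float(np.mean(list(volumes.values())))
for name, vol in sorted(volumes.items()):
    print(f"{name}: {vol / 1000:.1f} ml ({100 * (vol - mean_vol) / mean_vol:+.2f}% vs mean)")
```

A real pipeline would of course loop over subjects/datasets and richer morphometrics, but the across-defacer spread of even a single measure like this is already the "impact" signal described above.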
Anyone interested in joining @con/obc ?