Get rid of the legacy demo data #697

Merged · 5 commits · Sep 9, 2024
3 changes: 2 additions & 1 deletion .gitignore
@@ -31,4 +31,5 @@ docs/source/examples/_notebooks
.#*
\#*#
# Exclude files starting with exclude (local test and development)
exclude*
exclude*
data/demo_data.fits
Binary file removed data/demo/000001.fits
Binary file removed data/demo/000002.fits
Binary file removed data/demo/000003.fits
Binary file removed data/demo/000004.fits
Binary file removed data/demo/000005.fits
Binary file removed data/demo/000006.fits
Binary file removed data/demo/000007.fits
Binary file removed data/demo/000008.fits
Binary file removed data/demo/000009.fits
82 changes: 0 additions & 82 deletions data/demo_config.yml

This file was deleted.

File renamed without changes: data/demo/000000.fits → data/demo_image.fits
21 changes: 0 additions & 21 deletions data/demo_times.dat

This file was deleted.

3 changes: 1 addition & 2 deletions data/readme.txt
@@ -1,5 +1,4 @@
The data directory contains a few data sets used in example python files and notebooks:
- data/demo: Contains 10 image files created using fake_data_creator.py and containing a single fake object.
- data/demo_times.dat: The external time file for the images in data/demo.
- data/demo_image.fits: Contains an image file created using fake_data_creator.py and containing a single fake object.
- data/small: Contains 10 small image files created using fake_data_creator.py and containing a single fake object.
- data/fake_results: Contains the results of running the KBMOD_Demo notebook on the data in data/demo. For a description of the files see the KBMOD documentation.
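
With the per-image demo directory gone, the demo data is now generated on demand. A minimal sketch of regenerating the single-file data set with the helper this PR uses (the output path mirrors the new .gitignore entry):

    from pathlib import Path

    from kbmod.fake_data.demo_helper import make_demo_data

    # Write the single-file demo data set only if it is not already present.
    demo_file = "data/demo_data.fits"
    if not Path(demo_file).is_file():
        make_demo_data(filename=demo_file)
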
78 changes: 51 additions & 27 deletions notebooks/KBMOD_Demo.ipynb
@@ -18,8 +18,11 @@
"\n",
"from pathlib import Path\n",
"\n",
"from kbmod.configuration import SearchConfiguration\n",
"from kbmod.fake_data.demo_helper import make_demo_data\n",
"from kbmod.run_search import *\n",
"from kbmod.search import *"
"from kbmod.search import *\n",
"from kbmod.work_unit import WorkUnit"
]
},
{
@@ -33,13 +33,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"There are at least two file paths you need to setup in order to run kbmod:\n",
"1. The im_filepath provides a path to the input images.\n",
"1. The res_filepath provides a path to the directory where the output results will be stored.\n",
"In order to run KBMOD you need to have the location of the input data and a `res_filepath` that provides a path to the directory where the output results will be stored. Input data can come from a variety of formats including Rubin’s Bulter, fits files, and `WorkUnit` files. In this demo we use the `WorkUnit` file which is an internal storage format used. For more information on generating a `WorkUnit` from the Butler or fits, see the standardizer notebooks.\n",
"\n",
"A time and psf file can optionally be specified.\n",
"\n",
"If you already have data files, you can use those. Below we use the data in `data/demo`. You can also create your own fake data using `fake_data_creator.py`."
"If you already have data files, you can use those. Below we create and use data in `data/demo_data.fits`."
]
},
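
For readers who already have data on disk, a minimal sketch of the `WorkUnit` path the cell above describes (the file name is illustrative; `WorkUnit.from_fits` is the same loader used later in this notebook):

    from kbmod.work_unit import WorkUnit

    # A WorkUnit bundles the image stack, metadata, and (optionally)
    # a configuration into a single FITS file.
    input_data = WorkUnit.from_fits("../data/demo_data.fits")
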
{
@@ -48,9 +47,11 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the path for the data.\n",
"im_filepath = \"../data/demo\"\n",
"print(os.listdir(im_filepath))"
"input_filename = \"../data/demo_data.fits\"\n",
"\n",
"# Create the fake data usering a helper function.\n",
"if not Path(input_filename).is_file():\n",
" make_demo_data(filename=input_filename)"
]
},
{
@@ -108,21 +109,19 @@
"results_suffix = \"DEMO\"\n",
"\n",
"# The demo data has an object moving at x_v=10 px/day\n",
"# and y_v = 0 px/day. So we search velocities [0, 20].\n",
"# and y_v = 0 px/day. So we search velocities [0, 19].\n",
"v_min = 0\n",
"v_max = 20\n",
"v_steps = 21\n",
"v_steps = 20\n",
"v_arr = [v_min, v_max, v_steps]\n",
"\n",
"# and angles [-0.5, 0.5]\n",
"# and angles [-0.5, 0.5)\n",
"ang_below = 0.5\n",
"ang_above = 0.5\n",
"ang_steps = 11\n",
"ang_steps = 10\n",
"ang_arr = [ang_below, ang_above, ang_steps]\n",
"\n",
"input_parameters = {\n",
" # Input parameters\n",
" \"im_filepath\": im_filepath,\n",
" # Grid search parameters\n",
" \"v_arr\": v_arr,\n",
" \"ang_arr\": ang_arr,\n",
@@ -134,12 +133,15 @@
" \"do_mask\": True, # <-- Apply the masks specified in the FITS files.\n",
" \"mask_num_images\": 10,\n",
" # Basic filtering (always applied)\n",
" \"num_obs\": 7, # <-- Filter anything with fewer than 7 observations\n",
" \"num_obs\": 15, # <-- Filter anything with fewer than 15 observations\n",
" \"lh_level\": 10.0, # <-- Filter anything with a likelihood < 10.0\n",
" # SigmaG clipping parameters\n",
" \"sigmaG_lims\": [15, 60], # <-- Clipping parameters (lower and upper percentile)\n",
" \"gpu_filter\": True, # <-- Apply clipping and filtering on the GPU\n",
" \"clip_negative\": True,\n",
" # Some basic stamp filtering limits.\n",
" \"mom_lims\": [37.5, 37.5, 1.5, 1.0, 1.0],\n",
" \"peak_offset\": [3.0, 3.0],\n",
" # Override the ecliptic angle for the demo data since we\n",
" # know the true angle in pixel space.\n",
" \"average_angle\": 0.0,\n",
@@ -160,7 +162,24 @@
"metadata": {},
"outputs": [],
"source": [
"# print(config)"
"print(config)"
]
},
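
A sketch of how the `[min, max, steps]` triples above map onto sampled trajectories, assuming evenly spaced samples over a half-open interval (which is what the updated comments in the diff describe):

    import numpy as np

    v_min, v_max, v_steps = 0, 20, 20
    ang_below, ang_above, ang_steps = 0.5, 0.5, 10
    average_angle = 0.0

    # 20 evenly spaced velocities in [0, 20): 0, 1, ..., 19 px/day.
    velocities = v_min + (v_max - v_min) / v_steps * np.arange(v_steps)

    # 10 evenly spaced angles in [-0.5, 0.5) radians around the average angle.
    angles = (average_angle - ang_below) + (ang_below + ang_above) / ang_steps * np.arange(ang_steps)
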
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We load the data as a `WorkUnit` object. In general `WorkUnit`'s include a copy of their own configuration so they have all the information they need for a full run. We overwrite the stored configuration with the one we defined above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_data = WorkUnit.from_fits(input_filename)\n",
"input_data.config = config"
]
},
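
Individual parameters can also be overridden in place on the attached configuration without rebuilding the whole dictionary; the filtering section below uses the same `config.set` call:

    # Illustrative per-key override of the stored configuration.
    input_data.config.set("num_obs", 15)
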
{
@@ -188,7 +207,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we have defined the search parameters, we can create a ``SearchRunner`` and use one of the run_search functions. In this case we use ``run_search_from_config`` which uses the config to search for the input files."
"Once we have defined the search parameters, we can create a `SearchRunner` and use one of the run_search functions. In this case we use `run_search_from_work_unit` which uses the `WorkUnit` to define both the image data and the configuration information."
]
},
{
@@ -198,7 +217,7 @@
"outputs": [],
"source": [
"rs = SearchRunner()\n",
"results = rs.run_search_from_config(config)"
"results = rs.run_search_from_work_unit(input_data)"
]
},
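
The returned `results` object wraps a table of candidate trajectories. A short sketch of inspecting it (an Astropy table is assumed, as suggested by the indexing used later in this notebook):

    print(f"Search found {len(results)} results.")
    print(results.table.colnames)       # available columns
    print(results.table["x", "y"][:5])  # starting pixels of the first few candidates
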
{
@@ -289,13 +308,11 @@
"outputs": [],
"source": [
"# Turn on filtered tracking\n",
"input_parameters[\"track_filtered\"] = True\n",
"input_data.config.set(\"track_filtered\", True)\n",
"\n",
"# Turn up filtering of stamp filtering. This will require 100% of the stamp's flux\n",
"# to be at the center pixel and effectively filter every candidate trajectory.\n",
"input_parameters[\"center_thresh\"] = 1.0\n",
"\n",
"config = SearchConfiguration.from_dict(input_parameters)"
"input_data.config.set(\"center_thresh\", 1.0)"
]
},
{
@@ -312,7 +329,7 @@
"outputs": [],
"source": [
"rs = SearchRunner()\n",
"results = rs.run_search_from_config(config)\n",
"results = rs.run_search_from_work_unit(input_data)\n",
"print(f\"Search found {len(results)} results.\")"
]
},
@@ -355,7 +372,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we search for one of the expected trajectories (starting at pixel (106, 44) at the first time step) by using the table's search functionality."
"Now we search for one of the expected trajectories (starting at pixel (50, 40) at the first time step) by using the table's search functionality."
]
},
{
@@ -364,7 +381,7 @@
"metadata": {},
"outputs": [],
"source": [
"subset = results.table[(results.table[\"x\"] == 106) & (results.table[\"y\"] == 44)]\n",
"subset = results.table[(results.table[\"x\"] == 50) & (results.table[\"y\"] == 40)]\n",
"print(subset)"
]
},
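
The same table indexing supports arbitrary cuts; for example, a hedged sketch keeping only high-likelihood candidates (the `likelihood` column name is an assumption based on the `lh_level` filter above):

    # Hypothetical cut on an assumed likelihood column.
    bright = results.table[results.table["likelihood"] > 12.0]
    print(f"{len(bright)} candidates above likelihood 12.0")
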
@@ -374,6 +391,13 @@
"source": [
"As we can see all of the potential trajectories were filtered by the stamp filter. We can use this information to help tune different filtering stages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
4 changes: 2 additions & 2 deletions notebooks/Kbmod_Reference.ipynb
@@ -40,7 +40,7 @@
"import matplotlib.pyplot as plt\n",
"import os\n",
"\n",
"im_path = \"../data/demo/\"\n",
"im_file = \"../data/demo_image.fits\"\n",
"res_path = \"./results\""
]
},
@@ -183,7 +183,7 @@
"source": [
"from kbmod.data_interface import load_deccam_layered_image\n",
"\n",
"im = load_deccam_layered_image(im_path + \"000000.fits\", p)\n",
"im = load_deccam_layered_image(im_file, p)\n",
"print(f\"Loaded a {im.get_width()} by {im.get_height()} image at time {im.get_obstime()}\")"
]
},
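
Putting the reference-notebook change together, a minimal sketch of loading the renamed single image (the `PSF` construction is assumed to mirror the earlier cells of that notebook):

    from kbmod.data_interface import load_deccam_layered_image
    from kbmod.search import PSF

    p = PSF(1.0)  # assumed: a Gaussian PSF with a standard deviation of 1 pixel
    im = load_deccam_layered_image("../data/demo_image.fits", p)
    print(f"Loaded a {im.get_width()} by {im.get_height()} image at time {im.get_obstime()}")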