Render instance annotation, RGB image, and depth with one line of code:
- Render depth
- Render annotations for semantic segmentation, instance segmentation, and panoptic segmentation
- Generate 6DoF pose ground truth
- Pre-defined domain randomization: light
- Pre-defined domain randomization: background
- Pre-defined domain randomization: distractors
- Pre-defined domain randomization: textures
- Support Docker: `docker run -v /tmp:/tmp diyer22/bpycv`
- Convert to Cityscapes annotation format
- Convert to COCO annotation format (a minimal post-processing sketch follows this list)
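
As a rough illustration of what the instance annotation contains, here is a hypothetical post-processing sketch in plain numpy (not bpycv's built-in converter) that splits an instance map into per-instance binary masks and COCO-style boxes:

```python
import numpy as np

def inst_map_to_annotations(inst):
    """Split an instance map into per-instance binary masks and boxes.

    Hypothetical helper, not part of bpycv's API. Pixels with value 0
    are assumed to be background.
    """
    annotations = {}
    for inst_id in np.unique(inst):
        if inst_id == 0:  # assumed background value
            continue
        mask = inst == inst_id
        ys, xs = np.nonzero(mask)
        # COCO-style [x, y, width, height] bounding box
        bbox = [int(xs.min()), int(ys.min()),
                int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)]
        annotations[int(inst_id)] = {"mask": mask, "bbox": bbox}
    return annotations
```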
News: We won 🥈 2nd place in the IROS 2020 Open Cloud Robot Table Organization Challenge (OCRTOC)
bpycv supports Blender 2.8 and 2.9.
Example for Blender 2.92:
```shell
cd <path to blender>/2.92/python/bin
./python3.7m -m ensurepip  # get pip
./python3.7m -m pip install -U pip setuptools wheel
./python3.7m -m pip install -U bpycv
```
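
To confirm the install landed in Blender's bundled Python, a quick check:

```python
# Paste into Blender's Python console: bpycv should import without error
import bpycv
print(bpycv.__file__)  # expected to resolve inside Blender's site-packages
```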
Copy-paste this code into the Scripting/Text Editor and click the Run Script button (or press Alt+P):
```python
import cv2
import bpy
import bpycv
import random
import numpy as np

# remove all MESH objects
[bpy.data.objects.remove(obj) for obj in bpy.data.objects if obj.type == "MESH"]

for index in range(1, 20):
    # create a cube or a sphere instance at a random location
    location = [random.uniform(-2, 2) for _ in range(3)]
    if index % 2:
        bpy.ops.mesh.primitive_cube_add(size=0.5, location=location)
        categories_id = 1
    else:
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=location)
        categories_id = 2
    obj = bpy.context.active_object
    # give each instance a unique inst_id, which is used to generate the instance annotation
    obj["inst_id"] = categories_id * 1000 + index

# render image, instance annotation and depth with one line of code
# result["ycb_meta"] is the 6DoF pose ground truth
result = bpycv.render_data()

# save the RGB image, converting to OpenCV's BGR channel order
cv2.imwrite("demo-rgb.jpg", result["image"][..., ::-1])

# save the instance map as a 16-bit png;
# the value of each pixel is the inst_id of the object the pixel belongs to
cv2.imwrite("demo-inst.png", np.uint16(result["inst"]))

# convert depth units from meters to millimeters and save as a 16-bit png
depth_in_mm = result["depth"] * 1000
cv2.imwrite("demo-depth.png", np.uint16(depth_in_mm))

# visualize instance map, RGB and depth side by side for human inspection
cv2.imwrite("demo-vis(inst_rgb_depth).jpg", result.vis()[..., ::-1])
```
Open `./demo-vis(inst_rgb_depth).jpg` to check the result.
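
The 16-bit pngs round-trip losslessly. A small sketch (plain OpenCV/numpy, using the file names from the demo above) for loading them back and decoding the `inst_id` convention:

```python
import cv2
import numpy as np

# read the 16-bit pngs saved by the demo above
inst = cv2.imread("demo-inst.png", cv2.IMREAD_UNCHANGED)       # uint16 inst_ids
depth_mm = cv2.imread("demo-depth.png", cv2.IMREAD_UNCHANGED)  # uint16, millimeters
depth_m = depth_mm.astype(np.float32) / 1000  # back to meters

# invert the demo's inst_id = categories_id * 1000 + index convention
category_map, index_map = inst // 1000, inst % 1000
print("inst_ids in this render:", np.unique(inst))
```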
Including domain randomization for background, light, and distractors.
```shell
mkdir ycb_demo
cd ycb_demo/

# prepare demo code and data
git clone https://github.com/DIYer22/bpycv
git clone https://github.com/DIYer22/bpycv_example_data

cd bpycv/example/
blender -b -P ycb_demo.py  # run the demo in background mode

cd dataset/vis/
ls .  # the visualized results are here
```
Open `0.jpg` to view the result (instance_map | RGB | depth).
YCB demo code: `example/ycb_demo.py`

Generate and visualize 6DoF pose GT: `example/6d_pose_demo.py`
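
`example/6d_pose_demo.py` is the authoritative reference. As a hedged sketch, assuming `result["ycb_meta"]` mirrors the YCB-Video `meta.mat` layout (a 3×4×N `poses` array plus an `intrinsic_matrix`) — which is an assumption here, not confirmed by this README — the pose GT could be read like this:

```python
import bpycv

result = bpycv.render_data()
meta = result["ycb_meta"]

# ASSUMPTION: keys follow the YCB-Video meta.mat layout; verify against
# example/6d_pose_demo.py before relying on them.
poses = meta["poses"]          # assumed shape (3, 4, num_objects)
K = meta["intrinsic_matrix"]   # assumed 3x3 camera intrinsics
for i in range(poses.shape[-1]):
    R, t = poses[:, :3, i], poses[:, 3, i]  # per-object rotation, translation
    print(f"object {i}: translation = {t}")
```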
Blender may not be able to directly load `.obj` and `.dae` files from the YCB and ShapeNet datasets. It's better to convert them first using `meshlabserver` by running `meshlabserver -i raw.obj -o for_blender.obj -m wt`.
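
To convert many meshes at once, the same command can be scripted. A minimal sketch (the `raw_meshes/` and `for_blender/` directory names are placeholders):

```python
import subprocess
from pathlib import Path

# batch-run the meshlabserver command above; directory names are placeholders
out_dir = Path("for_blender")
out_dir.mkdir(exist_ok=True)
for obj_path in Path("raw_meshes").glob("*.obj"):
    subprocess.run(
        ["meshlabserver", "-i", str(obj_path),
         "-o", str(out_dir / obj_path.name), "-m", "wt"],
        check=True,  # raise if a conversion fails
    )
```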
Suggestions and pull requests are welcome 😊